{"query_id": "q-en-rust-00197aac97777e71a073a0bfbab0bf915653c70f8d24646b77205c19adf26a31", "query": "} } fn compute_components_recursive<'tcx>( /// Collect [Component]s for *all* the substs of `parent`. /// /// This should not be used to get the components of `parent` itself. /// Use [push_outlives_components] instead. pub(super) fn compute_components_recursive<'tcx>( tcx: TyCtxt<'tcx>, parent: GenericArg<'tcx>, out: &mut SmallVec<[Component<'tcx>; 4]>,", "positive_passages": [{"docid": "doc-en-rust-cc977dabed974d4bcbd1cb2e4f8886eb7e1f6cae7a3aa4a4ee14b49a9dba2970", "text": " $DIR/issue-107775.rs:35:16 | LL | map.insert(1, Struct::do_something); | - -------------------- this is of type `fn(u8) -> Pin + Send>> {::do_something::<'_>}`, which causes `map` to be inferred as `HashMap<{integer}, fn(u8) -> Pin + Send>> {::do_something::<'_>}>` | | | this is of type `{integer}`, which causes `map` to be inferred as `HashMap<{integer}, fn(u8) -> Pin + Send>> {::do_something::<'_>}>` LL | Self { map } | ^^^ expected `HashMap Pin<...>>`, found `HashMap<{integer}, ...>` | = note: expected struct `HashMap Pin + Send + 'static)>>>` found struct `HashMap<{integer}, fn(_) -> Pin + Send>> {::do_something::<'_>}>` error: aborting due to previous error For more information about this error, try `rustc --explain E0308`. ", "positive_passages": [{"docid": "doc-en-rust-ed2179e0830404846bb598040cc05d41169750b05e3cef96b225bd70113ace6b", "text": " $DIR/issue-82772.rs:5:15 | LL | let Box { 0: _, .. }: Box<()>; | ^^^^ private field error[E0451]: field `1` of struct `Box` is private --> $DIR/issue-82772.rs:6:15 | LL | let Box { 1: _, .. }: Box<()>; | ^^^^ private field error[E0451]: field `1` of struct `ModPrivateStruct` is private --> $DIR/issue-82772.rs:7:28 | LL | let ModPrivateStruct { 1: _, .. } = ModPrivateStruct::default(); | ^^^^ private field error: aborting due to 3 previous errors For more information about this error, try `rustc --explain E0451`. 
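The HashMap inference diagnostic quoted above hinges on the fact that every fn item in Rust has its own unique zero-sized type, which gets baked into the map's value type on the first `insert`. A minimal sketch of that behavior, with hypothetical names (`do_something` is not from the original test):

```rust
use std::collections::HashMap;

fn do_something(x: u8) -> u8 {
    x + 1
}

fn main() {
    let mut map = HashMap::new();
    // Inserting the bare fn item would pin the value type of `map` to the
    // unique, unnameable fn-item type of `do_something`; coercing to a fn
    // pointer first keeps the type nameable and lets other fns be inserted.
    map.insert(1u8, do_something as fn(u8) -> u8);
    assert_eq!((map[&1])(41), 42);
}
```

The diagnostic in the passage arises when such a coercion is absent and a later expression expects a different value type than the one inference pinned.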
", "positive_passages": [{"docid": "doc-en-rust-a7f3e9810aa82b6f33a59ff9fa989846e97e88c833b24e7a93ba56e854e1291a", "text": " $DIR/rustc-macro-transparency.rs:30:5 | LL | struct Opaque; | -------------- similarly named unit struct `Opaque` defined here ... LL | opaque; | ^^^^^^ not a value |", "positive_passages": [{"docid": "doc-en-rust-68c6f35e025cebcb8286b1360c5297d850ed364c3a6a9f39a1bb805bfd2fed77", "text": "Given the following code: The current output is: $DIR/range-index-instead-of-colon.rs:4:17 | LL | &[1, 2, 3][1:2]; | ^ expected one of `.`, `?`, `]`, or an operator | help: you might have meant a range expression | LL | &[1, 2, 3][1..2]; | ~~ error: aborting due to 1 previous error ", "positive_passages": [{"docid": "doc-en-rust-d276d3e93a75aa1475bbb78a0eb01d825c6f04cfab539bfd15c93b5d66e7797e", "text": "Having written Python for several hours, I made this mistake and couldn't realize what was wrong for a while. No response claim $DIR/unused_parens_remove_json_suggestion.rs:18:8 {\"message\":\"unnecessary parentheses around `if` condition\",\"code\":{\"code\":\"unused_parens\",\"explanation\":null},\"level\":\"error\",\"spans\":[{\"file_name\":\"$DIR/unused_parens_remove_json_suggestion.rs\",\"byte_start\":500,\"byte_end\":504,\"line_start\":17,\"line_end\":17,\"column_start\":8,\"column_end\":12,\"is_primary\":true,\"text\":[{\"text\":\" if (_b) { --> $DIR/unused_parens_remove_json_suggestion.rs:17:8 | LL | if (_b) { | ^^^^ help: remove these parentheses | note: lint level defined here --> $DIR/unused_parens_remove_json_suggestion.rs:11:9 --> $DIR/unused_parens_remove_json_suggestion.rs:10:9 | LL | #![warn(unused_parens)] LL | #![deny(unused_parens)] | ^^^^^^^^^^^^^ \" } { \"message\": \"unnecessary parentheses around `if` condition\", \"code\": { \"code\": \"unused_parens\", \"explanation\": null }, \"level\": \"warning\", \"spans\": [ { \"file_name\": \"$DIR/unused_parens_remove_json_suggestion.rs\", \"byte_start\": 618, \"byte_end\": 621, \"line_start\": 
29, \"line_end\": 29, \"column_start\": 7, \"column_end\": 10, \"is_primary\": true, \"text\": [ { \"text\": \" if(c) {\", \"highlight_start\": 7, \"highlight_end\": 10 } ], \"label\": null, \"suggested_replacement\": null, \"suggestion_applicability\": null, \"expansion\": null } ], \"children\": [ { \"message\": \"remove these parentheses\", \"code\": null, \"level\": \"help\", \"spans\": [ { \"file_name\": \"$DIR/unused_parens_remove_json_suggestion.rs\", \"byte_start\": 618, \"byte_end\": 621, \"line_start\": 29, \"line_end\": 29, \"column_start\": 7, \"column_end\": 10, \"is_primary\": true, \"text\": [ { \"text\": \" if(c) {\", \"highlight_start\": 7, \"highlight_end\": 10 } ], \"label\": null, \"suggested_replacement\": \" c\", \"suggestion_applicability\": \"MachineApplicable\", \"expansion\": null } ], \"children\": [], \"rendered\": null } ], \"rendered\": \"warning: unnecessary parentheses around `if` condition --> $DIR/unused_parens_remove_json_suggestion.rs:29:7 \"} {\"message\":\"unnecessary parentheses around `if` condition\",\"code\":{\"code\":\"unused_parens\",\"explanation\":null},\"level\":\"error\",\"spans\":[{\"file_name\":\"$DIR/unused_parens_remove_json_suggestion.rs\",\"byte_start\":631,\"byte_end\":634,\"line_start\":28,\"line_end\":28,\"column_start\":7,\"column_end\":10,\"is_primary\":true,\"text\":[{\"text\":\" if(c) { --> $DIR/unused_parens_remove_json_suggestion.rs:28:7 | LL | if(c) { | ^^^ help: remove these parentheses \" } { \"message\": \"unnecessary parentheses around `if` condition\", \"code\": { \"code\": \"unused_parens\", \"explanation\": null }, \"level\": \"warning\", \"spans\": [ { \"file_name\": \"$DIR/unused_parens_remove_json_suggestion.rs\", \"byte_start\": 664, \"byte_end\": 667, \"line_start\": 33, \"line_end\": 33, \"column_start\": 8, \"column_end\": 11, \"is_primary\": true, \"text\": [ { \"text\": \" if (c){\", \"highlight_start\": 8, \"highlight_end\": 11 } ], \"label\": null, \"suggested_replacement\": null, 
\"suggestion_applicability\": null, \"expansion\": null } ], \"children\": [ { \"message\": \"remove these parentheses\", \"code\": null, \"level\": \"help\", \"spans\": [ { \"file_name\": \"$DIR/unused_parens_remove_json_suggestion.rs\", \"byte_start\": 664, \"byte_end\": 667, \"line_start\": 33, \"line_end\": 33, \"column_start\": 8, \"column_end\": 11, \"is_primary\": true, \"text\": [ { \"text\": \" if (c){\", \"highlight_start\": 8, \"highlight_end\": 11 } ], \"label\": null, \"suggested_replacement\": \"c \", \"suggestion_applicability\": \"MachineApplicable\", \"expansion\": null } ], \"children\": [], \"rendered\": null } ], \"rendered\": \"warning: unnecessary parentheses around `if` condition --> $DIR/unused_parens_remove_json_suggestion.rs:33:8 \"} {\"message\":\"unnecessary parentheses around `if` condition\",\"code\":{\"code\":\"unused_parens\",\"explanation\":null},\"level\":\"error\",\"spans\":[{\"file_name\":\"$DIR/unused_parens_remove_json_suggestion.rs\",\"byte_start\":711,\"byte_end\":714,\"line_start\":32,\"line_end\":32,\"column_start\":8,\"column_end\":11,\"is_primary\":true,\"text\":[{\"text\":\" if (c){ --> $DIR/unused_parens_remove_json_suggestion.rs:32:8 | LL | if (c){ | ^^^ help: remove these parentheses \" } { \"message\": \"unnecessary parentheses around `while` condition\", \"code\": { \"code\": \"unused_parens\", \"explanation\": null }, \"level\": \"warning\", \"spans\": [ { \"file_name\": \"$DIR/unused_parens_remove_json_suggestion.rs\", \"byte_start\": 712, \"byte_end\": 727, \"line_start\": 37, \"line_end\": 37, \"column_start\": 11, \"column_end\": 26, \"is_primary\": true, \"text\": [ { \"text\": \" while (false && true){\", \"highlight_start\": 11, \"highlight_end\": 26 } ], \"label\": null, \"suggested_replacement\": null, \"suggestion_applicability\": null, \"expansion\": null } ], \"children\": [ { \"message\": \"remove these parentheses\", \"code\": null, \"level\": \"help\", \"spans\": [ { \"file_name\": 
\"$DIR/unused_parens_remove_json_suggestion.rs\", \"byte_start\": 712, \"byte_end\": 727, \"line_start\": 37, \"line_end\": 37, \"column_start\": 11, \"column_end\": 26, \"is_primary\": true, \"text\": [ { \"text\": \" while (false && true){\", \"highlight_start\": 11, \"highlight_end\": 26 } ], \"label\": null, \"suggested_replacement\": \"false && true \", \"suggestion_applicability\": \"MachineApplicable\", \"expansion\": null } ], \"children\": [], \"rendered\": null } ], \"rendered\": \"warning: unnecessary parentheses around `while` condition --> $DIR/unused_parens_remove_json_suggestion.rs:37:11 \"} {\"message\":\"unnecessary parentheses around `while` condition\",\"code\":{\"code\":\"unused_parens\",\"explanation\":null},\"level\":\"error\",\"spans\":[{\"file_name\":\"$DIR/unused_parens_remove_json_suggestion.rs\",\"byte_start\":793,\"byte_end\":808,\"line_start\":36,\"line_end\":36,\"column_start\":11,\"column_end\":26,\"is_primary\":true,\"text\":[{\"text\":\" while (false && true){\",\"highlight_start\":11,\"highlight_end\":26}],\"label\":null,\"suggested_replacement\":null,\"suggestion_applicability\":null,\"expansion\":null}],\"children\":[{\"message\":\"remove these parentheses\",\"code\":null,\"level\":\"help\",\"spans\":[{\"file_name\":\"$DIR/unused_parens_remove_json_suggestion.rs\",\"byte_start\":793,\"byte_end\":808,\"line_start\":36,\"line_end\":36,\"column_start\":11,\"column_end\":26,\"is_primary\":true,\"text\":[{\"text\":\" while (false && true){\",\"highlight_start\":11,\"highlight_end\":26}],\"label\":null,\"suggested_replacement\":\"false && true \",\"suggestion_applicability\":\"MachineApplicable\",\"expansion\":null}],\"children\":[],\"rendered\":null}],\"rendered\":\"error: unnecessary parentheses around `while` condition --> $DIR/unused_parens_remove_json_suggestion.rs:36:11 | LL | while (false && true){ | ^^^^^^^^^^^^^^^ help: remove these parentheses \" } { \"message\": \"unnecessary parentheses around `if` condition\", \"code\": { 
\"code\": \"unused_parens\", \"explanation\": null }, \"level\": \"warning\", \"spans\": [ { \"file_name\": \"$DIR/unused_parens_remove_json_suggestion.rs\", \"byte_start\": 740, \"byte_end\": 743, \"line_start\": 38, \"line_end\": 38, \"column_start\": 12, \"column_end\": 15, \"is_primary\": true, \"text\": [ { \"text\": \" if (c) {\", \"highlight_start\": 12, \"highlight_end\": 15 } ], \"label\": null, \"suggested_replacement\": null, \"suggestion_applicability\": null, \"expansion\": null } ], \"children\": [ { \"message\": \"remove these parentheses\", \"code\": null, \"level\": \"help\", \"spans\": [ { \"file_name\": \"$DIR/unused_parens_remove_json_suggestion.rs\", \"byte_start\": 740, \"byte_end\": 743, \"line_start\": 38, \"line_end\": 38, \"column_start\": 12, \"column_end\": 15, \"is_primary\": true, \"text\": [ { \"text\": \" if (c) {\", \"highlight_start\": 12, \"highlight_end\": 15 } ], \"label\": null, \"suggested_replacement\": \"c\", \"suggestion_applicability\": \"MachineApplicable\", \"expansion\": null } ], \"children\": [], \"rendered\": null } ], \"rendered\": \"warning: unnecessary parentheses around `if` condition --> $DIR/unused_parens_remove_json_suggestion.rs:38:12 \"} {\"message\":\"unnecessary parentheses around `if` condition\",\"code\":{\"code\":\"unused_parens\",\"explanation\":null},\"level\":\"error\",\"spans\":[{\"file_name\":\"$DIR/unused_parens_remove_json_suggestion.rs\",\"byte_start\":821,\"byte_end\":824,\"line_start\":37,\"line_end\":37,\"column_start\":12,\"column_end\":15,\"is_primary\":true,\"text\":[{\"text\":\" if (c) { --> $DIR/unused_parens_remove_json_suggestion.rs:37:12 | LL | if (c) { | ^^^ help: remove these parentheses \" } { \"message\": \"unnecessary parentheses around `while` condition\", \"code\": { \"code\": \"unused_parens\", \"explanation\": null }, \"level\": \"warning\", \"spans\": [ { \"file_name\": \"$DIR/unused_parens_remove_json_suggestion.rs\", \"byte_start\": 803, \"byte_end\": 818, \"line_start\": 
44, \"line_end\": 44, \"column_start\": 10, \"column_end\": 25, \"is_primary\": true, \"text\": [ { \"text\": \" while(true && false) {\", \"highlight_start\": 10, \"highlight_end\": 25 } ], \"label\": null, \"suggested_replacement\": null, \"suggestion_applicability\": null, \"expansion\": null } ], \"children\": [ { \"message\": \"remove these parentheses\", \"code\": null, \"level\": \"help\", \"spans\": [ { \"file_name\": \"$DIR/unused_parens_remove_json_suggestion.rs\", \"byte_start\": 803, \"byte_end\": 818, \"line_start\": 44, \"line_end\": 44, \"column_start\": 10, \"column_end\": 25, \"is_primary\": true, \"text\": [ { \"text\": \" while(true && false) {\", \"highlight_start\": 10, \"highlight_end\": 25 } ], \"label\": null, \"suggested_replacement\": \" true && false\", \"suggestion_applicability\": \"MachineApplicable\", \"expansion\": null } ], \"children\": [], \"rendered\": null } ], \"rendered\": \"warning: unnecessary parentheses around `while` condition --> $DIR/unused_parens_remove_json_suggestion.rs:44:10 \"} {\"message\":\"unnecessary parentheses around `while` condition\",\"code\":{\"code\":\"unused_parens\",\"explanation\":null},\"level\":\"error\",\"spans\":[{\"file_name\":\"$DIR/unused_parens_remove_json_suggestion.rs\",\"byte_start\":918,\"byte_end\":933,\"line_start\":43,\"line_end\":43,\"column_start\":10,\"column_end\":25,\"is_primary\":true,\"text\":[{\"text\":\" while(true && false) { --> $DIR/unused_parens_remove_json_suggestion.rs:43:10 | LL | while(true && false) { | ^^^^^^^^^^^^^^^ help: remove these parentheses \" } { \"message\": \"unnecessary parentheses around `for` head expression\", \"code\": { \"code\": \"unused_parens\", \"explanation\": null }, \"level\": \"warning\", \"spans\": [ { \"file_name\": \"$DIR/unused_parens_remove_json_suggestion.rs\", \"byte_start\": 838, \"byte_end\": 846, \"line_start\": 45, \"line_end\": 45, \"column_start\": 18, \"column_end\": 26, \"is_primary\": true, \"text\": [ { \"text\": \" for _ in 
(0 .. 3){\", \"highlight_start\": 18, \"highlight_end\": 26 } ], \"label\": null, \"suggested_replacement\": null, \"suggestion_applicability\": null, \"expansion\": null } ], \"children\": [ { \"message\": \"remove these parentheses\", \"code\": null, \"level\": \"help\", \"spans\": [ { \"file_name\": \"$DIR/unused_parens_remove_json_suggestion.rs\", \"byte_start\": 838, \"byte_end\": 846, \"line_start\": 45, \"line_end\": 45, \"column_start\": 18, \"column_end\": 26, \"is_primary\": true, \"text\": [ { \"text\": \" for _ in (0 .. 3){\", \"highlight_start\": 18, \"highlight_end\": 26 } ], \"label\": null, \"suggested_replacement\": \"0 .. 3 \", \"suggestion_applicability\": \"MachineApplicable\", \"expansion\": null } ], \"children\": [], \"rendered\": null } ], \"rendered\": \"warning: unnecessary parentheses around `for` head expression --> $DIR/unused_parens_remove_json_suggestion.rs:45:18 \"} {\"message\":\"unnecessary parentheses around `for` head expression\",\"code\":{\"code\":\"unused_parens\",\"explanation\":null},\"level\":\"error\",\"spans\":[{\"file_name\":\"$DIR/unused_parens_remove_json_suggestion.rs\",\"byte_start\":987,\"byte_end\":995,\"line_start\":44,\"line_end\":44,\"column_start\":18,\"column_end\":26,\"is_primary\":true,\"text\":[{\"text\":\" for _ in (0 .. 3){ --> $DIR/unused_parens_remove_json_suggestion.rs:44:18 | LL | for _ in (0 .. 3){ | ^^^^^^^^ help: remove these parentheses \" } { \"message\": \"unnecessary parentheses around `for` head expression\", \"code\": { \"code\": \"unused_parens\", \"explanation\": null }, \"level\": \"warning\", \"spans\": [ { \"file_name\": \"$DIR/unused_parens_remove_json_suggestion.rs\", \"byte_start\": 905, \"byte_end\": 913, \"line_start\": 50, \"line_end\": 50, \"column_start\": 14, \"column_end\": 22, \"is_primary\": true, \"text\": [ { \"text\": \" for _ in (0 .. 
3) {\", \"highlight_start\": 14, \"highlight_end\": 22 } ], \"label\": null, \"suggested_replacement\": null, \"suggestion_applicability\": null, \"expansion\": null } ], \"children\": [ { \"message\": \"remove these parentheses\", \"code\": null, \"level\": \"help\", \"spans\": [ { \"file_name\": \"$DIR/unused_parens_remove_json_suggestion.rs\", \"byte_start\": 905, \"byte_end\": 913, \"line_start\": 50, \"line_end\": 50, \"column_start\": 14, \"column_end\": 22, \"is_primary\": true, \"text\": [ { \"text\": \" for _ in (0 .. 3) {\", \"highlight_start\": 14, \"highlight_end\": 22 } ], \"label\": null, \"suggested_replacement\": \"0 .. 3\", \"suggestion_applicability\": \"MachineApplicable\", \"expansion\": null } ], \"children\": [], \"rendered\": null } ], \"rendered\": \"warning: unnecessary parentheses around `for` head expression --> $DIR/unused_parens_remove_json_suggestion.rs:50:14 \"} {\"message\":\"unnecessary parentheses around `for` head expression\",\"code\":{\"code\":\"unused_parens\",\"explanation\":null},\"level\":\"error\",\"spans\":[{\"file_name\":\"$DIR/unused_parens_remove_json_suggestion.rs\",\"byte_start\":1088,\"byte_end\":1096,\"line_start\":49,\"line_end\":49,\"column_start\":14,\"column_end\":22,\"is_primary\":true,\"text\":[{\"text\":\" for _ in (0 .. 3) { --> $DIR/unused_parens_remove_json_suggestion.rs:49:14 | LL | for _ in (0 .. 
3) { | ^^^^^^^^ help: remove these parentheses \" } { \"message\": \"unnecessary parentheses around `while` condition\", \"code\": { \"code\": \"unused_parens\", \"explanation\": null }, \"level\": \"warning\", \"spans\": [ { \"file_name\": \"$DIR/unused_parens_remove_json_suggestion.rs\", \"byte_start\": 930, \"byte_end\": 945, \"line_start\": 51, \"line_end\": 51, \"column_start\": 15, \"column_end\": 30, \"is_primary\": true, \"text\": [ { \"text\": \" while (true && false) {\", \"highlight_start\": 15, \"highlight_end\": 30 } ], \"label\": null, \"suggested_replacement\": null, \"suggestion_applicability\": null, \"expansion\": null } ], \"children\": [ { \"message\": \"remove these parentheses\", \"code\": null, \"level\": \"help\", \"spans\": [ { \"file_name\": \"$DIR/unused_parens_remove_json_suggestion.rs\", \"byte_start\": 930, \"byte_end\": 945, \"line_start\": 51, \"line_end\": 51, \"column_start\": 15, \"column_end\": 30, \"is_primary\": true, \"text\": [ { \"text\": \" while (true && false) {\", \"highlight_start\": 15, \"highlight_end\": 30 } ], \"label\": null, \"suggested_replacement\": \"true && false\", \"suggestion_applicability\": \"MachineApplicable\", \"expansion\": null } ], \"children\": [], \"rendered\": null } ], \"rendered\": \"warning: unnecessary parentheses around `while` condition --> $DIR/unused_parens_remove_json_suggestion.rs:51:15 \"} {\"message\":\"unnecessary parentheses around `while` condition\",\"code\":{\"code\":\"unused_parens\",\"explanation\":null},\"level\":\"error\",\"spans\":[{\"file_name\":\"$DIR/unused_parens_remove_json_suggestion.rs\",\"byte_start\":1147,\"byte_end\":1162,\"line_start\":50,\"line_end\":50,\"column_start\":15,\"column_end\":30,\"is_primary\":true,\"text\":[{\"text\":\" while (true && false) { --> $DIR/unused_parens_remove_json_suggestion.rs:50:15 | LL | while (true && false) { | ^^^^^^^^^^^^^^^ help: remove these parentheses \" } \"} {\"message\":\"aborting due to 9 previous 
errors\",\"code\":null,\"level\":\"error\",\"spans\":[],\"children\":[],\"rendered\":\"error: aborting due to 9 previous errors \"} ", "positive_passages": [{"docid": "doc-en-rust-4ec7e73a404ac76d69c671c6f06db99d936ae8f0078fcfaac8411a19baadc5dd", "text": "Update test file to use + annotations instead of (see https://rust-) The files need to update are: - $DIR/rustc-macro-transparency.rs:26:5 | LL | Opaque; | ^^^^^^ help: a local variable with a similar name exists (notice the capitalization): `opaque` | ^^^^^^ not found in this scope error[E0423]: expected value, found macro `semitransparent` --> $DIR/rustc-macro-transparency.rs:29:5 | LL | struct SemiTransparent; | ----------------------- similarly named unit struct `SemiTransparent` defined here ... LL | semitransparent; | ^^^^^^^^^^^^^^^ not a value |", "positive_passages": [{"docid": "doc-en-rust-68c6f35e025cebcb8286b1360c5297d850ed364c3a6a9f39a1bb805bfd2fed77", "text": "Given the following code: The current output is: $DIR/issue-11515.rs:9:33 | LL | let test = box Test { func: closure };", "positive_passages": [{"docid": "doc-en-rust-118433c31bc29716faff9a460042fc6c662b57f7865f844f06298dd3fb733f90", "text": " $DIR/issue-117920.rs:3:5 | LL | use super::A; | ^^^^^ there are too many leading `super` keywords error: aborting due to 1 previous error For more information about this error, try `rustc --explain E0433`. ", "positive_passages": [{"docid": "doc-en-rust-59c82c04b59b74b5696b4ce5b6fdb4432432dd7be9e6598c43fe9e4a14c94993", "text": " $DIR/const-int-unchecked.rs:191:25 | LL | const _: u32 = unsafe { std::intrinsics::ctlz_nonzero(0) }; | ------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^--- | | | `ctlz_nonzero` called on 0 | = warning: this was previously accepted by the compiler but is being phased out; it will become a hard error in a future release! 
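The `unused_parens` JSON diagnostics above all fire on the same few shapes; a small sketch reproducing them (with the lint allowed so the file still compiles — flip to `#![deny(unused_parens)]` to get the errors shown above):

```rust
#![allow(unused_parens)] // `#![deny(unused_parens)]` turns these into hard errors

fn main() {
    let c = true;
    let mut hits = 0;
    if (c) {                 // lint suggestion: `if c {`
        hits += 1;
    }
    while (false && c) {     // lint suggestion: `while false && c {`
        hits += 1;
    }
    for _ in (0..3) {        // lint suggestion: `for _ in 0..3 {`
        hits += 1;
    }
    assert_eq!(hits, 4);     // 1 from the `if`, 0 from the `while`, 3 from the `for`
}
```

The machine-applicable `suggested_replacement` fields in the JSON output correspond to stripping exactly those parentheses.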
= note: for more information, see issue #71800 error: any use of this value will cause an error --> $DIR/const-int-unchecked.rs:194:25 | LL | const _: u32 = unsafe { std::intrinsics::cttz_nonzero(0) }; | ------------------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^--- | | | `cttz_nonzero` called on 0 | = warning: this was previously accepted by the compiler but is being phased out; it will become a hard error in a future release! = note: for more information, see issue #71800 error: aborting due to 49 previous errors ", "positive_passages": [{"docid": "doc-en-rust-085409dcef93908a429af332ac7e4f51617f3e573aa37899e533c49777a5f9ea", "text": "Tracking issue for trailing_zeros and leading_zeros for non-zero types as introduced by . The feature gate for the issue is . On many architectures, these functions can perform better than trailing_zeros/leading_zeros on the underlying integer type, as special handling of zero can be avoided.\nIt seems on my (common) architecture it doesn't give an advantage: : : So the question is: is this NonZero method worth keeping? Are the architectures where it gives a performance advantage common enough?\nBut as you show in your example, if you want to make a distributable binary and therefore do not set the target-cpu, there is a big difference.\nThe architecture I'm using is quite standard. A distributable binary could probably use it as a \"universal target\". I don't know how common an instruction like tzcnt is going to be in the future. If you think a similar instruction will be sufficiently uncommon then please ignore my comments here :-)\nmerge cc\nTeam member has proposed to merge this. The next step is review by the rest of the tagged team members: [ ] [x] [x] [x] [x] [x] No concerns currently listed. Once a majority of reviewers approve (and at most 2 approvals are outstanding), this will enter its final comment period. If you spot a major issue that hasn't been raised at any point in this process, please speak up! 
See for info about what commands tagged team members can give me.\n:bell: This is now entering its final comment period, as per the . :bell:\nThe final comment period, with a disposition to merge, as per the , is now complete. As the automated representative of the governance process, I would like to thank the author for their work and everyone else who contributed. The RFC will be merged soon.\nStarted to make the stabilization PR but hit a problem: I think that const_cttz needs to be stabilized for this to be stabilized, from //! If an intrinsic is supposed to be used from a with a attribute, //! the intrinsic's attribute must be , too. Such a change should not be done //! without T-lang consultation, because it bakes a feature into the language that cannot be //! replicated in user code without compiler support.", "commid": "rust_issue_79143", "tokennum": 482}], "negative_passages": []} {"query_id": "q-en-rust-092e4cb5fdaa34b9d0e9d374311f0cc4b40bfcd9aafe8749a3704875db8510bb", "query": "assert_eq!(vec![1; 2], vec![1, 1]); assert_eq!(vec![1; 1], vec![1]); assert_eq!(vec![1; 0], vec![]); // from_elem syntax (see RFC 832) let el = Box::new(1); let n = 3; assert_eq!(vec![el; n], vec![Box::new(1), Box::new(1), Box::new(1)]); }", "positive_passages": [{"docid": "doc-en-rust-0a396996c5e71e5549afbc67ed4bc7561878e9d205281062d3053330284332cc", "text": "Super easy to implement for anyone interested. The implementation in the RFC is sound, although the new code should be abstracted out into a function that the macro calls for inline and macro-sanity reasons. The function needs to be marked as public/stable, but should otherwise be marked as #[doc(hidden)], as it is not intended to be called directly -- only by the macro.\nWilling to mentor on details.\nI'd like to give it a shot :)\nBy the way: The seems very sparse. If it's okay, I'd expand that a bit while I'm on it, or should that go into a separate PR?\nYeah, that's totally reasonable to do at the same time!\nGreat! 
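The `trailing_zeros`/`leading_zeros` methods on the NonZero types discussed in the tracking-issue passage above were stabilized (Rust 1.53); a minimal sketch of what they compute:

```rust
use std::num::NonZeroU32;

fn main() {
    // Because a NonZero value can never be 0, the compiler can lower these
    // directly to ctlz_nonzero/cttz_nonzero and skip the zero special case.
    let n = NonZeroU32::new(16).unwrap();
    assert_eq!(n.trailing_zeros(), 4);  // 16 == 0b1_0000
    assert_eq!(n.leading_zeros(), 27);  // 32 bits - 5 significant bits
}
```

On targets without a dedicated count-zeros-nonzero instruction the codegen may be identical to the plain integer methods, which is the performance question the thread debates.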
What would be the preferred place where to put the new function? I've it to as for now (currently compiling it)\nSince it's supposed to only be used by the macro (and will be doc(hidden)), it makes sense to me to put it in Putting it in makes sense, too, though.\nhas a section labeled \"Internal methods and functions\" that seemed to fit well.", "commid": "rust_issue_22414", "tokennum": 236}], "negative_passages": []} {"query_id": "q-en-rust-09663c436c0fd4cc41c9ca1b07f9df18e7246923c52f4ded105f8d23379c302b", "query": "use rustc_hir::def::{self, DefKind, NonMacroAttrKind}; use rustc_hir::def_id; use rustc_middle::middle::stability; use rustc_middle::{span_bug, ty}; use rustc_middle::ty; use rustc_session::lint::builtin::UNUSED_MACROS; use rustc_session::Session; use rustc_span::edition::Edition;", "positive_passages": [{"docid": "doc-en-rust-7d929a55a801b35d30d58d913f2afb0e04998c1d70c5f0cf00e15985806a02d6", "text": " $DIR/issue-74047.rs:14:1 | LL | impl TryFrom for MyStream {} | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ missing `Error`, `try_from` in implementation | = help: implement the missing item: `type Error = Type;` = help: implement the missing item: `fn try_from(_: T) -> std::result::Result>::Error> { todo!() }` error: aborting due to previous error For more information about this error, try `rustc --explain E0046`. ", "positive_passages": [{"docid": "doc-en-rust-d0757f5a8271c276c1bd6c6b7fbdbb77aef77b04e3f124833572ad997be77bf8", "text": " $DIR/expr_before_ident_pat.rs:12:12 | LL | funny!(a, a); | ^ error[E0425]: cannot find value `a` in this scope --> $DIR/expr_before_ident_pat.rs:12:12 | LL | funny!(a, a); | ^ not found in this scope error: aborting due to 2 previous errors For more information about this error, try `rustc --explain E0425`. 
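The `vec![el; n]` repeat syntax from RFC 832, discussed in the thread above, clones the element expression `n` times through a hidden `from_elem`-style helper; a short usage sketch:

```rust
fn main() {
    // The element type only needs to implement Clone (Box<i32> does).
    let el = Box::new(1);
    let v = vec![el; 3];
    assert_eq!(v, vec![Box::new(1), Box::new(1), Box::new(1)]);

    // Edge case: n == 0 drops the element and yields an empty Vec.
    let empty: Vec<Box<i32>> = vec![Box::new(7); 0];
    assert!(empty.is_empty());
}
```

Keeping the helper `#[doc(hidden)]` but public, as the thread suggests, lets the macro call it from downstream crates without making it part of the documented API.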
", "positive_passages": [{"docid": "doc-en-rust-a0ccb76a6e60f5a57f49822c16e14a4158fd6807515502bfbce2a615c588e5c2", "text": " $DIR/issue-35976.rs:14:9 | LL | fn wait(&self) where Self: Sized; | ----- this has a `Sized` requirement ... LL | arg.wait(); | ^^^^ error: aborting due to previous error ", "positive_passages": [{"docid": "doc-en-rust-d8977df5e10b75cc49fbb47eb9f2a769467cee6682dc525ce06703efc8a957e5", "text": " $DIR/issue-35976.rs:14:9 | LL | fn wait(&self) where Self: Sized; | ----- this has a `Sized` requirement ... LL | arg.wait(); | ^^^^ error: aborting due to previous error ", "positive_passages": [{"docid": "doc-en-rust-5f960f59572946d373b6c33323ee34953a0738c81a0bfbea0a933aa8d60f79bf", "text": "(cc with whom I was discussing this on Mastodon)\nHaven't look at this yet, maybe I misinterpreted what was achieving, and that commit just needs a revert -- can't recall why I did that.\nYeah, lol, I totally overlooked that the would be load-bearing due to the way we treat trait methods on as inherent. I'll revert , modify the test, etc.", "commid": "rust_issue_105159", "tokennum": 89}], "negative_passages": []} {"query_id": "q-en-rust-0be7240ef85bc21347d918bc5d890d696866e23a0db884eb854b166a26b2507e", "query": "Decl: Option<&'a DIDescriptor>, ) -> &'a DISubprogram; pub fn LLVMRustDIBuilderCreateMethod<'a>( Builder: &DIBuilder<'a>, Scope: &'a DIDescriptor, Name: *const c_char, NameLen: size_t, LinkageName: *const c_char, LinkageNameLen: size_t, File: &'a DIFile, LineNo: c_uint, Ty: &'a DIType, Flags: DIFlags, SPFlags: DISPFlags, TParam: &'a DIArray, ) -> &'a DISubprogram; pub fn LLVMRustDIBuilderCreateBasicType<'a>( Builder: &DIBuilder<'a>, Name: *const c_char,", "positive_passages": [{"docid": "doc-en-rust-84fcbfe2dfb6d5a723b82e44468381e2a3da70b15ab70d0a99a9e4d0ba3092c5", "text": "If you combine LTO, anything that uses alloc and debug compilation mode, the compiler fails. 
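The issue-35976 diagnostic above ("this has a `Sized` requirement") comes from calling a `where Self: Sized` trait method through a trait object; a minimal reproduction sketch with hypothetical names:

```rust
trait Waiter {
    // The `Self: Sized` bound excludes `wait` from the trait object's vtable,
    // so the trait stays object-safe but `wait` is uncallable on `dyn Waiter`.
    fn wait(&self) -> u32
    where
        Self: Sized;
}

struct W;
impl Waiter for W {
    fn wait(&self) -> u32 {
        7
    }
}

fn use_obj(arg: &dyn Waiter) {
    let _ = arg;
    // arg.wait(); // rejected: `wait` has a `Self: Sized` requirement
}

fn main() {
    use_obj(&W);
    assert_eq!(W.wait(), 7); // fine on the concrete, sized type
}
```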
The minimum example to reproduce is this : With this : To reproduce, run (or just use ): You should see this error: The above error does not reproduce for other compiler versions or options, all of the below work: I found reporting something similar, but for them is throwing the error, so I opened a new issue instead. $DIR/issue-111220-2-tuple-struct-fields-projection.rs:27:13 | LL | let Self(a) = self; | ^^^^^^^ error: aborting due to previous error For more information about this error, try `rustc --explain E0603`. ", "positive_passages": [{"docid": "doc-en-rust-0867c49da7efb293e9753113abb91f30e498ac532bb3b84093850d54c0bc3742", "text": "I tried this code: I expected to see this happen: compilation should fail because the trait impl is defined outside of the module for the type but refers to its private field. Instead, this happened: this compiled successfully This reproduces on the playground for stable Rust, version 1.69.0:\nAdding a main function and attempting to call the (playground link: ) raises the compilation error: !\nThis can be reproduced with cross-crate (e.i., any other crate can access the private field). This means that all encapsulations that take advantage of the field being private can be bypassed. (For example, if we change Vec to a tuple struct, the same thing as setlen could be done in safe code by abusing this bug.) label +I-unsound Also, this can be reproduced in all versions where is stable (e.i., 1.32+). It's a matter of how you initialize Foo. If you provide a constructor for Foo, you won't get a compile error:\nJust for completeness, turning this into a segfault without external crates/just std:\nlabel A-visibility\nthis has been unsound ever since in version 1.32\nWG-prioritization assigning priority (). 
label -I-prioritize +P-high", "commid": "rust_issue_111220", "tokennum": 283}], "negative_passages": []} {"query_id": "q-en-rust-0d16cedddbfede40fe5bd738e16658839441e79139cd48967347cbd0d866d385", "query": "//@[rust2021] edition:2021 fn main() { println!('hello world'); //[rust2015,rust2018,rust2021]~^ ERROR unterminated character literal //~^ ERROR unterminated character literal //[rust2021]~| ERROR prefix `world` is unknown }", "positive_passages": [{"docid": "doc-en-rust-eff82a61886b25b0f1dd4cf2300b1ef5995dc7fc7eccfb7efad8700de136ebcb", "text": " $DIR/borrowck-slice-pattern-element-loan.rs:28:20 | LL | if let [ref first, ref second, ..] = *s { | ---------- immutable borrow occurs here LL | if let [_, ref mut second2, ref mut third, ..] = *s { //~ERROR | ^^^^^^^^^^^^^^^^ mutable borrow occurs here LL | nop(&[first, second, second2, third]); | ------ borrow later used here error[E0502]: cannot borrow `s[..]` as mutable because it is also borrowed as immutable --> $DIR/borrowck-slice-pattern-element-loan.rs:44:21 | LL | if let [.., ref fourth, ref third, _, ref first] = *s { | --------- immutable borrow occurs here LL | if let [.., ref mut third2, _, _] = *s { //~ERROR | ^^^^^^^^^^^^^^ mutable borrow occurs here LL | nop(&[first, third, third2, fourth]); | ----- borrow later used here error[E0502]: cannot borrow `s[..]` as mutable because it is also borrowed as immutable --> $DIR/borrowck-slice-pattern-element-loan.rs:55:20 | LL | if let [.., _, ref from_end4, ref from_end3, _, ref from_end1] = *s { | ------------- immutable borrow occurs here ... LL | if let [_, ref mut from_begin1, ..] 
= *s { //~ERROR | ^^^^^^^^^^^^^^^^^^^ mutable borrow occurs here LL | nop(&[from_begin1, from_end1, from_end3, from_end4]); | --------- borrow later used here error[E0502]: cannot borrow `s[..]` as mutable because it is also borrowed as immutable --> $DIR/borrowck-slice-pattern-element-loan.rs:58:23 | LL | if let [.., _, ref from_end4, ref from_end3, _, ref from_end1] = *s { | ------------- immutable borrow occurs here ... LL | if let [_, _, ref mut from_begin2, ..] = *s { //~ERROR | ^^^^^^^^^^^^^^^^^^^ mutable borrow occurs here LL | nop(&[from_begin2, from_end1, from_end3, from_end4]); | --------- borrow later used here error[E0502]: cannot borrow `s[..]` as mutable because it is also borrowed as immutable --> $DIR/borrowck-slice-pattern-element-loan.rs:61:26 | LL | if let [.., _, ref from_end4, ref from_end3, _, ref from_end1] = *s { | ------------- immutable borrow occurs here ... LL | if let [_, _, _, ref mut from_begin3, ..] = *s { //~ERROR | ^^^^^^^^^^^^^^^^^^^ mutable borrow occurs here LL | nop(&[from_begin3, from_end1, from_end3, from_end4]); | --------- borrow later used here error[E0502]: cannot borrow `s[..]` as mutable because it is also borrowed as immutable --> $DIR/borrowck-slice-pattern-element-loan.rs:69:21 | LL | if let [ref from_begin0, ref from_begin1, _, ref from_begin3, _, ..] = *s { | --------------- immutable borrow occurs here ... LL | if let [.., ref mut from_end2, _] = *s { //~ERROR | ^^^^^^^^^^^^^^^^^ mutable borrow occurs here LL | nop(&[from_begin0, from_begin1, from_begin3, from_end2]); | ----------- borrow later used here error[E0502]: cannot borrow `s[..]` as mutable because it is also borrowed as immutable --> $DIR/borrowck-slice-pattern-element-loan.rs:72:21 | LL | if let [ref from_begin0, ref from_begin1, _, ref from_begin3, _, ..] = *s { | --------------- immutable borrow occurs here ... 
LL | if let [.., ref mut from_end3, _, _] = *s { //~ERROR | ^^^^^^^^^^^^^^^^^ mutable borrow occurs here LL | nop(&[from_begin0, from_begin1, from_begin3, from_end3]); | ----------- borrow later used here error[E0502]: cannot borrow `s[..]` as mutable because it is also borrowed as immutable --> $DIR/borrowck-slice-pattern-element-loan.rs:75:21 | LL | if let [ref from_begin0, ref from_begin1, _, ref from_begin3, _, ..] = *s { | --------------- immutable borrow occurs here ... LL | if let [.., ref mut from_end4, _, _, _] = *s { //~ERROR | ^^^^^^^^^^^^^^^^^ mutable borrow occurs here LL | nop(&[from_begin0, from_begin1, from_begin3, from_end4]); | ----------- borrow later used here error[E0502]: cannot borrow `s[..]` as mutable because it is also borrowed as immutable --> $DIR/borrowck-slice-pattern-element-loan.rs:92:20 | LL | if let [ref first, ref second, ..] = *s { | ---------- immutable borrow occurs here LL | if let [_, ref mut tail..] = *s { //~ERROR | ^^^^^^^^^^^^ mutable borrow occurs here LL | nop(&[first, second]); | ------ borrow later used here error[E0502]: cannot borrow `s[..]` as mutable because it is also borrowed as immutable --> $DIR/borrowck-slice-pattern-element-loan.rs:110:17 | LL | if let [.., ref second, ref first] = *s { | ---------- immutable borrow occurs here LL | if let [ref mut tail.., _] = *s { //~ERROR | ^^^^^^^^^^^^ mutable borrow occurs here LL | nop(&[first, second]); | ------ borrow later used here error[E0502]: cannot borrow `s[..]` as mutable because it is also borrowed as immutable --> $DIR/borrowck-slice-pattern-element-loan.rs:119:17 | LL | if let [_, _, _, ref s1..] = *s { | ------ immutable borrow occurs here LL | if let [ref mut s2.., _, _, _] = *s { //~ERROR | ^^^^^^^^^^ mutable borrow occurs here LL | nop_subslice(s1); | -- borrow later used here error: aborting due to 11 previous errors For more information about this error, try `rustc --explain E0502`. 
", "positive_passages": [{"docid": "doc-en-rust-76b755d0637c79bf29192d58b2ad99b90bea0196361a7ec01c503f6ca2dc1e96", "text": "Minimal example: This will complain about multiple mutable borrows, even though it should be valid.\nNoted in (but may deserve a separate bug report?). (FYI, slice patterns are unstable for a reason; they're pretty buggy at the moment).\nshould be closed after MIR borrowck will be by default", "commid": "rust_issue_42291", "tokennum": 67}], "negative_passages": []} {"query_id": "q-en-rust-0efd805496fc64855e98309807bd513bc80c8a990865136186c27195873badd7", "query": "rm $(TMPDIR)/foo $(RUSTC) foo.rs --crate-type=bin -o $(TMPDIR)/foo rm $(TMPDIR)/foo mv $(TMPDIR)/bar.bc $(TMPDIR)/foo.bc $(RUSTC) foo.rs --emit=bc,link --crate-type=rlib cmp $(TMPDIR)/foo.bc $(TMPDIR)/bar.bc rm $(TMPDIR)/bar.bc rm $(TMPDIR)/foo.bc rm $(TMPDIR)/$(call RLIB_GLOB,bar)", "positive_passages": [{"docid": "doc-en-rust-a80b97a4b1136e26628808d1be2a10e26d684af7165cc3301985491de63de5fb", "text": "Passing rustc the flag and passing may produce different files (that is, outputs). This is true of rust-core (see below), though was not the case for a simpler (\"Hello, world!\") test file. Test script: Diff output indicates binary files differ. Tested with , rust-core on . rustc compiled from master on Arch Linux. If this is, in fact, expected behavior- sorry for the trouble, unexpected to me.\nCan you provide a diff between the outputs? This is likely a known issue, but I would like to be sure.\n.bc is a binary format, so just gives \"Binary files and differ\" as its output. On my machine, is 19K and is 62K. I'm not familiar enough with the format to extract much more data.\nI note the difference persists when the second line is (two separate flags), though both produce both artifacts.\nCan you pass and diff those outputs?\nWith emit flags as and respectively, there is no difference in and (according to diff.)\nIt seems that every bytecode is invalid only when a rlib is produced. 
It stopped working between rust-git-0.9.1301 and 1362 (shortly before 2014-02-23 15:37:05 -0800). Going back through the log, I found I believe that creates deflated bytecode. Perhaps should be stored with extension or .", "commid": "rust_issue_12992", "tokennum": 299}], "negative_passages": []} {"query_id": "q-en-rust-0f0f819b852595a28e7121953324f88c1ddfa430fc10d4e7e656f605108789ce", "query": "expr } pub fn peel_blocks(&self) -> &Self { let mut expr = self; while let ExprKind::Block(Block { expr: Some(inner), .. }, _) = &expr.kind { expr = inner; } expr } pub fn can_have_side_effects(&self) -> bool { match self.peel_drop_temps().kind { ExprKind::Path(_) | ExprKind::Lit(_) => false,", "positive_passages": [{"docid": "doc-en-rust-bc991a8c63d81686bebf07f793f5c69e366692742b1f03c258d8aba8722870bd", "text": " $DIR/issue-109153.rs:11:5 | LL | use bar::bar; | ^^^ ambiguous name | = note: ambiguous because of multiple glob imports of a name in the same module note: `bar` could refer to the module imported here --> $DIR/issue-109153.rs:1:5 | LL | use foo::*; | ^^^^^^ = help: consider adding an explicit import of `bar` to disambiguate note: `bar` could also refer to the module imported here --> $DIR/issue-109153.rs:12:5 | LL | use bar::*; | ^^^^^^ = help: consider adding an explicit import of `bar` to disambiguate error: aborting due to previous error For more information about this error, try `rustc --explain E0659`. ", "positive_passages": [{"docid": "doc-en-rust-14d87f94d6dd2f60938cc3099809c6fbd53892083d65f782d3f0ae98e232297d", "text": " $DIR/empty-struct-braces-pat-1.rs:31:9 | LL | XE::XEmpty3 => () | ^^^^^^^^^^^ not a unit struct, unit variant or constant | help: use the struct variant pattern syntax | LL | XE::XEmpty3 {} => () | ++ error: aborting due to 2 previous errors", "positive_passages": [{"docid": "doc-en-rust-64b23c5c36e9988ec4c5b0f9b27af4a414e6c71c77fec9aaba09da710256fe45", "text": "I wrote code that looked like this while not having my brain turned on 100%. 
Due to said brain not functioning, it took me a while to understand what rustc was trying to tell me with the message pointing to the pattern with the text . sure looked like a unit variant to me! What rustc was trying to tell me was that the definition of the variant was a struct variant (\"found struct variant\") and my pattern wasn't matching the definition (\"this pattern you specified looks like it's trying to match a variant that is not a unit struct, unit variant or constant\"). The change I needed to make was adding to my pattern, because I was trying to distinguish different variants from each other, not use pieces within the variants. Suggesting seems like a decent starting point to me, but there could definitely be cases I'm not considering here and I'm totally open to hearing that No response No response $DIR/feature-gate-const_constructor.rs:9:37 | LL | const EXTERNAL_CONST: Option = {Some}(1); | ^^^^^^^^^ | = help: add `#![feature(const_constructor)]` to the crate attributes to enable error: `E::V` is not yet stable as a const fn --> $DIR/feature-gate-const_constructor.rs:12:24 | LL | const LOCAL_CONST: E = {E::V}(1); | ^^^^^^^^^ | = help: add `#![feature(const_constructor)]` to the crate attributes to enable error: `std::prelude::v1::Some` is not yet stable as a const fn --> $DIR/feature-gate-const_constructor.rs:17:13 | LL | let _ = {Some}(1); | ^^^^^^^^^ | = help: add `#![feature(const_constructor)]` to the crate attributes to enable error: `E::V` is not yet stable as a const fn --> $DIR/feature-gate-const_constructor.rs:23:13 | LL | let _ = {E::V}(1); | ^^^^^^^^^ | = help: add `#![feature(const_constructor)]` to the crate attributes to enable error: aborting due to 4 previous errors ", "positive_passages": [{"docid": "doc-en-rust-68be5724b3b1ef2d5077bd80f85030139d8aa3b845ccb821ef16c47a34f61577", "text": "The feature allows calling any expression of a tuple-like constructor in a : cc\nImplemented on 2019-06-07 by in which was reviewed by\nIs there 
anything else needs to be done to resolve this issue?\nFCP and stabilization pull request.\nDo you think you have time to write a report (e.g. in the style of ) and amend the reference after?\nI propose that we stabilize . Tracking issue: Version target: 1.40 (2019-11-05 =beta, 2019-12-19 =stable). Tuple struct and tuple variant constructors are now considered to be constant functions. As such a call expression where the callee has a tuple struct or variant constructor \"function item\" type can be called : Consistency with other . This should also ensure that constructors implement traits and can be coerced to function pointers, if they are introduced. - Tests various syntactic forms, use in both and items, and constructors in both the current and extern crates. The case in should also get a test.\nThanks! Could you embed this report in a PR with the aforementioned test there as well? I'll FCP that PR.", "commid": "rust_issue_61456", "tokennum": 249}], "negative_passages": []} {"query_id": "q-en-rust-115bf578b0dd752af34ce5ebb7a7e3e2fe3a4207a17937cc772d68c25794a7b5", "query": " error: `std::prelude::v1::Some` is not yet stable as a const fn --> $DIR/feature-gate-const_constructor.rs:9:37 | LL | const EXTERNAL_CONST: Option = {Some}(1); | ^^^^^^^^^ | = help: add `#![feature(const_constructor)]` to the crate attributes to enable error: `E::V` is not yet stable as a const fn --> $DIR/feature-gate-const_constructor.rs:12:24 | LL | const LOCAL_CONST: E = {E::V}(1); | ^^^^^^^^^ | = help: add `#![feature(const_constructor)]` to the crate attributes to enable error: `std::prelude::v1::Some` is not yet stable as a const fn --> $DIR/feature-gate-const_constructor.rs:17:13 | LL | let _ = {Some}(1); | ^^^^^^^^^ | = help: add `#![feature(const_constructor)]` to the crate attributes to enable error: `E::V` is not yet stable as a const fn --> $DIR/feature-gate-const_constructor.rs:23:13 | LL | let _ = {E::V}(1); | ^^^^^^^^^ | = help: add `#![feature(const_constructor)]` to the 
crate attributes to enable error: aborting due to 4 previous errors ", "positive_passages": [{"docid": "doc-en-rust-dda53e6e6669bb67565cbda4f5249392dec83d9adef3173487f24b0d81452468", "text": "When using to create an enum variant in Rust 1.37+, we get an error: results in: but if the enum is constructed explicitly, the compilation succeeds: Looking at , it looks like should be considered a with . Playground links for both variants: ,\nDuplicate of (will close in a bit). It seems like folks open this issue because on stable, the error message does not point to the tracking issue, e.g. you don't see: can we fix this?\nI think we should just stabilize it\nIs this diagnostics problem specific to this feature gate? I meant a more general fix to const-fn related gates.\nNot sure. I thought we're using the normal feature gate infra\nReopening this to make sure we add a test per", "commid": "rust_issue_64247", "tokennum": 162}], "negative_passages": []} {"query_id": "q-en-rust-11819c7cd9b13f4e0939f95065b9a38dc1ab7174308a60d81bb9679feadd8486", "query": "edition = \"2018\" [dependencies] cortex-m = \"0.5.4\" cortex-m-rt = \"=0.5.4\" cortex-m = \"0.6.2\" cortex-m-rt = \"0.6.11\" panic-halt = \"0.2.0\" cortex-m-semihosting = \"0.3.1\"", "positive_passages": [{"docid": "doc-en-rust-f2ca5f3791d943e96ab8217cb0a49e57220112d34cd166ee35ef8468d6c1d242", "text": "I was working on making a run-make test today for a particular thumb target. While looking over how we do this elsewhere, I found this bit of code: From my local testing, it seems like having multiple lines of this form has the net effect of making the test a no-op that does not test anything: since a target matching line 13 will not match line 14, and vice versa, there are no targets that can run on this test. 
We should either revert the portion of the commit that did this: or figure out some other way to express the desired property here (or change the semantics of when multiple targets are provided to make it gather all such targets up in a set before doing the check for membership... but I am wary of making such a change since doing it right would require differentiating e.g. processor target vs OS target, for example...)\nAlso, It looks like there may be other instances of the same mistake elsewhere in the test suite, such as in:\nOof, not good. It shouldn't be hard to make rustbuild outright reject this scenario though.\nFurther effort on has led me to conclude that these lines do not work at all, at least not for cross-compilation.", "commid": "rust_issue_67018", "tokennum": 255}], "negative_passages": []} {"query_id": "q-en-rust-1191f827ae511bac499e5198001b136ec4c3a24343e060c7ecccbc1bbb0daa1f", "query": "| = note: see issue #65991 for more information = help: add `#![feature(trait_upcasting)]` to the crate attributes to enable = note: required when coercing `&dyn Bar` into `&dyn Foo` error: aborting due to previous error", "positive_passages": [{"docid": "doc-en-rust-118433c31bc29716faff9a460042fc6c662b57f7865f844f06298dd3fb733f90", "text": " $DIR/must_use-tuple.rs:8:6 | LL | (Ok::<(), ()>(()),); | ^^^^^^^^^^^^^^^^ | note: lint level defined here --> $DIR/must_use-tuple.rs:1:9 | LL | #![deny(unused_must_use)] | ^^^^^^^^^^^^^^^ = note: this `Result` may be an `Err` variant, which should be handled error: unused `std::result::Result` in tuple element 0 that must be used --> $DIR/must_use-tuple.rs:10:6 | LL | (Ok::<(), ()>(()), 0, Ok::<(), ()>(()), 5); | ^^^^^^^^^^^^^^^^ | = note: this `Result` may be an `Err` variant, which should be handled error: unused `std::result::Result` in tuple element 2 that must be used --> $DIR/must_use-tuple.rs:10:27 | LL | (Ok::<(), ()>(()), 0, Ok::<(), ()>(()), 5); | ^^^^^^^^^^^^^^^^ | = note: this `Result` may be an `Err` variant, 
which should be handled error: unused `std::result::Result` in tuple element 0 that must be used --> $DIR/must_use-tuple.rs:14:5 | LL | foo(); | ^^^^^^ | = note: this `Result` may be an `Err` variant, which should be handled error: unused `std::result::Result` in tuple element 0 that must be used --> $DIR/must_use-tuple.rs:16:6 | LL | ((Err::<(), ()>(()), ()), ()); | ^^^^^^^^^^^^^^^^^^^^^^^ | = note: this `Result` may be an `Err` variant, which should be handled error: aborting due to 5 previous errors ", "positive_passages": [{"docid": "doc-en-rust-2104c8e6b7e9797bcd7cd1ab7750453fb0d900ab6484d0cbeb94c35ea8004f6", "text": "gives a warning, but does not.\nsuggested that we test this out with crater to see if this would generate much substantial noise and how common it would be.\nI'd expect it'd be a pretty substantial source of warnings, but that's exactly why I think we should introduce it. People compiling with are asking for these sorts of bugs to be caught, and we should serve them appropriately ;) In any case, running crater would require having an implementation to do a crater run on, so let's do it!\nLet's do it. (I'd personally even like an attribute to allow people to opt-in to this for , for example, when T is itself . That way if I make a , I'll get a warning on that I normally wouldn't want but plausibly do now that I'm getting s in the s.)\nI've opened a pull request to try this out:\nWe discussed this on the language team meeting; folks felt generally positive, but we'd like to do a crater run to see what the impact would be like and if there would be many false positives.\nI tried this using but it does not seem to complain when the part of is not used; is this the correct behavior? I hit this when doing where I didn't use part of the tuple, which caused it to not work, which I did not know is required to be used.\nputting it into a binding, even an unused one, is considered a use, like how doesn't lint. 
With just calling it lints:", "commid": "rust_issue_61061", "tokennum": 323}], "negative_passages": []} {"query_id": "q-en-rust-126a8538e7ce6582ae7507734f68ccaff72bee71ac8701fd04111f2a70ef26c4", "query": "remove_dir_all_recursive(None, p) } } #[cfg(not(all(target_os = \"macos\", target_arch = \"x86_64\")))] pub fn remove_dir_all(p: &Path) -> io::Result<()> { remove_dir_all_modern(p) } #[cfg(all(target_os = \"macos\", target_arch = \"x86_64\"))] pub fn remove_dir_all(p: &Path) -> io::Result<()> { if macos_weak::has_openat() { // openat() is available with macOS 10.10+, just like unlinkat() and fdopendir() remove_dir_all_modern(p) } else { // fall back to classic implementation crate::sys_common::fs::remove_dir_all(p) } } }", "positive_passages": [{"docid": "doc-en-rust-7a9cb418013612ad269cf96d0ff42f69f582d0034017b7851537538c0622479b", "text": "The fix for CVE-2022- (, ) appears to have introduced a regression that can cause file system corruption on some UNIX systems. The new code will attempt to use in any case where it does not know what type of file a directory entry represents. Some systems do not provide that information along with the entry names returned while reading a directory, and even on systems which provide a type hint, that hint may be . In cases where we do not know the type of the entry, the code tries first to unlink it as if it were something other than a directory: The code assumes that on a directory will fail with , or on Linux with the non-standard , but this is not always the case. POSIX does suggest as a failure in some, but not all cases: (See in The Open Group Base Specifications Issue 7, 2018 edition) Implementations are not required to prohibit on directories, and indeed UFS file systems on present day illumos and presumably at least some Solaris systems allow of a directory if the user has sufficient privileges. Other platforms may as well. 
Unfortunately unlinking a directory on UFS causes a sort of file system corruption that requires an unmount and a trip through to correct. Any code that uses on UFS has since started causing that kind of corruption, which took a bit of work to track down. I think the fix is probably relatively simple, and has the benefit of not touching the code path where we believe we know what sort of entry we are removing: In short, we should try to first, rather than first, as that always fails in a safe way that we can detect. $DIR/function-item-references.rs:40:18 | LL | Pointer::fmt(&zst_ref, f) | ^^^^^^^^ help: cast `foo` to obtain a function pointer: `foo as fn() -> _` | note: the lint level is defined here --> $DIR/function-item-references.rs:3:9 | LL | #![warn(function_item_references)] | ^^^^^^^^^^^^^^^^^^^^^^^^ warning: taking a reference to a function item does not give a function pointer --> $DIR/function-item-references.rs:77:22 | LL | println!(\"{:p}\", &foo); | ^^^^ help: cast `foo` to obtain a function pointer: `foo as fn() -> _` warning: taking a reference to a function item does not give a function pointer --> $DIR/function-item-references.rs:79:20 | LL | print!(\"{:p}\", &foo); | ^^^^ help: cast `foo` to obtain a function pointer: `foo as fn() -> _` warning: taking a reference to a function item does not give a function pointer --> $DIR/function-item-references.rs:81:21 | LL | format!(\"{:p}\", &foo); | ^^^^ help: cast `foo` to obtain a function pointer: `foo as fn() -> _` warning: taking a reference to a function item does not give a function pointer --> $DIR/function-item-references.rs:84:22 | LL | println!(\"{:p}\", &foo as *const _); | ^^^^^^^^^^^^^^^^ help: cast `foo` to obtain a function pointer: `foo as fn() -> _` warning: taking a reference to a function item does not give a function pointer --> $DIR/function-item-references.rs:86:22 | LL | println!(\"{:p}\", zst_ref); | ^^^^^^^ help: cast `foo` to obtain a function pointer: `foo as fn() -> _` 
warning: taking a reference to a function item does not give a function pointer --> $DIR/function-item-references.rs:88:22 | LL | println!(\"{:p}\", cast_zst_ptr); | ^^^^^^^^^^^^ help: cast `foo` to obtain a function pointer: `foo as fn() -> _` warning: taking a reference to a function item does not give a function pointer --> $DIR/function-item-references.rs:90:22 | LL | println!(\"{:p}\", coerced_zst_ptr); | ^^^^^^^^^^^^^^^ help: cast `foo` to obtain a function pointer: `foo as fn() -> _` warning: taking a reference to a function item does not give a function pointer --> $DIR/function-item-references.rs:93:22 | LL | println!(\"{:p}\", &fn_item); | ^^^^^^^^ help: cast `foo` to obtain a function pointer: `foo as fn() -> _` warning: taking a reference to a function item does not give a function pointer --> $DIR/function-item-references.rs:95:22 | LL | println!(\"{:p}\", indirect_ref); | ^^^^^^^^^^^^ help: cast `foo` to obtain a function pointer: `foo as fn() -> _` warning: taking a reference to a function item does not give a function pointer --> $DIR/function-item-references.rs:98:22 | LL | println!(\"{:p}\", &nop); | ^^^^ help: cast `nop` to obtain a function pointer: `nop as fn()` warning: taking a reference to a function item does not give a function pointer --> $DIR/function-item-references.rs:100:22 | LL | println!(\"{:p}\", &bar); | ^^^^ help: cast `bar` to obtain a function pointer: `bar as fn(_) -> _` warning: taking a reference to a function item does not give a function pointer --> $DIR/function-item-references.rs:102:22 | LL | println!(\"{:p}\", &baz); | ^^^^ help: cast `baz` to obtain a function pointer: `baz as fn(_, _) -> _` warning: taking a reference to a function item does not give a function pointer --> $DIR/function-item-references.rs:104:22 | LL | println!(\"{:p}\", &unsafe_fn); | ^^^^^^^^^^ help: cast `unsafe_fn` to obtain a function pointer: `unsafe_fn as unsafe fn()` warning: taking a reference to a function item does not give a function 
pointer --> $DIR/function-item-references.rs:106:22 | LL | println!(\"{:p}\", &c_fn); | ^^^^^ help: cast `c_fn` to obtain a function pointer: `c_fn as extern \"C\" fn()` warning: taking a reference to a function item does not give a function pointer --> $DIR/function-item-references.rs:108:22 | LL | println!(\"{:p}\", &unsafe_c_fn); | ^^^^^^^^^^^^ help: cast `unsafe_c_fn` to obtain a function pointer: `unsafe_c_fn as unsafe extern \"C\" fn()` warning: taking a reference to a function item does not give a function pointer --> $DIR/function-item-references.rs:110:22 | LL | println!(\"{:p}\", &variadic); | ^^^^^^^^^ help: cast `variadic` to obtain a function pointer: `variadic as unsafe extern \"C\" fn(_, ...)` warning: taking a reference to a function item does not give a function pointer --> $DIR/function-item-references.rs:112:22 | LL | println!(\"{:p}\", &std::env::var::); | ^^^^^^^^^^^^^^^^^^^^^^^^ help: cast `var` to obtain a function pointer: `var as fn(_) -> _` warning: taking a reference to a function item does not give a function pointer --> $DIR/function-item-references.rs:115:32 | LL | println!(\"{:p} {:p} {:p}\", &nop, &foo, &bar); | ^^^^ help: cast `nop` to obtain a function pointer: `nop as fn()` warning: taking a reference to a function item does not give a function pointer --> $DIR/function-item-references.rs:115:38 | LL | println!(\"{:p} {:p} {:p}\", &nop, &foo, &bar); | ^^^^ help: cast `foo` to obtain a function pointer: `foo as fn() -> _` warning: taking a reference to a function item does not give a function pointer --> $DIR/function-item-references.rs:115:44 | LL | println!(\"{:p} {:p} {:p}\", &nop, &foo, &bar); | ^^^^ help: cast `bar` to obtain a function pointer: `bar as fn(_) -> _` warning: taking a reference to a function item does not give a function pointer --> $DIR/function-item-references.rs:130:41 | LL | std::mem::transmute::<_, usize>(&foo); | ^^^^ help: cast `foo` to obtain a function pointer: `foo as fn() -> _` warning: taking a 
reference to a function item does not give a function pointer --> $DIR/function-item-references.rs:132:50 | LL | std::mem::transmute::<_, (usize, usize)>((&foo, &bar)); | ^^^^^^^^^^^^ help: cast `foo` to obtain a function pointer: `foo as fn() -> _` warning: taking a reference to a function item does not give a function pointer --> $DIR/function-item-references.rs:132:50 | LL | std::mem::transmute::<_, (usize, usize)>((&foo, &bar)); | ^^^^^^^^^^^^ help: cast `bar` to obtain a function pointer: `bar as fn(_) -> _` warning: taking a reference to a function item does not give a function pointer --> $DIR/function-item-references.rs:142:15 | LL | print_ptr(&bar); | ^^^^ help: cast `bar` to obtain a function pointer: `bar as fn(_) -> _` warning: taking a reference to a function item does not give a function pointer --> $DIR/function-item-references.rs:144:24 | LL | bound_by_ptr_trait(&bar); | ^^^^ help: cast `bar` to obtain a function pointer: `bar as fn(_) -> _` warning: taking a reference to a function item does not give a function pointer --> $DIR/function-item-references.rs:146:30 | LL | bound_by_ptr_trait_tuple((&foo, &bar)); | ^^^^^^^^^^^^ help: cast `bar` to obtain a function pointer: `bar as fn(_) -> _` warning: taking a reference to a function item does not give a function pointer --> $DIR/function-item-references.rs:146:30 | LL | bound_by_ptr_trait_tuple((&foo, &bar)); | ^^^^^^^^^^^^ help: cast `foo` to obtain a function pointer: `foo as fn() -> _` warning: 28 warnings emitted ", "positive_passages": [{"docid": "doc-en-rust-bebf3ac373245cb50b8b4c0e41523bb66c080f59a138926bab54cb74481b9855", "text": "I tried this code: I expected to see this happen: The compilation succeed or guide me to (1) cast the to with . (2) borrow the : . But in this case the addresses printed are the same (what?). Instead, this happened*: Cryptic error message: There is : . It took me minutes before I found the workaround. But it still surprised me. 
: $DIR/function-item-references.rs:40:18 | LL | Pointer::fmt(&zst_ref, f) | ^^^^^^^^ help: cast `foo` to obtain a function pointer: `foo as fn() -> _` | note: the lint level is defined here --> $DIR/function-item-references.rs:3:9 | LL | #![warn(function_item_references)] | ^^^^^^^^^^^^^^^^^^^^^^^^ warning: taking a reference to a function item does not give a function pointer --> $DIR/function-item-references.rs:77:22 | LL | println!(\"{:p}\", &foo); | ^^^^ help: cast `foo` to obtain a function pointer: `foo as fn() -> _` warning: taking a reference to a function item does not give a function pointer --> $DIR/function-item-references.rs:79:20 | LL | print!(\"{:p}\", &foo); | ^^^^ help: cast `foo` to obtain a function pointer: `foo as fn() -> _` warning: taking a reference to a function item does not give a function pointer --> $DIR/function-item-references.rs:81:21 | LL | format!(\"{:p}\", &foo); | ^^^^ help: cast `foo` to obtain a function pointer: `foo as fn() -> _` warning: taking a reference to a function item does not give a function pointer --> $DIR/function-item-references.rs:84:22 | LL | println!(\"{:p}\", &foo as *const _); | ^^^^^^^^^^^^^^^^ help: cast `foo` to obtain a function pointer: `foo as fn() -> _` warning: taking a reference to a function item does not give a function pointer --> $DIR/function-item-references.rs:86:22 | LL | println!(\"{:p}\", zst_ref); | ^^^^^^^ help: cast `foo` to obtain a function pointer: `foo as fn() -> _` warning: taking a reference to a function item does not give a function pointer --> $DIR/function-item-references.rs:88:22 | LL | println!(\"{:p}\", cast_zst_ptr); | ^^^^^^^^^^^^ help: cast `foo` to obtain a function pointer: `foo as fn() -> _` warning: taking a reference to a function item does not give a function pointer --> $DIR/function-item-references.rs:90:22 | LL | println!(\"{:p}\", coerced_zst_ptr); | ^^^^^^^^^^^^^^^ help: cast `foo` to obtain a function pointer: `foo as fn() -> _` warning: taking a reference to 
a function item does not give a function pointer --> $DIR/function-item-references.rs:93:22 | LL | println!(\"{:p}\", &fn_item); | ^^^^^^^^ help: cast `foo` to obtain a function pointer: `foo as fn() -> _` warning: taking a reference to a function item does not give a function pointer --> $DIR/function-item-references.rs:95:22 | LL | println!(\"{:p}\", indirect_ref); | ^^^^^^^^^^^^ help: cast `foo` to obtain a function pointer: `foo as fn() -> _` warning: taking a reference to a function item does not give a function pointer --> $DIR/function-item-references.rs:98:22 | LL | println!(\"{:p}\", &nop); | ^^^^ help: cast `nop` to obtain a function pointer: `nop as fn()` warning: taking a reference to a function item does not give a function pointer --> $DIR/function-item-references.rs:100:22 | LL | println!(\"{:p}\", &bar); | ^^^^ help: cast `bar` to obtain a function pointer: `bar as fn(_) -> _` warning: taking a reference to a function item does not give a function pointer --> $DIR/function-item-references.rs:102:22 | LL | println!(\"{:p}\", &baz); | ^^^^ help: cast `baz` to obtain a function pointer: `baz as fn(_, _) -> _` warning: taking a reference to a function item does not give a function pointer --> $DIR/function-item-references.rs:104:22 | LL | println!(\"{:p}\", &unsafe_fn); | ^^^^^^^^^^ help: cast `unsafe_fn` to obtain a function pointer: `unsafe_fn as unsafe fn()` warning: taking a reference to a function item does not give a function pointer --> $DIR/function-item-references.rs:106:22 | LL | println!(\"{:p}\", &c_fn); | ^^^^^ help: cast `c_fn` to obtain a function pointer: `c_fn as extern \"C\" fn()` warning: taking a reference to a function item does not give a function pointer --> $DIR/function-item-references.rs:108:22 | LL | println!(\"{:p}\", &unsafe_c_fn); | ^^^^^^^^^^^^ help: cast `unsafe_c_fn` to obtain a function pointer: `unsafe_c_fn as unsafe extern \"C\" fn()` warning: taking a reference to a function item does not give a function pointer --> 
$DIR/function-item-references.rs:110:22 | LL | println!(\"{:p}\", &variadic); | ^^^^^^^^^ help: cast `variadic` to obtain a function pointer: `variadic as unsafe extern \"C\" fn(_, ...)` warning: taking a reference to a function item does not give a function pointer --> $DIR/function-item-references.rs:112:22 | LL | println!(\"{:p}\", &std::env::var::); | ^^^^^^^^^^^^^^^^^^^^^^^^ help: cast `var` to obtain a function pointer: `var as fn(_) -> _` warning: taking a reference to a function item does not give a function pointer --> $DIR/function-item-references.rs:115:32 | LL | println!(\"{:p} {:p} {:p}\", &nop, &foo, &bar); | ^^^^ help: cast `nop` to obtain a function pointer: `nop as fn()` warning: taking a reference to a function item does not give a function pointer --> $DIR/function-item-references.rs:115:38 | LL | println!(\"{:p} {:p} {:p}\", &nop, &foo, &bar); | ^^^^ help: cast `foo` to obtain a function pointer: `foo as fn() -> _` warning: taking a reference to a function item does not give a function pointer --> $DIR/function-item-references.rs:115:44 | LL | println!(\"{:p} {:p} {:p}\", &nop, &foo, &bar); | ^^^^ help: cast `bar` to obtain a function pointer: `bar as fn(_) -> _` warning: taking a reference to a function item does not give a function pointer --> $DIR/function-item-references.rs:130:41 | LL | std::mem::transmute::<_, usize>(&foo); | ^^^^ help: cast `foo` to obtain a function pointer: `foo as fn() -> _` warning: taking a reference to a function item does not give a function pointer --> $DIR/function-item-references.rs:132:50 | LL | std::mem::transmute::<_, (usize, usize)>((&foo, &bar)); | ^^^^^^^^^^^^ help: cast `foo` to obtain a function pointer: `foo as fn() -> _` warning: taking a reference to a function item does not give a function pointer --> $DIR/function-item-references.rs:132:50 | LL | std::mem::transmute::<_, (usize, usize)>((&foo, &bar)); | ^^^^^^^^^^^^ help: cast `bar` to obtain a function pointer: `bar as fn(_) -> _` warning: taking a 
reference to a function item does not give a function pointer --> $DIR/function-item-references.rs:142:15 | LL | print_ptr(&bar); | ^^^^ help: cast `bar` to obtain a function pointer: `bar as fn(_) -> _` warning: taking a reference to a function item does not give a function pointer --> $DIR/function-item-references.rs:144:24 | LL | bound_by_ptr_trait(&bar); | ^^^^ help: cast `bar` to obtain a function pointer: `bar as fn(_) -> _` warning: taking a reference to a function item does not give a function pointer --> $DIR/function-item-references.rs:146:30 | LL | bound_by_ptr_trait_tuple((&foo, &bar)); | ^^^^^^^^^^^^ help: cast `bar` to obtain a function pointer: `bar as fn(_) -> _` warning: taking a reference to a function item does not give a function pointer --> $DIR/function-item-references.rs:146:30 | LL | bound_by_ptr_trait_tuple((&foo, &bar)); | ^^^^^^^^^^^^ help: cast `foo` to obtain a function pointer: `foo as fn() -> _` warning: 28 warnings emitted ", "positive_passages": [{"docid": "doc-en-rust-71ee43a6931324c1d4412d134316ca0db53c2ac8c8b1c9db29569be2e44a0997", "text": ":)\nAs a point of interest, this would have been extremely useful in helping debug a segfault in The original code used , not .\nWithout the cast, this should not compile as transmute checks that the sizes match. I assume what you describe happens when using , i.e., not just remove the but also add a .\nUnfortunately I never committed the broken code so I'm not sure what I had originally ... the closest thing is which doesn't compile. Sorry for the noise.\nno worries. I already have the lint implemented, but in that case I'll wait until they discuss to write the tests\nI think should compile. That's no noise, it seems very relevant. 
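The warnings quoted in this passage all come from the same pattern. A minimal, self-contained sketch — with hypothetical `foo`/`bar` stand-ins for the functions in the test — shows the cast the lint suggests:

```rust
// Hypothetical stand-ins for the `foo`/`bar` items in the test output above.
fn foo() -> u32 { 0 }
fn bar(x: u32) -> u32 { x }

// Each function item has its own zero-sized type, so `&foo` points at a
// zero-sized value rather than at the function's code. Casting to a plain
// function pointer type, as the lint's help message suggests, yields the
// address (and the comparable/printable pointer) callers usually expect.
fn as_pointers() -> (fn() -> u32, fn(u32) -> u32) {
    (foo as fn() -> u32, bar as fn(u32) -> u32)
}
```

The cast is also what makes two mentions of the same function compare equal as pointers, which references via distinct zero-sized items do not guarantee.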
:)\npoll T-lang Should we add a lint for taking a reference to a fn?\nTeam member has asked teams: T-lang, for consensus on: [x] [x] [x] [ ] [x] [ ]\nI think I agree here, but to confirm: this isn't actually about linting the taking of a reference to a place of type, but just about using on the voldemort-typed function name constants, right?\nYes.\nDiscussed in a (rather small) meeting today: Those present felt a lint would make sense. If somebody wanted to go ahead and try to author one, please feel free and nominate the resulting PR for discussion.\nall right, so if you want to open a PR, everything is settled for that. :)", "commid": "rust_issue_75239", "tokennum": 315}], "negative_passages": []} {"query_id": "q-en-rust-130881512be78fbfdb35904344db199d1c898c3177f659f9216994f54244e14a", "query": "if let Some(old_binding) = resolution.shadowed_glob { assert!(old_binding.is_glob_import()); if glob_binding.res() != old_binding.res() { resolution.shadowed_glob = Some(self.ambiguity( resolution.shadowed_glob = Some(this.ambiguity( AmbiguityKind::GlobVsGlob, old_binding, glob_binding, )); } else if !old_binding.vis.is_at_least(binding.vis, self.tcx) { } else if !old_binding.vis.is_at_least(binding.vis, this.tcx) { resolution.shadowed_glob = Some(glob_binding); } } else {", "positive_passages": [{"docid": "doc-en-rust-a23e3246976f07eb139e374e0803ff2e2de983d7ac4ec4cba804ec3187641873", "text": " $DIR/struct-variant-privacy.rs:9:14 --> $DIR/struct-variant-privacy.rs:10:14 | LL | foo::Bar::Baz { a: _a } => {} | ^^^ private enum", "positive_passages": [{"docid": "doc-en-rust-a7f3e9810aa82b6f33a59ff9fa989846e97e88c833b24e7a93ba56e854e1291a", "text": " $DIR/feature-gate-trait_upcasting.rs:11:25 | LL | let foo: &dyn Foo = bar;", "positive_passages": [{"docid": "doc-en-rust-118433c31bc29716faff9a460042fc6c662b57f7865f844f06298dd3fb733f90", "text": " $DIR/issue-72574-2.rs:6:20 | LL | Binder(_a, _x @ ..) 
=> {} | ^^^^^^^ this is only allowed in slice patterns | = help: remove this and bind each tuple field independently help: if you don't need to use the contents of _x, discard the tuple's remaining fields | LL | Binder(_a, ..) => {} | ^^ error: aborting due to previous error ", "positive_passages": [{"docid": "doc-en-rust-a48b7de958c52aaf8f6556c4a1124cd11db50916d01764712aa5be78a9e191d2", "text": "and I are working on diagnostics for the rest idiom in slice patterns () and came across this issue when an attempt is made to use it in tuples and tuple structs. IIUC, this is allowed: and this is not allowed (note the addition of ): The error messages for the tuple and tuple struct cases include the following incorrect information: This should state that the id binding to rest is not allowed. patterns are allowed. is only allowed in slice patterns, not in tuple or tuple struct patterns. This should state that the idiom is only allowed in slice patterns. $DIR/suggest-borrow.rs:2:9 | LL | let x: [u8] = vec!(1, 2, 3)[..]; | ^ doesn't have a size known at compile-time | = help: the trait `Sized` is not implemented for `[u8]` = note: all local variables must have a statically known size = help: unsized locals are gated as an unstable feature help: consider borrowing here | LL | let x: &[u8] = vec!(1, 2, 3)[..]; | + error[E0308]: mismatched types --> $DIR/suggest-borrow.rs:3:20 | LL | let x: &[u8] = vec!(1, 2, 3)[..]; | ----- ^^^^^^^^^^^^^^^^^ | | | | | expected `&[u8]`, found slice `[{integer}]` | | help: consider borrowing here: `&vec!(1, 2, 3)[..]` | expected due to this error[E0308]: mismatched types --> $DIR/suggest-borrow.rs:4:19 | LL | let x: [u8] = &vec!(1, 2, 3)[..]; | ---- ^^^^^^^^^^^^^^^^^^ expected slice `[u8]`, found `&[{integer}]` | | | expected due to this | help: consider removing the borrow | LL - let x: [u8] = &vec!(1, 2, 3)[..]; LL + let x: [u8] = vec!(1, 2, 3)[..]; | help: alternatively, consider changing the type annotation | LL | let x: &[u8] = &vec!(1, 
2, 3)[..]; | + error[E0277]: the size for values of type `[u8]` cannot be known at compilation time --> $DIR/suggest-borrow.rs:4:9 | LL | let x: [u8] = &vec!(1, 2, 3)[..]; | ^ doesn't have a size known at compile-time | = help: the trait `Sized` is not implemented for `[u8]` = note: all local variables must have a statically known size = help: unsized locals are gated as an unstable feature help: consider borrowing here | LL | let x: &[u8] = &vec!(1, 2, 3)[..]; | + error: aborting due to 4 previous errors Some errors have detailed explanations: E0277, E0308. For more information about an error, try `rustc --explain E0277`. ", "positive_passages": [{"docid": "doc-en-rust-09dba29fa8aa11509d6cd49c561b11d2548cab6627b306896c4b759f3fbc1ad4", "text": "I tried compiling this code: which resulted in the following output of rustc: So the compiler says So I tried adding a borrow and recompiling the code: which resulted in: so the compiler says and so I'm stuck in a loop. The compiler can't decide whether I should borrow or not. As a newbie to Rust, this is very frustrating and the opposite of helpful, it just confuses me further. Please fix this indecisiveness :) :\nFor reference, what you likely want to do is one of the following which means they cannot be the type of a local binding. 
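The two suggestions above chase each other because a bare `[u8]` can never be the type of a local binding. A short sketch of a variant that does compile (the function name is illustrative):

```rust
// A bare `[u8]` is unsized and cannot be a local, but a borrowed slice can:
// `&[u8]` is a pointer plus a length, so it has a known, fixed size.
fn slice_sum() -> u8 {
    let v = vec![1u8, 2, 3];
    let x: &[u8] = &v[..]; // borrow the vector's contents as a slice
    x.iter().sum()
}
```

Keeping the `Vec<u8>` itself, or boxing as `Box<[u8]>`, are the owning alternatives when a borrow's lifetime is too restrictive.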
You can give a binding a slice type (), because that's morally equivalent to a pointer (with sizing info), which has a compile time determined size, as all borrows are .\nCurrent output: After :", "commid": "rust_issue_72742", "tokennum": 172}], "negative_passages": []} {"query_id": "q-en-rust-14b39f9a843e2b57cdaba99d219efeecf096ccb84597797e012f55f4b45d1084", "query": "#![crate_name = \"compiletest\"] #![feature(test)] #![feature(vec_remove_item)] #![deny(warnings)] extern crate test;", "positive_passages": [{"docid": "doc-en-rust-ac0d9f4d8358a27aaa8edf3b10efc5a466a061e285df23b4965a04f245cadeb1", "text": "Implemented in rust impl #![feature(vec_remove_item)] #![deny(warnings)] extern crate test;", "positive_passages": [{"docid": "doc-en-rust-ee2c3a2cd4dd5f9723f8f1799ba3cb7cbe231b418343002b2cab203d04069198", "text": "Again here, as a maintainer of collections that offer the Vec and VecDeque APIs I'd like at least a summary comment explaining what this method does, why this design was followed, alternatives, and addressing the issues mentioned here that have received zero replies: should this method be to ? : naming issue: why s instead of elements? why instead of , etc.\nconcern rationale-and-vecdeque Just gonna formally register concern above\nNot an issue with the method per se, but it may need to be very clearly documented that the reference is not supposed to be to an item that is in the already, as it creates impossible borrow-checker conundrum. Users coming from the C world of pointers but no proper equality comparison, may be confused by this method, e.g. The name mentioned previously would help.\nas the proposer for FCP, do you have thoughts on concern?\nMy thinking is that the suggested by would sidestep most of the issues here.\ncc re.\nI don\u2019t have a strong desire for this particular feature myself, I proposed FCP as an effort to reduce the number of features in \"unstable limbo\". 
That said: Regarding , sure, I don\u2019t see a reason not to add it there as well. as proposed in sounds good to me. It is more general, and goes with precedent of also taking a boolean callable predicate rather than relying on . (Even further, the precedent of not having a method like that would rely on .)\nI notice two things: are a lot of methods similar to this we could add, and its not obvious to me which set of those methods is the ideal set. has sat in FCP concern limbo for 8 months without anyone complaining. I think we should not stabilize this just because it exists, but instead someone should come up with a well justified explanation of exactly what the best \"find and remove\" set of methods of vec-likes would be.\nGood points. (which is stable) and also fit the \"find and remove\" description, and we had a similar concern about the combinations of filtering v.s. non-filtering and self-exhausting on drop () v.s. not.\nresolve rationale-and-vecdeque I don't want to personally be on the hook for blocking this any more\n:bell: This is now entering its final comment period, as per the .", "commid": "rust_issue_40062", "tokennum": 501}], "negative_passages": []} {"query_id": "q-en-rust-14b39f9a843e2b57cdaba99d219efeecf096ccb84597797e012f55f4b45d1084", "query": "#![crate_name = \"compiletest\"] #![feature(test)] #![feature(vec_remove_item)] #![deny(warnings)] extern crate test;", "positive_passages": [{"docid": "doc-en-rust-f36d56ca8a5b853dddef1e388eccc3f4996314ced95e190f320b6523af18a2c2", "text": ":bell:\nThe final comment period, with a disposition to merge, as per the , is now complete. As the automated representative of the governance process, I would like to thank the author for their work and everyone else who contributed. 
The RFC will be merged soon.\nI still think this should be called instead of .\nMy problem with that name is that it only removes the first matching item, not all that are (though I haven't read the relevant RFC to know whether the name has been hashed out already; I assume it has been).\nI don't think the naming issue has been resolved. The implementation is still using the name. then?\nCan someone clarify what the status of this is wrt. the conversation above?\nAn example: I'm writing a proc macro attribute, and amongst the tokens, I expect other attribute to be there. This would be useful to add:\nUntil we sort this out, you can use\nPing. What's the status of this? Could we stabilize something?\n? Given we already have drainfilter for accepting a predicate, it would be nice to have a method that removed a single object from the vector (if there). Removing something from a vec is exceedingly common code. Getting the position and then removing that position is longwinded for such a common operation. (Btw, I think bit of naming is excellent as it highlights that there may be others still in the vec - a method name that prompts one to think is a good thing.)\nI use for this, and haven't found it too cumbersome. It differs in that it removes all copies of a given item, but is very easy for many purposes.\nRetain everything but the object? As work-arounds go that's shorter, but it's not ideal for expressing intent. Still keen on and .\nJust to bring up a different opinion: how about removing this method instead of stabilizing it? While yes, it would be convenient to have, it might not be worth it. For one, is not a good name, as many already said. The name doesn't imply that the first item from the start is removed. is better in that regard, but still doesn't feel completely right. More importantly, if we have , we should certainly also have . 
Having one but not the other feels arbitrary to me and makes for a surprising API (that's bad).", "commid": "rust_issue_40062", "tokennum": 500}], "negative_passages": []} {"query_id": "q-en-rust-14b39f9a843e2b57cdaba99d219efeecf096ccb84597797e012f55f4b45d1084", "query": "#![crate_name = \"compiletest\"] #![feature(test)] #![feature(vec_remove_item)] #![deny(warnings)] extern crate test;", "positive_passages": [{"docid": "doc-en-rust-a6a233343c4941fe4fb1380f06eae3729603f6c6f6d97ff568c7be2c0e92d2d6", "text": "Similarly, we would also want to add the variant of both of those methods. Potentially, we also want to add a variant that removes all items equal to the given . And as also already mentioned, we can also add all those methods to . And why not (without the versions)? Consequently, we would have to add 10 or 15 new methods. Adding fewer would make the API inconsistent IMO. All the functionality can be implemented with very little extra code with iterators.\nHow do you propose this be implemented with iterators? How about this signature: This uses to make it clear exactly what is removed. The argument and the return type makes it clear that we remove many items. The user has the option of how many to remove ( means all). The returned iterator can be implemented efficiently by shifting the items and then pulling a similar trick (using ) to to yield the removed items.\nalready mentioned this : I think that one is too complex. Especially when considering the call site, I think it's fairly hard to understand. And the name still does not imply that the search begins at the beginning, which is a big nono for me.\nRemoving an item from a vec is a very common codepath. Somehow it needs to be a one liner. Having swapremovefirst() and not having removefirst() would be nudging people in the direction of great performance. That seems quite rustic to me. swapremovelast is interesting as there maybe no swap needed if it happens to be the last item - cute. 
Just have those two functions. That\u2019s the 80% case covered. People who wish to maintain order will just have to have slightly more code. On Fri, 15 Nov 2019 at 19:20, Lukas Kalbertodt <:\nI don't understand the focus on removing here. Vec has a lot more members taking an index than and , and only a few to get hold of an index value ( variants and , sort of). Most of the complication in the \"workaround\" for this remove_item is finding the first index of something: v.iter().position( n == 2) Not too bad, the reminds you it's a linear search, but could be easier.", "commid": "rust_issue_40062", "tokennum": 469}], "negative_passages": []} {"query_id": "q-en-rust-14b39f9a843e2b57cdaba99d219efeecf096ccb84597797e012f55f4b45d1084", "query": "#![crate_name = \"compiletest\"] #![feature(test)] #![feature(vec_remove_item)] #![deny(warnings)] extern crate test;", "positive_passages": [{"docid": "doc-en-rust-2bfa86423016dc0b4d2cd712595eb9def72ae5df65ba94c9342a5feb4db8696b", "text": " #![feature(vec_remove_item)] #![deny(warnings)] extern crate test;", "positive_passages": [{"docid": "doc-en-rust-eb26f99012e5d90e23376380eb3ffa91828c552e642c5b8e9e77faa562982d56", "text": "I don't see where these concerns have been resolved. From re-reading this whole discussion, it isn't clear to me what the goal of adding this method is nor what problem does this method solve or whether this is a problem worth solving. What's more or less clear to me is that there is at least some design work to be done to at least ensure that libstd exposes a consistent API, and that sounds like this would warrant an RFC to me. The current discussion about using a method hints that these APIs might not only want to be consistent for the ordered collections (Vec, VecDeque, List) for which a notion of a \"first\" element makes sense, but might also want to be consistent with the e.g. and methods of slices. 
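The `position`-then-`remove` workaround mentioned in this thread, written out as a small helper (the name `remove_first` is just one of the candidates debated above, not a std API):

```rust
// Find the first element equal to `item` with a linear scan, then remove it,
// shifting the tail left. Returns the removed element, or None if absent.
fn remove_first<T: PartialEq>(v: &mut Vec<T>, item: &T) -> Option<T> {
    let pos = v.iter().position(|x| x == item)?;
    Some(v.remove(pos))
}
```

Using `swap_remove(pos)` instead of `remove(pos)` gives the O(1) variant when element order does not matter.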
The main arguments for FCPing this appear to be that this has sit on nightly for too long without anybody complaining, and the main argument for stabilizing this has been that this has been in FCP for too long. I think it would be better to just encourage and support those that want to land this feature to write a small RFC for it. If somebody is interested in doing that, I would be able to help and give feedback.\nUnless a concern is made with by a relevant team member, then it won't stop FCP. And it won't go into FCP unless the majority of team members have voted for it (and there aren't any formal concerns listed). So the fact it passed FCP means that all relevant team members that they were okay with the current design and did not have any outstanding concerns at the time. Things do not go into FCP simply because it's been \"sitting around too long\". Of course if any team members do have any concerns, now would be the best time to make them.\nMistakes happen. And to me this thread clearly looks like many concerns were overlooked accidentally and no one explicitly marked them as concerns for rfcbot. Luckily those mistakes were caught.\nReopening because\nThe current signature is missing a ?Sized bound. Could I ask for a PR to fix the bound and add a test that covers remove_item(\"...\") on a Vec #![feature(vec_remove_item)] #![deny(warnings)] extern crate test;", "positive_passages": [{"docid": "doc-en-rust-20b648530e897677a59e79e0fafbf3838590a3525cc67cc4074a2b3eaad420b5", "text": "I agree with and The currently design involves: start with a reference to a value not in the vec, use its PartialEq impl to do a linear scan, remove the first found, shift everything after the removed element. 
To generalize almost all of the discussion above, people are questioning whether the conjunction of all 5 of these decisions is really as common as a name like removeitem would suggest, whether a different permutation of choices might be more widely applicable, and how to adjust naming to leave room for all the other equally or more common permutations: ones where the ref we want to remove is a ref to a vec element, or we want to remove by FnMut instead of PartialEq, or we want a binary search, or we want to remove the last, or all matches, or we want the more efficient swapremove behavior. Fortunately Vec already supports all permutations of these use cases equally well, with nice independent APIs for searching and removing. Many people have voiced opinions equivalent to \"removing an item from a vec is so common that there absolutely must be a function for it\". I think what they mean is that the union of all permutations of the above 5 choices is common, which is true; they don't necessarily mean the specific choices that go into removeitem as it exists would justify a method specific to those choices. points out that satisfying the set of use cases which all together are common would take a much larger number of new methods.\nPer , I think we should go ahead and remove . fcp close\nThat is not what I meant. When I said that I need in my code, I meant exactly that: that I could take my existing code and replace it with and it would have exactly the same behavior, because I really do need specifically, not the other fancier methods. I'm perfectly fine with changing the name so that we can accommodate other fancier methods in the future, but why would that mean that we shouldn't have at all?\nfcp cancel\nproposal cancelled.\nPer , I think we should go ahead and remove . fcp close\nTeam member has proposed to close this. The next step is review by the rest of the tagged team members: [x] [x] [x] [x] [ ] [x] No concerns currently listed. 
Once a majority of reviewers approve (and at most 2 approvals are outstanding), this will enter its final comment period.", "commid": "rust_issue_40062", "tokennum": 506}], "negative_passages": []} {"query_id": "q-en-rust-14b39f9a843e2b57cdaba99d219efeecf096ccb84597797e012f55f4b45d1084", "query": "#![crate_name = \"compiletest\"] #![feature(test)] #![feature(vec_remove_item)] #![deny(warnings)] extern crate test;", "positive_passages": [{"docid": "doc-en-rust-4fde0a489073ea47e374bf0dac8e839e7a7f929d1784009e1e4f92f84cc4743e", "text": "If you spot a major issue that hasn't been raised at any point in this process, please speak up! See for info about what commands tagged team members can give me.\nChecking KodrAus\u2019 checkbox per\n:bell: This is now entering its final comment period, as per the . :bell:\nThe final comment period, with a disposition to close, as per the , is now complete. As the automated representative of the governance process, I would like to thank the author for their work and everyone else who contributed.\nSince we've merged the deprecation would we like to close this tracking issue now? Or keep it open until the unstable deprecated method is removed entirely?\nI think keeping this open until we removed the method is a good idea. How long do we want to wait with that by the way? I guess a couple of cycles is sufficient?\nBefore closing, it would be great if someone could suggest a \"one clean and obvious\" way to remove the first/last occurance of an item from a Vector. Irrespective of the arguments made here, a method removing the first/last occurance of an item is provided across all major programming languages, and is to be \"expected\" Anyone new to rust is bound to google \"remove from vec rust\" and end up on this discussion which would only leave them more confused.\nshould answer those questions. 
But here is a copy of my answer for convenience: You can of course handle the case however you like (panic and ignoring are not the only possibilities). Like the first element, but replace with . Remember that has a runtime of O(n) as all elements after the index need to be shifted. has a runtime of O(1) as it swaps the to-be-removed element with the last one. If the order of elements is not important in your case, use instead of !\nIt might be worth adding more examples (maybe all of the ones above) to the docs.\nI'd like to add those to the docs, should they go along the example that we currently have?\nWhat examples are you referring to? I could imagine those \"remove examples\" either going on the type (as a new section \"Removing elements\" or something like that) or directly on the method (below the existing docs). And by the way, feel free to add (parts of) my text/examples verbatim.\nThank you.", "commid": "rust_issue_40062", "tokennum": 501}], "negative_passages": []} {"query_id": "q-en-rust-14b39f9a843e2b57cdaba99d219efeecf096ccb84597797e012f55f4b45d1084", "query": "#![crate_name = \"compiletest\"] #![feature(test)] #![feature(vec_remove_item)] #![deny(warnings)] extern crate test;", "positive_passages": [{"docid": "doc-en-rust-04df2af8400b69a3b2a8a0e9f7e8cd1fdccfe6b7f6da1d2bca38ac39f7b1d5c8", "text": "I was referring to those \"remove examples\" as to whether I should append them to what we currently have in docs for , put them somewhere else or perhaps modify examples for to include removal of specific element\nThat sounds fine to me. 
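The `remove` vs. `swap_remove` trade-off described in that answer, as a runnable sketch:

```rust
// `remove(i)` shifts every later element left, so it is O(n) but preserves
// order; `swap_remove(i)` moves the last element into slot `i`, so it is O(1)
// but reorders the vector.
fn demo_orders() -> (Vec<i32>, Vec<i32>) {
    let mut a = vec![10, 20, 30, 40];
    a.remove(1);      // order preserved: [10, 30, 40]
    let mut b = vec![10, 20, 30, 40];
    b.swap_remove(1); // last element swapped in: [10, 40, 30]
    (a, b)
}
```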
Feel free to just open a PR and we can still see about the specifics then :)", "commid": "rust_issue_40062", "tokennum": 67}], "negative_passages": []} {"query_id": "q-en-rust-14bf4e6bd3759b03214da3442840104fb6f8336f0fb47a52cc4674f4b2af90dd", "query": " fn main() { x = x = x; //~^ ERROR cannot find value `x` in this scope //~| ERROR cannot find value `x` in this scope //~| ERROR cannot find value `x` in this scope x = y = y = y; //~^ ERROR cannot find value `y` in this scope //~| ERROR cannot find value `y` in this scope //~| ERROR cannot find value `y` in this scope //~| ERROR cannot find value `x` in this scope x = y = y; //~^ ERROR cannot find value `x` in this scope //~| ERROR cannot find value `y` in this scope //~| ERROR cannot find value `y` in this scope x = x = y; //~^ ERROR cannot find value `x` in this scope //~| ERROR cannot find value `x` in this scope //~| ERROR cannot find value `y` in this scope x = x; // will suggest add `let` //~^ ERROR cannot find value `x` in this scope //~| ERROR cannot find value `x` in this scope x = y // will suggest add `let` //~^ ERROR cannot find value `x` in this scope //~| ERROR cannot find value `y` in this scope } ", "positive_passages": [{"docid": "doc-en-rust-4a2a3ca0578e56aa495d8a238a4a39754613458b174a8b87ffbebf00f635a35a", "text": " $DIR/issue-16922.rs:4:5 | LL | fn foo(value: &T) -> Box { | - let's call the lifetime of this reference `'1` LL | Box::new(value) as Box | ^^^^^^^^^^^^^^^ cast requires that `'1` must outlive `'static` | help: to declare that the trait object captures data from argument `value`, you can add an explicit `'_` lifetime bound | LL | fn foo(value: &T) -> Box { | ++++ error: aborting due to previous error ", "positive_passages": [{"docid": "doc-en-rust-a71721f88961ea6b81b7ebd2effb1c60287400743a9f36a25fc65e8df7e54b5f", "text": "we currently output The suggestion is incomplete and will not result in compilable code. 
Because the lifetime is coming from an argument of type , which is an unconstrained type parameter, itself will not have the needed lifetime and causes the obligation that we're trying to reverse by suggesting . The appropriate changes to the code would be either or , depending on whether appears in any other argument or not.\nThe suggested code .", "commid": "rust_issue_73497", "tokennum": 86}], "negative_passages": []} {"query_id": "q-en-rust-19b4cd0b2284614ff5f908e4db0ee827ca02d9256c02a9ca68f4fbd0dcbe1edd", "query": "run_pass_manager(cgcx, tm, llmod, config, true); cgcx.save_temp_bitcode(&mtrans, \"thin-lto-after-pm\"); timeline.record(\"thin-done\"); // FIXME: this is a hack around a bug in LLVM right now. Discovered in // #46910 it was found out that on 32-bit MSVC LLVM will hit a codegen // error if there's an available_externally function in the LLVM module. // Typically we don't actually use these functions but ThinLTO makes // heavy use of them when inlining across modules. // // Tracked upstream at https://bugs.llvm.org/show_bug.cgi?id=35736 this // function call (and its definition on the C++ side of things) // shouldn't be necessary eventually and we can safetly delete these few // lines. llvm::LLVMRustThinLTORemoveAvailableExternally(llmod); cgcx.save_temp_bitcode(&mtrans, \"thin-lto-after-rm-ae\"); timeline.record(\"no-ae\"); Ok(mtrans) } }", "positive_passages": [{"docid": "doc-en-rust-eac71975284c29468d15101371d7dec17505b09a4c2a386257a23b255df72ff3", "text": "I'm opening this up to serve as a tracking issue for enabling multiple codegen units in release mode by default. I've written up a before but the tl;dr; is that multiple codegen units enables us to run optimization/code generation in parallel, making use of all available computing resources often speeding up compilations by more than 2x. Historically this has not been done due to claims of a loss in performance, but the is intended to assuage such concerns. 
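For the `Box::new(value) as Box<dyn Trait>` cast discussed a few paragraphs up, the two compiling alternatives can be sketched like this (using `Debug` as a stand-in trait; names are illustrative):

```rust
use std::fmt::Debug;

// Variant 1: let the trait object capture the borrow's lifetime, so the
// returned box is only valid as long as `value` is.
fn foo_borrowing<T: Debug>(value: &T) -> Box<dyn Debug + '_> {
    Box::new(value)
}

// Variant 2: require `T: 'static` and take the value by ownership, so the
// boxed trait object can live for `'static` (the default for `dyn Debug`).
fn foo_owned<T: Debug + 'static>(value: T) -> Box<dyn Debug> {
    Box::new(value)
}
```

Which variant applies depends on whether the caller can give up ownership, which is why a single mechanical `+ '_` suggestion is incomplete here.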
The most viable route forward seems to be to enable multiple CGUs and ThinLTO at the same time in release mode. Blocking issues: [x] Enable for libstd on all platforms - - - - Potential blockers/bugs: - -\ncc\nPresumably this will remain a stable compiler flag, but what else will change by default? Is the plan to make this only affect , or is the plan to also make this affect ? Do we want to make continue to use only a single CGU, to hedge against regressions for people who are already willing to trade off compiler time for runtime?\nI would specifically propose that any opt level greater than 1 uses 16 codegen units and ThinLTO enabled by default for those 16 codegen units.\nThis is very interesting. One would think that ThinLTO-driven inlining should always have to do less work when our much more conservative pre-LLVM inlining. But that doesn't always seem to be the case, as in the . Most tests on the seem to profit though. Overall it's still not a clear picture to me.\nwas that comment meant for a different thread? I forget which one as well though, so I'll respond here! As a \"quick benchmark\" I compiled the regex test suite with 16 CGUs + ThinLTO and then toggled inlining in all CGUs on/off. Surprisingly inlining in all CGUs was 5s faster to compile, and additionally . In those timings is where inlining is only in one CGU, where is inlining in all CGUs. One benchmark, , got twice as slow!\nYeah, the comment was in response to (rust-doom) but since that was closed already, I put it here.", "commid": "rust_issue_45320", "tokennum": 458}], "negative_passages": []} {"query_id": "q-en-rust-19b4cd0b2284614ff5f908e4db0ee827ca02d9256c02a9ca68f4fbd0dcbe1edd", "query": "run_pass_manager(cgcx, tm, llmod, config, true); cgcx.save_temp_bitcode(&mtrans, \"thin-lto-after-pm\"); timeline.record(\"thin-done\"); // FIXME: this is a hack around a bug in LLVM right now. 
Discovered in // #46910 it was found out that on 32-bit MSVC LLVM will hit a codegen // error if there's an available_externally function in the LLVM module. // Typically we don't actually use these functions but ThinLTO makes // heavy use of them when inlining across modules. // // Tracked upstream at https://bugs.llvm.org/show_bug.cgi?id=35736 this // function call (and its definition on the C++ side of things) // shouldn't be necessary eventually and we can safetly delete these few // lines. llvm::LLVMRustThinLTORemoveAvailableExternally(llmod); cgcx.save_temp_bitcode(&mtrans, \"thin-lto-after-rm-ae\"); timeline.record(\"no-ae\"); Ok(mtrans) } }", "positive_passages": [{"docid": "doc-en-rust-8459400afb569ad2188b3cd5b231230badf367bcfeb89dca33d60d66624907a0", "text": "Regarding the compilation time difference between pre- and post-trans inlining, my hypothesis would be that sometimes pre-trans inlining will lead to more code being eliminated early on (as in the case of regex apparently) and sometimes it will have the opposite effect. I suspect that there's room for improvement by tuning which LLVM passes we run specifically for ThinLTO. Or do we do that already?\nHeh it's true yeah, I'd imagine that there's always room for improvement in pass tuning in Rust :). Right now we perform (afaik) 0 customization of any pass manager in LLVM. All of the normal optimization passes, LTO optimization passes, and ThinLTO optimization passes are all the same as what's in LLVM itself.\nI wanted to also take the time and tabulate all the results from the to make sure it's all visible in one place.

Note that all the timings below are comparing a release build to a release build with 16 CGUs and ThinLTO enabled Improvements All of the following compile times improved, sorted by most improved to least improved Regressions The following crates regressed in compile times, sorted from smallest regression to largest alexcrichton's attempt to reproduce the regressions Here I attempt to reproduce the regressions on my own machine with Unfortunately the only regression I was able to reproduce was the rust-belt regression. I'll be looking more into that.\nLooking into , one interesting thing I've found is that the crate takes longer to compile with ThinLTO than it would otherwise. Looking at it appears that one codegen unit in this crate takes 99% of the time in LLVM. This codegen unit appears to be basically entirely dominated by . Almost all of the benefit of multiple codegen units is spreading out the work across all CPUs. Enabling ThinLTO to avoid losing any perf is fundamentally doing more work than what's already happening at , but we get speedups across the board in most cases. If we have one huge CGU, split it in two, and work on those in parallel then we've got 50% of the original time to run ThinLTO (in a perfect world). If ThinLTO takes more than 50% of the time then we'd have a build time regression, but that's almost never happening.", "commid": "rust_issue_45320", "tokennum": 488}], "negative_passages": []} {"query_id": "q-en-rust-19b4cd0b2284614ff5f908e4db0ee827ca02d9256c02a9ca68f4fbd0dcbe1edd", "query": "run_pass_manager(cgcx, tm, llmod, config, true); cgcx.save_temp_bitcode(&mtrans, \"thin-lto-after-pm\"); timeline.record(\"thin-done\"); // FIXME: this is a hack around a bug in LLVM right now. Discovered in // #46910 it was found out that on 32-bit MSVC LLVM will hit a codegen // error if there's an available_externally function in the LLVM module. 
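The redistribution idea raised above — move items out of an oversized CGU once you have a size metric — can be illustrated with a toy greedy partitioner. This is not rustc's actual partitioning algorithm, just a sketch of why one huge function still dominates a unit:

```rust
// Greedily assign "functions" (here just size estimates, e.g. MIR statement
// counts) to the currently least-loaded of `n` codegen units, largest first.
// A single huge item still pins one unit's load, which is the situation
// described above where one CGU takes ~99% of the LLVM time.
fn partition(sizes: &[u64], n: usize) -> Vec<u64> {
    let mut loads = vec![0u64; n];
    let mut sorted: Vec<u64> = sizes.to_vec();
    sorted.sort_unstable_by(|a, b| b.cmp(a)); // largest first
    for s in sorted {
        let i = (0..n).min_by_key(|&i| loads[i]).unwrap();
        loads[i] += s;
    }
    loads
}
```

With sizes `[100, 5, 5, 5, 5]` and four units, the best any partitioner can do is a unit of load 100 alongside three small ones, so parallelism cannot shrink wall time below that dominant unit.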
// Typically we don't actually use these functions but ThinLTO makes // heavy use of them when inlining across modules. // // Tracked upstream at https://bugs.llvm.org/show_bug.cgi?id=35736 this // function call (and its definition on the C++ side of things) // shouldn't be necessary eventually and we can safely delete these few // lines. llvm::LLVMRustThinLTORemoveAvailableExternally(llmod); cgcx.save_temp_bitcode(&mtrans, \"thin-lto-after-rm-ae\"); timeline.record(\"no-ae\"); Ok(mtrans) } }", "positive_passages": [{"docid": "doc-en-rust-826aee41cb0e8b13d2368b8675cf3e3aa442a17b3bcb7408e02c50b17283eed2", "text": "What's happening here is that we have one huge CGU, but when we split it up we still have one huge CGU. This means that the CGU which may take ~1% less time in LLVM only gives us a tiny sliver of a window to run ThinLTO passes. In the case of the crate this means that adding ThinLTO passes is an overall build time regression. So generalizing even further, I think that enabling ThinLTO and multiple CGUs by default is going to regress compile time performance in any crate where our partitioning implementation doesn't actually partition very well. In the case of we've got one huge function which hogs almost all the optimization time (I think) and the entire crate is basically dominated by that one function. I would be willing to conclude, however, that such a situation is likely quite rare. Almost all other crates in the ecosystem will benefit from the partitioning which should basically evenly split up a crate. Maybe you have thoughts on this though?\nThank you so much for collecting and analyzing such a large amount of data! Your conclusions make sense to me. Unevenly distributed CGU sizes are also a problem for incremental compilation; e.g. the test sees 90% CGU re-use, yet compile time is almost the same as from-scratch. 
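The "tiny sliver of a window" reasoning above reduces to simple arithmetic. A toy cost model (invented numbers, not rustc internals) makes the 50% threshold concrete:

```rust
// Toy cost model for the reasoning above (illustrative, not measured):
// optimizing one CGU costs 1.0; splitting the work across `workers`
// balanced parallel CGUs cuts that to 1/workers; ThinLTO then adds a
// fixed overhead fraction on top of the wall-clock time.
fn wall_time(workers: f64, thinlto_overhead: f64) -> f64 {
    1.0 / workers + thinlto_overhead
}

fn main() {
    let baseline = wall_time(1.0, 0.0); // one CGU, no ThinLTO
    // Two balanced CGUs plus ThinLTO costing 40% extra: still a net win.
    assert!(wall_time(2.0, 0.4) < baseline);
    // If ThinLTO costs more than the 50% we saved, it's a regression,
    // which is exactly the one-huge-CGU-that-barely-splits situation.
    assert!(wall_time(2.0, 0.6) > baseline);
    println!("model ok");
}
```

The model also shows why crates whose code splits evenly almost always win: the first term shrinks with every extra worker while the overhead term stays fixed.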
In the non-incremental case it might be an option to detect the problematic case right after partitioning and then switch to non-LTO mode? I'm not sure it's worth the extra complexity though. For cases where there is one big CGU but that CGU contains multiple functions, we should be able to rather easily redistribute functions to other CGUs. We'd only need a metric for the size of a . The number of MIR instructions might be a sufficient heuristic here. That would not help for but it might help for other crates. In conclusion, judging from the table above, I think we should make the default sooner rather than later. Open questions I see: What to do about ? I would have thought that it would always be a clear win to disable this when ThinLTO is enabled but that doesn't seem to be the case. Are there hardware configurations where this never is a win? E.g. should we default to traditional compilation if ?", "commid": "rust_issue_45320", "tokennum": 482}], "negative_passages": []} {"query_id": "q-en-rust-19b4cd0b2284614ff5f908e4db0ee827ca02d9256c02a9ca68f4fbd0dcbe1edd", "query": "run_pass_manager(cgcx, tm, llmod, config, true); cgcx.save_temp_bitcode(&mtrans, \"thin-lto-after-pm\"); timeline.record(\"thin-done\"); // FIXME: this is a hack around a bug in LLVM right now. Discovered in // #46910 it was found out that on 32-bit MSVC LLVM will hit a codegen // error if there's an available_externally function in the LLVM module. // Typically we don't actually use these functions but ThinLTO makes // heavy use of them when inlining across modules. // // Tracked upstream at https://bugs.llvm.org/show_bug.cgi?id=35736 this // function call (and its definition on the C++ side of things) // shouldn't be necessary eventually and we can safely delete these few // lines. 
llvm::LLVMRustThinLTORemoveAvailableExternally(llmod); cgcx.save_temp_bitcode(&mtrans, \"thin-lto-after-rm-ae\"); timeline.record(\"no-ae\"); Ok(mtrans) } }", "positive_passages": [{"docid": "doc-en-rust-a39618a6d2ec0fbbc39b9a1ecc2ba00b73839dc85487f888bb218f3688a727ce", "text": "FWIW, as the author of , which is a made-for-fun Piston game, I am more than happy to take a small compile regression when it seems like the vast majority of crates get huge real-time improvements!\nThis seems like a good thing to have in our back pocket, but I'm also wary of trying to do this by default. For example the case is one where we may wish to disable ThinLTO but the one massive function could also critically rely on a function in a different CGU being inlined? That sort of case would be difficult for us to determine... In any case though I agree that we probably don't need the complexity just yet so I think we can defer this for later. Agreed! So far for all the crates I've seen, O(instructions) has been quite a good metric for \"how long this takes in all LLVM-related passes\", so counting MIR sounds reasonable to me as well. Again though I also feel like this is ok to have in our back pocket. I'd want to dig more into the minor change example, I'm curious how awry the CGU distribution is there and whether a heuristic like this would help. Quite surprisingly I've yet to see any data that it's beneficial to compile time in release mode or doesn't hurt runtime. (all data is that it hurts both compile time and runtime performance!) That being said we're seeing such big wins from ThinLTO on big projects today that we can probably just save this as a possible optimization for the future. On the topic of inline functions I recently dug up again (later closed in favor of , but the latter may no longer be true today). I suspect that may \"fix\" quite a bit of the compile time issue here without coming at a loss of performance? In any case possibly lots of interesting things we could do there.
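The "counting MIR" idea discussed above would feed a redistribution pass something like the following greedy sketch. The function and its numbers are invented for illustration; rustc's real partitioner works differently:

```rust
// Greedy redistribution sketch: place each function (sized here by a
// stand-in "MIR instruction count"), largest first, into whichever CGU
// is currently lightest. Classic longest-processing-time bin packing.
fn partition(mut sizes: Vec<u64>, cgus: usize) -> Vec<u64> {
    sizes.sort_unstable_by(|a, b| b.cmp(a)); // largest functions first
    let mut bins = vec![0u64; cgus];
    for s in sizes {
        // Index of the currently lightest CGU.
        let i = (0..cgus).min_by_key(|&i| bins[i]).unwrap();
        bins[i] += s;
    }
    bins
}

fn main() {
    // A module with one dominating function and many smaller ones.
    let sizes = vec![900, 300, 250, 120, 80, 50, 40, 30, 20, 10];
    let bins = partition(sizes, 4);
    // The 900-unit function still caps the largest CGU (no heuristic can
    // split a single function), but the rest spreads out evenly.
    assert_eq!(*bins.iter().max().unwrap(), 900);
    assert_eq!(bins.iter().sum::<u64>(), 1800);
    println!("bin loads: {bins:?}");
}
```

As the thread notes, this would not help a crate dominated by one giant function, but it would even out CGUs that merely happen to contain several large ones.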
Also a good question! I find this to be a difficult one, however, because the CGU number will affect the output artifact, which means that if we do this sort of probing it'll be beneficial for performance but come at the cost of more difficult deterministic builds. You'd have to specify the CGUs manually or build on the same-ish hardware to get a deterministic build I think? I think though that if you have 1 CPU then this is universally a huge regression.", "commid": "rust_issue_45320", "tokennum": 504}], "negative_passages": []} {"query_id": "q-en-rust-19b4cd0b2284614ff5f908e4db0ee827ca02d9256c02a9ca68f4fbd0dcbe1edd", "query": "run_pass_manager(cgcx, tm, llmod, config, true); cgcx.save_temp_bitcode(&mtrans, \"thin-lto-after-pm\"); timeline.record(\"thin-done\"); // FIXME: this is a hack around a bug in LLVM right now. Discovered in // #46910 it was found out that on 32-bit MSVC LLVM will hit a codegen // error if there's an available_externally function in the LLVM module. // Typically we don't actually use these functions but ThinLTO makes // heavy use of them when inlining across modules. // // Tracked upstream at https://bugs.llvm.org/show_bug.cgi?id=35736 this // function call (and its definition on the C++ side of things) // shouldn't be necessary eventually and we can safely delete these few // lines. llvm::LLVMRustThinLTORemoveAvailableExternally(llmod); cgcx.save_temp_bitcode(&mtrans, \"thin-lto-after-rm-ae\"); timeline.record(\"no-ae\"); Ok(mtrans) } }", "positive_passages": [{"docid": "doc-en-rust-4426806565390046acd7ad33b00b13788407fc81b51697f17774c126464fdbb6", "text": "With one CPU we'd split the preexisting one huge CGU into N different ones, probably take roughly the same amount of time to optimize those, and then tack on ThinLTO and more optimization passes. My guess is that with one CPU we'd easily see 50% regressions. With 2+ CPUs however I'd probably expect to see benefits to compile time.
Anything giving us twice the resources to churn through the CGUs more quickly should start seeing wins, in theory I think. For now though the deterministic-builds argument wins me over in terms of leaving this as-is for all hardware configurations. That and I doubt anyone's compiling Rust on single-core machines nowadays!\nOne thing I think that's also worth pointing out is that up to this point we've mostly been measuring the runtime of an entire . That's actually, I believe, the absolute worst case scenario for where ThinLTO will provide benefit. Despite this, it's showing huge improvements for lots of projects! The benefit of ThinLTO and multiple CGUs is leveraging otherwise idle parallelism on the build machine. It's overall increasing the amount of work the compiler does. For a from scratch, though, you typically already have tons of crates compiling for the first half of the build in parallel. In that sense there's not actually any idle parallelism. Put another way, ThinLTO and multiple CGUs should only be beneficial for builds which don't have many crates compiling in parallel for long parts of the build. If a build is 100% parallel for the entire time then ThinLTO will likely regress compile time performance. Now you might realize, however, that one very common case where you're only building one crate is in an incremental build! Typically if you do an incremental build you're only building a handful of crates, often serially. In that sense I think that there's some massive wins of ThinLTO + multiple CGUs in incremental builds rather than entire crate builds. Although improving both is of course great as well :)\nFor deterministic builds you have to do some extra configuration anyway (e.g. path remapping) so I would not consider that a blocker. 
And I guess there are single core VMs around somewhere.", "commid": "rust_issue_45320", "tokennum": 475}], "negative_passages": []} {"query_id": "q-en-rust-19b4cd0b2284614ff5f908e4db0ee827ca02d9256c02a9ca68f4fbd0dcbe1edd", "query": "run_pass_manager(cgcx, tm, llmod, config, true); cgcx.save_temp_bitcode(&mtrans, \"thin-lto-after-pm\"); timeline.record(\"thin-done\"); // FIXME: this is a hack around a bug in LLVM right now. Discovered in // #46910 it was found out that on 32-bit MSVC LLVM will hit a codegen // error if there's an available_externally function in the LLVM module. // Typically we don't actually use these functions but ThinLTO makes // heavy use of them when inlining across modules. // // Tracked upstream at https://bugs.llvm.org/show_bug.cgi?id=35736 this // function call (and its definition on the C++ side of things) // shouldn't be necessary eventually and we can safely delete these few // lines. llvm::LLVMRustThinLTORemoveAvailableExternally(llmod); cgcx.save_temp_bitcode(&mtrans, \"thin-lto-after-rm-ae\"); timeline.record(\"no-ae\"); Ok(mtrans) } }", "positive_passages": [{"docid": "doc-en-rust-9cbbd7c4d3c7f3f4c75a168140fc39d1686ff5f6d6e1524b514ba1c75b0b91ac", "text": "But all of this is such a niche case that I don't really care. MIR-only RLIBs should put us into a pretty good spot regarding this, so I'm quite confident that we're on the right path overall.\nThat's really interesting. With incremental compilation enabled, the situation might be different (because pre-trans inlining hurts re-use) but for the non-incremental case it sounds like it's pretty clear what to do.\nYeah, maybe we should revisit this at some point. Although I have to say the current solution of only and symbols and nothing in between is really nice and simple.\nHm yeah that's a good point about needing configuration anyway for deterministic builds. It now seems like a more plausible route to take!
Also that's a very interesting point about incremental and inlining on our own end... Maybe we should dig more into those ThinLTO runtime regressions at some point! Also yeah I don't really want to change how we trans inline functions just yet, I do like the simplicity too :)\nOut of curiosity, how did you figure this out, and which function was it? I've wanted to look into why the webrender build is so slow and would welcome tips.\nOh sure I'd love to explain! So I originally found as a problematic crate when compiling rust-belt as it just took a while and I decided to dig deeper. I checked out the crate and ran: That drops in the current directory, and opening that up I see: ! The graph here isn't always the easiest to read, but we've clearly got two huge bars, both of which correspond to taking a huge amount of time for that one CGU (first is optimization, second is ThinLTO + codegen). Next I ran: and that command dumps a bunch of IR files into . Our interesting CGU is so I opened up (we sure do love our long filenames). Inside that file it was 70k lines and some poking around showed that one function was 66k lines of IR. I sort of forget now how at this point I went from that IR to determining there was a huge function in there though...\nI don't know if this is the right place, but I think it should be considered whether the symbol issues reported in the Nightly section here: should be considered a blocker or not.", "commid": "rust_issue_45320", "tokennum": 500}], "negative_passages": []} {"query_id": "q-en-rust-19b4cd0b2284614ff5f908e4db0ee827ca02d9256c02a9ca68f4fbd0dcbe1edd", "query": "run_pass_manager(cgcx, tm, llmod, config, true); cgcx.save_temp_bitcode(&mtrans, \"thin-lto-after-pm\"); timeline.record(\"thin-done\"); // FIXME: this is a hack around a bug in LLVM right now. Discovered in // #46910 it was found out that on 32-bit MSVC LLVM will hit a codegen // error if there's an available_externally function in the LLVM module. 
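The manual IR spelunking described above (opening a CGU's `.ll` dump and spotting the one function that dominates it) can be roughed out in code. This is a deliberately naive sketch over an invented IR snippet, not a real tool:

```rust
// Count the body lines of each `define`d function in an LLVM IR dump so
// that a single dominating function stands out. Parsing is crude: a
// line starting with '}' is assumed to close the current function.
fn function_sizes(ir: &str) -> Vec<(String, usize)> {
    let mut out = Vec::new();
    let mut current: Option<(String, usize)> = None;
    for line in ir.lines() {
        if line.starts_with("define") {
            // Function name runs from '@' up to the parameter list.
            let at = line.find('@').unwrap_or(0);
            let name = line[at..].split('(').next().unwrap().to_string();
            current = Some((name, 0));
        } else if line.starts_with('}') {
            if let Some(f) = current.take() {
                out.push(f);
            }
        } else if let Some((_, n)) = current.as_mut() {
            *n += 1;
        }
    }
    out
}

fn main() {
    // Made-up IR standing in for a real codegen-unit dump.
    let ir = "define i32 @small(i32 %x) {\n  ret i32 %x\n}\ndefine void @huge() {\n  br label %a\n  br label %b\n  ret void\n}";
    let (name, lines) =
        function_sizes(ir).into_iter().max_by_key(|&(_, n)| n).unwrap();
    assert_eq!((name.as_str(), lines), ("@huge", 3));
    println!("largest function: {name} ({lines} body lines)");
}
```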
// Typically we don't actually use these functions but ThinLTO makes // heavy use of them when inlining across modules. // // Tracked upstream at https://bugs.llvm.org/show_bug.cgi?id=35736 this // function call (and its definition on the C++ side of things) // shouldn't be necessary eventually and we can safely delete these few // lines. llvm::LLVMRustThinLTORemoveAvailableExternally(llmod); cgcx.save_temp_bitcode(&mtrans, \"thin-lto-after-rm-ae\"); timeline.record(\"no-ae\"); Ok(mtrans) } }", "positive_passages": [{"docid": "doc-en-rust-b0225b6de619b46503f20143d72f83ac92cc3a4f9288ee033156ed5537d68271", "text": "The tl;dr is that certain versions of LLVM (I've seen this on 3.8, and whatever LLVM rustc nightly uses) append seemingly random garbage to the end of some names, e.g. we get: instead of: This knocks the debuginfo out of sync (it doesn't have the garbage appended). I've been able to repro this with Clang 3.8 for C++ files as well (haven't tested on other LLVM versions), and switching to 5.0 seems to have fixed the issue. I don't know if it's within reach, but perhaps we should attempt upgrading to LLVM 5.0 before releasing this on stable? Note I understand this is for release mode, but I see this in debug mode on rustc nightly right now as well", "commid": "rust_issue_45320", "tokennum": 185}], "negative_passages": []} {"query_id": "q-en-rust-19d4d015787af8da602baf74610b2095c2d5679df95804cfbc2ec50b2e9218ae", "query": "self_ty: Ty<'tcx>, sig: ty::PolyGenSig<'tcx>, ) -> ty::Binder<'tcx, (ty::TraitRef<'tcx>, Ty<'tcx>, Ty<'tcx>)> { debug_assert!(!self_ty.has_escaping_bound_vars()); assert!(!self_ty.has_escaping_bound_vars()); let trait_ref = tcx.mk_trait_ref(fn_trait_def_id, [self_ty, sig.skip_binder().resume_ty]); sig.map_bound(|sig| (trait_ref, sig.yield_ty, sig.return_ty)) }", "positive_passages": [{"docid": "doc-en-rust-0e86f5cad76845a9579f9f9606cb740329e6e79be0820e577c6d84ca9b39e498", "text": "The assertion there actually does panic for the test. 
I only moved this code between the files, the same comment and code pattern exists as well for normal closures: Originally posted by in\nI couldn't repro this failure, maybe was doing instead of ? The ordering is important here.\nI believe I did, yes.", "commid": "rust_issue_104825", "tokennum": 64}], "negative_passages": []} {"query_id": "q-en-rust-1a3e673a30b48a4775545f7bfc534bccb025127168857a7f905445540b83eb98", "query": "linker_flavor: LinkerFlavor::Ld, linker: Some(\"arm-none-eabi-ld\".into()), asm_args: cvs![\"-mthumb-interwork\", \"-march=armv4t\", \"-mlittle-endian\",], features: \"+soft-float,+strict-align\".into(), // Force-enable 32-bit atomics, which allows the use of atomic load/store only. // The resulting atomics are ABI incompatible with atomics backed by libatomic. features: \"+soft-float,+strict-align,+atomics-32\".into(), main_needs_argc_argv: false, atomic_cas: false, has_thumb_interworking: true,", "positive_passages": [{"docid": "doc-en-rust-9d3c81fe33119973c08c87410d7d37fcbfdb78fb83e02fe3f92be033b67d8c9f", "text": "If you try to compile the crate with , it fails with the following error: This seems to be a regression from to , and my best guess for the change that caused it is this pull request: $DIR/must_use-pin.rs:42:5 | LL | pin_must_use_ptr(); | ^^^^^^^^^^^^^^^^^^ | note: the lint level is defined here --> $DIR/must_use-pin.rs:1:9 | LL | #![deny(unused_must_use)] | ^^^^^^^^^^^^^^^ error: unused pinned boxed `MustUse` that must be used --> $DIR/must_use-pin.rs:44:5 | LL | pin_box_must_use(); | ^^^^^^^^^^^^^^^^^^ error: aborting due to 2 previous errors ", "positive_passages": [{"docid": "doc-en-rust-eb2f9b3bb39caefb93b615bcdd1a7a477ee66bea050804de4650e31a26511576", "text": " $DIR/issue_74400.rs:12:5 | LL | f(data, identity) | ^^^^^^^^^^^^^^^^^ | = help: consider adding an explicit lifetime bound `T: 'static`... 
error[E0308]: mismatched types --> $DIR/issue_74400.rs:12:5 | LL | f(data, identity) | ^^^^^^^^^^^^^^^^^ one type is more general than the other | = note: expected type `for<'r> Fn<(&'r T,)>` found type `Fn<(&T,)>` error: implementation of `FnOnce` is not general enough --> $DIR/issue_74400.rs:12:5 | LL | f(data, identity) | ^^^^^^^^^^^^^^^^^ implementation of `FnOnce` is not general enough | = note: `fn(&'2 T) -> &'2 T {identity::<&'2 T>}` must implement `FnOnce<(&'1 T,)>`, for any lifetime `'1`... = note: ...but it actually implements `FnOnce<(&'2 T,)>`, for some specific lifetime `'2` error: aborting due to 3 previous errors Some errors have detailed explanations: E0308, E0310. For more information about an error, try `rustc --explain E0308`. ", "positive_passages": [{"docid": "doc-en-rust-083ab7db4e2d48777362fd29f1c6170eec0fa54a54f82e95b6d3b4e25b7c55d1", "text": "Regression from to . The following code yields an incorrect error: On beta, this gives: On nightly, this:\nwould you be so kind as to label this issue? It's sliding down the list while being hard to search.\nRegression occurred between nightly-2020-06-23 and nightly-2020-06-24, I suspect is related (it's ?).\nIt would be great to find the culprit PR. ping cleanup\nHey Cleanup Crew ICE-breakers! This bug has been identified as a good \"Cleanup ICE-breaking candidate\". In case it's useful, here are some [instructions] for tackling these sorts of bugs. Maybe take a look? Thanks! <3 [instructions]: https://rustc-dev- cc\nsearched nightlies: from nightly-2020-06-23 to nightly-2020-06-24 regressed nightly: nightly-2020-06-24 searched commits: from to regressed commit: // FIXME(with_negative_coherence): the infcx has region constraints from equating // the impl headers as requirements. Given that the only region constraints we // get are involving inference regions in the root, it shouldn't matter, but // still sus. 
// // We probably should just throw away the region obligations registered up until // now, or ideally use them as assumptions when proving the region obligations // that we get from proving the negative predicate below. let ref infcx = root_infcx.fork(); let ocx = ObligationCtxt::new(infcx);", "positive_passages": [{"docid": "doc-en-rust-1f22b06224aecd994d3e19d133edc7f3e152837884d6a163ec0313c23c6e85a5", "text": " $DIR/non-existent-field-present-in-subfield-recursion-limit.rs:41:22 | LL | let test = fooer.f; | ^ unknown field | = note: available fields are: `first`, `second`, `third` error: aborting due to previous error For more information about this error, try `rustc --explain E0609`. ", "positive_passages": [{"docid": "doc-en-rust-48cee304816ed8068fba74f198865051001db76ca3462cd9612353747a29af8e", "text": "I just encountered this output: We currently mention available fields when the field used isn't found. It would be nice if we peeked at the types for the available fields to see if any of them has any fields (including through deref) of the name we originally wanted. In this case, I would love to see the following output: $DIR/suggest-using-tick-underscore-lifetime-in-return-trait-object.rs:5:5 | LL | fn foo(value: &T) -> Box { | - let's call the lifetime of this reference `'1` LL | Box::new(value) as Box | ^^^^^^^^^^^^^^^ cast requires that `'1` must outlive `'static` | help: to declare that the trait object captures data from argument `value`, you can add an explicit `'_` lifetime bound | LL | fn foo(value: &T) -> Box { | ++++ error: aborting due to previous error ", "positive_passages": [{"docid": "doc-en-rust-a71721f88961ea6b81b7ebd2effb1c60287400743a9f36a25fc65e8df7e54b5f", "text": "we currently output The suggestion is incomplete and will not result in compilable code. 
Because the lifetime is coming from an argument of type , which is an unconstrained type parameter, itself will not have the needed lifetime and causes the obligation that we're trying to reverse by suggesting . The appropriate changes to the code would be either or , depending on whether appears in any other argument or not.\nThe suggested code .", "commid": "rust_issue_73497", "tokennum": 86}], "negative_passages": []} {"query_id": "q-en-rust-1f61d149bfe4d2efec8cc0f4e1fe36f5334007160327ff5c140962ce94bd4657", "query": "// Equate the headers to find their intersection (the general type, with infer vars, // that may apply both impls). let Some(_equate_obligations) = let Some(equate_obligations) = equate_impl_headers(infcx, param_env, &impl1_header, &impl2_header) else { return false;", "positive_passages": [{"docid": "doc-en-rust-1f22b06224aecd994d3e19d133edc7f3e152837884d6a163ec0313c23c6e85a5", "text": " $DIR/generic_const_early_param.rs:4:20 | LL | struct DataWrapper<'static> { | ^^^^^^^ 'static is a reserved lifetime name error[E0261]: use of undeclared lifetime name `'a` --> $DIR/generic_const_early_param.rs:6:12 | LL | struct DataWrapper<'static> { | - help: consider introducing lifetime `'a` here: `'a,` LL | LL | data: &'a [u8; Self::SIZE], | ^^ undeclared lifetime error[E0261]: use of undeclared lifetime name `'a` --> $DIR/generic_const_early_param.rs:11:18 | LL | impl DataWrapper<'a> { | - ^^ undeclared lifetime | | | help: consider introducing lifetime `'a` here: `<'a>` warning: the feature `generic_const_exprs` is incomplete and may not be safe to use and/or cause compiler crashes --> $DIR/generic_const_early_param.rs:1:12 | LL | #![feature(generic_const_exprs)] | ^^^^^^^^^^^^^^^^^^^ | = note: see issue #76560 for more information = note: `#[warn(incomplete_features)]` on by default error: lifetime may not live long enough --> $DIR/generic_const_early_param.rs:6:20 | LL | data: &'a [u8; Self::SIZE], | ^^^^^^^^^^ requires that `'_` must outlive `'static` 
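A minimal sketch of the `+ '_` fix discussed above. The original snippet's trait is elided, so `std::fmt::Debug` stands in, and the function body is invented for illustration:

```rust
use std::fmt::Debug;

// The problematic shape was roughly `Box::new(value) as Box<dyn Trait>`:
// boxing the reference makes the trait object borrow from `value`, while
// a bare `Box<dyn Trait>` return type implies `'static`.
//
// Fix: tie the trait object's lifetime to the borrow with `+ '_`.
fn as_debug<T: Debug>(value: &T) -> Box<dyn Debug + '_> {
    Box::new(value)
}

fn main() {
    let n = 42;
    let d = as_debug(&n);
    assert_eq!(format!("{:?}", d), "42");
    println!("{:?}", d);
}
```

Whether `T: 'static` is the better fix instead depends, as noted above, on how the lifetime flows through the other arguments.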
error: aborting due to 4 previous errors; 1 warning emitted Some errors have detailed explanations: E0261, E0262. For more information about an error, try `rustc --explain E0261`. ", "positive_passages": [{"docid": "doc-en-rust-8aab2eabc1565327b3acc84c5358e9c71ba3303e9f7d58582dfb5d458f7a557d", "text": " $DIR/fn-no-semicolon-issue-124935-semi-after-item.rs:5:1 | LL | ; | ^ help: remove this semicolon error: aborting due to 1 previous error ", "positive_passages": [{"docid": "doc-en-rust-6cf3c2bb3c08a38bafff3532db72ea1908c7190bb562a2249ecc7c70a784b1e0", "text": "Tested on nightly and stable. The second diagnostic is strange. $DIR/do-not-ice-on-field-access-of-err-type.rs:5:24 | LL | let array = [(); { loop {} }]; | ^^^^^^^ | = note: this lint makes sure the compiler doesn't get stuck due to infinite loops in const eval. If your compilation actually takes a long time, you can safely allow the lint. help: the constant being evaluated --> $DIR/do-not-ice-on-field-access-of-err-type.rs:5:22 | LL | let array = [(); { loop {} }]; | ^^^^^^^^^^^ = note: `#[deny(long_running_const_eval)]` on by default error: aborting due to 1 previous error ", "positive_passages": [{"docid": "doc-en-rust-af3d2230aad15cc68acee8c540d3300b7de8b977332bd80ef9fd1fa98b3c3c7c", "text": " $DIR/missing-clone-for-suggestion.rs:17:7 | LL | fn f(x: *mut u8) { | - move occurs because `x` has type `*mut u8`, which does not implement the `Copy` trait LL | g(x); | - value moved here LL | g(x); | ^ value used here after move | note: consider changing this parameter type in function `g` to borrow instead if owning the value isn't necessary --> $DIR/missing-clone-for-suggestion.rs:13:12 | LL | fn g(x: T) {} | - ^ this parameter takes ownership of the value | | | in this function error: aborting due to previous error For more information about this error, try `rustc --explain E0382`. 
", "positive_passages": [{"docid": "doc-en-rust-60121fc379df37aebbc4066801024ecdb08b92b61e49f5acda21d72ec5531588", "text": " $DIR/overlap-marker-trait-with-underscore-lifetime.rs:6:6 | LL | impl Marker for &'_ () {} | ^^^^^^ | note: multiple `impl`s satisfying `&(): Marker` found --> $DIR/overlap-marker-trait-with-underscore-lifetime.rs:6:1 | LL | impl Marker for &'_ () {} | ^^^^^^^^^^^^^^^^^^^^^^ LL | impl Marker for &'_ () {} | ^^^^^^^^^^^^^^^^^^^^^^ error[E0283]: type annotations needed: cannot satisfy `&(): Marker` --> $DIR/overlap-marker-trait-with-underscore-lifetime.rs:7:6 | LL | impl Marker for &'_ () {} | ^^^^^^ | note: multiple `impl`s satisfying `&(): Marker` found --> $DIR/overlap-marker-trait-with-underscore-lifetime.rs:6:1 | LL | impl Marker for &'_ () {} | ^^^^^^^^^^^^^^^^^^^^^^ LL | impl Marker for &'_ () {} | ^^^^^^^^^^^^^^^^^^^^^^ error: aborting due to 2 previous errors For more information about this error, try `rustc --explain E0283`. ", "positive_passages": [{"docid": "doc-en-rust-eaf8aed72a7fb57ad8d28d7447bf226504be02fb120af8e4c8b07e9c7b0dfbfd", "text": "Not sure if we have a test for the following and rust #![feature(markertraitattr)] #[marker] trait Marker {} impl Marker for &' () {} impl Marker for &' () {} If not, we need to add them, since they would be important for special casing during canonicalization. $DIR/issue-52240.rs:9:27 | LL | if let (Some(Foo::Bar(ref mut val)), _) = (&arr.get(0), 0) { | ^^^^^^^^^^^ cannot mutably borrow field of immutable binding error: aborting due to previous error For more information about this error, try `rustc --explain E0596`. ", "positive_passages": [{"docid": "doc-en-rust-27abf7dffdd9fb3420783921a65ebbfb4054467a1c3c0f7240038bbba2872f0e", "text": "I expected just a compiler error . 
Compiler panicked: Meta\nThis is fixed in the beta version (1.30).", "commid": "rust_issue_54966", "tokennum": 24}], "negative_passages": []} {"query_id": "q-en-rust-261f1662f9dee88b99f7a1535713614f35c13feabf7ca5f08ecef6038fa2e283", "query": " error[E0596]: cannot borrow field of immutable binding as mutable --> $DIR/issue-52240.rs:9:27 | LL | if let (Some(Foo::Bar(ref mut val)), _) = (&arr.get(0), 0) { | ^^^^^^^^^^^ cannot mutably borrow field of immutable binding error: aborting due to previous error For more information about this error, try `rustc --explain E0596`. ", "positive_passages": [{"docid": "doc-en-rust-afa01b1e4a6f7a1fc98bc53710b50fb4a7badf9201d0c207379809b43c80a6cc", "text": "This lets you get a mutable reference to an immutable value under a specific case. Code: From what I can tell, the tuple and at least two depths of Enums are necessary, along with an on the right side. There's probably other variations, but that seems to work well enough. Expected output would be a compile error, instead output is This works on both stable and nightly\nwould fix this as well?\nFixed by . Off: I wonder why we can't ship nll yet. 
It would solve so many issues right now, because of the match ergonomics, sigh. Match ergonomics produces roughly 10% of all errors/unsoundness/ICEs atm (at least it feels like this)\nYes correctly fixed this issue.", "commid": "rust_issue_52240", "tokennum": 157}], "negative_passages": []} {"query_id": "q-en-rust-2656cc6b0172a9991ac736f35737855c5f22f478f343fb0201dcae568b53c764", "query": "/// Whether the `def_id` is an unstable const fn and what feature gate is necessary to enable it pub fn is_unstable_const_fn(self, def_id: DefId) -> Option { if self.is_constructor(def_id) { Some(sym::const_constructor) } else if self.is_const_fn_raw(def_id) { if self.is_const_fn_raw(def_id) { self.lookup_stability(def_id)?.const_stability } else { None", "positive_passages": [{"docid": "doc-en-rust-68be5724b3b1ef2d5077bd80f85030139d8aa3b845ccb821ef16c47a34f61577", "text": "The feature allows calling any expression of a tuple-like constructor in a : cc\nImplemented on 2019-06-07 by in which was reviewed by\nIs there anything else that needs to be done to resolve this issue?\nFCP and stabilization pull request.\nDo you think you have time to write a report (e.g. in the style of ) and amend the reference after?\nI propose that we stabilize . Tracking issue: Version target: 1.40 (2019-11-05 =beta, 2019-12-19 =stable). Tuple struct and tuple variant constructors are now considered to be constant functions. As such a call expression where the callee has a tuple struct or variant constructor \"function item\" type can be called : Consistency with other . This should also ensure that constructors implement traits and can be coerced to function pointers, if they are introduced. - Tests various syntactic forms, use in both and items, and constructors in both the current and extern crates. The case in should also get a test.\nThanks! Could you embed this report in a PR with the aforementioned test there as well? 
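The stabilized behavior in the report above can be sketched as follows; the `Wrap` type is invented for illustration, and this compiles on Rust 1.40+:

```rust
// Tuple struct (and tuple variant) constructors are const fns as of
// Rust 1.40, so their "function item" values can be called in const
// contexts, including inside a `const fn`.
#[derive(Debug, PartialEq)]
struct Wrap(u8);

const fn wrap(x: u8) -> Wrap {
    let ctor = Wrap; // the constructor as a function-item value
    ctor(x)          // calling it is allowed in a const fn
}

const A: Wrap = wrap(3);

fn main() {
    assert_eq!(A, Wrap(3));
    println!("{:?}", A);
}
```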
I'll FCP that PR.", "commid": "rust_issue_61456", "tokennum": 249}], "negative_passages": []} {"query_id": "q-en-rust-2656cc6b0172a9991ac736f35737855c5f22f478f343fb0201dcae568b53c764", "query": "/// Whether the `def_id` is an unstable const fn and what feature gate is necessary to enable it pub fn is_unstable_const_fn(self, def_id: DefId) -> Option { if self.is_constructor(def_id) { Some(sym::const_constructor) } else if self.is_const_fn_raw(def_id) { if self.is_const_fn_raw(def_id) { self.lookup_stability(def_id)?.const_stability } else { None", "positive_passages": [{"docid": "doc-en-rust-dda53e6e6669bb67565cbda4f5249392dec83d9adef3173487f24b0d81452468", "text": "When using to create an enum variant in Rust 1.37+, we get an error: results in: but if the enum is constructed explicitly, the compilation succeeds: Looking at , it looks like should be considered a with . Playground links for both variants: ,\nDuplicate of (will close in a bit). It seems like folks open this issue because on stable, the error message does not point to the tracking issue, e.g. you don't see: can we fix this?\nI think we should just stabilize it\nIs this diagnostics problem specific to this feature gate? I meant a more general fix to const-fn related gates.\nNot sure. 
I thought we're using the normal feature gate infra\nReopening this to make sure we add a test per", "commid": "rust_issue_64247", "tokennum": 162}], "negative_passages": []} {"query_id": "q-en-rust-2663072dc5b75ce734eb345eb1f0c0a795bc80cbd959edcf491948a6953cde5a", "query": " Subproject commit 56444a4545bd71430d64b86b8a71714cfdbe9f5d Subproject commit 8bed48a751562c1c396b361bb6940c677268e997 ", "positive_passages": [{"docid": "doc-en-rust-75effd16b0a447141390847a66f06387ebe0f56068dc3dab9b391f7497509e5a", "text": "When confusing a Fn-like struct with a regular struct in an pattern, the output leaves a lot to be desired: use std::time::{Instant, SystemTime}; use std::time::{Duration, Instant, SystemTime}; use time::OffsetDateTime; use tracing::trace;", "positive_passages": [{"docid": "doc-en-rust-c962be447d92386b66463298067162c69ff532b13ba8529ac102b5acff31bf86", "text": "Sometimes when things go really wrong the compiler emits a ton of error messages. This can actually take a while to render them all. And since somewhat recently, hitting Ctrl-C no longer interrupts that, I get a thousand error messages displayed after Ctrl-C. I suspect this may have to do with The Ctrl-C handler does nothing at all on the first Ctrl-C unless the compiler is in the middle of const-eval. Cc\nDo you have a reproducer? I have not run into this myself. The implementation should also ensure that the compiler exits promptly if you hit ctrl+c again. Does it?\nif stuff keeps going on ctrl+c, sometimes suspending the process (ctrl+z) and then killing the terminal does the trick somehow :sweat_smile:\nUnfortunately no, I encountered this during some rustc hacking where I ended up getting well over 1k error messages in the standard library build and that turned out to be hard to cancel. I thought I had tried that, but I am not entirely sure.\nIs it possible the shell is part of the problem? 
If, by the time you hit ctrl-c, it's working through a large amount of output it still has buffered then it might be a while before you see a response.\nIt's possible, sure -- but usually it reacts to Ctrl-C immediately even when there's tons of output.\nHave you tried reverting to check if that fixes it?\nNo. At the time I was busy getting a PR done and now I don't have a reproducing example any more.\nonly checks for ctrl-c during const eval, so if no const eval runs for an extended period of time, the ctrl-c will be silently ignored. This is not the only problem with that PR. It also caused , and shows the \"compilation was interrupted\" error after the prompt is already drawn by bash.\nis there a good way to make rustc loop endlessly without ever touching const eval? maybe via the trait solver? :thinking: it would help for debugging this.\nYeah, just take any issue (none are anyway). I can reproduce this issue with for example: nightly-2024-03-26 (before ): Exits after a single ^C nightly-2024-03-27 (after ): Requires ^C^C to exit bjorn3's analysis checks out.\ncool!", "commid": "rust_issue_124212", "tokennum": 514}], "negative_passages": []} {"query_id": "q-en-rust-26711a4be1b30e61729ade6f9c3747487e0bc67e71e9d77952633ac70ac946bb", "query": "use std::str; use std::sync::atomic::{AtomicBool, Ordering}; use std::sync::{Arc, OnceLock}; use std::time::{Instant, SystemTime}; use std::time::{Duration, Instant, SystemTime}; use time::OffsetDateTime; use tracing::trace;", "positive_passages": [{"docid": "doc-en-rust-0deeca6e96f0956156d33fe5020552fadd682a536e7fc2c17a0d2caf9ddc8b2a", "text": "I wasn't sure how much of the compiler's code flowed through const-eval, is why I asked.\nAnother case where this is observed: IMO we should revert or at least make the signal handler run after a short timeout.\nI'll post a PR that adds a timeout later today, but by all means if you feel strongly just put up a PR. 
The initial handler here was just through the normal PR process, there's no reason to be more cautious on reverting or modifying it.", "commid": "rust_issue_124212", "tokennum": 110}], "negative_passages": []} {"query_id": "q-en-rust-2678476562124cd1c9742b18ce499ad1c251bb16a880f03ccac01105139d18c3", "query": " error[E0277]: the trait bound `Bar: Foo` is not satisfied --> $DIR/issue-64855.rs:5:19 | LL | pub struct Bar(::Type) where Self: ; | ^^^^^^^^^^^^^^^^^^^ the trait `Foo` is not implemented for `Bar` error: aborting due to previous error For more information about this error, try `rustc --explain E0277`. ", "positive_passages": [{"docid": "doc-en-rust-7985c267ad2f3f852db605f6024dae811735b7f2e909df00229c2dacf6c3338a", "text": "Internal compiler error when missing bound on struct. Reproducible on stable/beta/nightly on . Also, not sure if it already has it's own issue, but is causing an illegal instruction error when you run it with on this code as well, which is also reproducible on playground if you turn the backtrace setting on. 
I tried this code: () I expected to see this happen: Instead, this happened: rustc versions (from playground): if infcx .type_implements_trait( tcx.lang_items().clone_trait().unwrap(), [tcx.erase_regions(ty)], self.param_env, ) .must_apply_modulo_regions() if let Some(clone_trait_def) = tcx.lang_items().clone_trait() && infcx .type_implements_trait( clone_trait_def, [tcx.erase_regions(ty)], self.param_env, ) .must_apply_modulo_regions() { err.span_suggestion_verbose( span.shrink_to_hi(),", "positive_passages": [{"docid": "doc-en-rust-60121fc379df37aebbc4066801024ecdb08b92b61e49f5acda21d72ec5531588", "text": " $DIR/issue-66958-non-copy-infered-type-arg.rs:11:20 | LL | Self::partial(self.0); | ------ value moved here LL | Self::full(self); | ^^^^ value used here after partial move | = note: move occurs because `self.0` has type `S`, which does not implement the `Copy` trait error: aborting due to previous error For more information about this error, try `rustc --explain E0382`. ", "positive_passages": [{"docid": "doc-en-rust-592cb108c3b10b7169263c448aa003f01706e3f97845e748be93bac138b7415d", "text": "I encounter an ICE: with rustc 1.41.0-nightly ( 2019-11-29) running on x8664-apple-darwin when compiling The same code does not ICE when using nightly-2019-11-27-x8664-apple-darwin Sorry, I haven't taken the time to make a minimal reproduction example.\nCan you post the ICE message and backtrace?\nCertainly! Although it is not the best stacktrace I have ever seen. Backtrace:\nBacktrace:\nmodify labels: A-mir\nI think I have a clue here: First the code: The problem seems to be a partial move of together with async together with a generic type (if we replace by for example, rust will complain as well and no ICE). If we remove the rust will complain: modify labels: +A-async-await -E-needs-mcve +E-needs-bisection\nRegression in cc modify labels: -E-needs-bisection\ntriage: has PR. 
P-high, removing nomination.", "commid": "rust_issue_66958", "tokennum": 221}], "negative_passages": []} {"query_id": "q-en-rust-28d7680677b4913d5a5b5cc77cca63b7cd7e8e410f7309d03e9186168f25f188", "query": "{ let old_len = self.in_scope_lifetimes.len(); let lt_def_names = params.iter().filter_map(|param| match param.kind { GenericParamKind::Lifetime { .. } => Some(param.ident.modern()), GenericParamKind::Lifetime { .. } => Some(ParamName::Plain(param.ident.modern())), _ => None, }); self.in_scope_lifetimes.extend(lt_def_names);", "positive_passages": [{"docid": "doc-en-rust-cb3dbc0c517a588095eda62daa96bdfedbcaf49475de56306468c3ba5d5694f1", "text": "The diagnostics (\"use of undeclared lifetime name\") are bad, but this indeed must be an error because the inner is an illegal use of the outer rather than an implicitly defined fresh lifetime.\nHm, I don't think that's quite what I'd expect: previously using in that context would be fine because there is no shadowing so to speak, that is, this compiles:\nFrom Similarly, if a fn definition is nested inside another fn definition, it is an error to mention lifetimes from that outer definition (without binding them explicitly). This is again intended for future-proofing and clarity, and is an edge case. (In the second example there's an explicit definition that shadows the previous definition for following uses, in the first example there's no second definition, only use)\nI see that this is in the RFC, but I think I more or less disagree with the reasoning given. Disallowing the use of inband lifetimes when nesting function seems arbitrary and not entirely helpful; it also makes a mechanical change from \"old\" to \"new\" more difficult because shadowing is visible only based on indent levels.\nIMO it should be an error, with a helpful note like the one you get when writing . 
Let's not make in-band lifetimes any more confusing than they already are.\nThe distinction is that inband lifetime's selling point is that you don't need to declare lifetimes. If we say that, and then clarify it with \"except\" then I think the feature feels incomplete. Inband lifetimes confusion does not increase with permitting this, IMO.\nThe problem is if it's allowed, then has a different meaning than This seems unfortunate and confusing. Even unergonomic if I may say so. There's no way that the distinction between and won't be missed in a quick scan of the second one. But, like, the RFC process did decide that isn't important, so whatever.\nWell, somewhat, but not really. Nothing outside of imports/definitions inherits into the inner function, unlike the impl where there is a history and multiple things which inherit (, lifetimes, generics).", "commid": "rust_issue_52532", "tokennum": 461}], "negative_passages": []} {"query_id": "q-en-rust-28eeabc9014baed96333528c3b1b337441fbbc2953f7df968968f3e5255cd1ea", "query": " // compile-flags: -Cdebuginfo=2 // build-pass // Regression test for #87142 // This test needs the above flags and the \"lib\" crate type. #![feature(type_alias_impl_trait, generator_trait, generators)] #![crate_type = \"lib\"] use std::ops::Generator; pub trait GeneratorProviderAlt: Sized { type Gen: Generator<(), Return = (), Yield = ()>; fn start(ctx: Context) -> Self::Gen; } pub struct Context { pub link: Box, } impl GeneratorProviderAlt for () { type Gen = impl Generator<(), Return = (), Yield = ()>; fn start(ctx: Context) -> Self::Gen { move || { match ctx { _ => (), } yield (); } } } ", "positive_passages": [{"docid": "doc-en-rust-4142730d7ccb19d319f6d7ad01edfb94edb723fcbc117571ebf399fd6ff47095", "text": "I haven't gotten it to error without nightly features, so I believe it has to do with the generated enum for the Generator. 
: $DIR/issue-82361.rs:10:9 | LL | / if true { LL | | a | | - expected because of this LL | | } else { LL | | b | | ^ | | | | | expected `usize`, found `&usize` | | help: consider dereferencing the borrow: `*b` LL | | }; | |_____- `if` and `else` have incompatible types error[E0308]: `if` and `else` have incompatible types --> $DIR/issue-82361.rs:16:9 | LL | / if true { LL | | 1 | | - expected because of this LL | | } else { LL | | &1 | | -^ | | | | | expected integer, found `&{integer}` | | help: consider removing the `&` LL | | }; | |_____- `if` and `else` have incompatible types error[E0308]: `if` and `else` have incompatible types --> $DIR/issue-82361.rs:22:9 | LL | / if true { LL | | 1 | | - expected because of this LL | | } else { LL | | &mut 1 | | -----^ | | | | | expected integer, found `&mut {integer}` | | help: consider removing the `&mut` LL | | }; | |_____- `if` and `else` have incompatible types error: aborting due to 3 previous errors For more information about this error, try `rustc --explain E0308`. 
", "positive_passages": [{"docid": "doc-en-rust-bc991a8c63d81686bebf07f793f5c69e366692742b1f03c258d8aba8722870bd", "text": " $DIR/feature-gate-generic_associated_types.rs:24:5 --> $DIR/feature-gate-generic_associated_types.rs:25:5 | LL | type Pointer2 = Box; | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | = help: add #![feature(generic_associated_types)] to the crate attributes to enable error: aborting due to 4 previous errors error[E0658]: where clauses on associated types are unstable (see issue #44265) --> $DIR/feature-gate-generic_associated_types.rs:30:5 | LL | type Assoc where Self: Sized; | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | = help: add #![feature(generic_associated_types)] to the crate attributes to enable error: aborting due to 6 previous errors For more information about this error, try `rustc --explain E0658`.", "positive_passages": [{"docid": "doc-en-rust-d2696cc37cdbedb80da25c591152044f2e8391cf31101bcf1942427d4cba1626", "text": "clauses on associated types don't work yet so should be feature gated, presumably with . cc . Happens on versions = 1.24", "commid": "rust_issue_49365", "tokennum": 28}], "negative_passages": []} {"query_id": "q-en-rust-2a622572d79bdf05f401ff62913dcc951040af27371cfec9381fc6c42b233721", "query": "(cfg, _) => cfg.as_deref().cloned(), }; debug!(\"Portability {:?} - {:?} = {:?}\", item.cfg, parent.cfg, cfg); debug!(\"Portability name={:?} {:?} - {:?} = {:?}\", item.name, item.cfg, parent.cfg, cfg); if let Some(ref cfg) = cfg { tags += &tag_html(\"portability\", &cfg.render_long_plain(), &cfg.render_short_html()); }", "positive_passages": [{"docid": "doc-en-rust-50c7092c496556fd83fc85aaaf3beb00c9b3629ddec4e33ab2c2f0ea6decbb91", "text": "From Given the following Rust code with in-line annotations of the expected behaviour: On: does not seem to be able to generate feature requirement labels on types in private modules () that are publicly reexported (). 
The labels do not show up on the module overview nor page, but do propagate into items like functions for the given type. This renders the type without any feature requirement, neither in the module overview: ! Nor on the page: ! Note that the label is propagated onto (does not have an explicit ), at least understood that! It is supposed to render the feature requirement for like , both in the module overview above as on the page: ! It seems tricky to combine feature requirements on the and reexport. In the case above they are the same, but what if: The type is only publicly available when and are specified. Specifying is fine but makes the contents of unreachable through the current module, specifying only should result in a \"module not found\" error. CC\nThis also affects re-exports of entire crates. Unfortunately there the fix is not clear at all, since the \"parent\" crate is obviously not guaranteed to have the same features as the crate being re-exported.\nWithin the same crate as well; it's not about the features that are enabled (though it is relevant which are available) but about the combined restrictions that are placed on the definition and reexport. Simplest I can think of is a union of all features (but gets complicated quick when inversions and vs is used) and later perhaps simplifying based on feature relations (if implies there is no need to display as requirement, if either definition or reexport requires ). That's necessary to cleanly solve crate reexports, since you effectively want to ensure that the top-level crate feature implies feature and hide it. If the whole resolver is made smart enough (perhaps this already exists, I'm not too familiar with Cargo) it could even throw warnings when there exist feature combinations that would lead to a compiler error (ie. 
explicitly reexporting a symbol that is not available for a certain set of features).\nI wonder if this is just an ordering issue of the passes: The private modules have disappeared by the point where we pass all the data down so we don't see them.", "commid": "rust_issue_83428", "tokennum": 486}], "negative_passages": []} {"query_id": "q-en-rust-2a622572d79bdf05f401ff62913dcc951040af27371cfec9381fc6c42b233721", "query": "(cfg, _) => cfg.as_deref().cloned(), }; debug!(\"Portability {:?} - {:?} = {:?}\", item.cfg, parent.cfg, cfg); debug!(\"Portability name={:?} {:?} - {:?} = {:?}\", item.name, item.cfg, parent.cfg, cfg); if let Some(ref cfg) = cfg { tags += &tag_html(\"portability\", &cfg.render_long_plain(), &cfg.render_short_html()); }", "positive_passages": [{"docid": "doc-en-rust-0600c28941e9e62aa58fa4a6908002601086eeaf89e39eb247ef1766d3b4621f", "text": "Thanks for the suggestion - I moved before and (I've never built rustc/rustdoc from source, pleasantly surprised that only took just under 7min on a ThreadRipper :partyingface:) but it doesn't change anything to the generated output unfortunately :(\nI observe this on public items as well: Does not show the badge at re-exports.\nSo from the following code: is stripped and then inlined. The stripped version contains all the correct information but the inlined one doesn't. Checking what's the best course here: either cloning the stripped item or propagating when inlining.", "commid": "rust_issue_83428", "tokennum": 130}], "negative_passages": []} {"query_id": "q-en-rust-2a63094784530007cefba111037ca9149f0702042068e8aac7cbe158b808dd0e", "query": "14 | } | - immutable borrow ends here error: aborting due to 2 previous errors error: aborting due to previous error ", "positive_passages": [{"docid": "doc-en-rust-61273a09668607a4fc6bd295552f94cd92d081baebd97db8998d5806f4b4b95d", "text": "Minimal example: Error message (ran in playground): Tested on nightly as well ()\nRelated:\ntwice without checking the return value. 
Modify it to return and check where it is being called whether both calls have returned an (cancel one, emit the other, return ), whether only one was created (emit it, return ) or none has (return ).\nI'll fix this, thanks for the instruction!\nI would happily have a go at this, if that's ok with\nSure go for it!\nhave you had time to work on this? If not could I give it a go? :)\nI totally thought I was going to have time, but I had a crazy week at work this week. Go ahead and give it a go :) I'll find another bug to work on when I definitely have time\nIt looks like the PR to fix this was merged but has a few dangling threads. There seems to be some confusion about short-circuiting functionality and the necessity of the patch and also the error count still reads 2 rather than 1. Should a new issue be opened with those issues and this issue closed?", "commid": "rust_issue_42106", "tokennum": 232}], "negative_passages": []} {"query_id": "q-en-rust-2a9e6fd569d74dd5789a03965772ddb6893d19799f801a0e117d3aac55393bc4", "query": "--> $DIR/privacy-struct-ctor.rs:20:9 | LL | Z; | ^ constructor is not visible here due to private fields help: a tuple struct with a similar name exists | LL | S; | ^ help: possible better candidate is found in another module, you can import it into scope | LL | use m::n::Z; | | | | constructor is not visible here due to private fields | help: a tuple struct with a similar name exists: `S` error[E0423]: expected value, found struct `S` --> $DIR/privacy-struct-ctor.rs:33:5 | LL | S; | ^ constructor is not visible here due to private fields help: possible better candidate is found in another module, you can import it into scope | LL | use m::S; | error[E0423]: expected value, found struct `S2` --> $DIR/privacy-struct-ctor.rs:38:5", "positive_passages": [{"docid": "doc-en-rust-a807c1dab372d105219deafe7e918540761b044ffca4d9b7fabfa5c5e9c806db", "text": "For this code: You get the following error message: The suggestion may be excused to be not 100%
perfect, but here it suggests to add a use statement despite that same item being imported already. What I want is this exactly: that it removes stuff from the suggestion list that is already being imported. Note that changing the glob import to a non-glob one doesn't change anything.\ncc\nWhy is that even an error?\ntuple structs have private fields by default, you can't construct instances of B outside of the foo module.\noh ^^ maybe we should simply modify this error message then to say that the \"function\" is private?\nIt already says \"constructor is not visible here due to private fields\". That's okay, but maybe you might have meant another B from another module or something. However, the candidate offered is not any better as it's the same one you are attempting to use.\nReopening to track \"removal of suggestion\" work for pub tuple structs with private fields.", "commid": "rust_issue_42944", "tokennum": 214}], "negative_passages": []} {"query_id": "q-en-rust-2abb820698ea4907c22db32c025f3f67631b7a2300c0c0f28a2933978cdd0572", "query": " fn main() { // There shall be no suggestions here. In particular not `Ok`. let _ = \u8bfb\u6587; //~ ERROR cannot find value `\u8bfb\u6587` in this scope } ", "positive_passages": [{"docid": "doc-en-rust-6c4d4f3b496f5ae06ffc89ae4053ef8049e709be468f2006bc9bfacc001716c8", "text": "In I noticed this surprising spelling suggestion: To me and don't seem like they would be similar enough to meet the threshold for showing such a suggestion. Can we calibrate this better for short idents? For comparison, even doesn't assume you mean . rustc 1.45.0-nightly ( 2020-05-23) Mentioning who worked on suggestions most recently in .\nI debugged it a bit. is selected as the candidate because the edit distance is just 2 to . The code to fix is likely in or around . I tried some quick tricks but got ICEs and failing tests and gave up. 
One thing I didn't try that could perhaps work is to consider edit distance between characters of different alphabets/logogram sets as infinite, rather than 1, which is the case right now.\nThank you for investigating! Independent of what we do with different logogram sets (your suggestion sounds plausible), an edit distance of 2 for a string of length 2 should not meet the threshold for showing a suggestion, even within a single logogram set.\nI think edit distance of 1 for a string of length 1 is reasonable (a lot of existing UI tests relies on it), but maybe an edit distance of 2 for a string of length 2 is not reasonable indeed. As you point out, with regular chars, the edit distance is still 2 but is not suggested. So maybe there is a simpler fix to be made for than looking at what alphabets/logogram sets characters belong to.", "commid": "rust_issue_72553", "tokennum": 308}], "negative_passages": []} {"query_id": "q-en-rust-2abf58efc91734a969bba1da105ba52739074cadcfef0773077387d7162bd16e", "query": "/// You can also use `dbg!()` without a value to just print the /// file and line whenever it's reached. /// /// Finally, if you want to `dbg!(..)` multiple values, it will treat them as /// a tuple (and return it, too): /// /// ``` /// assert_eq!(dbg!(1usize, 2u32), (1, 2)); /// ``` /// /// However, a single argument with a trailing comma will still not be treated /// as a tuple, following the convention of ignoring trailing commas in macro /// invocations. 
You can use a 1-tuple directly if you need one: /// /// ``` /// assert_eq!(1, dbg!(1u32,)); // trailing comma ignored /// assert_eq!((1,), dbg!((1u32,))); // 1-tuple /// ``` /// /// [stderr]: https://en.wikipedia.org/wiki/Standard_streams#Standard_error_(stderr) /// [`debug!`]: https://docs.rs/log/*/log/macro.debug.html /// [`log`]: https://crates.io/crates/log", "positive_passages": [{"docid": "doc-en-rust-c790b0598ed9e1e7176eca868d0d82a2e761fe3a9f953b2276ebcabf7316b32b", "text": "It would be nice if the compiler suggested the intended form: as this seems like a reasonable mistake. This is specific to a particular macro, though, so I'm not sure how flexible we can be with diagnostics here.\nOr we could just make the macro accept ...", "commid": "rust_issue_59763", "tokennum": 57}], "negative_passages": []} {"query_id": "q-en-rust-2ae604c63c18fde744b8574f8af047341e8f65c5b567d47c1250bca7b29985f9", "query": "--set target.i686-unknown-linux-gnu.linker=clang --build=i686-unknown-linux-gnu --set llvm.ninja=false --set llvm.use-linker=lld --set rust.use-lld=true --set rust.jemalloc ENV SCRIPT python3 ../x.py dist --build $HOSTS --host $HOSTS --target $HOSTS ENV SCRIPT python2.7 ../x.py dist --build $HOSTS --host $HOSTS --target $HOSTS ENV CARGO_TARGET_I686_UNKNOWN_LINUX_GNU_LINKER=clang # This was added when we switched from gcc to clang. It's not clear why this is", "positive_passages": [{"docid": "doc-en-rust-149edd16fbcc394a726cd24ee1d332167d49d0f05cfb412bdbe8f4fe87708b2a", "text": "When I install the latest master toolchains via and try to build with them, linking fails: I did some bisection, and it looks like the problem started happening with Cc\nThe error is actually already mentioned in However, I am building Miri with cargo, so I don't know how I'd \"use LLD instead\".\nHm, we probably need to revert that PR. I did not realize we'd need lld downstream too, I thought that was only on our builder. cc\nYes, a revert seems appropriate. 
We're going to have to figure out where that's actually coming from. The only change I saw between 10 and 11 by grepping was the removal of llgo, but maybe there's some bad remnant.\nYeah, I thought this issue would be limited to the rustc build, but it makes sense that it can also affect project linking against rustc shared objects. I've looked a bit into what the root cause is, and what I have so far as that the symbol comes from and is present as a weak symbol both when compiling with clang-10 and clang-11. The only difference I see from is this: So apparently was treated as a symbol of the library, while now it's treated as a symbol of -- I'm not sure what that actually means, but presumably that refers to the compiler runtime libraries. My guess is that previously it just ended up being nulled out as a missing weak symbol, but now it's actually being found and linking fails because ... reasons. I'm not good at linkers :) Edit: shows these as:\ncan you prepare a revert PR?\nDone in . If nobody has an idea on how to avoid the linker error, two more options would be to comment out the symbol in our llvm-project fork, or to avoid upgrading clang altogether and just build both Python 2 and Python 3 for that image.\nThe issue is probably going to be in how exactly the objects that comprise are compiled, not linked. I suspect just ends up ignoring the malformed object or otherwise not reporting the issue, but the object and the relevant symbols ultimately end up in the anyway. Alas, reasons these obscure failures can occur are many \u2013 I've seen them occur from things as trivial as forgetting to . The question is: is there perhaps a way to disable ExecutionEngine entirely? 
Intuitively nothing we do should depend on that code.", "commid": "rust_issue_81554", "tokennum": 510}], "negative_passages": []} {"query_id": "q-en-rust-2ae604c63c18fde744b8574f8af047341e8f65c5b567d47c1250bca7b29985f9", "query": "--set target.i686-unknown-linux-gnu.linker=clang --build=i686-unknown-linux-gnu --set llvm.ninja=false --set llvm.use-linker=lld --set rust.use-lld=true --set rust.jemalloc ENV SCRIPT python3 ../x.py dist --build $HOSTS --host $HOSTS --target $HOSTS ENV SCRIPT python2.7 ../x.py dist --build $HOSTS --host $HOSTS --target $HOSTS ENV CARGO_TARGET_I686_UNKNOWN_LINUX_GNU_LINKER=clang # This was added when we switched from gcc to clang. It's not clear why this is", "positive_passages": [{"docid": "doc-en-rust-4ff3af9619d28dadd6dc9c4368ec6277d3cf2c1ae698d090dad0b152ac47f3b3", "text": "And if not, I'm comfortable patching out in our fork as we get closer to Feb 11 and continue having trouble pinpointing exact cause of this problem.\nWe normally don't link ExecutionEngine, but dist-x8664-linux in particular builds LLVM with ThinLTO, in which case we link as shared, instead of statically linking individual components into rustcllvm.\nIt seems to be possible to configure the components in using , but I'm not sure to what degree we can restrict those (not sure if this job also ships LLVM tools).\nSo, I think we could have used to exclude unnecessary parts from , as LLVM tools can still statically link the parts not in the dylib. But in the end, we can't use the option anyway due to\nLooks like I'm getting this locally when trying to bootstrap rustc (while building ), might happen since the llvm upgrade but I'm not 100% sure. Are there some workarounds I can put into my for this? I'm using lld to link. // edition: 2021 // https://github.com/rust-lang/rust/pull/111761#issuecomment-1557777314 macro_rules! 
m { () => { extern crate core as std; //~^ ERROR macro-expanded `extern crate` items cannot shadow names passed with `--extern` } } m!(); use std::mem; fn main() {} ", "positive_passages": [{"docid": "doc-en-rust-fb6c2b7293948418dacd066e2443df69f06462bb9ddabc0ee4f62acdb26dc3b5", "text": " $DIR/tainted-body-2.rs:9:5 | LL | missing; | ^^^^^^^ not found in this scope error: aborting due to 1 previous error For more information about this error, try `rustc --explain E0425`. ", "positive_passages": [{"docid": "doc-en-rust-d9788f9fb62c8de2f97f07962af96a35636fdc8a876f620741389e1c3bb30856", "text": " $DIR/issue-90213-expected-boxfuture-self-ice.rs:9:19 | LL | Self::foo(None) | ^^^^ expected struct `Box`, found enum `Option` | = note: expected struct `Box>` found enum `Option<_>` = note: for more on the distinction between the stack and the heap, read https://doc.rust-lang.org/book/ch15-01-box.html, https://doc.rust-lang.org/rust-by-example/std/box.html, and https://doc.rust-lang.org/std/boxed/index.html help: store this in the heap by calling `Box::new` | LL | Self::foo(Box::new(None)) | +++++++++ + error: aborting due to previous error For more information about this error, try `rustc --explain E0308`. ", "positive_passages": [{"docid": "doc-en-rust-22da7a749da9ea5de82010003a3d02065aadf76edc99be61c6ff0e59a3f73108", "text": " $DIR/issue-50814-2.rs:14:24 | LL | const BAR: usize = [5, 6, 7][T::BOO]; | ^^^^^^^^^^^^^^^^^ index out of bounds: the length is 3 but the index is 42 note: erroneous constant encountered --> $DIR/issue-50814-2.rs:18:6 | LL | & as Foo>::BAR | ^^^^^^^^^^^^^^^^^^^^^ note: the above error was encountered while instantiating `fn foo::<()>` --> $DIR/issue-50814-2.rs:30:22 | LL | println!(\"{:x}\", foo::<()>() as *const usize as usize); | ^^^^^^^^^^^ error: aborting due to previous error For more information about this error, try `rustc --explain E0080`. 
", "positive_passages": [{"docid": "doc-en-rust-4baf0e8d5127fe6f958b823add21bed6dcb30db386b7af7c0c25faafe018ab65", "text": "File: /tmp/icemaker/issue--2.rs auto-reduced (treereduce-rust): original: Version information ` Command: $DIR/unusual-rib-combinations.rs:29:21 | LL | struct Bar Foo<'a>)>; | ^^^^^^^^^^^^^^^^^^^^^^^^^ | = note: the only supported types are integers, `bool` and `char` = help: more complex types are supported with `#![feature(adt_const_params)]` error: aborting due to 9 previous errors Some errors have detailed explanations: E0106, E0214, E0308. Some errors have detailed explanations: E0106, E0214, E0308, E0771. For more information about an error, try `rustc --explain E0106`.", "positive_passages": [{"docid": "doc-en-rust-29afce09f42a612f76d7a6981c4b3b2da2fe69b05479108e0bd4cc862f184ca0", "text": "This is a fuzzed test case, found with and minimized with . I wasn't able to find any similar issues: : error[E0283]: type annotations needed: cannot resolve `_: A` --> $DIR/issue-63496.rs:4:21 | LL | const C: usize; | --------------- required by `A::C` LL | LL | fn f() -> ([u8; A::C], [u8; A::C]); | ^^^^ error[E0283]: type annotations needed: cannot resolve `_: A` --> $DIR/issue-63496.rs:4:33 | LL | const C: usize; | --------------- required by `A::C` LL | LL | fn f() -> ([u8; A::C], [u8; A::C]); | ^^^^ error: aborting due to 2 previous errors For more information about this error, try `rustc --explain E0283`. ", "positive_passages": [{"docid": "doc-en-rust-bad15165f9c17e24bf9acf94720de1600514f6cf5915295f602149400d545451", "text": "Binder(<[type error] as ToBytes) This only happens if there are two erroneous functions, if I take out it works fine. It seems like I was trying to do this wrong in the first place since the first function gives an error . Not sure what the right way to do this is. 
fn fold(self, init: Acc, f: F) -> Acc where F: FnMut(Acc, Self::Item) -> Acc; // This has the same safety requirements as `Iterator::__iterator_get_unchecked` unsafe fn get_unchecked(&mut self, idx: usize) -> ::Item where", "positive_passages": [{"docid": "doc-en-rust-7accc4af3c6adb97fe69cbac3ba58b19914fb38653e6ecbb38372fc5512eb31d", "text": " $DIR/issue-59134-1.rs:8:37 | LL | const CONST: Self::MyType = bogus.field; | ^^^^^ not found in this scope error: aborting due to previous error For more information about this error, try `rustc --explain E0425`. ", "positive_passages": [{"docid": "doc-en-rust-bcb716d01e0aa92e792c79ce9116a6404c964666a769ff595f0e8e001f55676e", "text": "When using this branch of bitflags: through a patch in a local directory rls crashes with the following errors:\ni had enabled dev overrides for dependencies in using: removing that out the output from rls is now the normal debug flags but still the same errors:\nThis is now causing widespread ICEs in the RLS. Nominating to get this fixed (maybe has an idea?). See for a simpler reproducer.\nI encountered this issue trying out which causes rustc to crash on when invoked by RLS. I can confirm it's the exact same backtrace as\nI ran into this trying out on Windows 10. RLS inside vscode ends up panicking with this backtrace: (note for repro: I had changed the default feature in tui-rs to be \"crossterm\" because the default feature \"termion\" doesn't compile on windows - but after the termion compile errors, rustc was giving me a similar panic anyway)\nJFYI I could work around this problem by adding to my\nI tried that with the tui-rs (it had ) but same panic. 
[Edit] you're right, I had missed the inside the version.\nThanks !\nRunning into the same issue when trying to build Clap\nSame issue in vscode on linuxmint and rust 1.34.1 when trying to build glib (dependency of gtk).\nThere is no need to list every crate which depends on , it only clutters the thread and makes less visible.\nHere's a small repro, extracted from what is doing: I'm struggling to get a repro without the compile error, though.\nThank you for such a small repro! I\u2019ll take a closer look tomorrow and see what may be causing that. On Tue, 7 May 2019 at 22:32, Sean Gillespie <:\nLooks like isn't written back for a resolution error? That still repros without , right?\nCorrect, it does. Further minimized repro:\nInteresting. I managed to reduce it to: with: Any other combination is okay, it has to be under under and it has to be combined with a field access (bare unknown identifier doesn't ICE). I'm only guessing but it seems that it tries to emplace def 'path' under a which seems to skip a def path segment?", "commid": "rust_issue_59134", "tokennum": 497}], "negative_passages": []} {"query_id": "q-en-rust-2f5bc0c189246f20c47a42d98bf8c96ad23df9f1512c62e19e48a684ee31e93c", "query": " error[E0425]: cannot find value `bogus` in this scope --> $DIR/issue-59134-1.rs:8:37 | LL | const CONST: Self::MyType = bogus.field; | ^^^^^ not found in this scope error: aborting due to previous error For more information about this error, try `rustc --explain E0425`. ", "positive_passages": [{"docid": "doc-en-rust-1560eee77cf6c7650928cb9ddd0f8e103b6279cb08a15618e7077839aa0fe6b3", "text": "I think this is confirmed by A, what I believe is, more accurate guess is that we somehow don't correctly nest appropriate typeck tables when visiting .\nOh, you need to have one table per body: look in HIR for fields and map all of those cases back to the AST. 
This makes a lot more sense now: so is not wrong, but is looking in the wrong place.\nWhat about this: Seems like we need to before walking the expression of the associated const?\nHm, probably! I imagined the problem is we don't nest it before visiting trait items (hence missing segment) but you may be right! Let's see :sweat_smile:\nWith the fix applied this still ICEs on (this time for type) which means we should nest tables for as well", "commid": "rust_issue_59134", "tokennum": 164}], "negative_passages": []} {"query_id": "q-en-rust-2f8986280a5719021b58695ab0120d53d203db96f907ebeb1dec3350131a32ae", "query": "} declare_lint! { /// The `function_item_references` lint detects function references that are /// formatted with [`fmt::Pointer`] or transmuted. /// /// [`fmt::Pointer`]: https://doc.rust-lang.org/std/fmt/trait.Pointer.html /// /// ### Example /// /// ```rust /// fn foo() { } /// /// fn main() { /// println!(\"{:p}\", &foo); /// } /// ``` /// /// {{produces}} /// /// ### Explanation /// /// Taking a reference to a function may be mistaken as a way to obtain a /// pointer to that function. This can give unexpected results when /// formatting the reference as a pointer or transmuting it. This lint is /// issued when function references are formatted as pointers, passed as /// arguments bound by [`fmt::Pointer`] or transmuted. pub FUNCTION_ITEM_REFERENCES, Warn, \"suggest casting to a function pointer when attempting to take references to function items\", } declare_lint! { /// The `uninhabited_static` lint detects uninhabited statics. /// /// ### Example", "positive_passages": [{"docid": "doc-en-rust-bebf3ac373245cb50b8b4c0e41523bb66c080f59a138926bab54cb74481b9855", "text": "I tried this code: I expected to see this happen: The compilation succeed or guide me to (1) cast the to with . (2) borrow the : . But in this case the addresses printed are the same (what?). Instead, this happened*: Cryptic error message: There is : . 
It took me minutes before I found the workaround. But it still surprised me. : $DIR/multiple-tail-expr-behind-cfg.rs:5:64 | LL | #[cfg(feature = \"validation\")] | ------------------------------ only `;` terminated statements or tail expressions are allowed after this attribute LL | [1, 2, 3].iter().map(|c| c.to_string()).collect::() | ^ expected `;` here LL | #[cfg(not(feature = \"validation\"))] | - unexpected token | help: add `;` here | LL | [1, 2, 3].iter().map(|c| c.to_string()).collect::(); | + help: alternatively, consider surrounding the expression with a block | LL | { [1, 2, 3].iter().map(|c| c.to_string()).collect::() } | + + help: it seems like you are trying to provide different expressions depending on `cfg`, consider using `if cfg!(..)` | LL ~ if cfg!(feature = \"validation\") { LL ~ [1, 2, 3].iter().map(|c| c.to_string()).collect::() LL ~ } else if cfg!(not(feature = \"validation\")) { LL ~ String::new() LL + } | error: expected `;`, found `#` --> $DIR/multiple-tail-expr-behind-cfg.rs:12:64 | LL | #[attr] | ------- only `;` terminated statements or tail expressions are allowed after this attribute LL | [1, 2, 3].iter().map(|c| c.to_string()).collect::() | ^ expected `;` here LL | #[attr] | - unexpected token | help: add `;` here | LL | [1, 2, 3].iter().map(|c| c.to_string()).collect::(); | + help: alternatively, consider surrounding the expression with a block | LL | { [1, 2, 3].iter().map(|c| c.to_string()).collect::() } | + + error: cannot find attribute `attr` in this scope --> $DIR/multiple-tail-expr-behind-cfg.rs:13:7 | LL | #[attr] | ^^^^ error: aborting due to 3 previous errors ", "positive_passages": [{"docid": "doc-en-rust-3361a3a7294dae817db30917d4ab6b69dff616118500b44d6f8b1f69eb3f2804", "text": "Error code: Metadata:\nWhat is the issue you are reporting here ?\nWell, I thought that compiler can understand that is related to the block of code, but it fails.\nCan you post a minimal reproducible example that demonstrates your issue? 
Or at least mention exactly what you'd expect to happen and why. Even code that is cfged out is still required to parse successfully.\nLet's show you what exactly problem I have. Full code: And I get this error:\nYou seem to already have posted this a month ago. What you're asking seems a rather big change to me, I'm not that familiar with how the compiler parser works but I believe it currently requires your program syntax to be correct before macros are expanded, which is not true for your example, so some fundamental changes would be needed.\nOh, damn, sorry. I totally forgot that I made a post. Well, if it's not a bug and parser works like this it is bad, anyway, exists alternative way to write like this. But for me my issue is a candidate to feature request or a bug. Anyway, I'm not a lang dev, so I don't know what scales of this defect (?).\nThe workaround for this is to wrap the expressions in a block:\nI solved this issue exactly as you did.\nAnother way to do this is using the macro and . I personally thinks it is more clear, and should optimize the same.\nWork is not the same, due to compile-time evaluating an expression that we pass in it and evaluates only it, but at the same time everything compiles what is inside of .", "commid": "rust_issue_106020", "tokennum": 344}], "negative_passages": []} {"query_id": "q-en-rust-3164e4b0dd778a958f190c7cf428137f4ea7d3ded37a691a4bf0f5c8e1535045", "query": " // Copyright 2014 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. 
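The `cfg!`-based workaround discussed in the conversation above can be sketched as follows. This is a minimal illustration, not the original reporter's code: the feature name "validation" is taken from the earlier example, and without that feature enabled the `else` branch is taken.

```rust
// Sketch of the `cfg!` workaround: instead of two cfg-gated tail
// expressions (which fail to parse), branch at runtime on `cfg!`,
// which always parses and is folded to a constant at compile time.
fn collect_items() -> String {
    if cfg!(feature = "validation") {
        [1, 2, 3].iter().map(|c| c.to_string()).collect::<String>()
    } else {
        String::new()
    }
}

fn main() {
    // Compiled without the feature, the else branch runs.
    println!("{:?}", collect_items());
}
```

Note that, as pointed out in the thread, `cfg!` is not identical to `#[cfg]`: both branches must still type-check and compile, whereas `#[cfg]` removes the disabled code before it is fully analyzed.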
#![deny(unused_variable)] fn main() { for _ in range(1i, 101) { let x = (); //~ ERROR: unused variable: `x` match () { a => {} //~ ERROR: unused variable: `a` } } } ", "positive_passages": [{"docid": "doc-en-rust-7a7ae495ffc87e24f18ee8967dbb91001d0a4b48fba5126b914fe62fb1c9ab2a", "text": "compiles without a peep, even though and are unused. Commenting out the gives the expected output:\nI think this an be closed\nThanks (fixed in )", "commid": "rust_issue_17999", "tokennum": 33}], "negative_passages": []} {"query_id": "q-en-rust-31743379056549de78800982bc85b35314576bbdff33beb7ec2dbc180b1a5fee", "query": "signature. Each type parameter must be explicitly declared, in an angle-bracket-enclosed, comma-separated list following the function name. ```{.ignore} fn iter(seq: &[T], f: F) where T: Copy, F: Fn(T) { for elt in seq { f(*elt); } } fn map(seq: &[T], f: F) -> Vec where T: Copy, U: Copy, F: Fn(T) -> U { let mut acc = vec![]; for elt in seq { acc.push(f(*elt)); } acc } ```rust,ignore // foo is generic over A and B fn foo(x: A, y: B) { ``` Inside the function signature and body, the name of the type parameter can be used as a type name. [Trait](#traits) bounds can be specified for type parameters to allow methods with that trait to be called on values of that type. This is specified using the `where` syntax, as in the above example. specified using the `where` syntax: ```rust,ignore fn foo(x: T) where T: Debug { ``` When a generic function is referenced, its type is instantiated based on the context of the reference. For example, calling the `iter` function defined", "positive_passages": [{"docid": "doc-en-rust-6282b24f5cf3463648b8a20bf5a3e7d492816f7eb5d365e7f79ac74b0b7fbcf8", "text": "In the in the section it is stated that (emphasis mine) But the example shows a type parameter on , which requires parenthesis-enclosed type parameters rather than angle-brackets. 
mentioned on IRC that this might be some special case to open the possibility to easier change the traits at a later time (if I understood correctly), but I found it confusing to see a specialization using parenthesis without ever having heard they existed.\nIsn't the text referring to Those angle brackets? \"following the function name\"\nYeah, I suspect the real confusion here is that the syntax with parens has not yet been introduced (nor is it explained anywhere, apparently), so it's the wrong example at this point in the document, and it's not clear to a first-time reader what is being referred to. It seems like there should be a sufficient example here that doesn't use closures, maybe one simple one with and , and one using traits over collections or iterators or something, to demonstrate that parameterized types can appear in the bounds, to make it easier to understand. That said, the reference is explicitly not intended to be pedagogical, and implicitly not yet expected to be complete.", "commid": "rust_issue_26320", "tokennum": 248}], "negative_passages": []} {"query_id": "q-en-rust-31a482838953ba040461eda0330f59950e6444eae9fa911cf4460fe468998f78", "query": "#![allow(unused_variables)]; //~ ERROR expected item, found `;` //~^ ERROR `main` function fn foo() {} //~^ ERROR `main` function ", "positive_passages": [{"docid": "doc-en-rust-6cf3c2bb3c08a38bafff3532db72ea1908c7190bb562a2249ecc7c70a784b1e0", "text": "Tested on nightly and stable. The second diagnostic is strange. 
$DIR/default-ty-closure.rs:3:20 | LL | struct X; | ^^^^ | = note: the only supported types are integers, `bool` and `char` error: aborting due to 1 previous error ", "positive_passages": [{"docid": "doc-en-rust-0b746ef7bb39b87baa1630231d41ddc7a554881a11dad9107726cf941a2c6908", "text": "File: auto-reduced (treereduce-rust): original: Version information ` Command:\n $DIR/issue-70167.rs:3:12 | LL | #![feature(const_generics)] | ^^^^^^^^^^^^^^ | = note: `#[warn(incomplete_features)]` on by default ", "positive_passages": [{"docid": "doc-en-rust-37acd1f3390bb5eae9106ee301264310bf2b12e6de290416e772a1545c55f554", "text": "results in an ICE (): $DIR/issue-66693.rs:13:5 | LL | panic!(&1); | ^^^^^^^^^^^ | = note: this error originates in a macro (in Nightly builds, run with -Z macro-backtrace for more info) error: argument to `panic!()` in a const context must have type `&str` --> $DIR/issue-66693.rs:6:15 | LL | const _: () = panic!(1); | ^^^^^^^^^ | = note: this error originates in a macro (in Nightly builds, run with -Z macro-backtrace for more info) error: argument to `panic!()` in a const context must have type `&str` --> $DIR/issue-66693.rs:9:19 | LL | static _FOO: () = panic!(true); | ^^^^^^^^^^^^ | = note: this error originates in a macro (in Nightly builds, run with -Z macro-backtrace for more info) error: aborting due to 3 previous errors ", "positive_passages": [{"docid": "doc-en-rust-7c73542d67b097e4fd1fee1922c673ac020986f94704300c10704cf32974803c", "text": "The panic payload of the macro (as passed on to ) is generic, but the CTFE panic support assumes it will always be an . This leads to ICEs: Cc\nI guess we could support this, but we'd have to find some way to render the errors. Maybe we need to wait for const traits so we can require const Debug?\nAlternatively constck could require the argument to have type .\nalso requires a\nYeah, the libcore panic machinery does not support non-format-string panics.\nAye, left out the latter of my message. 
is likely the most common argument to , and it's not unusual with e.g. ; so I agree that waiting for const traits before supporting any other arguments is reasonable.\nIm hitting this ICE on stable without feature flag:\nHm indeed, that is interesting: any idea why we go on here after seeing an error? Does this kind of behavior mean any part of CTFE is reachable on stable somehow?\nyes. We're not gating CTFE on whether there were stability (or any other static checks) failing. I've been abusing this since 1.0 to demo things on the stable compiler and never got around to fixing it. I don't even know if we have a tracking issue for it.\n:rofl: So what would be a possible approach here? Replace such consts by early enough that their MIR never actually gets evaluated? We might as well use this issue, IMO...\nThat would require us to mutate all MIR and potentially even HIR. Other possible solutions: poison the mir Body if const checks fail (by adding a field like or by just nuking the body by replacing it with ) and thus make allow const evaluation to check if the mir Body failed the checks and bail out before doing anything make the have an field similar to\nOn second thought, let's leave this issue for the ICE, and make a new one for \"we can trigger nightly-only const code on stable because we don't abort compilation early enough\":\nGiven that this ICE is pre-existing, as shown by , can we un-block ? 
I'd really like to see const_panic stabilized, and this seems like a minor issue to be blocking on.\nThat comment is an example of accidentally partially exposing an unstable feature on stable -- a separate bug tracked in I don't see how that's an argument for shipping that feature on stable when it is broken.", "commid": "rust_issue_66693", "tokennum": 515}], "negative_passages": []} {"query_id": "q-en-rust-364ea11b770e8305c0c53250604cdc00dcbd3caefe0f00551b50048ae9a3beff", "query": " error: argument to `panic!()` in a const context must have type `&str` --> $DIR/issue-66693.rs:13:5 | LL | panic!(&1); | ^^^^^^^^^^^ | = note: this error originates in a macro (in Nightly builds, run with -Z macro-backtrace for more info) error: argument to `panic!()` in a const context must have type `&str` --> $DIR/issue-66693.rs:6:15 | LL | const _: () = panic!(1); | ^^^^^^^^^ | = note: this error originates in a macro (in Nightly builds, run with -Z macro-backtrace for more info) error: argument to `panic!()` in a const context must have type `&str` --> $DIR/issue-66693.rs:9:19 | LL | static _FOO: () = panic!(true); | ^^^^^^^^^^^^ | = note: this error originates in a macro (in Nightly builds, run with -Z macro-backtrace for more info) error: aborting due to 3 previous errors ", "positive_passages": [{"docid": "doc-en-rust-21ce43edd15410cd7321881478b82e77d0a12e87f14b1a6a8e5e03c80674bc9f", "text": "In particular, applies to all unstable const features. Fixing this bug here is not even that hard I think: somewhere around here there needs to be a check that the argument has type . Re: , AFAIK that one is also blocked on figuring out how the diagnostics should look like, and a further concern (though maybe not blocking) is .\nI guess that might make this a non-issue, where it is proposed also would require as first argument.\nFor reference: , which was recently merged. The plan is to fix the behaviour of in the Rust 2021 edition. 
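A hedged sketch of the restriction under discussion: in a const context, `panic!` must receive a string message, so a `&str` literal is accepted while non-string payloads such as `panic!(1)` are the rejected (formerly ICE-ing) cases from this issue. The function name below is hypothetical.

```rust
// Minimal sketch: `panic!` with a string literal is accepted during
// const evaluation (stabilized with const_panic); `panic!(1)` or
// `panic!(&1)` would be rejected instead of reaching CTFE.
const fn checked(n: u32) -> u32 {
    if n > 10 {
        panic!("value out of range"); // &str payload: allowed
    }
    n
}

const OK: u32 = checked(5); // evaluates at compile time

fn main() {
    assert_eq!(OK, 5);
}
```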
Given that this ICE is listed as a blocker for , does this mean that stabilizing that will have to wait for the next edition?\nNo, because breaking changes are allowed to be made to unstable features without moving to a new edition; that's kinda the whole idea. The 2021 edition will effectively just extend the same restriction to all invocations, const context or not.\nI'd like to tackle this but I'm not very experienced with the CTFE code. I get that we need to check from the that we've matched on in this branch and we need to match on the enum, but: For the first question, I see and that we probably want to check that the type of the tuple and that there's variants of that are , but do we stop the iteration when we get one that is a type? Is the projection list always guaranteed to terminate in a variant of that has a ? And once we have an instance of , do we just check that there's any_ projection in the list that has a ?\nyou can use to get the type of the . Then you can test its to be .", "commid": "rust_issue_66693", "tokennum": 352}], "negative_passages": []} {"query_id": "q-en-rust-366aec0b367c74bc1466d97196e8ad65fb71bfe07fecb94b8fa0d700979176fc", "query": "//! [rust-discord]: https://discord.gg/rust-lang //! [array]: prim@array //! [slice]: prim@slice // To run std tests without x.py without ending up with two copies of std, Miri needs to be // able to \"empty\" this crate. See . // rustc itself never sets the feature, so this line has no effect there.", "positive_passages": [{"docid": "doc-en-rust-87fdb067d72819e74c98852b7f8328aabdeb7869b05778cdbbb90b13a5f4412a", "text": "All existing standard library documentation implicitly assumes that the APIs are being used between the start of a Rust and end of . For example does not document any indication that the function would panic. It does not need to document that, because the function cannot panic, as long as the call occurs within the duration of . 
However it's possible to observe a panic like this: (Related PR and discussion: ) In general using the standard library from an callback, or before through a static constructor, is UB: according to \"we can't really guarantee anything specific happens [...]; at least not in a cross-platform way.\" Is this worth calling out centrally as a caveat to all other documentation of the standard library? At the top level of the whole crate (it would perhaps be more prominent than it deserves), at the module level, or in the Reference? Certainly for , , , the expectation users need to have is that nothing in there will work outside of . Are there APIs it makes sense to carve out as being permissible outside of ? Stuff like , , , , etc. We'd maybe need to do research into how constructors and atexit are being used in the wild. For example the crate relies on , , and to be usable before : It seems obvious that those things should work but there isn't documentation which guarantees it. I assume that makes the crate technically unsound as written.\nIMHO, core types should work in whatever context but libstd is more dicey. It more explicitly has a runtime or three (e.g. Rust's, libc's and the OS's). So at a minimum anything in the modules , , , , , , , etc should be regarded with suspicion unless documented otherwise. In summary, my thoughts are: is fine depends on the allocator used assumes code is running in main (except for the and possibly stuff)\nCore types can still panic and panics do IO.\nHm... how much of an issue is that? Panics don't have to do I/O (e.g. if stderr is closed) so if platforms don't support that before or after main then it can be silently skipped, no?\nBut do we do that properly today? 
And it's not just IO but also panic hooks which in turn can rely on various other parts of std.\nI think it would be fine to document that e.g.", "commid": "rust_issue_110708", "tokennum": 514}], "negative_passages": []} {"query_id": "q-en-rust-366aec0b367c74bc1466d97196e8ad65fb71bfe07fecb94b8fa0d700979176fc", "query": "//! [rust-discord]: https://discord.gg/rust-lang //! [array]: prim@array //! [slice]: prim@slice // To run std tests without x.py without ending up with two copies of std, Miri needs to be // able to \"empty\" this crate. See . // rustc itself never sets the feature, so this line has no effect there.", "positive_passages": [{"docid": "doc-en-rust-269dc65c84419fe490875f7cb05dfdc14afe570942f2ab6c33612886355f811c", "text": "after main they may want to set an empty (or aborting) panic hook if panicking is possible. I really don't want to be in a situation where the user is in a worse place than if they'd just used .\nCould we turn panics into immediate aborts after main? That would also avoid the backtrace printing and symbolication.\nI looked through the standard library, and is the only function that has limitation on usage before/after main (in this case, this is only a problem during TLS destruction). This function is also called from , and , which can therefore panic when called during TLS destruction. and might be able to be reworked to avoid this, but this is fundamentally a limitation of since it requires a reference to the current to work. The stack guard used by the stack overflow handler also accesses the same TLS slot as , but since the stack bounds don't have a this could be moved to a separate to properly support stack overflow handling during TLS destruction.\nIf we're going to make a statement at all then imo we should start out with something more cautious than making solid guarantees since this would be a pretty wide-ranging promise. before/after main behavior is not tested. 
It is best-effort and if someone relies on it they should have their own tests core and alloc are expected to work because they don't touch OS APIs or global state caveat: user code in any hookable API like panic handlers, global allocators, OOM handler or any future hooks. Especially panics affect a lot of code. we might replace hooks after main? most existing parts of std, backtrace and other things interacting with the OS are expected to work but there may be platform-specific edge-cases and the behavior may change in the future due to changing implementation details or new features that depend more runtime state (e.g. a std threadpool or async runtime) then finally a list of known limitations\nProposed documentation based on my previous comment:", "commid": "rust_issue_110708", "tokennum": 423}], "negative_passages": []} {"query_id": "q-en-rust-368c87b7e7fec3e10f7bbf439ade70d98cd1b1fedd577bf58839e10701c4edf9", "query": " // Regression test: if we suggest replacing an `impl Trait` argument to an async // fn with a named type parameter in order to add bounds, the suggested function // signature should be well-formed. // // edition:2018 trait Foo { type Bar; fn bar(&self) -> Self::Bar; } async fn run(_: &(), foo: impl Foo) -> std::io::Result<()> { let bar = foo.bar(); assert_is_send(&bar); //~^ ERROR: `::Bar` cannot be sent between threads safely Ok(()) } // Test our handling of cases where there is a generic parameter list in the // source, but only synthetic generic parameters async fn run2< >(_: &(), foo: impl Foo) -> std::io::Result<()> { let bar = foo.bar(); assert_is_send(&bar); //~^ ERROR: `::Bar` cannot be sent between threads safely Ok(()) } fn assert_is_send(_: &T) {} fn main() {} ", "positive_passages": [{"docid": "doc-en-rust-bf02cff849637e34b780957988347f5e3bb12fc557fa33d07b85804f11218daa", "text": "(Edit: Actually, being an inherent fn doesn't matter. It happens with top-level fns too.) 
This prints the diagnostic: The suggestion in the last line is not valid syntax (). It also emits malformed syntax if the first parameter is a or parameter. The diagnostic becomes well-formed if any of the following is done: The first parameter is changed to instead ofThe fn is not an async fn. In either case, the diagnostic correctly says: Happens on both nightly: ... and stable:\nThis should hopefully be a fairly straightforward patch to the diagnostics code that generates the suggestion.", "commid": "rust_issue_79843", "tokennum": 132}], "negative_passages": []} {"query_id": "q-en-rust-36a37e6edf3f6e3bb91b8e47bf3a63cd1ecdb7caf5bf4cf16ef58594ea3b0b8f", "query": "} fn check_overalign_requests(mut allocator: T) { let size = 8; let align = 16; // greater than size let iterations = 100; unsafe { let pointers: Vec<_> = (0..iterations).map(|_| { allocator.alloc(Layout::from_size_align(size, align).unwrap()).unwrap() }).collect(); for &ptr in &pointers { assert_eq!((ptr.as_ptr() as usize) % align, 0, \"Got a pointer less aligned than requested\") } for &align in &[4, 8, 16, 32] { // less than and bigger than `MIN_ALIGN` for &size in &[align/2, align-1] { // size less than alignment let iterations = 128; unsafe { let pointers: Vec<_> = (0..iterations).map(|_| { allocator.alloc(Layout::from_size_align(size, align).unwrap()).unwrap() }).collect(); for &ptr in &pointers { assert_eq!((ptr.as_ptr() as usize) % align, 0, \"Got a pointer less aligned than requested\") } // Clean up for &ptr in &pointers { allocator.dealloc(ptr, Layout::from_size_align(size, align).unwrap()) // Clean up for &ptr in &pointers { allocator.dealloc(ptr, Layout::from_size_align(size, align).unwrap()) } } } } }", "positive_passages": [{"docid": "doc-en-rust-274e00961750a77a989d49939f2217b84fc9c477167ce26bfce8f7d99db482e1", "text": "The man page for says And yet Miri found libstd calling this function with an alignment of 4 on a 64bit-platform. This happens when size=2, align=4. 
The fact that size(mut allocator: T) { let size = 8; let align = 16; // greater than size let iterations = 100; unsafe { let pointers: Vec<_> = (0..iterations).map(|_| { allocator.alloc(Layout::from_size_align(size, align).unwrap()).unwrap() }).collect(); for &ptr in &pointers { assert_eq!((ptr.as_ptr() as usize) % align, 0, \"Got a pointer less aligned than requested\") } for &align in &[4, 8, 16, 32] { // less than and bigger than `MIN_ALIGN` for &size in &[align/2, align-1] { // size less than alignment let iterations = 128; unsafe { let pointers: Vec<_> = (0..iterations).map(|_| { allocator.alloc(Layout::from_size_align(size, align).unwrap()).unwrap() }).collect(); for &ptr in &pointers { assert_eq!((ptr.as_ptr() as usize) % align, 0, \"Got a pointer less aligned than requested\") } // Clean up for &ptr in &pointers { allocator.dealloc(ptr, Layout::from_size_align(size, align).unwrap()) // Clean up for &ptr in &pointers { allocator.dealloc(ptr, Layout::from_size_align(size, align).unwrap()) } } } } }", "positive_passages": [{"docid": "doc-en-rust-6a2014bc6c2e91397c8b0bb7757a0776b1ae02cef99d1d8f8794bbe08d857400", "text": "In practice, the fastest thing you can do, is calling malloc if , because if malloc will return an alignment of at least rounded to the previous power of two, which is always satisfied (remember that must be a power of two, so if its smaller than size, it is at least the previous one). will do this without having to branch on any user provided alignment argument. That's precisely what that code is doing. The else branch is calling a \"generic\" which should be able to handle any user-specified alignment. The implementation of based on : appears to have the bug you mention, and should probably call or similar. 
I suspect that the implementation of provided by jemalloc, which we use here: doesn't have this issue, because it probably just forwards the call to , which supports alignment smaller than .\nSo basically on the C side, one assumes that is not used with types that use the C equivalent of ? With that assumption, I agree it is enough for to guarantee an alignment of rounded down to the next power of two. (I think your post originally said \"up\", but now it says \"down\" so I guess we agree.) That's what I and our implementation indeed handles that case just fine. The bug in the way we call is fixed by Still, it would be nice not to be able to reason about \"in practice\", but get some commitment from jemalloc -- see And finally, this only concerns on Unixes. Does anyone know what the situation looks like with on Windows? libstd currently assumes even small allocations to be aligned there, and agrees with that (Ctrl-F \"on 64-bit platforms\").\nEDIT: C11 to C, but yes, malloc doesn't know anything about the alignment of a type. It still can be used, but the user needs to check whether the allocation is properly aligned. The docs are quite clear IMO: Anything more concrete than that will be target specific, and those details are outside jemalloc's control - if jemalloc wants to replace the system allocator in a target, it needs to comply with the ABI there. 
If it were to guarantee more, e.g., all allocations are 16byte aligned, it wouldn't work on platforms using the SysV i386 ABI.\nWell, with that statement from the jemalloc docs is wrong, isn't it?", "commid": "rust_issue_62251", "tokennum": 509}], "negative_passages": []} {"query_id": "q-en-rust-36a37e6edf3f6e3bb91b8e47bf3a63cd1ecdb7caf5bf4cf16ef58594ea3b0b8f", "query": "} fn check_overalign_requests(mut allocator: T) { let size = 8; let align = 16; // greater than size let iterations = 100; unsafe { let pointers: Vec<_> = (0..iterations).map(|_| { allocator.alloc(Layout::from_size_align(size, align).unwrap()).unwrap() }).collect(); for &ptr in &pointers { assert_eq!((ptr.as_ptr() as usize) % align, 0, \"Got a pointer less aligned than requested\") } for &align in &[4, 8, 16, 32] { // less than and bigger than `MIN_ALIGN` for &size in &[align/2, align-1] { // size less than alignment let iterations = 128; unsafe { let pointers: Vec<_> = (0..iterations).map(|_| { allocator.alloc(Layout::from_size_align(size, align).unwrap()).unwrap() }).collect(); for &ptr in &pointers { assert_eq!((ptr.as_ptr() as usize) % align, 0, \"Got a pointer less aligned than requested\") } // Clean up for &ptr in &pointers { allocator.dealloc(ptr, Layout::from_size_align(size, align).unwrap()) // Clean up for &ptr in &pointers { allocator.dealloc(ptr, Layout::from_size_align(size, align).unwrap()) } } } } }", "positive_passages": [{"docid": "doc-en-rust-195a207e9d5423ad73f3b0d3542db762d1d58e3153786ae0f5b30e3db5e02085", "text": "And for users to manually do the alignment, getting something more concrete would be useful. And also for Rust! Currently you are deducing a whole lot of things from a few words in the spec. I have zero confidence that this is the only way to read the spec. So something like: \"for we guarantee an alignment of ; for smaller sizes we guarantee an alignment of the size rounded down to the next power of two.\" would IMO be prudent to ask from the jemalloc devs. 
Or else we'll find ourselves with more bugs when someone interprets all this slightly differently.\nI don't know how useful would that be: malloc only needs to comply with the C standard. Parts of it are implementation-defined, and what those are defined to is often specified by the system ABI. That's the only thing the global allocator can rely on. On most platforms, the system allocator - which is the default global allocator for Rust programs - is not jemalloc - it is the one from glibc, musl, microsoft, apple, google, mozilla, etc. In some platforms, like FreeBSD, the system allocator is jemalloc, but even there we can't exploit that knowledge because users can just and now the global allocator is a different one. That's an use case we support. So even if jemalloc were to offer extra guarantees here, Rust's system allocator cannot make use of them. The only places were we can can exploit jemalloc's specific guarantees, is when using crates like jemallocator, to explicitly hardcode the allocator to a particular one. As in, if that allocator is not linked, you get a linker error or similar. But even there, it makes sense to stick to what C and the platform ABI guarantees, because jemalloc is free to break anything else in the next version.\nWell it would save us this entire discussion. If that doesn't demonstrate its usefulness I don't know what does.^^ I don't know what the \"system allocator\" is for Linux, but the one in glibc actually guarantees a 16-byte alignment on 64bit systems for all allocations. So by that standard jemalloc would just be incorrect.\nThe platform ABI guarantees that. 
What the malloc implementations guarantee is irrelevant.", "commid": "rust_issue_62251", "tokennum": 506}], "negative_passages": []} {"query_id": "q-en-rust-36a37e6edf3f6e3bb91b8e47bf3a63cd1ecdb7caf5bf4cf16ef58594ea3b0b8f", "query": "} fn check_overalign_requests(mut allocator: T) { let size = 8; let align = 16; // greater than size let iterations = 100; unsafe { let pointers: Vec<_> = (0..iterations).map(|_| { allocator.alloc(Layout::from_size_align(size, align).unwrap()).unwrap() }).collect(); for &ptr in &pointers { assert_eq!((ptr.as_ptr() as usize) % align, 0, \"Got a pointer less aligned than requested\") } for &align in &[4, 8, 16, 32] { // less than and bigger than `MIN_ALIGN` for &size in &[align/2, align-1] { // size less than alignment let iterations = 128; unsafe { let pointers: Vec<_> = (0..iterations).map(|_| { allocator.alloc(Layout::from_size_align(size, align).unwrap()).unwrap() }).collect(); for &ptr in &pointers { assert_eq!((ptr.as_ptr() as usize) % align, 0, \"Got a pointer less aligned than requested\") } // Clean up for &ptr in &pointers { allocator.dealloc(ptr, Layout::from_size_align(size, align).unwrap()) // Clean up for &ptr in &pointers { allocator.dealloc(ptr, Layout::from_size_align(size, align).unwrap()) } } } } }", "positive_passages": [{"docid": "doc-en-rust-42b2de4c9c2f2f1395d8e2119abdb03e165e405c04ffb252370c22d3d32bbb38", "text": "If glibc were to guarantee that all addresses are 32-byte aligned, there is no way in which we could exploit that information in the implementation of the system allocator, because the platform only requires 16-byte alignment, and it is valid to pick any allocator at run-time that satisfies that.\nSo there a document somewhere saying \"for x86-64 bit, 16-byte alignment is guaranteed\"? But that document contains an exception for small sizes, making s behavior legal?\nYes, there are documents containing the rules of what's legal, at least for Linux x8664. Those document allow an exception for small sizes. 
One API that uses those is , which requires the alignment to be at least , which is 8 on x86_64, and which allows you to allocate bytes, with a smaller alignment. I still have no idea what jemalloc has to do with any of this - your original question is about , and that has nothing to do with jemalloc. But yes, jemalloc exploits this behavior, to be able to satisfy 1 byte request, and return addresses that are 1 byte aligned. Otherwise each 1 byte request would consume 16 bytes of memory within a memory page.\nA document that could be relevant is the C standard. Unfortunately, as mentioned in the commit message for , it is rather tautological on this topic: The standard does not give any numeric value. Like many other aspects of C, it looks like the actual value of is implementation-defined.\nWhen using jemalloc, becomes jemalloc. That's how happened. Do you have a reference for something that concretely calls out an exception for small types? The C standard only does this very indirectly, as you showed.\nThat was a bug in our implementation of system that applies to pretty much all allocators, not only jemalloc. says (emphasis mine): Since the alignment of an object is a power of two that is always smaller than or equal to the object size, rounding down the alignment of an allocation to the previous power of two always produces an allocation properly aligned for every type that can fit it. That holds for all allocations, so there is no exception for that. That's the rule. 
Since that would mean that large allocations must be unreasonably aligned, the standard does provide an exception for allocations of size larger than than .", "commid": "rust_issue_62251", "tokennum": 493}], "negative_passages": []} {"query_id": "q-en-rust-36a37e6edf3f6e3bb91b8e47bf3a63cd1ecdb7caf5bf4cf16ef58594ea3b0b8f", "query": "} fn check_overalign_requests(mut allocator: T) { let size = 8; let align = 16; // greater than size let iterations = 100; unsafe { let pointers: Vec<_> = (0..iterations).map(|_| { allocator.alloc(Layout::from_size_align(size, align).unwrap()).unwrap() }).collect(); for &ptr in &pointers { assert_eq!((ptr.as_ptr() as usize) % align, 0, \"Got a pointer less aligned than requested\") } for &align in &[4, 8, 16, 32] { // less than and bigger than `MIN_ALIGN` for &size in &[align/2, align-1] { // size less than alignment let iterations = 128; unsafe { let pointers: Vec<_> = (0..iterations).map(|_| { allocator.alloc(Layout::from_size_align(size, align).unwrap()).unwrap() }).collect(); for &ptr in &pointers { assert_eq!((ptr.as_ptr() as usize) % align, 0, \"Got a pointer less aligned than requested\") } // Clean up for &ptr in &pointers { allocator.dealloc(ptr, Layout::from_size_align(size, align).unwrap()) // Clean up for &ptr in &pointers { allocator.dealloc(ptr, Layout::from_size_align(size, align).unwrap()) } } } } }", "positive_passages": [{"docid": "doc-en-rust-e05ac4321d48d2f8a21035e164ad9a84191f03cab40ec425dd16d385d3cf402f", "text": "These only need to be aligned at an boundary. If you believe this is incorrect, please show a counter-example for which this doesn't work.\nAFAIK it is a normal way of using to make it override the symbol? Don't we still do that in rustc, or when using \"jemallocator\"? I don't say it is incorrect, I say it is awfully indirect. It's like reading tea leaves. I am looking for a clear definite statement. That doesn't seem to exist though. 
:( There clearly is some kind of "non-continuity" where all allocations are aligned MIN_ALIGN except for small ones. Or vice versa, all allocations are aligned "size rounded down to power of 2" except for big ones. You are convincingly deriving that non-continuity from standard wording. I don't say you are wrong, but this feels like interpreting ancient scripture. A clear spec looks different. Also, an object of size 2 with alignment requirement 4 is "fundamental" according to your definition, and yet rounding down the size to the previous power of 2 does not give the right alignment. So your rule does have exceptions, namely types marked .\nCan you show such an object in C and/or Rust? It's probably the most common way of switching allocators. What does is it requires to be linked to your binary, allowing (and making use of) jemalloc-specific APIs. If you do this, when using directly (not ), you can obviously rely on assumptions that only hold for . But cannot do that because then it wouldn't work with other allocators.\nThe goal is not to rely on assumptions that only hold for . The goal is to make sure that we do not make assumptions that don't hold for . Because that's what we used to do, before got fixed. We were assuming something that was true for system but not for : that all allocations, no matter the size, are aligned. To make sure we don't do that again, I want to be sure that all assumptions we are making hold for . That's why I'd like jemalloc to spell out what we can assume.\nAh I forgot about the stride vs. size thing.
So such an object does not exist.", "commid": "rust_issue_62251", "tokennum": 481}], "negative_passages": []} {"query_id": "q-en-rust-36a37e6edf3f6e3bb91b8e47bf3a63cd1ecdb7caf5bf4cf16ef58594ea3b0b8f", "query": "} fn check_overalign_requests(mut allocator: T) { let size = 8; let align = 16; // greater than size let iterations = 100; unsafe { let pointers: Vec<_> = (0..iterations).map(|_| { allocator.alloc(Layout::from_size_align(size, align).unwrap()).unwrap() }).collect(); for &ptr in &pointers { assert_eq!((ptr.as_ptr() as usize) % align, 0, \"Got a pointer less aligned than requested\") } for &align in &[4, 8, 16, 32] { // less than and bigger than `MIN_ALIGN` for &size in &[align/2, align-1] { // size less than alignment let iterations = 128; unsafe { let pointers: Vec<_> = (0..iterations).map(|_| { allocator.alloc(Layout::from_size_align(size, align).unwrap()).unwrap() }).collect(); for &ptr in &pointers { assert_eq!((ptr.as_ptr() as usize) % align, 0, \"Got a pointer less aligned than requested\") } // Clean up for &ptr in &pointers { allocator.dealloc(ptr, Layout::from_size_align(size, align).unwrap()) // Clean up for &ptr in &pointers { allocator.dealloc(ptr, Layout::from_size_align(size, align).unwrap()) } } } } }", "positive_passages": [{"docid": "doc-en-rust-a835cfab690241d43fcf339b6b088eec02bd9664d4638636512f590be6c89bb5", "text": "But such a exists, and it is something we want to support in our allocator API (we even have a test for that). And we do have objects with align size, namely all ZST.\nYeah, I agree. The only cases for which the \"round to the previous power of two approach\" doesn't work is for ZSTs, and types (or allocations) with align size. Since those just cannot happen in C, the C standard doesn't mention them as exceptions. 
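The "round the size down to the previous power of two" rule debated in this thread can be sketched as a small helper. This is only an illustration of the rule as stated above, not a guarantee documented by any particular allocator; the MIN_ALIGN value of 16 is the typical x86_64 figure mentioned in the discussion, and the zero-size handling is an assumption (ZSTs fall outside the C rule entirely).

```rust
// Hypothetical sketch of the alignment rule from the thread: a C-style
// allocator serving a request of `size` bytes is only required to align
// it to min(MIN_ALIGN, size rounded down to a power of two).
fn guaranteed_align(size: usize, min_align: usize) -> usize {
    if size == 0 {
        return 1; // assumption: treat zero-sized requests as 1-aligned
    }
    // round `size` down to the previous power of two
    let prev_pow2 = 1usize << (usize::BITS - 1 - size.leading_zeros());
    prev_pow2.min(min_align)
}

fn main() {
    let min_align = 16; // typical MIN_ALIGN on x86_64
    assert_eq!(guaranteed_align(1, min_align), 1); // 1-byte requests may be 1-aligned
    assert_eq!(guaranteed_align(7, min_align), 4); // rounded down to 4
    assert_eq!(guaranteed_align(8, min_align), 8);
    assert_eq!(guaranteed_align(100, min_align), 16); // capped at MIN_ALIGN
}
```

Note how the 1-byte case matches the jemalloc behavior described in this thread: nothing in the rule forces a small allocation to be 16-aligned, which is exactly the assumption the old system-allocator code got wrong.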
That is, we can only call when that holds, and if it that doesn't hold, we need to call a different API that supports the align size case.", "commid": "rust_issue_62251", "tokennum": 137}], "negative_passages": []} {"query_id": "q-en-rust-36b5c1eca31deac9837bad351ef50069e950e85098e344c82edc9a50e577296a", "query": "}, x => { bug!(\"unexpected sort of node in type_of_def_id(): {:?}\", x); bug!(\"unexpected sort of node in type_of(): {:?}\", x); } } }", "positive_passages": [{"docid": "doc-en-rust-740e0124fe7b9fb360f4e9638fbe9ccc902825c50c3d2e1b17fb8758a8ca2f54", "text": " $DIR/sugg_with_positional_args_and_debug_fmt.rs:6:28 | LL | println!(\"hello {:?}\", world = \"world\"); | ---- ^^^^^ this named argument is referred to by position in formatting string | | | this formatting argument uses named argument `world` by position | note: the lint level is defined here --> $DIR/sugg_with_positional_args_and_debug_fmt.rs:3:9 | LL | #![deny(warnings)] | ^^^^^^^^ = note: `#[deny(named_arguments_used_positionally)]` implied by `#[deny(warnings)]` help: use the named argument by name to avoid ambiguity | LL | println!(\"hello {world:?}\", world = \"world\"); | +++++ error: aborting due to previous error ", "positive_passages": [{"docid": "doc-en-rust-eea14e77b1002ed77ffdaae22bcc44c632112140cc820f8261acd02eb4c0008d", "text": "The suggestion that rustc gives is incorrect and seems to not take into account what characters are in the format argument. I tried this code: I expected to see this happen: Instead, this happened: Using beta.2/nightly : $DIR/issue-24780.rs:5:23 | LL | fn foo() -> Vec> { | ^^ help: remove extra angle bracket | ^ expected one of `!`, `+`, `::`, `;`, `where`, or `{` error: aborting due to previous error", "positive_passages": [{"docid": "doc-en-rust-53b468da9a6f195e87239ffd5e42111fc27af1e44172af9e2e84695d7d539a57", "text": "Alternate title: \"Rust is literally C++\" As of nightly-2020-10-29, rustc fails to parse the following code. 
The previous nightly and all past stable releases since 1.0.0 are not affected. The relevant commit range is\nMentioning because \"Tweak invalid header and body parsing\" sounds extremely relevant.\nAssigning and removing as in the prioritization working group.\nsearched nightlies: from nightly-2020-10-27 to nightly-2020-10-29 regressed nightly: nightly-2020-10-29 searched commits: from to regressed commit: %s\", cf, buildpath); let p = run::program_output(\"rustc\", [\"--out-dir\", buildpath, \"--test\", cf]); let p = run::program_output(rustc_sysroot(), [\"--out-dir\", buildpath, \"--test\", cf]); if p.status != 0 { error(#fmt[\"rustc failed: %dn%sn%s\", p.status, p.err, p.out]); ret;", "positive_passages": [{"docid": "doc-en-rust-ded8174e54900f87470131eb9a7413cb6ed146d7d7347f97b8465da4913ff300", "text": "cargo is relying on the to find rustc, but if one runs cargo with , cargo will error out with the unhelpful: cargo should detect this and return a proper error message for this. A proper workaround for this is to make sure that has an absolute path to the binaries set, as in: It needs to be absolute because cargo does a chdir to the build directory, which would break a relative path.\nWe have a function, that cargo can use to determine the path to itself, then use to launch the corresponding rustc.", "commid": "rust_issue_1806", "tokennum": 114}], "negative_passages": []} {"query_id": "q-en-rust-37cae60f51de95d157dcf437ff2a876b39f7f6c7094e75099a0c017e5116f3bc", "query": "let buildpath = fs::connect(_path, \"/test\"); need_dir(buildpath); #debug(\"Testing: %s -> %s\", cf, buildpath); let p = run::program_output(\"rustc\", [\"--out-dir\", buildpath, \"--test\", cf]); let p = run::program_output(rustc_sysroot(), [\"--out-dir\", buildpath, \"--test\", cf]); if p.status != 0 { error(#fmt[\"rustc failed: %dn%sn%s\", p.status, p.err, p.out]); ret;", "positive_passages": [{"docid": "doc-en-rust-9b23404c624087e4b6f505e241c33a30618643f870354e7d6c1e0c1a714a2788", "text": 
"Instead of just saying the key can't be verified, explain why.\nYeah, it should also not fail! Missing gpg is a warning condition, not an error condition.", "commid": "rust_issue_1643", "tokennum": 36}], "negative_passages": []} {"query_id": "q-en-rust-37eb485f94a24a8ce4db4d25e6fae32bea017fab0f996340337aa38fbc0680d3", "query": "args.push(~\"-dynamiclib\"); args.push(~\"-Wl,-dylib\"); // FIXME (#9639): This needs to handle non-utf8 paths args.push(~\"-Wl,-install_name,@rpath/\" + out_filename.filename_str().unwrap()); if !sess.opts.no_rpath { args.push(~\"-Wl,-install_name,@rpath/\" + out_filename.filename_str().unwrap()); } } else { args.push(~\"-shared\") }", "positive_passages": [{"docid": "doc-en-rust-a950a660acb91d70b8b4626ffdafe37172eb2873a306de813dafa2ee4d8f9db9", "text": "Currently (something after 0.5) rustc (IIUIC) is using rpaths to point libs and binaries to the appropriate locations. This is problematic when rustc and the other libs/bins are package e.g. for Fedora. The rpath usage should be made optional - can be used instead - and LDLIBRARYPATH.\nWhy is this problematic for packaging? Is it because the rpath values end up with the wrong values? Each crate's rpath has several values, one of which is supposed to be the final installation location (controlled with ). Can we just provide more control over which locations are used during rpathing?\nIn general Fedora recommends to not use rpaths [0], because the paths are hardcoded in the binary. Fedora preferres to point the dynamic linker to the appropriate paths [1]. But yes, we are more flexible about internal libraries - which the case is here. So yes, it would also help us if we could controll the rpath path (which goes a bit into the direction if issue ) [0] [1]\nThe following rpath is used in e.g. 
the cargo binary: The rpath refers to rpm's buildroot, where the lib won't stay (it will be installed to ) Maybe the problem could be addressed if the rpath would respect - but this could lead to problems at build time. Maybe LD_LIBRARY_PATH can be used to point to the correct dirs at build time, and the rpath can be used to point to the correct dirs at runtime. Just my 2ct - but you are the expert :)\nThanks for those links. We do generate rpaths with the expectation that some of them will be invalidated by moving the binaries around. It's possible we could consider removing rpath, but it sure is nice to not think about dynamic loading when running binaries out of my build directory. We could easily add a flag to the configure script to turn off the rpath for all the rust-provided libraries, with the assumption that package managers would modify during install. If we did that I feel like we would still want the installed rustc to generate rpaths with user-built libraries - otherwise the experience would be different depending on the rust installation.
We could also just remove rpath completely - it doesn't work on windows anyway.", "commid": "rust_issue_5219", "tokennum": 512}], "negative_passages": []} {"query_id": "q-en-rust-37eb485f94a24a8ce4db4d25e6fae32bea017fab0f996340337aa38fbc0680d3", "query": "args.push(~\"-dynamiclib\"); args.push(~\"-Wl,-dylib\"); // FIXME (#9639): This needs to handle non-utf8 paths args.push(~\"-Wl,-install_name,@rpath/\" + out_filename.filename_str().unwrap()); if !sess.opts.no_rpath { args.push(~\"-Wl,-install_name,@rpath/\" + out_filename.filename_str().unwrap()); } } else { args.push(~\"-shared\") }", "positive_passages": [{"docid": "doc-en-rust-5151270ffdeea50f64b201e1c8168f621692c15bd4b7ae2d4388d3b0d3a4f6d7", "text": "Another thing to consider is that, if we just used rustpkg for all builds, and let rustpkg install all libraries at a known location, then we might be able to conveniently not use rpath while still doing the right thing at load time. Can user directories like be put into ?\nYes - I can fully understand how easy running rust in the build directory is when using rpaths :) For the build directory use case it should be possible to create wrappers for rustc and friends which use to point to the correct dirs. You mention thet some libs might reside in , could you explain the overall concept you are following with this? In general I'd expect libraries to appear in /lib, /usr/lib some and some private /usr/lib/myapp. There are also some xdg specs that specify and - but let usfirst see what you say about ~/.rustpkg. Oh, I don't think that ~/.rustpkg would be expanded correctly if put into\nJust to summarize my thoughts: If rpath is used, the rpath path beeing used should only use directories below (to point to non-standard dirs containing libs relevant for a rust component). The rpath path should not point to any build-time dir. 
To include libraries at buildtime or when using the binaries in the build directory I'd suggest using wrappers (isn't autotools doing it like this?) which set appropriately.\nAny follow up thoughts?\nOur package manager will by default not be installing libraries to a system location and will instead put them in the user's directory, . This doesn't seem out of the ordinary to me; consider cabal, etc. The behavior you recommend requires some deep changes to how Rust works, so need to be considered carefully. Does the current behavior prevent packaging in Fedora, and is there a minimal change we can make to unblock packaging? You'll probably be interested in this subject.\nThanks for the rustpkg explanation. Do I understand that rustpkg will create binaries and libs with rpaths pointing to the user's specific rustpkg dir ? My feeling is that this user specific rpaths should be configurable at build time. The use case is obviously when the binary or lib is going to be installed systemwide. The only rpath exception is the rpath pointing to the system wide package specific libdir (e.g. ).", "commid": "rust_issue_5219", "tokennum": 536}], "negative_passages": []} {"query_id": "q-en-rust-37eb485f94a24a8ce4db4d25e6fae32bea017fab0f996340337aa38fbc0680d3", "query": "args.push(~\"-dynamiclib\"); args.push(~\"-Wl,-dylib\"); // FIXME (#9639): This needs to handle non-utf8 paths args.push(~\"-Wl,-install_name,@rpath/\" + out_filename.filename_str().unwrap()); if !sess.opts.no_rpath { args.push(~\"-Wl,-install_name,@rpath/\" + out_filename.filename_str().unwrap()); } } else { args.push(~\"-shared\") }", "positive_passages": [{"docid": "doc-en-rust-7b543ce276287a523dcf1a37842437cfe9a628174cf6714aeeb2711fab0a1543", "text": "I think that I'll spend a bit more time with looking at and and might get a better feeling for the problem. Btw.: I noted that the libdir isn't configurabel. Fedora for example keeps x86_64 libs in (/usr)?/lib64, and i686 in (/usr)/lib. 
IIUIC this is currently not possible with rust as the libs will be installed to /usr/lib/arch-specific-path\nThe point about libdir is well taken and I updated . I am not entirely sure how rustpkg does its build now or how and intend for it to work, but based on the current language design I would expect that, yes, binaries installed to would have rpaths pointing there. Currently there are several rpaths configured for each linked library so that they hopefully fall back to something useful when the libraries are moved around during install or during the bootstrapping process, something like: the relative path to the library, the absolute path to where the library was at build time, and the system installation path. Under this scheme I would expect that would, when linking against libraries in , create rpaths to .\nYes, there are several rpaths used (seen in and ) and I'd say - even with looking at rust itself, without the packaging glasses on - that it should be configurable what rpaths are used. Maybe it should be an option of . As said, system wide binaries are the best example for binaries where you don't want too many rpaths - including no rpaths pointing to some user dir.\nPlease don't tag TJC :(\nSorry, TJC! Maybe it's time to give in and start working on Rust :) Seriously though, sorry. I'm sure it's a real PITA.\nmentioned that this is a security issue, which I didn't realize. Debian also says .\nBy the way, it's possible to strip this with but it would be much nicer to not require a third party tool.\nI generally wonder why you don't add a user-specific to to point to the user specific rustpkg dir instead of using rpath. That makes it much easier to cope with - and that would also be on par with (at least) the Fedora guidelines. And I suppose with Debian's too, as it's the common way. (See 3rd party java binaries.
They modify the to point to some hidden dir in the userdir.)\nWe used to.", "commid": "rust_issue_5219", "tokennum": 551}], "negative_passages": []} {"query_id": "q-en-rust-37eb485f94a24a8ce4db4d25e6fae32bea017fab0f996340337aa38fbc0680d3", "query": "args.push(~\"-dynamiclib\"); args.push(~\"-Wl,-dylib\"); // FIXME (#9639): This needs to handle non-utf8 paths args.push(~\"-Wl,-install_name,@rpath/\" + out_filename.filename_str().unwrap()); if !sess.opts.no_rpath { args.push(~\"-Wl,-install_name,@rpath/\" + out_filename.filename_str().unwrap()); } } else { args.push(~\"-shared\") }", "positive_passages": [{"docid": "doc-en-rust-bde3b6ae1e3e98b475a33f9a1a8a3307672213332add48ee4a6c55c0b0f17136", "text": "Users kept forgetting and/or getting annoyed having to adjust it when running from a build dir. Rpaths made it possible to run foo/bar/rustc no matter which rustc you chose.\nright. IIUIC - I always feel like a noob when it comes to autotools - autotools are generating wrappers for binaries which are created during a build process and move the original binaries to hidden directories. The wrappers are then modifying the environment (setting env vars etc) to achieve the same goal (so e.g. that rustc can be run from the dev directory without messing with the env vars yourself). I wonder if this could help.\nOkay, it's actually not autotools which creates these wrappers but libtool:\nSorry, I didn't mean to imply a disinterest in fixing this better, just noting how it came to be. Also strongly disinterested in using libtool. Would prefer going back to telling users to set env vars.\nHow about defaulting to static linking (when it works)? Dynamic linking is valuable, but mostly for global installs handled by a package manager. Of course, if we end up with a bunch of tools using that would be wasteful.
I think RPATH will just lead to confusion, and it's an easy security hole to leave open.\nI would be very happy to get static linking working well enough to make it a default in many such cases. But we ought to do the right thing when linking dynamically too. Is rpath-using- considered acceptable? I.e. is it just the absolute paths that are problematic?\nThe absolute paths are the ones that can easily be a security issue, so isn't that bad. I would still rather avoid it for a globally installed package but it's easy enough to strip for people who don't need it and won't be an issue if you don't know about it (like the absolute path could be). Using the rpath could be a flag, but the absolute path should never be there.\nNominating for milestone 2: backwards compatible. (The choice of whether one needs to put a setting in or use some configure flag for certain systems, etc, seems like a backwards compatibility issue. Though maybe I misunderstand and these are issues only for people developing itself, not for clients of ?) It also could well be that this is more appropriately assocated with milestone 1: well-defined.", "commid": "rust_issue_5219", "tokennum": 524}], "negative_passages": []} {"query_id": "q-en-rust-37eb485f94a24a8ce4db4d25e6fae32bea017fab0f996340337aa38fbc0680d3", "query": "args.push(~\"-dynamiclib\"); args.push(~\"-Wl,-dylib\"); // FIXME (#9639): This needs to handle non-utf8 paths args.push(~\"-Wl,-install_name,@rpath/\" + out_filename.filename_str().unwrap()); if !sess.opts.no_rpath { args.push(~\"-Wl,-install_name,@rpath/\" + out_filename.filename_str().unwrap()); } } else { args.push(~\"-shared\") }", "positive_passages": [{"docid": "doc-en-rust-e2ab7893880a27d6b67cf522b730616f8bf2b057dd45e0e57f81fe2174c90e4c", "text": "I suspect and have a more concrete notion about this.\nI don't have anything much to add to this despite it coming up for triage today. 
I think it has to be fixed still, I'm still happy to do it via whatever combination of static linking by default and use of is sufficient for packagers, and I haven't seen any clear movement on it in the meantime.\naccepted for backwards-compatible milestone\nI'm getting worried that we won't resolve this for 1.0. I agree this is important to fix. Here's my current suggestion: Modify the build system to once again set LD_LIBRARY_PATH/PATH to find the appropriate dynamic libraries. Add a flag to rustc to disable rpath. Add a configure flag to disable rpathing the rust build itself. Packagers can use this to do whatever they want. With this logic the rust package itself could avoid rpath where distros have their own way of dealing with library paths. The rustc installed by these distros would still be rpathing user's crates for the sake of convenience. Is this an acceptable compromise or do distros want us to completely disable rpath? I'm afraid that doing so would provide an inconsistent developer experience depending on how rustc is acquired; if we need to do that then we instead must find a way to completely eliminate rpath.\nIf we start to just statically link everything by default then rpath becomes much less important and we can just say it's up to users to figure it out if they opt into dynamic linking.\nI think we should completely remove absolute rpaths. It's okay to have $ORIGIN-based ones but it may not be a good default. I'm not sure what the answer is to that, but agree that static linking basically resolves this.\nIt doesn't appear that this is fully working. There are still rpaths in the binaries when configured with even though the flag for rustc does work. /usr/bin/rustc: RPATH=$ORIGIN/../lib:/build/rust-git/src/rust/x86_64-unknown-linux-gnu/stage1/lib/rustlib/x86_64-unknown-linux-gnu/lib:/usr/lib/rustlib/x86_64-unknown-linux-gnu/lib
I thought I double checked on linux that the configure flag worked, but I may have overlooked something.", "commid": "rust_issue_5219", "tokennum": 541}], "negative_passages": []} {"query_id": "q-en-rust-37f5cc81e02c80adbacbc0e41b4bbe46a961f46a02a453bb7b5be56c1a5c1ac3", "query": " // issue: rust-lang/rust#125081 fn main() { let _: &str = '\u03b2; //~^ ERROR expected `while`, `for`, `loop` or `{` after a label //~| ERROR mismatched types } ", "positive_passages": [{"docid": "doc-en-rust-205d33f2c4ca407a5eba587eb9d79d5b66428139a1f256eedc1fb0c137300290", "text": " $DIR/overlap-marker-trait.rs:27:17 --> $DIR/overlap-marker-trait.rs:28:17 | LL | is_marker::(); | ^^^^^^^^^^^^^^^^^ the trait `Marker` is not implemented for `NotDebugOrDisplay` | note: required by a bound in `is_marker` --> $DIR/overlap-marker-trait.rs:15:17 --> $DIR/overlap-marker-trait.rs:16:17 | LL | fn is_marker() { } | ^^^^^^ required by this bound in `is_marker`", "positive_passages": [{"docid": "doc-en-rust-eaf8aed72a7fb57ad8d28d7447bf226504be02fb120af8e4c8b07e9c7b0dfbfd", "text": "Not sure if we have a test for the following and rust #![feature(markertraitattr)] #[marker] trait Marker {} impl Marker for &' () {} impl Marker for &' () {} If not, we need to add them, since they would be important for special casing during canonicalization. tests/ui/expect_tool_lint_rfc_2383.rs:36:18 | LL | #[expect(invalid_nan_comparisons)] | ^^^^^^^^^^^^^^^^^^^^^^^ | = note: duplicate diagnostic emitted due to `-Z deduplicate-diagnostics=no` error: this lint expectation is unfulfilled --> tests/ui/expect_tool_lint_rfc_2383.rs:107:14 | LL | #[expect(clippy::almost_swapped)]", "positive_passages": [{"docid": "doc-en-rust-9da6e95c335f48d6b06b8a6cd003c89c70bc53bedd6bf2c5c7ac9bd6bf6d5f89", "text": "I tried this code: I expected to catch lint and suppress it, as it should. 
Instead, compiler issues a diagnostic both about unused value and unfulfilled lint expectation: It does work as expected though if is applied to function. I used the latest stable (Rust 1.81), but it is reproducible both on latest beta (2024-09-04 ) and latest nightly (2024-09-05 ). labels: +F-lintreasons, +A-diagnostics tests/ui/expect_tool_lint_rfc_2383.rs:36:18 | LL | #[expect(invalid_nan_comparisons)] | ^^^^^^^^^^^^^^^^^^^^^^^ | = note: duplicate diagnostic emitted due to `-Z deduplicate-diagnostics=no` error: this lint expectation is unfulfilled --> tests/ui/expect_tool_lint_rfc_2383.rs:107:14 | LL | #[expect(clippy::almost_swapped)]", "positive_passages": [{"docid": "doc-en-rust-79ae52ec28826e7fd5d0571036ba651615fc4170373c979f134af2d5a515bd5f", "text": "are present for more than one HIR node in the and if one node is the (direct or indirect) parent of the other node, then add the level only_ for parent node. This will take care of any lints on the child (since we always walk up the HIR hierachy as mentioned earlier) as well as the parent and it will not cause any warnings about missed expectations. I am new to this area so if someone can review this idea or suggest alternatives that will be great. since you seemed to have worked in this area recently, can you please weigh in?\nYour investigation is accurate, and the fix is simple: visit the statement node when building lint levels. Do you mind making such a pr ? I don't think we need any special adjustment for cases, duplicated attributes are already taken into account, but a test won't hurt.\nThanks . Yes, I'll put together a PR. My only concern is if all we do is add the level at , although it will work perfectly for , etc., in the case of we will end up with an unfulfilled expectation on the node which will appear to the user as a warning. But anyway let me have a go and see what happens.\nOpened draft PR that adds the lint on . 
It does suffer from the extra warning problem.", "commid": "rust_issue_130142", "tokennum": 273}], "negative_passages": []} {"query_id": "q-en-rust-42fdaf7f0ece255b53eea55539c80cf903e22531950554ae0795e62bc9bacc2f", "query": "fn clone(&self) -> Self { *self } } fn main() { let one = (Some(Packed((&(), 0))), true); fn sanity_check_size(one: T) { let two = [one, one]; let stride = (&two[1] as *const _ as usize) - (&two[0] as *const _ as usize); assert_eq!(stride, std::mem::size_of_val(&one)); } fn main() { // This can fail if rustc and LLVM disagree on the size of a type. // In this case, `Option>` was erronously not // marked as packed despite needing alignment `1` and containing // its `&()` discriminant, which has alignment larger than `1`. assert_eq!(stride, std::mem::size_of_val(&one)); sanity_check_size((Some(Packed((&(), 0))), true)); // In #46769, `Option<(Packed<&()>, bool)>` was found to have // pointer alignment, without actually being aligned in size. // E.g. on 64-bit platforms, it had alignment `8` but size `9`. sanity_check_size(Some((Packed(&()), true))); }", "positive_passages": [{"docid": "doc-en-rust-35378ae24f994e1e82d353e3a09b882a3441c1abf89a5104c23d1fb33de146fb", "text": " // compile-flags: --test --persist-doctests /../../ -Z unstable-options // failure-status: 101 // only-linux #![crate_name = \"foo\"] //! ```rust //! use foo::dummy; //! dummy(); //! ``` ", "positive_passages": [{"docid": "doc-en-rust-6ebc6bcbbe025216226a076c9bb7935562c895f3554d0f8ba1bd87c3a0a550a0", "text": "When setting the directory to persist doctests to if an UNC path is used on windows there's a segfault. The offending setting of persist-doctests: . cargo-tarpaulin sets this programatically using a path it gets for the target directory from cargo-metadata. The path from cargo-metadata is an UNC path and this then causes a segfault. 
I'm going to detect UNC paths in my code and remove the on windows, but I felt this warrants a bug report as someone else will hit it eventually. Also, I'm not sure if other parts of the compiler fall prey to this issue? Compiler version is: the latest nightly (1.64.0-nightly 2022-06-28) The stack trace: $DIR/missing-lt-outlives-in-rpitit-114274.rs:8:55 | LL | fn iter(&self) -> impl Iterator>; | ^^^^^^^^ undeclared lifetime | = note: for more information on higher-ranked polymorphism, visit https://doc.rust-lang.org/nomicon/hrtb.html help: consider making the bound lifetime-generic with a new `'missing` lifetime | LL | fn iter(&self) -> impl for<'missing> Iterator>; | +++++++++++++ help: consider introducing lifetime `'missing` here | LL | fn iter<'missing>(&self) -> impl Iterator>; | ++++++++++ help: consider introducing lifetime `'missing` here | LL | trait Iterable<'missing> { | ++++++++++ error: aborting due to previous error For more information about this error, try `rustc --explain E0261`. ", "positive_passages": [{"docid": "doc-en-rust-dafd2920ba43b58ad54539303c7552eb922492a8ce202284909e7cf099e4b1e6", "text": " $DIR/unconditional_panic_98444.rs:6:13 | LL | let _ = xs[7]; | ^^^^^ index out of bounds: the length is 5 but the index is 7 | = note: `#[deny(unconditional_panic)]` on by default error: aborting due to previous error ", "positive_passages": [{"docid": "doc-en-rust-909c854c9e85845042d7d8a745db55ef4e07f84f9161a0a54ccbd457f7aa3472", "text": " $DIR/unsized6.rs:17:12", "positive_passages": [{"docid": "doc-en-rust-09dba29fa8aa11509d6cd49c561b11d2548cab6627b306896c4b759f3fbc1ad4", "text": "I tried compiling this code: which resulted in the following output of rustc: So the compiler says So I tried adding a borrow and recompiling the code: which resulted in: so the compiler says and so I'm stuck in a loop. The compiler can't decide whether I should borrow or not. 
As a newbie to Rust, this is very frustrating and the opposite of helpful, it just confuses me further. Please fix this indecisiveness :) :\nFor reference, what you likely want to do is one of the following which means they cannot be the type of a local binding. You can give a binding a slice type (), because that's morally equivalent to a pointer (with sizing info), which has a compile time determined size, as all borrows are .\nCurrent output: After :", "commid": "rust_issue_72742", "tokennum": 172}], "negative_passages": []} {"query_id": "q-en-rust-46cc7ceeb9fe35208ab598bf94fb614c55d518d87bbf0d6b3ae98fa31b92a213", "query": "// The lifetime was defined on node that doesn't own a body, // which in practice can only mean a trait or an impl, that // is the parent of a method, and that is enforced below. assert_eq!(Some(param_owner_id), self.root_parent, \"free_scope: {:?} not recognized by the region scope tree for {:?} / {:?}\", param_owner, self.root_parent.map(|id| tcx.hir().local_def_id_from_hir_id(id)), self.root_body.map(|hir_id| DefId::local(hir_id.owner))); if Some(param_owner_id) != self.root_parent { tcx.sess.delay_span_bug( DUMMY_SP, &format!(\"free_scope: {:?} not recognized by the region scope tree for {:?} / {:?}\", param_owner, self.root_parent.map(|id| tcx.hir().local_def_id_from_hir_id(id)), self.root_body.map(|hir_id| DefId::local(hir_id.owner)))); } // The trait/impl lifetime is in scope for the method's body. self.root_body.unwrap().local_id", "positive_passages": [{"docid": "doc-en-rust-a307df6afd7fee0359c06b3ec49e368dd69f57ee370ac4b77c4fed4ee685c190", "text": "The following minimal example causes an internal compiler error. Error in question: rustc version: This error was encountered on stable, but I can confirm that it also occurs on the latest nightly ().\nHasn't the feature gate issue been fixed? :|\nI've removed the nomination and this to the tracking issue for GATs . The current impl is highly incomplete and ICEs are, sadly, expected. 
Also, I verified that a feature gate is indeed required.\ntriage: P-medium\nI don't know if I get you wrong, but the example above ICEs on stable, beta and nightly without feature gate\noops, looks like I accidentally left the feature gate in there when I tried it out, but it produces the same ICE when it is omitted as well.\nNo, niko edited your question. ping I think you got something wrong there. Can you please clarify that?\nOh, I guess I was indeed incorrect. I tried it on play but didn't notice that, after the \"correct\" error, it goes on to print an ICE.\nWhat about adding the I-nominated tag again?\nI'm not sure why we would nominate this for discussion. Yes, it is an ICE (and an ICE that occurs without the user adding a feature gate), but the message pretty clearly states that you're using an unstable feature. It would be great to fix it so that it issues the error diagnostic without ICEing, but I wouldn't re-prioritize that work above P-medium.\nassigning to self since it's a clear annoyance\ntriage: P-medium. (dupe of )\n(the remaining bug, when the feature gate is enabled, is tracked in issue )", "commid": "rust_issue_60654", "tokennum": 354}], "negative_passages": []} {"query_id": "q-en-rust-46fe4a73a4ec17281f7b78e178c7786650e0e1aa70cf092d023b05f403f16b9e", "query": "Value(int), Missing, } fn main() { let x = Value(5); let y = Missing; match x { Value(n) => println!(\"x is {}\", n), Missing => println!(\"x is missing!\"), } match y { Value(n) => println!(\"y is {}\", n), Missing => println!(\"y is missing!\"), } } ``` This enum represents an `int` that we may or may not have. In the `Missing`", "positive_passages": [{"docid": "doc-en-rust-e2ad4b52738af25b28d330b9db3e2ab4fbb26ca5cce4b94bd69a3534ceeaccc9", "text": "See: The keyword hasn't been introduced until that point but it's used in the code example which is confusing. 
I believe an if/else solution would be more explanatory, especially because the reader probably wants to see how to unpack the variant of . The same enum could then be used with the keyword in the beginning of the next chapter ().\nYou are correct. Nice catch!\nAre you ok with simply reordering them? Because introducing sum types without prior knowledge of pattern matching is a bit pointless (or at least misses the fact that they are just AWESOME for pattern matching). If you're ok with this I'll PR in a few hours :)\nMaybe, if it flows okay, yeah. I do want to put SOME kind of mention of it first.\nIt's probably easy to find out what is doing by simply looking at it. This part of the tutorial is just a bit confusing and should be fixed. However, my main concern here is that the reader is left clueless on how to solve this problem with if/else: In fact, I really don't know what the equivalent code without would look like. I've briefly tried to figure it out and gave up.\nWith an Option you could unwrap I guess. By the way why did you write a fake option enum? Because that's basically what is done with this, it might be better to give out the real name\nBecause we haven't talked about generics yet.\nNicely done. However, my other concern, how this example could be done without , remains unresolved. I'm certain that I'm not the only one who wants to know how it works with before continuing to read about . Should I open another issue for that matter?\nI'm honestly not sure that it is possible with an arbitrary enum... You can test for some arbitrary values but I guess that's it. But it's more an issue of sum types in general rather than a Rust issue (I can't find ways to do that in Haskell nor in Scala)\nAlright, I expected something like this as I was unable to figure out how to write it without . I don't have much experience with statically typed languages, therefore the concept was new to me. 
If you are right, could we mention in the tutorial that it is actually not possible to write an equivalent solution with ? Certainly, I can't be the only one who was a bit puzzled.", "commid": "rust_issue_18169", "tokennum": 520}], "negative_passages": []} {"query_id": "q-en-rust-46fe4a73a4ec17281f7b78e178c7786650e0e1aa70cf092d023b05f403f16b9e", "query": "Value(int), Missing, } fn main() { let x = Value(5); let y = Missing; match x { Value(n) => println!(\"x is {}\", n), Missing => println!(\"x is missing!\"), } match y { Value(n) => println!(\"y is {}\", n), Missing => println!(\"y is missing!\"), } } ``` This enum represents an `int` that we may or may not have. In the `Missing`", "positive_passages": [{"docid": "doc-en-rust-be6b0474da10c9d4a3d1c3d52a0445cc5bf1118265c9960d01ead7c6981581b8", "text": "Cheers.\nThanks a lot for the remark, it's exactly the kind of comment needed, if you see others don't hesitate to open new issues :) I'll try to add this tonight or tomorrow.", "commid": "rust_issue_18169", "tokennum": 44}], "negative_passages": []} {"query_id": "q-en-rust-4700e226a1862ee1002bd629256e1f41cdbc63f514f094e1d05094d83cd47009", "query": "span } = *self; hcx.hash_hir_item_like(attrs, |hcx| { let is_const = match *node { hir::TraitItemKind::Const(..) | hir::TraitItemKind::Type(..) => true, hir::TraitItemKind::Method(hir::MethodSig { constness, .. }, _) => { constness == hir::Constness::Const } }; hcx.hash_hir_item_like(attrs, is_const, |hcx| { name.hash_stable(hcx, hasher); attrs.hash_stable(hcx, hasher); generics.hash_stable(hcx, hasher);", "positive_passages": [{"docid": "doc-en-rust-77205d48863db45330465ccf4419b60ef659f197740f3bb7a65b2650285412b1", "text": "Found via rust-icci: https://travis- The commit history of the regex crate contains one changeset that only adds a comment which in turn leads to the body of a function being moved. This results in the update of the source location pointed to by a potential overflow panic message. 
This message is not updated properly in incremental mode: I have not found out yet why the change is not detected. HIR fingerprinting has special treatment of operations that can trigger overflows: There must be some kind of mismatch between when HIR fingerprinting thinks that an operation can panic and when we actually emit panic messages. This is another span-related papercut cc\nFull IR files for reference:", "commid": "rust_issue_45469", "tokennum": 150}], "negative_passages": []} {"query_id": "q-en-rust-470ea1065baa5d3bd3457efbf9d4f6b77a17eb046f847b528cd6fd1e2f47f919", "query": " error[E0277]: the `?` operator can only be applied to values that implement `std::ops::Try` --> $DIR/issue-72766.rs:14:5 | LL | SadGirl {}.call()?; | ^^^^^^^^^^^^^^^^^^ | | | the `?` operator cannot be applied to type `impl std::future::Future` | help: consider using `.await` here: `SadGirl {}.call().await?` | = help: the trait `std::ops::Try` is not implemented for `impl std::future::Future` = note: required by `std::ops::Try::into_result` error: aborting due to previous error For more information about this error, try `rustc --explain E0277`. 
", "positive_passages": [{"docid": "doc-en-rust-42d8b055d45d294a0a90315e21edde8f76f35ec5c44c40b7b02076d7b468fd4f", "text": " $DIR/invalid-const-arg-for-type-param.rs:6:23 | LL | let _: u32 = 5i32.try_into::<32>().unwrap(); | ^^^^^^^^------ help: remove these generics | | | expected 0 generic arguments | ^^^^^^^^ expected 0 generic arguments | note: associated function defined here, with 0 generic parameters --> $SRC_DIR/core/src/convert/mod.rs:LL:COL | LL | fn try_into(self) -> Result; | ^^^^^^^^ help: consider moving this generic argument to the `TryInto` trait, which takes up to 1 argument | LL | let _: u32 = TryInto::<32>::try_into(5i32).unwrap(); | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ help: remove these generics | LL - let _: u32 = 5i32.try_into::<32>().unwrap(); LL + let _: u32 = 5i32.try_into().unwrap(); | error[E0599]: no method named `f` found for struct `S` in the current scope --> $DIR/invalid-const-arg-for-type-param.rs:9:7", "positive_passages": [{"docid": "doc-en-rust-c6674e20ed90b7f60d5a3d2c25149d2f37743c5a187ef4fdd77b1627c50694d6", "text": " $DIR/closure-in-projection-issue-97405.rs:24:5 | LL | assert_static(opaque(async move { t; }).next()); | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | = help: consider adding an explicit lifetime bound `::Item: 'static`... = note: ...so that the type `::Item` will meet its required lifetime bounds error[E0310]: the associated type `::Item` may not live long enough --> $DIR/closure-in-projection-issue-97405.rs:26:5 | LL | assert_static(opaque(move || { t; }).next()); | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | = help: consider adding an explicit lifetime bound `::Item: 'static`... 
= note: ...so that the type `::Item` will meet its required lifetime bounds error[E0310]: the associated type `::Item` may not live long enough --> $DIR/closure-in-projection-issue-97405.rs:28:5 | LL | assert_static(opaque(opaque(async move { t; }).next()).next()); | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | = help: consider adding an explicit lifetime bound `::Item: 'static`... = note: ...so that the type `::Item` will meet its required lifetime bounds error: aborting due to 3 previous errors For more information about this error, try `rustc --explain E0310`. ", "positive_passages": [{"docid": "doc-en-rust-cc977dabed974d4bcbd1cb2e4f8886eb7e1f6cae7a3aa4a4ee14b49a9dba2970", "text": " $DIR/lex-bad-str-literal-as-char-3.rs:5:21 | LL | println!('hello world'); | ^^^^^ unknown prefix | = note: prefixed identifiers and literals are reserved since Rust 2021 help: if you meant to write a string literal, use double quotes | LL | println!(\"hello world\"); | ~ ~ error[E0762]: unterminated character literal --> $DIR/lex-bad-str-literal-as-char-3.rs:5:26 |", "positive_passages": [{"docid": "doc-en-rust-eff82a61886b25b0f1dd4cf2300b1ef5995dc7fc7eccfb7efad8700de136ebcb", "text": " $DIR/issue-41366.rs:10:5 | LL | (&|_|()) as &dyn for<'x> Fn(>::V); | ^^-----^ | | | | | found signature of `fn(_) -> _` | expected signature of `for<'x> fn(>::V) -> _` | = note: required for the cast to the object type `dyn for<'x> std::ops::Fn(>::V)` error[E0271]: type mismatch resolving `for<'x> <[closure@$DIR/issue-41366.rs:10:7: 10:12] as std::ops::FnOnce<(>::V,)>>::Output == ()` --> $DIR/issue-41366.rs:10:5 | LL | (&|_|()) as &dyn for<'x> Fn(>::V); | ^^^^^^^^ expected bound lifetime parameter 'x, found concrete lifetime | = note: required for the cast to the object type `dyn for<'x> std::ops::Fn(>::V)` error: aborting due to 2 previous errors For more information about this error, try `rustc --explain E0271`. 
", "positive_passages": [{"docid": "doc-en-rust-bad15165f9c17e24bf9acf94720de1600514f6cf5915295f602149400d545451", "text": "Binder(<[type error] as ToBytes) This only happens if there are two erroneous functions, if I take out it works fine. It seems like I was trying to do this wrong in the first place since the first function gives an error . Not sure what the right way to do this is. // Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. // @has issue_31899/index.html // @has - 'Make this line a bit longer.' // @!has - 'rust rust-example-rendered' // @!has - 'use ndarray::arr2' // @!has - 'prohibited' /// A tuple or fixed size array that can be used to index an array. /// Make this line a bit longer. /// /// ``` /// use ndarray::arr2; /// /// let mut a = arr2(&[[0, 1], [0, 0]]); /// a[[1, 1]] = 1; /// assert_eq!(a[[0, 1]], 1); /// assert_eq!(a[[1, 1]], 1); /// ``` /// /// **Note** the blanket implementation that's not visible in rustdoc: /// `impl NdIndex for D where D: Dimension { ... }` pub fn bar() {} /// Some line /// /// # prohibited pub fn foo() {} /// Some line /// /// 1. prohibited /// 2. bar pub fn baz() {} /// Some line /// /// - prohibited /// - bar pub fn qux() {} /// Some line /// /// * prohibited /// * bar pub fn quz() {} /// Some line /// /// > prohibited /// > bar pub fn qur() {} /// Some line /// /// prohibited /// ===== /// /// Second /// ------ pub fn qut() {} ", "positive_passages": [{"docid": "doc-en-rust-c8c31d0935bfad6453ca56a52ce12aeae4d8a710911d54dfc638be37cf0f3959", "text": "A documentation comment like the one cited verbatim below, produces a main page summary that includes the first line of the code in the example. 
This is very surprising since there's an empty line in between, and the code example inserts a large colored box in the overview page (the parent module). cc since it's probably related to PR /// A tuple or fixed size array that can be used to index an array. /// /// /// /// Note the blanket implementation that's not visible in rustdoc: /// !", "commid": "rust_issue_31899", "tokennum": 121}], "negative_passages": []} {"query_id": "q-en-rust-496d056cfae95062ed5ba639706d3252d31abc888706828a3c2254344cc4d447", "query": " error[E0277]: the trait bound `T: Clone` is not satisfied --> $DIR/global-cache-and-parallel-frontend.rs:15:17 | LL | #[derive(Clone, Eq)] | ^^ the trait `Clone` is not implemented for `T`, which is required by `Struct: PartialEq` | note: required for `Struct` to implement `PartialEq` --> $DIR/global-cache-and-parallel-frontend.rs:18:19 | LL | impl PartialEq for Struct | ----- ^^^^^^^^^^^^ ^^^^^^^^^ | | | unsatisfied trait bound introduced here note: required by a bound in `Eq` --> $SRC_DIR/core/src/cmp.rs:LL:COL = note: this error originates in the derive macro `Eq` (in Nightly builds, run with -Z macro-backtrace for more info) help: consider restricting type parameter `T` | LL | pub struct Struct(T); | +++++++++++++++++++ error: aborting due to 1 previous error For more information about this error, try `rustc --explain E0277`. 
", "positive_passages": [{"docid": "doc-en-rust-a4b53c3645bc265e33649832a59ad749f3726a8fed73b58b68318620ba8777f9", "text": " $DIR/global-cache-and-parallel-frontend.rs:15:17 | LL | #[derive(Clone, Eq)] | ^^ the trait `Clone` is not implemented for `T`, which is required by `Struct: PartialEq` | note: required for `Struct` to implement `PartialEq` --> $DIR/global-cache-and-parallel-frontend.rs:18:19 | LL | impl PartialEq for Struct | ----- ^^^^^^^^^^^^ ^^^^^^^^^ | | | unsatisfied trait bound introduced here note: required by a bound in `Eq` --> $SRC_DIR/core/src/cmp.rs:LL:COL = note: this error originates in the derive macro `Eq` (in Nightly builds, run with -Z macro-backtrace for more info) help: consider restricting type parameter `T` | LL | pub struct Struct(T); | +++++++++++++++++++ error: aborting due to 1 previous error For more information about this error, try `rustc --explain E0277`. ", "positive_passages": [{"docid": "doc-en-rust-91a511678b8bd37afda281060b509c61400313307c6b72d35c98ba6aba169bdd", "text": "Pretty much what the title says, I've narrowed it down to and today's nightly with the following: I've attached some backtraces.\nprobably duplicate of but made more prevalent by next solver\nIs it possible to provide a link to or a snippet of a reproducer? 
I know it probably happens to various crates, but it still would be nice to have something that reproduces.\nSee for an attached repro.\nThe repro case is basically the same as , but with other crates mentioned in the ICE logs from the issue comment.", "commid": "rust_issue_130088", "tokennum": 114}], "negative_passages": []} {"query_id": "q-en-rust-496d056cfae95062ed5ba639706d3252d31abc888706828a3c2254344cc4d447", "query": " error[E0277]: the trait bound `T: Clone` is not satisfied --> $DIR/global-cache-and-parallel-frontend.rs:15:17 | LL | #[derive(Clone, Eq)] | ^^ the trait `Clone` is not implemented for `T`, which is required by `Struct: PartialEq` | note: required for `Struct` to implement `PartialEq` --> $DIR/global-cache-and-parallel-frontend.rs:18:19 | LL | impl PartialEq for Struct | ----- ^^^^^^^^^^^^ ^^^^^^^^^ | | | unsatisfied trait bound introduced here note: required by a bound in `Eq` --> $SRC_DIR/core/src/cmp.rs:LL:COL = note: this error originates in the derive macro `Eq` (in Nightly builds, run with -Z macro-backtrace for more info) help: consider restricting type parameter `T` | LL | pub struct Struct(T); | +++++++++++++++++++ error: aborting due to 1 previous error For more information about this error, try `rustc --explain E0277`. ", "positive_passages": [{"docid": "doc-en-rust-32ff646f69c4d3591634de303f58948eacb7c8a704ef0a69b4d876dbb32ef910", "text": "Compile with latest Nightly got ICE like below: when compiling in code OS: x86_64-apple-darwin\ncc\nWyvern, could you provide some sort of reproducer? A link to some commit in some repository or better yet a smaller stripped down version of it?\nCan you provide an example which triggers this behavior?\nI'm not sure which commit/PR of lead to this crash, sometimes it compiled successfully, sometimes it not. Add this info to original post, hope to help investigation.\nare you running with greater than 1? 
The cache is broken when used with more than 1 thread (which we should fix)\nSorry, I wasn't clear. I meant a link to your project (the project you are trying to compile) if publicly accessible / not proprietary.\nI can hit plenty of these while fuzzing with -Zthreads so yeah it's probably that.\nYeah, I think was the culprit. Commenting out this option makes the crashes disappear.\nthe easiest fix to rustc is to just disable this assert when we're using more than 1 thread. I've only added it for debugging purposes to detect cases where we unnecessarily recompute cache entries\njust hit this while trying to bootstrap with -Zthreads :sweat_smile:\ncc as this also showed up in your ci test run ^^\nWhen I just disable this assert, all relevant UI tests passed with no other bugs or ICEs\nsample code that triggers this in around 5 of 6 cases with -Zthreads=16", "commid": "rust_issue_129112", "tokennum": 318}], "negative_passages": []} {"query_id": "q-en-rust-497e17c2cc8895bbd252087420562758e075dadf26208d283034f699c059322d", "query": "(\"thread_local\", Whitelisted, Gated(Stability::Unstable, \"thread_local\", \"`#[thread_local]` is an experimental feature, and does not currently handle destructors. There is no corresponding `#[task_local]` mapping to the task model\", not currently handle destructors.\", cfg_fn!(thread_local))), (\"rustc_on_unimplemented\", Normal, Gated(Stability::Unstable,", "positive_passages": [{"docid": "doc-en-rust-a286439ec637b6a7e4cb759b48a2616bb0696b1ec7771f286f775eb9bc9b74ea", "text": "I will like to fix this. What should be the better warning message here? 
Or I just need to remove the last sentence?\nIt seems fine to just remove the sentence.\nFixed by", "commid": "rust_issue_47755", "tokennum": 37}], "negative_passages": []} {"query_id": "q-en-rust-499f87e22e4c82bb4c26c1e60d63547bc36b63ff62ec3453415212e1fe18bcc6", "query": " trait Foo {} impl Foo for T {} fn main() { let array = [(); { loop {} }]; //~ ERROR constant evaluation is taking a long time let tup = (7,); let x: &dyn Foo = &tup.0; } ", "positive_passages": [{"docid": "doc-en-rust-af3d2230aad15cc68acee8c540d3300b7de8b977332bd80ef9fd1fa98b3c3c7c", "text": " $DIR/borrow-after-move.rs:39:24 | LL | let x = \"hello\".to_owned().into_boxed_str(); | - move occurs because `x` has type `std::boxed::Box`, which does not implement the `Copy` trait LL | x.foo(); | - value moved here LL | println!(\"{}\", &x); | ^^ value borrowed here after partial move | = note: move occurs because `*x` has type `str`, which does not implement the `Copy` trait | ^^ value borrowed here after move error: aborting due to 5 previous errors", "positive_passages": [{"docid": "doc-en-rust-86145dda3b6fd95b4efbdb679c46b9803ebc45ffea7ba9a1d53d1d81b4427672", "text": "The following code fails on stable: When compiled in release mode, the assert is optimized out by LLVM even though it checks the exact same condition as the other assert, as can be verified by testing in debug mode. Note that it's theoretically possible for the stack to be aligned correctly such that the bug is suppressed, but that's not likely. The 256's can be replaced with larger alignments if that happens. The bug is caused by the impl of copying to the stack with the default alignment of 16. ()\nand seem related\nThis is a soundness hole.\nSo... I guess the problem is that the that rustc emits for the unsized local doesn't use ? That's... 
kind of bad, because it doesn't look like even supports dynamically setting the alignment: If this is correct, the issue should be renamed to something like \"unsized locals do not respect alignment\".\nA way to fix this would be to change the ABI for dynamic trait calls that take by value to always pass by address. When the trait object is created is where the trampoline function would be generated, so there is no additional cost for static calls.\nThis can also be fixed by not allowing dynamically-sized local variables with an alignment more than some maximum value and having all dynamically-sized locals use that maximum alignment unless optimizations can prove that a smaller alignment suffices.\nMessing with the ABI wouldn't help with unsized locals in general (assuming is correct), since e.g. will also need an alloca with dynamic alignment in general. Given only a value, there is no static way to say anything about its alignment, so how would you distinguish whether calling a method on it is OK?\nCan we call and then align the pointer for the local ourselves as a workaround until we come up with a good solution?\nIn clang there's a function which might or might not be useful in this case.\nrequires a constant alignment. Non constant alignments are . This matches semantics of LLVM instruction, so it's not helping, as alignment of value in is unknown at compile time when moving from .\nI think that overallocating for unsized locals of unknown alignment may make sense. You can and then make use of to compute the offset. Sure, it increases stack requirements, but it seems acceptable for , and in most cases you end up with extra 7 bytes on stack or less. 
256 alignment seems like an edge case where wasting 255 bytes doesn't matter.", "commid": "rust_issue_68304", "tokennum": 530}], "negative_passages": []} {"query_id": "q-en-rust-4a7b048a54f6995f9dfc91bd475870f46b5416567d1866948593732c8907864b", "query": "error[E0382]: borrow of moved value: `x` --> $DIR/borrow-after-move.rs:39:24 | LL | let x = \"hello\".to_owned().into_boxed_str(); | - move occurs because `x` has type `std::boxed::Box`, which does not implement the `Copy` trait LL | x.foo(); | - value moved here LL | println!(\"{}\", &x); | ^^ value borrowed here after partial move | = note: move occurs because `*x` has type `str`, which does not implement the `Copy` trait | ^^ value borrowed here after move error: aborting due to 5 previous errors", "positive_passages": [{"docid": "doc-en-rust-35b067be423839cfae33dfa0b48475783f27ab1ed0904fd969f99c950d943054", "text": "if we keep setting as the default alignment we can skip the overallocating and offset dance for types with an alignment below that.\nMy idea was that rustc would prohibit coercing overaligned types to a dyn trait that takes by value. So:\nThe alignment is dynamically determined though, not sure if saving 15 bytes of stack space is worth the conditional branch?\nOh right. Yea that's probably not helpful.\nSince , this would be backwards incompatible (at latest) once trait object upcasting is finally implemented: and trait objects would have to have the same alignment restriction (because you can upcast to ), but today you can freely create such objects with any alignment.\nI was under the impression that calling didn't require dynamic , is that related to missing MIR optimizations, meaning we have unnecessary moves which end up duplicating the data on the stack using a dynamic ?\nCorrectness shouldn't rely on optimizations to fire...\nSure, I was just hoping we weren't relying on \"unsized locals\", only unsized parameters (i.e. ), for what we're exposing as stable. 
And I was hoping an optimization wouldn't need to be involved to get there. I wonder if it would even be possible to restrict \"unsized locals\" to the subset where we can handle it entirely through ABI, and not actually have dynamic s anywhere, since that would make me far more comfortable with 's implementation or even stabilization. Ironically, if that works, it's also the subset we could've implemented years earlier than we did, since most of the complexity, unknowns (and apparently bugs) are around dynamic s, not calling trait object methods on a dereference of .\nThat should be possible. I think everything that is necessary is performing the deref of the box as argument () instead of moving the box contents to a local first () That would also make it possible to call in cg_clif. Cranelift doesn't have alloca support, so the current MIR is untranslatable.\nHmm, but that's not trivially sound because it could lead to MIR calls where argument and return places overlap, which is UB. We'd potentially have to special-case it to derefs, but even then what's to stop someone from doing ? Maybe we could introduce a temporary for the return value instead, for these kinds of virtual calls? 
That should remove any potential overlap while allowing the call.", "commid": "rust_issue_68304", "tokennum": 517}], "negative_passages": []} {"query_id": "q-en-rust-4a7b048a54f6995f9dfc91bd475870f46b5416567d1866948593732c8907864b", "query": "error[E0382]: borrow of moved value: `x` --> $DIR/borrow-after-move.rs:39:24 | LL | let x = \"hello\".to_owned().into_boxed_str(); | - move occurs because `x` has type `std::boxed::Box`, which does not implement the `Copy` trait LL | x.foo(); | - value moved here LL | println!(\"{}\", &x); | ^^ value borrowed here after partial move | = note: move occurs because `*x` has type `str`, which does not implement the `Copy` trait | ^^ value borrowed here after move error: aborting due to 5 previous errors", "positive_passages": [{"docid": "doc-en-rust-e0557b2e15a4d389c0583e27ec38e510a3d214eb85386293065b15f95dc546f5", "text": "(Unrelatedly, I just realize we use a copying-out shim for vtable entries that take by value, even when would be passed as at the call ABI level. We would probably have to make computing the call ABI a query for this microopt to not be expensive)\nThat would result in being moved to a temp first before dropping and calling the boxed closure. If it wasn't moved first before the drop of , then borrowck would give an error. (I meant for this change to be done at mir construction level, not as an optimization pass, so borrowck should catch any soundness issues)\nWe can try it, I guess, but right now a lot relies on many redundant copies/moves being created by MIR building and may not catch edge cases like this. But also I don't think moving out of a moves the itself today. 
Me too, but I was going with the \"correct by construction\" approach of creating a redundant copy/move, just on the return side, instead of the argument side, of the call.\nAfter revisiting on triage, removing nomination.\npre-triage: nominated for discussion in the next weekly meeting.\nI just checked and, a function call roughly this (touched up for clarity): Some of my previous comments assumed the wasn't there and the call would use the destination directly, but given that there is a move, that simplifies my suggestion.
$DIR/borrow-after-move.rs:39:24 | LL | let x = \"hello\".to_owned().into_boxed_str(); | - move occurs because `x` has type `std::boxed::Box`, which does not implement the `Copy` trait LL | x.foo(); | - value moved here LL | println!(\"{}\", &x); | ^^ value borrowed here after partial move | = note: move occurs because `*x` has type `str`, which does not implement the `Copy` trait | ^^ value borrowed here after move error: aborting due to 5 previous errors", "positive_passages": [{"docid": "doc-en-rust-5beba8d62abc39b05f6e650dcce3335b6738bd1d349d26aea50dceae17630e1f", "text": "I'm in favor of this proposal, but I do think we want to be slightly careful to ensure we're not violating soundness. The main concern would be if there is some way to access or re-use the variable that was moved. I attempted to do so with this example () but it fails to compile: The error here arises because the closure would need to capture a to do anything, which prevents a later move. But maybe there is some kind of tricky example that can arise where we re-assign the value during argument evaluation?\ncompiles and does before , where is the result of .\nBorrowck should prevent that, right?\nborrowck may rely on MIR building creating extraneous copies, so we can't just assume it helps us. IMO we should limit it to functions taking unsized parameters and function calls passing such parameters or dereferences of es (any other pointer/reference doesn't support moving out of).\nWell, I think is right that borrowck would catch violations of soundness in some strict sense, but I guess what I meant is that we might be breaking backwards compatibility.\nThe example that feels pretty relevant. If I understood your proposal, it would no longer compile. 
If I recall (thinking back), I had a plan to handle this by save the to an alternate location in the actual call, we pass ownership of and then do a shallow free of the This still seems like it would work, though I'm not sure how easy/elegant it is to formulate. In a sense we'd be rewriting to .\nIsn't example dependent on doing a direct virtual call? My understanding is that you can't do this on stable with because the virtual call exists in one place, in libstd, and it doesn't do anything weird, so it would work with a restricted ruleset. EDIT: if it can work on stable, it's because calling 's will move the whole before the assignment is evaluated, which is fine, because the shallow drop is actually inside that , not its caller.\nI agree that this code doesn't compile on stable today, but I'm not sure if that's the point. I guess you're saying it's ok for it to stop compiling? I think what I would like is that our overall behavior is as consistent as possible between Sized and Unsized parameters. 
I think we could actually do a more general change that doesn't special case Sized vs Unsized or anything else.", "commid": "rust_issue_68304", "tokennum": 522}], "negative_passages": []} {"query_id": "q-en-rust-4a7b048a54f6995f9dfc91bd475870f46b5416567d1866948593732c8907864b", "query": "error[E0382]: borrow of moved value: `x` --> $DIR/borrow-after-move.rs:39:24 | LL | let x = \"hello\".to_owned().into_boxed_str(); | - move occurs because `x` has type `std::boxed::Box`, which does not implement the `Copy` trait LL | x.foo(); | - value moved here LL | println!(\"{}\", &x); | ^^ value borrowed here after partial move | = note: move occurs because `*x` has type `str`, which does not implement the `Copy` trait | ^^ value borrowed here after move error: aborting due to 5 previous errors", "positive_passages": [{"docid": "doc-en-rust-c0a55b5a9c88bb1e08f508b1fb679a6cf9e068e808080659d5f7ae2be7470592", "text": "I think I am basically proposing the same thing as earlier, but with a twist: Today, when building the parameters, we introduce a temporary for each parameter. So if we had we might do: What I am saying is that we could say: If the argument is moved (not copied), and the \"place\" has outer derefs (e.g., we are passing where is some place), then we assign the place to a temporary and we make the argument to the function be : I don't believe this is observable to end-users, except potentially for when the destructor on runs. I believe that it accepts all the same code we used to accept. It also has nothing specific to or types at all. Does that make sense? I'm very curious to know if anyone sees any problems with it. (One obvious thing: if we ever add , its design would have to accommodate this requirement, but I think that's fine, in fact the complications around getting moves right for precisely this case are one of the reasons we've held off on thus far.)\nCan you not reinitialize today? 
If you move the whole , it is indeed more conservative (and should let us skip unnecessary moves even in cases). But also, do you want to move the locals check from typeck to borrowck? I don't have an intuition for when your approach wouldn't create unsized MIR locals, from the perspective of typeck. However, this is only relevant if we want a stricter feature-gate. At the very least it seems like it would be easier to implement a change only to MIR building, and it should fix problems with for the time being, even without a separate feature-gate.\nHmm, that's a good point, I think do that. Sigh. I knew that seemed too easy. I remember we used to try and prevent that, but we never did it fully, and in the move to NLL we loosened those rules back up, so this compiles: Still, you could do my proposal for unsized types, though it would introduce an incongruity. UPDATE: Oh, wait, I remember now. For unsized types, reinitialization is not possible, because you can't do with an unsized rvalue (we wouldn't know how many bytes to move). So in fact there isn't really the same incongruity.", "commid": "rust_issue_68304", "tokennum": 510}], "negative_passages": []} {"query_id": "q-en-rust-4a7b048a54f6995f9dfc91bd475870f46b5416567d1866948593732c8907864b", "query": "error[E0382]: borrow of moved value: `x` --> $DIR/borrow-after-move.rs:39:24 | LL | let x = \"hello\".to_owned().into_boxed_str(); | - move occurs because `x` has type `std::boxed::Box`, which does not implement the `Copy` trait LL | x.foo(); | - value moved here LL | println!(\"{}\", &x); | ^^ value borrowed here after partial move | = note: move occurs because `*x` has type `str`, which does not implement the `Copy` trait | ^^ value borrowed here after move error: aborting due to 5 previous errors", "positive_passages": [{"docid": "doc-en-rust-8eda5e2da06426c5ca0d1e41f6002aa4939e49c4190b6fbed85bbea5acf8da8e", "text": "To be clear, I wasn't proposing changing anything apart from MIR building either, I don't think. 
IIUC, what you proposed is that would be compiled to: whereas under my proposal (limited to apply to unsized parameters, instead of moved parameters) would yield: I think I mis-stated the problem earlier. It's not so much that your proposal will introduce compilation errors, it's that I think it changes the semantics of code like this in what I see as a surprising way: I believe that this code would compile to the following under your proposal: which means that would be invoked with the new value of , right? Under my proposal, the code would still compile, but it would behave in what I see as the expected fashion. Specifically, it would generate:\nIsn't there a way to \"lock\" that value through the borrow-checker, so further accesses would be disallowed, until the call returns? I'd rather be stricter here.
$DIR/borrow-after-move.rs:39:24 | LL | let x = \"hello\".to_owned().into_boxed_str(); | - move occurs because `x` has type `std::boxed::Box`, which does not implement the `Copy` trait LL | x.foo(); | - value moved here LL | println!(\"{}\", &x); | ^^ value borrowed here after partial move | = note: move occurs because `*x` has type `str`, which does not implement the `Copy` trait | ^^ value borrowed here after move error: aborting due to 5 previous errors", "positive_passages": [{"docid": "doc-en-rust-27fd42f01a24ee7bd4fa9f1c0f7bf282c5af37d103c7e5a053a6ed0f28b8ecdb", "text": "In particular, the IR for under your proposal, if I understood, would be But the access to that (I believe) we wish to prevent takes place in the \"evaluate remaining arguments\" part of things. So I think we'd have to add some sort of \"pseudo-move\" that tells borrowck that should not be assignable from that point forward, right? Well, it depends on your perspective. It's true that, given the need to permit reinitialization, the check is now specific to unsized values -- i.e., MIR construction will be different depending on whether we know the type to be sized or not. That's unfortunate. However, I think we still maintain a degree of congruence, from an end-user perspective. In particular, users could not have reinitialized a anyhow. However, under my proposal, users can write things like and the code works as expected (versus either having an unexpected result or getting an error). 
In any case, it seems like we're sort of \"narrowing in\" on a proposal here: when we see a function call with a parameter that is , and the value being passed is a deref, we want to modify the IR to not introduce a temporary and instead pass that parameter directly from its location, either we do some sort of \"no more assignments\" lock, or we move the pointer and introduce a temporary (and then pass in a deref of that temporary) Does that sound right so far?\nIf it's easier to do the more flexible approach (by moving the pointer) than the \"locking\" approach (which is a bit like borrowing the pointer until all arguments are evaluated, then releasing the borrow and doing a move instead), I don't mind it, I was only arguing with the \"uniformity\" aspect. Something important here: your approach is stabilizable, while mine is meant more for that one in the standard library, and not much else.\nYes, I was kind of shooting for something that we could conceivably stabilize.\nRemoving nomination, this was already discussed in triage meeting and it's under control.\nSo should we maybe try out the approach I proposed and see how well it works?\nyes, going to try that out.\nFor the record, the \"new plan\" is this.", "commid": "rust_issue_68304", "tokennum": 480}], "negative_passages": []} {"query_id": "q-en-rust-4a7b048a54f6995f9dfc91bd475870f46b5416567d1866948593732c8907864b", "query": "error[E0382]: borrow of moved value: `x` --> $DIR/borrow-after-move.rs:39:24 | LL | let x = \"hello\".to_owned().into_boxed_str(); | - move occurs because `x` has type `std::boxed::Box`, which does not implement the `Copy` trait LL | x.foo(); | - value moved here LL | println!(\"{}\", &x); | ^^ value borrowed here after partial move | = note: move occurs because `*x` has type `str`, which does not implement the `Copy` trait | ^^ value borrowed here after move error: aborting due to 5 previous errors", "positive_passages": [{"docid": 
"doc-en-rust-7cf635da4f5adcfae533b40271ecf10f20aa01733028a100f12d429f49c1ae0c", "text": "When generating the arguments to a call, we ordinarily will make a temporary containing the value of each argument: But we will modify this such that, for arguments that meet the following conditions: The argument is moved (not copied), and its type is not known to be , and the \"place\" has outer derefs (e.g., we are passing P where P is some place), then we assign the place P (instead of ) to a temporary and we make the argument to the function be (instead of ). This means that given where , we used to generate but now we generate We are now moving the box itself, which is slightly different than the semantics for values known to be sized. In particular, it does not permit of the box after having moved its contents. But that is not supported for unsized types anyway. The change does permit modifications to the callee during argument evaluation and they will work in an analogous way to how they would work with sized values. So given something like: we will find that we (a) save the old value of to a temporary and then (b) overwrite while evaluating the arguments and then (c) invoke the with the old value of .\nUpdate: There is also\nis a perfectly normal operand, I don't know what you mean. It's the same kind of as we currently have in (on the RHS)\nOh sorry, somehow I thought operands could only refer directly to locals, not also dereference them. I think I am beginning to see the pieces here -- together with function argument handling, this entirely avoids having to dynamically determine the size of a local. I have to agree that is quite elegant, even though baking \"pass-by-reference\" into MIR semantics still seems like a heavy hammer to me. On the other hand this places a severe restriction on now unsized locals may be used, basically taking back large parts of Is there some long-term plan to bring that back (probably requires with dynamic alignment in LLVM)? 
Or should we amend the RFC or at least the tracking issue with the extra restrictions?\nI would keep the more general feature but use two different feature-gates (if we can). And just like with a few other feature-gates () we could warn that (the general one) is incomplete and could lead to misaligned variables (or something along those lines).", "commid": "rust_issue_68304", "tokennum": 499}], "negative_passages": []} {"query_id": "q-en-rust-4a7b048a54f6995f9dfc91bd475870f46b5416567d1866948593732c8907864b", "query": "error[E0382]: borrow of moved value: `x` --> $DIR/borrow-after-move.rs:39:24 | LL | let x = \"hello\".to_owned().into_boxed_str(); | - move occurs because `x` has type `std::boxed::Box`, which does not implement the `Copy` trait LL | x.foo(); | - value moved here LL | println!(\"{}\", &x); | ^^ value borrowed here after partial move | = note: move occurs because `*x` has type `str`, which does not implement the `Copy` trait | ^^ value borrowed here after move error: aborting due to 5 previous errors", "positive_passages": [{"docid": "doc-en-rust-e815c1522a134b198016730d0062b3d332c75119cceb0b89e314acd16ab4a8f8", "text": "We could also add runtime asserts that the (dynamic) alignment is respected, once we know could never trigger them (e.g. from stable code).\nI was wondering about that too. It seems like it'd be good to review the more general feature in any case. 
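The whole-box move that the proposal settles on can be demonstrated on stable Rust today. This is a hedged sketch (helper names are mine): the unsized contents of a `Box<str>` cannot be moved out, so transferring ownership moves the entire box, after which the original binding is dead.

```rust
// Takes the whole box by value; the box and its contents drop here.
fn take(b: Box<str>) -> usize {
    b.len()
}

fn boxed_len() -> usize {
    let x: Box<str> = "hello".to_owned().into_boxed_str();
    let n = take(x); // the box itself is moved into `take`
    // println!("{}", &x); // error[E0382]: borrow of moved value: `x`
    n
}

fn main() {
    println!("{}", boxed_len());
}
```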
This is partly why I brought up the idea of wanting a guaranteed \"no alloca\" path -- this was something I remember us discussing long ago, and so in some sense I feel like the work we're doing here isn't just working around LLVM's lack of dynamic alignment support (though it is doing that, too) but also working to complete the feature as originally envisioned.\nI two notes to the tracking issue so we don't overlook this in the future.\nThis is probably expected, but the fix failed to actually make sound:", "commid": "rust_issue_68304", "tokennum": 172}], "negative_passages": []} {"query_id": "q-en-rust-4ac199984a45b234d092426330dd856a1325eff0ed58c50b3bc406ab084562f3", "query": " #![feature(fn_delegation)] #![allow(incomplete_features)] mod to_reuse {} trait Trait { reuse to_reuse::foo { foo } //~^ ERROR cannot find function `foo` in module `to_reuse` //~| ERROR cannot find value `foo` in this scope } fn main() {} ", "positive_passages": [{"docid": "doc-en-rust-146312829290b27802b9e72c4c1d00d7d6e734a8e4dd04d06994a15e12da085f", "text": " $DIR/unsafe-foreign-mod-2.rs:4:5 | LL | unsafe fn foo(); | ^^^^^^^^^^^^^^^^ | help: add unsafe to this `extern` block | LL | unsafe extern \"C\" unsafe { | ++++++ error: aborting due to 3 previous errors", "positive_passages": [{"docid": "doc-en-rust-314973a620198a343d95c6001fa5535e22745954ce83c0927d2a46bf1bfbe699", "text": "With the following code: We get this error: Making the block then gives: cc\n:+1:, this should error in both cases but the diagnostics are misleading. Going to fix it. r?", "commid": "rust_issue_126327", "tokennum": 42}], "negative_passages": []} {"query_id": "q-en-rust-4c5c2cbf3d1e12b2ec0579e07144c5d2ff900522f24a888491f9bb1ecfdd1543", "query": " // Don't suggest double quotes when encountering an expr of type `char` where a `&str` // is expected if the expr is not a char literal. 
// issue: rust-lang/rust#125595 fn main() { let _: &str = ('a'); //~ ERROR mismatched types let token = || 'a'; let _: &str = token(); //~ ERROR mismatched types } ", "positive_passages": [{"docid": "doc-en-rust-205d33f2c4ca407a5eba587eb9d79d5b66428139a1f256eedc1fb0c137300290", "text": " $DIR/outer-lifetime-in-const-generic-default.rs:4:17 | LL | let x: &'a (); | ^^ | = note: for more information, see issue #74052 error: aborting due to previous error For more information about this error, try `rustc --explain E0771`. ", "positive_passages": [{"docid": "doc-en-rust-fab3ca8c9179b639d7db0f30758e18fd05d791de7fbda0ef99ca6ccdc52995f6", "text": "Const generic defaults are being stabilized, $DIR/outer-lifetime-in-const-generic-default.rs:4:17 | LL | let x: &'a (); | ^^ | = note: for more information, see issue #74052 error: aborting due to previous error For more information about this error, try `rustc --explain E0771`. ", "positive_passages": [{"docid": "doc-en-rust-da4b543757f0f59cd2c68f75eec2130e5847519474f60ca31783db1ed199f8fc", "text": "Yeah I think in this case it helped uncover the bug, because of the fact it ICEd and showed us we weren't resolving the closure default.\nI guess this example shows that this did help us uncover a bug, but kind of indirectly: This just didn't uncover a case where \"lost\" bound vars.\nthis is now fixed on beta", "commid": "rust_issue_93647", "tokennum": 71}], "negative_passages": []} {"query_id": "q-en-rust-4c82e06350c3732594ebfca98df7cefc26bb3841efe36fe21d0aca03c62077ce", "query": "/// ``` ``` As of version 1.34.0, one can also omit the `fn main()`, but you will have to disambiguate the error type: ```ignore /// ``` /// use std::io; /// let mut input = String::new(); /// io::stdin().read_line(&mut input)?; /// # Ok::<(), io:Error>(()) /// ``` ``` This is an unfortunate consequence of the `?` operator adding an implicit conversion, so type inference fails because the type is not unique. 
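The ambiguity that forces the explicit `Ok::<(), io::Error>(())` can be shown outside a doctest as well. A minimal sketch (`read_len` is a hypothetical helper, not from the original text): `?` performs an implicit error conversion, so without a concrete error type somewhere, inference has no unique solution, and the turbofish pins it down.

```rust
use std::io;

// A fallible helper with a concrete error type, so `?` could be used
// against it without ambiguity.
fn read_len(input: &str) -> Result<usize, io::Error> {
    Ok(input.len())
}

fn main() {
    let n = read_len("hello").unwrap();
    // The same turbofish form the documentation requires at the end of a
    // doctest body: it fixes both the Ok type and the error type.
    let tail = Ok::<(), io::Error>(());
    println!("{} {}", n, tail.is_ok());
}
```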
Please note that you must write the `(())` in one sequence without intermediate whitespace so that rustdoc understands you want an implicit `Result`-returning function. ## Documenting macros Here\u2019s an example of documenting a macro:", "positive_passages": [{"docid": "doc-en-rust-381ae64d6501835356d822025536ad50a25cf2598179158099e24a3fc7856415", "text": "Rustdoc now allows us to use in our tests. Unfortunately, this requires writing the manually, which really shouldn't be needed IMHO. So we now have to write: rust ///# fn main() -Result<(), (){ /// let foo = Ok(1usize); /// assert!(foo? 0); ///# } /// where it should suffice to write: rust /// let foo = Ok(1usize); /// assert!(foo? 0); /// Doing so will make the tests shorter and hopefully lead to more and less in tests.\nFeels like a no-brainer to me; would be great. :)\nI think there are two things we want before doing this: , so that it can always be , and existing code will just work. The re-do, so we can make a type that accepts any error type (since it'll just error anyway).\n?\nYes, that's what I'm thinking, and I'm setting up a PR for that. However, it's still unstable.\nI talked about this back in March on the \" in main\" tracking issue: That sample you gave isn't even valid as-is, unless coercing to is also in the pipeline?\nThat'll be next, hopefully. I wonder if it suffices to or if we need to keep a generic error type (which might mess with type inference). In any event, having both the aforementioned and should be a no-brainer, right?", "commid": "rust_issue_56260", "tokennum": 336}], "negative_passages": []} {"query_id": "q-en-rust-4ca06783b01c0e83098606730a471bc711a3df4388da5972e3c0dc419b6a44b4", "query": " error: macro-expanded `extern crate` items cannot shadow names passed with `--extern` --> $DIR/issue-78325-inconsistent-resolution.rs:3:9 | LL | extern crate std as core; | ^^^^^^^^^^^^^^^^^^^^^^^^^ ... 
LL | define_other_core!(); | --------------------- in this macro invocation | = note: this error originates in a macro (in Nightly builds, run with -Z macro-backtrace for more info) error: aborting due to previous error ", "positive_passages": [{"docid": "doc-en-rust-7d929a55a801b35d30d58d913f2afb0e04998c1d70c5f0cf00e15985806a02d6", "text": " $DIR/missing-main-issue-124935-semi-after-item.rs:5:1 | LL | ; | ^ help: remove this semicolon | = help: function declarations are not followed by a semicolon error: aborting due to 1 previous error ", "positive_passages": [{"docid": "doc-en-rust-6cf3c2bb3c08a38bafff3532db72ea1908c7190bb562a2249ecc7c70a784b1e0", "text": "Tested on nightly and stable. The second diagnostic is strange. $DIR/semi-in-let-chain.rs:7:23 | LL | && let () = (); | ^ expected `{` | note: you likely meant to continue parsing the let-chain starting here --> $DIR/semi-in-let-chain.rs:8:9 | LL | && let () = () | ^^^^^^ help: consider removing this semicolon to parse the `let` as part of the same chain | LL - && let () = (); LL + && let () = () | error: expected `{`, found `;` --> $DIR/semi-in-let-chain.rs:15:20 | LL | && () == (); | ^ expected `{` | note: the `if` expression is missing a block after this condition --> $DIR/semi-in-let-chain.rs:14:8 | LL | if let () = () | ________^ LL | | && () == (); | |___________________^ error: expected `{`, found `;` --> $DIR/semi-in-let-chain.rs:23:20 | LL | && () == (); | ^ expected `{` | note: you likely meant to continue parsing the let-chain starting here --> $DIR/semi-in-let-chain.rs:24:9 | LL | && let () = () | ^^^^^^ help: consider removing this semicolon to parse the `let` as part of the same chain | LL - && () == (); LL + && () == () | error: aborting due to 3 previous errors ", "positive_passages": [{"docid": "doc-en-rust-4668bc6d1c9bffda968585ed7ffaf889c5025ae8fe26b23f458ec26b75a810f2", "text": "This is a common typo I make when hoisting an inconditional let binding into the let chain to merge two 
let-chains that could be one. The parser only needs to do look-ahead 1 and 2 to detect the case and recover. No response No response $DIR/dbg-issue-120327.rs:4:12 | LL | let a = String::new(); | - move occurs because `a` has type `String`, which does not implement the `Copy` trait LL | dbg!(a); | ------- value moved here LL | return a; | ^ value used here after move | help: consider borrowing instead of transferring ownership | LL | dbg!(&a); | + error[E0382]: use of moved value: `a` --> $DIR/dbg-issue-120327.rs:10:12 | LL | let a = String::new(); | - move occurs because `a` has type `String`, which does not implement the `Copy` trait LL | dbg!(1, 2, a, 1, 2); | ------------------- value moved here LL | return a; | ^ value used here after move | help: consider borrowing instead of transferring ownership | LL | dbg!(1, 2, &a, 1, 2); | + error[E0382]: use of moved value: `b` --> $DIR/dbg-issue-120327.rs:16:12 | LL | let b: String = \"\".to_string(); | - move occurs because `b` has type `String`, which does not implement the `Copy` trait LL | dbg!(a, b); | ---------- value moved here LL | return b; | ^ value used here after move | help: consider borrowing instead of transferring ownership | LL | dbg!(a, &b); | + error[E0382]: use of moved value: `a` --> $DIR/dbg-issue-120327.rs:22:12 | LL | fn x(a: String) -> String { | - move occurs because `a` has type `String`, which does not implement the `Copy` trait LL | let b: String = \"\".to_string(); LL | dbg!(a, b); | ---------- value moved here LL | return a; | ^ value used here after move | help: consider borrowing instead of transferring ownership | LL | dbg!(&a, b); | + error[E0382]: use of moved value: `b` --> $DIR/dbg-issue-120327.rs:46:12 | LL | tmp => { | --- value moved here ... 
LL | let b: String = \"\".to_string(); | - move occurs because `b` has type `String`, which does not implement the `Copy` trait LL | my_dbg!(b, 1); LL | return b; | ^ value used here after move | help: consider borrowing instead of transferring ownership | LL | my_dbg!(&b, 1); | + help: borrow this binding in the pattern to avoid moving the value | LL | ref tmp => { | +++ error[E0382]: use of moved value: `a` --> $DIR/dbg-issue-120327.rs:57:12 | LL | let a = String::new(); | - move occurs because `a` has type `String`, which does not implement the `Copy` trait LL | let _b = match a { LL | tmp => { | --- value moved here ... LL | return a; | ^ value used here after move | help: borrow this binding in the pattern to avoid moving the value | LL | ref tmp => { | +++ error[E0382]: borrow of moved value: `a` --> $DIR/dbg-issue-120327.rs:65:14 | LL | let a: String = \"\".to_string(); | - move occurs because `a` has type `String`, which does not implement the `Copy` trait LL | let _res = get_expr(dbg!(a)); | ------- value moved here LL | let _l = a.len(); | ^ value borrowed here after move | help: consider borrowing instead of transferring ownership | LL | let _res = get_expr(dbg!(&a)); | + error: aborting due to 7 previous errors For more information about this error, try `rustc --explain E0382`. ", "positive_passages": [{"docid": "doc-en-rust-dd66f626e30f9a422ef543deab970d3dbd7b6175d2cb3eb638f16edd4a301a95", "text": "No response No response No response\nI thought the intent with was to use it in a passthrough style, so you can insert it into existing expressions ie. becomes . In which case, maybe the lint should be for not using the value resulting from evaluating .\nNot sure about the majority of users but for me is a convenience over . I use it without using the return value almost all of the time.\nProbably a common thing to do, sure, but if we're talking about adding a lint, why not nudge the user in the direction of the intended and documented usage? 
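The two `dbg!` styles contrasted in this thread can be sketched side by side (function name is mine): `dbg!(&a)` borrows, so the binding stays usable, while `dbg!(a)` is a pass-through that takes and returns ownership. Note that `dbg!` writes to stderr, not stdout.

```rust
fn both_styles() -> (usize, String) {
    let a = String::from("hello");
    let len = dbg!(&a).len(); // borrow: `a` is still owned afterwards
    let b = dbg!(a);          // pass-through: ownership moves into `b`
    (len, b)
}

fn main() {
    let (len, b) = both_styles();
    println!("{} {}", len, b);
}
```

The "consider borrowing" suggestion targets the first style; the second style only works if the result of `dbg!` is actually used in place of the moved binding.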
You could at least do both: \"insert into an existing expression or pass it a borrow\" etc.\nThe fix in will only report a hint when there is already a happened. For example this code: before this fix the error is: with the fix, since there is already a error, the lint will try to find out the original macro, and suggest a borrow: The suggestion is not 100% right, since if we change it to , we need one more extra fix now(the return type is not expecting a reference), but there is a different issue, at least we have already moved one step on. If the original code is: Then suggest a borrow is a great and correct hint.\nSimilar in the scenario of function argument: original error is: new errors from the fix:\nWould not also be a good hint? The suggestion wouldn't compile anyway right? Because would need to change to take ? Without involved, the compiler does indeed suggest that: Once that's addressed (whichever way it's done), once again becomes transparent, and eg. or works. does not, however, and I see why a \"consider borrowing\" hint would be good there. However, if you make the change first and leave as-is, the resulting error does not include the suggestion to change , which is unhelpful. (I agree that the message is even less helpful though, I'm not disputing that.)\nYes, it's a good hint, but from the implementation view of a hint, it's much harder to analyze the following code from the error point and then construct a new expression. Even it seems an easy thing for humans. You are right, the suggestion is not 100% right, we may need an extra follow-up fix after applying the hint, which is also a common thing in rustc diagnostics.", "commid": "rust_issue_120327", "tokennum": 503}], "negative_passages": []} {"query_id": "q-en-rust-508160d5a776e6461ef20e59034ea6a641b313cb19f7fa554fa726884f0dc8df", "query": " // Verify that the entry point injected by the test harness doesn't cause // weird artifacts in the coverage report (e.g. issue #10749). 
// compile-flags: --test #[allow(dead_code)] fn unused() {} #[test] fn my_test() {} ", "positive_passages": [{"docid": "doc-en-rust-099e817a0e38b98fb2e1c643ce49d6118ae26651c9c06c1d15c2983f1b67d09e", "text": " $DIR/nested_bad_const_param_ty.rs:6:10 | LL | #[derive(ConstParamTy)] | ^^^^^^^^^^^^ LL | LL | struct Foo([*const u8; 1]); | -------------- this field does not implement `ConstParamTy` | note: the `ConstParamTy` impl for `[*const u8; 1]` requires that `*const u8: ConstParamTy` --> $DIR/nested_bad_const_param_ty.rs:8:12 | LL | struct Foo([*const u8; 1]); | ^^^^^^^^^^^^^^ = note: this error originates in the derive macro `ConstParamTy` (in Nightly builds, run with -Z macro-backtrace for more info) error[E0204]: the trait `ConstParamTy` cannot be implemented for this type --> $DIR/nested_bad_const_param_ty.rs:10:10 | LL | #[derive(ConstParamTy)] | ^^^^^^^^^^^^ LL | LL | struct Foo2([*mut u8; 1]); | ------------ this field does not implement `ConstParamTy` | note: the `ConstParamTy` impl for `[*mut u8; 1]` requires that `*mut u8: ConstParamTy` --> $DIR/nested_bad_const_param_ty.rs:12:13 | LL | struct Foo2([*mut u8; 1]); | ^^^^^^^^^^^^ = note: this error originates in the derive macro `ConstParamTy` (in Nightly builds, run with -Z macro-backtrace for more info) error[E0204]: the trait `ConstParamTy` cannot be implemented for this type --> $DIR/nested_bad_const_param_ty.rs:14:10 | LL | #[derive(ConstParamTy)] | ^^^^^^^^^^^^ LL | LL | struct Foo3([fn(); 1]); | --------- this field does not implement `ConstParamTy` | note: the `ConstParamTy` impl for `[fn(); 1]` requires that `fn(): ConstParamTy` --> $DIR/nested_bad_const_param_ty.rs:16:13 | LL | struct Foo3([fn(); 1]); | ^^^^^^^^^ = note: this error originates in the derive macro `ConstParamTy` (in Nightly builds, run with -Z macro-backtrace for more info) error: aborting due to 3 previous errors For more information about this error, try `rustc --explain E0204`. 
", "positive_passages": [{"docid": "doc-en-rust-f555731ea4e464d4a3945f4923b0af1df45e1b6f6debc72c5133dab9c5118b02", "text": "The following code fails to compile, because using function pointers as const generics is forbidden: However, if the function pointer is inside an array, it suddenly compiles: Is this intentional, or should it also be forbidden? $DIR/generics.rs:19:9 --> $DIR/generics.rs:20:9 | LL | f3: extern \"C-cmse-nonsecure-call\" fn(T, u32, u32, u32) -> u64, | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ error[E0798]: function pointers with the `\"C-cmse-nonsecure-call\"` ABI cannot contain generics in their type --> $DIR/generics.rs:20:9 --> $DIR/generics.rs:21:9 | LL | f4: extern \"C-cmse-nonsecure-call\" fn(Wrapper, u32, u32, u32) -> u64, | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ error: aborting due to 5 previous errors error[E0798]: return value of `\"C-cmse-nonsecure-call\"` function too large to pass via registers --> $DIR/generics.rs:27:73 | LL | type WithTraitObject = extern \"C-cmse-nonsecure-call\" fn(&dyn Trait) -> &dyn Trait; | ^^^^^^^^^^ this type doesn't fit in the available registers | = note: functions with the `\"C-cmse-nonsecure-call\"` ABI must pass their result via the available return registers = note: the result must either be a (transparently wrapped) i64, u64 or f64, or be at most 4 bytes in size error[E0798]: return value of `\"C-cmse-nonsecure-call\"` function too large to pass via registers --> $DIR/generics.rs:31:62 | LL | extern \"C-cmse-nonsecure-call\" fn(&'static dyn Trait) -> &'static dyn Trait; | ^^^^^^^^^^^^^^^^^^ this type doesn't fit in the available registers | = note: functions with the `\"C-cmse-nonsecure-call\"` ABI must pass their result via the available return registers = note: the result must either be a (transparently wrapped) i64, u64 or f64, or be at most 4 bytes in size error[E0798]: return value of `\"C-cmse-nonsecure-call\"` function too large to pass via 
registers --> $DIR/generics.rs:38:62 | LL | extern \"C-cmse-nonsecure-call\" fn(WrapperTransparent) -> WrapperTransparent; | ^^^^^^^^^^^^^^^^^^ this type doesn't fit in the available registers | = note: functions with the `\"C-cmse-nonsecure-call\"` ABI must pass their result via the available return registers = note: the result must either be a (transparently wrapped) i64, u64 or f64, or be at most 4 bytes in size error[E0045]: C-variadic function must have a compatible calling convention, like `C` or `cdecl` --> $DIR/generics.rs:41:20 | LL | type WithVarArgs = extern \"C-cmse-nonsecure-call\" fn(u32, ...); | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ C-variadic function must have a compatible calling convention error: aborting due to 9 previous errors Some errors have detailed explanations: E0412, E0562, E0798. For more information about an error, try `rustc --explain E0412`. Some errors have detailed explanations: E0045, E0412, E0562, E0798. For more information about an error, try `rustc --explain E0045`. ", "positive_passages": [{"docid": "doc-en-rust-47f681e79359d4538a29b4fe6560472bcd9c6a462701bf8783cc73cb7468c995", "text": " $DIR/issue-109148.rs:6:9 | LL | extern crate core as std; | ^^^^^^^^^^^^^^^^^^^^^^^^^ ... 
LL | m!(); | ---- in this macro invocation | = note: this error originates in the macro `m` (in Nightly builds, run with -Z macro-backtrace for more info) error: aborting due to previous error ", "positive_passages": [{"docid": "doc-en-rust-fb6c2b7293948418dacd066e2443df69f06462bb9ddabc0ee4f62acdb26dc3b5", "text": " $DIR/dont-ice-on-assoc-projection.rs:15:32 | LL | impl Foo for T where T: Bar {} | ^^^^^^^^^ | = note: see issue #92827 for more information = help: add `#![feature(associated_const_equality)]` to the crate attributes to enable error[E0119]: conflicting implementations of trait `Foo` for type `()` --> $DIR/dont-ice-on-assoc-projection.rs:15:1 | LL | impl Foo for () {} | --------------- first implementation here LL | impl Foo for T where T: Bar {} | ^^^^^^^^^^^^^^^^^ conflicting implementation for `()` error: aborting due to 2 previous errors Some errors have detailed explanations: E0119, E0658. For more information about an error, try `rustc --explain E0119`. ", "positive_passages": [{"docid": "doc-en-rust-dafd2920ba43b58ad54539303c7552eb922492a8ce202284909e7cf099e4b1e6", "text": " $DIR/size-of-t.rs:7:30 | LL | let _arr: [u8; size_of::()]; | ^ cannot perform const operation using `T` | = note: type parameters may not be used in const expressions = help: use `#![feature(generic_const_exprs)]` to allow generic const expressions error: aborting due to previous error ", "positive_passages": [{"docid": "doc-en-rust-4d3a55561d073ebb66923c8e15b6008c5d2560a7eb12bd917e265626672a4e1d", "text": "Example: Results in the following error: of course, is by default unless you say , so the error and the suggestion are clearly wrong. And if you do say explicitly, you get Note being suggested, which is ~not even valid syntax~ apparently valid syntax, but not something you would write. 
I expected to see this happen: Either this compiles cleanly Or the compiler emits a different error that makes sense $DIR/issue-92100.rs:5:10 | LL | [a.., a] => {} | ^ not found in this scope error: aborting due to previous error For more information about this error, try `rustc --explain E0425`. ", "positive_passages": [{"docid": "doc-en-rust-a0ccb76a6e60f5a57f49822c16e14a4158fd6807515502bfbce2a615c588e5c2", "text": " $DIR/issue-93486.rs:3:36 | LL | vec![].last_mut().unwrap() = 3_u8; | -------------------------- ^ | | | cannot assign to this expression error: aborting due to previous error For more information about this error, try `rustc --explain E0070`. ", "positive_passages": [{"docid": "doc-en-rust-c875271ee78e88cab7b1e6cbc7c7d8dd341d5e80e0bb8796a460c13ee15625db", "text": "I was attempting to assign directly to the result of a vector's , which caused this error to fire. That is when I discovered that the help section was telling me to add an extra let to my while statement, which is not syntactically correct. I tried this code: I expected to see this happen: The code would fail to compile and the help message would tell me to dereference the left side, or at the least give me code that was syntactically correct. Instead, this happened: The code failed to compile, and the help message gave syntactically incorrect code as seen in the backtrace section. $DIR/issue-117789.rs:3:17 | LL | auto trait Trait

{} | -----^^^ help: remove the parameters | | | auto trait cannot have generic parameters error[E0658]: auto traits are experimental and possibly buggy --> $DIR/issue-117789.rs:3:1 | LL | auto trait Trait

{} | ^^^^^^^^^^^^^^^^^^^^^^ | = note: see issue #13231 for more information = help: add `#![feature(auto_traits)]` to the crate attributes to enable error: aborting due to 2 previous errors Some errors have detailed explanations: E0567, E0658. For more information about an error, try `rustc --explain E0567`. ", "positive_passages": [{"docid": "doc-en-rust-f3bfa6d834c46600d3869cbdfdac1ba23ee221d3907dbcfa430172612f8bf8af", "text": " $DIR/unusual-rib-combinations.rs:29:22 | LL | struct Bar Foo<'a>)>; | ^^ | = note: for more information, see issue #74052 error[E0214]: parenthesized type parameters may only be used with a `Fn` trait --> $DIR/unusual-rib-combinations.rs:7:16 |", "positive_passages": [{"docid": "doc-en-rust-29afce09f42a612f76d7a6981c4b3b2da2fe69b05479108e0bd4cc862f184ca0", "text": "This is a fuzzed test case, found with and minimized with . I wasn't able to find any similar issues: : // run-pass // edition:2021 struct Props { field_1: u32, //~ WARNING: field is never read: `field_1` field_2: u32, //~ WARNING: field is never read: `field_2` } fn main() { // Test 1 let props_2 = Props { //~ WARNING: unused variable: `props_2` field_1: 1, field_2: 1, }; let _ = || { let _: Props = props_2; }; // Test 2 let mut arr = [1, 3, 4, 5]; let mref = &mut arr; let _c = || match arr { [_, _, _, _] => println!(\"A\") }; println!(\"{:#?}\", mref); } ", "positive_passages": [{"docid": "doc-en-rust-7cdc79ef87272187a3994c116d49523df7131d804d1b2f1d966c795a36b75a1b", "text": "Encountered an ICE when building 0.18.0 on edition 2021. I have roughly narrowed it down to the following (can probably be reduced further). Seems to be an issue with closure captures. 
$DIR/issue-52437.rs:2:13 | LL | [(); &(&'static: loop { |x| {}; }) as *const _ as usize] | ^^^^^^^ error[E0282]: type annotations needed --> $DIR/issue-52437.rs:2:30 | LL | [(); &(&'static: loop { |x| {}; }) as *const _ as usize] | ^ consider giving this closure parameter a type error: aborting due to 2 previous errors For more information about this error, try `rustc --explain E0282`. ", "positive_passages": [{"docid": "doc-en-rust-bad15165f9c17e24bf9acf94720de1600514f6cf5915295f602149400d545451", "text": "Binder(<[type error] as ToBytes) This only happens if there are two erroneous functions, if I take out it works fine. It seems like I was trying to do this wrong in the first place since the first function gives an error . Not sure what the right way to do this is. // Copyright 2013 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. //! Windows file path handling use ascii::AsciiCast; use c_str::{CString, ToCStr}; use cast; use cmp::Eq; use from_str::FromStr; use iter::{AdditiveIterator, Extendable, Iterator}; use option::{Option, Some, None}; use str; use str::{OwnedStr, Str, StrVector}; use util; use vec::Vector; use super::{GenericPath, GenericPathUnsafe}; /// Iterator that yields successive components of a Path pub type ComponentIter<'self> = str::CharSplitIterator<'self, char>; /// Represents a Windows path // Notes for Windows path impl: // The MAX_PATH is 260, but 253 is the practical limit due to some API bugs // See http://msdn.microsoft.com/en-us/library/windows/desktop/aa365247.aspx for good information // about windows paths. // That same page puts a bunch of restrictions on allowed characters in a path. 
// `foo.txt` means \"relative to current drive\", but will not be considered to be absolute here // as `\u2203P | P.join(\"foo.txt\") != \"foo.txt\"`. // `C:` is interesting, that means \"the current directory on drive C\". // Long absolute paths need to have ? prefix (or, for UNC, ?UNC). I think that can be // ignored for now, though, and only added in a hypothetical .to_pwstr() function. // However, if a path is parsed that has ?, this needs to be preserved as it disables the // processing of \".\" and \"..\" components and / as a separator. // Experimentally, ?foo is not the same thing as foo. // Also, foo is not valid either (certainly not equivalent to foo). // Similarly, C:Users is not equivalent to C:Users, although C:Usersfoo is equivalent // to C:Usersfoo. In fact the command prompt treats C:foobar as UNC path. But it might be // best to just ignore that and normalize it to C:foobar. // // Based on all this, I think the right approach is to do the following: // * Require valid utf-8 paths. Windows API may use WCHARs, but we don't, and utf-8 is convertible // to UTF-16 anyway (though does Windows use UTF-16 or UCS-2? Not sure). // * Parse the prefixes ?UNC, ?, and . explicitly. // * If ?UNC, treat following two path components as servershare. Don't error for missing // servershare. // * If ?, parse disk from following component, if present. Don't error for missing disk. // * If ., treat rest of path as just regular components. I don't know how . and .. are handled // here, they probably aren't, but I'm not going to worry about that. // * Else if starts with , treat following two components as servershare. Don't error for missing // servershare. // * Otherwise, attempt to parse drive from start of path. // // The only error condition imposed here is valid utf-8. All other invalid paths are simply // preserved by the data structure; let the Windows API error out on them. 
#[deriving(Clone, DeepClone)] pub struct Path { priv repr: ~str, // assumed to never be empty priv prefix: Option, priv sepidx: Option // index of the final separator in the non-prefix portion of repr } impl Eq for Path { #[inline] fn eq(&self, other: &Path) -> bool { self.repr == other.repr } } impl FromStr for Path { fn from_str(s: &str) -> Option { if contains_nul(s.as_bytes()) { None } else { Some(unsafe { GenericPathUnsafe::from_str_unchecked(s) }) } } } impl ToCStr for Path { #[inline] fn to_c_str(&self) -> CString { // The Path impl guarantees no embedded NULs unsafe { self.as_vec().to_c_str_unchecked() } } #[inline] unsafe fn to_c_str_unchecked(&self) -> CString { self.as_vec().to_c_str_unchecked() } } impl GenericPathUnsafe for Path { /// See `GenericPathUnsafe::from_vec_unchecked`. /// /// # Failure /// /// Raises the `str::not_utf8` condition if not valid UTF-8. #[inline] unsafe fn from_vec_unchecked(path: &[u8]) -> Path { if !str::is_utf8(path) { let path = str::from_utf8(path); // triggers not_utf8 condition GenericPathUnsafe::from_str_unchecked(path) } else { GenericPathUnsafe::from_str_unchecked(cast::transmute(path)) } } #[inline] unsafe fn from_str_unchecked(path: &str) -> Path { let (prefix, path) = Path::normalize_(path); assert!(!path.is_empty()); let mut ret = Path{ repr: path, prefix: prefix, sepidx: None }; ret.update_sepidx(); ret } /// See `GenericPathUnsafe::set_dirname_unchecked`. /// /// # Failure /// /// Raises the `str::not_utf8` condition if not valid UTF-8. 
#[inline] unsafe fn set_dirname_unchecked(&mut self, dirname: &[u8]) { if !str::is_utf8(dirname) { let dirname = str::from_utf8(dirname); // triggers not_utf8 condition self.set_dirname_str_unchecked(dirname); } else { self.set_dirname_str_unchecked(cast::transmute(dirname)) } } unsafe fn set_dirname_str_unchecked(&mut self, dirname: &str) { match self.sepidx_or_prefix_len() { None if \".\" == self.repr || \"..\" == self.repr => { self.update_normalized(dirname); } None => { let mut s = str::with_capacity(dirname.len() + self.repr.len() + 1); s.push_str(dirname); s.push_char(sep); s.push_str(self.repr); self.update_normalized(s); } Some((_,idxa,end)) if self.repr.slice(idxa,end) == \"..\" => { self.update_normalized(dirname); } Some((_,idxa,end)) if dirname.is_empty() => { let (prefix, path) = Path::normalize_(self.repr.slice(idxa,end)); self.repr = path; self.prefix = prefix; self.update_sepidx(); } Some((idxb,idxa,end)) => { let idx = if dirname.ends_with(\"\") { idxa } else { let prefix = parse_prefix(dirname); if prefix == Some(DiskPrefix) && prefix_len(prefix) == dirname.len() { idxa } else { idxb } }; let mut s = str::with_capacity(dirname.len() + end - idx); s.push_str(dirname); s.push_str(self.repr.slice(idx,end)); self.update_normalized(s); } } } /// See `GenericPathUnsafe::set_filename_unchecekd`. /// /// # Failure /// /// Raises the `str::not_utf8` condition if not valid UTF-8. 
#[inline] unsafe fn set_filename_unchecked(&mut self, filename: &[u8]) { if !str::is_utf8(filename) { let filename = str::from_utf8(filename); // triggers not_utf8 condition self.set_filename_str_unchecked(filename) } else { self.set_filename_str_unchecked(cast::transmute(filename)) } } unsafe fn set_filename_str_unchecked(&mut self, filename: &str) { match self.sepidx_or_prefix_len() { None if \"..\" == self.repr => { let mut s = str::with_capacity(3 + filename.len()); s.push_str(\"..\"); s.push_char(sep); s.push_str(filename); self.update_normalized(s); } None => { self.update_normalized(filename); } Some((_,idxa,end)) if self.repr.slice(idxa,end) == \"..\" => { let mut s = str::with_capacity(end + 1 + filename.len()); s.push_str(self.repr.slice_to(end)); s.push_char(sep); s.push_str(filename); self.update_normalized(s); } Some((idxb,idxa,_)) if self.prefix == Some(DiskPrefix) && idxa == self.prefix_len() => { let mut s = str::with_capacity(idxb + filename.len()); s.push_str(self.repr.slice_to(idxb)); s.push_str(filename); self.update_normalized(s); } Some((idxb,_,_)) => { let mut s = str::with_capacity(idxb + 1 + filename.len()); s.push_str(self.repr.slice_to(idxb)); s.push_char(sep); s.push_str(filename); self.update_normalized(s); } } } /// See `GenericPathUnsafe::push_unchecked`. /// /// # Failure /// /// Raises the `str::not_utf8` condition if not valid UTF-8. unsafe fn push_unchecked(&mut self, path: &[u8]) { if !str::is_utf8(path) { let path = str::from_utf8(path); // triggers not_utf8 condition self.push_str_unchecked(path); } else { self.push_str_unchecked(cast::transmute(path)); } } /// See `GenericPathUnsafe::push_str_unchecked`. /// /// Concatenating two Windows Paths is rather complicated. /// For the most part, it will behave as expected, except in the case of /// pushing a volume-relative path, e.g. `C:foo.txt`. Because we have no /// concept of per-volume cwds like Windows does, we can't behave exactly /// like Windows will. 
Instead, if the receiver is an absolute path on /// the same volume as the new path, it will be treated as the cwd that /// the new path is relative to. Otherwise, the new path will be treated /// as if it were absolute and will replace the receiver outright. unsafe fn push_str_unchecked(&mut self, path: &str) { fn is_vol_abs(path: &str, prefix: Option) -> bool { // assume prefix is Some(DiskPrefix) let rest = path.slice_from(prefix_len(prefix)); !rest.is_empty() && rest[0].is_ascii() && is_sep2(rest[0] as char) } fn shares_volume(me: &Path, path: &str) -> bool { // path is assumed to have a prefix of Some(DiskPrefix) match me.prefix { Some(DiskPrefix) => me.repr[0] == path[0].to_ascii().to_upper().to_byte(), Some(VerbatimDiskPrefix) => me.repr[4] == path[0].to_ascii().to_upper().to_byte(), _ => false } } fn is_sep_(prefix: Option, u: u8) -> bool { u.is_ascii() && if prefix_is_verbatim(prefix) { is_sep(u as char) } else { is_sep2(u as char) } } fn replace_path(me: &mut Path, path: &str, prefix: Option) { let newpath = Path::normalize__(path, prefix); me.repr = match newpath { Some(p) => p, None => path.to_owned() }; me.prefix = prefix; me.update_sepidx(); } fn append_path(me: &mut Path, path: &str) { // appends a path that has no prefix // if me is verbatim, we need to pre-normalize the new path let path_ = if me.is_verbatim() { Path::normalize__(path, None) } else { None }; let pathlen = path_.map_default(path.len(), |p| p.len()); let mut s = str::with_capacity(me.repr.len() + 1 + pathlen); s.push_str(me.repr); let plen = me.prefix_len(); if !(me.repr.len() > plen && me.repr[me.repr.len()-1] == sep as u8) { s.push_char(sep); } match path_ { None => s.push_str(path), Some(p) => s.push_str(p) }; me.update_normalized(s) } if !path.is_empty() { let prefix = parse_prefix(path); match prefix { Some(DiskPrefix) if !is_vol_abs(path, prefix) && shares_volume(self, path) => { // cwd-relative path, self is on the same volume append_path(self, 
path.slice_from(prefix_len(prefix))); } Some(_) => { // absolute path, or cwd-relative and self is not same volume replace_path(self, path, prefix); } None if !path.is_empty() && is_sep_(self.prefix, path[0]) => { // volume-relative path if self.prefix().is_some() { // truncate self down to the prefix, then append let n = self.prefix_len(); self.repr.truncate(n); append_path(self, path); } else { // we have no prefix, so nothing to be relative to replace_path(self, path, prefix); } } None => { // relative path append_path(self, path); } } } } } impl GenericPath for Path { /// See `GenericPath::as_str` for info. /// Always returns a `Some` value. #[inline] fn as_str<'a>(&'a self) -> Option<&'a str> { Some(self.repr.as_slice()) } #[inline] fn as_vec<'a>(&'a self) -> &'a [u8] { self.repr.as_bytes() } #[inline] fn dirname<'a>(&'a self) -> &'a [u8] { self.dirname_str().unwrap().as_bytes() } /// See `GenericPath::dirname_str` for info. /// Always returns a `Some` value. fn dirname_str<'a>(&'a self) -> Option<&'a str> { Some(match self.sepidx_or_prefix_len() { None if \"..\" == self.repr => self.repr.as_slice(), None => \".\", Some((_,idxa,end)) if self.repr.slice(idxa, end) == \"..\" => { self.repr.as_slice() } Some((idxb,_,end)) if self.repr.slice(idxb, end) == \"\" => { self.repr.as_slice() } Some((0,idxa,_)) => self.repr.slice_to(idxa), Some((idxb,idxa,_)) => { match self.prefix { Some(DiskPrefix) | Some(VerbatimDiskPrefix) if idxb == self.prefix_len() => { self.repr.slice_to(idxa) } _ => self.repr.slice_to(idxb) } } }) } #[inline] fn filename<'a>(&'a self) -> &'a [u8] { self.filename_str().unwrap().as_bytes() } /// See `GenericPath::filename_str` for info. /// Always returns a `Some` value. 
fn filename_str<'a>(&'a self) -> Option<&'a str> { Some(match self.sepidx_or_prefix_len() { None if \".\" == self.repr || \"..\" == self.repr => \"\", None => self.repr.as_slice(), Some((_,idxa,end)) if self.repr.slice(idxa, end) == \"..\" => \"\", Some((_,idxa,end)) => self.repr.slice(idxa, end) }) } /// See `GenericPath::filestem_str` for info. /// Always returns a `Some` value. #[inline] fn filestem_str<'a>(&'a self) -> Option<&'a str> { // filestem() returns a byte vector that's guaranteed valid UTF-8 Some(unsafe { cast::transmute(self.filestem()) }) } #[inline] fn extension_str<'a>(&'a self) -> Option<&'a str> { // extension() returns a byte vector that's guaranteed valid UTF-8 self.extension().map_move(|v| unsafe { cast::transmute(v) }) } fn dir_path(&self) -> Path { unsafe { GenericPathUnsafe::from_str_unchecked(self.dirname_str().unwrap()) } } fn file_path(&self) -> Option { match self.filename_str() { None | Some(\"\") => None, Some(s) => Some(unsafe { GenericPathUnsafe::from_str_unchecked(s) }) } } #[inline] fn push_path(&mut self, path: &Path) { self.push_str(path.as_str().unwrap()) } #[inline] fn pop_opt(&mut self) -> Option<~[u8]> { self.pop_opt_str().map_move(|s| s.into_bytes()) } fn pop_opt_str(&mut self) -> Option<~str> { match self.sepidx_or_prefix_len() { None if \".\" == self.repr => None, None => { let mut s = ~\".\"; util::swap(&mut s, &mut self.repr); self.sepidx = None; Some(s) } Some((idxb,idxa,end)) if idxb == idxa && idxb == end => None, Some((idxb,_,end)) if self.repr.slice(idxb, end) == \"\" => None, Some((idxb,idxa,end)) => { let s = self.repr.slice(idxa, end).to_owned(); let trunc = match self.prefix { Some(DiskPrefix) | Some(VerbatimDiskPrefix) | None => { let plen = self.prefix_len(); if idxb == plen { idxa } else { idxb } } _ => idxb }; self.repr.truncate(trunc); self.update_sepidx(); Some(s) } } } /// See `GenericPath::is_absolute` for info. 
/// /// A Windows Path is considered absolute only if it has a non-volume prefix, /// or if it has a volume prefix and the path starts with ''. /// A path of `foo` is not considered absolute because it's actually /// relative to the \"current volume\". A separate method `Path::is_vol_relative` /// is provided to indicate this case. Similarly a path of `C:foo` is not /// considered absolute because it's relative to the cwd on volume C:. A /// separate method `Path::is_cwd_relative` is provided to indicate this case. #[inline] fn is_absolute(&self) -> bool { match self.prefix { Some(DiskPrefix) => { let rest = self.repr.slice_from(self.prefix_len()); rest.len() > 0 && rest[0] == sep as u8 } Some(_) => true, None => false } } fn is_ancestor_of(&self, other: &Path) -> bool { if !self.equiv_prefix(other) { false } else if self.is_absolute() != other.is_absolute() || self.is_vol_relative() != other.is_vol_relative() { false } else { let mut ita = self.component_iter(); let mut itb = other.component_iter(); if \".\" == self.repr { return itb.next() != Some(\"..\"); } loop { match (ita.next(), itb.next()) { (None, _) => break, (Some(a), Some(b)) if a == b => { loop }, (Some(a), _) if a == \"..\" => { // if ita contains only .. 
components, it's an ancestor return ita.all(|x| x == \"..\"); } _ => return false } } true } } fn path_relative_from(&self, base: &Path) -> Option { fn comp_requires_verbatim(s: &str) -> bool { s == \".\" || s == \"..\" || s.contains_char(sep2) } if !self.equiv_prefix(base) { // prefixes differ if self.is_absolute() { Some(self.clone()) } else if self.prefix == Some(DiskPrefix) && base.prefix == Some(DiskPrefix) { // both drives, drive letters must differ or they'd be equiv Some(self.clone()) } else { None } } else if self.is_absolute() != base.is_absolute() { if self.is_absolute() { Some(self.clone()) } else { None } } else if self.is_vol_relative() != base.is_vol_relative() { if self.is_vol_relative() { Some(self.clone()) } else { None } } else { let mut ita = self.component_iter(); let mut itb = base.component_iter(); let mut comps = ~[]; let a_verb = self.is_verbatim(); let b_verb = base.is_verbatim(); loop { match (ita.next(), itb.next()) { (None, None) => break, (Some(a), None) if a_verb && comp_requires_verbatim(a) => { return Some(self.clone()) } (Some(a), None) => { comps.push(a); if !a_verb { comps.extend(&mut ita); break; } } (None, _) => comps.push(\"..\"), (Some(a), Some(b)) if comps.is_empty() && a == b => (), (Some(a), Some(b)) if !b_verb && b == \".\" => { if a_verb && comp_requires_verbatim(a) { return Some(self.clone()) } else { comps.push(a) } } (Some(_), Some(b)) if !b_verb && b == \"..\" => return None, (Some(a), Some(_)) if a_verb && comp_requires_verbatim(a) => { return Some(self.clone()) } (Some(a), Some(_)) => { comps.push(\"..\"); for _ in itb { comps.push(\"..\"); } comps.push(a); if !a_verb { comps.extend(&mut ita); break; } } } } Some(Path::from_str(comps.connect(\"\"))) } } } impl Path { /// Returns a new Path from a byte vector /// /// # Failure /// /// Raises the `null_byte` condition if the vector contains a NUL. /// Raises the `str::not_utf8` condition if invalid UTF-8. 
#[inline] pub fn new(v: &[u8]) -> Path { GenericPath::from_vec(v) } /// Returns a new Path from a string /// /// # Failure /// /// Raises the `null_byte` condition if the vector contains a NUL. #[inline] pub fn from_str(s: &str) -> Path { GenericPath::from_str(s) } /// Converts the Path into an owned byte vector pub fn into_vec(self) -> ~[u8] { self.repr.into_bytes() } /// Converts the Path into an owned string /// Returns an Option for compatibility with posix::Path, but the /// return value will always be Some. pub fn into_str(self) -> Option<~str> { Some(self.repr) } /// Returns a normalized string representation of a path, by removing all empty /// components, and unnecessary . and .. components. pub fn normalize(s: S) -> ~str { let (_, path) = Path::normalize_(s); path } /// Returns an iterator that yields each component of the path in turn. /// Does not yield the path prefix (including server/share components in UNC paths). /// Does not distinguish between volume-relative and relative paths, e.g. /// abc and abc. /// Does not distinguish between absolute and cwd-relative paths, e.g. /// C:foo and C:foo. pub fn component_iter<'a>(&'a self) -> ComponentIter<'a> { let s = match self.prefix { Some(_) => { let plen = self.prefix_len(); if self.repr.len() > plen && self.repr[plen] == sep as u8 { self.repr.slice_from(plen+1) } else { self.repr.slice_from(plen) } } None if self.repr[0] == sep as u8 => self.repr.slice_from(1), None => self.repr.as_slice() }; let ret = s.split_terminator_iter(sep); ret } /// Returns whether the path is considered \"volume-relative\", which means a path /// that looks like \"foo\". Paths of this form are relative to the current volume, /// but absolute within that volume. #[inline] pub fn is_vol_relative(&self) -> bool { self.prefix.is_none() && self.repr[0] == sep as u8 } /// Returns whether the path is considered \"cwd-relative\", which means a path /// with a volume prefix that is not absolute. This look like \"C:foo.txt\". 
Paths /// of this form are relative to the cwd on the given volume. #[inline] pub fn is_cwd_relative(&self) -> bool { self.prefix == Some(DiskPrefix) && !self.is_absolute() } /// Returns the PathPrefix for this Path #[inline] pub fn prefix(&self) -> Option { self.prefix } /// Returns whether the prefix is a verbatim prefix, i.e. ? #[inline] pub fn is_verbatim(&self) -> bool { prefix_is_verbatim(self.prefix) } fn equiv_prefix(&self, other: &Path) -> bool { match (self.prefix, other.prefix) { (Some(DiskPrefix), Some(VerbatimDiskPrefix)) => { self.is_absolute() && self.repr[0].to_ascii().eq_ignore_case(other.repr[4].to_ascii()) } (Some(VerbatimDiskPrefix), Some(DiskPrefix)) => { other.is_absolute() && self.repr[4].to_ascii().eq_ignore_case(other.repr[0].to_ascii()) } (Some(VerbatimDiskPrefix), Some(VerbatimDiskPrefix)) => { self.repr[4].to_ascii().eq_ignore_case(other.repr[4].to_ascii()) } (Some(UNCPrefix(_,_)), Some(VerbatimUNCPrefix(_,_))) => { self.repr.slice(2, self.prefix_len()) == other.repr.slice(8, other.prefix_len()) } (Some(VerbatimUNCPrefix(_,_)), Some(UNCPrefix(_,_))) => { self.repr.slice(8, self.prefix_len()) == other.repr.slice(2, other.prefix_len()) } (None, None) => true, (a, b) if a == b => { self.repr.slice_to(self.prefix_len()) == other.repr.slice_to(other.prefix_len()) } _ => false } } fn normalize_(s: S) -> (Option, ~str) { // make borrowck happy let (prefix, val) = { let prefix = parse_prefix(s.as_slice()); let path = Path::normalize__(s.as_slice(), prefix); (prefix, path) }; (prefix, match val { None => s.into_owned(), Some(val) => val }) } fn normalize__(s: &str, prefix: Option) -> Option<~str> { if prefix_is_verbatim(prefix) { // don't do any normalization match prefix { Some(VerbatimUNCPrefix(x, 0)) if s.len() == 8 + x => { // the server component has no trailing '' let mut s = s.into_owned(); s.push_char(sep); Some(s) } _ => None } } else { let (is_abs, comps) = normalize_helper(s, prefix); let mut comps = comps; match 
(comps.is_some(),prefix) { (false, Some(DiskPrefix)) => { if s[0] >= 'a' as u8 && s[0] <= 'z' as u8 { comps = Some(~[]); } } (false, Some(VerbatimDiskPrefix)) => { if s[4] >= 'a' as u8 && s[0] <= 'z' as u8 { comps = Some(~[]); } } _ => () } match comps { None => None, Some(comps) => { if prefix.is_some() && comps.is_empty() { match prefix.unwrap() { DiskPrefix => { let len = prefix_len(prefix) + is_abs as uint; let mut s = s.slice_to(len).to_owned(); s[0] = s[0].to_ascii().to_upper().to_byte(); if is_abs { s[2] = sep as u8; // normalize C:/ to C: } Some(s) } VerbatimDiskPrefix => { let len = prefix_len(prefix) + is_abs as uint; let mut s = s.slice_to(len).to_owned(); s[4] = s[4].to_ascii().to_upper().to_byte(); Some(s) } _ => { let plen = prefix_len(prefix); if s.len() > plen { Some(s.slice_to(plen).to_owned()) } else { None } } } } else if is_abs && comps.is_empty() { Some(str::from_char(sep)) } else { let prefix_ = s.slice_to(prefix_len(prefix)); let n = prefix_.len() + if is_abs { comps.len() } else { comps.len() - 1} + comps.iter().map(|v| v.len()).sum(); let mut s = str::with_capacity(n); match prefix { Some(DiskPrefix) => { s.push_char(prefix_[0].to_ascii().to_upper().to_char()); s.push_char(':'); } Some(VerbatimDiskPrefix) => { s.push_str(prefix_.slice_to(4)); s.push_char(prefix_[4].to_ascii().to_upper().to_char()); s.push_str(prefix_.slice_from(5)); } Some(UNCPrefix(a,b)) => { s.push_str(\"\"); s.push_str(prefix_.slice(2, a+2)); s.push_char(sep); s.push_str(prefix_.slice(3+a, 3+a+b)); } Some(_) => s.push_str(prefix_), None => () } let mut it = comps.move_iter(); if !is_abs { match it.next() { None => (), Some(comp) => s.push_str(comp) } } for comp in it { s.push_char(sep); s.push_str(comp); } Some(s) } } } } } fn update_sepidx(&mut self) { let s = if self.has_nonsemantic_trailing_slash() { self.repr.slice_to(self.repr.len()-1) } else { self.repr.as_slice() }; let idx = s.rfind(if !prefix_is_verbatim(self.prefix) { is_sep2 } else { is_sep }); let prefixlen = 
self.prefix_len(); self.sepidx = idx.and_then(|x| if x < prefixlen { None } else { Some(x) }); } fn prefix_len(&self) -> uint { prefix_len(self.prefix) } // Returns a tuple (before, after, end) where before is the index of the separator // and after is the index just after the separator. // end is the length of the string, normally, or the index of the final character if it is // a non-semantic trailing separator in a verbatim string. // If the prefix is considered the separator, before and after are the same. fn sepidx_or_prefix_len(&self) -> Option<(uint,uint,uint)> { match self.sepidx { None => match self.prefix_len() { 0 => None, x => Some((x,x,self.repr.len())) }, Some(x) => { if self.has_nonsemantic_trailing_slash() { Some((x,x+1,self.repr.len()-1)) } else { Some((x,x+1,self.repr.len())) } } } } fn has_nonsemantic_trailing_slash(&self) -> bool { self.is_verbatim() && self.repr.len() > self.prefix_len()+1 && self.repr[self.repr.len()-1] == sep as u8 } fn update_normalized(&mut self, s: S) { let (prefix, path) = Path::normalize_(s); self.repr = path; self.prefix = prefix; self.update_sepidx(); } } /// The standard path separator character pub static sep: char = ''; /// The alternative path separator character pub static sep2: char = '/'; /// Returns whether the given byte is a path separator. /// Only allows the primary separator ''; use is_sep2 to allow '/'. #[inline] pub fn is_sep(c: char) -> bool { c == sep } /// Returns whether the given byte is a path separator. /// Allows both the primary separator '' and the alternative separator '/'. 
#[inline] pub fn is_sep2(c: char) -> bool { c == sep || c == sep2 } /// Prefix types for Path #[deriving(Eq, Clone, DeepClone)] pub enum PathPrefix { /// Prefix `?`, uint is the length of the following component VerbatimPrefix(uint), /// Prefix `?UNC`, uints are the lengths of the UNC components VerbatimUNCPrefix(uint, uint), /// Prefix `?C:` (for any alphabetic character) VerbatimDiskPrefix, /// Prefix `.`, uint is the length of the following component DeviceNSPrefix(uint), /// UNC prefix `servershare`, uints are the lengths of the server/share UNCPrefix(uint, uint), /// Prefix `C:` for any alphabetic character DiskPrefix } /// Internal function; only public for tests. Don't use. // FIXME (#8169): Make private once visibility is fixed pub fn parse_prefix<'a>(mut path: &'a str) -> Option { if path.starts_with(\"\") { // path = path.slice_from(2); if path.starts_with(\"?\") { // ? path = path.slice_from(2); if path.starts_with(\"UNC\") { // ?UNCservershare path = path.slice_from(4); let (idx_a, idx_b) = match parse_two_comps(path, is_sep) { Some(x) => x, None => (path.len(), 0) }; return Some(VerbatimUNCPrefix(idx_a, idx_b)); } else { // ?path let idx = path.find(''); if idx == Some(2) && path[1] == ':' as u8 { let c = path[0]; if c.is_ascii() && ::char::is_alphabetic(c as char) { // ?C: path return Some(VerbatimDiskPrefix); } } let idx = idx.unwrap_or(path.len()); return Some(VerbatimPrefix(idx)); } } else if path.starts_with(\".\") { // .path path = path.slice_from(2); let idx = path.find('').unwrap_or(path.len()); return Some(DeviceNSPrefix(idx)); } match parse_two_comps(path, is_sep2) { Some((idx_a, idx_b)) if idx_a > 0 && idx_b > 0 => { // servershare return Some(UNCPrefix(idx_a, idx_b)); } _ => () } } else if path.len() > 1 && path[1] == ':' as u8 { // C: let c = path[0]; if c.is_ascii() && ::char::is_alphabetic(c as char) { return Some(DiskPrefix); } } return None; fn parse_two_comps<'a>(mut path: &'a str, f: &fn(char)->bool) -> Option<(uint, uint)> { let 
idx_a = match path.find(|x| f(x)) { None => return None, Some(x) => x }; path = path.slice_from(idx_a+1); let idx_b = path.find(f).unwrap_or(path.len()); Some((idx_a, idx_b)) } } // None result means the string didn't need normalizing fn normalize_helper<'a>(s: &'a str, prefix: Option) -> (bool,Option<~[&'a str]>) { let f = if !prefix_is_verbatim(prefix) { is_sep2 } else { is_sep }; let is_abs = s.len() > prefix_len(prefix) && f(s.char_at(prefix_len(prefix))); let s_ = s.slice_from(prefix_len(prefix)); let s_ = if is_abs { s_.slice_from(1) } else { s_ }; if is_abs && s_.is_empty() { return (is_abs, match prefix { Some(DiskPrefix) | None => (if is_sep(s.char_at(prefix_len(prefix))) { None } else { Some(~[]) }), Some(_) => Some(~[]), // need to trim the trailing separator }); } let mut comps: ~[&'a str] = ~[]; let mut n_up = 0u; let mut changed = false; for comp in s_.split_iter(f) { if comp.is_empty() { changed = true } else if comp == \".\" { changed = true } else if comp == \"..\" { let has_abs_prefix = match prefix { Some(DiskPrefix) => false, Some(_) => true, None => false }; if (is_abs || has_abs_prefix) && comps.is_empty() { changed = true } else if comps.len() == n_up { comps.push(\"..\"); n_up += 1 } else { comps.pop_opt(); changed = true } } else { comps.push(comp) } } if !changed && !prefix_is_verbatim(prefix) { changed = s.find(is_sep2).is_some(); } if changed { if comps.is_empty() && !is_abs && prefix.is_none() { if s == \".\" { return (is_abs, None); } comps.push(\".\"); } (is_abs, Some(comps)) } else { (is_abs, None) } } // FIXME (#8169): Pull this into parent module once visibility works #[inline(always)] fn contains_nul(v: &[u8]) -> bool { v.iter().any(|&x| x == 0) } fn prefix_is_verbatim(p: Option) -> bool { match p { Some(VerbatimPrefix(_)) | Some(VerbatimUNCPrefix(_,_)) | Some(VerbatimDiskPrefix) => true, Some(DeviceNSPrefix(_)) => true, // not really sure, but I think so _ => false } } fn prefix_len(p: Option) -> uint { match p { None => 0, 
Some(VerbatimPrefix(x)) => 4 + x, Some(VerbatimUNCPrefix(x,y)) => 8 + x + 1 + y, Some(VerbatimDiskPrefix) => 6, Some(UNCPrefix(x,y)) => 2 + x + 1 + y, Some(DeviceNSPrefix(x)) => 4 + x, Some(DiskPrefix) => 2 } } fn prefix_is_sep(p: Option, c: u8) -> bool { c.is_ascii() && if !prefix_is_verbatim(p) { is_sep2(c as char) } else { is_sep(c as char) } } #[cfg(test)] mod tests { use super::*; use option::{Some,None}; use iter::Iterator; use vec::Vector; macro_rules! t( (s: $path:expr, $exp:expr) => ( { let path = $path; assert_eq!(path.as_str(), Some($exp)); } ); (v: $path:expr, $exp:expr) => ( { let path = $path; assert_eq!(path.as_vec(), $exp); } ) ) macro_rules! b( ($($arg:expr),+) => ( bytes!($($arg),+) ) ) #[test] fn test_parse_prefix() { macro_rules! t( ($path:expr, $exp:expr) => ( { let path = $path; let exp = $exp; let res = parse_prefix(path); assert!(res == exp, \"parse_prefix(\"%s\"): expected %?, found %?\", path, exp, res); } ) ) t!(\"SERVERsharefoo\", Some(UNCPrefix(6,5))); t!(\"\", None); t!(\"SERVER\", None); t!(\"SERVER\", None); t!(\"SERVER\", None); t!(\"SERVERfoo\", None); t!(\"SERVERshare\", Some(UNCPrefix(6,5))); t!(\"SERVER/share/foo\", Some(UNCPrefix(6,5))); t!(\"SERVERshare/foo\", Some(UNCPrefix(6,5))); t!(\"//SERVER/share/foo\", None); t!(\"abc\", None); t!(\"?abc\", Some(VerbatimPrefix(1))); t!(\"?a/b/c\", Some(VerbatimPrefix(5))); t!(\"//?/a/b/c\", None); t!(\".ab\", Some(DeviceNSPrefix(1))); t!(\".a/b\", Some(DeviceNSPrefix(3))); t!(\"//./a/b\", None); t!(\"?UNCserversharefoo\", Some(VerbatimUNCPrefix(6,5))); t!(\"?UNCsharefoo\", Some(VerbatimUNCPrefix(0,5))); t!(\"?UNC\", Some(VerbatimUNCPrefix(0,0))); t!(\"?UNCserver/share/foo\", Some(VerbatimUNCPrefix(16,0))); t!(\"?UNCserver\", Some(VerbatimUNCPrefix(6,0))); t!(\"?UNCserver\", Some(VerbatimUNCPrefix(6,0))); t!(\"?UNC/server/share\", Some(VerbatimPrefix(16))); t!(\"?UNC\", Some(VerbatimPrefix(3))); t!(\"?C:ab.txt\", Some(VerbatimDiskPrefix)); t!(\"?z:\", Some(VerbatimDiskPrefix)); 
t!(\"?C:\", Some(VerbatimPrefix(2))); t!(\"?C:a.txt\", Some(VerbatimPrefix(7))); t!(\"?C:ab.txt\", Some(VerbatimPrefix(3))); t!(\"?C:/a\", Some(VerbatimPrefix(4))); t!(\"C:foo\", Some(DiskPrefix)); t!(\"z:/foo\", Some(DiskPrefix)); t!(\"d:\", Some(DiskPrefix)); t!(\"ab:\", None); t!(\"\u00fc:foo\", None); t!(\"3:foo\", None); t!(\" :foo\", None); t!(\"::foo\", None); t!(\"?C:\", Some(VerbatimPrefix(2))); t!(\"?z:\", Some(VerbatimDiskPrefix)); t!(\"?ab:\", Some(VerbatimPrefix(3))); t!(\"?C:a\", Some(VerbatimDiskPrefix)); t!(\"?C:/a\", Some(VerbatimPrefix(4))); t!(\"?C:a/b\", Some(VerbatimDiskPrefix)); } #[test] fn test_paths() { t!(v: Path::new([]), b!(\".\")); t!(v: Path::new(b!(\"\")), b!(\"\")); t!(v: Path::new(b!(\"abc\")), b!(\"abc\")); t!(s: Path::from_str(\"\"), \".\"); t!(s: Path::from_str(\"\"), \"\"); t!(s: Path::from_str(\"hi\"), \"hi\"); t!(s: Path::from_str(\"hi\"), \"hi\"); t!(s: Path::from_str(\"lib\"), \"lib\"); t!(s: Path::from_str(\"lib\"), \"lib\"); t!(s: Path::from_str(\"hithere\"), \"hithere\"); t!(s: Path::from_str(\"hithere.txt\"), \"hithere.txt\"); t!(s: Path::from_str(\"/\"), \"\"); t!(s: Path::from_str(\"hi/\"), \"hi\"); t!(s: Path::from_str(\"/lib\"), \"lib\"); t!(s: Path::from_str(\"/lib/\"), \"lib\"); t!(s: Path::from_str(\"hi/there\"), \"hithere\"); t!(s: Path::from_str(\"hithere\"), \"hithere\"); t!(s: Path::from_str(\"hi..there\"), \"there\"); t!(s: Path::from_str(\"hi/../there\"), \"there\"); t!(s: Path::from_str(\"..hithere\"), \"..hithere\"); t!(s: Path::from_str(\"..hithere\"), \"hithere\"); t!(s: Path::from_str(\"/../hi/there\"), \"hithere\"); t!(s: Path::from_str(\"foo..\"), \".\"); t!(s: Path::from_str(\"foo..\"), \"\"); t!(s: Path::from_str(\"foo....\"), \"\"); t!(s: Path::from_str(\"foo....bar\"), \"bar\"); t!(s: Path::from_str(\".hi.there.\"), \"hithere\"); t!(s: Path::from_str(\".hi.there...\"), \"hi\"); t!(s: Path::from_str(\"foo....\"), \"..\"); t!(s: Path::from_str(\"foo......\"), \"....\"); t!(s: 
Path::from_str(\"foo....bar\"), \"..bar\"); assert_eq!(Path::new(b!(\"foobar\")).into_vec(), b!(\"foobar\").to_owned()); assert_eq!(Path::new(b!(\"foo....bar\")).into_vec(), b!(\"bar\").to_owned()); assert_eq!(Path::from_str(\"foobar\").into_str(), Some(~\"foobar\")); assert_eq!(Path::from_str(\"foo....bar\").into_str(), Some(~\"bar\")); t!(s: Path::from_str(\"a\"), \"a\"); t!(s: Path::from_str(\"a\"), \"a\"); t!(s: Path::from_str(\"ab\"), \"ab\"); t!(s: Path::from_str(\"ab\"), \"ab\"); t!(s: Path::from_str(\"ab/\"), \"ab\"); t!(s: Path::from_str(\"b\"), \"b\"); t!(s: Path::from_str(\"ab\"), \"ab\"); t!(s: Path::from_str(\"abc\"), \"abc\"); t!(s: Path::from_str(\"servershare/path\"), \"serversharepath\"); t!(s: Path::from_str(\"server/share/path\"), \"serversharepath\"); t!(s: Path::from_str(\"C:ab.txt\"), \"C:ab.txt\"); t!(s: Path::from_str(\"C:a/b.txt\"), \"C:ab.txt\"); t!(s: Path::from_str(\"z:ab.txt\"), \"Z:ab.txt\"); t!(s: Path::from_str(\"z:/a/b.txt\"), \"Z:ab.txt\"); t!(s: Path::from_str(\"ab:/a/b.txt\"), \"ab:ab.txt\"); t!(s: Path::from_str(\"C:\"), \"C:\"); t!(s: Path::from_str(\"C:\"), \"C:\"); t!(s: Path::from_str(\"q:\"), \"Q:\"); t!(s: Path::from_str(\"C:/\"), \"C:\"); t!(s: Path::from_str(\"C:foo..\"), \"C:\"); t!(s: Path::from_str(\"C:foo..\"), \"C:\"); t!(s: Path::from_str(\"C:a\"), \"C:a\"); t!(s: Path::from_str(\"C:a/\"), \"C:a\"); t!(s: Path::from_str(\"C:ab\"), \"C:ab\"); t!(s: Path::from_str(\"C:ab/\"), \"C:ab\"); t!(s: Path::from_str(\"C:a\"), \"C:a\"); t!(s: Path::from_str(\"C:a/\"), \"C:a\"); t!(s: Path::from_str(\"C:ab\"), \"C:ab\"); t!(s: Path::from_str(\"C:ab/\"), \"C:ab\"); t!(s: Path::from_str(\"?z:ab.txt\"), \"?z:ab.txt\"); t!(s: Path::from_str(\"?C:/a/b.txt\"), \"?C:/a/b.txt\"); t!(s: Path::from_str(\"?C:a/b.txt\"), \"?C:a/b.txt\"); t!(s: Path::from_str(\"?testab.txt\"), \"?testab.txt\"); t!(s: Path::from_str(\"?foobar\"), \"?foobar\"); t!(s: Path::from_str(\".foobar\"), \".foobar\"); t!(s: Path::from_str(\".\"), \".\"); t!(s: 
Path::from_str(\"?UNCserversharefoo\"), \"?UNCserversharefoo\"); t!(s: Path::from_str(\"?UNCserver/share\"), \"?UNCserver/share\"); t!(s: Path::from_str(\"?UNCserver\"), \"?UNCserver\"); t!(s: Path::from_str(\"?UNC\"), \"?UNC\"); t!(s: Path::from_str(\"?UNC\"), \"?UNC\"); // I'm not sure whether .foo/bar should normalize to .foobar // as information is sparse and this isn't really googleable. // I'm going to err on the side of not normalizing it, as this skips the filesystem t!(s: Path::from_str(\".foo/bar\"), \".foo/bar\"); t!(s: Path::from_str(\".foobar\"), \".foobar\"); } #[test] fn test_null_byte() { use path2::null_byte::cond; let mut handled = false; let mut p = do cond.trap(|v| { handled = true; assert_eq!(v.as_slice(), b!(\"foobar\", 0)); (b!(\"bar\").to_owned()) }).inside { Path::new(b!(\"foobar\", 0)) }; assert!(handled); assert_eq!(p.as_vec(), b!(\"bar\")); handled = false; do cond.trap(|v| { handled = true; assert_eq!(v.as_slice(), b!(\"f\", 0, \"o\")); (b!(\"foo\").to_owned()) }).inside { p.set_filename(b!(\"f\", 0, \"o\")) }; assert!(handled); assert_eq!(p.as_vec(), b!(\"foo\")); handled = false; do cond.trap(|v| { handled = true; assert_eq!(v.as_slice(), b!(\"null\", 0, \"byte\")); (b!(\"nullbyte\").to_owned()) }).inside { p.set_dirname(b!(\"null\", 0, \"byte\")); }; assert!(handled); assert_eq!(p.as_vec(), b!(\"nullbytefoo\")); handled = false; do cond.trap(|v| { handled = true; assert_eq!(v.as_slice(), b!(\"f\", 0, \"o\")); (b!(\"foo\").to_owned()) }).inside { p.push(b!(\"f\", 0, \"o\")); }; assert!(handled); assert_eq!(p.as_vec(), b!(\"nullbytefoofoo\")); } #[test] fn test_null_byte_fail() { use path2::null_byte::cond; use task; macro_rules! 
t( ($name:expr => $code:block) => ( { let mut t = task::task(); t.supervised(); t.name($name); let res = do t.try $code; assert!(res.is_err()); } ) ) t!(~\"new() wnul\" => { do cond.trap(|_| { (b!(\"null\", 0).to_owned()) }).inside { Path::new(b!(\"foobar\", 0)) }; }) t!(~\"set_filename wnul\" => { let mut p = Path::new(b!(\"foobar\")); do cond.trap(|_| { (b!(\"null\", 0).to_owned()) }).inside { p.set_filename(b!(\"foo\", 0)) }; }) t!(~\"set_dirname wnul\" => { let mut p = Path::new(b!(\"foobar\")); do cond.trap(|_| { (b!(\"null\", 0).to_owned()) }).inside { p.set_dirname(b!(\"foo\", 0)) }; }) t!(~\"push wnul\" => { let mut p = Path::new(b!(\"foobar\")); do cond.trap(|_| { (b!(\"null\", 0).to_owned()) }).inside { p.push(b!(\"foo\", 0)) }; }) } #[test] #[should_fail] fn test_not_utf8_fail() { Path::new(b!(\"hello\", 0x80, \".txt\")); } #[test] fn test_components() { macro_rules! t( (s: $path:expr, $op:ident, $exp:expr) => ( { let path = Path::from_str($path); assert_eq!(path.$op(), Some($exp)); } ); (s: $path:expr, $op:ident, $exp:expr, opt) => ( { let path = Path::from_str($path); let left = path.$op(); assert_eq!(left, $exp); } ); (v: $path:expr, $op:ident, $exp:expr) => ( { let path = Path::new($path); assert_eq!(path.$op(), $exp); } ) ) t!(v: b!(\"abc\"), filename, b!(\"c\")); t!(s: \"abc\", filename_str, \"c\"); t!(s: \"abc\", filename_str, \"c\"); t!(s: \"a\", filename_str, \"a\"); t!(s: \"a\", filename_str, \"a\"); t!(s: \".\", filename_str, \"\"); t!(s: \"\", filename_str, \"\"); t!(s: \"..\", filename_str, \"\"); t!(s: \"....\", filename_str, \"\"); t!(s: \"c:foo.txt\", filename_str, \"foo.txt\"); t!(s: \"C:\", filename_str, \"\"); t!(s: \"C:\", filename_str, \"\"); t!(s: \"serversharefoo.txt\", filename_str, \"foo.txt\"); t!(s: \"servershare\", filename_str, \"\"); t!(s: \"server\", filename_str, \"server\"); t!(s: \"?barfoo.txt\", filename_str, \"foo.txt\"); t!(s: \"?bar\", filename_str, \"\"); t!(s: \"?\", filename_str, \"\"); t!(s: 
\"?UNCserversharefoo.txt\", filename_str, \"foo.txt\"); t!(s: \"?UNCserver\", filename_str, \"\"); t!(s: \"?UNC\", filename_str, \"\"); t!(s: \"?C:foo.txt\", filename_str, \"foo.txt\"); t!(s: \"?C:\", filename_str, \"\"); t!(s: \"?C:\", filename_str, \"\"); t!(s: \"?foo/bar\", filename_str, \"\"); t!(s: \"?C:/foo\", filename_str, \"\"); t!(s: \".foobar\", filename_str, \"bar\"); t!(s: \".foo\", filename_str, \"\"); t!(s: \".foo/bar\", filename_str, \"\"); t!(s: \".foobar/baz\", filename_str, \"bar/baz\"); t!(s: \".\", filename_str, \"\"); t!(s: \"?ab\", filename_str, \"b\"); t!(v: b!(\"abc\"), dirname, b!(\"ab\")); t!(s: \"abc\", dirname_str, \"ab\"); t!(s: \"abc\", dirname_str, \"ab\"); t!(s: \"a\", dirname_str, \".\"); t!(s: \"a\", dirname_str, \"\"); t!(s: \".\", dirname_str, \".\"); t!(s: \"\", dirname_str, \"\"); t!(s: \"..\", dirname_str, \"..\"); t!(s: \"....\", dirname_str, \"....\"); t!(s: \"c:foo.txt\", dirname_str, \"C:\"); t!(s: \"C:\", dirname_str, \"C:\"); t!(s: \"C:\", dirname_str, \"C:\"); t!(s: \"C:foo.txt\", dirname_str, \"C:\"); t!(s: \"serversharefoo.txt\", dirname_str, \"servershare\"); t!(s: \"servershare\", dirname_str, \"servershare\"); t!(s: \"server\", dirname_str, \"\"); t!(s: \"?barfoo.txt\", dirname_str, \"?bar\"); t!(s: \"?bar\", dirname_str, \"?bar\"); t!(s: \"?\", dirname_str, \"?\"); t!(s: \"?UNCserversharefoo.txt\", dirname_str, \"?UNCservershare\"); t!(s: \"?UNCserver\", dirname_str, \"?UNCserver\"); t!(s: \"?UNC\", dirname_str, \"?UNC\"); t!(s: \"?C:foo.txt\", dirname_str, \"?C:\"); t!(s: \"?C:\", dirname_str, \"?C:\"); t!(s: \"?C:\", dirname_str, \"?C:\"); t!(s: \"?C:/foo/bar\", dirname_str, \"?C:/foo/bar\"); t!(s: \"?foo/bar\", dirname_str, \"?foo/bar\"); t!(s: \".foobar\", dirname_str, \".foo\"); t!(s: \".foo\", dirname_str, \".foo\"); t!(s: \"?ab\", dirname_str, \"?a\"); t!(v: b!(\"hithere.txt\"), filestem, b!(\"there\")); t!(s: \"hithere.txt\", filestem_str, \"there\"); t!(s: \"hithere\", filestem_str, \"there\"); t!(s: 
\"there.txt\", filestem_str, \"there\"); t!(s: \"there\", filestem_str, \"there\"); t!(s: \".\", filestem_str, \"\"); t!(s: \"\", filestem_str, \"\"); t!(s: \"foo.bar\", filestem_str, \".bar\"); t!(s: \".bar\", filestem_str, \".bar\"); t!(s: \"..bar\", filestem_str, \".\"); t!(s: \"hithere..txt\", filestem_str, \"there.\"); t!(s: \"..\", filestem_str, \"\"); t!(s: \"....\", filestem_str, \"\"); // filestem is based on filename, so we don't need the full set of prefix tests t!(v: b!(\"hithere.txt\"), extension, Some(b!(\"txt\"))); t!(v: b!(\"hithere\"), extension, None); t!(s: \"hithere.txt\", extension_str, Some(\"txt\"), opt); t!(s: \"hithere\", extension_str, None, opt); t!(s: \"there.txt\", extension_str, Some(\"txt\"), opt); t!(s: \"there\", extension_str, None, opt); t!(s: \".\", extension_str, None, opt); t!(s: \"\", extension_str, None, opt); t!(s: \"foo.bar\", extension_str, None, opt); t!(s: \".bar\", extension_str, None, opt); t!(s: \"..bar\", extension_str, Some(\"bar\"), opt); t!(s: \"hithere..txt\", extension_str, Some(\"txt\"), opt); t!(s: \"..\", extension_str, None, opt); t!(s: \"....\", extension_str, None, opt); // extension is based on filename, so we don't need the full set of prefix tests } #[test] fn test_push() { macro_rules! t( (s: $path:expr, $join:expr) => ( { let path = ($path); let join = ($join); let mut p1 = Path::from_str(path); let p2 = p1.clone(); p1.push_str(join); assert_eq!(p1, p2.join_str(join)); } ) ) t!(s: \"abc\", \"..\"); t!(s: \"abc\", \"d\"); t!(s: \"ab\", \"cd\"); t!(s: \"ab\", \"cd\"); // this is just a sanity-check test. 
push_str and join_str share an implementation, // so there's no need for the full set of prefix tests // we do want to check one odd case though to ensure the prefix is re-parsed let mut p = Path::from_str(\"?C:\"); assert_eq!(p.prefix(), Some(VerbatimPrefix(2))); p.push_str(\"foo\"); assert_eq!(p.prefix(), Some(VerbatimDiskPrefix)); assert_eq!(p.as_str(), Some(\"?C:foo\")); // and another with verbatim non-normalized paths let mut p = Path::from_str(\"?C:a\"); p.push_str(\"foo\"); assert_eq!(p.as_str(), Some(\"?C:afoo\")); } #[test] fn test_push_path() { macro_rules! t( (s: $path:expr, $push:expr, $exp:expr) => ( { let mut p = Path::from_str($path); let push = Path::from_str($push); p.push_path(&push); assert_eq!(p.as_str(), Some($exp)); } ) ) t!(s: \"abc\", \"d\", \"abcd\"); t!(s: \"abc\", \"d\", \"abcd\"); t!(s: \"ab\", \"cd\", \"abcd\"); t!(s: \"ab\", \"cd\", \"cd\"); t!(s: \"ab\", \".\", \"ab\"); t!(s: \"ab\", \"..c\", \"ac\"); t!(s: \"ab\", \"C:a.txt\", \"C:a.txt\"); t!(s: \"ab\", \"......c\", \"..c\"); t!(s: \"ab\", \"C:a.txt\", \"C:a.txt\"); t!(s: \"C:a\", \"C:b.txt\", \"C:b.txt\"); t!(s: \"C:abc\", \"C:d\", \"C:abcd\"); t!(s: \"C:abc\", \"C:d\", \"C:abcd\"); t!(s: \"C:ab\", \"......c\", \"C:..c\"); t!(s: \"C:ab\", \"......c\", \"C:c\"); t!(s: \"serversharefoo\", \"bar\", \"serversharefoobar\"); t!(s: \"serversharefoo\", \"....bar\", \"serversharebar\"); t!(s: \"serversharefoo\", \"C:baz\", \"C:baz\"); t!(s: \"?C:ab\", \"C:cd\", \"?C:abcd\"); t!(s: \"?C:ab\", \"C:cd\", \"C:cd\"); t!(s: \"?C:ab\", \"C:cd\", \"C:cd\"); t!(s: \"?foobar\", \"baz\", \"?foobarbaz\"); t!(s: \"?C:ab\", \"......c\", \"?C:ab......c\"); t!(s: \"?foobar\", \"....c\", \"?foobar....c\"); t!(s: \"?\", \"foo\", \"?foo\"); t!(s: \"?UNCserversharefoo\", \"bar\", \"?UNCserversharefoobar\"); t!(s: \"?UNCservershare\", \"C:a\", \"C:a\"); t!(s: \"?UNCservershare\", \"C:a\", \"C:a\"); t!(s: \"?UNCserver\", \"foo\", \"?UNCserverfoo\"); t!(s: \"C:a\", \"?UNCservershare\", \"?UNCservershare\"); 
t!(s: \".foobar\", \"baz\", \".foobarbaz\"); t!(s: \".foobar\", \"C:a\", \"C:a\"); // again, not sure about the following, but I'm assuming . should be verbatim t!(s: \".foo\", \"..bar\", \".foo..bar\"); t!(s: \"?C:\", \"foo\", \"?C:foo\"); // this is a weird one } #[test] fn test_pop() { macro_rules! t( (s: $path:expr, $left:expr, $right:expr) => ( { let pstr = $path; let mut p = Path::from_str(pstr); let file = p.pop_opt_str(); let left = $left; assert!(p.as_str() == Some(left), \"`%s`.pop() failed; expected remainder `%s`, found `%s`\", pstr, left, p.as_str().unwrap()); let right = $right; let res = file.map(|s| s.as_slice()); assert!(res == right, \"`%s`.pop() failed; expected `%?`, found `%?`\", pstr, right, res); } ); (v: [$($path:expr),+], [$($left:expr),+], Some($($right:expr),+)) => ( { let mut p = Path::new(b!($($path),+)); let file = p.pop_opt(); assert_eq!(p.as_vec(), b!($($left),+)); assert_eq!(file.map(|v| v.as_slice()), Some(b!($($right),+))); } ); (v: [$($path:expr),+], [$($left:expr),+], None) => ( { let mut p = Path::new(b!($($path),+)); let file = p.pop_opt(); assert_eq!(p.as_vec(), b!($($left),+)); assert_eq!(file, None); } ) ) t!(s: \"abc\", \"ab\", Some(\"c\")); t!(s: \"a\", \".\", Some(\"a\")); t!(s: \".\", \".\", None); t!(s: \"a\", \"\", Some(\"a\")); t!(s: \"\", \"\", None); t!(v: [\"abc\"], [\"ab\"], Some(\"c\")); t!(v: [\"a\"], [\".\"], Some(\"a\")); t!(v: [\".\"], [\".\"], None); t!(v: [\"a\"], [\"\"], Some(\"a\")); t!(v: [\"\"], [\"\"], None); t!(s: \"C:ab\", \"C:a\", Some(\"b\")); t!(s: \"C:a\", \"C:\", Some(\"a\")); t!(s: \"C:\", \"C:\", None); t!(s: \"C:ab\", \"C:a\", Some(\"b\")); t!(s: \"C:a\", \"C:\", Some(\"a\")); t!(s: \"C:\", \"C:\", None); t!(s: \"servershareab\", \"serversharea\", Some(\"b\")); t!(s: \"serversharea\", \"servershare\", Some(\"a\")); t!(s: \"servershare\", \"servershare\", None); t!(s: \"?abc\", \"?ab\", Some(\"c\")); t!(s: \"?ab\", \"?a\", Some(\"b\")); t!(s: \"?a\", \"?a\", None); t!(s: \"?C:ab\", \"?C:a\", 
Some(\"b\")); t!(s: \"?C:a\", \"?C:\", Some(\"a\")); t!(s: \"?C:\", \"?C:\", None); t!(s: \"?UNCservershareab\", \"?UNCserversharea\", Some(\"b\")); t!(s: \"?UNCserversharea\", \"?UNCservershare\", Some(\"a\")); t!(s: \"?UNCservershare\", \"?UNCservershare\", None); t!(s: \".abc\", \".ab\", Some(\"c\")); t!(s: \".ab\", \".a\", Some(\"b\")); t!(s: \".a\", \".a\", None); t!(s: \"?ab\", \"?a\", Some(\"b\")); } #[test] fn test_join() { t!(s: Path::from_str(\"abc\").join_str(\"..\"), \"ab\"); t!(s: Path::from_str(\"abc\").join_str(\"d\"), \"abcd\"); t!(s: Path::from_str(\"ab\").join_str(\"cd\"), \"abcd\"); t!(s: Path::from_str(\"ab\").join_str(\"cd\"), \"cd\"); t!(s: Path::from_str(\".\").join_str(\"ab\"), \"ab\"); t!(s: Path::from_str(\"\").join_str(\"ab\"), \"ab\"); t!(v: Path::new(b!(\"abc\")).join(b!(\"..\")), b!(\"ab\")); t!(v: Path::new(b!(\"abc\")).join(b!(\"d\")), b!(\"abcd\")); // full join testing is covered under test_push_path, so no need for // the full set of prefix tests } #[test] fn test_join_path() { macro_rules! t( (s: $path:expr, $join:expr, $exp:expr) => ( { let path = Path::from_str($path); let join = Path::from_str($join); let res = path.join_path(&join); assert_eq!(res.as_str(), Some($exp)); } ) ) t!(s: \"abc\", \"..\", \"ab\"); t!(s: \"abc\", \"d\", \"abcd\"); t!(s: \"ab\", \"cd\", \"abcd\"); t!(s: \"ab\", \"cd\", \"cd\"); t!(s: \".\", \"ab\", \"ab\"); t!(s: \"\", \"ab\", \"ab\"); // join_path is implemented using push_path, so there's no need for // the full set of prefix tests } #[test] fn test_with_helpers() { macro_rules! 
t( (s: $path:expr, $op:ident, $arg:expr, $res:expr) => ( { let pstr = $path; let path = Path::from_str(pstr); let arg = $arg; let res = path.$op(arg); let exp = $res; assert!(res.as_str() == Some(exp), \"`%s`.%s(\"%s\"): Expected `%s`, found `%s`\", pstr, stringify!($op), arg, exp, res.as_str().unwrap()); } ) ) t!(s: \"abc\", with_dirname_str, \"d\", \"dc\"); t!(s: \"abc\", with_dirname_str, \"de\", \"dec\"); t!(s: \"abc\", with_dirname_str, \"\", \"c\"); t!(s: \"abc\", with_dirname_str, \"\", \"c\"); t!(s: \"abc\", with_dirname_str, \"/\", \"c\"); t!(s: \"abc\", with_dirname_str, \".\", \"c\"); t!(s: \"abc\", with_dirname_str, \"..\", \"..c\"); t!(s: \"\", with_dirname_str, \"foo\", \"foo\"); t!(s: \"\", with_dirname_str, \"\", \".\"); t!(s: \"foo\", with_dirname_str, \"bar\", \"barfoo\"); t!(s: \"..\", with_dirname_str, \"foo\", \"foo\"); t!(s: \"....\", with_dirname_str, \"foo\", \"foo\"); t!(s: \"..\", with_dirname_str, \"\", \".\"); t!(s: \"....\", with_dirname_str, \"\", \".\"); t!(s: \".\", with_dirname_str, \"foo\", \"foo\"); t!(s: \"foo\", with_dirname_str, \"..\", \"..foo\"); t!(s: \"foo\", with_dirname_str, \"....\", \"....foo\"); t!(s: \"C:ab\", with_dirname_str, \"foo\", \"foob\"); t!(s: \"foo\", with_dirname_str, \"C:ab\", \"C:abfoo\"); t!(s: \"C:ab\", with_dirname_str, \"servershare\", \"servershareb\"); t!(s: \"a\", with_dirname_str, \"servershare\", \"serversharea\"); t!(s: \"ab\", with_dirname_str, \"?\", \"?b\"); t!(s: \"ab\", with_dirname_str, \"C:\", \"C:b\"); t!(s: \"ab\", with_dirname_str, \"C:\", \"C:b\"); t!(s: \"ab\", with_dirname_str, \"C:/\", \"C:b\"); t!(s: \"C:\", with_dirname_str, \"foo\", \"foo\"); t!(s: \"C:\", with_dirname_str, \"foo\", \"foo\"); t!(s: \".\", with_dirname_str, \"C:\", \"C:\"); t!(s: \".\", with_dirname_str, \"C:/\", \"C:\"); t!(s: \"?C:foo\", with_dirname_str, \"C:\", \"C:foo\"); t!(s: \"?C:\", with_dirname_str, \"bar\", \"bar\"); t!(s: \"foobar\", with_dirname_str, \"?C:baz\", \"?C:bazbar\"); t!(s: \"?foo\", 
with_dirname_str, \"C:bar\", \"C:bar\"); t!(s: \"?afoo\", with_dirname_str, \"C:bar\", \"C:barfoo\"); t!(s: \"?afoo/bar\", with_dirname_str, \"C:baz\", \"C:bazfoobar\"); t!(s: \"?UNCserversharebaz\", with_dirname_str, \"a\", \"abaz\"); t!(s: \"foobar\", with_dirname_str, \"?UNCserversharebaz\", \"?UNCserversharebazbar\"); t!(s: \".foo\", with_dirname_str, \"bar\", \"bar\"); t!(s: \".foobar\", with_dirname_str, \"baz\", \"bazbar\"); t!(s: \".foobar\", with_dirname_str, \"baz\", \"bazbar\"); t!(s: \".foobar\", with_dirname_str, \"baz/\", \"bazbar\"); t!(s: \"abc\", with_filename_str, \"d\", \"abd\"); t!(s: \".\", with_filename_str, \"foo\", \"foo\"); t!(s: \"abc\", with_filename_str, \"d\", \"abd\"); t!(s: \"\", with_filename_str, \"foo\", \"foo\"); t!(s: \"a\", with_filename_str, \"foo\", \"foo\"); t!(s: \"foo\", with_filename_str, \"bar\", \"bar\"); t!(s: \"\", with_filename_str, \"foo\", \"foo\"); t!(s: \"a\", with_filename_str, \"foo\", \"foo\"); t!(s: \"abc\", with_filename_str, \"\", \"ab\"); t!(s: \"abc\", with_filename_str, \".\", \"ab\"); t!(s: \"abc\", with_filename_str, \"..\", \"a\"); t!(s: \"a\", with_filename_str, \"\", \"\"); t!(s: \"foo\", with_filename_str, \"\", \".\"); t!(s: \"abc\", with_filename_str, \"de\", \"abde\"); t!(s: \"abc\", with_filename_str, \"d\", \"abd\"); t!(s: \"..\", with_filename_str, \"foo\", \"..foo\"); t!(s: \"....\", with_filename_str, \"foo\", \"....foo\"); t!(s: \"..\", with_filename_str, \"\", \"..\"); t!(s: \"....\", with_filename_str, \"\", \"....\"); t!(s: \"C:foobar\", with_filename_str, \"baz\", \"C:foobaz\"); t!(s: \"C:foo\", with_filename_str, \"bar\", \"C:bar\"); t!(s: \"C:\", with_filename_str, \"foo\", \"C:foo\"); t!(s: \"C:foobar\", with_filename_str, \"baz\", \"C:foobaz\"); t!(s: \"C:foo\", with_filename_str, \"bar\", \"C:bar\"); t!(s: \"C:\", with_filename_str, \"foo\", \"C:foo\"); t!(s: \"C:foo\", with_filename_str, \"\", \"C:\"); t!(s: \"C:foo\", with_filename_str, \"\", \"C:\"); t!(s: \"C:foobar\", 
with_filename_str, \"..\", \"C:\"); t!(s: \"C:foo\", with_filename_str, \"..\", \"C:\"); t!(s: \"C:\", with_filename_str, \"..\", \"C:\"); t!(s: \"C:foobar\", with_filename_str, \"..\", \"C:\"); t!(s: \"C:foo\", with_filename_str, \"..\", \"C:..\"); t!(s: \"C:\", with_filename_str, \"..\", \"C:..\"); t!(s: \"serversharefoo\", with_filename_str, \"bar\", \"serversharebar\"); t!(s: \"servershare\", with_filename_str, \"foo\", \"serversharefoo\"); t!(s: \"serversharefoo\", with_filename_str, \"\", \"servershare\"); t!(s: \"servershare\", with_filename_str, \"\", \"servershare\"); t!(s: \"serversharefoo\", with_filename_str, \"..\", \"servershare\"); t!(s: \"servershare\", with_filename_str, \"..\", \"servershare\"); t!(s: \"?C:foobar\", with_filename_str, \"baz\", \"?C:foobaz\"); t!(s: \"?C:foo\", with_filename_str, \"bar\", \"?C:bar\"); t!(s: \"?C:\", with_filename_str, \"foo\", \"?C:foo\"); t!(s: \"?C:foo\", with_filename_str, \"..\", \"?C:..\"); t!(s: \"?foobar\", with_filename_str, \"baz\", \"?foobaz\"); t!(s: \"?foo\", with_filename_str, \"bar\", \"?foobar\"); t!(s: \"?\", with_filename_str, \"foo\", \"?foo\"); t!(s: \"?foobar\", with_filename_str, \"..\", \"?foo..\"); t!(s: \".foobar\", with_filename_str, \"baz\", \".foobaz\"); t!(s: \".foo\", with_filename_str, \"bar\", \".foobar\"); t!(s: \".foobar\", with_filename_str, \"..\", \".foo..\"); t!(s: \"hithere.txt\", with_filestem_str, \"here\", \"hihere.txt\"); t!(s: \"hithere.txt\", with_filestem_str, \"\", \"hi.txt\"); t!(s: \"hithere.txt\", with_filestem_str, \".\", \"hi..txt\"); t!(s: \"hithere.txt\", with_filestem_str, \"..\", \"hi...txt\"); t!(s: \"hithere.txt\", with_filestem_str, \"\", \"hi.txt\"); t!(s: \"hithere.txt\", with_filestem_str, \"foobar\", \"hifoobar.txt\"); t!(s: \"hithere.foo.txt\", with_filestem_str, \"here\", \"hihere.txt\"); t!(s: \"hithere\", with_filestem_str, \"here\", \"hihere\"); t!(s: \"hithere\", with_filestem_str, \"\", \"hi\"); t!(s: \"hi\", with_filestem_str, \"\", \".\"); t!(s: 
\"hi\", with_filestem_str, \"\", \"\"); t!(s: \"hithere\", with_filestem_str, \"..\", \".\"); t!(s: \"hithere\", with_filestem_str, \".\", \"hi\"); t!(s: \"hithere.\", with_filestem_str, \"foo\", \"hifoo.\"); t!(s: \"hithere.\", with_filestem_str, \"\", \"hi\"); t!(s: \"hithere.\", with_filestem_str, \".\", \".\"); t!(s: \"hithere.\", with_filestem_str, \"..\", \"hi...\"); t!(s: \"\", with_filestem_str, \"foo\", \"foo\"); t!(s: \".\", with_filestem_str, \"foo\", \"foo\"); t!(s: \"hithere..\", with_filestem_str, \"here\", \"hihere.\"); t!(s: \"hithere..\", with_filestem_str, \"\", \"hi\"); // filestem setter calls filename setter internally, no need for extended tests t!(s: \"hithere.txt\", with_extension_str, \"exe\", \"hithere.exe\"); t!(s: \"hithere.txt\", with_extension_str, \"\", \"hithere\"); t!(s: \"hithere.txt\", with_extension_str, \".\", \"hithere..\"); t!(s: \"hithere.txt\", with_extension_str, \"..\", \"hithere...\"); t!(s: \"hithere\", with_extension_str, \"txt\", \"hithere.txt\"); t!(s: \"hithere\", with_extension_str, \".\", \"hithere..\"); t!(s: \"hithere\", with_extension_str, \"..\", \"hithere...\"); t!(s: \"hithere.\", with_extension_str, \"txt\", \"hithere.txt\"); t!(s: \"hi.foo\", with_extension_str, \"txt\", \"hi.foo.txt\"); t!(s: \"hithere.txt\", with_extension_str, \".foo\", \"hithere..foo\"); t!(s: \"\", with_extension_str, \"txt\", \"\"); t!(s: \"\", with_extension_str, \".\", \"\"); t!(s: \"\", with_extension_str, \"..\", \"\"); t!(s: \".\", with_extension_str, \"txt\", \".\"); // extension setter calls filename setter internally, no need for extended tests } #[test] fn test_setters() { macro_rules! 
t( (s: $path:expr, $set:ident, $with:ident, $arg:expr) => ( { let path = $path; let arg = $arg; let mut p1 = Path::from_str(path); p1.$set(arg); let p2 = Path::from_str(path); assert_eq!(p1, p2.$with(arg)); } ); (v: $path:expr, $set:ident, $with:ident, $arg:expr) => ( { let path = $path; let arg = $arg; let mut p1 = Path::new(path); p1.$set(arg); let p2 = Path::new(path); assert_eq!(p1, p2.$with(arg)); } ) ) t!(v: b!(\"abc\"), set_dirname, with_dirname, b!(\"d\")); t!(v: b!(\"abc\"), set_dirname, with_dirname, b!(\"de\")); t!(s: \"abc\", set_dirname_str, with_dirname_str, \"d\"); t!(s: \"abc\", set_dirname_str, with_dirname_str, \"de\"); t!(s: \"\", set_dirname_str, with_dirname_str, \"foo\"); t!(s: \"foo\", set_dirname_str, with_dirname_str, \"bar\"); t!(s: \"abc\", set_dirname_str, with_dirname_str, \"\"); t!(s: \"....\", set_dirname_str, with_dirname_str, \"x\"); t!(s: \"foo\", set_dirname_str, with_dirname_str, \"....\"); t!(v: b!(\"abc\"), set_filename, with_filename, b!(\"d\")); t!(v: b!(\"\"), set_filename, with_filename, b!(\"foo\")); t!(s: \"abc\", set_filename_str, with_filename_str, \"d\"); t!(s: \"\", set_filename_str, with_filename_str, \"foo\"); t!(s: \".\", set_filename_str, with_filename_str, \"foo\"); t!(s: \"ab\", set_filename_str, with_filename_str, \"\"); t!(s: \"a\", set_filename_str, with_filename_str, \"\"); t!(v: b!(\"hithere.txt\"), set_filestem, with_filestem, b!(\"here\")); t!(s: \"hithere.txt\", set_filestem_str, with_filestem_str, \"here\"); t!(s: \"hithere.\", set_filestem_str, with_filestem_str, \"here\"); t!(s: \"hithere\", set_filestem_str, with_filestem_str, \"here\"); t!(s: \"hithere.txt\", set_filestem_str, with_filestem_str, \"\"); t!(s: \"hithere\", set_filestem_str, with_filestem_str, \"\"); t!(v: b!(\"hithere.txt\"), set_extension, with_extension, b!(\"exe\")); t!(s: \"hithere.txt\", set_extension_str, with_extension_str, \"exe\"); t!(s: \"hithere.\", set_extension_str, with_extension_str, \"txt\"); t!(s: \"hithere\", 
set_extension_str, with_extension_str, \"txt\"); t!(s: \"hithere.txt\", set_extension_str, with_extension_str, \"\"); t!(s: \"hithere\", set_extension_str, with_extension_str, \"\"); t!(s: \".\", set_extension_str, with_extension_str, \"txt\"); // with_ helpers use the setter internally, so the tests for the with_ helpers // will suffice. No need for the full set of prefix tests. } #[test] fn test_getters() { macro_rules! t( (s: $path:expr, $filename:expr, $dirname:expr, $filestem:expr, $ext:expr) => ( { let path = $path; assert_eq!(path.filename_str(), $filename); assert_eq!(path.dirname_str(), $dirname); assert_eq!(path.filestem_str(), $filestem); assert_eq!(path.extension_str(), $ext); } ); (v: $path:expr, $filename:expr, $dirname:expr, $filestem:expr, $ext:expr) => ( { let path = $path; assert_eq!(path.filename(), $filename); assert_eq!(path.dirname(), $dirname); assert_eq!(path.filestem(), $filestem); assert_eq!(path.extension(), $ext); } ) ) t!(v: Path::new(b!(\"abc\")), b!(\"c\"), b!(\"ab\"), b!(\"c\"), None); t!(s: Path::from_str(\"abc\"), Some(\"c\"), Some(\"ab\"), Some(\"c\"), None); t!(s: Path::from_str(\".\"), Some(\"\"), Some(\".\"), Some(\"\"), None); t!(s: Path::from_str(\"\"), Some(\"\"), Some(\"\"), Some(\"\"), None); t!(s: Path::from_str(\"..\"), Some(\"\"), Some(\"..\"), Some(\"\"), None); t!(s: Path::from_str(\"....\"), Some(\"\"), Some(\"....\"), Some(\"\"), None); t!(s: Path::from_str(\"hithere.txt\"), Some(\"there.txt\"), Some(\"hi\"), Some(\"there\"), Some(\"txt\")); t!(s: Path::from_str(\"hithere\"), Some(\"there\"), Some(\"hi\"), Some(\"there\"), None); t!(s: Path::from_str(\"hithere.\"), Some(\"there.\"), Some(\"hi\"), Some(\"there\"), Some(\"\")); t!(s: Path::from_str(\"hi.there\"), Some(\".there\"), Some(\"hi\"), Some(\".there\"), None); t!(s: Path::from_str(\"hi..there\"), Some(\"..there\"), Some(\"hi\"), Some(\".\"), Some(\"there\")); // these are already tested in test_components, so no need for extended tests } #[test] fn 
test_dir_file_path() { t!(s: Path::from_str(\"hithere\").dir_path(), \"hi\"); t!(s: Path::from_str(\"hi\").dir_path(), \".\"); t!(s: Path::from_str(\"hi\").dir_path(), \"\"); t!(s: Path::from_str(\"\").dir_path(), \"\"); t!(s: Path::from_str(\"..\").dir_path(), \"..\"); t!(s: Path::from_str(\"....\").dir_path(), \"....\"); macro_rules! t( ($path:expr, $exp:expr) => ( { let path = $path; let left = path.and_then_ref(|p| p.as_str()); assert_eq!(left, $exp); } ); ) t!(Path::from_str(\"hithere\").file_path(), Some(\"there\")); t!(Path::from_str(\"hi\").file_path(), Some(\"hi\")); t!(Path::from_str(\".\").file_path(), None); t!(Path::from_str(\"\").file_path(), None); t!(Path::from_str(\"..\").file_path(), None); t!(Path::from_str(\"....\").file_path(), None); // dir_path and file_path are just dirname and filename interpreted as paths. // No need for extended tests } #[test] fn test_is_absolute() { macro_rules! t( ($path:expr, $abs:expr, $vol:expr, $cwd:expr) => ( { let path = Path::from_str($path); let (abs, vol, cwd) = ($abs, $vol, $cwd); let b = path.is_absolute(); assert!(b == abs, \"Path '%s'.is_absolute(): expected %?, found %?\", path.as_str().unwrap(), abs, b); let b = path.is_vol_relative(); assert!(b == vol, \"Path '%s'.is_vol_relative(): expected %?, found %?\", path.as_str().unwrap(), vol, b); let b = path.is_cwd_relative(); assert!(b == cwd, \"Path '%s'.is_cwd_relative(): expected %?, found %?\", path.as_str().unwrap(), cwd, b); } ) ) t!(\"abc\", false, false, false); t!(\"abc\", false, true, false); t!(\"a\", false, false, false); t!(\"a\", false, true, false); t!(\".\", false, false, false); t!(\"\", false, true, false); t!(\"..\", false, false, false); t!(\"....\", false, false, false); t!(\"C:ab.txt\", false, false, true); t!(\"C:ab.txt\", true, false, false); t!(\"servershareab.txt\", true, false, false); t!(\"?abc.txt\", true, false, false); t!(\"?C:ab.txt\", true, false, false); t!(\"?C:ab.txt\", true, false, false); // NB: not equivalent to 
C:ab.txt t!(\"?UNCservershareab.txt\", true, false, false); t!(\".ab\", true, false, false); } #[test] fn test_is_ancestor_of() { macro_rules! t( (s: $path:expr, $dest:expr, $exp:expr) => ( { let path = Path::from_str($path); let dest = Path::from_str($dest); let exp = $exp; let res = path.is_ancestor_of(&dest); assert!(res == exp, \"`%s`.is_ancestor_of(`%s`): Expected %?, found %?\", path.as_str().unwrap(), dest.as_str().unwrap(), exp, res); } ) ) t!(s: \"abc\", \"abcd\", true); t!(s: \"abc\", \"abc\", true); t!(s: \"abc\", \"ab\", false); t!(s: \"abc\", \"abc\", true); t!(s: \"ab\", \"abc\", true); t!(s: \"abcd\", \"abc\", false); t!(s: \"ab\", \"abc\", false); t!(s: \"ab\", \"abc\", false); t!(s: \"abc\", \"abd\", false); t!(s: \"..abc\", \"abc\", false); t!(s: \"abc\", \"..abc\", false); t!(s: \"abc\", \"abcd\", false); t!(s: \"abcd\", \"abc\", false); t!(s: \"..ab\", \"..abc\", true); t!(s: \".\", \"ab\", true); t!(s: \".\", \".\", true); t!(s: \"\", \"\", true); t!(s: \"\", \"ab\", true); t!(s: \"..\", \"ab\", true); t!(s: \"....\", \"ab\", true); t!(s: \"foobar\", \"foobar\", false); t!(s: \"foobar\", \"foobar\", false); t!(s: \"foo\", \"C:foo\", false); t!(s: \"C:foo\", \"foo\", false); t!(s: \"C:foo\", \"C:foobar\", true); t!(s: \"C:foobar\", \"C:foo\", false); t!(s: \"C:foo\", \"C:foobar\", true); t!(s: \"C:\", \"C:\", true); t!(s: \"C:\", \"C:\", false); t!(s: \"C:\", \"C:\", false); t!(s: \"C:\", \"C:\", true); t!(s: \"C:foobar\", \"C:foo\", false); t!(s: \"C:foobar\", \"C:foo\", false); t!(s: \"C:foo\", \"foo\", false); t!(s: \"foo\", \"C:foo\", false); t!(s: \"serversharefoo\", \"serversharefoobar\", true); t!(s: \"servershare\", \"serversharefoo\", true); t!(s: \"serversharefoo\", \"servershare\", false); t!(s: \"C:foo\", \"serversharefoo\", false); t!(s: \"serversharefoo\", \"C:foo\", false); t!(s: \"?foobar\", \"?foobarbaz\", true); t!(s: \"?foobarbaz\", \"?foobar\", false); t!(s: \"?foobar\", \"foobarbaz\", false); t!(s: \"foobar\", 
\"?foobarbaz\", false); t!(s: \"?C:foobar\", \"?C:foobarbaz\", true); t!(s: \"?C:foobarbaz\", \"?C:foobar\", false); t!(s: \"?C:\", \"?C:foo\", true); t!(s: \"?C:\", \"?C:\", false); // this is a weird one t!(s: \"?C:\", \"?C:\", false); t!(s: \"?C:a\", \"?c:ab\", true); t!(s: \"?c:a\", \"?C:ab\", true); t!(s: \"?C:a\", \"?D:ab\", false); t!(s: \"?foo\", \"?foobar\", false); t!(s: \"?ab\", \"?abc\", true); t!(s: \"?ab\", \"?ab\", true); t!(s: \"?ab\", \"?ab\", true); t!(s: \"?abc\", \"?ab\", false); t!(s: \"?abc\", \"?ab\", false); t!(s: \"?UNCabc\", \"?UNCabcd\", true); t!(s: \"?UNCabcd\", \"?UNCabc\", false); t!(s: \"?UNCab\", \"?UNCabc\", true); t!(s: \".foobar\", \".foobarbaz\", true); t!(s: \".foobarbaz\", \".foobar\", false); t!(s: \".foo\", \".foobar\", true); t!(s: \".foo\", \".foobar\", false); t!(s: \"ab\", \"?ab\", false); t!(s: \"?ab\", \"ab\", false); t!(s: \"ab\", \"?C:ab\", false); t!(s: \"?C:ab\", \"ab\", false); t!(s: \"Z:ab\", \"?z:ab\", true); t!(s: \"C:ab\", \"?D:ab\", false); t!(s: \"ab\", \"?ab\", false); t!(s: \"?ab\", \"ab\", false); t!(s: \"C:ab\", \"?C:ab\", true); t!(s: \"?C:ab\", \"C:ab\", true); t!(s: \"C:ab\", \"?C:ab\", false); t!(s: \"C:ab\", \"?C:ab\", false); t!(s: \"?C:ab\", \"C:ab\", false); t!(s: \"?C:ab\", \"C:ab\", false); t!(s: \"C:ab\", \"?C:ab\", true); t!(s: \"?C:ab\", \"C:ab\", true); t!(s: \"abc\", \"?UNCabc\", true); t!(s: \"?UNCabc\", \"abc\", true); } #[test] fn test_path_relative_from() { macro_rules! 
t( (s: $path:expr, $other:expr, $exp:expr) => ( { let path = Path::from_str($path); let other = Path::from_str($other); let res = path.path_relative_from(&other); let exp = $exp; assert!(res.and_then_ref(|x| x.as_str()) == exp, \"`%s`.path_relative_from(`%s`): Expected %?, got %?\", path.as_str().unwrap(), other.as_str().unwrap(), exp, res.and_then_ref(|x| x.as_str())); } ) ) t!(s: \"abc\", \"ab\", Some(\"c\")); t!(s: \"abc\", \"abd\", Some(\"..c\")); t!(s: \"abc\", \"abcd\", Some(\"..\")); t!(s: \"abc\", \"abc\", Some(\".\")); t!(s: \"abc\", \"abcde\", Some(\"....\")); t!(s: \"abc\", \"ade\", Some(\"....bc\")); t!(s: \"abc\", \"def\", Some(\"......abc\")); t!(s: \"abc\", \"abc\", None); t!(s: \"abc\", \"abc\", Some(\"abc\")); t!(s: \"abc\", \"abcd\", Some(\"..\")); t!(s: \"abc\", \"ab\", Some(\"c\")); t!(s: \"abc\", \"abcde\", Some(\"....\")); t!(s: \"abc\", \"ade\", Some(\"....bc\")); t!(s: \"abc\", \"def\", Some(\"......abc\")); t!(s: \"hithere.txt\", \"hithere\", Some(\"..there.txt\")); t!(s: \".\", \"a\", Some(\"..\")); t!(s: \".\", \"ab\", Some(\"....\")); t!(s: \".\", \".\", Some(\".\")); t!(s: \"a\", \".\", Some(\"a\")); t!(s: \"ab\", \".\", Some(\"ab\")); t!(s: \"..\", \".\", Some(\"..\")); t!(s: \"abc\", \"abc\", Some(\".\")); t!(s: \"abc\", \"abc\", Some(\".\")); t!(s: \"\", \"\", Some(\".\")); t!(s: \"\", \".\", Some(\"\")); t!(s: \"....a\", \"b\", Some(\"......a\")); t!(s: \"a\", \"....b\", None); t!(s: \"....a\", \"....b\", Some(\"..a\")); t!(s: \"....a\", \"....ab\", Some(\"..\")); t!(s: \"....ab\", \"....a\", Some(\"b\")); t!(s: \"C:abc\", \"C:ab\", Some(\"c\")); t!(s: \"C:ab\", \"C:abc\", Some(\"..\")); t!(s: \"C:\" ,\"C:ab\", Some(\"....\")); t!(s: \"C:ab\", \"C:cd\", Some(\"....ab\")); t!(s: \"C:ab\", \"D:cd\", Some(\"C:ab\")); t!(s: \"C:ab\", \"C:..c\", None); t!(s: \"C:..a\", \"C:bc\", Some(\"......a\")); t!(s: \"C:abc\", \"C:ab\", Some(\"c\")); t!(s: \"C:ab\", \"C:abc\", Some(\"..\")); t!(s: \"C:\", \"C:ab\", Some(\"....\")); t!(s: \"C:ab\", 
\"C:cd\", Some(\"....ab\")); t!(s: \"C:ab\", \"C:ab\", Some(\"C:ab\")); t!(s: \"C:ab\", \"C:ab\", None); t!(s: \"ab\", \"C:ab\", None); t!(s: \"ab\", \"C:ab\", None); t!(s: \"ab\", \"C:ab\", None); t!(s: \"ab\", \"C:ab\", None); t!(s: \"abc\", \"ab\", Some(\"c\")); t!(s: \"ab\", \"abc\", Some(\"..\")); t!(s: \"abce\", \"abcd\", Some(\"..e\")); t!(s: \"acd\", \"abd\", Some(\"acd\")); t!(s: \"bcd\", \"acd\", Some(\"bcd\")); t!(s: \"abc\", \"de\", Some(\"abc\")); t!(s: \"de\", \"abc\", None); t!(s: \"de\", \"abc\", None); t!(s: \"C:abc\", \"abc\", Some(\"C:abc\")); t!(s: \"C:c\", \"abc\", Some(\"C:c\")); t!(s: \"?ab\", \"ab\", Some(\"?ab\")); t!(s: \"?ab\", \"ab\", Some(\"?ab\")); t!(s: \"?ab\", \"b\", Some(\"?ab\")); t!(s: \"?ab\", \"b\", Some(\"?ab\")); t!(s: \"?ab\", \"?abc\", Some(\"..\")); t!(s: \"?abc\", \"?ab\", Some(\"c\")); t!(s: \"?ab\", \"?cd\", Some(\"?ab\")); t!(s: \"?a\", \"?b\", Some(\"?a\")); t!(s: \"?C:ab\", \"?C:a\", Some(\"b\")); t!(s: \"?C:a\", \"?C:ab\", Some(\"..\")); t!(s: \"?C:a\", \"?C:b\", Some(\"..a\")); t!(s: \"?C:a\", \"?D:a\", Some(\"?C:a\")); t!(s: \"?C:ab\", \"?c:a\", Some(\"b\")); t!(s: \"?C:ab\", \"C:a\", Some(\"b\")); t!(s: \"?C:a\", \"C:ab\", Some(\"..\")); t!(s: \"C:ab\", \"?C:a\", Some(\"b\")); t!(s: \"C:a\", \"?C:ab\", Some(\"..\")); t!(s: \"?C:a\", \"D:a\", Some(\"?C:a\")); t!(s: \"?c:ab\", \"C:a\", Some(\"b\")); t!(s: \"?C:ab\", \"C:ab\", Some(\"?C:ab\")); t!(s: \"?C:a.b\", \"C:a\", Some(\"?C:a.b\")); t!(s: \"?C:ab/c\", \"C:a\", Some(\"?C:ab/c\")); t!(s: \"?C:a..b\", \"C:a\", Some(\"?C:a..b\")); t!(s: \"C:ab\", \"?C:ab\", None); t!(s: \"?C:a.b\", \"?C:a\", Some(\"?C:a.b\")); t!(s: \"?C:ab/c\", \"?C:a\", Some(\"?C:ab/c\")); t!(s: \"?C:a..b\", \"?C:a\", Some(\"?C:a..b\")); t!(s: \"?C:ab\", \"?C:a\", Some(\"b\")); t!(s: \"?C:.b\", \"?C:.\", Some(\"b\")); t!(s: \"C:b\", \"?C:.\", Some(\"..b\")); t!(s: \"?a.bc\", \"?a.b\", Some(\"c\")); t!(s: \"?abc\", \"?a.d\", Some(\"....bc\")); t!(s: \"?a..b\", \"?a..\", Some(\"b\")); t!(s: 
\"?ab..\", \"?ab\", Some(\"?ab..\")); t!(s: \"?abc\", \"?a..b\", Some(\"....bc\")); t!(s: \"?UNCabc\", \"?UNCab\", Some(\"c\")); t!(s: \"?UNCab\", \"?UNCabc\", Some(\"..\")); t!(s: \"?UNCabc\", \"?UNCacd\", Some(\"?UNCabc\")); t!(s: \"?UNCbcd\", \"?UNCacd\", Some(\"?UNCbcd\")); t!(s: \"?UNCabc\", \"?abc\", Some(\"?UNCabc\")); t!(s: \"?UNCabc\", \"?C:abc\", Some(\"?UNCabc\")); t!(s: \"?UNCabc/d\", \"?UNCab\", Some(\"?UNCabc/d\")); t!(s: \"?UNCab.\", \"?UNCab\", Some(\"?UNCab.\")); t!(s: \"?UNCab..\", \"?UNCab\", Some(\"?UNCab..\")); t!(s: \"?UNCabc\", \"ab\", Some(\"c\")); t!(s: \"?UNCab\", \"abc\", Some(\"..\")); t!(s: \"?UNCabc\", \"acd\", Some(\"?UNCabc\")); t!(s: \"?UNCbcd\", \"acd\", Some(\"?UNCbcd\")); t!(s: \"?UNCab.\", \"ab\", Some(\"?UNCab.\")); t!(s: \"?UNCabc/d\", \"ab\", Some(\"?UNCabc/d\")); t!(s: \"?UNCab..\", \"ab\", Some(\"?UNCab..\")); t!(s: \"abc\", \"?UNCab\", Some(\"c\")); t!(s: \"abc\", \"?UNCacd\", Some(\"abc\")); } #[test] fn test_component_iter() { macro_rules! t( (s: $path:expr, $exp:expr) => ( { let path = Path::from_str($path); let comps = path.component_iter().to_owned_vec(); let exp: &[&str] = $exp; assert_eq!(comps.as_slice(), exp); } ); (v: [$($arg:expr),+], $exp:expr) => ( { let path = Path::new(b!($($arg),+)); let comps = path.component_iter().to_owned_vec(); let exp: &[&str] = $exp; assert_eq!(comps.as_slice(), exp); } ) ) t!(v: [\"abc\"], [\"a\", \"b\", \"c\"]); t!(s: \"abc\", [\"a\", \"b\", \"c\"]); t!(s: \"abd\", [\"a\", \"b\", \"d\"]); t!(s: \"abcd\", [\"a\", \"b\", \"cd\"]); t!(s: \"abc\", [\"a\", \"b\", \"c\"]); t!(s: \"a\", [\"a\"]); t!(s: \"a\", [\"a\"]); t!(s: \"\", []); t!(s: \".\", [\".\"]); t!(s: \"..\", [\"..\"]); t!(s: \"....\", [\"..\", \"..\"]); t!(s: \"....foo\", [\"..\", \"..\", \"foo\"]); t!(s: \"C:foobar\", [\"foo\", \"bar\"]); t!(s: \"C:foo\", [\"foo\"]); t!(s: \"C:\", []); t!(s: \"C:foobar\", [\"foo\", \"bar\"]); t!(s: \"C:foo\", [\"foo\"]); t!(s: \"C:\", []); t!(s: \"serversharefoobar\", [\"foo\", \"bar\"]); 
t!(s: \"serversharefoo\", [\"foo\"]); t!(s: \"servershare\", []); t!(s: \"?foobarbaz\", [\"bar\", \"baz\"]); t!(s: \"?foobar\", [\"bar\"]); t!(s: \"?foo\", []); t!(s: \"?\", []); t!(s: \"?ab\", [\"b\"]); t!(s: \"?ab\", [\"b\"]); t!(s: \"?foobarbaz\", [\"bar\", \"\", \"baz\"]); t!(s: \"?C:foobar\", [\"foo\", \"bar\"]); t!(s: \"?C:foo\", [\"foo\"]); t!(s: \"?C:\", []); t!(s: \"?C:foo\", [\"foo\"]); t!(s: \"?UNCserversharefoobar\", [\"foo\", \"bar\"]); t!(s: \"?UNCserversharefoo\", [\"foo\"]); t!(s: \"?UNCservershare\", []); t!(s: \".foobarbaz\", [\"bar\", \"baz\"]); t!(s: \".foobar\", [\"bar\"]); t!(s: \".foo\", []); } } ", "positive_passages": [{"docid": "doc-en-rust-c0a929f18bcfc09b4140d3b23a5400af201e9bb77c0ec58f078f31dbf3455980", "text": "and all interfaces directly with the file-system should probably use rather than trying to coerce to handle these cases. (Similar issue to .)\nnominating feature-complete\nAccepted for backwards-compatible\nMy last referenced commit is a month old, but I'm still working on this issue (currently finishing up the support for Windows paths).\nThe issues here of dealing with filesystems that are not utf8 seem related to , at least tangentially.", "commid": "rust_issue_7225", "tokennum": 94}], "negative_passages": []} {"query_id": "q-en-rust-5cb8cde1af8ca1abd1d84eae7a66de58e76baaff688842d5cf1cc47024f0dd8d", "query": "- [rustdoc: Only look at blanket impls in `get_blanket_impls`][83681] - [Rework rustdoc const type][82873] [85667]: https://github.com/rust-lang/rust/pull/85667 [83386]: https://github.com/rust-lang/rust/pull/83386 [82771]: https://github.com/rust-lang/rust/pull/82771 [84147]: https://github.com/rust-lang/rust/pull/84147", "positive_passages": [{"docid": "doc-en-rust-56861bf654c3538806d7378297409a08d9df48c6a42df7b0eb2c277d290a5fb4", "text": "was re-stabilized in 1.53 after being backed out of 1.51 due to breakage, primarily in the lexical-core crate. 
This breakage has not been fixed, as far as I can tell, in a vast majority of crates: we're still seeing ~1300 regressions, the majority of which are lexical-core in the . There was no discussion on the re-stabilization PR as to whether this level of breakage is acceptable. Comments on the original breakage issue suggest that 0.7.5 was published with a fix, but older versions (0.4, 0.5, 0.6) did not receive similar treatment. The majority of the breakage is in crates locked at 0.7.4; the other versions do share ~400 different crates amongst them, so I think it'd be good to try and help get those series patched as well.\ncc Even with 0.7.5 out, it does look like it'll be some time before crates actually update lockfiles (if ever). We don't have a firm policy on whether cargo update being sufficient is \"enough\" that breakage is OK -- it's a new decision each time pretty much -- but this level of breakage is pretty bad. I guess there may not be much we can do -- IIRC a prelude trait for example adding BITS wouldn't help here, right? Those still introduce conflicts IIRC.\nThere\u2019s no way around the symbol collision. I think it\u2019s important that stabilize and I\u2019d like us to get out of the way on that. I can take the list of broken crates and start walking it. I can also work on a backport on the older minor series; the reason I did not do so last time is that I was unfamiliar with \u2019s dependents and didn\u2019t know whether the older series were still in use. What is the most complete course of action we can take to clear for stabilization? Should I yank the conflicting versions in each minor series? Alternatively, would it make sense, and is it even possible, to stabilize only in ?\nI wouldn't worry about filing pull requests with 1300+ crates bumping lock files (in part because those are unlikely to get merged quickly). 
I do think getting patch releases out for each series would already be a significant step forward.", "commid": "rust_issue_85667", "tokennum": 485}], "negative_passages": []} {"query_id": "q-en-rust-5cb8cde1af8ca1abd1d84eae7a66de58e76baaff688842d5cf1cc47024f0dd8d", "query": "- [rustdoc: Only look at blanket impls in `get_blanket_impls`][83681] - [Rework rustdoc const type][82873] [85667]: https://github.com/rust-lang/rust/pull/85667 [83386]: https://github.com/rust-lang/rust/pull/83386 [82771]: https://github.com/rust-lang/rust/pull/82771 [84147]: https://github.com/rust-lang/rust/pull/84147", "positive_passages": [{"docid": "doc-en-rust-853e13bc77576d7947d5a0dc1bae646536762bf976cbd7b83c8de29bfacbee31", "text": "I think yanking is likely not the right step to take at this time. In terms of stabilizing only with - no, that's not really possible. In theory we could stabilize with a different name to avoid all the breakage, but I doubt there's a great name...\nThe constant would have to be available everywhere, and stable everywhere, but one could think about adding a syntactic hack to interpret the syntax differently depending on the edition, similarly to how it was done for . That would have the desired effect of avoiding breakage on editions 2018 and before, but I'm not sure it's a good idea to double down on adding such hacks.\nAttaching it to an edition would not only require compiler changes, but also a lint, a suggestion for rustfix, documentation, etc.\nIndeed. Such a policy would be useful. If the decision is that that's fine, we might want to make crater update the lock files when something fails before it considers a crate broken. As another data point: also ignores the lock files and picks the latest matching version of everything.\nI assumed the edition route was an infeasible idea but hey might as well get a \u201cno\u201d in writing. I\u2019ll start cutting fixes for the minor series listed in the regression report this weekend.\nThank you! 
Let me know if I can help.\nI'm wondering if we should have a in (just like ) that will make Rustc consider everything that was stabilized in a later version as still unstable. could add it to the uploaded crate if it's not explicitly set, and it makes it easier for crates to track an explicit MSRV.\ndid you get a chance to release new patch releases on each of the channels to avoid this breakage for the lexical crates, at least? I believe that was the plan but I haven't seen updates on this issue at least :)\nHi; I am so sorry for the radio silence; I have had quite the month since then. Fortunately, Alex Huszagh already did the patch-release issuance back in April. versions , , , and all appear to have the necessary disambiguation in place. These versions also do not appear in the regression report from Bors. I've also run on each of these releases with and passed, so these tips ought to be correct when the constants stabilize. I hope day-before isn't too late and I apologize again for the delay.", "commid": "rust_issue_85667", "tokennum": 510}], "negative_passages": []} {"query_id": "q-en-rust-5cb8cde1af8ca1abd1d84eae7a66de58e76baaff688842d5cf1cc47024f0dd8d", "query": "- [rustdoc: Only look at blanket impls in `get_blanket_impls`][83681] - [Rework rustdoc const type][82873] [85667]: https://github.com/rust-lang/rust/pull/85667 [83386]: https://github.com/rust-lang/rust/pull/83386 [82771]: https://github.com/rust-lang/rust/pull/82771 [84147]: https://github.com/rust-lang/rust/pull/84147", "positive_passages": [{"docid": "doc-en-rust-b454ea6237523ff5cd281da779ee4af261cbb366d01a5ff0fc804512d12ccc07", "text": "If you're comfortable with a \"run to un-break your project\", then is ready.\nNo worries. We are planning to move ahead with a compat note indicating cargo update.\nJust an FYI: this was fixed ~2 months ago, with , , and . 
It means that any branch used in the wild should be able to be updated to a compatible version now.", "commid": "rust_issue_85667", "tokennum": 79}], "negative_passages": []} {"query_id": "q-en-rust-5ce3dbdd6620efe4974137b9fc2a3ed9f2c5e47626f5951d8705e312a8f6b911", "query": " // Out-of-line module is found on the filesystem if passed through a proc macro (issue #58818). // check-pass // aux-build:test-macros.rs #[macro_use] extern crate test_macros; mod outer { identity! { mod inner; } } fn main() {} ", "positive_passages": [{"docid": "doc-en-rust-79e2c770627c8cad0f02499a2dfc372d359970372318fd2b2f5fbacb68de3e2f", "text": "I have this crate layout: src/ parent/ where is an empty file and is: This compiles successfully and we end up with a module at backed by the file But if goes anywhere near a proc macro, it no longer works. where is this macro: The error is: so it is looking for a file corresponding to as though the were not contained inside of . I hit this while working on The same problem does not occur if is defined as a macro_rules macro. Mentioning who may know what is going wrong or whether there is a simple rustc fix.\nI never audited the module search code and don't know how it fits into the larger macro story. Even without macros it's , so I'd expect issues in macros as well. 
I'll investigate.\nStill the same behavior as of rustc 1.40.0-nightly ( 2019-10-01).\nIt's still in my queue, just not with a high enough priority :(\nThis was fixed by , I'll add a test.", "commid": "rust_issue_58818", "tokennum": 217}], "negative_passages": []} {"query_id": "q-en-rust-5cebaa722c8ddb42da922342169ca497276968610a7f32ae18ce844bab0f0ad8", "query": "} _ => { Err(if self.prev_token_kind == PrevTokenKind::DocComment { self.span_fatal_err(self.prev_span, Error::UselessDocComment) } else { self.expected_ident_found() }) self.span_fatal_err(self.prev_span, Error::UselessDocComment) } else { self.expected_ident_found() }) } } }", "positive_passages": [{"docid": "doc-en-rust-90bd225a208b9bb471a32eda4959929a256e6fff107b5af48e291c8d9be22cb1", "text": "root: yup-oauth2 - 360 detected crates which regressed due to this; cc ) -> Self { let ext = match &mut cx.ext { let ext = match &mut *cx.ext { ExtData::Some(ext) => ExtData::Some(*ext), ExtData::None(()) => ExtData::None(()), };", "positive_passages": [{"docid": "doc-en-rust-c4a5273a933e8a31db83bc0532f1121df06f4a5ea514257a527d1d7d6a098f8b", "text": "https://crater-\nI'll need to re-check later but this seems to bisect to somehow.\nThe UI test changes from that PR appear to back this up, as confusing as it is.\nI'm not sure what's going on in that crate, if it built successfully by mistake, or how that PR interacts with it, so let's cc the PR author and reviewer for validation.\nAfter , no longer implements and , because the context now contains a which doesn't implement those traits. So this is an issue with the standard library and not the crate being built. Repro: label -T-compiler +T-libs +S-has-mcve\nWG-prioritization assigning priority (). Noticing this on which says that was meant to be confined on nightly. Unsure about the fallout of this regression but for now marking as critical for tracking purposes. 
label -I-prioritize +P-critical\nRevert up:\nIsn't not unwind safe?\nThat line was by your PR, so while what you say is technically true, talking about what \"already happened\" when we are traveling back and forth through time via version control may be more confusing than helpful.\nThat line was already there. I the line below it though. Original PR:\nApologies, I guess I can't count numbers today? Hm, so, the full error of the regression is this: Note that it's trying to make a pointer implement . And for what it's worth, stable does indeed document is . Thus unfortunately that line you pointed to, as opposed to the one I thought you were referring to, is effectively just documenting that all are , and that will include regardless of whether it is is . And confusingly, , unlike .\nI have opened as an alternative.", "commid": "rust_issue_125193", "tokennum": 371}], "negative_passages": []} {"query_id": "q-en-rust-5e61bea41f23c737508ba5bc5029c04bf352c2d223e8e2763e198ebc244091db", "query": " error[E0599]: no method named `clone` found for struct `issue_69725::Struct` in the current scope --> $DIR/issue-69725.rs:7:32 | LL | let _ = Struct::::new().clone(); | ^^^^^ method not found in `issue_69725::Struct` | ::: $DIR/auxiliary/issue-69725.rs:2:1 | LL | pub struct Struct(A); | ------------------------ doesn't satisfy `issue_69725::Struct: std::clone::Clone` | = note: the method `clone` exists but the following trait bounds were not satisfied: `A: std::clone::Clone` which is required by `issue_69725::Struct: std::clone::Clone` error: aborting due to previous error For more information about this error, try `rustc --explain E0599`. ", "positive_passages": [{"docid": "doc-en-rust-1e91a33f7369d84f0bc42974a03df193d853988de2edb10da26581d11b8a406f", "text": "I tried code very similar to this: and it produces some type of type checking error when uncommenting the line beginning with . 
: Backtrace:\nThe error is likely due to the fact that the trait bounds on don't include for , though the compiler/type checker crashing is still likely a bug.\nThat code was introduced in , cc Submitted as a quick fix.\nI can't get it to work in the same crate, just with two crates, where the first one depends on the other: modify labels: -E-needs-mcve\nThanks I included a test in the PR.", "commid": "rust_issue_69725", "tokennum": 124}], "negative_passages": []} {"query_id": "q-en-rust-5e6ae733304d448c9cc31431d251ebfcf3d083b73e23fde25cc59a575da46781", "query": "} } } else { if matches!(def_kind, DefKind::AnonConst) && tcx.features().generic_const_exprs { let hir_id = tcx.local_def_id_to_hir_id(def_id); let parent_def_id = tcx.hir().get_parent_item(hir_id); if let Some(defaulted_param_def_id) = tcx.hir().opt_const_param_default_param_def_id(hir_id) { // In `generics_of` we set the generics' parent to be our parent's parent which means that // we lose out on the predicates of our actual parent if we dont return those predicates here. // (See comment in `generics_of` for more information on why the parent shenanigans is necessary) // // struct Foo::ASSOC }>(T) where T: Trait; // ^^^ ^^^^^^^^^^^^^^^^^^^^^^^ the def id we are calling // ^^^ explicit_predicates_of on // parent item we dont have set as the // parent of generics returned by `generics_of` // // In the above code we want the anon const to have predicates in its param env for `T: Trait` // and we would be calling `explicit_predicates_of(Foo)` here let parent_preds = tcx.explicit_predicates_of(parent_def_id); // If we dont filter out `ConstArgHasType` predicates then every single defaulted const parameter // will ICE because of #106994. FIXME(generic_const_exprs): remove this when a more general solution // to #106994 is implemented. 
let filtered_predicates = parent_preds .predicates .into_iter() .filter(|(pred, _)| { if let ty::ClauseKind::ConstArgHasType(ct, _) = pred.kind().skip_binder() { match ct.kind() { ty::ConstKind::Param(param_const) => { let defaulted_param_idx = tcx .generics_of(parent_def_id) .param_def_id_to_index[&defaulted_param_def_id.to_def_id()]; param_const.index < defaulted_param_idx } _ => bug!( \"`ConstArgHasType` in `predicates_of` that isn't a `Param` const\" ), if matches!(def_kind, DefKind::AnonConst) && tcx.features().generic_const_exprs && let Some(defaulted_param_def_id) = tcx.hir().opt_const_param_default_param_def_id(tcx.local_def_id_to_hir_id(def_id)) { // In `generics_of` we set the generics' parent to be our parent's parent which means that // we lose out on the predicates of our actual parent if we dont return those predicates here. // (See comment in `generics_of` for more information on why the parent shenanigans is necessary) // // struct Foo::ASSOC }>(T) where T: Trait; // ^^^ ^^^^^^^^^^^^^^^^^^^^^^^ the def id we are calling // ^^^ explicit_predicates_of on // parent item we dont have set as the // parent of generics returned by `generics_of` // // In the above code we want the anon const to have predicates in its param env for `T: Trait` // and we would be calling `explicit_predicates_of(Foo)` here let parent_def_id = tcx.local_parent(def_id); let parent_preds = tcx.explicit_predicates_of(parent_def_id); // If we dont filter out `ConstArgHasType` predicates then every single defaulted const parameter // will ICE because of #106994. FIXME(generic_const_exprs): remove this when a more general solution // to #106994 is implemented. 
let filtered_predicates = parent_preds .predicates .into_iter() .filter(|(pred, _)| { if let ty::ClauseKind::ConstArgHasType(ct, _) = pred.kind().skip_binder() { match ct.kind() { ty::ConstKind::Param(param_const) => { let defaulted_param_idx = tcx .generics_of(parent_def_id) .param_def_id_to_index[&defaulted_param_def_id.to_def_id()]; param_const.index < defaulted_param_idx } } else { true _ => bug!( \"`ConstArgHasType` in `predicates_of` that isn't a `Param` const\" ), } }) .cloned(); return GenericPredicates { parent: parent_preds.parent, predicates: { tcx.arena.alloc_from_iter(filtered_predicates) }, }; } let parent_def_kind = tcx.def_kind(parent_def_id); if matches!(parent_def_kind, DefKind::OpaqueTy) { // In `instantiate_identity` we inherit the predicates of our parent. // However, opaque types do not have a parent (see `gather_explicit_predicates_of`), which means // that we lose out on the predicates of our actual parent if we dont return those predicates here. // // // fn foo() -> impl Iterator::ASSOC }> > { todo!() } // ^^^^^^^^^^^^^^^^^^^ the def id we are calling // explicit_predicates_of on // // In the above code we want the anon const to have predicates in its param env for `T: Trait`. // However, the anon const cannot inherit predicates from its parent since it's opaque. // // To fix this, we call `explicit_predicates_of` directly on `foo`, the parent's parent. 
// In the above example this is `foo::{opaque#0}` or `impl Iterator` let parent_hir_id = tcx.local_def_id_to_hir_id(parent_def_id.def_id); // In the above example this is the function `foo` let item_def_id = tcx.hir().get_parent_item(parent_hir_id); // In the above code example we would be calling `explicit_predicates_of(foo)` here return tcx.explicit_predicates_of(item_def_id); } } else { true } }) .cloned(); return GenericPredicates { parent: parent_preds.parent, predicates: { tcx.arena.alloc_from_iter(filtered_predicates) }, }; } gather_explicit_predicates_of(tcx, def_id) }", "positive_passages": [{"docid": "doc-en-rust-577b2d31afb6271cb2335741b57a534d2c858d544dfcacd2268b43d5423d6d50", "text": " $DIR/unresolved-seg-after-ambiguous.rs:19:14 | LL | use self::a::E::in_exist; | ^ `E` is a struct, not a module warning: `E` is ambiguous --> $DIR/unresolved-seg-after-ambiguous.rs:19:14 | LL | use self::a::E::in_exist; | ^ ambiguous name | = warning: this was previously accepted by the compiler but is being phased out; it will become a hard error in a future release! = note: for more information, see issue #114095 = note: ambiguous because of multiple glob imports of a name in the same module note: `E` could refer to the struct imported here --> $DIR/unresolved-seg-after-ambiguous.rs:13:17 | LL | pub use self::c::*; | ^^^^^^^^^^ = help: consider adding an explicit import of `E` to disambiguate note: `E` could also refer to the struct imported here --> $DIR/unresolved-seg-after-ambiguous.rs:12:17 | LL | pub use self::d::*; | ^^^^^^^^^^ = help: consider adding an explicit import of `E` to disambiguate = note: `#[warn(ambiguous_glob_imports)]` on by default error: aborting due to 1 previous error; 1 warning emitted For more information about this error, try `rustc --explain E0432`. 
", "positive_passages": [{"docid": "doc-en-rust-ffa80db9e721c264afb46e4b02c27fd4cf817d4c7e2ec371cb0a73e24a929b37", "text": " $DIR/issue-42944.rs:9:9 | LL | B(()); //~ ERROR expected function, found struct `B` [E0423] | ^ constructor is not visible here due to private fields error[E0425]: cannot find function `B` in this scope --> $DIR/issue-42944.rs:15:9 | LL | B(()); //~ ERROR cannot find function `B` in this scope [E0425] | ^ not found in this scope help: possible candidate is found in another module, you can import it into scope | LL | use foo::B; | error: aborting due to 2 previous errors Some errors occurred: E0423, E0425. For more information about an error, try `rustc --explain E0423`. ", "positive_passages": [{"docid": "doc-en-rust-a807c1dab372d105219deafe7e918540761b044ffca4d9b7fabfa5c5e9c806db", "text": "For this code: You get the following error message: The suggestion may be excused to be not 100% perfect, but here it suggests to add an use statement despite that same item being imported already. What I want is this exactly: that it removes stuff from the suggestion list that is already being imported. Note that changing the glob import to a non glob one doesn't change anything.\ncc\nWhy is that even an error?\ntuple structs have private fields by default, you can't construct instances of B outside of the foo module.\noh ^^ maybe we should simply modify this error message then to say that the \"function\" is private?\nIt already says \"constructor is not visible here due to private fields\". That's okay, but maybe you might have meant another B from another module or something. 
However, the candidate offered is not any better as its the same one you are attempting to use.\nReopening to track \"removal of suggestion\" work for pub tuple structs with private fields.", "commid": "rust_issue_42944", "tokennum": 214}], "negative_passages": []} {"query_id": "q-en-rust-60a93986dc2148222b698b25c52a2ba21f61ee3bbc692b02fe4a08dc446997cd", "query": "= note: fields of packed structs are not properly aligned, and creating a misaligned reference is undefined behavior (even if that reference is never dereferenced) = help: copy the field contents to a local variable, or replace the reference with a raw pointer and use `read_unaligned`/`write_unaligned` (loads and stores via `*p` must be properly aligned even when using raw pointers) error: aborting due to 7 previous errors error: reference to packed field is unaligned --> $DIR/unaligned_references.rs:90:20 | LL | let _ref = &m1.1.a; | ^^^^^^^ | = warning: this was previously accepted by the compiler but is being phased out; it will become a hard error in a future release! = note: for more information, see issue #82523 = note: fields of packed structs are not properly aligned, and creating a misaligned reference is undefined behavior (even if that reference is never dereferenced) = help: copy the field contents to a local variable, or replace the reference with a raw pointer and use `read_unaligned`/`write_unaligned` (loads and stores via `*p` must be properly aligned even when using raw pointers) error: reference to packed field is unaligned --> $DIR/unaligned_references.rs:100:20 | LL | let _ref = &m2.1.a; | ^^^^^^^ | = warning: this was previously accepted by the compiler but is being phased out; it will become a hard error in a future release! 
= note: for more information, see issue #82523 = note: fields of packed structs are not properly aligned, and creating a misaligned reference is undefined behavior (even if that reference is never dereferenced) = help: copy the field contents to a local variable, or replace the reference with a raw pointer and use `read_unaligned`/`write_unaligned` (loads and stores via `*p` must be properly aligned even when using raw pointers) error: aborting due to 9 previous errors Future incompatibility report: Future breakage diagnostic: error: reference to packed field is unaligned", "positive_passages": [{"docid": "doc-en-rust-d380ca66a3c478ea98f8da700dbf73638e54c0836be0e8e46532c0dbb72351f3", "text": "I tried this code: I expected to see this happen: The reference is aligned Instead, this happened: : label I-unsound A-mir\nWe have a pass to add moves for drops in packed structs, but it doesn't fire in this case. That's because there's a bug - this computation cannot just take into account the innermost field, it needs to compute a minimum of some sort.\nis this a regression? What did you exactly expected the output to be? (I'd like to run a bisection). thanks.\nThis was right in . Getting a consistent reproduction of this is slightly difficult because the reference might happen to align correctly by accident. In other words, the fact that the example above prints a pointer that is not 2-aligned means that something is certainly wrong. However, if the address were to be 2-aligned that could either be because the bug is fixed or because we happened to get lucky. The consistent way to check this is by looking at . In 1.52.0 the MIR contains which is correct (the move ensures that the resulting place is aligned) while 1.53.0 contains Note that there is no move.\nWG-prioritization assigning priority (). label -I-prioritize +P-high E-needs-bisection\nI have run a bisection with on your . The bisection could not pinpoint a single commit, maybe it sinked into a rollup merge. 
But if it helps, the regression should be in and coming from one of these PRs. Do you see a possible culprit in any of those? (I don't have the context to.)\nYeah, it's , cc\nNot sure how would have an effect here, it should be a NOP for with . Looks like something else is amiss as well and that PR surfaced the bug? Where is the logic that moves packed structs to a new place before dropping implemented? That's odd, I would have expected this to be inside ... after all, calling on a packed struct also has to do this move. Why does MIR building adjust the caller to satisfy a requirement that the callee should be responsible for?\nthe pass is called or something like that.", "commid": "rust_issue_99838", "tokennum": 465}], "negative_passages": []} {"query_id": "q-en-rust-60a93986dc2148222b698b25c52a2ba21f61ee3bbc692b02fe4a08dc446997cd", "query": "= note: fields of packed structs are not properly aligned, and creating a misaligned reference is undefined behavior (even if that reference is never dereferenced) = help: copy the field contents to a local variable, or replace the reference with a raw pointer and use `read_unaligned`/`write_unaligned` (loads and stores via `*p` must be properly aligned even when using raw pointers) error: reference to packed field is unaligned --> $DIR/unaligned_references.rs:90:20 | LL | let _ref = &m1.1.a; | ^^^^^^^ | = warning: this was previously accepted by the compiler but is being phased out; it will become a hard error in a future release! 
= note: for more information, see issue #82523 = note: fields of packed structs are not properly aligned, and creating a misaligned reference is undefined behavior (even if that reference is never dereferenced) = help: copy the field contents to a local variable, or replace the reference with a raw pointer and use `read_unaligned`/`write_unaligned` (loads and stores via `*p` must be properly aligned even when using raw pointers) error: reference to packed field is unaligned --> $DIR/unaligned_references.rs:100:20 | LL | let _ref = &m2.1.a; | ^^^^^^^ | = warning: this was previously accepted by the compiler but is being phased out; it will become a hard error in a future release! = note: for more information, see issue #82523 = note: fields of packed structs are not properly aligned, and creating a misaligned reference is undefined behavior (even if that reference is never dereferenced) = help: copy the field contents to a local variable, or replace the reference with a raw pointer and use `read_unaligned`/`write_unaligned` (loads and stores via `*p` must be properly aligned even when using raw pointers) error: aborting due to 9 previous errors Future incompatibility report: Future breakage diagnostic: error: reference to packed field is unaligned", "positive_passages": [{"docid": "doc-en-rust-68d7d02aab47f0ccb4315d63eeee38030b338c848906e6471b100c61288b127b", "text": "And this can't take place inside the callee because the callee receives a , and if that reference is unaligned then UB has already occurred and there's nothing the callee can do to fix things anymore\ntakes a raw pointer, so it could very well be the place where the packed struct alignment trouble is handled. And IMO it'd be a much more natural place than doing this by transforming caller MIR.\nI don't understand the suggestion. The example above never calls for a packed struct. The offending drop in the MIR in the example above is . 
The type of that place is - but if were to be responsible for moving, then we would need to emit a move in literally every drop shim for a non-trivial alignment type.\nOh I see, I didn't quite read the example correctly. The problematic drop is the one that happens on field assignment, and we know it's a packed field because of the path we are accessing it with. EDIT: no wait this makes no sense. Now I am confused by your comment.\nNo what I just said makes no sense. What happens is that this is a partial drop -- one field has been moved out () and then the logic for dropping the remaining fields seems to be wrong. So is relevant because it changed . And it looks like somehow the change is wrong; here is also despite the reference being unaligned.\nshould fix this.", "commid": "rust_issue_99838", "tokennum": 299}], "negative_passages": []} {"query_id": "q-en-rust-61043b9b735737fc9f42b8ea47ecec735d2abb30f40b48f8a480571f27823b7e", "query": "| = note: see issue #65991 for more information = help: add `#![feature(trait_upcasting)]` to the crate attributes to enable = note: required when coercing `Box<(dyn Fn() + 'static)>` into `Box<(dyn FnMut() + 'static)>` error: aborting due to previous error", "positive_passages": [{"docid": "doc-en-rust-118433c31bc29716faff9a460042fc6c662b57f7865f844f06298dd3fb733f90", "text": " $DIR/rpitit-hidden-types-self-implied-wf.rs:8:27 | LL | fn extend(s: &str) -> (Option<&'static &'_ ()>, &'static str) { | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | = note: the pointer is valid for the static lifetime note: but the referenced data is only valid for the anonymous lifetime defined here --> $DIR/rpitit-hidden-types-self-implied-wf.rs:8:18 | LL | fn extend(s: &str) -> (Option<&'static &'_ ()>, &'static str) { | ^^^^ error: aborting due to previous error For more information about this error, try `rustc --explain E0491`. 
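The unaligned-reference rule quoted repeatedly in the diagnostics above can be sketched with a minimal example. This is not the reproduction from the issue; `Packed` and its field layout are illustrative, but the two workarounds shown are the ones the `help` text names:

```rust
use std::ptr;

// Illustrative packed struct (not from the original issue): with
// #[repr(packed)] there is no padding, so `b` sits at offset 1 and is
// never 2-aligned.
#[repr(packed)]
struct Packed {
    a: u8,
    b: u16,
}

fn main() {
    let p = Packed { a: 1, b: 0x0203 };

    // let _r = &p.b; // rejected: "reference to packed field is unaligned"

    // Option 1 (from the `help` text): go through a raw pointer and an
    // unaligned load, never materializing a reference.
    let raw = ptr::addr_of!(p.b);
    let b = unsafe { raw.read_unaligned() };
    assert_eq!(b, 0x0203);

    // Option 2 (also from the `help` text): copy the field to a local;
    // the compiler lowers the by-value read to an unaligned load.
    let copy = p.b;
    assert_eq!(copy, 0x0203);

    assert_eq!(p.a, 1); // plain by-value reads of packed fields are fine
}
```

The same constraint is what drives the drop discussion above: `Drop::drop` receives `&mut self`, so a packed field must be moved to an aligned place (or reached through a raw pointer) before anything takes a reference to it.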
", "positive_passages": [{"docid": "doc-en-rust-af4a6d734eec2405d68437928d1762b0a1450d4c9315ae958757d9bfdc6d7292", "text": "This is a use-after-free that compiles with 1.74.0-nightly (2023-09-13 ): cc", "commid": "rust_issue_116060", "tokennum": 30}], "negative_passages": []} {"query_id": "q-en-rust-61314521de84f66f2625eb43c63e03155ce36b8f1624c705067cb518ff9709e6", "query": " // ignore-tidy-linelength // Verify that span interning correctly handles having a span of exactly MAX_LEN length. // compile-flags: --crate-type=lib // check-pass #![allow(dead_code)] fn a<'a, T>() -> &'a T { todo!()////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
//// … [several comment lines consisting solely of `/` characters, each several thousand columns wide, elided for readability; the ICE reproducer depends only on their extreme length] … ////
} ", "positive_passages": [{"docid": "doc-en-rust-1e4df30a7ceba21852c6ed075eb049ae6714282c369dffcdc43b0f08eb472d35", "text": "Not sure how helpful those notes are but I put them in here. Working on a web application using Sycamore.\nWhat's the code that induced this compiler panic?\nlabel +E-needs-mcve\nThis happened to me with . Removing a whitespace at the end of a line made the compiler panic. I attach a minimized workspace to reproduce the crash. What's funny is that there is a file () where deleting even just one character makes the crash disappear.\nGiven that this automated(?)
reduction is still a big workspace, I'm actually going to leave the here because my hunch is this is still probably further minimizable, but thank you! Entirely possible I'm wrong, and this should be reduced enough to go by.\nA slightly smaller ICE reproducer:\nIt appears the content after the function name and parameters , starting after the return , ending at the function body's enclosing brace, will trigger the ICE. If I change the lifetime name to shorter than 3 characters, it does not ICE, e.g. doesn't ice, but will.\nI think I found the issue: it has to do with interning. Specifically, if you manage to fit a function of the following signature to somewhere around (defined in ) it will trigger some kind of edge case in span data interning and cause the to be confused with (or for some other cause that lead to this). Minimal reproducer is just and longer lifetime names will also trigger ICE.\nCan someone else try to see if this ICEs for them? I can reproduce the ICE on nightly but stable seems to handle it just fine.\nsearched nightlies: from nightly-2022-01-01 to nightly-2023-02-10 regressed nightly: nightly-2023-01-03 searched commit range: regressed commit: \"env_logger 0.7.1\", \"env_logger 0.9.0\", \"fst\", \"itertools 0.9.0\", \"itertools 0.10.1\", \"json\", \"lazy_static\", \"log\",", "positive_passages": [{"docid": "doc-en-rust-61ae42eb84e53959cf65c1aa95374806ebdcd9401bb058e30a16c644d381bd04", "text": "Hello, this is your friendly neighborhood mergebot. After merging PR rust-lang/rust, I observed that the tool rls no longer builds. A follow-up PR to the repository is needed to fix the fallout. cc do you think you would have time to do the follow-up work? If so, that would be great!\nThis might be due to no longer builds: ` We need a new version I think. 
The last published version of is 5 months old already :thinking: Can you please check why there are no new versions being published?\nTIL scheduled GitHub Action Workflows apparently require regular activity in the repository to keep going :eyes:\nI didn't realize anyone was still using those crates so I disabled the action, I believe it was broken anyway. Does rustfmt and/or other crates still use those packages? If so I can try to find time to restart the action.\nRacer those crates. And RLS . So without those crates compiling racer with the latest nightly isn't possible, which makes compiling RLS impossible and therefore blocks the sync of RLS back to rust-lang/rust.\nWould it maybe be an option to make racer depend on to compile instead of publishing AP crates?\nWe stopped using them in rustfmt a while ago and I can't imagine a scenario where we'd switch back. Agreed on racer, which AIUI is the last true direct consumer of those crates. However, it's also in and those crates only get bumped in racer when a racer consumer needs them to be bumped.\nAh ok. In that case if this is purely for something in maintenance mode I think my own personal tenure in maintaining these crates is over with, so I would prefer to hand over the reins to someone else. Would someone be willing to volunteer to take over these crates?\nlet's sync up on #t-infra on Zulip -- I think it makes sense to move these crates into infra control/publication.\nLet's wait if Racer can move over to rustc-dev racer-rust/racer. If so, there shouldn't be a need to keep maintaining rustc-ap crates at all. 
Looking at the dependents on , there really is only , the crates themselves and (which by now uses ).\nhas some relevant considerations as well.", "commid": "rust_issue_91543", "tokennum": 476}], "negative_passages": []} {"query_id": "q-en-rust-61397016d359181c97e04b7e3999ae57227440d8db76e093fa1ebb0c5e811f8a", "query": "version = \"0.18.2\" dependencies = [ \"derive-new\", \"env_logger 0.7.1\", \"env_logger 0.9.0\", \"fst\", \"itertools 0.9.0\", \"itertools 0.10.1\", \"json\", \"lazy_static\", \"log\",", "positive_passages": [{"docid": "doc-en-rust-a04cedb3845472cf468557dd248e7c41b462fa547c486446d30eb621ab58e9b4", "text": "One way or another I think the AP crates need to be accounted for as per the accepted RFC, though it certainly would be a simpler story if they no longer need to be updated/published\nHappy to coordinate as necessary, just let me know!\nOver a month now. Is this gonna be fixed?\nit's blocked on", "commid": "rust_issue_91543", "tokennum": 65}], "negative_passages": []} {"query_id": "q-en-rust-614137e41658b494328f1cf865e0e04d656785179b3030275fdb4ede360507dc", "query": "} impl Foo for U { // OK, T, U are used everywhere. Note that the coherence check // hasn't executed yet, so no errors about overlap. //~^ ERROR conflicting implementations of trait `Foo<_>` for type `[isize; 0]` } impl Bar for T {", "positive_passages": [{"docid": "doc-en-rust-971b90aaf1316210c41e93f144e98b6981b2461b1128426fc3dcd08ac2a1f1d4", "text": "No response No response No response\n(for compiler people:) We could fix this by merging the impl WF and coherence checks into one compiler stage. I'll put up a PR and see what T-types thinks.\nlabels +AsyncAwait-Triaged We discussed this in a WG-async meeting and are deferring to the resolution suggested above. He noted that this is a fix to the diagnostics only. 
The fact that this is an error is correct.", "commid": "rust_issue_116982", "tokennum": 104}], "negative_passages": []} {"query_id": "q-en-rust-61437691895cc7e0f5c09967f6826605cd15d388980f8737de9719992c3aea0d", "query": "needlesArr.iter().fold(|x, y| { }); //~^^ ERROR this function takes 2 parameters but 1 parameter was supplied //~^^^ NOTE the following parameter types were expected // //~| NOTE the following parameter types were expected //~| NOTE expected 2 parameters // the first error is, um, non-ideal. }", "positive_passages": [{"docid": "doc-en-rust-bf9aa06787b60c8699d49c16614a2956727a3497af7c6edbb25f28f3e8d09eba", "text": "From: src/test/compile- Error E0061 needs a span_label, updating it from: To: Bonus: the types could be incorporated into the label rather than a note (though we may want to avoid types with long names in the label)\nPlanning on tackling the bonus here. Is there a preferred max length for output on the console? In the case it would exceed this, should I truncate the typename or just not include it?\n- great point. If you want to work on truncation, feel free to add it, too. I'm not sure if the term crate will tell you the current terminal size, but it may.\nI'm fixing and I noticed that it and this issue currently share the same code for generating this error, and adding code to add a to the message in would fix both.\nDid you do the bonus as well? I can tackle that if you haven't.\nI haven't yet - I can fix the issue (currently testing/fixing unit tests) and you can take on the bonus part of the issue.\nSounds great to me! Thanks.\n- I opened up issue to track the bonus portion of the work for you.", "commid": "rust_issue_35216", "tokennum": 249}], "negative_passages": []} {"query_id": "q-en-rust-61476c907914973156625c7ea3d9adbb6ebc2b72eb825ef5b233a9185be448c7", "query": "// of a macro that is not vendored by Rust and included in the toolchain. // See https://github.com/rust-analyzer/rust-analyzer/issues/6038. 
// On certain platforms right now the \"main modules\" modules that are // documented don't compile (missing things in `libc` which is empty), // so just omit them with an empty module and add the \"unstable\" attribute. // Unix, linux, wasi and windows are handled a bit differently. #[cfg(all( doc, not(any( any( all(target_arch = \"wasm32\", not(target_os = \"wasi\")), all(target_vendor = \"fortanix\", target_env = \"sgx\") )) ) ))] #[path = \".\"] mod doc { // When documenting std we want to show the `unix`, `windows`, `linux` and `wasi` // modules as these are the \"main modules\" that are used across platforms, // so these modules are enabled when `cfg(doc)` is set. // This should help show platform-specific functionality in a hopefully cross-platform // way in the documentation. pub mod unix; pub mod linux; pub mod wasi; pub mod windows; } #[unstable(issue = \"none\", feature = \"std_internals\")] pub mod unix {} #[cfg(all( doc, any(", "positive_passages": [{"docid": "doc-en-rust-dfb1853f080d636b0154a993ccc58567464750f8671998b9ed42ac8ff219ae7d", "text": "More information on\nTo be clear, I'm not actually sure that's a private link, I didn't look at it in detail. It might just be doc(hidden) or something.\nI didn't either. 
Just that most of weird things happening are in .", "commid": "rust_issue_88304", "tokennum": 57}], "negative_passages": []} {"query_id": "q-en-rust-6167a2d58ee7c53ffad20e6ab97c8765897f8441e123dd014c74dcb3499c3285", "query": "None); // If the item has the name of a field, give a help note if let (&ty::TyStruct(did, _), Some(_)) = (&rcvr_ty.sty, rcvr_expr) { if let (&ty::TyStruct(did, substs), Some(expr)) = (&rcvr_ty.sty, rcvr_expr) { let fields = ty::lookup_struct_fields(cx, did); if fields.iter().any(|f| f.name == item_name) { cx.sess.span_note(span, &format!(\"use `(s.{0})(...)` if you meant to call the function stored in the `{0}` field\", item_name)); if let Some(field) = fields.iter().find(|f| f.name == item_name) { let expr_string = match cx.sess.codemap().span_to_snippet(expr.span) { Ok(expr_string) => expr_string, _ => \"s\".into() // Default to a generic placeholder for the // expression when we can't generate a string // snippet }; let span_stored_function = || { cx.sess.span_note(span, &format!(\"use `({0}.{1})(...)` if you meant to call the function stored in the `{1}` field\", expr_string, item_name)); }; let span_did_you_mean = || { cx.sess.span_note(span, &format!(\"did you mean to write `{0}.{1}`?\", expr_string, item_name)); }; // Determine if the field can be used as a function in some way let field_ty = ty::lookup_field_type(cx, did, field.id, substs); if let Ok(fn_once_trait_did) = cx.lang_items.require(FnOnceTraitLangItem) { let infcx = fcx.infcx(); infcx.probe(|_| { let fn_once_substs = Substs::new_trait(vec![infcx.next_ty_var()], Vec::new(), field_ty); let trait_ref = ty::TraitRef::new(fn_once_trait_did, cx.mk_substs(fn_once_substs)); let poly_trait_ref = trait_ref.to_poly_trait_ref(); let obligation = Obligation::misc(span, fcx.body_id, poly_trait_ref.as_predicate()); let mut selcx = SelectionContext::new(infcx, fcx); if selcx.evaluate_obligation(&obligation) { span_stored_function(); } else { span_did_you_mean(); } }); } else { match 
field_ty.sty { // fallback to matching a closure or function pointer ty::TyClosure(..) | ty::TyBareFn(..) => span_stored_function(), _ => span_did_you_mean(), } } } }", "positive_passages": [{"docid": "doc-en-rust-5e99db89097748d7a5bb96727be9b26d8b479b8435a424d5e1140133f0406230", "text": "I forget if there's already a bug on this, but it would be nice if, when you wrote and doesn't have a field named , but does have a nullary method named , the compiler gave a hint like \"did you mean to write ?\" It could also do vice versa (if you write A.B()ABA.B?) I ran into this in trans, where I have frequently written instead of .\nTest: The error message is better now than it was: in that it suggests there is a method named . But it could still be improved. (The intention may not have been to take the method's value; it seems more likely that the programmer thought it wasn't a method at all.)\nMy previous comment still holds. It would be great if you would see the \"try writing an anonymous function\" hint for something like and instead see a \"did you mean to write ?\" in situations like this one.\nI'd also like to see one when trying to call a function-typed struct field: emits Foof. It'd be nice if it tried to see if it could auto-deref to a struct field, and if so, say (f.f)().\nA nice idea. Assigning P-low.\nCurrently gives which is pretty close to the proposed behavior. Replacing the wih yields the same as before (i.e. does not suggest removing the ):\nUpdated the example gives: which now prints It's nice that the note is there, but the text is suboptimal: isn't a function.\nI'm working on a fix for this, but I'm struggling with the final portion. With this code: I get the output: My current changes are . I'm not sure if it's possible to have the output say \"\" and \"kitty.x\" with the input to the function. Any ideas? Or should I just keep the message in the same format as it is now and output \"s.func\" and \"did you mean to write ?\"? 
Edit: Ideally, I would like to output: I'm wondering if that's at all possible, especially because instead of , there might be an expression.\nI got around that error message. Now I'm making sure the previous message is displayed under the proper circumstances.", "commid": "rust_issue_2392", "tokennum": 489}], "negative_passages": []} {"query_id": "q-en-rust-617182d78c02595d4a9628a45e0bef0873789a7a1ad2c3d6bd6b357f795b5bc2", "query": "debug!(\"overloaded_deref_ty({:?})\", ty); let tcx = self.infcx.tcx; if ty.references_error() { return None; } // let trait_ref = ty::TraitRef::new(tcx, tcx.lang_items().deref_trait()?, [ty]); let cause = traits::ObligationCause::misc(self.span, self.body_id);", "positive_passages": [{"docid": "doc-en-rust-e8a556184d3ee8d481a844eb181edcbf5b39ce86ed75ae52fc28a83741bad2d0", "text": " $DIR/issue-104390.rs:1:27 | LL | fn f1() -> impl Sized { & 2E } | ^^ error: expected at least one digit in exponent --> $DIR/issue-104390.rs:2:28 | LL | fn f2() -> impl Sized { && 2E } | ^^ error: expected at least one digit in exponent --> $DIR/issue-104390.rs:3:29 | LL | fn f3() -> impl Sized { &'a 2E } | ^^ error: expected at least one digit in exponent --> $DIR/issue-104390.rs:5:34 | LL | fn f4() -> impl Sized { &'static 2E } | ^^ error: expected at least one digit in exponent --> $DIR/issue-104390.rs:7:28 | LL | fn f5() -> impl Sized { *& 2E } | ^^ error: expected at least one digit in exponent --> $DIR/issue-104390.rs:8:29 | LL | fn f6() -> impl Sized { &'_ 2E } | ^^ error: borrow expressions cannot be annotated with lifetimes --> $DIR/issue-104390.rs:3:25 | LL | fn f3() -> impl Sized { &'a 2E } | ^--^^^ | | | annotated with lifetime here | help: remove the lifetime annotation error: borrow expressions cannot be annotated with lifetimes --> $DIR/issue-104390.rs:5:25 | LL | fn f4() -> impl Sized { &'static 2E } | ^-------^^^ | | | annotated with lifetime here | help: remove the lifetime annotation error: borrow expressions cannot be annotated with 
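The two cases the improved diagnostic distinguishes (a field storing a callable versus a plain data field) can be sketched with a small runnable example; the names `Kitty` and `seven` and the field layout are hypothetical stand-ins for the elided snippets above:

```rust
// Sketch only: `Kitty`/`seven` are illustrative names, not from the compiler.
fn seven() -> u32 {
    7
}

struct Kitty {
    // A field holding a function pointer: `kitty.func()` fails method lookup,
    // so the diagnostic suggests calling the field value as `(kitty.func)(...)`.
    func: fn() -> u32,
    // A plain data field: here the better hint is "did you mean `kitty.x`?".
    x: u32,
}

fn main() {
    let kitty = Kitty { func: seven, x: 3 };
    assert_eq!((kitty.func)(), 7); // parenthesize: access field, then call it
    assert_eq!(kitty.x, 3);        // plain field access, no call
}
```

Distinguishing the two hints requires checking whether the field's type implements `FnOnce` (or is a closure/fn pointer), which is what the probe in the quoted diff does.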
lifetimes --> $DIR/issue-104390.rs:8:25 | LL | fn f6() -> impl Sized { &'_ 2E } | ^--^^^ | | | annotated with lifetime here | help: remove the lifetime annotation error: aborting due to 9 previous errors ", "positive_passages": [{"docid": "doc-en-rust-ea5d4597e087d144986ec6fc4d7469cb0c68467fc4e06211f6297ba361262d0d", "text": " $DIR/issue-50814-2.rs:16:24 | LL | const BAR: usize = [5, 6, 7][T::BOO]; | ^^^^^^^^^^^^^^^^^ index out of bounds: the length is 3 but the index is 42 note: erroneous constant encountered --> $DIR/issue-50814-2.rs:20:6 | LL | & as Foo>::BAR | ^^^^^^^^^^^^^^^^^^^^^ note: erroneous constant encountered --> $DIR/issue-50814-2.rs:20:5 | LL | & as Foo>::BAR | ^^^^^^^^^^^^^^^^^^^^^^ error: aborting due to previous error For more information about this error, try `rustc --explain E0080`. ", "positive_passages": [{"docid": "doc-en-rust-4baf0e8d5127fe6f958b823add21bed6dcb30db386b7af7c0c25faafe018ab65", "text": "File: /tmp/icemaker/issue--2.rs auto-reduced (treereduce-rust): original: Version information ` Command: $DIR/issue-68890.rs:1:11 | LL | enum e{A((?'a a+?+l))} | ^ error: expected one of `)`, `+`, or `,`, found `a` --> $DIR/issue-68890.rs:1:15 | LL | enum e{A((?'a a+?+l))} | ^ expected one of `)`, `+`, or `,` error: expected trait bound, not lifetime bound --> $DIR/issue-68890.rs:1:11 | LL | enum e{A((?'a a+?+l))} | ^^^ error: aborting due to 3 previous errors ", "positive_passages": [{"docid": "doc-en-rust-f31e389170b25741411b93e85e658b2ad8e4a2a2e6b80563232e2387d740f3ad", "text": "I'm seeing an internal compiler error on the following input (found by ): The error happens on and but not on .\ntriage : P-medium, removing nomination, adding needs-bisect.\n(the bisection here would be entirely automatable if were addressed.)\nBisected roughly: ICE occurred from but not panicked, panicked from\nin particular looks suspicious. 
cc\nPre-, we return early with an error here: Post-, in the corresponding code happens in a call to : and the parser \"recovers\" from the error encountered in , allowing this line to be reached:\nI opened , which takes the stance that is no longer appropriate here. It's possible that there's a better fix that changes the way works, but that would require a deeper expertise of how these things are supposed to work than I currently have.", "commid": "rust_issue_68890", "tokennum": 174}], "negative_passages": []} {"query_id": "q-en-rust-67b6f8a07fabf5a41f70f1b622357703534bccc88a5f7b962b523a40c6407e57", "query": " // Regression test for #65553 // // `D::Error:` is lowered to `D::Error: ReEmpty` - check that we don't ICE in // NLL for the unexpected region. // check-pass trait Deserializer { type Error; } fn d1() where D::Error: {} fn d2() { d1::(); } fn main() {} ", "positive_passages": [{"docid": "doc-en-rust-e14d33ba39600496f5ab1dfe09731cf4421241a31efd1d602d84ce27824d49eb", "text": "The following code () panics the compiler in stable 1.38.0, beta 1.39.0-beta.6, and nightly 2019-10-17: Removing the bound or filling it with makes it compile just fine. Error + backtrace:\ncc\nHmm, I didn't know that adding a \"syntactically trivial bound\" could cause something like this. Is there some implicit bound that gets in cases like this?\nminimized test case ():\nThe minimized test worked in previous stable Rust versions (at least in 1.33 under 2015 edition; though 2018 edition breaks there, which indicates this is with 99.999% certainty injected by MIR-borrowck...)\ntriage: P-high. 
Adding self to assignee list; removing I-nominated.", "commid": "rust_issue_65553", "tokennum": 166}], "negative_passages": []} {"query_id": "q-en-rust-67d5d1fb8ecc6105712f9767d62104ce4bb42a1ddc5c2c05eed3c61b26b77152", "query": "let _ = StableTupleStruct (1); let _ = FrozenTupleStruct (1); let _ = LockedTupleStruct (1); // At the moment, the following just checks that the stability // level of expanded code does not trigger the // lint. Eventually, we will want to lint the contents of the // macro in the module *defining* it. Also, stability levels // on macros themselves are not yet linted. macro_test!(); } fn test_method_param(foo: F) {", "positive_passages": [{"docid": "doc-en-rust-cd34ad2c493ec5a906302f20259163687870f739e42d7eb79de1288707dd2e75", "text": "The expansion of and both currently cause code to fail the stability check because they expand to reveal unstable implementation details. A seemingly reasonable solution to this would be to just not run the analysis on any code that has been macro-expanded. cc\nI definitely agree with this approach: we should be able to annotate a macro with a stability level, and that's what should count for its users. Do we have a way of telling what part of an AST was generated via macro expansion by the time we get to linting?\nMostly; there's the field of a : for normal code, for macro generated code (basically containing the backtrace of macro invocations that led to that point).\nOK, that makes this sound like a pretty easy fix. I'll take a quick look at it tomorrow (unless wants to?)\nHm, this seems slightly subtle, since the contents of a macro would then never be checked for stability. This differs to a conventional function, e.g. given , won't warn, but the contents of is still checked, while where is just never checked. (Of course, macros and phasing are both feature gated so you have to opt-in to instability to use non-standard macros anyway.)\nGood point. 
Also, it looks tricky with the current infrastructure to even lint on the macro invocation -- the backtrace of macro expansion does not appear to link to the def_id of the macro (or give any other way of getting at the macro's stability). Nevertheless, I think we should bail out of the lint when we hit the results of expansion; those should be checked, if anywhere, at the site of macro definition. One other question: do syntax extensions also produce the span information you mentioned above? I ask because the original problematic cases are written syntax extensions. If not, is there some other way of telling that an expansion happened?\nI have a patch ready that bails out of the lint on expanded code. This also seems to mask uses of . If y'all are ok with this strategy for now, I'll file a PR.\nSounds good to me! Thanks!\ncc", "commid": "rust_issue_15703", "tokennum": 448}], "negative_passages": []} {"query_id": "q-en-rust-682ad45db2e111f22f956f56d131af9c68feafd0efb53533861144481018d6d4", "query": "pat_src: PatternSource, bindings: &mut SmallVec<[(PatBoundCtx, FxHashSet); 1]>, ) { // We walk the pattern before declaring the pattern's inner bindings, // so that we avoid resolving a literal expression to a binding defined // by the pattern. visit::walk_pat(self, pat); self.resolve_pattern_inner(pat, pat_src, bindings); // This has to happen *after* we determine which pat_idents are variants: self.check_consistent_bindings_top(pat); visit::walk_pat(self, pat); } /// Resolve bindings in a pattern. 
This is a helper to `resolve_pattern`.", "positive_passages": [{"docid": "doc-en-rust-a0ccb76a6e60f5a57f49822c16e14a4158fd6807515502bfbce2a615c588e5c2", "text": " $DIR/issue-102182-impl-trait-recover.rs:1:11 | LL | fn foo() {} | ^^^^^^^^^^ not a trait | help: use the trait bounds directly | LL - fn foo() {} LL + fn foo() {} | error: aborting due to previous error ", "positive_passages": [{"docid": "doc-en-rust-b73840ea4e47f23b52e55abe148348ade905c37101ff17ff6768ea2c63c1a9cf", "text": "When writing the current error is \"expected one of these tokens\", without any attempt to recover the parse. It should instead be closer to: $DIR/issue-105069.rs:1:5 | LL | use self::A::*; | ^^^^^^^^^^ = help: consider adding an explicit import of `V` to disambiguate note: `V` could also refer to the variant imported here --> $DIR/issue-105069.rs:3:5 | LL | use self::B::*; | ^^^^^^^^^^ = help: consider adding an explicit import of `V` to disambiguate error: aborting due to previous error", "positive_passages": [{"docid": "doc-en-rust-14d87f94d6dd2f60938cc3099809c6fbd53892083d65f782d3f0ae98e232297d", "text": " $DIR/mut-mut-wont-coerce.rs:36:14 | LL | make_foo(&mut &mut *result); | -------- ^^^^^^^^^^^^^^^^^ expected `*mut *mut Foo`, found `&mut &mut Foo` | | | arguments to this function are incorrect | = note: expected raw pointer `*mut *mut Foo` found mutable reference `&mut &mut Foo` note: function defined here --> $DIR/mut-mut-wont-coerce.rs:30:4 | LL | fn make_foo(_: *mut *mut Foo) { | ^^^^^^^^ ---------------- error: aborting due to 1 previous error For more information about this error, try `rustc --explain E0308`. 
", "positive_passages": [{"docid": "doc-en-rust-2117c29f4bd8913f03832bd4148b56b6d548d806460c70ac387c75f8d0672cb4", "text": "The title already says it, but as an illustrating example you can have a look at Since &mut T can be implicitly converted to *mut T, I wondered whether there's a reason that it doesn't work for &mut &mut T (besides \"no one has implemented it\" :smile: ). pointed out that this might be (in IRC). The example code also illustrates a somewhat unexpected failure of type inference, which might be worth filing as a separate issue.\nHmm... take the following: Rust will currently complain about mismatched types for the second coercion. It's not clear such a coercion should exist: it's like converting a to a .\nNote that first coercing to instead of seems to work ().\nYour testcase is constructing a , not a .\nAh, thanks for pointing that out.\nIn don't know if I understand your point, but if you argue that such a coercion should not exist because it might be unsafe, the line in my original example shows that this conversion is already possible in safe code.\nHeh, C++ has special rules for multi-level pointer conversions exactly to prevent these kinds of type unsafeties: \"More qualified\" relations can be translated to Rust's // relations.\nisn't doing the same thing: the outer is actually constructing a temporary, so you end up with an (as opposed to a ). Also, safety isn't really relevant; we allow arbitrary explicit pointer casts anyway.\nOh, wait, I see, you expect the original to interpreted as , and then converted from there to . 
That makes sense.\nI think this would work if borrow expressions would take into account that they might get coerced later and pass down the from (where here), instead of only working for .\nTriage: this still reproduces.\nClosing this as the bug is fixed and playground link above works for all editions.\nThe playground link had the problematic line commented out: It still gives a compile error:\noops my bad then\nThis idea has been suggested. A T-compiler member has pointed out that other languages apply rules to restrain pointer coercion, specifically because multi-level pointer coercion is prone to being a bit of a footgun. Not quite \"this is a bad idea\", but despite the simplicity of the proposal, it deserves being treated with caution. There is no strong argument that this is a bug because it is consistent, if annoying.", "commid": "rust_issue_34117", "tokennum": 515}], "negative_passages": []} {"query_id": "q-en-rust-70fa4043d1bf7c29773cfade4cd063b2d63c3536fe130ed58289b87fbc4d100d", "query": " error[E0308]: mismatched types --> $DIR/mut-mut-wont-coerce.rs:36:14 | LL | make_foo(&mut &mut *result); | -------- ^^^^^^^^^^^^^^^^^ expected `*mut *mut Foo`, found `&mut &mut Foo` | | | arguments to this function are incorrect | = note: expected raw pointer `*mut *mut Foo` found mutable reference `&mut &mut Foo` note: function defined here --> $DIR/mut-mut-wont-coerce.rs:30:4 | LL | fn make_foo(_: *mut *mut Foo) { | ^^^^^^^^ ---------------- error: aborting due to 1 previous error For more information about this error, try `rustc --explain E0308`. ", "positive_passages": [{"docid": "doc-en-rust-ddc41e3c675bcec624d7138668d58a699ac42a32b987d965f581c0e1d166c8b3", "text": "Legalizing this behavior with narrow cases seems permissible, but amounts to a new language feature design request. Transitive coercions, as part of the revised coercion rules, are . Accordingly, I think this should also be closed... 
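The workaround noted earlier in the thread (perform the inner coercion explicitly so only a single `&mut T -> *mut T` step is left for the compiler) can be sketched as follows; the `bump` helper and the `u8` pointee are illustrative choices, not from the issue:

```rust
// `&mut &mut T` does NOT coerce to `*mut *mut T` in one go, because only one
// level of `&mut T -> *mut T` coercion is applied. Coercing the inner level
// into a named raw pointer first leaves a single step the compiler accepts.
unsafe fn bump(p: *mut *mut u8) {
    // Dereference both pointer levels and mutate the pointee.
    unsafe { **p += 1; }
}

fn main() {
    let mut x: u8 = 41;
    let mut inner: *mut u8 = &mut x; // explicit inner coercion: &mut u8 -> *mut u8
    unsafe { bump(&mut inner) };     // outer coercion: &mut *mut u8 -> *mut *mut u8
    assert_eq!(x, 42);
}
```

This mirrors the observation above that "first coercing to `*mut T` instead of `&mut T` seems to work": the language only ever coerces the outermost reference level.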
but not before documenting the current behavior in the form of a regression test.", "commid": "rust_issue_34117", "tokennum": 66}], "negative_passages": []} {"query_id": "q-en-rust-70ff2e2c73913345bbd87ce0af8864966a21900fae39ebca4025dbb82440a9e1", "query": " warning: unused variable: `props_2` --> $DIR/issue-87987.rs:11:9 | LL | let props_2 = Props { | ^^^^^^^ help: if this is intentional, prefix it with an underscore: `_props_2` | = note: `#[warn(unused_variables)]` on by default warning: field is never read: `field_1` --> $DIR/issue-87987.rs:5:5 | LL | field_1: u32, | ^^^^^^^^^^^^ | = note: `#[warn(dead_code)]` on by default warning: field is never read: `field_2` --> $DIR/issue-87987.rs:6:5 | LL | field_2: u32, | ^^^^^^^^^^^^ warning: 3 warnings emitted ", "positive_passages": [{"docid": "doc-en-rust-7cdc79ef87272187a3994c116d49523df7131d804d1b2f1d966c795a36b75a1b", "text": "Encountered an ICE when building 0.18.0 on edition 2021. I have roughly narrowed it down to the following (can probably be reduced further). Seems to be an issue with closure captures. 
$DIR/tuple-struct-field.rs:8:26 error: fields `1`, `2`, `3`, and `4` are never read --> $DIR/tuple-struct-field.rs:8:28 | LL | struct SingleUnused(i32, [u8; LEN], String); | ------------ ^^^^^^^^^ LL | struct UnusedAtTheEnd(i32, f32, [u8; LEN], String, u8); | -------------- ^^^ ^^^^^^^^^ ^^^^^^ ^^ | | | field in this struct | fields in this struct | = help: consider removing these fields note: the lint level is defined here --> $DIR/tuple-struct-field.rs:1:9 | LL | #![deny(dead_code)] | ^^^^^^^^^ help: consider changing the field to be of unit type to suppress this warning while preserving the field numbering, or remove the field error: field `0` is never read --> $DIR/tuple-struct-field.rs:13:27 | LL | struct UnusedJustOneField(i32); | ------------------ ^^^ | | | field in this struct | LL | struct SingleUnused(i32, (), String); | ~~ = help: consider removing this field error: fields `0`, `1`, `2`, and `3` are never read --> $DIR/tuple-struct-field.rs:13:23 error: fields `1`, `2`, and `4` are never read --> $DIR/tuple-struct-field.rs:18:31 | LL | struct MultipleUnused(i32, f32, String, u8); | -------------- ^^^ ^^^ ^^^^^^ ^^ LL | struct UnusedInTheMiddle(i32, f32, String, u8, u32); | ----------------- ^^^ ^^^^^^ ^^^ | | | fields in this struct | help: consider changing the fields to be of unit type to suppress this warning while preserving the field numbering, or remove the fields | LL | struct MultipleUnused((), (), (), ()); | ~~ ~~ ~~ ~~ LL | struct UnusedInTheMiddle(i32, (), (), u8, ()); | ~~ ~~ ~~ error: aborting due to 2 previous errors error: aborting due to 3 previous errors ", "positive_passages": [{"docid": "doc-en-rust-16acbb4914bb9992ed29a1710c87156554dd8685769f60cdee2f321edfbae7e4", "text": "The idea of changing a field to unit type to preserve field numbering makes sense for fields in the middle of a tuple. However, if the unused field is at the end, or it's the only field, then deleting it won't affect field numbering of any other field. 
No response No response $DIR/issue-49040.rs:1:29 --> $DIR/issue-49040.rs:2:12 | LL | #![allow(unused_variables)]; | ^ consider adding a `main` function to `$DIR/issue-49040.rs` LL | fn foo() {} | ^ consider adding a `main` function to `$DIR/issue-49040.rs` error: aborting due to 2 previous errors", "positive_passages": [{"docid": "doc-en-rust-6cf3c2bb3c08a38bafff3532db72ea1908c7190bb562a2249ecc7c70a784b1e0", "text": "Tested on nightly and stable. The second diagnostic is strange. $DIR/at-in-struct-patterns.rs:8:15 | LL | let Foo { var @ field1, .. } = foo; | --- ^^^^^ | | | while parsing the fields for this pattern | = note: struct patterns use `field: pattern` syntax to bind to fields = help: consider replacing `new_name @ field_name` with `field_name: new_name` if that is what you intended error: `@ ..` is not supported in struct patterns --> $DIR/at-in-struct-patterns.rs:10:26 | LL | let Foo { field1: _, bar @ .. } = foo; | --- ^^^^^^^^ | | | while parsing the fields for this pattern | help: bind to each field separately or, if you don't need them, just remove `bar @` | LL - let Foo { field1: _, bar @ .. } = foo; LL + let Foo { field1: _, .. } = foo; | error: `@ ..` is not supported in struct patterns --> $DIR/at-in-struct-patterns.rs:11:15 | LL | let Foo { bar @ .. } = foo; | --- ^^^^^^^^ | | | while parsing the fields for this pattern | help: bind to each field separately or, if you don't need them, just remove `bar @` | LL - let Foo { bar @ .. } = foo; LL + let Foo { .. } = foo; | error: expected identifier, found `@` --> $DIR/at-in-struct-patterns.rs:12:15 | LL | let Foo { @ } = foo; | --- ^ expected identifier | | | while parsing the fields for this pattern error: expected identifier, found `@` --> $DIR/at-in-struct-patterns.rs:13:15 | LL | let Foo { @ .. 
} = foo; | --- ^ expected identifier | | | while parsing the fields for this pattern error[E0425]: cannot find value `var` in this scope --> $DIR/at-in-struct-patterns.rs:9:10 | LL | dbg!(var); | ^^^ not found in this scope | help: consider importing this function | LL + use std::env::var; | error: aborting due to 6 previous errors For more information about this error, try `rustc --explain E0425`. ", "positive_passages": [{"docid": "doc-en-rust-1fb348516fce42735a2f7d856f66678c531adef3f524310c5b14170cbebf9cf5", "text": "This erroneous code: Produces this error message It would be better if instead it produced an error message like this: Tried the code in these Rust versions and they all produced the same error message: 1.37.0 stable1.45.0 stable1.46.0-beta.1 (2020-07-15 )1.47.0-nightly (2020-07-22 ) $DIR/impl-unused-tps.rs:13:8 --> $DIR/impl-unused-tps.rs:15:8 | LL | impl Foo for [isize;1] { | ^ unconstrained type parameter error[E0207]: the type parameter `U` is not constrained by the impl trait, self type, or predicates --> $DIR/impl-unused-tps.rs:30:8 --> $DIR/impl-unused-tps.rs:31:8 | LL | impl Bar for T { | ^ unconstrained type parameter error[E0207]: the type parameter `U` is not constrained by the impl trait, self type, or predicates --> $DIR/impl-unused-tps.rs:38:8 --> $DIR/impl-unused-tps.rs:39:8 | LL | impl Bar for T | ^ unconstrained type parameter error[E0207]: the type parameter `U` is not constrained by the impl trait, self type, or predicates --> $DIR/impl-unused-tps.rs:46:8 --> $DIR/impl-unused-tps.rs:47:8 | LL | impl Foo for T | ^ unconstrained type parameter error[E0207]: the type parameter `V` is not constrained by the impl trait, self type, or predicates --> $DIR/impl-unused-tps.rs:46:10 --> $DIR/impl-unused-tps.rs:47:10 | LL | impl Foo for T | ^ unconstrained type parameter error: aborting due to 5 previous errors error[E0119]: conflicting implementations of trait `Foo<_>` for type `[isize; 0]` --> $DIR/impl-unused-tps.rs:27:1 | LL | impl Foo 
for [isize;0] { | ---------------------------- first implementation here ... LL | impl Foo for U { | ^^^^^^^^^^^^^^^^^^^^^^ conflicting implementation for `[isize; 0]` error[E0275]: overflow evaluating the requirement `([isize; 0], _): Sized` | = help: consider increasing the recursion limit by adding a `#![recursion_limit = \"256\"]` attribute to your crate (`impl_unused_tps`) note: required for `([isize; 0], _)` to implement `Bar` --> $DIR/impl-unused-tps.rs:31:11 | LL | impl Bar for T { | - ^^^ ^ | | | unsatisfied trait bound introduced here = note: 126 redundant requirements hidden = note: required for `([isize; 0], _)` to implement `Bar` error: aborting due to 7 previous errors For more information about this error, try `rustc --explain E0207`. Some errors have detailed explanations: E0119, E0207, E0275. For more information about an error, try `rustc --explain E0119`. ", "positive_passages": [{"docid": "doc-en-rust-971b90aaf1316210c41e93f144e98b6981b2461b1128426fc3dcd08ac2a1f1d4", "text": "No response No response No response\n(for compiler people:) We could fix this by merging the impl WF and coherence checks into one compiler stage. I'll put up a PR and see what T-types thinks.\nlabels +AsyncAwait-Triaged We discussed this in a WG-async meeting and are deferring to the resolution suggested above. He noted that this is a fix to the diagnostics only. 
That fact that this is an error is correct.", "commid": "rust_issue_116982", "tokennum": 104}], "negative_passages": []} {"query_id": "q-en-rust-7899fc9485030901dc4f6faffe8fe65da1cd4e20dd3654594fbbe028fddcbab1", "query": "sig: ty::PolyFnSig<'tcx>, tuple_arguments: TupleArgumentsFlag, ) -> ty::Binder<'tcx, (ty::TraitRef<'tcx>, Ty<'tcx>)> { assert!(!self_ty.has_escaping_bound_vars()); let arguments_tuple = match tuple_arguments { TupleArgumentsFlag::No => sig.skip_binder().inputs()[0], TupleArgumentsFlag::Yes => tcx.intern_tup(sig.skip_binder().inputs()), }; debug_assert!(!self_ty.has_escaping_bound_vars()); let trait_ref = tcx.mk_trait_ref(fn_trait_def_id, [self_ty, arguments_tuple]); sig.map_bound(|sig| (trait_ref, sig.output())) }", "positive_passages": [{"docid": "doc-en-rust-0e86f5cad76845a9579f9f9606cb740329e6e79be0820e577c6d84ca9b39e498", "text": "An there actually does panic for the test. I only moved this code between the files, the same comment and code pattern exists as well for normal closures: Originally posted by in\nI couldn't repro this failure, maybe was doing instead of ? The ordering is important here.\nI believe I did, yes.", "commid": "rust_issue_104825", "tokennum": 64}], "negative_passages": []} {"query_id": "q-en-rust-78ac75d6b195115976117471ca901341df6e1dcb58adfb8e6c18526ded72766b", "query": "} } pub fn retokenize(sess: &'a ParseSess, mut span: Span) -> Self { let begin = sess.source_map().lookup_byte_offset(span.lo()); let end = sess.source_map().lookup_byte_offset(span.hi()); // Make the range zero-length if the span is invalid. if begin.sf.start_pos != end.sf.start_pos { span = span.shrink_to_lo(); } let mut sr = StringReader::new(sess, begin.sf, None); // Seek the lexer to the right byte range. 
sr.end_src_index = sr.src_index(span.hi()); sr } fn mk_sp(&self, lo: BytePos, hi: BytePos) -> Span { self.override_span.unwrap_or_else(|| Span::with_root_ctxt(lo, hi)) }", "positive_passages": [{"docid": "doc-en-rust-9f7caf72595177bf2822e32eb545f7227354c9856f07d783118015d474032cf3", "text": "contains a function, which allows one to, well, retokenize an existing span: There's only one caller of this function: And there's only one caller of that caller: In other words, the only reason why we need is to get the span of hir::Use, which happens to be a glob import. ( rls needs to know the span of use foo:: with explicit list of imported items\" code action. This was the first demoed code action and is still that is implemented ) I'd like to remove the function, as it is one of the few remaining things that requires to be public, and as it is very much a hack anyway. Additionally, I believe I've broken this function accidentally a while ago: Rather than retokenizing a span, it retokenized the file from the beginning until the end of the span (which, as uses are usually on top, worked correctly most of the time by accident). What would be the best way to fix this? Add hir::Use -- this seems to be the \"proper\" solution here Move hack's implementation to where it belongs: in , just fetch the text of the file and look for ('') Remove the deglob functionality from RLS? I don't like this option as it handicaps the RLS before we formally decided to deprecate it. But I don't completely disregard it either, as, as a whole, it seems to be a full-stack one-off hack penetrating every layer of abstraction. WDYT?
I'd suggest the second alternative (move the hack's implementation to in librustc_save_analysis), save analysis is a home of many hacks already and one more until save analysis is deprecated in its entirety won't hurt. 
Adding span for would be ok if it were useful for anything else at least.", "commid": "rust_issue_76046", "tokennum": 399}], "negative_passages": []} {"query_id": "q-en-rust-791cf9a0a722d1b6f1dde0e6ef5c09fe7748aa165cd29c4c2b53e834a618e40b", "query": "/// # Examples /// /// ``` /// #![feature(osstring_ascii)] /// use std::ffi::OsString; /// /// let mut s = OsString::from(\"Gr\u00fc\u00dfe, J\u00fcrgen \u2764\");", "positive_passages": [{"docid": "doc-en-rust-2ccc646c7f0e6c17a55f593504c3124b11a8dc153ad606c6e758ef0bf7a7cf75", "text": " $DIR/clone-debug-dead-code-in-the-same-struct.rs:4:12 error: fields `field1`, `field2`, `field3`, and `field4` are never read --> $DIR/clone-debug-dead-code-in-the-same-struct.rs:6:5 | LL | pub struct Whatever { | ^^^^^^^^ | -------- fields in this struct LL | pub field0: (), LL | field1: (), | ^^^^^^ LL | field2: (), | ^^^^^^ LL | field3: (), | ^^^^^^ LL | field4: (), | ^^^^^^ | = note: `Whatever` has a derived impl for the trait `Debug`, but this is intentionally ignored during dead code analysis note: the lint level is defined here --> $DIR/clone-debug-dead-code-in-the-same-struct.rs:1:11 |", "positive_passages": [{"docid": "doc-en-rust-9546ab9a79e7afb097786fe35f6ec641a95b6f3e21077a1cea80490e41620b4f", "text": "The struct is meant to not be constructible. Also the behavior is incoherent between tuple structs (correct behavior) and named structs (incorrect behavior). Also, this is a regression, this didn't use to be the case. No response No response\nlabel +F-never_Type +D-inconsistent\nWe will also get such warnings for private structs with fields of never type (): The struct is never constructed although it is not constructible. This behavior is expected and is not a regression. Did you meet any cases using this way in real world? 
For the incoherent behavior, we also have: This is because we only skip positional ZST (PhantomData and generics), this policy is also applied for never read fields:
Can you elaborate in which sense it is expected? I can understand that it is expected given the current implementation of Rust. It seems between stable and nightly a new lint was implemented. This lint is expected for inhabited types. However, it has false positives on uninhabited types, i.e. types that are only meant for type-level usage (for example to implement a trait). I agree we should probably categorize this issue as a false positive of a new lint rather than a regression (technically). Thanks, I can see the idea behind it. I'm not convinced about it though, but it doesn't bother me either. So let's just keep this issue focused on the false positive rather than the inconsistent behavior.
How do you think about private structs with fields of never type?
I would consider it the same as public structs, because they can still be \"constructed\" at the type-level. So they are dynamic dead-code but not dead-code since they are used statically. It would be dead-code though if the type is never used (both in dynamic code and static code, i.e. types).
Actually, thinking about this issue, I don't think this is a false positive only for the never type. I think the whole lint is wrong. The fact that the type is empty is just a proof that constructing it at term-level is not necessary for the type to be \"alive\" (aka usable). This could also be the case with non-empty types. 
A lot of people just use unit types for type-level usage only (instead of empty types).", "commid": "rust_issue_128053", "tokennum": 509}], "negative_passages": []} {"query_id": "q-en-rust-7d49cb0c1c28911544ef219b4dd6e3e21f6fcc4ce45880631ff8903bd807697f", "query": " error: struct `Whatever` is never constructed --> $DIR/clone-debug-dead-code-in-the-same-struct.rs:4:12 error: fields `field1`, `field2`, `field3`, and `field4` are never read --> $DIR/clone-debug-dead-code-in-the-same-struct.rs:6:5 | LL | pub struct Whatever { | ^^^^^^^^ | -------- fields in this struct LL | pub field0: (), LL | field1: (), | ^^^^^^ LL | field2: (), | ^^^^^^ LL | field3: (), | ^^^^^^ LL | field4: (), | ^^^^^^ | = note: `Whatever` has a derived impl for the trait `Debug`, but this is intentionally ignored during dead code analysis note: the lint level is defined here --> $DIR/clone-debug-dead-code-in-the-same-struct.rs:1:11 |", "positive_passages": [{"docid": "doc-en-rust-97adf45f9b9e264448b397ec08a8e2a265a07410a46c4023c53938df064ef2be", "text": "So my conclusion would be that linting a type to be dead-code because it is never constructed is brittle as it can't be sure that the type is not meant to be constructed at term-level (and thus only used at type-level). Adding is not a solution because if the type is actually not used at all (including in types) then we actually want the warning. Here is an example: The same thing with a private struct: And now actual dead code:\nI think a better way is: is usually used to do such things like just a proof. Maybe we can add a help for this case.\nBut I agree that we shouldn't emit warnings for pub structs with any fields of never type (and also unit type), this is an intentional behavior.\nOh yes good point. So all good to proceed with adding never type to the list of special types.\nThanks a lot for the quick fix! Once it hits nightly, I'll give it a try. 
I'm a bit afraid though that this won't fix the underlying problem of understanding which types are not meant to exist at runtime. For example, sometimes instead of using the never type, I also use empty enums such that I can give a name and separate identity to that type. Let's see if it's an issue. If yes, I guess ultimately there would be 2 options: Just use in those cases. Introduce a trait (or better name) for types that are not meant to exist at runtime like unit, never, and phantom types, but could be extended with user types like empty enums. I'll ping this thread again if I actually hit this issue.\nI could test it (using nightly-2024-08-01) and . I just had to change a to in a crate where I didn't have already. Ultimately, is going to be a type alias for so maybe it's not worth supporting it. And thinking about empty enums, this should never be an issue for my particular usage, which is to build empty types. Because either the type takes no parameters in which case I use an empty enum directly. Or it takes parameters and I have to use a struct with fields, and to make the struct empty I also have to add a never field. So my concern in the previous message is not an issue for me.", "commid": "rust_issue_128053", "tokennum": 495}], "negative_passages": []} {"query_id": "q-en-rust-7d797b1e8c07943b17232e0478b7e03f236d5a61bdaeae30aa19b40cdc01efa9", "query": "/// Parses a statement. This stops just before trailing semicolons on everything but items. /// e.g., a `StmtKind::Semi` parses to a `StmtKind::Expr`, leaving the trailing `;` unconsumed. 
pub fn parse_stmt(&mut self) -> PResult<'a, Option> { Ok(self.parse_stmt_(true)) } fn parse_stmt_(&mut self, macro_legacy_warnings: bool) -> Option { self.parse_stmt_without_recovery(macro_legacy_warnings).unwrap_or_else(|mut e| { Ok(self.parse_stmt_without_recovery(true).unwrap_or_else(|mut e| { e.emit(); self.recover_stmt_(SemiColonMode::Break, BlockMode::Ignore); None }) })) } fn parse_stmt_without_recovery(", "positive_passages": [{"docid": "doc-en-rust-22fb9e2a67d45384b76cb98f00c2c9bb1eb60397fdda5039e6272ede7ca75a41", "text": "The idea of shortening to has been discussed many times before, and the general consensus always ends up being that it is not a good idea. I would like to propose something a bit different, so please hear me out. I propose that instead of aliasing to , we instead just make expand into when in the context of a variable declaration. For example: This solves the problems (readability, more concise) that people have with , whilst not causing the issues that people have (pattern matching and consistency). The most common issue people have with aliasing to is that it doesn't fit well with patterns, like . However, gets rid of this problem, because you wouldn't write if you were trying to pattern match, so why would someone want to write ? You wouldn't do that, and it wouldn't compile. This would be optional, of course. would still work fine, and people would just have the option of removing the to let it be implied. There are a lot of other abbreviations used in Rust (, , ) that all serve the purpose of writing less. It just doesn't feel right that one of the most commonly used keywords is so long. 
:heart::crab: $DIR/unsized-exprs2.rs:22:19 --> $DIR/unsized-exprs2.rs:22:5 | LL | udrop::<[u8]>(foo()[..]); | ^^^^^^^^^ | | | cannot move out of here | move occurs because value has type `[u8]`, which does not implement the `Copy` trait | ^^^^^^^^^^^^^^^^^^^^^^^^ | | | cannot move out of here | move occurs because value has type `[u8]`, which does not implement the `Copy` trait error: aborting due to previous error", "positive_passages": [{"docid": "doc-en-rust-86145dda3b6fd95b4efbdb679c46b9803ebc45ffea7ba9a1d53d1d81b4427672", "text": "The following code fails on stable: When compiled in release mode, the assert is optimized out by LLVM even though it checks the exact same condition as the other assert, as can be verified by testing in debug mode. Note that it's theoretically possible for the stack to be aligned correctly such that the bug is suppressed, but that's not likely. The 256's can be replaced with larger alignments if that happens. The bug is caused by the impl of copying to the stack with the default alignment of 16. ()\nand seem related\nThis is a soundness hole.\nSo... I guess the problem is that the that rustc emits for the unsized local doesn't use ? That's... kind of bad, because it doesn't look like even supports dynamically setting the alignment: If this is correct, the issue should be renamed to something like \"unsized locals do not respect alignment\".\nA way to fix this would be to change the ABI for dynamic trait calls that take by value to always pass by address. 
When the trait object is created is where the trampoline function would be generated, so there is no additional cost for static calls.\nThis can also be fixed by not allowing dynamically-sized local variables with an alignment more than some maximum value and having all dynamically-sized locals use that maximum alignment unless optimizations can prove that a smaller alignment suffices.\nMessing with the ABI wouldn't help with unsized locals in general (assuming is correct), since e.g. will also need an alloca with dynamic alignment in general. Given only a value, there is no static way to say anything about its alignment, so how would you distinguish whether calling a method on it is OK?\nCan we call and then align the pointer for the local ourselves as a workaround until we come up with a good solution?\nIn clang there's a function which might or might not be useful in this case.\nrequires a constant alignment. Non constant alignments are . This matches semantics of LLVM instruction, so it's not helping, as alignment of value in is unknown at compile time when moving from .\nI think that overallocating for unsized locals of unknown alignment may make sense. You can and then make use of to compute the offset. Sure, it increases stack requirements, but it seems acceptable for , and in most cases you end up with extra 7 bytes on stack or less. 
256 alignment seems like an edge case where wasting 255 bytes doesn't matter.", "commid": "rust_issue_68304", "tokennum": 530}], "negative_passages": []} {"query_id": "q-en-rust-7eaf3acf7057c86cb3c04930e6bed0552f8967b3037d79014b329038cd28b562", "query": "error[E0508]: cannot move out of type `[u8]`, a non-copy slice --> $DIR/unsized-exprs2.rs:22:19 --> $DIR/unsized-exprs2.rs:22:5 | LL | udrop::<[u8]>(foo()[..]); | ^^^^^^^^^ | | | cannot move out of here | move occurs because value has type `[u8]`, which does not implement the `Copy` trait | ^^^^^^^^^^^^^^^^^^^^^^^^ | | | cannot move out of here | move occurs because value has type `[u8]`, which does not implement the `Copy` trait error: aborting due to previous error", "positive_passages": [{"docid": "doc-en-rust-35b067be423839cfae33dfa0b48475783f27ab1ed0904fd969f99c950d943054", "text": "if we keep setting as the default alignment we can skip the overallocating and offset dance for types with an alignment below that.\nMy idea was that rustc would prohibit coercing overaligned types to a dyn trait that takes by value. So:\nThe alignment is dynamically determined though, not sure if saving 15 bytes of stack space is worth the conditional branch?\nOh right. Yea that's probably not helpful.\nSince , this would be backwards incompatible (at latest) once trait object upcasting is finally implemented: and trait objects would have to have the same alignment restriction (because you can upcast to ), but today you can freely create such objects with any alignment.\nI was under the impression that calling didn't require dynamic , is that related to missing MIR optimizations, meaning we have unnecessary moves which end up duplicating the data on the stack using a dynamic ?\nCorrectness shouldn't rely on optimizations to fire...\nSure, I was just hoping we weren't relying on \"unsized locals\", only unsized parameters (i.e. ), for what we're exposing as stable. 
And I was hoping an optimization wouldn't need to be involved to get there. I wonder if it would even be possible to restrict \"unsized locals\" to the subset where we can handle it entirely through ABI, and not actually have dynamic s anywhere, since that would make me far more comfortable with 's implementation or even stabilization. Ironically, if that works, it's also the subset we could've implemented years earlier than we did, since most of the complexity, unknowns (and apparently bugs) are around dynamic s, not calling trait object methods on a dereference of .\nThat should be possible. I think everything that is necessary is performing the deref of the box as argument () instead of moving the box contents to a local first () That would also make it possible to call in cg_clif. Cranelift doesn't have alloca support, so the current MIR is untranslatable.\nHmm, but that's not trivially sound because it could lead to MIR calls where argument and return places overlap, which is UB. We'd potentially have to special-case it to derefs, but even then what's to stop someone from doing ? Maybe we could introduce a temporary for the return value instead, for these kinds of virtual calls? 
That should remove any potential overlap while allowing the call.", "commid": "rust_issue_68304", "tokennum": 517}], "negative_passages": []} {"query_id": "q-en-rust-7eaf3acf7057c86cb3c04930e6bed0552f8967b3037d79014b329038cd28b562", "query": "error[E0508]: cannot move out of type `[u8]`, a non-copy slice --> $DIR/unsized-exprs2.rs:22:19 --> $DIR/unsized-exprs2.rs:22:5 | LL | udrop::<[u8]>(foo()[..]); | ^^^^^^^^^ | | | cannot move out of here | move occurs because value has type `[u8]`, which does not implement the `Copy` trait | ^^^^^^^^^^^^^^^^^^^^^^^^ | | | cannot move out of here | move occurs because value has type `[u8]`, which does not implement the `Copy` trait error: aborting due to previous error", "positive_passages": [{"docid": "doc-en-rust-e0557b2e15a4d389c0583e27ec38e510a3d214eb85386293065b15f95dc546f5", "text": "(Unrelatedly, I just realize we use a copying-out shim for vtable entries that take by value, even when would be passed as at the call ABI level. We would probably have to make computing the call ABI a query for this microopt to not be expensive)\nThat would result in being moved to a temp first before dropping and calling the boxed closure. If it wasn't moved first before the drop of , then borrowck would give an error. (I meant for this change to be done at mir construction level, not as an optimization pass, so borrowck should catch any soundness issues)\nWe can try it, I guess, but right now a lot relies on many redundant copies/moves being created by MIR building and may not catch edge cases like this. But also I don't think moving out of a moves the itself today. 
Me too, but I was going with the \"correct by construction\" approach of creating a redundant copy/move, just on the return side, instead of the argument side, of the call.\nAfter revisiting on triage, removing nomination.\npre-triage: nominated for discussion in the next weekly meeting.\nI just checked and, a function call roughly this (touched up for clarity): Some of my previous comments assumed the wasn't there and the call would use the destination directly, but given that there is a move, that simplifies my suggestion.


--> $DIR/unsized-exprs2.rs:22:19 --> $DIR/unsized-exprs2.rs:22:5 | LL | udrop::<[u8]>(foo()[..]); | ^^^^^^^^^ | | | cannot move out of here | move occurs because value has type `[u8]`, which does not implement the `Copy` trait | ^^^^^^^^^^^^^^^^^^^^^^^^ | | | cannot move out of here | move occurs because value has type `[u8]`, which does not implement the `Copy` trait error: aborting due to previous error", "positive_passages": [{"docid": "doc-en-rust-5beba8d62abc39b05f6e650dcce3335b6738bd1d349d26aea50dceae17630e1f", "text": "I'm in favor of this proposal, but I do think we want to be slightly careful to ensure we're not violating soundness. The main concern would be if there is some way to access or re-use the variable that was moved. I attempted to do so with this example () but it fails to compile: The error here arises because the closure would need to capture a to do anything, which prevents a later move. But maybe there is some kind of tricky example that can arise where we re-assign the value during argument evaluation?\ncompiles and does before , where is the result of .\nBorrowck should prevent that, right?\nborrowck may rely on MIR building creating extraneous copies, so we can't just assume it helps us. IMO we should limit it to functions taking unsized parameters and function calls passing such parameters or dereferences of es (any other pointer/reference doesn't support moving out of).\nWell, I think is right that borrowck would catch violations of soundness in some strict sense, but I guess what I meant is that we might be breaking backwards compatibility.\nThe example that feels pretty relevant. If I understood your proposal, it would no longer compile. If I recall (thinking back), I had a plan to handle this by save the to an alternate location in the actual call, we pass ownership of and then do a shallow free of the This still seems like it would work, though I'm not sure how easy/elegant it is to formulate. 
In a sense we'd be rewriting to .\nIsn't example dependent on doing a direct virtual call? My understanding is that you can't do this on stable with because the virtual call exists in one place, in libstd, and it doesn't do anything weird, so it would work with a restricted ruleset. EDIT: if it can work on stable, it's because calling 's will move the whole before the assignment is evaluated, which is fine, because the shallow drop is actually inside that , not its caller.\nI agree that this code doesn't compile on stable today, but I'm not sure if that's the point. I guess you're saying it's ok for it to stop compiling? I think what I would like is that our overall behavior is as consistent as possible between Sized and Unsized parameters. I think we could actually do a more general change that doesn't special case Sized vs Unsized or anything else.", "commid": "rust_issue_68304", "tokennum": 522}], "negative_passages": []} {"query_id": "q-en-rust-7eaf3acf7057c86cb3c04930e6bed0552f8967b3037d79014b329038cd28b562", "query": "error[E0508]: cannot move out of type `[u8]`, a non-copy slice --> $DIR/unsized-exprs2.rs:22:19 --> $DIR/unsized-exprs2.rs:22:5 | LL | udrop::<[u8]>(foo()[..]); | ^^^^^^^^^ | | | cannot move out of here | move occurs because value has type `[u8]`, which does not implement the `Copy` trait | ^^^^^^^^^^^^^^^^^^^^^^^^ | | | cannot move out of here | move occurs because value has type `[u8]`, which does not implement the `Copy` trait error: aborting due to previous error", "positive_passages": [{"docid": "doc-en-rust-c0a55b5a9c88bb1e08f508b1fb679a6cf9e068e808080659d5f7ae2be7470592", "text": "I think I am basically proposing the same thing as earlier, but with a twist: Today, when building the parameters, we introduce a temporary for each parameter. 
So if we had we might do: What I am saying is that we could say: If the argument is moved (not copied), and the \"place\" has outer derefs (e.g., we are passing where is some place), then we assign the place to a temporary and we make the argument to the function be : I don't believe this is observable to end-users, except potentially for when the destructor on runs. I believe that it accepts all the same code we used to accept. It also has nothing specific to or types at all. Does that make sense? I'm very curious to know if anyone sees any problems with it. (One obvious thing: if we ever add , its design would have to accommodate this requirement, but I think that's fine, in fact the complications around getting moves right for precisely this case are one of the reasons we've held off on thus far.)\nCan you not reinitialize today? If you move the whole , it is indeed more conservative (and should let us skip unnecessary moves even in cases). But also, do you want to move the locals check from typeck to borrowck? I don't have an intuition for when your approach wouldn't create unsized MIR locals, from the perspective of typeck. However, this is only relevant if we want a stricter feature-gate. At the very least it seems like it would be easier to implement a change only to MIR building, and it should fix problems with for the time being, even without a separate feature-gate.\nHmm, that's a good point, I think do that. Sigh. I knew that seemed too easy. I remember we used to try and prevent that, but we never did it fully, and in the move to NLL we loosened those rules back up, so this compiles: Still, you could do my proposal for unsized types, though it would introduce an incongruity. UPDATE: Oh, wait, I remember now. For unsized types, reinitialization is not possible, because you can't do with an unsized rvalue (we wouldn't know how many bytes to move). 
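The reinitialization point made above ("this compiles") can be demonstrated for sized locals; this is a hedged sketch of the behavior, not code from the issue:

```rust
// Under NLL, a moved-from local of *sized* type may be reassigned and
// used again. For an unsized local there is no rvalue of the right
// type to reassign, so the incongruity discussed above does not arise.
fn take(_: Box<i32>) {}

fn demo() -> i32 {
    let mut b = Box::new(1);
    take(b); // `b` is moved out here...
    b = Box::new(2); // ...and reinitialized, which borrowck accepts
    *b
}

fn main() {
    assert_eq!(demo(), 2);
}
```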
So in fact there isn't really the same incongruity.", "commid": "rust_issue_68304", "tokennum": 510}], "negative_passages": []} {"query_id": "q-en-rust-7eaf3acf7057c86cb3c04930e6bed0552f8967b3037d79014b329038cd28b562", "query": "error[E0508]: cannot move out of type `[u8]`, a non-copy slice --> $DIR/unsized-exprs2.rs:22:19 --> $DIR/unsized-exprs2.rs:22:5 | LL | udrop::<[u8]>(foo()[..]); | ^^^^^^^^^ | | | cannot move out of here | move occurs because value has type `[u8]`, which does not implement the `Copy` trait | ^^^^^^^^^^^^^^^^^^^^^^^^ | | | cannot move out of here | move occurs because value has type `[u8]`, which does not implement the `Copy` trait error: aborting due to previous error", "positive_passages": [{"docid": "doc-en-rust-8eda5e2da06426c5ca0d1e41f6002aa4939e49c4190b6fbed85bbea5acf8da8e", "text": "To be clear, I wasn't proposing changing anything apart from MIR building either, I don't think. IIUC, what you proposed is that would be compiled to: whereas under my proposal (limited to apply to unsized parameters, instead of moved parameters) would yield: I think I mis-stated the problem earlier. It's not so much that your proposal will introduce compilation errors, it's that I think it changes the semantics of code like this in what I see as a surprising way: I believe that this code would compile to the following under your proposal: which means that would be invoked with the new value of , right? Under my proposal, the code would still compile, but it would behave in what I see as the expected fashion. Specifically, it would generate:\nIsn't there a way to \"lock\" that value through the borrow-checker, so further accesses would be disallowed, until the call returns? I'd rather be stricter here.
--> $DIR/unsized-exprs2.rs:22:19 --> $DIR/unsized-exprs2.rs:22:5 | LL | udrop::<[u8]>(foo()[..]); | ^^^^^^^^^ | | | cannot move out of here | move occurs because value has type `[u8]`, which does not implement the `Copy` trait | ^^^^^^^^^^^^^^^^^^^^^^^^ | | | cannot move out of here | move occurs because value has type `[u8]`, which does not implement the `Copy` trait error: aborting due to previous error", "positive_passages": [{"docid": "doc-en-rust-27fd42f01a24ee7bd4fa9f1c0f7bf282c5af37d103c7e5a053a6ed0f28b8ecdb", "text": "In particular, the IR for under your proposal, if I understood, would be But the access to that (I believe) we wish to prevent takes place in the \"evaluate remaining arguments\" part of things. So I think we'd have to add some sort of \"pseudo-move\" that tells borrowck that should not be assignable from that point forward, right? Well, it depends on your perspective. It's true that, given the need to permit reinitialization, the check is now specific to unsized values -- i.e., MIR construction will be different depending on whether we know the type to be sized or not. That's unfortunate. However, I think we still maintain a degree of congruence, from an end-user perspective. In particular, users could not have reinitialized a anyhow. However, under my proposal, users can write things like and the code works as expected (versus either having an unexpected result or getting an error). 
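The "code works as expected" behavior for sized values comes from each argument being captured into a temporary before later arguments run; an illustrative sketch, relying only on Rust's guaranteed left-to-right argument evaluation:

```rust
// The first argument `*x` is evaluated (copied into a temporary)
// before the second argument's block mutates `x`, so the call sees
// the old value of the first argument and the new value of the second.
fn pair(a: i32, b: i32) -> (i32, i32) {
    (a, b)
}

fn demo() -> (i32, i32) {
    let mut x = Box::new(1);
    pair(*x, {
        x = Box::new(10); // overwrite `x` during argument evaluation
        *x
    })
}

fn main() {
    assert_eq!(demo(), (1, 10));
}
```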
In any case, it seems like we're sort of \"narrowing in\" on a proposal here: when we see a function call with a parameter that is , and the value being passed is a deref, we want to modify the IR to not introduce a temporary and instead pass that parameter directly from its location, either we do some sort of \"no more assignments\" lock, or we move the pointer and introduce a temporary (and then pass in a deref of that temporary) Does that sound right so far?\nIf it's easier to do the more flexible approach (by moving the pointer) than the \"locking\" approach (which is a bit like borrowing the pointer until all arguments are evaluated, then releasing the borrow and doing a move instead), I don't mind it, I was only arguing with the \"uniformity\" aspect. Something important here: your approach is stabilizable, while mine is meant more for that one in the standard library, and not much else.\nYes, I was kind of shooting for something that we could conceivably stabilize.\nRemoving nomination, this was already discussed in triage meeting and it's under control.\nSo should we maybe try out the approach I proposed and see how well it works?\nyes, going to try that out.\nFor the record, the \"new plan\" is this.", "commid": "rust_issue_68304", "tokennum": 480}], "negative_passages": []} {"query_id": "q-en-rust-7eaf3acf7057c86cb3c04930e6bed0552f8967b3037d79014b329038cd28b562", "query": "error[E0508]: cannot move out of type `[u8]`, a non-copy slice --> $DIR/unsized-exprs2.rs:22:19 --> $DIR/unsized-exprs2.rs:22:5 | LL | udrop::<[u8]>(foo()[..]); | ^^^^^^^^^ | | | cannot move out of here | move occurs because value has type `[u8]`, which does not implement the `Copy` trait | ^^^^^^^^^^^^^^^^^^^^^^^^ | | | cannot move out of here | move occurs because value has type `[u8]`, which does not implement the `Copy` trait error: aborting due to previous error", "positive_passages": [{"docid": "doc-en-rust-7cf635da4f5adcfae533b40271ecf10f20aa01733028a100f12d429f49c1ae0c", "text": 
"When generating the arguments to a call, we ordinarily will make a temporary containing the value of each argument: But we will modify this such that, for arguments that meet the following conditions: The argument is moved (not copied), and its type is not known to be , and the \"place\" has outer derefs (e.g., we are passing P where P is some place), then we assign the place P (instead of ) to a temporary and we make the argument to the function be (instead of ). This means that given where , we used to generate but now we generate We are now moving the box itself, which is slightly different than the semantics for values known to be sized. In particular, it does not permit of the box after having moved its contents. But that is not supported for unsized types anyway. The change does permit modifications to the callee during argument evaluation and they will work in an analogous way to how they would work with sized values. So given something like: we will find that we (a) save the old value of to a temporary and then (b) overwrite while evaluating the arguments and then (c) invoke the with the old value of .\nUpdate: There is also\nis a perfectly normal operand, I don't know what you mean. It's the same kind of as we currently have in (on the RHS)\nOh sorry, somehow I thought operands could only refer directly to locals, not also dereference them. I think I am beginning to see the pieces here -- together with function argument handling, this entirely avoids having to dynamically determine the size of a local. I have to agree that is quite elegant, even though baking \"pass-by-reference\" into MIR semantics still seems like a heavy hammer to me. On the other hand this places a severe restriction on how unsized locals may be used, basically taking back large parts of Is there some long-term plan to bring that back (probably requires with dynamic alignment in LLVM)? 
Or should we amend the RFC or at least the tracking issue with the extra restrictions?\nI would keep the more general feature but use two different feature-gates (if we can). And just like with a few other feature-gates () we could warn that (the general one) is incomplete and could lead to misaligned variables (or something along those lines).", "commid": "rust_issue_68304", "tokennum": 499}], "negative_passages": []} {"query_id": "q-en-rust-7eaf3acf7057c86cb3c04930e6bed0552f8967b3037d79014b329038cd28b562", "query": "error[E0508]: cannot move out of type `[u8]`, a non-copy slice --> $DIR/unsized-exprs2.rs:22:19 --> $DIR/unsized-exprs2.rs:22:5 | LL | udrop::<[u8]>(foo()[..]); | ^^^^^^^^^ | | | cannot move out of here | move occurs because value has type `[u8]`, which does not implement the `Copy` trait | ^^^^^^^^^^^^^^^^^^^^^^^^ | | | cannot move out of here | move occurs because value has type `[u8]`, which does not implement the `Copy` trait error: aborting due to previous error", "positive_passages": [{"docid": "doc-en-rust-e815c1522a134b198016730d0062b3d332c75119cceb0b89e314acd16ab4a8f8", "text": "We could also add runtime asserts that the (dynamic) alignment is respected, once we know could never trigger them (e.g. from stable code).\nI was wondering about that too. It seems like it'd be good to review the more general feature in any case. 
This is partly why I brought up the idea of wanting a guaranteed \"no alloca\" path -- this was something I remember us discussing long ago, and so in some sense I feel like the work we're doing here isn't just working around LLVM's lack of dynamic alignment support (though it is doing that, too) but also working to complete the feature as originally envisioned.\nI added two notes to the tracking issue so we don't overlook this in the future.\nThis is probably expected, but the fix failed to actually make sound:", "commid": "rust_issue_68304", "tokennum": 172}], "negative_passages": []} {"query_id": "q-en-rust-7ec4d42010ca12ae3cbfcdd9a803c10b30ed2e66707295f5c68af8ea6162b9f7", "query": "rm $(TMPDIR)/bar $(RUSTC) foo.rs --emit=asm,ir,bc,obj,link --crate-type=staticlib rm $(TMPDIR)/bar.ll rm $(TMPDIR)/bar.bc rm $(TMPDIR)/bar.s rm $(TMPDIR)/bar.o rm $(TMPDIR)/$(call STATICLIB_GLOB,bar)", "positive_passages": [{"docid": "doc-en-rust-a80b97a4b1136e26628808d1be2a10e26d684af7165cc3301985491de63de5fb", "text": "Passing rustc the flag and passing may produce different files (that is, outputs). This is true of rust-core (see below), though was not the case for a simpler (\"Hello, world!\") test file. Test script: Diff output indicates binary files differ. Tested with , rust-core on . rustc compiled from master on Arch Linux. If this is, in fact, expected behavior - sorry for the trouble, unexpected to me.\nCan you provide a diff between the outputs? This is likely a known issue, but I would like to be sure.\n.bc is a binary format, so just gives \"Binary files and differ\" as its output. On my machine, is 19K and is 62K. 
I'm not familiar enough with the format to extract much more data.\nI note the difference persists when the second line is (two separate flags), though both produce both artifacts.\nCan you pass and diff those outputs?\nWith emit flags as and respectively, there is no difference in and (according to diff).\nIt seems that every bytecode is invalid only when an rlib is produced. It stopped working between rust-git-0.9.1301 and 1362 (shortly before 2014-02-23 15:37:05 -0800). Going back through the log, I found I believe that creates deflated bytecode. Perhaps should be stored with extension or .", "commid": "rust_issue_12992", "tokennum": 299}], "negative_passages": []} {"query_id": "q-en-rust-7ef9a2da9e0ec31968acd5d78cab2c2cf73794bc31c1f8388d2e885fba7b449b", "query": "[1.8h]: https://github.com/rust-lang/rust/pull/31460 [1.8l]: https://github.com/rust-lang/rust/pull/31668 [1.8m]: https://github.com/rust-lang/rust/pull/31020 [1.8m]: https://github.com/rust-lang/rust/pull/31534 [1.8mf]: https://github.com/rust-lang/rust/pull/31534 [1.8mp]: https://github.com/rust-lang/rust/pull/30894 [1.8mr]: https://users.rust-lang.org/t/multirust-0-8-with-cross-std-installation/4901 [1.8ms]: https://github.com/rust-lang/rust/pull/30448", "positive_passages": [{"docid": "doc-en-rust-ec93b61328c9666a771dcbaee01d4f3175b6dfc4cee7d2970ce7a3e072a48fa8", "text": "These are all legal, and somewhat strange, since they're not really following the file-system analogy that modules are typically explained with. I expect this behaviour is just whatever happens to fall out of the current implementation strategy, so let's make a more explicit decision one way or another. As one might expect, each of these is a distinct instance of (which can be verified by placing in : the compiler will greet you 3 times). Noticed in .\nThis seems weird to me. 
I think the intention is that the out-of-line syntax () should only work at the top-level.\nI'd vote for allowing non-inline modules only at the top-level, to retain a clear mapping to the file system.\n(Or nested inside modules.)\ntriage: P-medium Backwards incompatible, but ... kind of a curiosity.\nFound one more mod aliasing scenario. It involves several files, and is semi-officially allowed by the reference. The layout The files The copy module will be aliased. The problem is that although and are not directory owners, their inner modules are.\nUpon reflecting more about this, I think that there are two questions here. should non-inline modules be allowed (syntactically) inside items? should module aliasing (the same file representing two modules in the crate) be allowed? The questions are independent because you can use attribute to create module aliasing manually. I would prefer if the answer to the second question is not. I'm contributing to the Rust plugin for IntelliJ IDEA, and one of the tasks there is mapping files to modules (for example, to go to the parent module). I would much prefer just to highlight module aliasing as an error, rather than to deal with a non-one-to-one mapping between files and modules. As a side note, if a user does want aliasing for some reason, she can always use a symbolic link for the same effect.\nI tend to believe it is fine to have aliasing if you write the yourself (you asked for it...), and that it would be ok to put a non-inline module inside a non-module, if you specify the path yourself, but that we should error out otherwise. 
However, there has definitely been confusion in the past about the role of as a declaration and not a , and disallowing aliasing would prevent that confusion from leading to incorrect trees.", "commid": "rust_issue_29765", "tokennum": 498}], "negative_passages": []} {"query_id": "q-en-rust-7ef9a2da9e0ec31968acd5d78cab2c2cf73794bc31c1f8388d2e885fba7b449b", "query": "[1.8h]: https://github.com/rust-lang/rust/pull/31460 [1.8l]: https://github.com/rust-lang/rust/pull/31668 [1.8m]: https://github.com/rust-lang/rust/pull/31020 [1.8m]: https://github.com/rust-lang/rust/pull/31534 [1.8mf]: https://github.com/rust-lang/rust/pull/31534 [1.8mp]: https://github.com/rust-lang/rust/pull/30894 [1.8mr]: https://users.rust-lang.org/t/multirust-0-8-with-cross-std-installation/4901 [1.8ms]: https://github.com/rust-lang/rust/pull/30448", "positive_passages": [{"docid": "doc-en-rust-13193e7d8cc7de52ca12bbee1c181886fde5ae3673cf0de803a8511204caf844", "text": "however is disconcerting, and suggests that perhaps the code has a slightly different model in mind than I do in my head. :) That is, I strongly expect to (by default) be found at . What happens if you don't have the , but just place in ?\nIt is forbidden because is not a directory owner.\nJust in case here is a repository with the example: I don't see a reason to allow a user to shoot himself in the leg. What if you accidentally have identical attributes (copy-paste problem)? My main concern though is that a non-injective file-to-module mapping may complicate the implementation of some rust tools. Here is another hypothetical example of this. Suppose you implement incremental compilation for rustc. You will need a mapping from files to modules. If mod aliasing is allowed, then you will need a multimapping. So you end up with more complex code which is used only in a tiny fraction of projects and because of this probably contains some bugs.\nAh, I see. 
So it seems to me that the problem is that should not be allowed in contexts where would be illegal. In other words, an inline module should only be considered a \"directory owner\" if it is located within a directory owner. A compromise might be Yet Another Lint Warning. ;) I agree with you about preventing people from shooting themselves in the leg, but I guess I think people don't use lightly or frequently, so this seems unlikely to be a major source of confusion.\nI totally agree that this curiosity is insignificant for users. But I still worry more about the implementation side of the issue, and a lint warning does not help here.", "commid": "rust_issue_29765", "tokennum": 346}], "negative_passages": []} {"query_id": "q-en-rust-7f0c1971d452a503bc23a5c950010ce945b2e0f81ed1dacf1936ae72d5077074", "query": "pub fn emit_feature_err(diag: &SpanHandler, feature: &str, span: Span, explain: &str) { diag.span_err(span, explain); // #23973: do not suggest `#![feature(...)]` if we are in beta/stable if option_env!(\"CFG_DISABLE_UNSTABLE_FEATURES\").is_some() { return; } diag.fileline_help(span, &format!(\"add #![feature({})] to the crate attributes to enable\", feature));", "positive_passages": [{"docid": "doc-en-rust-fa5de52d3ace12677445254bb6cdb3896b1f26f61b4e6e81d95ebf54de71f95c", "text": "Steps to reproduce: The output of is: does as suggested by the message, but that fails too (as expected): Proposed fix: in the beta and stable channels (i.e. whenever is forbidden), the message that suggests adding should not be shown. (Clean up the test case with .) CC\nGonna milestone this real quick, as it seems quite important.\nI might be able to whip up a quick fix, though I'm not 100% sure about testing it properly ... (I've never actually tried building something for a non-dev release channel before...)\nafter chatting with it seems like I have a patch, building it now\ngonna leave to it\nHello good Rust folks :smile: Before the beta, I used which is in . 
Is there any way to use this trait during the beta? I'm not sure how I can get the code to compile. Thanks for any tips/hints you can offer :+1:\nHi! Commenting on random unrelated issues is usually not the right thing to do. You should rather open a new issue, or ask on . But to answer your question anyway, that trait only has : \u2026 and is stable and can be used on beta. So I\u2019d recommend you copy/paste the trait into your code, or make a function.\nThanks for taking the time to answer. I appreciate it. I don't see how this is unrelated. This is related, since the code I'm talking about used to compile with and the compiler suggests to do that, during the beta period. Thanks for your tip on copying the function :+1: I guess I'll just do that if nothing better is available and change this again when the trait is stable.\nThis specific issue is about fixing one of the compiler\u2019s error messages, not about fixing code that triggers that error. It really helps everyone to keep discussions on topic.\nAlright. I understand. I'll make sure to open a new issue or go on StackOverflow next time if appropriate. Thanks again :+1:\nThis doesn't seem to be properly fixed in the beta channel. I'm still being recommended to add in the errors, and then told it's not available in the beta channel. 
Rust version: The code I used (snappy example in the docs with the extra addition on the first line:\nThis happened after Beta release.", "commid": "rust_issue_23973", "tokennum": 490}], "negative_passages": []} {"query_id": "q-en-rust-7f0c1971d452a503bc23a5c950010ce945b2e0f81ed1dacf1936ae72d5077074", "query": "pub fn emit_feature_err(diag: &SpanHandler, feature: &str, span: Span, explain: &str) { diag.span_err(span, explain); // #23973: do not suggest `#![feature(...)]` if we are in beta/stable if option_env!(\"CFG_DISABLE_UNSTABLE_FEATURES\").is_some() { return; } diag.fileline_help(span, &format!(\"add #![feature({})] to the crate attributes to enable\", feature));", "positive_passages": [{"docid": "doc-en-rust-b4e439e7e28004a888d7eeea745c4600205c8125bbf9c901393d71174d4795f9", "text": "We'll see the effect of this patch in 1.0 and afterwards Beta, etc.\nWhoops, I thought this made it in before beta. Thanks!\nare there plans to release updates to the beta channel within a given 6 weeks cycle? Either this one or in the future.\nI'm certain there will be updates; see discussion from last night here:\nSo how does one fix this error? I'm a new user of the rust library and I can't seem to use the collections library during the beta due to this problem. Given this git issue is a top google result, it would be nice if there were a solution here. Edit: Looks like the solution is to switch to the rust nightly. Is the entire collections library off-limits in the beta?\nby design, you have access to the parts of that are re-exported from (and are stable).", "commid": "rust_issue_23973", "tokennum": 181}], "negative_passages": []} {"query_id": "q-en-rust-7f5fc2e420d7d957d56257ae65aecab8528201adfd7c88f03dbea1dbb21fa863", "query": " // Copyright 2017 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. 
// // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. // // no-system-llvm // compile-flags: -O #![crate_type=\"lib\"] #[no_mangle] pub fn sum_me() -> i32 { // CHECK-LABEL: @sum_me // CHECK-NEXT: {{^.*:$}} // CHECK-NEXT: ret i32 6 vec![1, 2, 3].iter().sum::() } ", "positive_passages": [{"docid": "doc-en-rust-534e1717015c5d072ad75ef23b6fabc05af6acebd4a2d49c1e1aebceff0ad1be", "text": "Rustc used to be able to replace with a constant when optimizing, not at the moment. It simply looks like a Layout method that is not inlinable. (This is a regression strictly speaking \u2014 stable & beta can do this optimization, but not nightly.) Code to reproduce:\nAre there any ways to write automated tests to check for \"regressions\" like this? Assertions against LLVM bitcode?\nYes can be used for this, e.g. Assuming this is run against the bitcode after optimization, that is.\ncc -- seems LLVM related\nNope. Just () that is not marked as inline.\ncc\nI didn't confirm that making Layout::repeat as inline fixes this, so please confirm that/add a test before closing.", "commid": "rust_issue_43272", "tokennum": 161}], "negative_passages": []} {"query_id": "q-en-rust-7f6790480771b6671343513ae337652136e85cf3e8ed9b2bde14853d2749303c", "query": "// In theory, any zero-sized value could be borrowed // mutably without consequences. However, only &mut [] // is allowed right now, and only in functions. if self.const_kind == Some(hir::ConstContext::Static(hir::Mutability::Mut)) { // Inside a `static mut`, &mut [...] is also allowed. match ty.kind() { ty::Array(..) | ty::Slice(_) => {} _ => return Err(Unpromotable), } } else if let ty::Array(_, len) = ty.kind() { if let ty::Array(_, len) = ty.kind() { // FIXME(eddyb) the `self.is_non_const_fn` condition // seems unnecessary, given that this is merely a ZST. 
match len.try_eval_usize(self.tcx, self.param_env) {", "positive_passages": [{"docid": "doc-en-rust-a8a4ce7a1f7159520cb23f17ce512eac52ede2baae114ca9b5c37a5c766ec511", "text": "Consider the following code: This should just work. But instead it throws an error: The reason for this is that gets lifetime extended via promotion, so this reference now points to a separate mutable static -- but mutable statics cannot be mutated during CTFE. I see no way to fix this, other than no longer promoting mutable references -- which we should IMO do anyway, they are a very strange and unprincipled special case. Just imagine doing this in a loop, suddenly all these different and mutable allocations share the same address... Cc\nUgh, that's an ancient rule (believe it or not, it's a workaround for not being able to write and have the length be inferred!) and it was never meant to get this far, I guess I didn't think it through properly when adding the \"outermost scope\" rule, which is the only position where allowing it makes sense. In the current implementation, it should've never been promoted, only allowed, my bad!", "commid": "rust_issue_75556", "tokennum": 217}], "negative_passages": []} {"query_id": "q-en-rust-7f85300e426c4b75ba2ee81ce3fb96d9f13868dd994b638f421b83a58211e89c", "query": "// for where the type was defined. On the other // hand, `paths` always has the right // information if present. Some(&( ref fqp, ItemType::Trait | ItemType::Struct | ItemType::Union | ItemType::Enum, )) => Some(&fqp[..fqp.len() - 1]), Some(..) => Some(&*self.cache.stack), Some(&(ref fqp, _)) => Some(&fqp[..fqp.len() - 1]), None => None, }; ((Some(*last), path), true)", "positive_passages": [{"docid": "doc-en-rust-127ec8bc25a2afdf8e9f5817ddbaa0ef3a2cb2f7097dce45b0cc5cd164bd9c6c", "text": "There are bad (404) links in search results with nightly rustdoc. E.g. go to and then click the result. This leads to which does not exist. 
Cc\nActually, this issue seems to exist even on stable -- somehow it went unnoticed for quite a while, it seems.\nI think it's because we are badly handling a reexport/type alias.\nI think this is a duplicate of", "commid": "rust_issue_83991", "tokennum": 86}], "negative_passages": []} {"query_id": "q-en-rust-7fc8299a0747ad1f230a31ecb059e27e6c4b7d551aaba4d0f218d16f61244d25", "query": " // Assuming that the hidden type in these tests is `&'_#15r u8`, // we have a member constraint: `'_#15r member ['static, 'a, 'b, 'c]`. // // Make sure we pick up the minimum non-ambiguous region among them. // We will have to exclude `['b, 'c]` because they're incomparable, // and then we should pick `'a` because we know `'static: 'a`. // check-pass trait Cap<'a> {} impl Cap<'_> for T {} fn type_test<'a, T: 'a>() -> &'a u8 { &0 } // Basic test: make sure we don't bail out because 'b and 'c are incomparable. fn basic<'a, 'b, 'c>() -> impl Cap<'a> + Cap<'b> + Cap<'c> where 'a: 'b, 'a: 'c, { &0 } // Make sure we don't pick `'static`. fn test_static<'a, 'b, 'c, T>() -> impl Cap<'a> + Cap<'b> + Cap<'c> where 'a: 'b, 'a: 'c, T: 'a, { type_test::<'_, T>() // This will fail if we pick 'static } fn main() {} ", "positive_passages": [{"docid": "doc-en-rust-52ff78bb803c348d23865431fcf95aca9114b024ef3ad08fdb44f7d12198f25f", "text": "The following code should pass regardless of the declaration order of lifetimes: This is related to but it's different in that we do have a lower bound region, , but we fail to recognize that because , which is responsible for calculating from , assumes that the outlive relation is a total order, which is not true. Here is the graph of : ! So we need an efficient algorithm that works for partial order relations, but the one that comes to mind is . The perf may not be really important here but I'm curious to see the alternatives. 
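The shape of the failing program can be reduced as in the issue's own test (`Cap` and `basic` are the test's names); with `'a: 'b` and `'a: 'c`, `'a` is the minimum non-ambiguous member-constraint choice even though `'b` and `'c` are incomparable, and the code compiles on toolchains where that minimum is computed correctly:

```rust
// Hidden type of the opaque is a reference, giving a member
// constraint roughly of the form `'hidden member ['static, 'a, 'b, 'c]`.
trait Cap<'a> {}
impl<T> Cap<'_> for T {}

fn basic<'a, 'b, 'c>() -> impl Cap<'a> + Cap<'b> + Cap<'c>
where
    'a: 'b,
    'a: 'c,
{
    &0
}

fn main() {
    // The interesting part is that this type-checks at all; the
    // returned opaque has the layout of a plain reference.
    assert_eq!(std::mem::size_of_val(&basic()), std::mem::size_of::<&i32>());
}
```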
label C-bug T-compiler A-borrow-checker E-mentor E-help-wanted $DIR/erase-error-in-mir-drop-tracking.rs:11:46 | LL | fn connect(&'_ self) -> Self::Connecting<'a>; | ^^ undeclared lifetime | help: consider introducing lifetime `'a` here | LL | fn connect<'a>(&'_ self) -> Self::Connecting<'a>; | ++++ help: consider introducing lifetime `'a` here | LL | trait Client<'a> { | ++++ error: `C` does not live long enough --> $DIR/erase-error-in-mir-drop-tracking.rs:19:5 | LL | async move { c.connect().await } | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ error: aborting due to 2 previous errors For more information about this error, try `rustc --explain E0261`. ", "positive_passages": [{"docid": "doc-en-rust-7041623637e60720121892113e26d42cd0f2ee4ad12f7507749f6c0b2fa95336", "text": " $DIR/functional-struct-update-respects-privacy.rs:28:49 | LL | let s_2 = foo::S { b: format!(\"ess two\"), ..s_1 }; // FRU ... | ^^^ private field | ^^^ field `secret_uid` is private error: aborting due to previous error", "positive_passages": [{"docid": "doc-en-rust-4ca0e1ddefc7fb7468afc8b93eebb60c928e1406b90c27795c8b3cd68db8f0f4", "text": "In a recent PR I made a change to unify the wording of multiple privacy errors, but : used to be This one change needs to be reversed while keeping the rest.\nHi! 
Could I give this issue a shot?\nI am sorry, I forgot to claim the issue\nNo worries :-)", "commid": "rust_issue_70323", "tokennum": 60}], "negative_passages": []} {"query_id": "q-en-rust-850b458174f910356868b315d552c0bdc72dd400f625a59575acdbf4369f14d1", "query": "use rustc_hir::def::{DefKind, DocLinkResMap, Namespace, Res}; use rustc_hir::HirId; use rustc_lint_defs::Applicability; use rustc_resolve::rustdoc::source_span_for_markdown_range; use rustc_resolve::rustdoc::{prepare_to_doc_link_resolution, source_span_for_markdown_range}; use rustc_span::def_id::DefId; use rustc_span::Symbol;", "positive_passages": [{"docid": "doc-en-rust-ae7756653024b37944631061b04002bb6639e4d51af6f74811edbc5db7f3ecaf", "text": "We've recently begun testing Bevy against the beta toolchain, where it has incorrectly raised a lint against . is not imported in this file, yet rustdoc believes that it is. (See the imports .) As a temporary workaround, I've created to ignore the error. Here is that first failed. In case it expires, here's . Look at . I can reproduce it on using by running the following command: The is optional, but it makes the command finish quicker. I'll continue testing to see if I can create a minimal reproducible example. :\nI've created a minimal reproduction in my -repro branch of my fork. Go to the folder and run . I've managed to squish it down to this:\nI've created a reproduction that removes the dependency on Bevy . I've simplified it down to this:\nI bisected it, and this bug appears to first be introduced in . 
I could not reproduce it in .", "commid": "rust_issue_123677", "tokennum": 208}], "negative_passages": []} {"query_id": "q-en-rust-8511df17d87ab4cc7eeaead35dc3c0e8f924307c92d349aa8059a66b865ee15e", "query": "sess.note(&format!(\"{:?}\", &cmd)); let mut output = prog.stderr.clone(); output.push_all(&prog.stdout); sess.note(str::from_utf8(&output[..]).unwrap()); sess.note(&*escape_string(&output[..])); sess.abort_if_errors(); } info!(\"linker stderr:n{}\", String::from_utf8(prog.stderr).unwrap()); info!(\"linker stdout:n{}\", String::from_utf8(prog.stdout).unwrap()); info!(\"linker stderr:n{}\", escape_string(&prog.stderr[..])); info!(\"linker stdout:n{}\", escape_string(&prog.stdout[..])); }, Err(e) => { sess.fatal(&format!(\"could not exec the linker `{}`: {}\", pname, e));", "positive_passages": [{"docid": "doc-en-rust-5b7b9164853227cf9705abea05f87305a8ae80e0d990e7c0165edcbbeb00349d", "text": "Compiler crashes while compiling using Cargo. Cargo version: cargo 0.6.0-nightly ( 2015-10-17) rustc version: 1.3.0\nIt looks like this comes from the linker producing output (normal or error) that is not valid UTF-8. Fixing this in any other way than just using a lossy conversion is going to be really painful because it goes through the error logging infrastructure, which all works with s. That's certainly not ideal, because it would be best to relay other programs' messages to the user unchanged, but it's probably the only realistic way to handle this in the near future.\nJust dump the bytes as hexadecimal? I am not sure what is going on here.\nMy first thought was that the path was probably not UTF-8, but because of the way cargo invokes it that crashes rustc long before it tries to link anything, so that can't be it. I'm not sure why else the message would be non-UTF-8. (I can't reproduce the ICE on that crate (I just get a normal link error), but I can reproduce the ICE by constructing a situation where the linker prints non-UTF-8 messages.) 
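The lossy-conversion idea under discussion can be sketched as follows; `escape_string` matches the helper name from the eventual patch, but the exact fallback shown here (`from_utf8_lossy`) is an illustrative assumption rather than the patch's precise behavior:

```rust
// Render possibly non-UTF-8 subprocess (e.g. linker) output for
// diagnostics without panicking on invalid bytes.
fn escape_string(s: &[u8]) -> String {
    match std::str::from_utf8(s) {
        Ok(valid) => valid.to_string(),
        // Invalid sequences become U+FFFD instead of an unwrap panic,
        // which is what turned the original bug into an ICE.
        Err(_) => String::from_utf8_lossy(s).into_owned(),
    }
}

fn main() {
    assert_eq!(escape_string(b"all good"), "all good");
    assert!(escape_string(&[0x66, 0x6f, 0x6f, 0xff]).starts_with("foo"));
}
```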
But yeah, some kind of lossy conversion with hex escapes is probably the way to go. I'm not aware of an existing function to do that.\nActually, mapping with is probably good enough.\nfoo'ncollect2: error: ld returned 1 exit statusn` Not great, but not an ICE and probably good enough until we have a dedicated function to do this sort of thing since this should be a very rare error.", "commid": "rust_issue_29122", "tokennum": 355}], "negative_passages": []} {"query_id": "q-en-rust-8573ef716adb1b4a129e0714a9a2fda77d6e0a138830ea067756d9dbd51c90ab", "query": "codegen_fn_attrs.export_name = Some(s); } } else if attr.check_name(sym::target_feature) { if tcx.fn_sig(id).unsafety() == Unsafety::Normal { if tcx.is_closure(id) || tcx.fn_sig(id).unsafety() == Unsafety::Normal { let msg = \"`#[target_feature(..)]` can only be applied to `unsafe` functions\"; tcx.sess .struct_span_err(attr.span, msg)", "positive_passages": [{"docid": "doc-en-rust-4cabe525e8bd65c1784b54acf748f2e1483eb31241627e1b563551496fc380f1", "text": "This code () Produces ICE on any channel\nThis is the same issue as , attributes on expressions are not converted into HIR properly.\nI checked on latest nightly - isn't produced anymore while this one still throws ICE.\ncould you reopen this issue?\ntriage: P-high. Removing nomination.\nReduced somewhat:", "commid": "rust_issue_68060", "tokennum": 65}], "negative_passages": []} {"query_id": "q-en-rust-857d2b1e41619099bb494e8aa5880678fb0c1acd832e35fa2679dcbd3894ef86", "query": "// except according to those terms. // run-pass #![feature(macro_literal_matcher)] macro_rules! a { ($i:literal) => { \"right\" };", "positive_passages": [{"docid": "doc-en-rust-0433e96f1fea1692a6dbda2b304cc4ffe74d1cf8031552073123b81e4ce976c8", "text": "Tracking issue for rust-lang/rfcs.\nIs anyone working on implementation already? If not I can take a crack at it.\nI'll probably do it.\nHuh oh, we have an issue already: do we mean to include potentially minus-prefixed literals? 
If yes, we have a problem because 1. it means we have to return an Expr instead of a Lit, which means I'm not sure one will be able to use the bound fragment in a non-expr context (for example in a static array type?) and 2. it's no longer always a single-TT, so the benefits in terms of future-proofing are a bit less good. If we don't... well, we loose expressivity. I have no idea what is more intuitive though... Would you, as a Rust user, expect to parse ? I certainly can think of use cases for this...\ngood point. I think I would have expected it, I also agree with the complications.\nCan't we upgrade Lit to support negative literals?\nI'd expect \"literal\" (or a future integer-literal specifier) to include (negative 5), but not (the negation of positive 5). Perhaps that expectation doesn't match reality, though. For instance, based on parsing rules for non-literals, I'd expect to act like . So, if handling both and as literals makes this easier, that works. But \"literal\" definitely needs to include negative numbers, or we'd have a weird user trap to explain.\nI'd expect \"literal\" (or a future integer-literal specifier) to include -5 (negative 5), but not - 5 (the negation of positive 5). Unary operators, including minus, are processed purely by the parser at the moment, no lexer involved. So their treatment is whitespace agnostic. (I personally think this is a good thing.)\nI agree with is not syntactically different from and has no reason to be. But I also agree with the fact that the user will probably expect negative numbers to work as literals (even it \u2013 syntactically \u2013 they're not).\nis also supported in literal patterns while arbitrary expressions are not. 
It'd be reasonable to support it in fragments as well.\nAnother thing that won't match is a macro call, like , or any of the other macros that I think of as expanding to a literal (, ...).", "commid": "rust_issue_35625", "tokennum": 529}], "negative_passages": []} {"query_id": "q-en-rust-857d2b1e41619099bb494e8aa5880678fb0c1acd832e35fa2679dcbd3894ef86", "query": "// except according to those terms. // run-pass #![feature(macro_literal_matcher)] macro_rules! a { ($i:literal) => { \"right\" };", "positive_passages": [{"docid": "doc-en-rust-9d182356753c860866613db5fde7e24272669928c2adea54b67398cbbb488354", "text": "I dunno, maybe this is not such a good idea after all. Re-reading the RFC in a more skeptical light, I felt like the \"Motivation\" section could use some more concrete use cases. Maybe this is just failure of imagination on my part, though! I think the likelihood of this feature attracting people who really want constant expressions should be acknowledged as a drawback, too.\nThat's fine IMO. Personally I'd like it to match only explicit literals in code to distinguish, for example, between literal and identifier and produce different code for them. Right now this is not possible because all of , , are too broad and conflict with each other.\nHi All, I have implemented this feature in my fork of Rust over 1.25.0, under this branch: Following a quick review by a rustc mentor I may have the free time to provide a full pull request for this.\nBTW label has the tooltip \"implemented in nightly and unstable\" but I did not see it there, i.e. commits & pull requests referencing this. Did I miss it?\nThe implementation that I worked on was merged, yay!\nIs there anything stopping an FCP for this?\nIt's currently merged in unstable; do you mean an FCP to declare it stable?\nYes, an FCP to stabilise. Should have clarified.\nCurrently, it is not possible to pass a literal from a macro to another macro. . I think this is a bug.\nInteresting! That is a bug. 
I opened to track it.\nAgreed, this should be allowed just like passing any other node types.\nIs there anything else preventing this from being stabilized? The issue above has been fixed.\nThis has baked for some time now and I believe it should be ready for stabilization. So let's start the review. merge The feature gate test is here: Unstable book: Tests defined here:\nTeam member has proposed to merge this. The next step is review by the rest of the tagged teams: [x] [x] [x] [x] [x] [ ] [x] [x] [x] [ ] No concerns currently listed. Once a majority of reviewers approve (and none object), this will enter its final comment period. If you spot a major issue that hasn't been raised at any point in this process, please speak up!", "commid": "rust_issue_35625", "tokennum": 486}], "negative_passages": []} {"query_id": "q-en-rust-857d2b1e41619099bb494e8aa5880678fb0c1acd832e35fa2679dcbd3894ef86", "query": "// except according to those terms. // run-pass #![feature(macro_literal_matcher)] macro_rules! a { ($i:literal) => { \"right\" };", "positive_passages": [{"docid": "doc-en-rust-ba319ab491e9a33d298d8cade668ea28bf8dd31af2044c7a4651474f66faed76", "text": "See for info about what commands tagged team members can give me.\nReading through the tests, I see that is a literal. While that makes sense to me, I remember hearing that it wasn't, but was instead a negation operator and a literal. Am I misremembering?\ncc on\nI agree that it makes sense. I would say that even if is interpreted as negation + a literal, I would probably still want the macro matcher to accept since that behavior seems more intuitive. In other words, I would see being more as an implementation detail than what a literal is in the mental model.\nI think that the nuance with causes a bit of a problem, because I don't think it should be treated as a literal, but I also don't think that the current implementation of would allow detecting it either. 
You could na\u00efvely allow but that would allow which is clearly wrong.\nthere are tests currently implemented that verify that is detected as a literal.\nI worded that wrong; I meant that only having a matcher and not being able to distinguish strings and numbers means that you can't easily make not a literal.\nfor procedural macros the input as lexed by the compiler is (two tokens) but procedural macros can create tokens that are negatives (like ). In that sense matching for makes sense to me.\nAre literals generated by the compiler counted as true literals? A quick example would be something like: This fails to match, as the expansion, I guess, happens after the selection. I just naively assumed that would have worked.\nis not a literal generated by the compiler, it is an identifier followed by an exclamation mark followed by a set of parentheses containing some tokens. See for example: If you were to use it in expression position, it would be interpreted as an invocation of which expands to a string literal. In this case and in your macro though, it is never in a syntactic position where it would be treated as an expression.\nI can understand semantically why it doesn't make sense, I just thought it would have, but that being said, I already changed it to , so there's no issue. It just made more sense to me as a literal, because according to the compiler, it could generate a literal from it. I guess it was more a question than an actual issue.\nThis is true, but there's more to it!", "commid": "rust_issue_35625", "tokennum": 504}], "negative_passages": []} {"query_id": "q-en-rust-857d2b1e41619099bb494e8aa5880678fb0c1acd832e35fa2679dcbd3894ef86", "query": "// except according to those terms. // run-pass #![feature(macro_literal_matcher)] macro_rules! 
a { ($i:literal) => { \"right\" };", "positive_passages": [{"docid": "doc-en-rust-721bfecc35df475bca4ea1dad1c9601a73cf036bdeb04b87c9488b5714dd1929", "text": "A few built-in macros can expand their arguments eagerly through ~m a g i c~, so they accept and likes even if they expect string literals: Fortunately (or not), this currently cannot be done in user-defined macros.\nI actually just realised that I can't change it to , as it's used in . So, I've just had to manually unroll the macro. ! It doesn't look pretty. :( Ninja edit: I just tested what actually happens if you try to pass in a non-literal with . It rejects it the exact same way. Does the already make use of this feature? If not, the seems to propagate the required token. Oh, unless the compiler steps in... That's actually probably the case... I've reverted it back to the type.\n:bell: This is now entering its final comment period, as per the . :bell:\nThe final comment period, with a disposition to merge, as per the , is now complete.\nSince you implemented this, would you be willing to write up the stabilization PR?\nsure\nThis appears to be missing from the Rust reference", "commid": "rust_issue_35625", "tokennum": 242}], "negative_passages": []} {"query_id": "q-en-rust-8580b18f522540db5d90059dc7b65721a8907837540639939e980229c95eb684", "query": "cargo: Option = \"cargo\", rustc: Option = \"rustc\", rustfmt: Option = \"rustfmt\", cargo_clippy: Option = \"cargo-clippy\", docs: Option = \"docs\", compiler_docs: Option = \"compiler-docs\", library_docs_private_items: Option = \"library-docs-private-items\",", "positive_passages": [{"docid": "doc-en-rust-99c889b6440041652c587ab3ef8b73797f12c011e66f29f49153e5655b8da3be", "text": "I'm pretty sure it is possible to set custom paths for , & in but not for clippy.\nlabel +T-bootstrap", "commid": "rust_issue_121518", "tokennum": 29}], "negative_passages": []} {"query_id": "q-en-rust-858956bfbeacc4a26c50bcf4452304c01e84baadaa75e070afdc168fcca7090c", "query": 
"run_path_with_cstr(original, &|original| { run_path_with_cstr(link, &|link| { cfg_if::cfg_if! { if #[cfg(any(target_os = \"vxworks\", target_os = \"redox\", target_os = \"android\", target_os = \"espidf\", target_os = \"horizon\", target_os = \"vita\", target_os = \"nto\"))] { if #[cfg(any(target_os = \"vxworks\", target_os = \"redox\", target_os = \"android\", target_os = \"espidf\", target_os = \"horizon\", target_os = \"vita\", target_env = \"nto70\"))] { // VxWorks, Redox and ESP-IDF lack `linkat`, so use `link` instead. POSIX leaves // it implementation-defined whether `link` follows symlinks, so rely on the // `symlink_hard_link` test in library/std/src/fs/tests.rs to check the behavior.", "positive_passages": [{"docid": "doc-en-rust-5923fe45b718fab55ab35d21c8370569fc6087fac5f9f0992fc6d95dce139465", "text": "Ferrocene CI has detected that this test was broken by rust-lang/rust . Specifically, by the change in , shown below: Test output: Reverting that single line diff fixes the test for the QNX7.1 targets, e.g. and can the change be reverted or does QNX7.0 need to use , instead of , here? from looking at libc, it appears that both and are available on QNX7.0 so reverting the change should at least not cause compilation or linking errors. if the case is the latter, then we should use in addition to .\nthanks for the report! I will need to re-test it for QNX 7.0, to see if it is a 7.0 requirement (i don't recall why this was required initially, but clearly I ran into some issues with it). I also plan to do a similar automation to ensure 7.0 builds run without problems on our hardware. Is there a list of tests you see failing on 7.1 that are OK to ignore? I see a list - is this the most relevant one? Thx!\nwe do not ignore any single unit test at the moment and we run library (e.g. libstd) tests as well as (cross) compilation tests using . we don't pass to but instead pass the list of test suites as arguments. 
we currently run these test suites:\nWG-prioritization assigning priority (). label -I-prioritize +P-low\nfixed in", "commid": "rust_issue_129895", "tokennum": 321}], "negative_passages": []} {"query_id": "q-en-rust-85a2d7167cc8a68f4911b721745faab59f7775f9163bb5e153c860f57e8e2f24", "query": "} pub fn return_type_impl_trait(self, scope_def_id: LocalDefId) -> Option<(Ty<'tcx>, Span)> { // HACK: `type_of_def_id()` will fail on these (#55796), so return `None`. // `type_of()` will fail on these (#55796, #86483), so only allow `fn`s or closures. let hir_id = self.hir().local_def_id_to_hir_id(scope_def_id); match self.hir().get(hir_id) { Node::Item(item) => { match item.kind { ItemKind::Fn(..) => { /* `type_of_def_id()` will work */ } _ => { return None; } } } _ => { /* `type_of_def_id()` will work or panic */ } Node::Item(&hir::Item { kind: ItemKind::Fn(..), .. }) => {} Node::TraitItem(&hir::TraitItem { kind: TraitItemKind::Fn(..), .. }) => {} Node::ImplItem(&hir::ImplItem { kind: ImplItemKind::Fn(..), .. }) => {} Node::Expr(&hir::Expr { kind: ExprKind::Closure(..), .. }) => {} _ => return None, } let ret_ty = self.type_of(scope_def_id);", "positive_passages": [{"docid": "doc-en-rust-740e0124fe7b9fb360f4e9638fbe9ccc902825c50c3d2e1b17fb8758a8ca2f54", "text": " $DIR/negative-coherence-placeholder-region-constraints-on-unification.rs:21:1 | LL | impl FnMarker for fn(T) {} | ------------------------------------------- first implementation here LL | impl FnMarker for fn(&T) {} | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ conflicting implementation for `fn(&_)` | = warning: this was previously accepted by the compiler but is being phased out; it will become a hard error in a future release! 
= note: for more information, see issue #56105 = note: this behavior recently changed as a result of a bug fix; see rust-lang/rust#56105 for details note: the lint level is defined here --> $DIR/negative-coherence-placeholder-region-constraints-on-unification.rs:4:11 | LL | #![forbid(coherence_leak_check)] | ^^^^^^^^^^^^^^^^^^^^ error: aborting due to previous error ", "positive_passages": [{"docid": "doc-en-rust-1f22b06224aecd994d3e19d133edc7f3e152837884d6a163ec0313c23c6e85a5", "text": " $DIR/format-args-non-identifier-diagnostics.rs:8:16 | LL | println!(\"{x.0}\"); | ^^^ not supported in format string | help: consider using a positional formatting argument instead | LL | println!(\"{0}\", x.0); | ~ +++++ error: aborting due to 1 previous error ", "positive_passages": [{"docid": "doc-en-rust-720a5af7afe29e212deb5bca365c920ea90a61e0ab1beeab54ba376512407424", "text": " $DIR/dyn-star-to-dyn.rs:3:12 | LL | #![feature(dyn_star)] | ^^^^^^^^ | = note: see issue #102425 for more information = note: `#[warn(incomplete_features)]` on by default warning: 1 warning emitted ", "positive_passages": [{"docid": "doc-en-rust-4018551dac267ef28378021fb7bbc4c5e35bc18503b1e8ef23c1d818eb068b91", "text": " $DIR/regions-escape-method.rs:15:13 --> $DIR/regions-escape-method.rs:16:13 | LL | s.f(|p| p) | -- ^ returning this value requires that `'1` must outlive `'2` | || | |return type of closure is &'2 i32 | has type `&'1 i32` | help: dereference the return value | LL | s.f(|p| *p) | + error: aborting due to 1 previous error", "positive_passages": [{"docid": "doc-en-rust-7bdc2a1076403b14b3b7ffe6e2607f626dc344c6eab1abe343328494516dcb5c", "text": " $DIR/regions-escape-method.rs:15:13 --> $DIR/regions-escape-method.rs:16:13 | LL | s.f(|p| p) | -- ^ returning this value requires that `'1` must outlive `'2` | || | |return type of closure is &'2 i32 | has type `&'1 i32` | help: dereference the return value | LL | s.f(|p| *p) | + error: aborting due to 1 previous error", "positive_passages": 
[{"docid": "doc-en-rust-ea6c2f742466aca108571f2c909618d07c310be9bf346fc494cfcf755c7dcffc", "text": ": // Clean up on Darwin if sess.targ_cfg.os == abi::OsMacos { // On OSX, debuggers needs this utility to get run to do some munging of the // symbols if sess.targ_cfg.os == abi::OsMacos && sess.opts.debuginfo { // FIXME (#9639): This needs to handle non-utf8 paths run::process_status(\"dsymutil\", [output.as_str().unwrap().to_owned()]); }", "positive_passages": [{"docid": "doc-en-rust-140863e760a41f08d661a13a9ac18c21e18ccb20b638824a8ef6d2ffff87960c", "text": "Perhaps there's a minimum amount of debug info we can output by default without breaking things. This is alarming to newcomers.\nI am 99% confident this comes straight from dsymutil.\nPossibly related to\nnon-critical for 0.6, de-milestoning\n+1 to fix this or provide temporary \"fix\"\nAs a new user curious about the language, this definitely made me think I'd done something wrong while building. This issue is also not a top result in Google if you search for the full error message and I haven't seen anywhere else that the issue is detailed. While this is totally understandable in a pre 1.0 release, it would be really nice to have a note that it's expected in the README and/or printed to the console at the end of the build output.\n+1 30 seconds into playing with rust, initial impression: 0.6 is still not ready for anything more than casual experimentation. If that's the message you intend to send, well done!\n+1 Occurs when building rust, and when compiling .rs files. I was almost certain my build had failed since the last ~6 lines were these warnings (while everything was fine). I also get this warning every time I compile any .rs file.\nNominating, production ready.\naccepted for production-ready milestone\nThanks! I know it's a small thing, but I certainly appreciate it :)\nI'd be curious what happens if we stop running dsymutil for non-debug builds. 
In particular, does the gdb experience suffer at all when debugging the resulting binary?\nI am writing a rustpkg script for to fix cadencemarseille/rust-pcre and this is tripping me up because part of the script looks at output, which includes this warning in the data. For now I think I'll just use the last line of output, but it would be great to have this issue fixed.\nI'm still getting this in Homebrew Rust on Mavericks. Trace: System: When Homebrew gets Rust 0.9 for Mavericks, will this be fixed?\nI get this all the time still.\nThat's probably because we build servo with debug symbols, hence we run dsymutil. I would expect will have a better experience.\nThe job failed! Check out the build log: error[E0478]: lifetime bound not satisfied --> $DIR/lifetime-not-long-enough-suggestion-regression-test-124563.rs:19:16 | LL | type Bar = BarImpl<'a, 'b, T>; | ^^^^^^^^^^^^^^^^^^ | note: lifetime parameter instantiated with the lifetime `'a` as defined here --> $DIR/lifetime-not-long-enough-suggestion-regression-test-124563.rs:14:6 | LL | impl<'a, 'b, T> Foo for FooImpl<'a, 'b, T> | ^^ note: but lifetime parameter must outlive the lifetime `'b` as defined here --> $DIR/lifetime-not-long-enough-suggestion-regression-test-124563.rs:14:10 | LL | impl<'a, 'b, T> Foo for FooImpl<'a, 'b, T> | ^^ error: lifetime may not live long enough --> $DIR/lifetime-not-long-enough-suggestion-regression-test-124563.rs:23:21 | LL | self.enter_scope(|ctx| { | --- | | | has type `&'1 mut FooImpl<'_, '_, T>` | has type `&mut FooImpl<'2, '_, T>` LL | BarImpl(ctx); | ^^^ this usage requires that `'1` must outlive `'2` error: lifetime may not live long enough --> $DIR/lifetime-not-long-enough-suggestion-regression-test-124563.rs:22:9 | LL | impl<'a, 'b, T> Foo for FooImpl<'a, 'b, T> | -- -- lifetime `'b` defined here | | | lifetime `'a` defined here ... 
LL | / self.enter_scope(|ctx| { LL | | BarImpl(ctx); LL | | }); | |__________^ argument requires that `'a` must outlive `'b` | = help: consider adding the following bound: `'a: 'b` = note: requirement occurs because of a mutable reference to `FooImpl<'_, '_, T>` = note: mutable references are invariant over their type parameter = help: see for more information about variance error: aborting due to 3 previous errors For more information about this error, try `rustc --explain E0478`. ", "positive_passages": [{"docid": "doc-en-rust-7bdc2a1076403b14b3b7ffe6e2607f626dc344c6eab1abe343328494516dcb5c", "text": " $DIR/lifetime-not-long-enough-suggestion-regression-test-124563.rs:19:16 | LL | type Bar = BarImpl<'a, 'b, T>; | ^^^^^^^^^^^^^^^^^^ | note: lifetime parameter instantiated with the lifetime `'a` as defined here --> $DIR/lifetime-not-long-enough-suggestion-regression-test-124563.rs:14:6 | LL | impl<'a, 'b, T> Foo for FooImpl<'a, 'b, T> | ^^ note: but lifetime parameter must outlive the lifetime `'b` as defined here --> $DIR/lifetime-not-long-enough-suggestion-regression-test-124563.rs:14:10 | LL | impl<'a, 'b, T> Foo for FooImpl<'a, 'b, T> | ^^ error: lifetime may not live long enough --> $DIR/lifetime-not-long-enough-suggestion-regression-test-124563.rs:23:21 | LL | self.enter_scope(|ctx| { | --- | | | has type `&'1 mut FooImpl<'_, '_, T>` | has type `&mut FooImpl<'2, '_, T>` LL | BarImpl(ctx); | ^^^ this usage requires that `'1` must outlive `'2` error: lifetime may not live long enough --> $DIR/lifetime-not-long-enough-suggestion-regression-test-124563.rs:22:9 | LL | impl<'a, 'b, T> Foo for FooImpl<'a, 'b, T> | -- -- lifetime `'b` defined here | | | lifetime `'a` defined here ... 
LL | / self.enter_scope(|ctx| { LL | | BarImpl(ctx); LL | | }); | |__________^ argument requires that `'a` must outlive `'b` | = help: consider adding the following bound: `'a: 'b` = note: requirement occurs because of a mutable reference to `FooImpl<'_, '_, T>` = note: mutable references are invariant over their type parameter = help: see for more information about variance error: aborting due to 3 previous errors For more information about this error, try `rustc --explain E0478`. ", "positive_passages": [{"docid": "doc-en-rust-ea6c2f742466aca108571f2c909618d07c310be9bf346fc494cfcf755c7dcffc", "text": ": #[unstable(feature = \"rw_exact_all_at\", issue = \"51984\")] #[stable(feature = \"rw_exact_all_at\", since = \"1.33.0\")] fn write_all_at(&self, mut buf: &[u8], mut offset: u64) -> io::Result<()> { while !buf.is_empty() { match self.write_at(buf, offset) {", "positive_passages": [{"docid": "doc-en-rust-c3d71db05f58ca4f54afce7b9ce574cc4d204442ef32ef20532c10eb344eb89a", "text": "The and traits contain convenience methods and wrapping and . This issue tracks addition of similar methods to trait's and . This is implemented in .\nif this is a tracking issue, maybe it should have the label\nWe should eventually add similar methods to .\nI know nothing about developing on Windows so I can't do it myself but I guess a PR would be accepted. Can we stabilize this? Is there anything that still needs to be done?\nSeems reasonable to me! fcp merge\nTeam member has proposed to merge this. The next step is review by the rest of the tagged teams: [ ] [x] [x] [x] [x] [ ] No concerns currently listed. Once a majority of reviewers approve (and none object), this will enter its final comment period. If you spot a major issue that hasn't been raised at any point in this process, please speak up! See for info about what commands tagged team members can give me.\n:bell: This is now entering its final comment period, as per the . 
:bell:\nThe final comment period, with a disposition to merge, as per the , is now complete.\nWhat's the next step? Should I make a PR to stabilize? Sorry but I don't fully understand how things work yet.\nindeed that should be all that's necessary! If you'd like to make the PR you're also more than welcome to do so!", "commid": "rust_issue_51984", "tokennum": 290}], "negative_passages": []} {"query_id": "q-en-rust-919d37b11c0dac3bc6be49c434baf386362ef5e4d1b03df366b2d770b469ef9e", "query": "#[macro_use] #[no_link] extern crate macro_reexport_1; //~^ ERROR macros reexports are experimental and possibly buggy //~| HELP add #![feature(macro_reexport)] to the crate attributes to enable ", "positive_passages": [{"docid": "doc-en-rust-fa5de52d3ace12677445254bb6cdb3896b1f26f61b4e6e81d95ebf54de71f95c", "text": "Steps to reproduce: The output of is: does as suggested by the message, but that fails too (as expected): Proposed fix: in the beta and stable channels (i.e. whenever is forbidden), the message that suggests adding should not be shown. (Clean up the test case with .) CC\nGonna milestone this real quick, as it seems quite important.\nI might be able to whip up a quick fix, though I'm not 100% sure about testing it properly ... (I've never actually tried building something for a non-dev release channel before...)\nafter chatting with it seems like I have a patch, building it now\ngonna leave to it\nHello good Rust folks :smile: Before the beta, I used which is in . Is there any way to use this trait during the beta ? I'm not sure how I can get the code to compile. Thanks for any tip / hints you can offer :+1:\nHi Commenting on random unrelated issues is usually not the right thing to do. You should rather open a new issue, or ask on . But to answer your question anyway, that trait only has : \u2026 and is stable and can be used on beta. So I\u2019d recommend you copy/paste the trait into your code, or make a function.\nThanks for taking the time to answer. 
I appreciate it. I don't see how this is unrelated. This is related, since the code I'm talking about used to compile with and the compiler suggests to do that, during the beta period. Thanks for your tip on copying the function :+1: I guess I'll just do that if nothing better is available and change this again when the trait is stable.\nThis specific issue is about fixing one of the compiler\u2019s error message, not about fixing code that triggers that error. It really helps everyone to keep discussions on topic.\nAlright. I understand. I'll make sure to open a new issue or go on StackOverflow next time if appropriate. Thanks again :+1:\nThis doesn't seem to be properly fixed in the beta channel. I'm still being recommended to add in the errors, and then told it's not available in the beta channel. Rust version: The code I used (snappy example in the docs with the extra addition on the first line:\nThis happened after Beta release.", "commid": "rust_issue_23973", "tokennum": 490}], "negative_passages": []} {"query_id": "q-en-rust-919d37b11c0dac3bc6be49c434baf386362ef5e4d1b03df366b2d770b469ef9e", "query": "#[macro_use] #[no_link] extern crate macro_reexport_1; //~^ ERROR macros reexports are experimental and possibly buggy //~| HELP add #![feature(macro_reexport)] to the crate attributes to enable ", "positive_passages": [{"docid": "doc-en-rust-b4e439e7e28004a888d7eeea745c4600205c8125bbf9c901393d71174d4795f9", "text": "We'll see the effect of this patch in 1.0 and afterwards Beta, etc.\nWhoops, I thought this made it in before beta. Thanks!\nare there plans to release updates to the beta channel within a given 6 weeks cycle? Either this one or in the future.\nI'm certain there will be updates; see discussion from last night here:\nSo how does one fix this error? I'm a new user of the rust library and I can't seem to use the collections library during the beta due to this problem. 
Given this git issue is a top google result, it would be nice if there were a solution here. Edit: Looks like the solution is to switch to the rust nightly. Is the entire collections library off-limits in the beta?\nby design, you have access to the parts of that are re-exported from (and are stable).", "commid": "rust_issue_23973", "tokennum": 181}], "negative_passages": []} {"query_id": "q-en-rust-91a85ceaf85c65e6590b1d09ad37e379284f866bfeaa567faac5939bb6676d36", "query": "base.cpu = \"yonah\".to_string(); base.max_atomic_width = Some(64); base.pre_link_args.insert(LinkerFlavor::Gcc, vec![\"-m32\".to_string()]); base.link_env.extend(super::apple_base::macos_link_env(\"i686\")); base.link_env_remove.extend(super::apple_base::macos_link_env_remove()); // don't use probe-stack=inline-asm until rust#83139 and rust#84667 are resolved base.stack_probes = StackProbeType::Call;", "positive_passages": [{"docid": "doc-en-rust-1cdc2b9a8878bd1803c5d5245aa71b492645c50cb4faf7da5def93238c780635", "text": " $DIR/mismatched-delimiter-corner-case-issue-127868.rs:4:42 | LL | fn main() { | - closing delimiter possibly meant for this LL | let a = [[[[[[[[[[[[[[[[[[[[1, {, (, [,; | ^ unclosed delimiter LL | } | ^ mismatched closing delimiter error: this file contains an unclosed delimiter --> $DIR/mismatched-delimiter-corner-case-issue-127868.rs:6:52 | LL | fn main() { | - unclosed delimiter LL | let a = [[[[[[[[[[[[[[[[[[[[1, {, (, [,; | ----- - this delimiter might not be properly closed... | ||||| | ||||another 16 unclosed delimiters begin from here | |||unclosed delimiter | ||unclosed delimiter | |unclosed delimiter | unclosed delimiter LL | } | - ...as it matches this but it has different indentation LL | | ^ error: aborting due to 2 previous errors ", "positive_passages": [{"docid": "doc-en-rust-ed76f830ed5c30a4ee3d1d23b33a7e4db4b559513f348625c76816a5d6cb1855", "text": "In my Fedora installation ended up being a corrupted file. 
It took me hours to realize, given that the output does not bring any clarity to it. rustc should really have ratelimit detection or something to detect corrupted file instead and inform about this to the user. !\nThis indeed seems quite confusing; and counter-productive to try to show this blob of errors. Would you be able to provide us with a reproducible example of the issue? Or if you still have it the corrupted file. labels +T-compiler +A-diagnostics +D-verbose +D-confusing +S-needs-repro -needs-triage\nSo this a problem because I unfortunately deleted the trashed file before someone told me that I should report this as a bug. I regret that I did not store it :-/ BUT I recalled that it was probably for reason a long string of ANSI escape codes. So I did a test string: This creates somewhat similar output as I got, except probably the string was a bit longer in my case.\nOK doubling that starts to be almost perfect match:\n! ... I'm considering T-shirt design ;-) Made by me and rustc. EDIT: and rendered by excellent terminal emulator foot\nSomehow rustc should be able to detect corrupted file in cargo's source repository (or whatever it is called). It was pretty bizarre sight when I encountered this by accident.\nThanks for the repro. label -S-needs-repro\nYeah I mean it is really a huge corner case but it can happen in the case of data corruption (e.g. power outage in middle of write operation). 
I'd detect frequency of errors and maybe a threshold parameter when it is interpreted as corruption with a sane default value.\nyeah, I believe this error comes out from the phase of lexing, which rustc are trying to find all the mismatched delimiters, maybe a threshold is ok for resolve it.", "commid": "rust_issue_127868", "tokennum": 408}], "negative_passages": []} {"query_id": "q-en-rust-9328ed706a19015ba19e0cbade6cf72b0d3baf89e3873b119dd99adf1f065e6a", "query": " error[E0573]: expected type, found variant `Ok` --> $DIR/issue-106062.rs:15:64 | LL | async fn connection_handler(handler: impl Sized) -> Result { | ^^ not a type | help: try using the variant's enum | LL | async fn connection_handler(handler: impl Sized) -> Result { | ~~~~~~~~~~~~~~~~~~~~ LL | async fn connection_handler(handler: impl Sized) -> Result { | ~~~~~~~~~~~~~~~~~~~ error: aborting due to previous error For more information about this error, try `rustc --explain E0573`. ", "positive_passages": [{"docid": "doc-en-rust-5db0f739e1d608e1359bcf2428d209445c9811131e415ea1baed828c225412f2", "text": "The full repo is here: : #![cfg_attr(not(bootstrap), feature(const_eval_select))] #![feature(const_float_bits_conv)] #![feature(const_float_classify)] #![feature(const_fmt_arguments_new)]", "positive_passages": [{"docid": "doc-en-rust-d5fac24d636764b5b8baec2c10c5560467a6b65def4df085f9c0e5a36597be4e", "text": "Currently, is not , since it uses with a non- check: can't be made , since it involves ptr-int cast to check the alignment: Recently intrinsic , it allows to run different code in CTFE and runtime. This, in turn, allows us to only make the alignment check in runtime and ignore it in the CTFE where it doesn't make much sense. 
See also: cc and (it seems like use of requires approval of all of the above teams) label +T-lang +T-libs +A-const-eval +A-const-fn\nAnother approach to make const would be to disable the checks altogether, but since we already have , that seems unreasonable.\ncc opinions before we escalate to lang and libs?\nIf a new intrinsic was in the spirit of and , this could possibly provide a solution without . Something like:\nIt seems useless to be able to perform any align checks in constants. The CTFE catches invalid pointer dereferences anyways.\nYes, but we also want to catch invalid pointer dereferences when we are not in CTFE, hence the idea of the function/intrinsic.\nSo why not use ?\nIt doesn't catch insufficiently aligned pointers though.", "commid": "rust_issue_90011", "tokennum": 263}], "negative_passages": []} {"query_id": "q-en-rust-933be2c68e5a4d12595935cf7327baaf42fed835129a8566f8929f9b11111b4f", "query": " // no-prefer-dynamic // ignore-emscripten thread_local!(static FOO: Foo = Foo);", "positive_passages": [{"docid": "doc-en-rust-a74a6a28ad143b441ce913e52da3ba850af414bfd2da9ec9dc1b3011f049a693", "text": "Right now on our bots we're basically in a state where if valgrind rpass tests are compiled dynamically they will all fail, but when compiled statically they will pass. I'm not really sure what's going on unfortunately, but this is either a bug in valgrind or a bug with us, and we should know precisely what it is to ensure something more sinister isn't lurking. Some links: - first pass at linking tests statically, fixed - another pass at linking all tests statically\nAll the failures seem to be within platform\u2019s dynamic loader. Here\u2019s a few things to try: updating kernel+glibc to the most recent versions (4.4-ish and 2.23-ish, respectively) help? downgrading kernel+glibc to some older version help? 
non-linux/non-glibc work correctly?\nAny news/changes on this one ?", "commid": "rust_issue_31968", "tokennum": 196}], "negative_passages": []} {"query_id": "q-en-rust-937a925cc2ed0945d5b11512a4fdad8ee0083162ab0d67190766b5e994149648", "query": "/// that were given to the `panic!()` macro. /// /// See [`PanicInfo::message`]. #[unstable(feature = \"panic_info_message\", issue = \"66745\")] #[stable(feature = \"panic_info_message\", since = \"CURRENT_RUSTC_VERSION\")] pub struct PanicMessage<'a> { message: fmt::Arguments<'a>, }", "positive_passages": [{"docid": "doc-en-rust-64078b2bce961079ae95e838f0de415ad1df6d73b4b0784edf1e2232442d09d1", "text": "was closed when stabilized the corresponding language feature, but we didn\u2019t realize that it was also the tracking issue for this standard library feature: History: [x] Original tracking issue: [x] Discussion of payload vs message [x] Making .message() infallible: [x] Split core/std PanicInfo into two types: This made core::panic::PanicInfo::message() infallible. [x] Use opaque return type: [x] FCP [x] Stablization PR: Unresolved questions: [x] Should it be infallible? Yes, by making std::panic::PanicHookInfo a different type. [x] What should the return type be? or an opaque ? The first is simpler, but the latter allows a bit more flexibility for future changes.\nInstead of making a PR go through the queue to fix the tracking issue, should we stabilize this method? fcp merge The doc-comment is slightly wrong or out of date. Its second line shoudl be removed. always creates a where is , regardless of the number of arguments given. In the single-argument case, that argument is required to have type . At first returning seemed unusual to me and I considered that we could make the method return instead, on the basis that calling its impl is almost the only useful thing to do with a . But has been stable and documented as the return type of since 1.0.0, so maybe it\u2019s fine.\nTeam member has proposed to merge this. 
The next step is review by the rest of the tagged team members: [x] [ ] [ ] [x] [x] [ ] [ ] [ ] Concerns: single-arg core::panic () Once a majority of reviewers approve (and at most 2 approvals are outstanding), this will enter its final comment period. If you spot a major issue that hasn't been raised at any point in this process, please speak up! See for info about what commands tagged team members can give me.\nThe module also links to that closed tracking issue. Let\u2019s consider the pFCP above to also include stabilizing it. ( is stable.)\nconcern single-arg core::panic Should we change libcore\u2019s macro so that its single-argument form creates a with a instead of one with a ?", "commid": "rust_issue_66745", "tokennum": 496}], "negative_passages": []} {"query_id": "q-en-rust-937a925cc2ed0945d5b11512a4fdad8ee0083162ab0d67190766b5e994149648", "query": "/// that were given to the `panic!()` macro. /// /// See [`PanicInfo::message`]. #[unstable(feature = \"panic_info_message\", issue = \"66745\")] #[stable(feature = \"panic_info_message\", since = \"CURRENT_RUSTC_VERSION\")] pub struct PanicMessage<'a> { message: fmt::Arguments<'a>, }", "positive_passages": [{"docid": "doc-en-rust-25eef06e5513438ea83a2ce818db3dbca78deef5f357f7771ea0a66096ac6576", "text": "CC Pushing this idea further, it\u2019s tempting to maybe remove entirely and instead have a whose dynamic type can be . But that doesn\u2019t work because is a method of and is not .\nI thought the point of the message field was that core can't make a payload since it can't allocate?\nThe struct is: Both the and fields involve borrowing. As far as is concerned, they could both be borrowing form a stack frame. (And as far as I understand they are, in practice today.) 
It\u2019s in the case of unwinding that there\u2019s an owned that can return, which requires allocating.\nBased on the timing of it looks like you may have thoughts on this :)\nAh, goes into how and why the panic handler in libstd (which is called by if libstd is linked) ignores the it receives. We could probably change this but it gets tricky: while most implementations of for environments would be fine with receiving a borrowed payload, libstd wants to potentially take ownership and box it for unwinding.\nI have zero recollection for why this is the way it is, and I'm not really particularly interested in doing all the archaeology to dig up why it is the way it is. I'm not a huge fan of just stabilizing things because they happen to be there, but I don't really see why we wouldn't want to stabilize this. If there were a reason to not stabilize this though an investigation would presumably turn that up, but I'm not up to do that investigation myself.\nThat\u2019s totally fair. I\u2019m confident that this was more likely forgotten than deliberately left unstable. If deliberate we would have kept the tracking issue open or made a new one for the remaining bits. This is what lead me to propose stabilization. Having dug a little more, and thanks to Ralf\u2019s blog post, I think we may want at some point to revisit what data we expect to contain or not in various situation. In particular it could be interesting to extend to support payloads other than and make it closer to . Given those possibly-desirable changes, it may not be the time to add more constraints: fcp cancel\nproposal cancelled.\nLol, I had no idea. ;) So, indeed, I was wondering if we could do something about the duality of the and fields in . 
I assume we cannot change the fact that the same type is used both of panic handlers and panic hooks.", "commid": "rust_issue_66745", "tokennum": 519}], "negative_passages": []} {"query_id": "q-en-rust-937a925cc2ed0945d5b11512a4fdad8ee0083162ab0d67190766b5e994149648", "query": "/// that were given to the `panic!()` macro. /// /// See [`PanicInfo::message`]. #[unstable(feature = \"panic_info_message\", issue = \"66745\")] #[stable(feature = \"panic_info_message\", since = \"CURRENT_RUSTC_VERSION\")] pub struct PanicMessage<'a> { message: fmt::Arguments<'a>, }", "positive_passages": [{"docid": "doc-en-rust-928e4cd082453c51abb233b033d4d88ce4bfbacacb0004ddc46fb7251f46f38e", "text": "Concretely, I was thinking of adjusting somewhat like this: That would at least remove the need for the hack in . (The hack would likely move to instead, but that is a smaller hack IMO: the hack is now not encoded in the state of any more, just in its dynamic API.) I think this would retain all current functionality while clarifying the invariant of wrt. the payload (and explicitly calling the message a \"payload\"; right now it seems to be something that sits next to the payload which is inaccruate). But there are enough subtleties here that we'll only really know once someone implements this. With this change, the panic hook would always get a . We should likely provide a method somewhere in libstd to factor out this code: I was imagining something like an extension trait on with a type like . However, the panic hook is not the only place where that conversion is useful; another situation is code that caught a panic and wants to extract the message. I expect most instances of that to just handle payloads, not knowing that payloads are also common (that is certainly what happened when I wrote such code). So ideally we'd have a method that takes and returns , but I do not have a good idea where to put that. This is the hard part, as we cannot use . 
I wonder if we can use a take-based approach, similar to ? Conceptually, would put the panic into an , and somehow pass an to the panic handler, and the implementation of said handler could then the payload if it needs full ownership (like for unwinding). Except the panic handler cannot be generic, so it would have to be , which is not a thing. But with enough unsafe code we could re-implement something that behaves like that. I don't see any way to do this entirely safely; the panic handler that wants to take out the payload and put it somewhere needs to dynamically determine the size of the payload and arrange for there to be enough space. In liballoc, we could provide a safe wrapper for this that puts things into a , but in libcore, I don't think there is a safe API that we could provide to grab ownership.", "commid": "rust_issue_66745", "tokennum": 462}], "negative_passages": []} {"query_id": "q-en-rust-937a925cc2ed0945d5b11512a4fdad8ee0083162ab0d67190766b5e994149648", "query": "/// that were given to the `panic!()` macro. /// /// See [`PanicInfo::message`]. #[unstable(feature = \"panic_info_message\", issue = \"66745\")] #[stable(feature = \"panic_info_message\", since = \"CURRENT_RUSTC_VERSION\")] pub struct PanicMessage<'a> { message: fmt::Arguments<'a>, }", "positive_passages": [{"docid": "doc-en-rust-86b2c8da8c5c6984f78aa5d946a63acf7bba6ccd5b4f5ecdc4bafd4ce7e60c32", "text": "But maybe it is enough for the public API to expose ways to borrow the payload (should work for all non-unwinding handlers); if only libstd needs to take the payload (for unwinding) we could keep the unsafe part of the API internal. Also, we certainly do not want a panic hook to the payload out of its . At this point it seems to be quite a problem that panic hook and panic handler both use the same type. In fact, a panic handler likely wants a to be able to things without interior mutability. 
Is there any way we could make the attribute support both signatures, and then (based on the types of the methods) only -based implementations will actually be able to or the payload?\nI agree, but it can't easily be an inherent method of nor of because those are in libcore, which doesn't have access to . It feels slightly unfortunate that we'd need an extension trait just for this. I considered this yesterday and came to pretty much the same conclusions. To take ownership of a payload of unknown size through a non-generic API we pretty much need heap allocation, which for libcore means something like receiving an , unsafely using , and returning that is to be passed to . It's not pretty, but if only libcore and libstd deal with this internally, maybe that's ok? Is there a use case for a other than libstd's to take ownership of the payload?\nI was thinking of a slightly different API actually: Here, does to put the payload into caller-allocated storage; that storage must have (at least) size and align as given by . What I did not consider is how to get out the vtable, so maybe that higher-order approach is better.\nWe discussed this in the libs-api meeting today. We concluded that before stabilizing this, we need to make sure it works (doesn't return ) for std::panic!() as well. That is already the case with Rust 2021's std::panic!() (since it's identical to core::panic!() in this edition), but not for Rust 2015/2018's std::panic!(). Once that is fixed, we feel okay with returning a placeholder message for non-string panics (panics from or Rust 2015/2018's ), at which point the function can return an unconditionally, rather than an .", "commid": "rust_issue_66745", "tokennum": 521}], "negative_passages": []} {"query_id": "q-en-rust-937a925cc2ed0945d5b11512a4fdad8ee0083162ab0d67190766b5e994149648", "query": "/// that were given to the `panic!()` macro. /// /// See [`PanicInfo::message`]. 
#[unstable(feature = \"panic_info_message\", issue = \"66745\")] #[stable(feature = \"panic_info_message\", since = \"CURRENT_RUSTC_VERSION\")] pub struct PanicMessage<'a> { message: fmt::Arguments<'a>, }", "positive_passages": [{"docid": "doc-en-rust-4a89d36630d834ba38cdfcc5cd7802c94f943d601d14ea3f278db75c486d0db3", "text": "(And also, should be returned by value, not by reference.)\nAny progress on stabilization?\nIt would great to have this little feature.\nStill nothing? What's missing?\ncan this be stabilized after was resolved in the pull request ?\nYes, soon, hopefully! I've updated the overview at the top of this tracking issue. The main question left before stabilization is the return type of the method. Should this return (as the unstable method does today), or should it return an opaque type that wraps that fmt::Arguments, so we can still change in the future? (In case we want to support different types of no_std panic, such as panicking with an Error or something.)\nCan it return or so?\nI would vote for an opaque type that would also implement .\nDid that in this PR:\nmerge\nTeam member has proposed to merge this. The next step is review by the rest of the tagged team members: [ ] [x] [x] [x] [x] No concerns currently listed. Once a majority of reviewers approve (and at most 2 approvals are outstanding), this will enter its final comment period. If you spot a major issue that hasn't been raised at any point in this process, please speak up! See for info about what commands tagged team members can give me.\n:bell: This is now entering its final comment period, as per the . :bell:\nThe final comment period, with a disposition to merge, as per the , is now complete. As the automated representative of the governance process, I would like to thank the author for their work and everyone else who contributed. 
This will be merged soon.", "commid": "rust_issue_66745", "tokennum": 343}], "negative_passages": []} {"query_id": "q-en-rust-93aba413ad51db43753d1e2e4280d8b3009727bd238950c1597d091724853ad5", "query": " // force-host // no-prefer-dynamic #![crate_type = \"proc-macro\"] #![crate_name=\"some_macros\"] extern crate proc_macro; use proc_macro::TokenStream; #[proc_macro_attribute] pub fn first(_attr: TokenStream, item: TokenStream) -> TokenStream { item // This doesn't erase the spans. } #[proc_macro_attribute] pub fn second(_attr: TokenStream, item: TokenStream) -> TokenStream { // Make a new `TokenStream` to erase the spans: let mut out: TokenStream = TokenStream::new(); out.extend(item); out } ", "positive_passages": [{"docid": "doc-en-rust-b644a10cf9202b1eeecaecfde65d48af02e173d50f2532cc116c6dbd5936ff5f", "text": "crashes on nightly () and also on stable () with \"nightly\" features turned on via . It does not trigger on stable without flag. Here is the output I'm getting: I created a small repro case + here: (it only happens when there is a certain combination of procedural macros, so I cannot make it a single file).\nThe source code is: with crate being a procedural macro crate with:\nCouple of things that are important here: only happens with two procedural macros. macro has to do that . attribute has to be , does not reproduce with anything else.\nJust a heads-up: AFAIK if a bug only occurs on stable with (I understand that it is not the case here), then it should probably be disregarded, because isn't supposed to be used when not bootstrapping the compiler.\nThe bug itself reminds me of\nit happens on nightly as well. 
That detail was included to give context that evidently the ICE has been present for longer than 12 weeks, I believe.\nWith the following patch in the linked repo, the output is not ideal, but preferable to the ICE: I'll take a look to see if we can somehow keep the span for the entire attr doc, or at worst pointing at the correct macro.\nChanging the patch to be gives us which makes more sense but is missing a primary span. Edit: I see no way to recover the original span after its has been modified. I guess we already rely on proc macro writers to be nice when it comes to spans, so this is acceptable as well.", "commid": "rust_issue_63821", "tokennum": 325}], "negative_passages": []} {"query_id": "q-en-rust-93c5d7f2a630f97ac407877601acccd389796dd1dff540f8070f22652cbb7a0f", "query": "/// # Example /// /// ```rust /// # #![allow(deprecated)] /// use url::Url; /// /// let raw = \"https://username@example.com:8080/foo/bar?baz=qux#quz\";", "positive_passages": [{"docid": "doc-en-rust-d5aaca454c4b9e374bf52fd688a570f38314b1b4372f475dd8a89626b2ac7ccb", "text": "Users should use the cargo-ified Deprecate for a release or two then delete. cc\n:+1:\nRelated:\nI tried this: Hoping to deprecate the whole crate, but then building this program does not emit any warning: I expected a deprecation warning on the line. Is there another way to achieve this, or do I need to annotate individual items in the crate?\nThat should have worked, but you may need to remove the annotation above (they may be conflicting). I would also personally find it very helpful to annotate methods in particular what the path forward is (or just a general post about how to migrate).\nRemoving worked, thanks. Should this deprecation go through the RFC process, or can I assume that the team wants it since filed this issue? The documentation at explains how to use rust-url, and in particular the concept of relative v.s. non-relative URL schemes (e.g. vs ) which is a bit unusual. 
I believe this sufficient to make migration straightforward, but I\u2019m obviously biased. Any feedback on the documentation is welcome. (I\u2019ll make the deprecation message include the documentation\u2019s URL, it seems more useful landing page than the github repo.)\nHaving an issue is sometimes considered a \"todo item\" rather than \"everyone has approved this\", you'll find both styles of issues on the issue tracker. We don't have much precedent for deprecating an in-tree crate, so we're forging new territory here. I'd like to start moving crates out of the repo, however, so I hope to have some more on this topic soon.", "commid": "rust_issue_15874", "tokennum": 358}], "negative_passages": []} {"query_id": "q-en-rust-94071e4acf6f5c0915e331f9061a16e09f887c1947a58645b490954b9a6f93cf", "query": "sections[None] = [] section_order = [None] targets = {} top_level_keys = [] for line in open(rust_dir + '/config.toml.example').read().split(\"n\"): if cur_section == None: if line.count('=') == 1: top_level_key = line.split('=')[0] top_level_key = top_level_key.strip(' #') top_level_keys.append(top_level_key) if line.startswith('['): cur_section = line[1:-1] if cur_section.startswith('target'):", "positive_passages": [{"docid": "doc-en-rust-0d3e58ce58bc2122f663187778037ad23504868bc4e7259dad3b9da77deeb0bb", "text": "The problem is that the configure script assumes that all keys are in a section and there are no top-level keys. This forces people to use instead: We should make it possible to set top-level keys through , both and . We should also update the line in the readme linked above to suggest instead of printf. 
$DIR/incorrect-transmute.rs:5:8 | LL | extern \"rust-intrinsic\" fn transmute() {} | ^^^^^^^^^^^^^^^^ | = help: add `#![feature(intrinsics)]` to the crate attributes to enable = note: this compiler was built on YYYY-MM-DD; consider upgrading it if it is out of date error[E0094]: intrinsic has wrong number of type parameters: found 0, expected 2 --> $DIR/incorrect-transmute.rs:5:37 | LL | extern \"rust-intrinsic\" fn transmute() {} | ^ expected 2 type parameters error: intrinsic must be in `extern \"rust-intrinsic\" { ... }` block --> $DIR/incorrect-transmute.rs:5:40 | LL | extern \"rust-intrinsic\" fn transmute() {} | ^^ error: aborting due to 3 previous errors Some errors have detailed explanations: E0094, E0658. For more information about an error, try `rustc --explain E0094`. ", "positive_passages": [{"docid": "doc-en-rust-873bf10379b4caf187ed69a684f0180f6cd1d81ef25e103ed58fe8de2dca7fde", "text": " $DIR/struct-variant-privacy-xc.rs:6:33 --> $DIR/struct-variant-privacy-xc.rs:7:33 | LL | struct_variant_privacy::Bar::Baz { a: _a } => {} | ^^^ private enum", "positive_passages": [{"docid": "doc-en-rust-a7f3e9810aa82b6f33a59ff9fa989846e97e88c833b24e7a93ba56e854e1291a", "text": " $DIR/issue-76064.rs:1:17 --> $DIR/issue-76064.rs:3:17 | LL | struct Bug([u8; panic!(1)]); | ^^^^^^^^^ LL | struct Bug([u8; panic!(\"panic\")]); | ^^^^^^^^^^^^^^^ | = note: see issue #51999 for more information = help: add `#![feature(const_panic)]` to the crate attributes to enable", "positive_passages": [{"docid": "doc-en-rust-7c73542d67b097e4fd1fee1922c673ac020986f94704300c10704cf32974803c", "text": "The panic payload of the macro (as passed on to ) is generic, but the CTFE panic support assumes it will always be an . This leads to ICEs: Cc\nI guess we could support this, but we'd have to find some way to render the errors. 
Maybe we need to wait for const traits so we can require const Debug?\nAlternatively, constck could require the argument to have type .\nalso requires a\nYeah, the libcore panic machinery does not support non-format-string panics.\nAye, I left the latter out of my message. is likely the most common argument to , and it's not unusual with e.g. ; so I agree that waiting for const traits before supporting any other arguments is reasonable.\nI'm hitting this ICE on stable without a feature flag:\nHm, indeed, that is interesting: any idea why we go on here after seeing an error? Does this kind of behavior mean any part of CTFE is reachable on stable somehow?\nYes. We're not gating CTFE on whether stability (or any other static) checks failed. I've been abusing this since 1.0 to demo things on the stable compiler and never got around to fixing it. I don't even know if we have a tracking issue for it.\n:rofl: So what would be a possible approach here? Replace such consts by early enough that their MIR never actually gets evaluated? We might as well use this issue, IMO...\nThat would require us to mutate all MIR and potentially even HIR. Other possible solutions: poison the MIR Body if const checks fail (by adding a field like , or by just nuking the body by replacing it with ), and thus allow const evaluation to check if the MIR Body failed the checks and bail out before doing anything; or make the have an field similar to .\nOn second thought, let's leave this issue for the ICE, and make a new one for "we can trigger nightly-only const code on stable because we don't abort compilation early enough":\nGiven that this ICE is pre-existing, as shown by , can we un-block ?
I'd really like to see const_panic stabilized, and this seems like a minor issue to be blocking on.\nThat comment is an example of accidentally partially exposing an unstable feature on stable -- a separate bug tracked in I don't see how that's an argument for shipping that feature on stable when it is broken.", "commid": "rust_issue_66693", "tokennum": 515}], "negative_passages": []} {"query_id": "q-en-rust-959351194e96aebd4863d61f28fe9b867a82e4016e8b66a5a377e6b97f36e5ed", "query": "error[E0658]: panicking in constants is unstable --> $DIR/issue-76064.rs:1:17 --> $DIR/issue-76064.rs:3:17 | LL | struct Bug([u8; panic!(1)]); | ^^^^^^^^^ LL | struct Bug([u8; panic!(\"panic\")]); | ^^^^^^^^^^^^^^^ | = note: see issue #51999 for more information = help: add `#![feature(const_panic)]` to the crate attributes to enable", "positive_passages": [{"docid": "doc-en-rust-21ce43edd15410cd7321881478b82e77d0a12e87f14b1a6a8e5e03c80674bc9f", "text": "In particular, applies to all unstable const features. Fixing this bug here is not even that hard I think: somewhere around here there needs to be a check that the argument has type . Re: , AFAIK that one is also blocked on figuring out how the diagnostics should look like, and a further concern (though maybe not blocking) is .\nI guess that might make this a non-issue, where it is proposed also would require as first argument.\nFor reference: , which was recently merged. The plan is to fix the behaviour of in the Rust 2021 edition. Given that this ICE is listed as a blocker for , does this mean that stabilizing that will have to wait for the next edition?\nNo, because breaking changes are allowed to be made to unstable features without moving to a new edition; that's kinda the whole idea. The 2021 edition will effectively just extend the same restriction to all invocations, const context or not.\nI'd like to tackle this but I'm not very experienced with the CTFE code. 
I get that we need to check from the that we've matched on in this branch and we need to match on the enum, but: For the first question, I see and that we probably want to check that the type of the tuple and that there's variants of that are , but do we stop the iteration when we get one that is a type? Is the projection list always guaranteed to terminate in a variant of that has a ? And once we have an instance of , do we just check that there's any_ projection in the list that has a ?\nyou can use to get the type of the . Then you can test its to be .", "commid": "rust_issue_66693", "tokennum": 352}], "negative_passages": []} {"query_id": "q-en-rust-9598a86019636b1d353301348b44de6a1f3ff49b9f32bf0c11d4c0d48aa6e372", "query": "pub fn must_use(&self) -> bool { true } /// hello /// ///
this is a warning
/// /// done pub fn warning1() {} /// Checking there is no bottom margin if \"warning\" is the last element. /// ///
this is a warning
pub fn warning2() {}
} impl AsRef for Foo {", "positive_passages": [{"docid": "doc-en-rust-2da5fa86b84bf416d9b670af0c4e7d9c0ccd4cda593660d7d16436e913a6382e", "text": "Following , we realized that rustdoc users would prefer to have a syntax extension rather than using plain HTML to have warning blocks. suggested the following: suggested: What do you think? cc\nPersonal Opinions: The first style will look fine for short forms, but I'm not sure for longer text. A possible syntax: Which to me pushes the text too far right. Not having any spacing, but just separating with newlines, seems hard to parse at a glance what is part of the WARNING, and prevents multiple line breaks inside the warning. Also, it's not as extensible. INFO can be , but if we ever want more extensions, they would increasingly possibly conflict with other things. The second style makes it immediately clear what is part of the block, and is unlikely to accidentally conflict if we ever want to add more block styles. It also supports multiple lines or inner blocks trivially. A final advantage is that it may support nested blocks, in case those ever come to be useful. Overall: I prefer the second form, both aesthetically and for future-proofing\nI think I agree with in preferring the second form. I like the first form's simplicity, but it also seems surprising that is treated as a 'keyword' and will change the formatting of the docs.\nThat's why we have this debate. :stuckouttongue:\nI personally actually prefer stabilizing a CSS class, I think we should do more of that, just because parsing can get annoying. But this is not that hard to parse anyway. The problem with is precisely that you can't scope it easily: what if you want to add code blocks? bullet points?\nThe problem with this is it ties rustdoc to the HTML backend; like I said nothing rustdoc currently does guarentees that it will generate HTML from the markdown. 
Changing that means that to make the JSON backend have the same info, we'd have to add a full HTML parser to rustdoc just to read the HTML in the markdown, because pulldown does not parse HTML for us. I really think we should avoid making the JSON backend a second-class citizen.\nAlso there's not much point to this if it's just a little CSS IMO, you can add that yourself with and it will look just as good and be more flexible since you can control the styles yourself.", "commid": "rust_issue_79710", "tokennum": 504}], "negative_passages": []} {"query_id": "q-en-rust-9598a86019636b1d353301348b44de6a1f3ff49b9f32bf0c11d4c0d48aa6e372", "query": "pub fn must_use(&self) -> bool { true } /// hello /// ///
this is a warning
/// /// done pub fn warning1() {} /// Checking there is no bottom margin if \"warning\" is the last element. /// ///
this is a warning
pub fn warning2() {}
} impl AsRef for Foo {", "positive_passages": [{"docid": "doc-en-rust-750e7c681f4102792b3611241ddcbc147822060286ee625b8e928ddde584f36a", "text": "I'm not a giant fan of the syntax because it looks odd when rendered by anything other than rustdoc, which I'd rather avoid, one of the major design concerns for intra-doc links was compatibility with other markdown parsers:\nSo, I agree, but I'm not convinced that the warning class is something that needs to be surfaced in the JSON backend as anything special. Also, bear in mind, HTML is a part of markdown: valid HTML (well a subset of HTML) is valid markdown. Users already can , and they do that in many cases. Markdown parsers already need to be able to handle HTML. There's nothing new there. Whatever consumes the JSON backend will need to be able to handle that too. I mean, yes, in theory, in practice people don't do this because of the barrier to entry (you have to get your tools to use that, including )\nAlso, this is incorrect, pulldown absolutely parses HTML. It has to, like I said valid HTML is valid Markdown. It doesn't parse it thoroughly, I believe you can mess it up with sufficiently complex HTML, but it parses HTML. What this changes is subtler: up till now you did not need any particular stylesheet to acceptably render rustdoc markdown from the JSON backend on your own (just use your favorite markdown renderer's stylesheet), but now you will need other css classes to handle rendering warnings as warnings.\nI do not agree that this is a problem. I've used Phabricator's implementation of this approach happily and it is by design that warning banners are one paragraph. It is not the intention that you would put entire sections of documentation into a warning banner. They are for a brief callout. It's fine to refer to a subsequent section of the documentation from the callout if needed. 
I suppose this is a matter of taste but it's one where I believe the opinionatedness of the syntax is a positive thing.\nSo I discussed this a bit with on the side, we agree that the effect on the JSON backend isn't major since HTML is already a subset of markdown and docs use HTML already (especially for images and tables). My position is as follows: I strictly prefer HTML over some other scoped syntax with nuances people have to learn. I don't think there's as much value in unscoped syntax which makes my position \"HTML or nothing\", basically.", "commid": "rust_issue_79710", "tokennum": 533}], "negative_passages": []} {"query_id": "q-en-rust-9598a86019636b1d353301348b44de6a1f3ff49b9f32bf0c11d4c0d48aa6e372", "query": "pub fn must_use(&self) -> bool { true } /// hello /// ///
<div class=\"warning\">this is a warning</div>
/// /// done pub fn warning1() {} /// Checking there is no bottom margin if \"warning\" is the last element. /// ///
<div class=\"warning\">this is a warning</div>
pub fn warning2() {}
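Assuming the syntax the thread converges on (an HTML `div` carrying the `warning` CSS class inside a doc comment), usage by a crate author would look like the sketch below; the item and message are invented for illustration:

```rust
/// Parses a decimal integer from `s`.
///
/// <div class="warning">This panics on non-numeric input; a fallible
/// variant returning `Result` would be preferable in library code.</div>
pub fn parse_decimal(s: &str) -> i64 {
    s.parse().expect("expected a decimal integer")
}

fn main() {
    // The warning only changes the rendered docs; runtime behavior is
    // unaffected.
    assert_eq!(parse_decimal("42"), 42);
}
```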
} impl AsRef for Foo {", "positive_passages": [{"docid": "doc-en-rust-dec4ce823c8be44080dd52f4329cd82509f0245db429e2372e378f7f940d6712", "text": "I'm not wholly against unscoped single-line syntax. We and syntax in W3C specs and you invariably need to add more stuff to it (there's an HTML class in these specs for this reason). Furthermore, Rust has a strong culture of wrapping text, so this means we need to work well while wrapped, and that means we're again introducing some form of scoping mechanism (e.g. indenting the text), which editors will have to learn, as well.\nI don't know about admonitions specifically but there seems to be something of a community habit to use \"magic tags\" on fenced code blocks for this sort of features (e.g. github has a tag), maybe that could be an option? I don't know if pulldown_cmark has a hook for manipulating code blocks but\u2026 So a warning would be or maybe the tag would be to leave room for things like note/todo/danger/\u2026 Alternatively, take a gander at how other markdown implementations extend the language for this sort of features? e.g. seems to be using for admonitions.\nHmm, but is still for code, even if it has some extra behavior. Aside from the design concerns, this would be feasible to implement: we just extract the language string from the code block with pulldown.\nCould you use something that's already built into markdown? For example, rustdoc could take and render it as an admonition.\nThat's what suggested: to use markdown and make some changes on rustdoc side. I think the codeblock approach is the best personally.\nMy bad, I didn't see that.\nI wonder if it makes more sense to add this as a proc-macro. 
Rustdoc can still provide the CSS classes and things, and you could use them with HTML directly, but you could use or something and it would expand to the correct HTML tags and classes.\nMight be better to have a then, no?\nIs that going to be a special header similar to the \"unsafe\"/\"experimental\"/etc ones, or is that going to also be used within docs in an order-sensitive way? If the latter we should probably just use a CSS class; mixing and is ugly\nAgreed.", "commid": "rust_issue_79710", "tokennum": 483}], "negative_passages": []} {"query_id": "q-en-rust-9598a86019636b1d353301348b44de6a1f3ff49b9f32bf0c11d4c0d48aa6e372", "query": "pub fn must_use(&self) -> bool { true } /// hello /// ///
<div class=\"warning\">this is a warning</div>
/// /// done pub fn warning1() {} /// Checking there is no bottom margin if \"warning\" is the last element. /// ///
<div class=\"warning\">this is a warning</div>
pub fn warning2() {}
} impl AsRef for Foo {", "positive_passages": [{"docid": "doc-en-rust-0401f4aca5f75b95bccb3c9f0c1382f7d896e23af0bafe46936c5b01ce984214", "text": "Among the suggested ideas, my favourite one remains the code block with \"warning\" as tag:\nYeah, but triple backticks is also for preformatted text without further markup, so you won't be able to e.g. use text formatting (or other code) in there\nTrue... Finding the right syntax is much more complicated than expected...\nThat's the issue of using markdown, for all that it has a pretty nice baseline syntax it really was not designed for extensibility.\nWe can always use : it'd allow to have text formatting inside it.\nI would use , it's not quite kosher to invent arbitrary HTML tags (though you can use Custom Elements, which is more heavyweight)\nSounds good to me. It's funny because we kinda came back to the . :rofl:\nI just thought: we could use tag in the markdown and then transform it into in the generated HTML?\nMy main objection to all of these is that rustdoc shouldn't be adding custom markdown syntax that no one else supports. Any new syntax we use should look reasonable even if rendered with the default renderer.\nWell, it remains valid markdown since it's HTML (even though it's custom tag). I'm completely fine with , but it might be simpler/better for users to simply write .\nComing back to this (because of ), it seems like the conversation kinda agreed on adding a CSS class directly and then it would look like this in the docs: . I'll send a PR in the next days and then we can open an FCP to see if everyone is ok with it (and if not, we'll have code to help the debate).\nThis indeed seems like a better solution than the HTML block. It also plays better with something said about not linking it too much too HTML. This will look like a proper warning or note, mostly independent of the HTML generation.\nThey also seemed to not be too favourable to anything that needs some extra parsing from rustdoc iuc. 
I'm fine with both the github solution and the CSS class.\nMaybe one more comment to make here. The note-taking app, which uses Markdown, also has a similar syntax to create these blocks. It is called . It uses a syntax similar to the syntax that is used for doc intra-links currently, and it is fully compatible with them.", "commid": "rust_issue_79710", "tokennum": 506}], "negative_passages": []} {"query_id": "q-en-rust-9598a86019636b1d353301348b44de6a1f3ff49b9f32bf0c11d4c0d48aa6e372", "query": "pub fn must_use(&self) -> bool { true } /// hello /// ///
<div class=\"warning\">this is a warning</div>
/// /// done pub fn warning1() {} /// Checking there is no bottom margin if \"warning\" is the last element. /// ///
<div class=\"warning\">this is a warning</div>
pub fn warning2() {}
} impl AsRef for Foo {", "positive_passages": [{"docid": "doc-en-rust-905d2f3b7c6b013fa7467fd80d5c43168bffe6d3d80604ac80788220d4eb9b1e", "text": "I am uncertain if this is any better than what suggested from the GitHub syntax, but because of the similarity to the doc intra-link syntax. I do feel like this might be interesting because it should go well with backwards compatibility. However, this may look worse than the GitHub syntax solution when not going through cargo doc.\nI thought I'd already left this comment, but I'd really like to avoid adding more markdown syntax where possible. This feature works fine without special syntax, I don't think we need it. edit: copying over the reasoning from the :\nAlthough, I might not agree fully, I can fully appreciate and understand the reasoning you left in the corresponding PR. Still, I do think it might be worth to standardize some kind of syntax for some kind of warning block. This does not actually require customized markdown. Alternatively, I can suggest adding the GitHub flavor with emojis to the . This may help in standardizing short form warnings and notes within documentation. We could for example propose to standardize: which should look like the following I do value people's opinion on this. Is this a possible compromise?\nThat's a valid alternative, I think the yellow color is pretty important though, and it's useful for themes to be able to make styling decisions about warnings so I do think that having some way of marking it is important. I just don't think the way of marking it should be some magic invocation as opposed to CSS classes. (And I think that the bar for a feature that is adding new semantics to the Rustdoc markdown parser is much, much higher than that of one stabilizing a CSS class)\n100% agree. I will look at the Rust API guidelines issue and possibly an RFC to rust to track the adoption in GitHub and possibly eventually merge. 
Thank you :smile:\nBeyond all this discussion of third-party tools, I'd really like to make sure that whatever new syntax and presentation we add to rustdoc can also be to [mdBook]. That way, when the standard library starts using callouts, we can make sure the same style also gets used in The Book. Mostly, The Book seems to just use blockquotes, often with headings like [this one about ownership]. mdBook doesn't do anything special with it: it's [just a block quote with a header].", "commid": "rust_issue_79710", "tokennum": 501}], "negative_passages": []} {"query_id": "q-en-rust-9598a86019636b1d353301348b44de6a1f3ff49b9f32bf0c11d4c0d48aa6e372", "query": "pub fn must_use(&self) -> bool { true } /// hello /// ///
<div class=\"warning\">this is a warning</div>
/// /// done pub fn warning1() {} /// Checking there is no bottom margin if \"warning\" is the last element. /// ///
<div class=\"warning\">this is a warning</div>
pub fn warning2() {}
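The blockquote-with-heading callout described for The Book can also be written directly in a doc comment; a minimal sketch (the function is invented for illustration, and without extra CSS the quote renders as an ordinary blockquote in both rustdoc and mdBook):

```rust
/// Computes a trivial byte checksum.
///
/// > **Note**: this is the blockquote-with-bold-heading callout style that
/// > The Book uses today; no tool treats it specially.
pub fn checksum(bytes: &[u8]) -> u32 {
    // Sum each byte, widened to u32 so larger inputs do not overflow early.
    bytes.iter().map(|&b| b as u32).sum()
}

fn main() {
    assert_eq!(checksum(&[1, 2, 3]), 6);
}
```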
} impl AsRef for Foo {", "positive_passages": [{"docid": "doc-en-rust-c904caf33bd46cec9a84e48d777b4f48ec308eec9cfa53ff08ad48d46f41d2fc", "text": "[mdBook]: [this one about ownership]: [just a block quote with a header]:\nThis is a good point. Once we have something merged here, I'll send a PR there to have an equivalent.\nCC\nI'd be happy to add some CSS rules to mdbook if that is what you are proposing.\nYeah, it is.\nFYI, using a is not semantic and Microsoft GitHub is setting a bad precedence. \u2014 CommonMark Spec v0.30, denotes a in Markdown and is rendered as such. \u2014 W3C HTML spec, Unless you are quoting a source, this is not the correct element to use. While I disagree with even using altogether, at least some tools out there are at least translating the blockquote in admonition context into a (div has no semantic meaning which is unfortunately the best HTML option for admonitions). It would be ill-advised to follow Microsoft GitHub\u2019s Markdown syntax fork—especially with their disregard for semantics. Similarly using a block would also be breaking semantics as an admonition is not preformatted text. CommonMark does have that could be used if the solution was preferred. There\u2019s also other reasonable options noted here that don\u2019t break semantics.\nI agree that block quotes for callouts are bad, and would like to work with mdBook to roll out a common syntax (probably ) that will replace the ad-hoc, unsemantic markup that they use right now.\nUsing is what we agreed upon in The debate now is on the look itself.\nUnfortunately isn't great because markdown has a \"boundary\" when HTML tags are used. I wanted to add a warning indicating that users probably wanted another trait, but intra-doc links don't work (which is the correct behavior per markdown spec).\nWorks perfectly fine though (well, minus the UI bug I just discovered haha): ! I used this code: I'll send a PR to improve documentation.\nAh. 
Well I suppose the issue wasn't with intra-doc links, but with inline code blocks. What I tried was that but with bar`, and it failed.\nWorks too: !\nI opened which adds extra explanation on how to use markdown inside html tags. That should solve your issue.\nThat note is exactly what I was just looking in to. I had it inline, given my familiarity with HTML. TIL that matters.", "commid": "rust_issue_79710", "tokennum": 509}], "negative_passages": []} {"query_id": "q-en-rust-9598a86019636b1d353301348b44de6a1f3ff49b9f32bf0c11d4c0d48aa6e372", "query": "pub fn must_use(&self) -> bool { true } /// hello /// ///
<div class=\"warning\">this is a warning</div>
/// /// done pub fn warning1() {} /// Checking there is no bottom margin if \"warning\" is the last element. /// ///
<div class=\"warning\">this is a warning</div>
pub fn warning2() {}
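The point about Markdown rendering inside an HTML tag follows from CommonMark's HTML-block rules: a blank line ends the raw-HTML block, after which normal Markdown parsing resumes until the closing tag. A sketch of the pattern:

```rust
/// <div class="warning">
///
/// The blank lines around this paragraph matter: per CommonMark's
/// HTML-block rules they end the raw-HTML block, so Markdown such as
/// `inline code` renders inside the tag. Written on a single line with the
/// tags, the same text would be passed through as raw HTML instead.
///
/// </div>
pub fn documented() -> &'static str {
    "ok"
}

fn main() {
    assert_eq!(documented(), "ok");
}
```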
} impl AsRef for Foo {", "positive_passages": [{"docid": "doc-en-rust-1196006ec2219d0730aa48521bbdc7cae854ede34ccfdf1d5ca56b4ee83cd30c", "text": "It doesn't hurt to have it and if it can helps users, well, why not doing it. :)", "commid": "rust_issue_79710", "tokennum": 24}], "negative_passages": []} {"query_id": "q-en-rust-9607d1e0a7f28fdc7194d38fd476735713570aa6e5c0b36e85d4f4ee33995cab", "query": " use foo::*; mod foo { pub mod bar { pub mod bar { pub mod bar {} } } } use bar::bar; //~ ERROR `bar` is ambiguous use bar::*; fn main() { } ", "positive_passages": [{"docid": "doc-en-rust-14d87f94d6dd2f60938cc3099809c6fbd53892083d65f782d3f0ae98e232297d", "text": " $DIR/turbofish-arg-with-stray-colon.rs:2:17 | LL | let x = Tr; | ^ expected one of 8 possible tokens | = note: type ascription syntax has been removed, see issue #101728 help: maybe write a path separator here | LL | let x = Tr; | ~~ error: aborting due to 1 previous error ", "positive_passages": [{"docid": "doc-en-rust-e50af215174b6184602adb938425db76e7242910ad97b6f43300eeb640dddd67", "text": " $DIR/issue_74400.rs:12:5 | LL | f(data, identity) | ^ implementation of `FnOnce` is not general enough | = note: `fn(&'2 T) -> &'2 T {identity::<&'2 T>}` must implement `FnOnce<(&'1 T,)>`, for any lifetime `'1`... = note: ...but it actually implements `FnOnce<(&'2 T,)>`, for some specific lifetime `'2` error: aborting due to previous error ", "positive_passages": [{"docid": "doc-en-rust-083ab7db4e2d48777362fd29f1c6170eec0fa54a54f82e95b6d3b4e25b7c55d1", "text": "Regression from to . The following code yields an incorrect error: On beta, this gives: On nightly, this:\nwould you be so kind to label this issue? It's sliding down the list while being hard to search.\nRegression occurred between nightly-2020-06-23 and nightly-2020-06-24, I suspect is related (it's ?).\nIt would be great to find the culprit PR. ping cleanup\nHey Cleanup Crew ICE-breakers! 
This bug has been identified as a good \"Cleanup ICE-breaking candidate\". In case it's useful, here are some [instructions] for tackling these sorts of bugs. Maybe take a look? Thanks! <3 [instructions]: https://rustc-dev- cc\nsearched nightlies: from nightly-2020-06-23 to nightly-2020-06-24 regressed nightly: nightly-2020-06-24 searched commits: from to regressed commit: #![warn(unused_parens)] #![deny(unused_parens)] #![allow(unreachable_code)] fn main() { // We want to suggest the properly-balanced expression `1 / (2 + 3)`, not // the malformed `1 / (2 + 3` let _a = (1 / (2 + 3)); let _a = (1 / (2 + 3)); //~ERROR unnecessary parentheses f(); }", "positive_passages": [{"docid": "doc-en-rust-4ec7e73a404ac76d69c671c6f06db99d936ae8f0078fcfaac8411a19baadc5dd", "text": "Update test file to use + annotations instead of (see https://rust-) The files need to update are: - $DIR/double-move.rs:45:9 | LL | let _y = *x; | -- value moved here LL | x.foo(); | ^ value used here after move | ^ value used here after partial move | = note: move occurs because `*x` has type `str`, which does not implement the `Copy` trait error[E0382]: use of moved value: `*x` --> $DIR/double-move.rs:51:18 | LL | let x = \"hello\".to_owned().into_boxed_str(); | - move occurs because `x` has type `std::boxed::Box`, which does not implement the `Copy` trait LL | x.foo(); | - value moved here LL | let _y = *x; | ^^ value used here after move | = note: move occurs because `*x` has type `str`, which does not implement the `Copy` trait error: aborting due to 6 previous errors", "positive_passages": [{"docid": "doc-en-rust-86145dda3b6fd95b4efbdb679c46b9803ebc45ffea7ba9a1d53d1d81b4427672", "text": "The following code fails on stable: When compiled in release mode, the assert is optimized out by LLVM even though it checks the exact same condition as the other assert, as can be verified by testing in debug mode. 
Note that it's theoretically possible for the stack to be aligned correctly such that the bug is suppressed, but that's not likely. The 256's can be replaced with larger alignments if that happens. The bug is caused by the impl of copying to the stack with the default alignment of 16. ()\nand seem related\nThis is a soundness hole.\nSo... I guess the problem is that the that rustc emits for the unsized local doesn't use ? That's... kind of bad, because it doesn't look like even supports dynamically setting the alignment: If this is correct, the issue should be renamed to something like \"unsized locals do not respect alignment\".\nA way to fix this would be to change the ABI for dynamic trait calls that take by value to always pass by address. When the trait object is created is where the trampoline function would be generated, so there is no additional cost for static calls.\nThis can also be fixed by not allowing dynamically-sized local variables with an alignment more than some maximum value and having all dynamically-sized locals use that maximum alignment unless optimizations can prove that a smaller alignment suffices.\nMessing with the ABI wouldn't help with unsized locals in general (assuming is correct), since e.g. will also need an alloca with dynamic alignment in general. Given only a value, there is no static way to say anything about its alignment, so how would you distinguish whether calling a method on it is OK?\nCan we call and then align the pointer for the local ourselves as a workaround until we come up with a good solution?\nIn clang there's a function which might or might not be useful in this case.\nrequires a constant alignment. Non constant alignments are . This matches semantics of LLVM instruction, so it's not helping, as alignment of value in is unknown at compile time when moving from .\nI think that overallocating for unsized locals of unknown alignment may make sense. You can and then make use of to compute the offset. 
Sure, it increases stack requirements, but it seems acceptable for , and in most cases you end up with extra 7 bytes on stack or less. 256 alignment seems like an edge case where wasting 255 bytes doesn't matter.", "commid": "rust_issue_68304", "tokennum": 530}], "negative_passages": []} {"query_id": "q-en-rust-9a4344123574086f3fdb11e15d1fa3cd2a399c373f89e3595aff3fa1a9c5b2ef", "query": "LL | y.foo(); | ^ value used here after move error[E0382]: use of moved value: `*x` error[E0382]: use of moved value: `x` --> $DIR/double-move.rs:45:9 | LL | let _y = *x; | -- value moved here LL | x.foo(); | ^ value used here after move | ^ value used here after partial move | = note: move occurs because `*x` has type `str`, which does not implement the `Copy` trait error[E0382]: use of moved value: `*x` --> $DIR/double-move.rs:51:18 | LL | let x = \"hello\".to_owned().into_boxed_str(); | - move occurs because `x` has type `std::boxed::Box`, which does not implement the `Copy` trait LL | x.foo(); | - value moved here LL | let _y = *x; | ^^ value used here after move | = note: move occurs because `*x` has type `str`, which does not implement the `Copy` trait error: aborting due to 6 previous errors", "positive_passages": [{"docid": "doc-en-rust-35b067be423839cfae33dfa0b48475783f27ab1ed0904fd969f99c950d943054", "text": "if we keep setting as the default alignment we can skip the overallocating and offset dance for types with an alignment below that.\nMy idea was that rustc would prohibit coercing overaligned types to a dyn trait that takes by value. So:\nThe alignment is dynamically determined though, not sure if saving 15 bytes of stack space is worth the conditional branch?\nOh right. 
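The overallocation workaround described above (reserve `size + align - 1` bytes, then round the start pointer up to the requested alignment) can be modeled in plain Rust. This only illustrates the pointer arithmetic, not what rustc's codegen actually emits:

```rust
/// Rounds `addr` up to the next multiple of `align`, which must be a
/// power of two (Rust alignments always are).
fn align_up(addr: usize, align: usize) -> usize {
    debug_assert!(align.is_power_of_two());
    (addr + align - 1) & !(align - 1)
}

fn main() {
    const SIZE: usize = 100;
    const ALIGN: usize = 256;
    // Overallocate by ALIGN - 1 bytes so an aligned SIZE-byte window is
    // guaranteed to exist somewhere inside the buffer.
    let buf = vec![0u8; SIZE + ALIGN - 1];
    let base = buf.as_ptr() as usize;
    let start = align_up(base, ALIGN);
    assert_eq!(start % ALIGN, 0);
    assert!(start - base < ALIGN); // at most ALIGN - 1 bytes wasted
    assert!(start + SIZE <= base + buf.len()); // the window fits
}
```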
Yea that's probably not helpful.\nSince , this would be backwards incompatible (at latest) once trait object upcasting is finally implemented: and trait objects would have to have the same alignment restriction (because you can upcast to ), but today you can freely create such objects with any alignment.\nI was under the impression that calling didn't require dynamic , is that related to missing MIR optimizations, meaning we have unnecessary moves which end up duplicating the data on the stack using a dynamic ?\nCorrectness shouldn't rely on optimizations to fire...\nSure, I was just hoping we weren't relying on \"unsized locals\", only unsized parameters (i.e. ), for what we're exposing as stable. And I was hoping an optimization wouldn't need to be involved to get there. I wonder if it would even be possible to restrict \"unsized locals\" to the subset where we can handle it entirely through ABI, and not actually have dynamic s anywhere, since that would make me far more comfortable with 's implementation or even stabilization. Ironically, if that works, it's also the subset we could've implemented years earlier than we did, since most of the complexity, unknowns (and apparently bugs) are around dynamic s, not calling trait object methods on a dereference of .\nThat should be possible. I think everything that is necessary is performing the deref of the box as argument () instead of moving the box contents to a local first () That would also make it possible to call in cg_clif. Cranelift doesn't have alloca support, so the current MIR is untranslatable.\nHmm, but that's not trivially sound because it could lead to MIR calls where argument and return places overlap, which is UB. We'd potentially have to special-case it to derefs, but even then what's to stop someone from doing ? Maybe we could introduce a temporary for the return value instead, for these kinds of virtual calls? 
That should remove any potential overlap while allowing the call.", "commid": "rust_issue_68304", "tokennum": 517}], "negative_passages": []} {"query_id": "q-en-rust-9a4344123574086f3fdb11e15d1fa3cd2a399c373f89e3595aff3fa1a9c5b2ef", "query": "LL | y.foo(); | ^ value used here after move error[E0382]: use of moved value: `*x` error[E0382]: use of moved value: `x` --> $DIR/double-move.rs:45:9 | LL | let _y = *x; | -- value moved here LL | x.foo(); | ^ value used here after move | ^ value used here after partial move | = note: move occurs because `*x` has type `str`, which does not implement the `Copy` trait error[E0382]: use of moved value: `*x` --> $DIR/double-move.rs:51:18 | LL | let x = \"hello\".to_owned().into_boxed_str(); | - move occurs because `x` has type `std::boxed::Box`, which does not implement the `Copy` trait LL | x.foo(); | - value moved here LL | let _y = *x; | ^^ value used here after move | = note: move occurs because `*x` has type `str`, which does not implement the `Copy` trait error: aborting due to 6 previous errors", "positive_passages": [{"docid": "doc-en-rust-e0557b2e15a4d389c0583e27ec38e510a3d214eb85386293065b15f95dc546f5", "text": "(Unrelatedly, I just realize we use a copying-out shim for vtable entries that take by value, even when would be passed as at the call ABI level. We would probably have to make computing the call ABI a query for this microopt to not be expensive)\nThat would result in being moved to a temp first before dropping and calling the boxed closure. If it wasn't moved first before the drop of , then borrowck would give an error. (I meant for this change to be done at mir construction level, not as an optimization pass, so borrowck should catch any soundness issues)\nWe can try it, I guess, but right now a lot relies on many redundant copies/moves being created by MIR building and may not catch edge cases like this. But also I don't think moving out of a moves the itself today. 
Me too, but I was going with the \"correct by construction\" approach of creating a redundant copy/move, just on the return side, instead of the argument side, of the call.\nAfter revisiting on triage, removing nomination.\npre-triage: nominated for discussion in the next weekly meeting.\nI just checked and, a function call roughly this (touched up for clarity): Some of my previous comments assumed the wasn't there and the call would use the destination directly, but given that there is a move, that simplifies my suggestion.
error[E0382]: use of moved value: `*x` error[E0382]: use of moved value: `x` --> $DIR/double-move.rs:45:9 | LL | let _y = *x; | -- value moved here LL | x.foo(); | ^ value used here after move | ^ value used here after partial move | = note: move occurs because `*x` has type `str`, which does not implement the `Copy` trait error[E0382]: use of moved value: `*x` --> $DIR/double-move.rs:51:18 | LL | let x = \"hello\".to_owned().into_boxed_str(); | - move occurs because `x` has type `std::boxed::Box`, which does not implement the `Copy` trait LL | x.foo(); | - value moved here LL | let _y = *x; | ^^ value used here after move | = note: move occurs because `*x` has type `str`, which does not implement the `Copy` trait error: aborting due to 6 previous errors", "positive_passages": [{"docid": "doc-en-rust-5beba8d62abc39b05f6e650dcce3335b6738bd1d349d26aea50dceae17630e1f", "text": "I'm in favor of this proposal, but I do think we want to be slightly careful to ensure we're not violating soundness. The main concern would be if there is some way to access or re-use the variable that was moved. I attempted to do so with this example () but it fails to compile: The error here arises because the closure would need to capture a to do anything, which prevents a later move. But maybe there is some kind of tricky example that can arise where we re-assign the value during argument evaluation?\ncompiles and does before , where is the result of .\nBorrowck should prevent that, right?\nborrowck may rely on MIR building creating extraneous copies, so we can't just assume it helps us. IMO we should limit it to functions taking unsized parameters and function calls passing such parameters or dereferences of es (any other pointer/reference doesn't support moving out of).\nWell, I think is right that borrowck would catch violations of soundness in some strict sense, but I guess what I meant is that we might be breaking backwards compatibility.\nThe example that feels pretty relevant. 
If I understood your proposal, it would no longer compile. If I recall (thinking back), I had a plan to handle this by save the to an alternate location in the actual call, we pass ownership of and then do a shallow free of the This still seems like it would work, though I'm not sure how easy/elegant it is to formulate. In a sense we'd be rewriting to .\nIsn't example dependent on doing a direct virtual call? My understanding is that you can't do this on stable with because the virtual call exists in one place, in libstd, and it doesn't do anything weird, so it would work with a restricted ruleset. EDIT: if it can work on stable, it's because calling 's will move the whole before the assignment is evaluated, which is fine, because the shallow drop is actually inside that , not its caller.\nI agree that this code doesn't compile on stable today, but I'm not sure if that's the point. I guess you're saying it's ok for it to stop compiling? I think what I would like is that our overall behavior is as consistent as possible between Sized and Unsized parameters. 
I think we could actually do a more general change that doesn't special case Sized vs Unsized or anything else.", "commid": "rust_issue_68304", "tokennum": 522}], "negative_passages": []} {"query_id": "q-en-rust-9a4344123574086f3fdb11e15d1fa3cd2a399c373f89e3595aff3fa1a9c5b2ef", "query": "LL | y.foo(); | ^ value used here after move error[E0382]: use of moved value: `*x` error[E0382]: use of moved value: `x` --> $DIR/double-move.rs:45:9 | LL | let _y = *x; | -- value moved here LL | x.foo(); | ^ value used here after move | ^ value used here after partial move | = note: move occurs because `*x` has type `str`, which does not implement the `Copy` trait error[E0382]: use of moved value: `*x` --> $DIR/double-move.rs:51:18 | LL | let x = \"hello\".to_owned().into_boxed_str(); | - move occurs because `x` has type `std::boxed::Box`, which does not implement the `Copy` trait LL | x.foo(); | - value moved here LL | let _y = *x; | ^^ value used here after move | = note: move occurs because `*x` has type `str`, which does not implement the `Copy` trait error: aborting due to 6 previous errors", "positive_passages": [{"docid": "doc-en-rust-c0a55b5a9c88bb1e08f508b1fb679a6cf9e068e808080659d5f7ae2be7470592", "text": "I think I am basically proposing the same thing as earlier, but with a twist: Today, when building the parameters, we introduce a temporary for each parameter. So if we had we might do: What I am saying is that we could say: If the argument is moved (not copied), and the \"place\" has outer derefs (e.g., we are passing where is some place), then we assign the place to a temporary and we make the argument to the function be : I don't believe this is observable to end-users, except potentially for when the destructor on runs. I believe that it accepts all the same code we used to accept. It also has nothing specific to or types at all. Does that make sense? I'm very curious to know if anyone sees any problems with it. 
(One obvious thing: if we ever add , its design would have to accommodate this requirement, but I think that's fine, in fact the complications around getting moves right for precisely this case are one of the reasons we've held off on thus far.)\nCan you not reinitialize today? If you move the whole , it is indeed more conservative (and should let us skip unnecessary moves even in cases). But also, do you want to move the locals check from typeck to borrowck? I don't have an intuition for when your approach wouldn't create unsized MIR locals, from the perspective of typeck. However, this is only relevant if we want a stricter feature-gate. At the very least it seems like it would be easier to implement a change only to MIR building, and it should fix problems with for the time being, even without a separate feature-gate.\nHmm, that's a good point, I think do that. Sigh. I knew that seemed too easy. I remember we used to try and prevent that, but we never did it fully, and in the move to NLL we loosened those rules back up, so this compiles: Still, you could do my proposal for unsized types, though it would introduce an incongruity. UPDATE: Oh, wait, I remember now. For unsized types, reinitialization is not possible, because you can't do with an unsized rvalue (we wouldn't know how many bytes to move). 
So in fact there isn't really the same incongruity.", "commid": "rust_issue_68304", "tokennum": 510}], "negative_passages": []} {"query_id": "q-en-rust-9a4344123574086f3fdb11e15d1fa3cd2a399c373f89e3595aff3fa1a9c5b2ef", "query": "LL | y.foo(); | ^ value used here after move error[E0382]: use of moved value: `*x` error[E0382]: use of moved value: `x` --> $DIR/double-move.rs:45:9 | LL | let _y = *x; | -- value moved here LL | x.foo(); | ^ value used here after move | ^ value used here after partial move | = note: move occurs because `*x` has type `str`, which does not implement the `Copy` trait error[E0382]: use of moved value: `*x` --> $DIR/double-move.rs:51:18 | LL | let x = \"hello\".to_owned().into_boxed_str(); | - move occurs because `x` has type `std::boxed::Box`, which does not implement the `Copy` trait LL | x.foo(); | - value moved here LL | let _y = *x; | ^^ value used here after move | = note: move occurs because `*x` has type `str`, which does not implement the `Copy` trait error: aborting due to 6 previous errors", "positive_passages": [{"docid": "doc-en-rust-8eda5e2da06426c5ca0d1e41f6002aa4939e49c4190b6fbed85bbea5acf8da8e", "text": "To be clear, I wasn't proposing changing anything apart from MIR building either, I don't think. IIUC, what you proposed is that would be compiled to: whereas under my proposal (limited to apply to unsized parameters, instead of moved parameters) would yield: I think I mis-stated the problem earlier. It's not so much that your proposal will introduce compilation errors, it's that I think it changes the semantics of code like this in what I see as a surprising way: I believe that this code would compile to the following under your proposal: which means that would be invoked with the new value of , right? Under my proposal, the code would still compile, but it would behave in what I see as the expected fashion. 
Specifically, it would generate:\nIsn't there a way to \"lock\" that value through the borrow-checker, so further accesses would be disallowed, until the call returns? I'd rather be stricter here.
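The observable behavior being protected in this exchange, that reassigning a value while later arguments are evaluated must not change what the callee sees, can be demonstrated on stable Rust with a sized `Copy` payload:

```rust
fn f(x: i32, _marker: i32) -> i32 {
    x
}

fn main() {
    let mut q = Box::new(1);
    // Arguments are evaluated left to right: `*q` is copied into a
    // temporary before the second argument's block reassigns `q`, so the
    // callee observes the old value (1), not the new one (2).
    let r = f(*q, {
        q = Box::new(2);
        0
    });
    assert_eq!(r, 1);
    assert_eq!(*q, 2);
}
```

The proposals above aim to preserve exactly this ordering for unsized arguments, where the value cannot be copied into a temporary and a pointer must be captured instead.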
error[E0382]: use of moved value: `*x` error[E0382]: use of moved value: `x` --> $DIR/double-move.rs:45:9 | LL | let _y = *x; | -- value moved here LL | x.foo(); | ^ value used here after move | ^ value used here after partial move | = note: move occurs because `*x` has type `str`, which does not implement the `Copy` trait error[E0382]: use of moved value: `*x` --> $DIR/double-move.rs:51:18 | LL | let x = \"hello\".to_owned().into_boxed_str(); | - move occurs because `x` has type `std::boxed::Box`, which does not implement the `Copy` trait LL | x.foo(); | - value moved here LL | let _y = *x; | ^^ value used here after move | = note: move occurs because `*x` has type `str`, which does not implement the `Copy` trait error: aborting due to 6 previous errors", "positive_passages": [{"docid": "doc-en-rust-27fd42f01a24ee7bd4fa9f1c0f7bf282c5af37d103c7e5a053a6ed0f28b8ecdb", "text": "In particular, the IR for under your proposal, if I understood, would be But the access to that (I believe) we wish to prevent takes place in the \"evaluate remaining arguments\" part of things. So I think we'd have to add some sort of \"pseudo-move\" that tells borrowck that should not be assignable from that point forward, right? Well, it depends on your perspective. It's true that, given the need to permit reinitialization, the check is now specific to unsized values -- i.e., MIR construction will be different depending on whether we know the type to be sized or not. That's unfortunate. However, I think we still maintain a degree of congruence, from an end-user perspective. In particular, users could not have reinitialized a anyhow. However, under my proposal, users can write things like and the code works as expected (versus either having an unexpected result or getting an error). 
In any case, it seems like we're sort of \"narrowing in\" on a proposal here: when we see a function call with a parameter that is , and the value being passed is a deref, we want to modify the IR to not introduce a temporary and instead pass that parameter directly from its location, either we do some sort of \"no more assignments\" lock, or we move the pointer and introduce a temporary (and then pass in a deref of that temporary) Does that sound right so far?\nIf it's easier to do the more flexible approach (by moving the pointer) than the \"locking\" approach (which is a bit like borrowing the pointer until all arguments are evaluated, then releasing the borrow and doing a move instead), I don't mind it, I was only arguing with the \"uniformity\" aspect. Something important here: your approach is stabilizable, while mine is meant more for that one in the standard library, and not much else.\nYes, I was kind of shooting for something that we could conceivably stabilize.\nRemoving nomination, this was already discussed in triage meeting and it's under control.\nSo should we maybe try out the approach I proposed and see how well it works?\nyes, going to try that out.\nFor the record, the \"new plan\" is this.", "commid": "rust_issue_68304", "tokennum": 480}], "negative_passages": []} {"query_id": "q-en-rust-9a4344123574086f3fdb11e15d1fa3cd2a399c373f89e3595aff3fa1a9c5b2ef", "query": "LL | y.foo(); | ^ value used here after move error[E0382]: use of moved value: `*x` error[E0382]: use of moved value: `x` --> $DIR/double-move.rs:45:9 | LL | let _y = *x; | -- value moved here LL | x.foo(); | ^ value used here after move | ^ value used here after partial move | = note: move occurs because `*x` has type `str`, which does not implement the `Copy` trait error[E0382]: use of moved value: `*x` --> $DIR/double-move.rs:51:18 | LL | let x = \"hello\".to_owned().into_boxed_str(); | - move occurs because `x` has type `std::boxed::Box`, which does not implement the `Copy` trait LL 
| x.foo(); | - value moved here LL | let _y = *x; | ^^ value used here after move | = note: move occurs because `*x` has type `str`, which does not implement the `Copy` trait error: aborting due to 6 previous errors", "positive_passages": [{"docid": "doc-en-rust-7cf635da4f5adcfae533b40271ecf10f20aa01733028a100f12d429f49c1ae0c", "text": "When generating the arguments to a call, we ordinarily will make a temporary containing the value of each argument: But we will modify this such that, for arguments that meet the following conditions: The argument is moved (not copied), and its type is not known to be , and the \"place\" has outer derefs (e.g., we are passing P where P is some place), then we assign the place P (instead of ) to a temporary and we make the argument to the function be (instead of ). This means that given where , we used to generate but now we generate We are now moving the box itself, which is slightly different than the semantics for values known to be sized. In particular, it does not permit of the box after having moved its contents. But that is not supported for unsized types anyway. The change does permit modifications to the callee during argument evaluation and they will work in an analogous way to how they would work with sized values. So given something like: we will find that we (a) save the old value of to a temporary and then (b) overwrite while evaluating the arguments and then (c) invoke the with the old value of .\nUpdate: There is also\nis a perfectly normal operand, I don't know what you mean. It's the same kind of as we currently have in (on the RHS)\nOh sorry, somehow I thought operands could only refer directly to locals, not also dereference them. I think I am beginning to see the pieces here -- together with function argument handling, this entirely avoids having to dynamically determine the size of a local. 
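The semantics the plan above relies on — the moved operand is captured before the remaining arguments are evaluated, so the callee sees the old value even if later argument expressions reassign the local — can already be observed with sized values on stable Rust. A minimal sketch (the `helper`/`demo` names are mine, not from the thread):

```rust
fn helper(a: String, b: usize) -> (String, usize) {
    (a, b)
}

fn demo() -> (String, usize) {
    let mut x = String::from("old");
    // Arguments are evaluated left to right: the first operand moves
    // the *old* value of `x`; the block in the second argument then
    // reinitializes `x` and reads the new value's length.
    helper(x, {
        x = String::from("three");
        x.len()
    })
}

fn main() {
    // The callee received the old value, not the reassigned one.
    assert_eq!(demo(), (String::from("old"), 5));
}
```

Under the pointer-move lowering described above, a call like `f(*x)` with `x: Box<str>` would behave analogously: the box is saved to a temporary before later arguments run.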
I have to agree that is quite elegant, even though baking \"pass-by-reference\" into MIR semantics still seems like a heavy hammer to me. On the other hand this places a severe restriction on how unsized locals may be used, basically taking back large parts of Is there some long-term plan to bring that back (probably requires with dynamic alignment in LLVM)? Or should we amend the RFC or at least the tracking issue with the extra restrictions?\nI would keep the more general feature but use two different feature-gates (if we can). And just like with a few other feature-gates () we could warn that (the general one) is incomplete and could lead to misaligned variables (or something along those lines).", "commid": "rust_issue_68304", "tokennum": 499}], "negative_passages": []} {"query_id": "q-en-rust-9a4344123574086f3fdb11e15d1fa3cd2a399c373f89e3595aff3fa1a9c5b2ef", "query": "LL | y.foo(); | ^ value used here after move error[E0382]: use of moved value: `*x` error[E0382]: use of moved value: `x` --> $DIR/double-move.rs:45:9 | LL | let _y = *x; | -- value moved here LL | x.foo(); | ^ value used here after move | ^ value used here after partial move | = note: move occurs because `*x` has type `str`, which does not implement the `Copy` trait error[E0382]: use of moved value: `*x` --> $DIR/double-move.rs:51:18 | LL | let x = \"hello\".to_owned().into_boxed_str(); | - move occurs because `x` has type `std::boxed::Box`, which does not implement the `Copy` trait LL | x.foo(); | - value moved here LL | let _y = *x; | ^^ value used here after move | = note: move occurs because `*x` has type `str`, which does not implement the `Copy` trait error: aborting due to 6 previous errors", "positive_passages": [{"docid": "doc-en-rust-e815c1522a134b198016730d0062b3d332c75119cceb0b89e314acd16ab4a8f8", "text": "We could also add runtime asserts that the (dynamic) alignment is respected, once we know could never trigger them (e.g. from stable code).\nI was wondering about that too.
It seems like it'd be good to review the more general feature in any case. This is partly why I brought up the idea of wanting a guaranteed \"no alloca\" path -- this was something I remember us discussing long ago, and so in some sense I feel like the work we're doing here isn't just working around LLVM's lack of dynamic alignment support (though it is doing that, too) but also working to complete the feature as originally envisioned.\nI two notes to the tracking issue so we don't overlook this in the future.\nThis is probably expected, but the fix failed to actually make sound:", "commid": "rust_issue_68304", "tokennum": 172}], "negative_passages": []} {"query_id": "q-en-rust-9a83fc9f78700d48d487a5f8107e90e83eb27cb8394a6da7973c45a6fe804cfc", "query": "# Complete! At this point, you have successfully built the Guessing Game! Congratulations! This project showed you a lot: `let`, `match`, methods, associated functions, using external crates, and more. This first project showed you a lot: `let`, `match`, methods, associated functions, using external crates, and more. Our next project will show off even more. At this point, you have successfully built the Guessing Game! Congratulations! ", "positive_passages": [{"docid": "doc-en-rust-bf87b42aefc13bfda1b39cac27e454ad05a42c7f50c60a85a1d1b83cfdaf60af", "text": "I am learning rust and reading the book for the first time The section mentions a \"Next Project\" at the very end We can add a link to that project ?\nIt's referring to the next project showed in the book (so next section if I remember well). If confirms, this issue can be closed.\nActually, this sentence is vestigial from when we had the Dining Philosophers chapter. I would actually prefer to remove this sentence. Want to submit another PR? :D\nabsolutely :-) on it\nwas interested in looking at the old . 
pasted here for future people was removed in\nsent PR", "commid": "rust_issue_32936", "tokennum": 123}], "negative_passages": []} {"query_id": "q-en-rust-9a864c79d5f7576dcb7e6a5c804ddc2b0d19bb25c0f37360b865fc561b7ffd64", "query": "LLVMRustThinLTOPatchDICompileUnit(LLVMModuleRef Mod) { report_fatal_error(\"ThinLTO not available\"); } extern \"C\" void LLVMRustThinLTORemoveAvailableExternally(LLVMModuleRef Mod) { report_fatal_error(\"ThinLTO not available\"); } #endif // LLVM_VERSION_GE(4, 0)", "positive_passages": [{"docid": "doc-en-rust-eac71975284c29468d15101371d7dec17505b09a4c2a386257a23b255df72ff3", "text": "I'm opening this up to serve as a tracking issue for enabling multiple codegen units in release mode by default. I've written up a before but the tl;dr; is that multiple codegen units enable us to run optimization/code generation in parallel, making use of all available computing resources, often speeding up compilations by more than 2x. Historically this has not been done due to claims of a loss in performance, but the is intended to assuage such concerns. The most viable route forward seems to be to enable multiple CGUs and ThinLTO at the same time in release mode. Blocking issues: [x] Enable for libstd on all platforms - - - - Potential blockers/bugs: - -\ncc\nPresumably this will remain a stable compiler flag, but what else will change by default? Is the plan to make this only affect , or is the plan to also make this affect ? Do we want to make continue to use only a single CGU, to hedge against regressions for people who are already willing to trade off compiler time for runtime?\nI would specifically propose that any opt level greater than 1 uses 16 codegen units and ThinLTO enabled by default for those 16 codegen units.\nThis is very interesting. One would think that ThinLTO-driven inlining should always have to do less work than our much more conservative pre-LLVM inlining. But that doesn't always seem to be the case, as in the .
Most tests on the seem to profit though. Overall it's still not a clear picture to me.\nwas that comment meant for a different thread? I forget which one as well though, so I'll respond here! As a \"quick benchmark\" I compiled the regex test suite with 16 CGUs + ThinLTO and then toggled inlining in all CGUs on/off. Surprisingly inlining in all CGUs was 5s faster to compile, and additionally . In those timings is where inlining is only in one CGU, where is inlining in all CGUs. One benchmark, , got twice as slow!\nYeah, the comment was in response to (rust-doom) but since that was closed already, I put it here.", "commid": "rust_issue_45320", "tokennum": 458}], "negative_passages": []} {"query_id": "q-en-rust-9a864c79d5f7576dcb7e6a5c804ddc2b0d19bb25c0f37360b865fc561b7ffd64", "query": "LLVMRustThinLTOPatchDICompileUnit(LLVMModuleRef Mod) { report_fatal_error(\"ThinLTO not available\"); } extern \"C\" void LLVMRustThinLTORemoveAvailableExternally(LLVMModuleRef Mod) { report_fatal_error(\"ThinLTO not available\"); } #endif // LLVM_VERSION_GE(4, 0)", "positive_passages": [{"docid": "doc-en-rust-8459400afb569ad2188b3cd5b231230badf367bcfeb89dca33d60d66624907a0", "text": "Regarding the compilation time difference between pre- and post-trans inlining, my hypothesis would be that sometimes pre-trans inlining will lead to more code being eliminated early on (as in the case of regex apparently) and sometimes it will have the opposite effect. I suspect that there's room for improvement by tuning which LLVM passes we run specifically for ThinLTO. Or do we do that already?\nHeh it's true yeah, I'd imagine that there's always room for improvement in pass tuning in Rust :). Right now we perform (afaik) 0 customization of any pass manager in LLVM. 
All of the normal optimization passes, LTO optimization passes, and ThinLTO optimization passes are all the same as what's in LLVM itself.\nI wanted to also take the time and tabulate all the results from the to make sure it's all visible in one place. Note that all the timings below are comparing a release build to a release build with 16 CGUs and ThinLTO enabled. Improvements: All of the following compile times improved, sorted by most improved to least improved. Regressions: The following crates regressed in compile times, sorted from smallest regression to largest. alexcrichton's attempt to reproduce the regressions: Here I attempt to reproduce the regressions on my own machine with Unfortunately the only regression I was able to reproduce was the rust-belt regression. I'll be looking more into that.\nLooking into , one interesting thing I've found is that the crate takes longer to compile with ThinLTO than it would otherwise. Looking at it appears that one codegen unit in this crate takes 99% of the time in LLVM. This codegen unit appears to be basically entirely dominated by . Almost all of the benefit of multiple codegen units is spreading out the work across all CPUs. Enabling ThinLTO to avoid losing any perf is fundamentally doing more work than what's already happening at , but we get speedups across the board in most cases. If we have one huge CGU, split it in two, and work on those in parallel then we've got 50% of the original time to run ThinLTO (in a perfect world).
If ThinLTO takes more than 50% of the time then we'd have a build time regression, but that's almost never happening.", "commid": "rust_issue_45320", "tokennum": 488}], "negative_passages": []} {"query_id": "q-en-rust-9a864c79d5f7576dcb7e6a5c804ddc2b0d19bb25c0f37360b865fc561b7ffd64", "query": "LLVMRustThinLTOPatchDICompileUnit(LLVMModuleRef Mod) { report_fatal_error(\"ThinLTO not available\"); } extern \"C\" void LLVMRustThinLTORemoveAvailableExternally(LLVMModuleRef Mod) { report_fatal_error(\"ThinLTO not available\"); } #endif // LLVM_VERSION_GE(4, 0)", "positive_passages": [{"docid": "doc-en-rust-826aee41cb0e8b13d2368b8675cf3e3aa442a17b3bcb7408e02c50b17283eed2", "text": "What's happening here is that we have one huge CGU, but when we split it up we still have one huge CGU. This means that the CGU which may take ~1% less time in LLVM only gives us a tiny sliver of a window to run ThinLTO passes. In the case of the crate this means that adding ThinLTO passes is an overall build time regression. So generalizing even further, I think that enabling ThinLTO and multiple CGUs by default is going to regress compile time performance in any crate where our paritioning implementation doesn't actually partition very well. In the case of we've got one huge function which hogs almost all the optimization time (I think) and the entire crate is basically dominated by that one function. I would be willing to conclude, however, that such a situation is likely quite rare. Almost all other crates in the ecosystem will benefit from the partitioning which should basically evenly split up a crate. maybe you have thoughts on this though?\nThank you so much for collecting and analyzing such a large amount of data, Your conclusions make sense to me. Unevenly distributed CGU sizes are also a problem for incremental compilation; e.g. the test sees 90% CGU re-use, yet compile time is almost the same as from-scratch. 
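The scheduling arithmetic in the comments above — an even split halves the window ThinLTO has to fit into, while a lopsided split leaves only a tiny sliver — can be sketched as a toy calculation. The assumptions are mine, not from the thread: per-CGU optimization time is proportional to CGU size, CGUs are optimized fully in parallel, and ThinLTO adds a flat extra cost on top:

```rust
// Wall-clock time under the toy model: parallel optimization waits on
// the largest CGU, then the ThinLTO passes run afterwards.
fn wall_clock(cgu_sizes: &[u64], thinlto_extra: u64) -> u64 {
    cgu_sizes.iter().copied().max().unwrap_or(0) + thinlto_extra
}

fn main() {
    // Baseline: one huge CGU, no ThinLTO.
    assert_eq!(wall_clock(&[100], 0), 100);
    // Even split in two: ThinLTO has a 50-unit window before the
    // build regresses relative to the baseline.
    assert_eq!(wall_clock(&[50, 50], 30), 80);
    assert!(wall_clock(&[50, 50], 60) > 100);
    // Lopsided split (the pathological case described above): almost
    // no window, so the ThinLTO time is a near-pure regression.
    assert_eq!(wall_clock(&[99, 1], 30), 129);
}
```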
In the non-incremental case it might be an option to detect the problematic case right after partitioning and then switch to non-LTO mode? I'm not sure it's worth the extra complexity though. For cases where there is one big CGU but that CGU contains multiple functions, we should be able to rather easily redistribute functions to other CGUs. We'd only need a metric for the size of a . The number of MIR instructions might be a sufficient heuristic here. That would not help for but it might help for other crates. In conclusion, judging from the table above, I think we should make the default sooner rather than later. Open questions I see: What to do about ? I would have thought that it would always be a clear win to disable this when ThinLTO is enabled but that doesn't seem to be the case. Are there hardware configurations where this never is a win? E.g. should we default to traditional compilation if ?", "commid": "rust_issue_45320", "tokennum": 482}], "negative_passages": []} {"query_id": "q-en-rust-9a864c79d5f7576dcb7e6a5c804ddc2b0d19bb25c0f37360b865fc561b7ffd64", "query": "LLVMRustThinLTOPatchDICompileUnit(LLVMModuleRef Mod) { report_fatal_error(\"ThinLTO not available\"); } extern \"C\" void LLVMRustThinLTORemoveAvailableExternally(LLVMModuleRef Mod) { report_fatal_error(\"ThinLTO not available\"); } #endif // LLVM_VERSION_GE(4, 0)", "positive_passages": [{"docid": "doc-en-rust-a39618a6d2ec0fbbc39b9a1ecc2ba00b73839dc85487f888bb218f3688a727ce", "text": "FWIW, as the author of , which is a made-for-fun Piston game, I am more than happy to take a small compile regression when it seems like vast majority of crates get huge real-time improvements!\nThis seems like a good thing to have in our back pocket, but I'm also wary of trying to do this by default. For example the case is one where we may wish to disable ThinLTO but the one massive function could also critically rely on a function in a different CGU being inlined? 
That sort of case would be difficult for us to determine... In any case though I agree that we probably don't need the complexity just yet so I think we can defer this for later. Agreed! So far, for all the crates I've seen, the O(instructions) has been quite a good metric for \"how long this takes in all LLVM-related passes\", so counting MIR sounds reasonable to me as well. Again though I also feel like this is ok to have in our back pocket. I'd want to dig more into the minor change example, I'm curious how awry the CGU distribution is there and whether a heuristic like this would help. Quite surprisingly I've yet to see any data that it's beneficial to compile time in release mode or doesn't hurt runtime. (all data is that it hurts both compile time and runtime performance!) That being said we're seeing such big wins from ThinLTO on big projects today that we can probably just save this as a possible optimization for the future. On the topic of inline functions I recently dug up again (later closed in favor of , but the latter may no longer be true today). I suspect that may \"fix\" quite a bit of the compile time issue here without coming at a loss of performance? In any case possibly lots of interesting things we could do there. Also a good question! I find this to be a difficult one, however, because the CGU number will affect the output artifact, which means that if we do this sort of probing it'll be beneficial for performance but come at the cost of more difficult deterministic builds. You'd have to specify the CGUs manually or build on the same-ish hardware to get a deterministic build I think?
I think though that if you have 1 CPU then this is universally a huge regression.", "commid": "rust_issue_45320", "tokennum": 504}], "negative_passages": []} {"query_id": "q-en-rust-9a864c79d5f7576dcb7e6a5c804ddc2b0d19bb25c0f37360b865fc561b7ffd64", "query": "LLVMRustThinLTOPatchDICompileUnit(LLVMModuleRef Mod) { report_fatal_error(\"ThinLTO not available\"); } extern \"C\" void LLVMRustThinLTORemoveAvailableExternally(LLVMModuleRef Mod) { report_fatal_error(\"ThinLTO not available\"); } #endif // LLVM_VERSION_GE(4, 0)", "positive_passages": [{"docid": "doc-en-rust-4426806565390046acd7ad33b00b13788407fc81b51697f17774c126464fdbb6", "text": "With one CPU we'd split the preexisting one huge CGU into N different ones, probably take roughly the same amount of time to optimize those, and then tack on ThinLTO and more optimization passes. My guess is that with one CPU we'd easily see 50% regressions. With 2+ CPUs however I'd probably expect to see benefits to compile time. Anything giving us twice the resources to churn the CGUs more quickly should quickly start seeing wins in theory I think. For now though the deterministic builds argument wins me over in terms of leaving this as-is for all hardware configurations. That and I doubt anyone's compiling Rust on single-core machines nowadays!\nOne thing I think that's also worth pointing out is that up to this point we've mostly been measuring the runtime of an entire . That's actually, I believe, the absolute worst case scenario for where ThinLTO will provide benefit. Despite this, it's showing huge improvements for lots of projects! The benefit of ThinLTO and multiple CGUs is leveraging otherwise idle parallelism on the build machine. It's overall increasing the amount of work the compiler does. For a from scratch, though, you typically already have tons of crates compiling for the first half of the build in parallel. In that sense there's not actually any idle parallelism.
Put another way, ThinLTO and multiple CGUs should only be beneficial for builds which dont have many crates compiling in parallel for long parts of the build. If a build is 100% parallel for the entire time then ThinLTO will likely regress compile time performance. Now you might realize, however, that one very common case where you're only building one crate is in an incremental build! Typically if you do an incremental build you're only building a handful of crates, often serially. In that sense I think that there's some massive wins of ThinLTO + multiple CGUs in incremental builds rather than entire crate builds. Although improving both is of course great as well :)\nFor deterministic builds you have to do some extra configuration anyway (e.g. path remapping) so I would not consider that a blocker. And I guess there are single core VMs around somewhere.", "commid": "rust_issue_45320", "tokennum": 475}], "negative_passages": []} {"query_id": "q-en-rust-9a864c79d5f7576dcb7e6a5c804ddc2b0d19bb25c0f37360b865fc561b7ffd64", "query": "LLVMRustThinLTOPatchDICompileUnit(LLVMModuleRef Mod) { report_fatal_error(\"ThinLTO not available\"); } extern \"C\" void LLVMRustThinLTORemoveAvailableExternally(LLVMModuleRef Mod) { report_fatal_error(\"ThinLTO not available\"); } #endif // LLVM_VERSION_GE(4, 0)", "positive_passages": [{"docid": "doc-en-rust-9cbbd7c4c8d3f7f4c75a168140fc39d1686ff5f6d6e1524b514ba1c75b0b91ac", "text": "But all of this is such a niche case that I don't really care MIR-only RLIBs should put us into a pretty good spot regarding this, so I'm quite confident that we're on the right path overall.\nThat's really interesting. For incremental compilation enabled the situation might be different (because pre-trans inlining hurts re-use) but for the non-incremental case it sounds like it's pretty clear what to do.\nYeah, maybe we should revisit this at some point. 
Although I have to say the current solution of only and symbols and nothing in between is really nice and simple.\nHm yeah that's a good point about needing configuration anyway for deterministic builds. It now seems like a more plausible route to take! Also that's a very interesting apoint about incremental and inlining on our own end... Maybe we should dig more into those ThinLTO runtime regressions at some point! Also yeah I don't really want to change how we trans inline functions just yet, I do like the simplicity too :)\nout of curiosity how did you figure out this and which function it was. I've wanted to look into why the webrender build is so slow and would welcome tips.\noh sure I'd love to explain! So I originally found as a problematic crate when compiling rust-belt as it just took awhile and I decided to dig deeper. I checked out the crate and ran: That drops in the current directory, and opening that up I see: ! The graph here isn't always the easiest to read, but we've clearly got two huge bars, both of which correspond to taking a huge amount of time for that one CGU (first is optimization, second is ThinLTO + codegen). Next I ran: and that command dumps a bunch of IR files into . Our interesting CGU is so I opened up (we sure do love our long filenames). Inside that file it was 70k lines and some poking around showed that one function was 66k lines of IR. 
I sort of forget now how at this point I went from that IR to determining there was a huge function in there though...\nI don't know if this is the right place, but I think it should be considered whether the symbol issues reported in the Nightly section here: should be considered a blocker or not.", "commid": "rust_issue_45320", "tokennum": 500}], "negative_passages": []} {"query_id": "q-en-rust-9a864c79d5f7576dcb7e6a5c804ddc2b0d19bb25c0f37360b865fc561b7ffd64", "query": "LLVMRustThinLTOPatchDICompileUnit(LLVMModuleRef Mod) { report_fatal_error(\"ThinLTO not available\"); } extern \"C\" void LLVMRustThinLTORemoveAvailableExternally(LLVMModuleRef Mod) { report_fatal_error(\"ThinLTO not available\"); } #endif // LLVM_VERSION_GE(4, 0)", "positive_passages": [{"docid": "doc-en-rust-b0225b6de619b46503f20143d72f83ac92cc3a4f9288ee033156ed5537d68271", "text": "The tl;dr is that certain versions of llvm, (i've seen this on 3.8, and whatever llvm rustc nightly uses) appends seemingly random garbage to the end of some names, e.g. we get: instead of: This knocks the debuginfo out of sync (it doesn't have the garbage appended). I've been able to repro this with clang3.8 for c++ files as well (haven't tested on other llvm), and switching to 5.0 seems to have fixed the issue. I don't know if its within reach, but perhaps we should attempt upgrading to llvm 5.0 before releasing this on stable? 
Note I understand this is for release mode, but i see this in debug mode on rustc nightly right now as well...", "commid": "rust_issue_45320", "tokennum": 185}], "negative_passages": []} {"query_id": "q-en-rust-9a9d26e98addf1868e11f3ae81e4260eb830539ef00563d4a65fd92f5821d879", "query": "(&self_ty.kind, parent_pred) { if let ty::Adt(def, _) = p.skip_binder().trait_ref.self_ty().kind { let id = self.tcx.hir().as_local_hir_id(def.did).unwrap(); let node = self.tcx.hir().get(id); let node = self .tcx .hir() .as_local_hir_id(def.did) .map(|id| self.tcx.hir().get(id)); match node { hir::Node::Item(hir::Item { kind, .. }) => { Some(hir::Node::Item(hir::Item { kind, .. })) => { if let Some(g) = kind.generics() { let key = match &g.where_clause.predicates[..] { [.., pred] => {", "positive_passages": [{"docid": "doc-en-rust-1e91a33f7369d84f0bc42974a03df193d853988de2edb10da26581d11b8a406f", "text": "I tried code very similar to this: and it produces some type of type checking error when uncommenting the line beginning with . : Backtrace:\nThe error is likely due to the fact that the trait bounds on don't include for , though the compiler/type checker crashing is still likely a bug.\nThat code was introduced in , cc Submitted as a quick fix.\nI can't get it to work in the same crate, just with two crates, where the first one depends on the other: modify labels: -E-needs-mcve\nThanks I included a test in the PR.", "commid": "rust_issue_69725", "tokennum": 124}], "negative_passages": []} {"query_id": "q-en-rust-9ab9238216208a639255e55ba7901653a22efe82ba25f045378fc55d20270622", "query": "/// Unix-specific extensions to `Permissions` #[stable(feature = \"fs_ext\", since = \"1.1.0\")] pub trait PermissionsExt { /// Returns the underlying raw `mode_t` bits that are the standard Unix /// permissions for this file. /// Returns the underlying raw `st_mode` bits that contain the standard /// Unix permissions for this file. 
/// /// # Examples ///", "positive_passages": [{"docid": "doc-en-rust-b0f76e335d3badee54ff91c284523808a5f4e33eb6bbb1afdb963f102490878c", "text": "On linux, , then run: This changes the permissions of to . Is this expected? masks away everything other than the lower 3 triads when it calls out to this code: Then sends these mode bits into into chmod which considers four triads, not three. Possibly useful references: and Should we change Metadata.permissions to preserve that triad, maybe by masking with instead of ?\nI guess the solution depends on whether Unix's std::fs::Permissions is supposed to care about 9 bits or 12 (this is not an implementation detail since it's exposed to the programmer via of PermissionsExt). any thoughts on this?\nIt seems to me like we should extend the mask to the full 12 bits.\nSeems reasonable to me to avoid masking where it makes sense!", "commid": "rust_issue_44147", "tokennum": 166}], "negative_passages": []} {"query_id": "q-en-rust-9abb73820cedad68a8605de79954cab92ba723b0c0f3f8a5a1b9109bcbbc0629", "query": "let ty::CoroutineClosure(_, parent_args) = *tcx.type_of(parent_def_id).instantiate_identity().kind() else { bug!(); bug!(\"coroutine's parent was not a coroutine-closure\"); }; if parent_args.references_error() { return coroutine_def_id.to_def_id();", "positive_passages": [{"docid": "doc-en-rust-d9788f9fb62c8de2f97f07962af96a35636fdc8a876f620741389e1c3bb30856", "text": " $DIR/incorrect-variant-form-through-alias-caught.rs:12:9 | LL | let Alias::Braced(..) 
= panic!(); | ^^^^^^^^^^^^^^^^^ not a tuple struct or tuple variant | help: use the struct variant pattern syntax | LL | let Alias::Braced {} = panic!(); | ~~ error[E0618]: expected function, found enum variant `Alias::Unit` --> $DIR/incorrect-variant-form-through-alias-caught.rs:15:5", "positive_passages": [{"docid": "doc-en-rust-64b23c5c36e9988ec4c5b0f9b27af4a414e6c71c77fec9aaba09da710256fe45", "text": "I wrote code that looked like this while not having my brain turned on 100%. Due to said brain not functioning, it took me a while to understand what rustc was trying to tell me with the message pointing to the pattern with the text . sure looked like a unit variant to me! What rustc was trying to tell me was that the definition of the variant was a struct variant (\"found struct variant\") and my pattern wasn't matching the definition (\"this pattern you specified looks like it's trying to match a variant that is not a unit struct, unit variant or constant\"). The change I needed to make was adding to my pattern, because I was trying to distinguish different variants from each other, not use pieces within the variants. Suggesting seems like a decent starting point to me, but there could definitely be cases I'm not considering here and I'm totally open to hearing that No response No response $DIR/str-as-char-non-lit.rs:6:19 | LL | let _: &str = ('a'); | ---- ^^^^^ expected `&str`, found `char` | | | expected due to this error[E0308]: mismatched types --> $DIR/str-as-char-non-lit.rs:8:19 | LL | let _: &str = token(); | ---- ^^^^^^^ expected `&str`, found `char` | | | expected due to this error: aborting due to 2 previous errors For more information about this error, try `rustc --explain E0308`. 
", "positive_passages": [{"docid": "doc-en-rust-205d33f2c4ca407a5eba587eb9d79d5b66428139a1f256eedc1fb0c137300290", "text": " $DIR/issue-79843-impl-trait-with-missing-bounds-on-async-fn.rs:14:20 | LL | assert_is_send(&bar); | ^^^^ `::Bar` cannot be sent between threads safely ... LL | fn assert_is_send(_: &T) {} | ---- required by this bound in `assert_is_send` | = help: the trait `Send` is not implemented for `::Bar` help: introduce a type parameter with a trait bound instead of using `impl Trait` | LL | async fn run(_: &(), foo: F) -> std::io::Result<()> where ::Bar: Send { | ^^^^^^^^ ^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^ error[E0277]: `::Bar` cannot be sent between threads safely --> $DIR/issue-79843-impl-trait-with-missing-bounds-on-async-fn.rs:24:20 | LL | assert_is_send(&bar); | ^^^^ `::Bar` cannot be sent between threads safely ... LL | fn assert_is_send(_: &T) {} | ---- required by this bound in `assert_is_send` | = help: the trait `Send` is not implemented for `::Bar` help: introduce a type parameter with a trait bound instead of using `impl Trait` | LL | async fn run2(_: &(), foo: F) -> std::io::Result<()> where ::Bar: Send { | ^^^^^^^^ ^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^ error: aborting due to 2 previous errors For more information about this error, try `rustc --explain E0277`. ", "positive_passages": [{"docid": "doc-en-rust-bf02cff849637e34b780957988347f5e3bb12fc557fa33d07b85804f11218daa", "text": "(Edit: Actually, being an inherent fn doesn't matter. It happens with top-level fns too.) This prints the diagnostic: The suggestion in the last line is not valid syntax (). It also emits malformed syntax if the first parameter is a or parameter. The diagnostic becomes well-formed if any of the following is done: The first parameter is changed to instead ofThe fn is not an async fn. In either case, the diagnostic correctly says: Happens on both nightly: ... 
and stable:\nThis should hopefully be a fairly straightforward patch to the diagnostics code that generates the suggestion.", "commid": "rust_issue_79843", "tokennum": 132}], "negative_passages": []} {"query_id": "q-en-rust-9d87a2438bdc3bf9aad0b24d67cef63a4337c6431216583c2a236143f84959f2", "query": "check_expr(fcx, &**idx); let raw_base_t = fcx.expr_ty(&**base); let idx_t = fcx.expr_ty(&**idx); if ty::type_is_error(raw_base_t) || ty::type_is_bot(raw_base_t) { if ty::type_is_error(raw_base_t) { fcx.write_ty(id, raw_base_t); } else if ty::type_is_error(idx_t) || ty::type_is_bot(idx_t) { } else if ty::type_is_error(idx_t) { fcx.write_ty(id, idx_t); } else { let (_, autoderefs, field_ty) = autoderef(fcx, expr.span, raw_base_t, Some(base.id), lvalue_pref, |base_t, _| ty::index(base_t)); match field_ty { Some(ty) => { Some(ty) if !ty::type_is_bot(ty) => { check_expr_has_type(fcx, &**idx, ty::mk_uint()); fcx.write_ty(id, ty); fcx.write_autoderef_adjustment(base.id, base.span, autoderefs); } None => { _ => { // This is an overloaded method. let base_t = structurally_resolved_type(fcx, expr.span,", "positive_passages": [{"docid": "doc-en-rust-a39767ff1fc582a70382e095be04a350d7ea60ab147cde1462399fe395d9bbb8", "text": "Taking a look.\nProbably the best fix for now is to disallow deref of outright. 
I'll work up a PR that fixes this and a few other ICEs surrounding .\nThis is fixed with a test, closing.", "commid": "rust_issue_17373", "tokennum": 48}], "negative_passages": []} {"query_id": "q-en-rust-9d89377b6123bfa9c0405e2a7ef3b5f38333e5386208bd68d01876969848634e", "query": "path = self.parse_path(PathStyle::Type)?; path_span = path_lo.to(self.prev_span); } else { path = ast::Path { segments: Vec::new(), span: DUMMY_SP }; path_span = self.span.to(self.span); path = ast::Path { segments: Vec::new(), span: path_span }; } // See doc comment for `unmatched_angle_bracket_count`.", "positive_passages": [{"docid": "doc-en-rust-90bd225a208b9bb471a32eda4959929a256e6fff107b5af48e291c8d9be22cb1", "text": "root: yup-oauth2 - 360 detected crates which regressed due to this; cc // Clears (and restores) the `in_scope_lifetimes` field. Used when // visiting nested items, which never inherit in-scope lifetimes // from their surrounding environment. fn without_in_scope_lifetime_defs( &mut self, f: impl FnOnce(&mut LoweringContext<'_>) -> T, ) -> T { let old_in_scope_lifetimes = std::mem::replace(&mut self.in_scope_lifetimes, vec![]); // this vector is only used when walking over impl headers, // input types, and the like, and should not be non-empty in // between items assert!(self.lifetimes_to_define.is_empty()); let res = f(self); assert!(self.in_scope_lifetimes.is_empty()); self.in_scope_lifetimes = old_in_scope_lifetimes; res } pub(super) fn lower_mod(&mut self, m: &Mod) -> hir::Mod { hir::Mod { inner: m.inner,", "positive_passages": [{"docid": "doc-en-rust-cb3dbc0c517a588095eda62daa96bdfedbcaf49475de56306468c3ba5d5694f1", "text": "The diagnostics (\"use of undeclared lifetime name\") are bad, but this indeed must be an error because the inner is an illegal use of the outer rather than an implicitly defined fresh lifetime.\nHm, I don't think that's quite what I'd expect: previously using in that context would be fine because there is no shadowing so to speak, that is, this 
compiles:\nFrom Similarly, if a fn definition is nested inside another fn definition, it is an error to mention lifetimes from that outer definition (without binding them explicitly). This is again intended for future-proofing and clarity, and is an edge case. (In the second example there's an explicit definition that shadows the previous definition for following uses, in the first example there's no second definition, only use)\nI see that this is in the RFC, but I think I more or less disagree with the reasoning given. Disallowing the use of inband lifetimes when nesting function seems arbitrary and not entirely helpful; it also makes a mechanical change from \"old\" to \"new\" more difficult because shadowing is visible only based on indent levels.\nIMO it should be an error, with a helpful note like the one you get when writing . Let's not make in-band lifetimes any more confusing than they already are.\nThe distinction is that inband lifetime's selling point is that you don't need to declare lifetimes. If we say that, and then clarify it with \"except\" then I think the feature feels incomplete. Inband lifetimes confusion does not increase with permitting this, IMO.\nThe problem is if it's allowed, then has a different meaning than This seems unfortunate and confusing. Even unergonomic if I may say so. There's no way that the distinction between and won't be missed in a quick scan of the second one. But, like, the RFC process did decide that isn't important, so whatever.\nWell, somewhat, but not really. 
Nothing outside of imports/definitions inherits into the inner function, unlike the impl where there is a history and multiple things which inherit (, lifetimes, generics).", "commid": "rust_issue_52532", "tokennum": 461}], "negative_passages": []} {"query_id": "q-en-rust-9d8c6d880c99ebca1c3992108fa45a83bb68838e37f3b017b6833b21efe98157", "query": "} pub fn is(&self, mode: mode_t) -> bool { self.mode & libc::S_IFMT == mode self.masked() == mode } fn masked(&self) -> mode_t { self.mode & libc::S_IFMT } }", "positive_passages": [{"docid": "doc-en-rust-45efef0229c4766ecfdd3173c73ee58d24e7069ec3baffd913c7c9f351d53171", "text": " $DIR/intersection-patterns.rs:13:9 | LL | Some(x) @ y => {} | -------^^^- | | | | | binding on the right, should be on the left | pattern on the left, should be on the right | help: switch the order: `y @ Some(x)` error: left-hand side of `@` must be a binding --> $DIR/intersection-patterns.rs:23:9 | LL | Some(x) @ Some(y) => {} | -------^^^------- | | | | | also a pattern | interpreted as a pattern, not a binding | = note: bindings are `x`, `mut x`, `ref x`, and `ref mut x` error: pattern on wrong side of `@` --> $DIR/intersection-patterns.rs:32:9 | LL | 1 ..= 5 @ e => {} | -------^^^- | | | | | binding on the right, should be on the left | pattern on the left, should be on the right | help: switch the order: `e @ 1 ..=5` error: aborting due to 3 previous errors ", "positive_passages": [{"docid": "doc-en-rust-7d0e41f583374c24af112120ecdc9c7bfb9766f11dbcaf58cbdfd7089d1eadef", "text": "The compiler could suggest that identifier must precede the pattern in the following code: playpen link: Current error message: It could be improved as:\nmodify labels: A-diagnostics C-feature-request T-compiler\nSo what happens after is that we've parsed a and then follows which is not a valid token after a parsed . To fix this we can, after having parsed in , check if follows and then emit appropriate suggestions. 
We will need to use some heuristics to figure out what the user wants based on the structure of . Some of the suggestions would be and some not. cc", "commid": "rust_issue_65400", "tokennum": 129}], "negative_passages": []} {"query_id": "q-en-rust-9df45ddc28ab8a49da38cdb8604999f3501f8a23554827f887ab00570d412062", "query": "def::DefVariant(_, variant_id, _) => { for field in fields { self.check_field(expr.span, variant_id, NamedField(field.ident.node)); NamedField(field.ident.node.name)); } } _ => self.tcx.sess.span_bug(expr.span,", "positive_passages": [{"docid": "doc-en-rust-cd356b80efe8963569b705c0d082e39ab100fdda7683d49f27a524f808e2b4f1", "text": "Tracking issue for: rust-lang/rfcs Functional record update does not mark the source as consumed, even if does not implement Copy. This means that e.g. the source's destructor will eventually be run, so the following code prints :\nNote that, as discovered, you can also explicitly consume , also allowing code like:\n:bomb:\nIs legal, even if all members are private? It's a bit surprising\nIt is. There's the potentially-useful case where some of the members are private, and FRU allows you to update the public ones. Of course it should consume the source even then.\nP-backcompat-lang, 1.0 beta\nIn my opinion the problem here is the privacy violation. The idea is that is expanded to and so forth. In this case, that does not trigger a move, because the types of those fields are . I think that part is acting as expected. What is surprising is that one can do this from outside the module.\nThat was (but my favored solution isn't minor).\nWhen I initially saw this problem, I did not see it as a privacy violation, for the (obvious?) reason that since no private members of had been mentioned in the code , the client module is not breaking any abstraction introduced by privacy. 
I think this line of thinking is analogous to outline of the useful case where FRU allows one to update the public members of a type and silently move the private ones as well, via . This is useful for the case where the author of wants to reserve the right to add new private members to while still allowing for clients to use the convenient FRU syntax. I originally was going to try to explain my point of view via analogy with something from hygienic macros: If I were to define (and export) a macro within 's defining module: then it should, in principle, not be a privacy violation to use from client code outside 's defining module. (That is, in principle, hygiene should allow macros to expose controlled access to private internals.) Then I realized the crucial problem with that kind of analogy: is something that the designer of has to choose to put into the module; i.e., it is opt-in. But currently in Rust, FRU is not opt-in; its not even something you can opt out of.", "commid": "rust_issue_21407", "tokennum": 487}], "negative_passages": []} {"query_id": "q-en-rust-9df45ddc28ab8a49da38cdb8604999f3501f8a23554827f887ab00570d412062", "query": "def::DefVariant(_, variant_id, _) => { for field in fields { self.check_field(expr.span, variant_id, NamedField(field.ident.node)); NamedField(field.ident.node.name)); } } _ => self.tcx.sess.span_bug(expr.span,", "positive_passages": [{"docid": "doc-en-rust-d6f89ff452205249e804ee67896c49d22c052f99387d1c0f11af3b93dd1a3665", "text": "Any struct automatically is given support for it, whether it be appropriate for that struct or not. And that seems wrong, at least given examples like the one provided here in . This latter point, that providing support for FRU should not be something that is silently attached to a type like , seems somewhat in line with the thinking given in . But I am also pretty sure we cannot adopt anything of the scope given in that draft. 
(Still, maybe there's a hint of solution embedded within, in the sense of having some way of marking a type as data vs abstract; see final two bullet points below.) So, as I see it, we can either: out a way to support the point-of-view outlined here in the description for , in that should somehow be treated as consuming . Such a change seems like it would be quite subtle, since we would need to have some way of expressing that code like is sound, even if the types of the fields and are consumed by their respective and methods. (In other words, need to consume , yet somehow not completely consume it. Maybe there's a way to think of it as consuming \"whatever is left of the type\", I do not know.) Advantages: This would address the bug as described by the issue author, and would not break any code that was not already broken. Drawbacks: It is not clear what the semantics are for this form (in other words: the phrase \"would not break any code\" above is only true if you first accept that this hypothesized semantics exists). Also, there may not be time to do this properly for 1.0. , try to adopt a data/abstract-type distinction along the lines of the one in draft RFC. (I'm not going to write Advantages/Drawbacks for this; I think we clearly do not have time to do it that way for 1.0) , change FRU into a non-hygienic expansion into the record construction form with all fields listed, in the sense that is synonymous with and subject to privacy checks. Advantages: Seems really simple to implement, and is consistent with at least one core team member's mental model of FRU. 
Drawbacks: If we did this, then code like the example above would break.", "commid": "rust_issue_21407", "tokennum": 474}], "negative_passages": []} {"query_id": "q-en-rust-9df45ddc28ab8a49da38cdb8604999f3501f8a23554827f887ab00570d412062", "query": "def::DefVariant(_, variant_id, _) => { for field in fields { self.check_field(expr.span, variant_id, NamedField(field.ident.node)); NamedField(field.ident.node.name)); } } _ => self.tcx.sess.span_bug(expr.span,", "positive_passages": [{"docid": "doc-en-rust-052ce01de26b08a8f737abdfb24caf536f7787a9ed0ef876d1b88ac232477464", "text": ", let FRU keep its current hygienic (aka \"privacy violating\") semantics, but also make FRU something one must opt-in to support on a type. E.g. make a builtin trait that a struct must implement in order to be usable with FRU. (Or maybe its an attribute you attach to the item.) Advantages: Also seems really simple to implement (in the language). Accommodates users of FRU for the example. Drawbacks: If we did this, it would impose a burden on all code today that makes use of FRU, since they would have to start implementing . Thus, not simple to implement for the libraries and the overall ecosystem. , a hybrid between the previous two bullet-points: By default, treat FRU as non-hygienic expansion, but add a builtin trait that one can opt-into to get the hygienic (aka \"privacy violating\") semantics. As a bonus, perhaps implementing on a struct would also imply implementing on that struct, which seems like it would basically give us something very similar to the semantics desired by (Or, again, maybe its an attribute you attach to the item. This detail does not particularly matter to me.) Advantage: While this is obviously more complicated than the previous two bullet-points, it has the advantage that it has a staged landing strategy: We could just_ implement the change to FRU to be non-hygienic expansion for 1.0 beta. 
We could add at an arbitrary point in the future; it would not have to be in the 1.0 release. Drawback: People have claimed that some of my past proposals were a bit ... baroque. I imagine this one is no exception. :) (In other words, this semantics is a little complicated to explain to a newcomer.)\nI don't want to make a big fuss about it, but I do want to note, again, that our \"deadlines\" are entirely self-imposed, and that under these circumstances \"we don't have time\" is slightly odd, because we can make time, any time we want. The blanket refusal to even contemplate doing that - even in light of new information - and to weigh the tradeoffs on their own merits is something I still don't really understand.", "commid": "rust_issue_21407", "tokennum": 486}], "negative_passages": []} {"query_id": "q-en-rust-9df45ddc28ab8a49da38cdb8604999f3501f8a23554827f887ab00570d412062", "query": "def::DefVariant(_, variant_id, _) => { for field in fields { self.check_field(expr.span, variant_id, NamedField(field.ident.node)); NamedField(field.ident.node.name)); } } _ => self.tcx.sess.span_bug(expr.span,", "positive_passages": [{"docid": "doc-en-rust-f88caec2557592d4606dad39cd734d5f1a3e33cd1274c6919ee6120c35705a54", "text": "That out of the way, an additional possibility, let's call it 6., for list: We could just make the \"abstract type\" vs. \"data type\" distinction based on \"does or doesn't the type have a private field\". This doesn't entirely fill me with satisfaction (as with \"worse is better\" solutions in general), with specific drawbacks being that it doesn't extend cleanly to the nullary case (no fields at all), and that in terms of UX a change of semantics based on the presence or absence of a private field may be unexpected, but on the plus side, it's simpler, and would, I think, work.\nSure, hypothetical show-stopper issues could cause us to revisit our self-imposed schedule. 
I personally don't think that this issue, on its own at least, provides sufficient motivation for schedule revision. I admit that your draft RFC seems like it addresses a host of issues, so maybe all of them taken in unison would be a different story. (But then again, that's not even my call to make.)\nThere were a couple more options that I should have put on the list, for completeness, but overlooked. 6 note, the \"has any private fields ==abstract-type; all public fields ==data-type\", is one of them. This would (probably) be less work than adopting the draft RFC in full, but I am not sure what the other Advantages/Disadvantages are. I invite others to write an Advantages/Drawbacks for this approach (perhaps in an RFC PR). 7 We could add a way for a struct to opt-out of FRU support entirely. So in this approach, one would attach the appropriate attribute to , , etc. Advantages: won't break any existing code Drawbacks: Seems quite fragile; we may be good about auditing our own stdlib, but clients who are not aware of the attribute may inadvertently expose themselves to the same bug that currently plagues .\nAlso, I just wanted to mention: when I wrote my comment 6 hours ago, I came to the keyboard thinking that the FRU support for adding new private fields to and having still work was an important use case. 
But after some reflection in the hours since, supporting that use case does not seem as important to me.", "commid": "rust_issue_21407", "tokennum": 489}], "negative_passages": []} {"query_id": "q-en-rust-9df45ddc28ab8a49da38cdb8604999f3501f8a23554827f887ab00570d412062", "query": "def::DefVariant(_, variant_id, _) => { for field in fields { self.check_field(expr.span, variant_id, NamedField(field.ident.node)); NamedField(field.ident.node.name)); } } _ => self.tcx.sess.span_bug(expr.span,", "positive_passages": [{"docid": "doc-en-rust-af18126ee9950e4e8ddf5f7113df149b986b16d1a95a5eb29bbef32e79a631f3", "text": "Here is why: It is relatively easy for a developer who wants to provide that sort of extensible abstraction to still do it, by factoring the type into an all-public front-end with a single public field that holds the all-private back-end, like so: So, given that there exists a reasonable pattern to bring back usable FRU even under the option \"(3.) change FRU into a non-hygienic expansion into the record construction form with all fields listed, in the sense that is synonymous with and subject to privacy checks.\", I would not object to us adopting (3.) aka solution.\nI think that \"has any private field\" is a decent way to handle abstractness (you can always add a private unit field). On the other hand, Rust supports trait bounds of types \u2013 these should handle roles just fine \u2013 you shouldn't be able to coerce past a trait bound. By the way, this is somewhat subtle, because coercing to is wrong (as it leaks memory \u2013 it coerces past the implicit bound), but coercing to is fine (as the bound isn't used by ).
However, this isn't a property of \u2013 you still can't coerce to \u2013 so we probably want a relatively-sophisticated way to occasionally ignore roles.\nI mean, Rust already has a mechanism for ensuring representation-abstraction \u2013 private fields \u2013 which are semantic (they are handled after type-checking, we probably want to handle them during it, but they definitely aren't handled syntactically or during resolution). It is just that FRU currently accesses them improperly. Also, it doesn't have much to do with e.g. roles \u2013 is an abstract type, but it doesn't use any bound on . only uses the implicit bound. A for data-only types would be nice, through.\nAnother alternative came to me while I was drafting an RFC about this. (But its an alternative that I do not particularly like.) 8a Instead of making \"has any private fields\" the thing that makes FRU unavailable, instead make the rule be \"has all private fields.\" Thus types like and will not be subject to the vulnerability outlined here. Adding a single field to an otherwise private struct is what changes the nature of a struct with respect to abstraction and FRU, rather than adding a single non- field to an otherwise public struct.", "commid": "rust_issue_21407", "tokennum": 511}], "negative_passages": []} {"query_id": "q-en-rust-9df45ddc28ab8a49da38cdb8604999f3501f8a23554827f887ab00570d412062", "query": "def::DefVariant(_, variant_id, _) => { for field in fields { self.check_field(expr.span, variant_id, NamedField(field.ident.node)); NamedField(field.ident.node.name)); } } _ => self.tcx.sess.span_bug(expr.span,", "positive_passages": [{"docid": "doc-en-rust-2970bb6c7ef2c53faa928bc9972b6000d9bf53fad0a8ad5bc68188bb06d21ee9", "text": "8b, which is so similar to 8a that I gave them both the same numeral: Outlaw the trivial FRU form . That is, to use FRU, you have to use at least one field in the constructing expression. 
Again, this implies that types like and will not be subject to the vulnerability outlined here. (This may be a decent change to make to the language regardless of whatever else happens on this ticket; the only argument in favor of keeping support for such a trivial form is maybe for supporting certain macro-expansions. But fully-general macro-authors already have to tread very carefully around corner cases on structs, due to issues like the empty braces problem (see )\nThat would leave the vulnerability intact in the case where the type has both public fields and private fields, the fact that no such type currently exists in the standard library notwithstanding (just as a very artificial demonstration, consider a with ).\nyep. Library authors would have to factor their structures very carefully. (I described the drawback you mention in the RFC I posted shortly after you wrote your comment, though I did not include the detail of having the private fields be , which does seem like it could make the situation all the more subtle. Not insurmountable, but a definite drawback.) 
Just to be clear, I'm not saying I prefer (8a) or (8b); I'm just trying to be complete in my exploration of the solution space.\nRepurposed as the tracking issue for rust-lang/rfcs\nI'll just go snag", "commid": "rust_issue_21407", "tokennum": 323}], "negative_passages": []} {"query_id": "q-en-rust-9e326be467fce2de0a4383c7fb5490b8b637ac2665c1d7366ca0bad6f84a8aca", "query": "self.opts.debugging_opts.print_enum_sizes } pub fn nonzeroing_move_hints(&self) -> bool { !self.opts.debugging_opts.disable_nonzeroing_move_hints self.opts.debugging_opts.enable_nonzeroing_move_hints } pub fn sysroot<'a>(&'a self) -> &'a Path { match self.opts.maybe_sysroot {", "positive_passages": [{"docid": "doc-en-rust-f690b1524cefb71b0292504fb4b2e291d373ad2a8482ceb54a934841747b8532", "text": "It looks like that when a local variable is initialized in a loop and dropped in the same loop, the stack local drop flag for that local isn't reinitialized to \"not dropped\" when the loop runs after the first time. Concretely, this test passes on stable and fails on nightly right now: cc\nInterestingly this slight variation on the above code does drop twice as expected:", "commid": "rust_issue_27401", "tokennum": 80}], "negative_passages": []} {"query_id": "q-en-rust-9e5a15545918bb86695b44edf3c577c5ace0b547a05a3dd1baac9c1c4004e102", "query": "format!(\"*{}\", code) }; return Some(( sp, expr.span, message, suggestion, Applicability::MachineApplicable,", "positive_passages": [{"docid": "doc-en-rust-bc991a8c63d81686bebf07f793f5c69e366692742b1f03c258d8aba8722870bd", "text": " $DIR/min-choice-reject-ambiguous.rs:17:5 | LL | type_test::<'_, T>() // This should pass if we pick 'b. | ^^^^^^^^^^^^^^^^^^ ...so that the type `T` will meet its required lifetime bounds | help: consider adding an explicit lifetime bound... | LL | T: 'b + 'a, | ++++ error[E0309]: the parameter type `T` may not live long enough --> $DIR/min-choice-reject-ambiguous.rs:28:5 | LL | type_test::<'_, T>() // This should pass if we pick 'c. 
| ^^^^^^^^^^^^^^^^^^ ...so that the type `T` will meet its required lifetime bounds | help: consider adding an explicit lifetime bound... | LL | T: 'c + 'a, | ++++ error[E0700]: hidden type for `impl Cap<'b> + Cap<'c>` captures lifetime that does not appear in bounds --> $DIR/min-choice-reject-ambiguous.rs:39:5 | LL | fn test_ambiguous<'a, 'b, 'c>(s: &'a u8) -> impl Cap<'b> + Cap<'c> | -- hidden type `&'a u8` captures the lifetime `'a` as defined here ... LL | s | ^ | help: to declare that `impl Cap<'b> + Cap<'c>` captures `'a`, you can add an explicit `'a` lifetime bound | LL | fn test_ambiguous<'a, 'b, 'c>(s: &'a u8) -> impl Cap<'b> + Cap<'c> + 'a | ++++ error: aborting due to 3 previous errors Some errors have detailed explanations: E0309, E0700. For more information about an error, try `rustc --explain E0309`. ", "positive_passages": [{"docid": "doc-en-rust-52ff78bb803c348d23865431fcf95aca9114b024ef3ad08fdb44f7d12198f25f", "text": "The following code should pass regardless of the declaration order of lifetimes: This is related to but it's different in that we do have a lower bound region, , but we fail to recognize that because , which is responsible for calculating from , assumes that the outlive relation is a total order, which is not true. Here is the graph of : ! So we need an efficient algorithm that works for partial order relations, but the one that comes to mind is . The perf may not be really important here but I'm curious to see the alternatives. 
label C-bug T-compiler A-borrow-checker E-mentor E-help-wanted $DIR/bad-index-due-to-nested.rs:20:5 | LL | map[k] | ^^^ the trait `Hash` is not implemented for `K` | note: required by a bound in ` as Index<&K>>` --> $DIR/bad-index-due-to-nested.rs:9:8 | LL | K: Hash, | ^^^^ required by this bound in ` as Index<&K>>` help: consider restricting type parameter `K` | LL | fn index<'a, K: std::hash::Hash, V>(map: &'a HashMap, k: K) -> &'a V { | +++++++++++++++++ error[E0277]: the trait bound `V: Copy` is not satisfied --> $DIR/bad-index-due-to-nested.rs:20:5 | LL | map[k] | ^^^ the trait `Copy` is not implemented for `V` | note: required by a bound in ` as Index<&K>>` --> $DIR/bad-index-due-to-nested.rs:10:8 | LL | V: Copy, | ^^^^ required by this bound in ` as Index<&K>>` help: consider restricting type parameter `V` | LL | fn index<'a, K, V: std::marker::Copy>(map: &'a HashMap, k: K) -> &'a V { | +++++++++++++++++++ error[E0308]: mismatched types --> $DIR/bad-index-due-to-nested.rs:20:9 | LL | fn index<'a, K, V>(map: &'a HashMap, k: K) -> &'a V { | - this type parameter LL | map[k] | ^ | | | expected `&K`, found type parameter `K` | help: consider borrowing here: `&k` | = note: expected reference `&K` found type parameter `K` error[E0308]: mismatched types --> $DIR/bad-index-due-to-nested.rs:20:5 | LL | fn index<'a, K, V>(map: &'a HashMap, k: K) -> &'a V { | - this type parameter ----- expected `&'a V` because of return type LL | map[k] | ^^^^^^ | | | expected `&V`, found type parameter `V` | help: consider borrowing here: `&map[k]` | = note: expected reference `&'a V` found type parameter `V` error: aborting due to 4 previous errors Some errors have detailed explanations: E0277, E0308. For more information about an error, try `rustc --explain E0277`. 
", "positive_passages": [{"docid": "doc-en-rust-406a2986ff0ccb500658ee5abeb43b92da770a2e1e06ad8161c217272f3e9cc5", "text": "I tried this code (): I expected to see this happen: rustc outputs an error saying that I need to add the and traits on . Instead, this happened: which is jarring to see, and unhelpful, because HashMap does support Index, but only with particular trait bounds on . $DIR/coherence-constrained.rs:14:5 | LL | async fn foo(&self) {} | ^^^^^^^^^^^^^^^^^^^ cannot satisfy `::T == ()` error[E0284]: type annotations needed: cannot satisfy `::T == ()` --> $DIR/coherence-constrained.rs:22:5 | LL | async fn foo(&self) {} | ^^^^^^^^^^^^^^^^^^^ cannot satisfy `::T == ()` error[E0119]: conflicting implementations of trait `Foo` for type `Bar` --> $DIR/coherence-constrained.rs:18:1 | LL | impl Foo for Bar { | ---------------- first implementation here ... LL | impl Foo for Bar { | ^^^^^^^^^^^^^^^^ conflicting implementation for `Bar` error: aborting due to 3 previous errors Some errors have detailed explanations: E0119, E0284. For more information about an error, try `rustc --explain E0119`. ", "positive_passages": [{"docid": "doc-en-rust-971b90aaf1316210c41e93f144e98b6981b2461b1128426fc3dcd08ac2a1f1d4", "text": "No response No response No response\n(for compiler people:) We could fix this by merging the impl WF and coherence checks into one compiler stage. I'll put up a PR and see what T-types thinks.\nlabels +AsyncAwait-Triaged We discussed this in a WG-async meeting and are deferring to the resolution suggested above. He noted that this is a fix to the diagnostics only. 
The fact that this is an error is correct.", "commid": "rust_issue_116982", "tokennum": 104}], "negative_passages": []} {"query_id": "q-en-rust-a159fadfa1ea5a635dcb6d369447fc73103d2371a3047cfe0d69262e862cdfff", "query": "pub static G: fn() = G0; pub static H: &(dyn Fn() + Sync) = &h; pub static I: fn() = Helper(j).mk(); pub static K: fn() -> fn() = { #[inline(never)] fn k() {} #[inline(always)] || -> fn() { k } }; static X: u32 = 42; static G0: fn() = g;", "positive_passages": [{"docid": "doc-en-rust-f8ec7071402e43ae430484c0343bfca03af5fcd3009ca46bcbda1ddb1c535629", "text": "Regression in ( cc modify labels: +regression-from-stable-to-stable -regression-untriaged\nWG-prioritization assigning priority (). label -I-prioritize +P-medium\nBeta backport accepted as per compiler team . A backport PR will be authored by the release team at the end of the current development cycle. label +beta-accepted", "commid": "rust_issue_126012", "tokennum": 80}], "negative_passages": []} {"query_id": "q-en-rust-a1dae52acc3c2367bff88d7bdd2e939cd40fcb03345c040997d4d6f68e60e3a0", "query": " fn main() {} #[cfg(FALSE)] fn container() { const extern \"Rust\" PUT_ANYTHING_YOU_WANT_HERE bug() -> usize { 1 } //~^ ERROR expected `fn` //~| ERROR `const extern fn` definitions are unstable } ", "positive_passages": [{"docid": "doc-en-rust-4b896e75ea201d5026a72d90704af60b8d8f16ff87fb6c41e963080cb6fd89bb", "text": "The following program compiles, and prints :\nAs pointed out by on discord , this also works on stable:\nRegression introduced in 1.40.0, specifically this line:\nI have a fix in", "commid": "rust_issue_68062", "tokennum": 42}], "negative_passages": []} {"query_id": "q-en-rust-a1f6ae7722aba151ed6c38dc4da927a6481a1bda96277ae3230685ddb69c4653", "query": "} } #[doc(hidden)] #[stable(feature = \"rust1\", since = \"1.0.0\")] pub fn from_elem(elem: T, n: usize) -> Vec { unsafe { let mut v = Vec::with_capacity(n); let mut ptr = v.as_mut_ptr(); // Write all elements except the last one for i in
1..n { ptr::write(ptr, Clone::clone(&elem)); ptr = ptr.offset(1); v.set_len(i); // Increment the length in every step in case Clone::clone() panics } if n > 0 { // We can write the last element directly without cloning needlessly ptr::write(ptr, elem); v.set_len(n); } v } } //////////////////////////////////////////////////////////////////////////////// // Common trait implementations for Vec ////////////////////////////////////////////////////////////////////////////////", "positive_passages": [{"docid": "doc-en-rust-0a396996c5e71e5549afbc67ed4bc7561878e9d205281062d3053330284332cc", "text": "Super easy to implement for anyone interested. The implementation in the RFC is sound, although the new code should be abstracted out into a function that the macro calls for inline and macro-sanity reasons. The function needs to be marked as public/stable, but should otherwise be marked as #[doc(hidden)], as it is not intended to be called directly -- only by the macro.\nWilling to mentor on details.\nI'd like to give it a shot :)\nBy the way: The seems very sparse. If it's okay, I'd expand that a bit while I'm on it, or should that go into a separate PR?\nYeah, that's totally reasonable to do at the same time!\nGreat! What would be the preferred place where to put the new function?
I've it to as for now (currently compiling it)\nSince it's supposed to only be used by the macro (and will be doc(hidden)), it makes sense to me to put it in Putting it in makes sense, too, though.\nhas a section labeled \"Internal methods and functions\" that seemed to fit well.", "commid": "rust_issue_22414", "tokennum": 236}], "negative_passages": []} {"query_id": "q-en-rust-a244dd86b7e6d0b0c3a59adae5296673a39459d1e861da3795af3d6587384159", "query": " #![feature(marker_trait_attr)] #[marker] trait Marker {} impl Marker for &'_ () {} //~ ERROR type annotations needed impl Marker for &'_ () {} //~ ERROR type annotations needed fn main() {} ", "positive_passages": [{"docid": "doc-en-rust-eaf8aed72a7fb57ad8d28d7447bf226504be02fb120af8e4c8b07e9c7b0dfbfd", "text": "Not sure if we have a test for the following and rust #![feature(markertraitattr)] #[marker] trait Marker {} impl Marker for &' () {} impl Marker for &' () {} If not, we need to add them, since they would be important for special casing during canonicalization. $DIR/panic-handler-with-target-feature.rs:11:1 | LL | #[target_feature(enable = \"avx2\")] | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ LL | LL | fn panic(info: &PanicInfo) -> ! { | ------------------------------- `panic_impl` language item function is not allowed to have `#[target_feature]` error: aborting due to previous error ", "positive_passages": [{"docid": "doc-en-rust-d6f19833b4d55a19558c344ec77f996c005c4f43f0f3a11acf2c70ec42b32de6", "text": "This works (but it shouldn't): similar to , cc (tracking issue) label T-lang T-compiler C-bug I-unsound F-targetfeature11\ncc , author of Note that this is about the nostd panic handler, not about the function. This fails as it should: gives\nugh we really need a way to make sure that every single \"this is called by rust\" function gets a check... 
I assume that these functions can't be unsafe, so I guess those checks are a good way to start looking and make sure to deduplicate the logic there?\nYeah sadly we do need to make sure, not just for the constructs that we have now but also it needs to be kept in mind for the future constructs we add or stabilize. A lot thankfully goes through either the trait system (operators or the GlobalAlloc trait used by ) or function pointers (alloc error hook, std panic hook). Functions that you can make callable solely by setting an attribute, those are rare. The best list of lang items I could find was in the , so that would be a great start for such an exhaustive search. Thankfully most of them are unstable, but still it needs to be addressed for each of them before stabilization (if that is a target).\nWG-prioritization assigning priority (). label -I-prioritize +P-medium\nlabel +requires-nightly\nThere is also another table of lang items in the source code, .", "commid": "rust_issue_109411", "tokennum": 318}], "negative_passages": []} {"query_id": "q-en-rust-a47f45326d665d88440f337adc838c3c9e08a0c03fdadc0eb8672871d5a4b122", "query": "\"arch\": \"x86_64\", \"cpu\": \"x86-64\", \"crt-static-respected\": true, \"data-layout\": \"e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-f80:128-n8:16:32:64-S128\", \"data-layout\": \"e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-i128:128-f80:128-n8:16:32:64-S128\", \"dynamic-linking\": true, \"env\": \"gnu\", \"executables\": true,", "positive_passages": [{"docid": "doc-en-rust-1e7639d323e44931dc05aa7e023f1372448209a69239fe00cbda3db07686f887", "text": "The commit a check that starts firing due to the changes to the default target layouts in LLVM 18 ():\nHaving this issue on as well: Why it's trying to compile for a 32-bit target is clearly beyond me because I'm using an entirely 64-bit host \u2015 since happens to be the author of the crate this is failing on (namely, the 32-bit BIOS version of the bootloader crate which I explicitly
avoid depending on for my intendedly UEFI-only kernel), pinging him for assistance.\nI think this issue thread is about some failing tests in the Rust repo. Your issue seems to be specific to the crate, which required some updates to the custom targets it is using. I published a new version already, which should fix the issue. If you still have issues with the crate after a , please open an issue in the repository.", "commid": "rust_issue_120492", "tokennum": 185}], "negative_passages": []} {"query_id": "q-en-rust-a482512add36d8beb3cbbb1735da3318243ba8145443b0bb7fc5f5c00edf2467", "query": " // run-pass #![allow(dead_code)] enum E { A, B } fn main() { match &&E::A { &&E::A => { } &&E::B => { } }; } ", "positive_passages": [{"docid": "doc-en-rust-90bd225a208b9bb471a32eda4959929a256e6fff107b5af48e291c8d9be22cb1", "text": "root: yup-oauth2 - 360 detected crates which regressed due to this; cc error[E0311]: the parameter type `T` may not live long enough --> $DIR/issue-86483.rs:6:1 | LL | pub trait IceIce | ^ - help: consider adding an explicit lifetime bound...: `T: 'a` | _| | | LL | | where LL | | for<'a> T: 'a, LL | | { ... | LL | | LL | | } | |_^ ...so that the type `T` will meet its required lifetime bounds error[E0311]: the parameter type `T` may not live long enough --> $DIR/issue-86483.rs:10:5 | LL | pub trait IceIce | - help: consider adding an explicit lifetime bound...: `T: 'a` ... LL | type Ice<'v>: IntoIterator; | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...so that the type `T` will meet its required lifetime bounds error[E0309]: the parameter type `T` may not live long enough --> $DIR/issue-86483.rs:10:32 | LL | pub trait IceIce | - help: consider adding an explicit lifetime bound...: `T: 'v` ... LL | type Ice<'v>: IntoIterator; | ^^^^^^^^^^^^ ...so that the reference type `&'v T` does not outlive the data it points at error: aborting due to 3 previous errors For more information about this error, try `rustc --explain E0309`. 
", "positive_passages": [{"docid": "doc-en-rust-740e0124fe7b9fb360f4e9638fbe9ccc902825c50c3d2e1b17fb8758a8ca2f54", "text": " $DIR/issue-115599.rs:5:12 | LL | if let CONST_STRING = empty_str {} | ^^^^^^^^^^^^ | = note: the traits must be derived, manual `impl`s are not sufficient = note: see https://doc.rust-lang.org/stable/std/marker/trait.StructuralEq.html for details error: aborting due to previous error ", "positive_passages": [{"docid": "doc-en-rust-749b2a39e3587addcd49eae214863ecc634dee05f8269bc4babf97bcfde394f8", "text": " $DIR/issue-70388-recover-dotdotdot-rest-pat.rs:4:13 | LL | let Foo(...) = Foo(0); | ^^^ | | | not a valid pattern | help: for a rest pattern, use `..` instead of `...` error: unexpected `...` --> $DIR/issue-70388-recover-dotdotdot-rest-pat.rs:5:13 | LL | let [_, ..., _] = [0, 1]; | ^^^ | | | not a valid pattern | help: for a rest pattern, use `..` instead of `...` error[E0308]: mismatched types --> $DIR/issue-70388-recover-dotdotdot-rest-pat.rs:6:33 | LL | let _recovery_witness: () = 0; | -- ^ expected `()`, found integer | | | expected due to this error: aborting due to 3 previous errors For more information about this error, try `rustc --explain E0308`. 
", "positive_passages": [{"docid": "doc-en-rust-a48c513119b7cc694de9083a41db19b09972c37e0e9d47da7e1e4326cafd2fc8", "text": "` The problem here is that we have instead of The compiler however points towards in the error message instead of the superfluous : Fun fact: when using four dots , it correctly points at the last dot :) $DIR/issue-120856.rs:1:37 | LL | pub type Archived = ::Archived; | ^ | | | use of undeclared crate or module `n` | help: a trait with a similar name exists: `Fn` error[E0433]: failed to resolve: use of undeclared crate or module `m` --> $DIR/issue-120856.rs:1:25 | LL | pub type Archived = ::Archived; | ^ | | | use of undeclared crate or module `m` | help: a type parameter with a similar name exists: `T` error: aborting due to 2 previous errors For more information about this error, try `rustc --explain E0433`. ", "positive_passages": [{"docid": "doc-en-rust-5a19f9d590d60df01202720f0e8260d5a9669c8113a21da25106b7198a3705df", "text": " $DIR/unconstructible-pub-struct.rs:30:12 | LL | pub struct T9 { | ^^ | note: the lint level is defined here --> $DIR/unconstructible-pub-struct.rs:2:9 | LL | #![deny(dead_code)] | ^^^^^^^^^ error: aborting due to 1 previous error ", "positive_passages": [{"docid": "doc-en-rust-9546ab9a79e7afb097786fe35f6ec641a95b6f3e21077a1cea80490e41620b4f", "text": "The struct is meant to not be constructible. Also the behavior is incoherent between tuple structs (correct behavior) and named structs (incorrect behavior). Also, this is a regression, this didn't use to be the case. No response No response\nlabel +F-never_Type +D-inconsistent\nWe will also get such warnings for private structs with fields of never type (): The struct is never constructed although it is not constructible. This behavior is expected and is not a regression. Did you meet any cases using this way in real world? 
For the incoherent behavior, we also have: This is because we only skip positional ZST (PhantomData and generics), this policy is also applied for never-read fields:\nCan you elaborate in which sense it is expected? I can understand that it is expected given the current implementation of Rust. It seems between stable and nightly a new lint was implemented. This lint is expected for inhabited types. However it has false positives on uninhabited types, i.e. types that are only meant for type-level usage (for example to implement a trait). I agree we should probably categorize this issue as a false positive of a new lint rather than a regression (technically). Thanks, I can see the idea behind it. I'm not convinced about it though, but it doesn't bother me either. So let's just keep this issue focused on the false positive rather than the inconsistent behavior.\nHow do you think about private structs with fields of never type?\nI would consider it the same as public structs, because they can still be \"constructed\" at the type-level. So they are dynamic dead-code but not dead-code since they are used statically. It would be dead-code though if the type is never used (both in dynamic code and static code, i.e. types).\nActually, thinking about this issue, I don't think this is a false positive only for the never type. I think the whole lint is wrong. The fact that the type is empty is just a proof that constructing it at term-level is not necessary for the type to be \"alive\" (aka usable). This could also be the case with non-empty types.
A lot of people just use unit types for type-level usage only (instead of empty types).", "commid": "rust_issue_128053", "tokennum": 509}], "negative_passages": []} {"query_id": "q-en-rust-a74ac354ae2d9b5f3dc63346db0c6ff4a68a42ad1ea35a23ca51f7141b1c63b2", "query": " error: struct `T9` is never constructed --> $DIR/unconstructible-pub-struct.rs:30:12 | LL | pub struct T9 { | ^^ | note: the lint level is defined here --> $DIR/unconstructible-pub-struct.rs:2:9 | LL | #![deny(dead_code)] | ^^^^^^^^^ error: aborting due to 1 previous error ", "positive_passages": [{"docid": "doc-en-rust-97adf45f9b9e264448b397ec08a8e2a265a07410a46c4023c53938df064ef2be", "text": "So my conclusion would be that linting a type to be dead-code because it is never constructed is brittle as it can't be sure that the type is not meant to be constructed at term-level (and thus only used at type-level). Adding is not a solution because if the type is actually not used at all (including in types) then we actually want the warning. Here is an example: The same thing with a private struct: And now actual dead code:\nI think a better way is: is usually used to do such things like just a proof. Maybe we can add a help for this case.\nBut I agree that we shouldn't emit warnings for pub structs with any fields of never type (and also unit type), this is an intentional behavior.\nOh yes good point. So all good to proceed with adding never type to the list of special types.\nThanks a lot for the quick fix! Once it hits nightly, I'll give it a try. I'm a bit afraid though that this won't fix the underlying problem of understanding which types are not meant to exist at runtime. For example, sometimes instead of using the never type, I also use empty enums such that I can give a name and separate identity to that type. Let's see if it's an issue. If yes, I guess ultimately there would be 2 options: Just use in those cases. 
Introduce a trait (or better name) for types that are not meant to exist at runtime like unit, never, and phantom types, but could be extended with user types like empty enums. I'll ping this thread again if I actually hit this issue.\nI could test it (using nightly-2024-08-01) and . I just had to change a to in a crate where I didn't have already. Ultimately, is going to be a type alias for so maybe it's not worth supporting it. And thinking about empty enums, this should never be an issue for my particular usage, which is to build empty types. Because either the type takes no parameters in which case I use an empty enum directly. Or it takes parameters and I have to use a struct with fields, and to make the struct empty I also have to add a never field. So my concern in the previous message is not an issue for me.", "commid": "rust_issue_128053", "tokennum": 495}], "negative_passages": []} {"query_id": "q-en-rust-a78dba9a7f0fb7fca362a5dd303f95372a800a2180f48300e409406523e2f1f2", "query": "let parse_only = matches.opt_present(\"parse-only\"); let no_trans = matches.opt_present(\"no-trans\"); let no_analysis = matches.opt_present(\"no-analysis\"); let no_rpath = matches.opt_present(\"no-rpath\"); let lint_levels = [lint::allow, lint::warn, lint::deny, lint::forbid];", "positive_passages": [{"docid": "doc-en-rust-a950a660acb91d70b8b4626ffdafe37172eb2873a306de813dafa2ee4d8f9db9", "text": "Currently (something after 0.5) rustc (IIUIC) is using rpaths to point libs and binaries to the appropriate locations. This is problematic when rustc and the other libs/bins are packaged e.g. for Fedora. The rpath usage should be made optional - can be used instead - and LD_LIBRARY_PATH. Why is this problematic for packaging? Is it because the rpath values end up with the wrong values? Each crate's rpath has several values, one of which is supposed to be the final installation location (controlled with ).
Can we just provide more control over which locations are used during rpathing?\nIn general Fedora recommends not using rpaths [0], because the paths are hardcoded in the binary. Fedora prefers to point the dynamic linker to the appropriate paths [1]. But yes, we are more flexible about internal libraries - which is the case here. So yes, it would also help us if we could control the rpath path (which goes a bit into the direction of issue ) [0] [1]\nThe following rpath is used in e.g. the cargo binary: The rpath refers to rpm's buildroot, where the lib won't stay (it will be installed to ) Maybe the problem could be addressed if the rpath would respect - but this could lead to problems at build time. Maybe LD_LIBRARY_PATH can be used to point to the correct dirs at build time, and the rpath can be used to point to the correct dirs at runtime. Just my 2ct - but you are the expert :)\nThanks for those links. We do generate rpaths with the expectation that some of them will be invalidated by moving the binaries around. It's possible we could consider removing rpath, but it sure is nice to not think about dynamic loading when running binaries out of my build directory. We could easily add a flag to the configure script, to turn off the rpath for all the rust-provided libraries, with the assumption that package managers would modify during install. If we did that I feel like we would still want the installed rustc to generate rpaths with user-built libraries - otherwise the experience would be different depending on the rust installation.
We could also just remove rpath completely - it doesn't work on windows anyway.", "commid": "rust_issue_5219", "tokennum": 512}], "negative_passages": []} {"query_id": "q-en-rust-a78dba9a7f0fb7fca362a5dd303f95372a800a2180f48300e409406523e2f1f2", "query": "let parse_only = matches.opt_present(\"parse-only\"); let no_trans = matches.opt_present(\"no-trans\"); let no_analysis = matches.opt_present(\"no-analysis\"); let no_rpath = matches.opt_present(\"no-rpath\"); let lint_levels = [lint::allow, lint::warn, lint::deny, lint::forbid];", "positive_passages": [{"docid": "doc-en-rust-5151270ffdeea50f64b201e1c8168f621692c15bd4b7ae2d4388d3b0d3a4f6d7", "text": "Another thing to consider is that, if we just used rustpkg for all builds, and let rustpkg install all libraries at a known location, then we might be able to conveniently not use rpath while still doing the right thing at load time. Can user directories like be put into ?\nYes - I can fully understand how easy running rust in the build directory is when using rpaths :) For the build directory use case it should be possible to create wrappers for rustc and friends which use to point to the correct dirs. You mention that some libs might reside in , could you explain the overall concept you are following with this? In general I'd expect libraries to appear in /lib, /usr/lib some and some private /usr/lib/myapp. There are also some xdg specs that specify and - but let us first see what you say about ~/.rustpkg. Oh, I don't think that ~/.rustpkg would be expanded correctly if put into\nJust to summarize my thoughts: If rpath is used, the rpath path being used should only use directories below (to point to non-standard dirs containing libs relevant for a rust component). The rpath path should not point to any build-time dir. To include libraries at build time or when using the binaries in the build directory I'd suggest using wrappers (isn't autotools doing it like this?)
which set appropriately.\nAny follow up thoughts?\nOur package manager will by default not be installing libraries to a system location and will instead put them in the user's directory, . This doesn't seem out of the ordinary to me; consider cabal, etc. The behavior you recommend requires some deep changes to how Rust works, so needs to be considered carefully. Does the current behavior prevent packaging in Fedora, and is there a minimal change we can make to unblock packaging? You'll probably be interested in this subject.\nThanks for the rustpkg explanation. Do I understand that rustpkg will create binaries and libs with rpaths pointing to the user's specific rustpkg dir ? My feeling is that these user-specific rpaths should be configurable at build time. The use case is obviously when the binary or lib is going to be installed systemwide. The only rpath exception is the rpath pointing to the system wide package specific libdir (e.g. ).", "commid": "rust_issue_5219", "tokennum": 536}], "negative_passages": []} {"query_id": "q-en-rust-a78dba9a7f0fb7fca362a5dd303f95372a800a2180f48300e409406523e2f1f2", "query": "let parse_only = matches.opt_present(\"parse-only\"); let no_trans = matches.opt_present(\"no-trans\"); let no_analysis = matches.opt_present(\"no-analysis\"); let no_rpath = matches.opt_present(\"no-rpath\"); let lint_levels = [lint::allow, lint::warn, lint::deny, lint::forbid];", "positive_passages": [{"docid": "doc-en-rust-7b543ce276287a523dcf1a37842437cfe9a628174cf6714aeeb2711fab0a1543", "text": "I think that I'll spend a bit more time with looking at and and might get a better feeling for the problem. Btw.: I noted that the libdir isn't configurable. Fedora for example keeps x86_64 libs in (/usr)?/lib64, and i686 in (/usr)/lib. IIUIC this is currently not possible with rust as the libs will be installed to /usr/lib/arch-specific-path\nThe point about libdir is well taken and I updated .
I am not entirely sure how rustpkg does its build now or how and intend for it to work, but based on the current language design I would expect that, yes, binaries installed to would have rpaths pointing there. Currently there are several rpaths configured for each linked library so that they hopefully fall back to something useful when the libraries are moved around during install or during the bootstrapping process, something like: The relative path to the library The absolute path to where the library was at build time The system installation path Under this scheme I would expect that would, when linking against libraries in , create rpaths to .\nYes, there are several rpaths used (seen in and ) and I'd say - even with looking at rust itself, without the packaging glasses on - that it should be configurable what rpaths are used. Maybe it should be an option of . As said, system wide binaries are the best example for binaries where you don't want too many rpaths - including no rpaths pointing to some user dir.\nPlease don't tag TJC :(\nSorry, TJC! Maybe it's time to give in and start working on Rust :) Seriously though, sorry. I'm sure it's a real PITA.\nmentioned that this is a security issue, which I didn't realize. Debian also says .\nBy the way, it's possible to strip this with but it would be much nicer to not require a third party tool.\nI generally wonder why you don't add a user-specific to to point to the user specific rustpkg dir instead of using rpath. That makes it much easier to cope with - and that would also be on par with (at least) the Fedora guidelines. And I suppose with Debian's too, as it's the common way. (See 3rd party java binaries.
They modify the to point to some hidden dir in the userdir.)\nWe used to.", "commid": "rust_issue_5219", "tokennum": 551}], "negative_passages": []} {"query_id": "q-en-rust-a78dba9a7f0fb7fca362a5dd303f95372a800a2180f48300e409406523e2f1f2", "query": "let parse_only = matches.opt_present(\"parse-only\"); let no_trans = matches.opt_present(\"no-trans\"); let no_analysis = matches.opt_present(\"no-analysis\"); let no_rpath = matches.opt_present(\"no-rpath\"); let lint_levels = [lint::allow, lint::warn, lint::deny, lint::forbid];", "positive_passages": [{"docid": "doc-en-rust-bde3b6ae3e98b475a33f9a1a8a3307672213332add48ee4a6c55c0b0f17136", "text": "Users kept forgetting and/or getting annoyed having to adjust it when running from a build dir. Rpaths made it possible to run foo/bar/rustc no matter which rustc you chose.\nright. IIUIC - I always feel like a noob when it comes to autotools - autotools are generating wrappers for binaries which are created during a build process and move the original binaries to hidden directories. The wrappers are then modifying the environment (setting env vars etc) to achieve the same goal (so e.g. that rustc can be run from the dev directory without messing with the env vars yourself). I wonder if this could help.\nOkay, it's actually not autotools which creates these wrappers but libtool:\nSorry, I didn't mean to imply a disinterest in fixing this better, just noting how it came to be. Also strongly disinterested in using libtool. Would prefer going back to telling users to set env vars.\nHow about defaulting to static linking (when it works)? Dynamic linking is valuable, but mostly for global installs handled by a package manager. Of course, if we end up with a bunch of tools using that would be wasteful. I think RPATH will just lead to confusion, and it's an easy security hole to leave open.\nI would be very happy to get static linking working well enough to make it a default in many such cases.
But we ought to do the right thing when linking dynamically too. Is rpath-using- considered acceptable? I.e. is it just the absolute paths that are problematic?\nThe absolute paths are the ones that can easily be a security issue, so isn't that bad. I would still rather avoid it for a globally installed package but it's easy enough to strip for people who don't need it and won't be an issue if you don't know about it (like the absolute path could be). Using the rpath could be a flag, but the absolute path should never be there.\nNominating for milestone 2: backwards compatible. (The choice of whether one needs to put a setting in or use some configure flag for certain systems, etc, seems like a backwards compatibility issue. Though maybe I misunderstand and these are issues only for people developing itself, not for clients of ?) It also could well be that this is more appropriately associated with milestone 1: well-defined.", "commid": "rust_issue_5219", "tokennum": 524}], "negative_passages": []} {"query_id": "q-en-rust-a78dba9a7f0fb7fca362a5dd303f95372a800a2180f48300e409406523e2f1f2", "query": "let parse_only = matches.opt_present(\"parse-only\"); let no_trans = matches.opt_present(\"no-trans\"); let no_analysis = matches.opt_present(\"no-analysis\"); let no_rpath = matches.opt_present(\"no-rpath\"); let lint_levels = [lint::allow, lint::warn, lint::deny, lint::forbid];", "positive_passages": [{"docid": "doc-en-rust-e2ab7893880a27d6b67cf522b730616f8bf2b057dd45e0e57f81fe2174c90e4c", "text": "I suspect and have a more concrete notion about this.\nI don't have anything much to add to this despite it coming up for triage today. I think it has to be fixed still, I'm still happy to do it via whatever combination of static linking by default and use of is sufficient for packagers, and I haven't seen any clear movement on it in the meantime.\naccepted for backwards-compatible milestone\nI'm getting worried that we won't resolve this for 1.0.
I agree this is important to fix. Here's my current suggestion: Modify the build system to once again set LD_LIBRARY_PATH/PATH to find the appropriate dynamic libraries. Add a flag to rustc to disable rpath Add a configure flag to disable rpathing the rust build itself. Packagers can use this to do whatever they want. With this logic the rust package itself could avoid rpath where distros have their own way of dealing with library paths. The rustc installed by these distros would still be rpathing user's crates for the sake of convenience. Is this an acceptable compromise or do distros want us to completely disable rpath? I'm afraid that doing so would provide an inconsistent developer experience depending on how rustc is acquired; if we need to do that then we instead must find a way to completely eliminate rpath.\nIf we start to just statically link everything by default then rpath becomes much less important and we can just say it's up to users to figure it out if they opt into dynamic linking.\nI think we should completely remove absolute rpaths. It's okay to have $ORIGIN-based ones but it may not be a good default. I'm not sure what the answer is to that, but agree that static linking basically resolves this.\nIt doesn't appear that this is fully working. There are still rpaths in the binaries when configured with even though the flag for rustc does work. /usr/bin/rustc: RPATH=$ORIGIN/../lib:/build/rust-git/src/rust/x86_64-unknown-linux-gnu/stage1/lib/rustlib/x86_64-unknown-linux-gnu/lib:/usr/lib/rustlib/x86_64-unknown-linux-gnu/lib\nWhat steps did you follow to have the rpath show up? I thought I double checked on linux that the configure flag worked, but I may have overlooked something.", "commid": "rust_issue_5219", "tokennum": 541}], "negative_passages": []} {"query_id": "q-en-rust-a7a0e47a62b72c6be00e604fea5a87e04d07904872750479fd6ded004bf8d993", "query": "Note that the `target_env = \"p1\"` condition first appeared in Rust 1.80.
Prior to Rust 1.80 the `target_env` condition was not set. ## Enabled WebAssembly features The default set of WebAssembly features enabled for compilation is currently the same as [`wasm32-unknown-unknown`](./wasm32-unknown-unknown.md). See the documentation there for more information. ", "positive_passages": [{"docid": "doc-en-rust-9a994ebf459940df871763a70de91670c66bd2d2064028ac59873b47fceaf38d", "text": "This can be replicated with a minimal : Compile it to WASM with Then, check if the generated WASM uses reference types: $DIR/issue-50439.rs:25:22 | LL | let _ = [(); 0 - !!( as ReflectDrop>::REFLECT_DROP) as usize]; | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | = note: this may fail depending on what value the parameter takes error: aborting due to previous error ", "positive_passages": [{"docid": "doc-en-rust-f64793285949e06fa320af40588c55d72e9c3a09325e93c79fc772fefb28e22a", "text": "Testcase: I was mostly fooling around trying to see if there was any way I could get rustc to believe that it would always know whether Bears use crate::sync::atomic::{AtomicBool, Ordering}; use crate::sys::cvt; use crate::sys_common::AsInner;", "positive_passages": [{"docid": "doc-en-rust-aabd150f83489b1935f382030f552306221c0bba742003aab84895c06fe936b1", "text": "Good question! There are 6 workarounds in for older Linux versions (that I could find). Increasing the minimum version to 2.6.32 (aka 3rd Kernel LTS, aka RHEL 6) would fix 5 of them. Code links are inline: uses the flag with (mentioned above, in 2.6.23) uses with ( in 2.6.24, there is also the mention of a bug occuring on \"some linux kernel at some point\") uses to atomically set the flag on the pipe fds ( in 2.6.27) uses with ( in 2.6.27) uses to permit use of ( in 2.6.28) ~ ( in 4.5, not fixed by this proposal)~ As you can see, the workarounds fixed by this proposal all have a similar flavor. 
Originally posted by in\nKernel updated in .\nAs noted, needs a much newer kernel 4.5, so we'll still need that fallback. But all the stuff should be ok to assume available now.\nThe new minimal version is 2.6.27 from what I can see, so \"Socket::accept uses accept4(2) to permit use of SOCK_CLOEXEC\" still needs a fallback, right?\nThe builder is using 2.6.32 headers, so I would call that our new minimum.\nAh right; I am actually not sure where I got the 27 from...\nThe title of suggested 2.6.27, but we went a little higher to match the actual distros in use.\nI know this is ancient history, but do you recall the problem with ? You introduced that comment about \"some kernel\" in , and you were the last to update it in . If the problem kernels were before 2.6.32, then I'd like to clean that up now.\nJust digging in kernel history, I did find one hiccup in , but that bug was introduced and resolved during the 3.7 development cycle, not in a release.\nIIRC we just mirrored what musl did at the time, which looks like it : Looks like the original mailing list thread is gone and there's no comments in the musl source, and I don't recall what happened here. 
I suspect it would be safe to remove those workarounds and just assume works on the first try until someone says differently.", "commid": "rust_issue_74519", "tokennum": 498}], "negative_passages": []} {"query_id": "q-en-rust-a8acc6d397a816f203d51bfb423b56ec221b84b5693cd0ce20406de0dbb61aec", "query": "A(i32), B(i32), } // CHECK: %Enum4 = type { [2 x i32] } // CHECK: %Enum4 = type { [0 x i32], i32, [1 x i32] } // CHECK: %\"Enum4::A\" = type { [1 x i32], i32, [0 x i32] } pub enum Enum64 { A(Align64), B(i32), } // CHECK: %Enum64 = type { [16 x i64] } // CHECK: %Enum64 = type { [0 x i32], i32, [31 x i32] } // CHECK: %\"Enum64::A\" = type { [8 x i64], %Align64, [0 x i64] } // CHECK-LABEL: @align64", "positive_passages": [{"docid": "doc-en-rust-c5dbf7516b51b3fa4a40718b99ab8fabf0a7603d27e7b671b56e7a96f282dc61", "text": "In the 2017-12-14 Nightly I've seen a significant (almost 20%) compilation-time increase when I use on my code. And the following day (2017-12-15) the compilation got a bit slower still and the run-time of the binary also got about 8% slower. Over the following days the situation kept getting slightly worse. Extracting parts of my code to show compiler/code performance is not easy, so I've composed this benchmark function (that solves an Euler problem.
If this code isn't enough I will write other similar benchmarks): // CHECK: %Enum4 = type { [2 x i32] } // CHECK: %Enum4 = type { [0 x i32], i32, [1 x i32] } // CHECK: %\"Enum4::A\" = type { [1 x i32], i32, [0 x i32] } pub enum Enum64 { A(Align64), B(i32), } // CHECK: %Enum64 = type { [16 x i64] } // CHECK: %Enum64 = type { [0 x i32], i32, [31 x i32] } // CHECK: %\"Enum64::A\" = type { [8 x i64], %Align64, [0 x i64] } // CHECK-LABEL: @align64", "positive_passages": [{"docid": "doc-en-rust-71a34b93847c29253a567b7e18242b14031f3e6c1bb470ef28fd1ab835d6c331", "text": "If you could provide an example that exhibits the slowdown with , that would allow us to fix it (is the Dec 13 nightly faster than the Dec 14 nightly which has the same speed, but worse runtime performance, than the Dec 15 nightly?)\nThe last nightly (2017-12-21) has fixed all three problems. Despite some attempts I failed finding a small amount of code that shows a significant compilation slowdown from 2017-12-14 to 2017-12-20, perhaps it's a more diffuse problem that emerges from compiling my whole big amount of code. I am sorry. I don't fully understand your question, but I try to answer creating a table with some data for some of the last Nightlies: Rustc version --emit=metadata Opt build Binary size Run-time 1.24.0-nightly 2017-12-11 4.32 49.78 12.93 1.24.0-nightly 2017-12-12 4.31 48.50 12.84 1.24.0-nightly 2017-12-13 5.15 50.69 12.86 1.24.0-nightly 2017-12-15 5.33 52.98 14.04 1.24.0-nightly 2017-12-17 5.44 53.31 14.03 1.24.0-nightly 2017-12-20 5.47 54.49 14.14 1.24.0-nightly 2017-12-21 4.28 48.82 12.96 The --emit=metadata is a check time. The Opt build is the compilation time for a well optimized build. The binary size is stripped and in bytes. The times are all in seconds.\nEDIT: told me about , so the range is .... EDIT2: at a quick glance, none of those changes should affect binary size or run time. How many times have you run each of these? 
How much do they vary?", "commid": "rust_issue_46897", "tokennum": 405}], "negative_passages": []} {"query_id": "q-en-rust-a8acc6d397a816f203d51bfb423b56ec221b84b5693cd0ce20406de0dbb61aec", "query": "A(i32), B(i32), } // CHECK: %Enum4 = type { [2 x i32] } // CHECK: %Enum4 = type { [0 x i32], i32, [1 x i32] } // CHECK: %\"Enum4::A\" = type { [1 x i32], i32, [0 x i32] } pub enum Enum64 { A(Align64), B(i32), } // CHECK: %Enum64 = type { [16 x i64] } // CHECK: %Enum64 = type { [0 x i32], i32, [31 x i32] } // CHECK: %\"Enum64::A\" = type { [8 x i64], %Align64, [0 x i64] } // CHECK-LABEL: @align64", "positive_passages": [{"docid": "doc-en-rust-76c6f80f0cd754bc3f272d9970b9f00ced35debc6880730135d6a594c9889069", "text": "They should be: rustc 1.24.0-nightly ( 2017-12-20) binary: rustc commit-hash: commit-date: 2017-12-20 host: x86_64-pc-windows-gnu release: 1.24.0-nightly LLVM version: 4.0 rustc 1.24.0-nightly ( 2017-12-21) binary: rustc commit-hash: commit-date: 2017-12-21 host: x86_64-pc-windows-gnu release: 1.24.0-nightly LLVM version: 4.0 Mixing up things seems quite easy here.\nAh, that's why I asked, so the range is .... Still not seeing anything clearly relevant, but maybe someone else from does?\nThe --emit=metadata times are computed as the minimum of four tries. Their variance is very small (on the order of +-0.05) so I think they are reliable. The binary run-time is the minimum of 5 runs; their variance is higher, perhaps because they do some I/O, despite using an SSD, about +-0.12. The compilation is slow, so there I've measured only once, so those values are the least reliable, but differences above 1.5 seconds should be reliable.\nFor the compilation time issue, maybe there was a bug in NLL that made it active without it being enabled. For the run-time issue, we need to actually investigate this.\nAs in, investigate the cause of the run-time fluctuations and how it unregressed?\nThis is my good old friend the torpedo.
LLVM optimizes into this: This is basically equivalent to this pseudo-Rust code. After the "non-constant-address" write is generated by sinking, LLVM has no way of getting rid of the non-constant address, and no way of optimizing accesses to the target with that non-constant address.\nUpstream LLVM (that's the yet-to-be-released LLVM 6) has this patch, which avoids the SimplifyCfg sink problem by not running sinking before inlining. I have confirmed that - e.g. this code is optimized correctly with the new . Until then, I think representing small enums using a struct (rather than an array) should work around this problem.", "commid": "rust_issue_46897", "tokennum": 501}], "negative_passages": []} {"query_id": "q-en-rust-a8acc6d397a816f203d51bfb423b56ec221b84b5693cd0ce20406de0dbb61aec", "query": "A(i32), B(i32), } // CHECK: %Enum4 = type { [2 x i32] } // CHECK: %Enum4 = type { [0 x i32], i32, [1 x i32] } // CHECK: %\"Enum4::A\" = type { [1 x i32], i32, [0 x i32] } pub enum Enum64 { A(Align64), B(i32), } // CHECK: %Enum64 = type { [16 x i64] } // CHECK: %Enum64 = type { [0 x i32], i32, [31 x i32] } // CHECK: %\"Enum64::A\" = type { [8 x i64], %Align64, [0 x i64] } // CHECK-LABEL: @align64", "positive_passages": [{"docid": "doc-en-rust-743e618636739d75ca6a1770f993451b2f5ec4998638be4931a5b07edd86b4c2", "text": "With the 2017-12-22 Nightly all the problems are back: Rustc version --emit=metadata Opt build Binary size Run-time 1.24.0-nightly 2017-12-22 5.36 53.57 13.96\nCould you open a PR that turns small unions into LLVM structs (rather than arrays)? That should fix the problem.\nThe binaries generated by the last Nightly (1.24.0-nightly 2017-12-25) are a further few percent slower (perhaps because of thinLTO?), and the compilation times can't be compared now.\nIs there a more stable measure besides compile time / run time? They can be affected too easily by the environment, especially when the run time is expected to take less than 1 second.
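The "non-constant-address" write described above can be sketched in safe Rust. This is a hedged reconstruction (the original pseudo-Rust snippet was lost in extraction); the names `Pair`, `before_sinking`, and `after_sinking` are invented for illustration.

```rust
// SimplifyCFG's sinking merges two stores to distinct fields into a single
// store through a data-dependent address; that is the "non-constant
// address" LLVM then cannot see through (e.g. for SROA).
pub struct Pair {
    pub a: i32,
    pub b: i32,
}

pub fn before_sinking(p: &mut Pair, cond: bool, x: i32) {
    // Two stores, each to a constant offset within `Pair`.
    if cond {
        p.a = x;
    } else {
        p.b = x;
    }
}

pub fn after_sinking(p: &mut Pair, cond: bool, x: i32) {
    // One store through an address chosen at runtime: semantically the
    // same, but the write target is no longer a compile-time constant.
    let slot = if cond { &mut p.a } else { &mut p.b };
    *slot = x;
}
```

Both functions compute the same result; the point of the comment above is that the second shape blocks later LLVM optimizations.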
The generated assembly size seems to be a good measure.\nPlease re-test next nightly and reopen if not fixed.\nThe run-time of the binary is now OK. The compilation time with --emit=metadata is still quite large.\nCan you open a separate one for compilation times? Seems to be a separate issue.\n, I've tried here:", "commid": "rust_issue_46897", "tokennum": 237}], "negative_passages": []} {"query_id": "q-en-rust-a8acc6d397a816f203d51bfb423b56ec221b84b5693cd0ce20406de0dbb61aec", "query": "A(i32), B(i32), } // CHECK: %Enum4 = type { [2 x i32] } // CHECK: %Enum4 = type { [0 x i32], i32, [1 x i32] } // CHECK: %\"Enum4::A\" = type { [1 x i32], i32, [0 x i32] } pub enum Enum64 { A(Align64), B(i32), } // CHECK: %Enum64 = type { [16 x i64] } // CHECK: %Enum64 = type { [0 x i32], i32, [31 x i32] } // CHECK: %\"Enum64::A\" = type { [8 x i64], %Align64, [0 x i64] } // CHECK-LABEL: @align64", "positive_passages": [{"docid": "doc-en-rust-f020239427631201084cbb7329b788b424a92488f42a8225f64d9f726512fac8", "text": "I was looking at the generated asm for to see what happens to the array, and was surprised to see a large function with 5(!) call sites for . The generated code doesn't seem to vary with the number of elements in the array. That got me to compare with the functionally equivalent code: which doesn't yield the same assembly at all. The first difference being that the latter doesn't even store the array (only the strs) while the former does. That got me even further in comparing with equivalent loops, up to the extreme (and stupid): vs. The latter generates the simplest code possible: The former not so much:\nDocumenting my thinking in the hopes that it will be useful, or at least that someone will correct me via Campbell's Law. We know that the ASM outputs are different, so let's go back a few steps until we can find where in the toolchain we want to add our improvement. 
I compiled this on 1.19 with , but the important thing to note is that while iterany is quite bloated, it looks like the for loop has already been optimized: So is just as trivial in LLVM bytecode as it is in assembler - whatever optimizations we needed to do have already happened. We need to go back farther. . Notice that the tables have turned! is more concise than , so the code we need to add should change the MIR output. Note that . So the problem here seems to be that we aren't inlining before we generate LLVM bytecode. But that answer doesn't satisfy me either, because neither the nor the has any calls to , and in fact . So... one of these things is happening: we're failing to inline , and we haven't noticed because LLVM's been inlining this code for us. LLVM doesn't have as much knowledge as we do, so it can't do as good a job as we theoretically could. we've been looking at unoptimized MIR this whole time. This hypothesis seems to be borne out by the fact that when I rerun rustc without optimizations, I get exactly the same MIR output. But , and ? 
It's possible that both of these hypotheses are correct; maybe this is unoptimized MIR, because the MIR optimizations are doing absolutely nothing to this file!", "commid": "rust_issue_43517", "tokennum": 501}], "negative_passages": []} {"query_id": "q-en-rust-a8acc6d397a816f203d51bfb423b56ec221b84b5693cd0ce20406de0dbb61aec", "query": "A(i32), B(i32), } // CHECK: %Enum4 = type { [2 x i32] } // CHECK: %Enum4 = type { [0 x i32], i32, [1 x i32] } // CHECK: %\"Enum4::A\" = type { [1 x i32], i32, [0 x i32] } pub enum Enum64 { A(Align64), B(i32), } // CHECK: %Enum64 = type { [16 x i64] } // CHECK: %Enum64 = type { [0 x i32], i32, [31 x i32] } // CHECK: %\"Enum64::A\" = type { [8 x i64], %Align64, [0 x i64] } // CHECK-LABEL: @align64", "positive_passages": [{"docid": "doc-en-rust-976406cbe00b3d9294b8f3fb8e40906ac4c6144c811c5de0940272aaeebf0061", "text": "I'm still cloning rustc, but once I do I'll comment out the MIR optimizations and see where that gets us.\nWell, I can oblige that in one aspect :) You put far too much hope into MIR optimizations. They do very little right now—inlining, for example, isn't even enabled by default (only with ). Even in the future, when all the MIR optimizations we've ever wanted are implemented, they will only cover a small subset of what LLVM does (there's no point in duplicating all the work that went into LLVM, and that would be impractical anyway). This code looks sufficiently messy that I don't expect MIR optimizations to ever fully optimize it away. For example, that would probably require loop unrolling, which hasn't ever been proposed AFAIK (and if it was, I'd be skeptical of it). In any case, the more interesting question is why LLVM can't optimize the iterator version as well as the alternative. Whatever goes awry in optimizing this, it happens in one of LLVM's many passes.\nThe LLVM IR you posted is very weird. It has multiple branches on constants, and presumably dead code consequently. 
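The comparison being debugged in this thread is between `Iterator::any` and a hand-written loop. The exact source in the issue was elided, so this sketch only mirrors its shape; the two formulations are semantically equivalent, and the complaint is about the machine code LLVM produces for each, not about their results.

```rust
// Equivalent ways to ask "does the slice contain this string?".
pub fn contains_iter(haystack: &[&str], needle: &str) -> bool {
    // Internal-style formulation via the `any` adapter.
    haystack.iter().any(|&s| s == needle)
}

pub fn contains_loop(haystack: &[&str], needle: &str) -> bool {
    // Hand-written early-exit loop.
    for &s in haystack {
        if s == needle {
            return true;
        }
    }
    false
}
```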
There's also several branches on values?!? These should all have been cleaned up. After running it through again, I get this IR (names are all messed up cause I did it on , which insists on applying C++ name demangling): // CHECK: %Enum4 = type { [2 x i32] } // CHECK: %Enum4 = type { [0 x i32], i32, [1 x i32] } // CHECK: %\"Enum4::A\" = type { [1 x i32], i32, [0 x i32] } pub enum Enum64 { A(Align64), B(i32), } // CHECK: %Enum64 = type { [16 x i64] } // CHECK: %Enum64 = type { [0 x i32], i32, [31 x i32] } // CHECK: %\"Enum64::A\" = type { [8 x i64], %Align64, [0 x i64] } // CHECK-LABEL: @align64", "positive_passages": [{"docid": "doc-en-rust-24aac01a907939aeafe1226089a1532310b5af4cc1e0f9a74499351cbc1464b3", "text": "In more detail, LLVM 3.9 generates the following small code for ( metadata stripped - it doesn't matter, with this function substituted in both 3.9 and 4.0 manage to optimize everything out): While LLVM 4.0 manages to screw up and generate this monstrosity: I think the somehow prevents SROA.\nGoing closer to the root: LLVM 3.9 optimizes the closure in to this: While LLVM 4.0 for some reason generates this:\ncaused by\nThe initial nonequivalence in the filing of the bug report is due to the slice iterator's explicit manual unrolling by 4 in its implementation of any. That might seem bad but as soon as llvm learns to loop optimize a loop with two exits (either the element was found or the iterator reached the end), the explicit unrolling can be removed.\nis the PR that fixed this intended to be a permanent solution, or is it just a workaround? IOW, should we open a new bug to track reevaluating this fix when LLVM is resolved?\nHmmm, mentioned this bug, but you're right, this is older so me \"fixing it\" is probably more coincidental than anything else.\nThat LLVM bug is now tracked at and appears to have had no motion on it in the past 5 years. 
Assembly in the initial comparison cases remains identical per No other changes, as far as I can tell.\nThis was nominally resolved on our end, but was left open in the possibility that the LLVM issue was going to be resolved, so as to remove the \"temporary\" workaround. However, if we wish to track LLVM-side bugs that are not currently an impact on us, but may allow upgrades later, then I believe we may wish to consider tracking them in another form than an issue on the GH tracker, as they are \"not an issue\", so to speak. Alternatively, if we want to track them on this tracker, we may wish to track them in a more organized fashion (e.g. a metabug). I am going to close this issue. Do feel free to reopen this if it regresses again, or it becomes otherwise currently actionable, or if there is a particular reason we must have an open issue on our end to track this LLVM issue for potential improvements as opposed to any of the other currently-open LLVM issues we could be potentially waiting on in order to make an improvement.", "commid": "rust_issue_43517", "tokennum": 530}], "negative_passages": []} {"query_id": "q-en-rust-a8acc6d397a816f203d51bfb423b56ec221b84b5693cd0ce20406de0dbb61aec", "query": "A(i32), B(i32), } // CHECK: %Enum4 = type { [2 x i32] } // CHECK: %Enum4 = type { [0 x i32], i32, [1 x i32] } // CHECK: %\"Enum4::A\" = type { [1 x i32], i32, [0 x i32] } pub enum Enum64 { A(Align64), B(i32), } // CHECK: %Enum64 = type { [16 x i64] } // CHECK: %Enum64 = type { [0 x i32], i32, [31 x i32] } // CHECK: %\"Enum64::A\" = type { [8 x i64], %Align64, [0 x i64] } // CHECK-LABEL: @align64", "positive_passages": [{"docid": "doc-en-rust-766e293c61eaaa855df526f09483bc4da2ecbf0ebea74614fda27bb69f530343", "text": "The LLVM issue has long since been fixed, so it's possible that the workaround may be removed.\nI think any workarounds here are long gone, since moved to internal iteration way back in", "commid": "rust_issue_43517", "tokennum": 43}], "negative_passages": 
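The "internal iteration" mentioned at the end of the thread (which made the slice iterator's manual unrolling in `any` removable) can be sketched on top of `try_fold`. This mirrors the idea, not the standard library's exact code.

```rust
// `any` written via `try_fold`: the iterator drives the loop itself
// ("internal iteration"), and the `None` short-circuit provides the
// second exit (element found) without a hand-unrolled loop.
pub fn any_via_try_fold<I, F>(mut iter: I, mut pred: F) -> bool
where
    I: Iterator,
    F: FnMut(I::Item) -> bool,
{
    iter.try_fold((), |(), item| if pred(item) { None } else { Some(()) })
        .is_none()
}
```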
[]} {"query_id": "q-en-rust-a8e21958f0ec6610380c4129acbb2817b2ea7b217afb9563e4e9fb14b3843957", "query": "}, ); // Construct the list of in-scope lifetime parameters for async lowering. // We include all lifetime parameters, either named or \"Fresh\". // The order of those parameters does not matter, as long as it is // deterministic. if let Some((async_node_id, _)) = async_node_id { let mut extra_lifetime_params = this .r .extra_lifetime_params_map .get(&fn_id) .cloned() .unwrap_or_default(); for rib in this.lifetime_ribs.iter().rev() { extra_lifetime_params.extend( rib.bindings .iter() .map(|(&ident, &(node_id, res))| (ident, node_id, res)), ); match rib.kind { LifetimeRibKind::Item => break, LifetimeRibKind::AnonymousCreateParameter { binder, .. } => { if let Some(earlier_fresh) = this.r.extra_lifetime_params_map.get(&binder) { extra_lifetime_params.extend(earlier_fresh); } } _ => {} } } this.r .extra_lifetime_params_map .insert(async_node_id, extra_lifetime_params); } this.record_lifetime_params_for_async(fn_id, async_node_id); if let Some(body) = body { // Ignore errors in function bodies if this is rustdoc", "positive_passages": [{"docid": "doc-en-rust-d920298abf7760b787a4e565d974d50dfd740c81766d2266f3334b649deb6dc5", "text": "Example: yields // Construct the list of in-scope lifetime parameters for async lowering. // We include all lifetime parameters, either named or \"Fresh\". // The order of those parameters does not matter, as long as it is // deterministic. if let Some((async_node_id, _)) = async_node_id { let mut extra_lifetime_params = this .r .extra_lifetime_params_map .get(&fn_id) .cloned() .unwrap_or_default(); for rib in this.lifetime_ribs.iter().rev() { extra_lifetime_params.extend( rib.bindings .iter() .map(|(&ident, &(node_id, res))| (ident, node_id, res)), ); match rib.kind { LifetimeRibKind::Item => break, LifetimeRibKind::AnonymousCreateParameter { binder, .. 
} => { if let Some(earlier_fresh) = this.r.extra_lifetime_params_map.get(&binder) { extra_lifetime_params.extend(earlier_fresh); } } _ => {} } } this.r .extra_lifetime_params_map .insert(async_node_id, extra_lifetime_params); } this.record_lifetime_params_for_async(fn_id, async_node_id); if let Some(body) = body { // Ignore errors in function bodies if this is rustdoc", "positive_passages": [{"docid": "doc-en-rust-b216d664ce4ec915380dcc20e837c9f70be9575b68c08f70e28c99ec44be2bf5", "text": "I'm seeing an internal compiler error on the following input (found by ):\nReport from : regressed nightly: nightly-2020-02-07\nIt looks like caused the regression. (The error happens at but not at .) cc\nThe specific commit where the error starts happening is .\ntriage: P-high. Removing nomination. Added regression stable-to-nightly tag.\nFixed by", "commid": "rust_issue_69401", "tokennum": 80}], "negative_passages": []} {"query_id": "q-en-rust-a91bc3d246bab1431b9daf3ae63e6a44f1505d656b423fe8b965b25425930409", "query": "self.truncate(0) } /// Returns the number of elements in the vector. /// Returns the number of elements in the vector, also referred to /// as its 'length'. /// /// # Examples ///", "positive_passages": [{"docid": "doc-en-rust-3998b0fbbd812688a7ae25ca2bc53324b97397a63353e3872c34f27bda93c85f", "text": "This ensures that someone looking at the documentation who searches for the word \"length\" to find the method which returns the length of the vector will be able to find the method, and also explains the origin of the name if someone is unaware. I bring this up because one of my coworkers came to me asking for the name of the method, because they were having trouble finding it in the documentation.\nThe problem with this is that it's generally bad to include this in the summary, as it's just repeating what's already been said. So I didn't do this on purpose. I'm not sure what exact text rustdoc searches though.... 
I'd be against putting this in the summary, but for putting it in the longer description\nWe seem to be very inconsistent about what summaries to use for methods: Some say \"length\" and some say \"number of elements\". Perhaps changing them all to say something like \"Returns the number of elements in / length of the slice.\" would help here.\nIt doesn't need to be in the summary(?), it can be in the body of the doc. already finds and and so on, so the focus is simply in Ctrl+F search inside the doc page itself (if I understand correctly?).", "commid": "rust_issue_37866", "tokennum": 269}], "negative_passages": []} {"query_id": "q-en-rust-a949cfbdfa1b11238bcdee346975aebf902eb486bd4dc460ee8047c2098fed63", "query": " fn main() { transmute(); // does not ICE } extern \"rust-intrinsic\" fn transmute() {} //~^ ERROR intrinsic has wrong number of type parameters: found 0, expected 2 //~| ERROR intrinsics are subject to change //~| ERROR intrinsic must be in `extern \"rust-intrinsic\" { ... }` block ", "positive_passages": [{"docid": "doc-en-rust-873bf10379b4caf187ed69a684f0180f6cd1d81ef25e103ed58fe8de2dca7fde", "text": " ### Location ### Summary ", "positive_passages": [{"docid": "doc-en-rust-72f0f3b5408c33a22719773a2806572e906cb640818c4bb0d93411e1017517a8", "text": "This is unfortunate because: it gives the impression that documentation fixes should not be submitted here, or are not welcome; it prevents you from gathering commonly requested information (link to docs, rust version number, suggested fixes) in an automatic fashion; and it prevents automatic labelling for documentation-relevant issues. To fix this, . 
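The `Vec::len` documentation change discussed earlier ("the number of elements, also referred to as its 'length'") can be exercised directly; the helper below is a trivial illustration, not code from the issue.

```rust
// `len` returns the number of elements currently in the vector, i.e. its
// "length"; this is independent of `capacity`, the allocated room.
pub fn length_and_capacity(v: &Vec<i32>) -> (usize, usize) {
    (v.len(), v.capacity())
}
```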
$DIR/account-for-lifetimes-in-closure-suggestion.rs:13:22 | LL | Thing.enter_scope(|ctx| { | --- | | | has type `TwoThings<'_, '1>` | has type `TwoThings<'2, '_>` LL | SameLifetime(ctx); | ^^^ this usage requires that `'1` must outlive `'2` | = note: requirement occurs because of the type `TwoThings<'_, '_>`, which makes the generic argument `'_` invariant = note: the struct `TwoThings<'a, 'b>` is invariant over the parameter `'a` = help: see for more information about variance error: aborting due to 1 previous error ", "positive_passages": [{"docid": "doc-en-rust-7bdc2a1076403b14b3b7ffe6e2607f626dc344c6eab1abe343328494516dcb5c", "text": " $DIR/account-for-lifetimes-in-closure-suggestion.rs:13:22 | LL | Thing.enter_scope(|ctx| { | --- | | | has type `TwoThings<'_, '1>` | has type `TwoThings<'2, '_>` LL | SameLifetime(ctx); | ^^^ this usage requires that `'1` must outlive `'2` | = note: requirement occurs because of the type `TwoThings<'_, '_>`, which makes the generic argument `'_` invariant = note: the struct `TwoThings<'a, 'b>` is invariant over the parameter `'a` = help: see for more information about variance error: aborting due to 1 previous error ", "positive_passages": [{"docid": "doc-en-rust-ea6c2f742466aca108571f2c909618d07c310be9bf346fc494cfcf755c7dcffc", "text": ": // Copyright 2017 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. 
// compile-flags: -D warnings -D unknown-lints #![allow(unknown_lints)] #![allow(random_lint_name)] fn main() {} ", "positive_passages": [{"docid": "doc-en-rust-0c01f8b386510b5a6fbed027ba17dadb6ac102acc71a2f78f7915ba6b4a839d2", "text": "The following snippet works fine before but now it reports a warning: Steps to reproduce: clone rustup override set nightly-2017-08-11 cargo test Is this an expected behavior, or a bug? Thank you!\ncc -- probably related to the new lint implementation\nYes this was caused by The bug is on . The check for whether is in scope is not checking the current set of attributes being linted. For example this does not warn: but this warns: (as you've noticed) The fix is to just take the local variable into account which has the up-to-that-point set of lint attributes collected.\nI've opened a PR for this at", "commid": "rust_issue_43809", "tokennum": 142}], "negative_passages": []} {"query_id": "q-en-rust-aa9434791fd797a7dac5db1c568f2ec9ba6a93bed0976eb3ea7e23c2b97663fe", "query": "#[inline] fn plain_summary_line(s: Option<&str>) -> String { let md = markdown::plain_summary_line(s.unwrap_or(\"\")); shorter(Some(&md)).replace(\"n\", \" \") let line = shorter(s).replace(\"n\", \" \"); markdown::plain_summary_line(&line[..]) } fn document(w: &mut fmt::Formatter, cx: &Context, item: &clean::Item) -> fmt::Result {", "positive_passages": [{"docid": "doc-en-rust-c8c31d0935bfad6453ca56a52ce12aeae4d8a710911d54dfc638be37cf0f3959", "text": "A documentation comment like the one cited verbatim below, produces a main page summary that includes the first line of the code in the example. This is very surprising since there's an empty line in between, and the code example inserts a large colored box in the overview page (the parent module). cc since it's probably related to PR /// A tuple or fixed size array that can be used to index an array. 
/// /// /// /// Note the blanket implementation that's not visible in rustdoc: /// !", "commid": "rust_issue_31899", "tokennum": 121}], "negative_passages": []} {"query_id": "q-en-rust-aaa6332a71184df7e8282580e65350e96539dc7cfbce7c1960d9663058d7eb97", "query": "}; debug!(\"make_shim({:?}) = untransformed {:?}\", instance, result); pm::run_passes( // We don't validate MIR here because the shims may generate code that's // only valid in a reveal-all param-env. However, since we do initial // validation with the MirBuilt phase, which uses a user-facing param-env. // This causes validation errors when TAITs are involved. pm::run_passes_no_validate( tcx, &mut result, &[", "positive_passages": [{"docid": "doc-en-rust-f490521c52d25c7603a2622aadcc7005d494ad96916fc9287f4e5f1aa9ce79cd", "text": " $DIR/issue-68062-const-extern-fns-dont-need-fn-specifier-2.rs:5:18 | LL | const unsafe WhereIsFerris Now() {} | ^^^^^^^^^^^^^ expected one of `extern` or `fn` error: aborting due to previous error ", "positive_passages": [{"docid": "doc-en-rust-4b896e75ea201d5026a72d90704af60b8d8f16ff87fb6c41e963080cb6fd89bb", "text": "The following program compiles, and prints :\nAs pointed out by on discord , this also works on stable:\nRegression introduced in 1.40.0, specifically this line:\nI have a fix in", "commid": "rust_issue_68062", "tokennum": 42}], "negative_passages": []} {"query_id": "q-en-rust-ae15a8fa492b329292f0395503d64b74f329132acc869fffff62231fee12f126", "query": "// the MIR body will be constructed well. 
let coroutine_ty = body.local_decls[ty::CAPTURE_STRUCT_LOCAL].ty; let ty::Coroutine(_, args) = *coroutine_ty.kind() else { bug!(\"{body:#?}\") }; let ty::Coroutine(_, args) = *coroutine_ty.kind() else { bug!(\"tried to create by-move body of non-coroutine receiver\"); }; let args = args.as_coroutine(); let coroutine_kind = args.kind_ty().to_opt_closure_kind().unwrap();", "positive_passages": [{"docid": "doc-en-rust-d9788f9fb62c8de2f97f07962af96a35636fdc8a876f620741389e1c3bb30856", "text": " $DIR/non-existent-field-present-in-subfield.rs:37:24 | LL | let _test = &fooer.c; | ^ unknown field | = note: available fields are: `first`, `_second`, `_third` help: one of the expressions' fields has a field of the same name | LL | let _test = &fooer.first.bar.c; | ^^^^^^^^^^ error[E0609]: no field `test` on type `Foo` --> $DIR/non-existent-field-present-in-subfield.rs:40:24 | LL | let _test2 = fooer.test; | ^^^^ unknown field | = note: available fields are: `first`, `_second`, `_third` help: one of the expressions' fields has a field of the same name | LL | let _test2 = fooer.first.bar.c.test; | ^^^^^^^^^^^^ error: aborting due to 2 previous errors For more information about this error, try `rustc --explain E0609`. ", "positive_passages": [{"docid": "doc-en-rust-48cee304816ed8068fba74f198865051001db76ca3462cd9612353747a29af8e", "text": "I just encountered this output: We currently mention available fields when the field used isn't found. It would be nice if we peeked at the types for the available fields to see if any of them has any fields (including through deref) of the name we originally wanted. In this case, I would love to see the following output: $DIR/impl-trait-with-missing-bounds.rs:45:13 | LL | qux(constraint); | ^^^^^^^^^^ `::Item` cannot be formatted using `{:?}` because it doesn't implement `Debug` ... 
LL | fn qux(_: impl std::fmt::Debug) {} | --------------- required by this bound in `qux` | = help: the trait `Debug` is not implemented for `::Item` help: introduce a type parameter with a trait bound instead of using `impl Trait` | LL | fn baw(constraints: I) where ::Item: Debug { | ^^^^^^^^^^^^^ ^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ error: aborting due to 6 previous errors For more information about this error, try `rustc --explain E0277`.", "positive_passages": [{"docid": "doc-en-rust-bf02cff849637e34b780957988347f5e3bb12fc557fa33d07b85804f11218daa", "text": "(Edit: Actually, being an inherent fn doesn't matter. It happens with top-level fns too.) This prints the diagnostic: The suggestion in the last line is not valid syntax (). It also emits malformed syntax if the first parameter is a or parameter. The diagnostic becomes well-formed if any of the following is done: The first parameter is changed to instead ofThe fn is not an async fn. In either case, the diagnostic correctly says: Happens on both nightly: ... and stable:\nThis should hopefully be a fairly straightforward patch to the diagnostics code that generates the suggestion.", "commid": "rust_issue_79843", "tokennum": 132}], "negative_passages": []} {"query_id": "q-en-rust-b127f11bc1abeafe158fb50c618032059c948c3bbb3745f975545cf998ffc109", "query": "/// Creates an iterator that skips the first `n` elements. /// /// After they have been consumed, the rest of the elements are yielded. /// Rather than overriding this method directly, instead override the `nth` method. /// /// # Examples ///", "positive_passages": [{"docid": "doc-en-rust-069405b501f0e546f28d9b10d313572e89188902e86e202852d11618e1634404", "text": "From : It's easy, according to implement my own ` implementation for free. In retrospect it makes sense, but it was not obvious to me at all just from reading the docs. It is not mentioned in , or the docs for . 
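The doc line quoted just above ("Rather than overriding this method directly, instead override the `nth` method") is aimed at implementations like the following sketch; the `Counter` type is invented for illustration.

```rust
// A range-like iterator whose `nth` can jump ahead in O(1). Because
// `skip` forwards to `nth`, overriding `nth` speeds up `skip` for free.
pub struct Counter {
    pub current: u64,
    pub end: u64,
}

impl Iterator for Counter {
    type Item = u64;

    fn next(&mut self) -> Option<u64> {
        if self.current < self.end {
            let v = self.current;
            self.current += 1;
            Some(v)
        } else {
            None
        }
    }

    // Advance directly instead of calling `next` n times.
    fn nth(&mut self, n: usize) -> Option<u64> {
        self.current = self.current.saturating_add(n as u64);
        self.next()
    }
}
```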
And as the link in my post suggests, trying to work my way around the ownership of the APIs led me quite far afield. Admittedly it's not the most common case, but I think it would save others time if the standard library at least mentioned this somewhere. EDIT: fixed forum link which changed its name when I marked it solved.\nA section about \"you should consider overloading these things\" definitely sounds good. should also be on that list. (In the future or in nightly is in that list too, .)\nYour forum link is broken.\nThe docs do say However, I do like 's suggestion, about talking about which methods you may want to override.\nSuch hints should be non-normative though to allow the implementations, especially specializations, to use different approaches in the future.\nAgreed, I was picturing more which methods to consider overloading, not specifically which things use them. (Or, contrapositively, which things there's usually no reason to overload, like , because the default implementation is generally as good as it gets because it's a trivial wrapper.)\nSo basically this issue is about adding some documentation on which methods you might want to override right? Which methods do you think are best to mention? Or did I misinterpret the discussion?\nThat's correct, probably under or near For now my inclination would be to mention and -- once stable it should be instead, but IIRC we don't mention unstable in the broader docs until it stabilizes.\nWhat about adding this line of text at the end of the \"Implementing Iterator\" section? which call internally. However, it is also possible to write a custom implementation of methods such as and if an iterator is able to compute them more efficiently without calling . Feel free to adjust the text or make something up yourself :) If you agree I will create a PR for this.\nI think it would also be nice to add something to the documentation. How about this? 
Rather than overriding this method directly, instead override the method.\nin case it wasn't clear or you are not watching your GitHub notifications, I approve of both your and suggestions. I look forward to the PR!", "commid": "rust_issue_60223", "tokennum": 517}], "negative_passages": []} {"query_id": "q-en-rust-b127f11bc1abeafe158fb50c618032059c948c3bbb3745f975545cf998ffc109", "query": "/// Creates an iterator that skips the first `n` elements. /// /// After they have been consumed, the rest of the elements are yielded. /// Rather than overriding this method directly, instead override the `nth` method. /// /// # Examples ///", "positive_passages": [{"docid": "doc-en-rust-f68ea4b7b2d5dd8af323f213ecffdfd834d941669927fc9ddebbe652dbea89de", "text": "Sorry for the late reply! I apparently don't get notifications for thumbs ups and the like! I will hopefully be able to create a PR somewhere this midday.", "commid": "rust_issue_60223", "tokennum": 34}], "negative_passages": []} {"query_id": "q-en-rust-b12b65d3e2c80a68c849d5f7b384c7840f129f050d2772ed1ecc74a5cefb8971", "query": "} } /// Constructs a new `Rc` using a closure `data_fn` that has access to a /// weak reference to the constructing `Rc`. /// Constructs a new `Rc` while giving you a `Weak` to the allocation, /// to allow you to construct a `T` which holds a weak pointer to itself. /// /// Generally, a structure circularly referencing itself, either directly or /// indirectly, should not hold a strong reference to prevent a memory leak. /// In `data_fn`, initialization of `T` can make use of the weak reference /// by cloning and storing it inside `T` for use at a later time. /// indirectly, should not hold a strong reference to itself to prevent a memory leak. /// Using this function, you get access to the weak pointer during the /// initialization of `T`, before the `Rc` is created, such that you can /// clone and store it inside the `T`. 
/// /// `new_cyclic` first allocates the managed allocation for the `Rc`, /// then calls your closure, giving it a `Weak` to this allocation, /// and only afterwards completes the construction of the `Rc` by placing /// the `T` returned from your closure into the allocation. /// /// Since the new `Rc` is not fully-constructed until `Rc::new_cyclic` /// returns, calling [`upgrade`] on the weak reference inside `data_fn` will /// returns, calling [`upgrade`] on the weak reference inside your closure will /// fail and result in a `None` value. /// /// # Panics /// /// If `data_fn` panics, the panic is propagated to the caller, and the /// temporary [`Weak`] is dropped normally. ///", "positive_passages": [{"docid": "doc-en-rust-34f5fb938e9d88a3aeb61d1d37a61b5a4e328c732c81db1c30fae7d849556c86", "text": "As I in the i.r-l.o prerelease thread, the documentation of the new(ly stabilized) and method could really use some elaboration before the 1.60 release. I'm quite familiar with refcounting, strong/weak refs, etc. but just from the doc, I have no idea what problem the new method solves exactly, when I should use it, why the closure is needed, how the closure argument seems to come from \"thin air\", and what the closure should return.\nArgh, thanks. Could've sworn I specifically double-checked that I wrote \"cyclic\"\u2026\nProblem the method solves: constructing a which holds a pointer to itself. When you should use it: when you need to do the above. Previously you'd construct with e.g. and then mutate the value (via e.g. ) to fix up the self references. Where the closure argument comes from: essentially, the point of is that the place is allocated first, with strong=0, weak=1. The place has no data yet (you're constructing the value to place there), but is in the \"zombie\" state where it has weak references but no data. Why the closure is needed: the place needs to be \"finished\" once the value is created. 
In theory, you could \"recover\" any zombie weak, but in most cases, Rust wants to pretend that a \"zombie\" and a dangling are equivalent. What the closure should return: the closure returns the object to place in the reference counted place. After you've constructed the object, the reference count is \"activated\" and the object is accessible by (clones of) the weak pointer given to the constructor closure. For the most part, exists to construct style objects more easily, which is what the example demonstrates. It can be used for other use cases, but this is the primary motivating example. I'm biased because I know what is used for, so the existing documentation is enough. The docs aren't going to be changed for 1.60, but if you (or anyone) can improve the docs (with the above information or otherwise), PRs to do so are welcome!\nI agree I had to read the documentation a few times to understand what this does.\nHi!", "commid": "rust_issue_95672", "tokennum": 485}], "negative_passages": []} {"query_id": "q-en-rust-b12b65d3e2c80a68c849d5f7b384c7840f129f050d2772ed1ecc74a5cefb8971", "query": "} } /// Constructs a new `Rc` using a closure `data_fn` that has access to a /// weak reference to the constructing `Rc`. /// Constructs a new `Rc` while giving you a `Weak` to the allocation, /// to allow you to construct a `T` which holds a weak pointer to itself. /// /// Generally, a structure circularly referencing itself, either directly or /// indirectly, should not hold a strong reference to prevent a memory leak. /// In `data_fn`, initialization of `T` can make use of the weak reference /// by cloning and storing it inside `T` for use at a later time. /// indirectly, should not hold a strong reference to itself to prevent a memory leak. /// Using this function, you get access to the weak pointer during the /// initialization of `T`, before the `Rc` is created, such that you can /// clone and store it inside the `T`. 
/// /// `new_cyclic` first allocates the managed allocation for the `Rc`, /// then calls your closure, giving it a `Weak` to this allocation, /// and only afterwards completes the construction of the `Rc` by placing /// the `T` returned from your closure into the allocation. /// /// Since the new `Rc` is not fully-constructed until `Rc::new_cyclic` /// returns, calling [`upgrade`] on the weak reference inside `data_fn` will /// returns, calling [`upgrade`] on the weak reference inside your closure will /// fail and result in a `None` value. /// /// # Panics /// /// If `data_fn` panics, the panic is propagated to the caller, and the /// temporary [`Weak`] is dropped normally. ///", "positive_passages": [{"docid": "doc-en-rust-2ba1d12c04163d8be4307341016f856d00395214ee60d6f3fcd4c5bd47922d58", "text": "Just a quick note that changing the documentation for the Rust 1.60.0 release itself will likely not be possible, that'd require a full release rebuild.\nI'm sending a PR with explanations provided by so that it'll at least be in the next release. Why do you need a full release rebuild btw? Isn't it possible to regen the docs without rebuilding everything like the option on rustc?\nI opened Feedback is very welcome. :)\nOur release tooling only supports building everything at the same time.\nWell, at least it'll be present in the next release with a beta backport.", "commid": "rust_issue_95672", "tokennum": 129}], "negative_passages": []} {"query_id": "q-en-rust-b15904aede167d26caa238a9c5f32dfe385a67a6dd4fd45274b4dcf1fcd5df25", "query": "// stripping away any starting or ending parenthesis characters\u2014hence this // test of the JSON error format. 
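The `Rc::new_cyclic` semantics discussed in the records above can be sketched with a small, runnable example (illustrative only, not taken from the issue itself): the allocation is created first with strong = 0 and weak = 1, the closure receives a `Weak` to it, and the `Rc` is completed with the closure's return value.

```rust
use std::rc::{Rc, Weak};

// A structure holding a weak pointer to itself, the motivating use case.
struct Gadget {
    me: Weak<Gadget>,
}

fn main() {
    let gadget = Rc::new_cyclic(|weak| {
        // Upgrading inside the closure yields `None`: the value is not constructed yet.
        assert!(weak.upgrade().is_none());
        Gadget { me: weak.clone() }
    });
    // After construction, the stored weak pointer upgrades to the `Rc` itself.
    assert!(Rc::ptr_eq(&gadget, &gadget.me.upgrade().unwrap()));
}
```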
#![warn(unused_parens)] #![deny(unused_parens)] #![allow(unreachable_code)] fn main() { // We want to suggest the properly-balanced expression `1 / (2 + 3)`, not // the malformed `1 / (2 + 3` let _a = 1 / (2 + 3); let _a = 1 / (2 + 3); //~ERROR unnecessary parentheses f(); }", "positive_passages": [{"docid": "doc-en-rust-4ec7e73a404ac76d69c671c6f06db99d936ae8f0078fcfaac8411a19baadc5dd", "text": "Update test file to use + annotations instead of (see https://rust-) The files need to update are: - $DIR/str-as-char-butchered.rs:4:21 | LL | let _: &str = '\u03b2; | ^ expected `while`, `for`, `loop` or `{` after a label | help: add `'` to close the char literal | LL | let _: &str = '\u03b2'; | + error[E0308]: mismatched types --> $DIR/str-as-char-butchered.rs:4:19 | LL | let _: &str = '\u03b2; | ---- ^^ expected `&str`, found `char` | | | expected due to this error: aborting due to 2 previous errors For more information about this error, try `rustc --explain E0308`. ", "positive_passages": [{"docid": "doc-en-rust-205d33f2c4ca407a5eba587eb9d79d5b66428139a1f256eedc1fb0c137300290", "text": " $DIR/issue-59134-0.rs:8:27 | LL | const FLAG: u32 = bogus.field; | ^^^^^ not found in this scope error: aborting due to previous error For more information about this error, try `rustc --explain E0425`. ", "positive_passages": [{"docid": "doc-en-rust-bcb716d01e0aa92e792c79ce9116a6404c964666a769ff595f0e8e001f55676e", "text": "When using this branch of bitflags: through a patch in a local directory rls crashes with the following errors:\ni had enabled dev overrides for dependencies in using: removing that out the output from rls is now the normal debug flags but still the same errors:\nThis is now causing widespread ICEs in the RLS. Nominating to get this fixed (maybe has an idea?). See for a simpler reproducer.\nI encountered this issue trying out which causes rustc to crash on when invoked by RLS. I can confirm it's the exact same backtrace as\nI ran into this trying out on Windows 10. 
RLS inside vscode ends up panicking with this backtrace: (note for repro: I had changed the default feature in tui-rs to be \"crossterm\" because the default feature \"termion\" doesn't compile on windows - but after the termion compile errors, rustc was giving me a similar panic anyway)\nJFYI I could work around this problem by adding to my\nI tried that with the tui-rs (it had ) but same panic. [Edit] you're right, I had missed the inside the version.\nThanks !\nRunning into the same issue when trying to build Clap\nSame issue in vscode on linuxmint and rust 1.34.1 when trying to build glib (dependency of gtk).\nThere is no need to list every crate which depends on , it only clutters the thread and makes less visible.\nHere's a small repro, extracted from what is doing: I'm struggling to get a repro without the compile error, though.\nThank you for such a small repro! I\u2019ll take a closer look tomorrow and see what may be causing that. On Tue, 7 May 2019 at 22:32, Sean Gillespie <:\nLooks like isn't written back for a resolution error? That still repros without , right?\nCorrect, it does. Further minimized repro:\nInteresting. I managed to reduce it to: with: Any other combination is okay, it has to be under under and it has to be combined with a field access (bare unknown identifier doesn't ICE). I'm only guessing but it seems that it tries to emplace def 'path' under a which seems to skip a def path segment?", "commid": "rust_issue_59134", "tokennum": 497}], "negative_passages": []} {"query_id": "q-en-rust-b523cfd67ece5f2e6a8d67c450dd03609bee563d0fa77dd5e82575f00eb6a222", "query": " error[E0425]: cannot find value `bogus` in this scope --> $DIR/issue-59134-0.rs:8:27 | LL | const FLAG: u32 = bogus.field; | ^^^^^ not found in this scope error: aborting due to previous error For more information about this error, try `rustc --explain E0425`. 
", "positive_passages": [{"docid": "doc-en-rust-1560eee77cf6c7650928cb9ddd0f8e103b6279cb08a15618e7077839aa0fe6b3", "text": "I think this is confirmed by A, what I believe is, more accurate guess is that we somehow don't correctly nest appropriate typeck tables when visiting .\nOh, you need to have one table per body: look in HIR for fields and map all of those cases back to the AST. This makes a lot more sense now: so is not wrong, but is looking in the wrong place.\nWhat about this: Seems like we need to before walking the expression of the associated const?\nHm, probably! I imagined the problem is we don't nest it before visiting trait items (hence missing segment) but you may be right! Let's see :sweat_smile:\nWith the fix applied this still ICEs on (this time for type) which means we should nest tables for as well", "commid": "rust_issue_59134", "tokennum": 164}], "negative_passages": []} {"query_id": "q-en-rust-b52bc1eb1a5635da45d8d1fba08e094369fb034d1c9d39d737fbe75bcbf3384c", "query": " //@ known-bug: #111699 //@ edition:2021 //@ compile-flags: -Copt-level=0 #![feature(core_intrinsics)] use std::intrinsics::offset; fn main() { let a = [1u8, 2, 3]; let ptr: *const u8 = a.as_ptr(); unsafe { assert_eq!(*offset(ptr, 0), 1); } } ", "positive_passages": [{"docid": "doc-en-rust-4ba4016c415c1575c07d2ee0b6ab9be28620aa99d91eb8e92280f6a4c3cb941b", "text": " $DIR/issue-54966.rs:3:27 | LL | fn generate_duration() -> Oper {} | ^^^^ not found in this scope error: aborting due to previous error For more information about this error, try `rustc --explain E0412`. ", "positive_passages": [{"docid": "doc-en-rust-27abf7dffdd9fb3420783921a65ebbfb4054467a1c3c0f7240038bbba2872f0e", "text": "I expected just a compiler error . 
Compiler panicked: Meta\nThis is fixed in the beta version (1.30).", "commid": "rust_issue_54966", "tokennum": 24}], "negative_passages": []} {"query_id": "q-en-rust-b5dc21aca58affde69b68dc86797637d6319e45250678bb806f5fd1d74eb2fec", "query": " error[E0412]: cannot find type `Oper` in this scope --> $DIR/issue-54966.rs:3:27 | LL | fn generate_duration() -> Oper {} | ^^^^ not found in this scope error: aborting due to previous error For more information about this error, try `rustc --explain E0412`. ", "positive_passages": [{"docid": "doc-en-rust-afa01b1e4a6f7a1fc98bc53710b50fb4a7badf9201d0c207379809b43c80a6cc", "text": "This lets you get a mutable reference to an immutable value under a specific case. Code: From what I can tell, the tuple and at least two depths of Enums are necessary, along with an on the right side. There's probably other variations, but that seems to work well enough. Expected output would be a compile error, instead output is This works on both stable and nightly\nwould fix this as well?\nFixed by . Off: I wonder why we can't ship nll yet. 
It would solve so many issues right now, because of the match ergonomics sigh Match ergonomics produces roughly 10% of all errors/unsoundness/ICEs atm (at least it feels like this)\nYes correctly fixed this issue.", "commid": "rust_issue_52240", "tokennum": 157}], "negative_passages": []} {"query_id": "q-en-rust-b5ef5fa830822948ff108f0fddf2c7cd1728e80c207d9f596d6c5ed0e53f74ec", "query": " // run-rustfix fn main() { let a: usize = 123; let b: &usize = &a; if true { a } else { *b //~ ERROR `if` and `else` have incompatible types [E0308] }; if true { 1 } else { 1 //~ ERROR `if` and `else` have incompatible types [E0308] }; if true { 1 } else { 1 //~ ERROR `if` and `else` have incompatible types [E0308] }; } ", "positive_passages": [{"docid": "doc-en-rust-bc991a8c63d81686bebf07f793f5c69e366692742b1f03c258d8aba8722870bd", "text": " $DIR/issue-66693-panic-in-array-len.rs:8:20 | LL | let _ = [0i32; panic!(2f32)]; | ^^^^^^^^^^^^ | = note: this error originates in a macro (in Nightly builds, run with -Z macro-backtrace for more info) error[E0080]: evaluation of constant value failed --> $DIR/issue-66693-panic-in-array-len.rs:12:21 | LL | let _ = [false; panic!()]; | ^^^^^^^^ the evaluated program panicked at 'explicit panic', $DIR/issue-66693-panic-in-array-len.rs:12:21 | = note: this error originates in a macro (in Nightly builds, run with -Z macro-backtrace for more info) error: aborting due to 2 previous errors For more information about this error, try `rustc --explain E0080`. ", "positive_passages": [{"docid": "doc-en-rust-7c73542d67b097e4fd1fee1922c673ac020986f94704300c10704cf32974803c", "text": "The panic payload of the macro (as passed on to ) is generic, but the CTFE panic support assumes it will always be an . This leads to ICEs: Cc\nI guess we could support this, but we'd have to find some way to render the errors. 
Maybe we need to wait for const traits so we can require const Debug?\nAlternatively constck could require the argument to have type .\nalso requires a\nYeah, the libcore panic machinery does not support non-format-string panics.\nAye, left out the latter of my message. is likely the most common argument to , and it's not unusual with e.g. ; so I agree that waiting for const traits before supporting any other arguments is reasonable.\nIm hitting this ICE on stable without feature flag:\nHm indeed, that is interesting: any idea why we go on here after seeing an error? Does this kind of behavior mean any part of CTFE is reachable on stable somehow?\nyes. We're not gating CTFE on whether there were stability (or any other static checks) failing. I've been abusing this since 1.0 to demo things on the stable compiler and never got around to fixing it. I don't even know if we have a tracking issue for it.\n:rofl: So what would be a possible approach here? Replace such consts by early enough that their MIR never actually gets evaluated? We might as well use this issue, IMO...\nThat would require us to mutate all MIR and potentially even HIR. Other possible solutions: poison the mir Body if const checks fail (by adding a field like or by just nuking the body by replacing it with ) and thus make allow const evaluation to check if the mir Body failed the checks and bail out before doing anything make the have an field similar to\nOn second thought, let's leave this issue for the ICE, and make a new one for \"we can trigger nightly-only const code on stable because we don't abort compilation early enough\":\nGiven that this ICE is pre-existing, as shown by , can we un-block ? 
I'd really like to see const_panic stabilized, and this seems like a minor issue to be blocking on.\nThat comment is an example of accidentally partially exposing an unstable feature on stable -- a separate bug tracked in I don't see how that's an argument for shipping that feature on stable when it is broken.", "commid": "rust_issue_66693", "tokennum": 515}], "negative_passages": []} {"query_id": "q-en-rust-b5fc17187cab9a36105bd514bc52cf3f3d1c7b7491a7811525562f90fc390f5b", "query": " error: argument to `panic!()` in a const context must have type `&str` --> $DIR/issue-66693-panic-in-array-len.rs:8:20 | LL | let _ = [0i32; panic!(2f32)]; | ^^^^^^^^^^^^ | = note: this error originates in a macro (in Nightly builds, run with -Z macro-backtrace for more info) error[E0080]: evaluation of constant value failed --> $DIR/issue-66693-panic-in-array-len.rs:12:21 | LL | let _ = [false; panic!()]; | ^^^^^^^^ the evaluated program panicked at 'explicit panic', $DIR/issue-66693-panic-in-array-len.rs:12:21 | = note: this error originates in a macro (in Nightly builds, run with -Z macro-backtrace for more info) error: aborting due to 2 previous errors For more information about this error, try `rustc --explain E0080`. ", "positive_passages": [{"docid": "doc-en-rust-21ce43edd15410cd7321881478b82e77d0a12e87f14b1a6a8e5e03c80674bc9f", "text": "In particular, applies to all unstable const features. Fixing this bug here is not even that hard I think: somewhere around here there needs to be a check that the argument has type . Re: , AFAIK that one is also blocked on figuring out how the diagnostics should look like, and a further concern (though maybe not blocking) is .\nI guess that might make this a non-issue, where it is proposed also would require as first argument.\nFor reference: , which was recently merged. The plan is to fix the behaviour of in the Rust 2021 edition. 
Given that this ICE is listed as a blocker for , does this mean that stabilizing that will have to wait for the next edition?\nNo, because breaking changes are allowed to be made to unstable features without moving to a new edition; that's kinda the whole idea. The 2021 edition will effectively just extend the same restriction to all invocations, const context or not.\nI'd like to tackle this but I'm not very experienced with the CTFE code. I get that we need to check from the that we've matched on in this branch and we need to match on the enum, but: For the first question, I see and that we probably want to check that the type of the tuple and that there's variants of that are , but do we stop the iteration when we get one that is a type? Is the projection list always guaranteed to terminate in a variant of that has a ? And once we have an instance of , do we just check that there's any_ projection in the list that has a ?\nyou can use to get the type of the . Then you can test its to be .", "commid": "rust_issue_66693", "tokennum": 352}], "negative_passages": []} {"query_id": "q-en-rust-b60a83dd72393c8ce7821df6d7201b70738b0aac6736b8a20ffdb1e467f89214", "query": "tinfo: *const TypeInfo, dest: extern \"C\" fn(*mut libc::c_void) -> *mut libc::c_void, ) -> !; fn __gxx_personality_v0( version: c_int, actions: uw::_Unwind_Action, exception_class: uw::_Unwind_Exception_Class, exception_object: *mut uw::_Unwind_Exception, context: *mut uw::_Unwind_Context, ) -> uw::_Unwind_Reason_Code; }", "positive_passages": [{"docid": "doc-en-rust-32273abb043c6654aef23523563421842d54754f8b9fce5abf25a92a8e46d2f3", "text": " $DIR/expect_unfulfilled_expectation.rs:24:22 --> $DIR/expect_unfulfilled_expectation.rs:17:14 | LL | #[expect(unused_mut, reason = \"this expectation will create a diagnostic with the default lint level\")] | ^^^^^^^^^^ | = note: this expectation will create a diagnostic with the default lint level = note: duplicate diagnostic emitted due to `-Z 
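Since the stabilization of `const_panic` (Rust 1.57), the situation debated above has settled: `panic!` with a string literal is permitted in const contexts and becomes a compile-time error if actually reached. A minimal sketch, not drawn from the issue itself:

```rust
// `panic!` with a `&str` message is allowed in a `const fn`; it only errors
// if const evaluation actually reaches it.
const fn checked(n: usize) -> usize {
    if n > 10 {
        panic!("value too large for this sketch");
    }
    n
}

fn main() {
    const OK: usize = checked(3); // evaluated at compile time without panicking
    assert_eq!(OK, 3);
}
```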
deduplicate-diagnostics=no` warning: this lint expectation is unfulfilled --> $DIR/expect_unfulfilled_expectation.rs:27:22 | LL | #[expect(unused, unfulfilled_lint_expectations, reason = \"the expectation for `unused` should be fulfilled\")] | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | = note: the expectation for `unused` should be fulfilled = note: the `unfulfilled_lint_expectations` lint can't be expected and will always produce this message warning: this lint expectation is unfulfilled --> $DIR/expect_unfulfilled_expectation.rs:27:22 | LL | #[expect(unused, unfulfilled_lint_expectations, reason = \"the expectation for `unused` should be fulfilled\")] | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | = note: the expectation for `unused` should be fulfilled = note: the `unfulfilled_lint_expectations` lint can't be expected and will always produce this message = note: duplicate diagnostic emitted due to `-Z deduplicate-diagnostics=no` warning: 4 warnings emitted warning: 6 warnings emitted ", "positive_passages": [{"docid": "doc-en-rust-9da6e95c335f48d6b06b8a6cd003c89c70bc53bedd6bf2c5c7ac9bd6bf6d5f89", "text": "I tried this code: I expected to catch lint and suppress it, as it should. Instead, compiler issues a diagnostic both about unused value and unfulfilled lint expectation: It does work as expected though if is applied to function. I used the latest stable (Rust 1.81), but it is reproducible both on latest beta (2024-09-04 ) and latest nightly (2024-09-05 ). 
labels: +F-lintreasons, +A-diagnostics $DIR/expect_unfulfilled_expectation.rs:24:22 --> $DIR/expect_unfulfilled_expectation.rs:17:14 | LL | #[expect(unused_mut, reason = \"this expectation will create a diagnostic with the default lint level\")] | ^^^^^^^^^^ | = note: this expectation will create a diagnostic with the default lint level = note: duplicate diagnostic emitted due to `-Z deduplicate-diagnostics=no` warning: this lint expectation is unfulfilled --> $DIR/expect_unfulfilled_expectation.rs:27:22 | LL | #[expect(unused, unfulfilled_lint_expectations, reason = \"the expectation for `unused` should be fulfilled\")] | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | = note: the expectation for `unused` should be fulfilled = note: the `unfulfilled_lint_expectations` lint can't be expected and will always produce this message warning: this lint expectation is unfulfilled --> $DIR/expect_unfulfilled_expectation.rs:27:22 | LL | #[expect(unused, unfulfilled_lint_expectations, reason = \"the expectation for `unused` should be fulfilled\")] | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | = note: the expectation for `unused` should be fulfilled = note: the `unfulfilled_lint_expectations` lint can't be expected and will always produce this message = note: duplicate diagnostic emitted due to `-Z deduplicate-diagnostics=no` warning: 4 warnings emitted warning: 6 warnings emitted ", "positive_passages": [{"docid": "doc-en-rust-79ae52ec28826e7fd5d0571036ba651615fc4170373c979f134af2d5a515bd5f", "text": "are present for more than one HIR node in the and if one node is the (direct or indirect) parent of the other node, then add the level only_ for parent node. This will take care of any lints on the child (since we always walk up the HIR hierachy as mentioned earlier) as well as the parent and it will not cause any warnings about missed expectations. I am new to this area so if someone can review this idea or suggest alternatives that will be great. 
since you seemed to have worked in this area recently, can you please weigh in?\nYour investigation is accurate, and the fix is simple: visit the statement node when building lint levels. Do you mind making such a pr ? I don't think we need any special adjustment for cases, duplicated attributes are already taken into account, but a test won't hurt.\nThanks . Yes, I'll put together a PR. My only concern is if all we do is add the level at , although it will work perfectly for , etc., in the case of we will end up with an unfulfilled expectation on the node which will appear to the user as a warning. But anyway let me have a go and see what happens.\nOpened draft PR that adds the lint on . It does suffer from the extra warning problem.", "commid": "rust_issue_130142", "tokennum": 273}], "negative_passages": []} {"query_id": "q-en-rust-b7662c6bc08650e717f64a63f2a805d2a528581f2aa9b51ab4ac0a400e068702", "query": "severity: ChangeSeverity::Info, summary: \"The `build.profiler` option now tries to use source code from `download-ci-llvm` if possible, instead of checking out the `src/llvm-project` submodule.\", }, ChangeInfo { change_id: 129152, severity: ChangeSeverity::Info, summary: \"New option `build.cargo-clippy` added for supporting the use of custom/external clippy.\", }, ];", "positive_passages": [{"docid": "doc-en-rust-99c889b6440041652c587ab3ef8b73797f12c011e66f29f49153e5655b8da3be", "text": "I'm pretty sure it is possible to set custom paths for , & in but not for clippy.\nlabel +T-bootstrap", "commid": "rust_issue_121518", "tokennum": 29}], "negative_passages": []} {"query_id": "q-en-rust-b8066812a2b29be804c901a93bda9dc8f9ab39927c33cc6b4231f62d7f86dea1", "query": "// `Iterator::__iterator_get_unchecked`. 
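The `#[expect]` placement at issue above (on a statement rather than on the enclosing function) can be sketched as follows. This is an illustrative example, not code from the issue; on compilers predating the fix it emits the spurious duplicate warnings described, but it compiles and runs either way.

```rust
// `#[expect]` (stabilized in Rust 1.81) on a `let` statement: the
// `unused_mut` lint fires for `x`, fulfilling the expectation.
fn five() -> i32 {
    #[expect(unused_mut, reason = "kept `mut` so the lint fires and is expected")]
    let mut x = 5;
    x
}

fn main() {
    assert_eq!(five(), 5);
}
```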
unsafe { (self.a.__iterator_get_unchecked(idx), self.b.__iterator_get_unchecked(idx)) } } #[inline] fn fold(mut self, init: Acc, mut f: F) -> Acc where F: FnMut(Acc, Self::Item) -> Acc, { let mut accum = init; let len = ZipImpl::size_hint(&self).0; for i in 0..len { // SAFETY: since Self: TrustedRandomAccessNoCoerce we can trust the size-hint to // calculate the length and then use that to do unchecked iteration. // fold consumes the iterator so we don't need to fixup any state. unsafe { accum = f(accum, self.get_unchecked(i)); } } accum } } #[doc(hidden)]", "positive_passages": [{"docid": "doc-en-rust-7accc4af3c6adb97fe69cbac3ba58b19914fb38653e6ecbb38372fc5512eb31d", "text": " $DIR/issue-109054.rs:18:9 | LL | type ReturnType<'a> = impl std::future::Future + 'a; | -- this generic parameter must be used with a generic lifetime parameter ... LL | &inner | ^^^^^^ error: aborting due to previous error For more information about this error, try `rustc --explain E0792`. ", "positive_passages": [{"docid": "doc-en-rust-5220d5a8d8d86613c4381ae5bdff14152e89290d442600e3c2f39476da0e5bee", "text": " $DIR/issue-63364.rs:6:14 | LL | for n in 100_000.. 
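The specialized `Zip::fold` shown above is an internal optimization; externally it behaves like an ordinary `zip(..).fold(..)`. A small sketch (mine, not from the diff), computing a dot product:

```rust
// Plain `zip` + `fold` over two slices; the TrustedRandomAccess
// specialization above produces the same result with unchecked indexing.
fn dot(a: &[i32], b: &[i32]) -> i32 {
    a.iter().zip(b.iter()).fold(0, |acc, (x, y)| acc + x * y)
}

fn main() {
    assert_eq!(dot(&[1, 2, 3], &[10, 20, 30]), 140);
}
```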
{ | ^^^^^^^ | = note: `#[deny(overflowing_literals)]` on by default error: aborting due to previous error ", "positive_passages": [{"docid": "doc-en-rust-3db888818268f2e058d840ecf54bc68a793ddb678944677856eed5923b06907c", "text": "An ICE is observable with this code: Changing to take a rather than makes the ICE go away.\nInvestigation shows that will trigger the ICE, while will not\nIntroduced in cc We can probably get away with doing an early return if and no other changes.", "commid": "rust_issue_63364", "tokennum": 52}], "negative_passages": []} {"query_id": "q-en-rust-b8aa65178091608db1b49f5f228648a355c1f522bf122e22d0bc98a3ba22145b", "query": " error[E0107]: missing generics for struct `Vec` --> $DIR/issue-92305.rs:5:45 | LL | fn f(data: &[T]) -> impl Iterator { | ^^^ expected at least 1 generic argument | note: struct defined here, with at least 1 generic parameter: `T` --> $SRC_DIR/alloc/src/vec/mod.rs:LL:COL | LL | pub struct Vec { | ^^^ - help: add missing generic argument | LL | fn f(data: &[T]) -> impl Iterator> { | ~~~~~~ error[E0282]: type annotations needed --> $DIR/issue-92305.rs:7:5 | LL | iter::empty() | ^^^^^^^^^^^ cannot infer type for type parameter `T` declared on the function `empty` error[E0282]: type annotations needed --> $DIR/issue-92305.rs:10:35 | LL | fn g(data: &[T], target: T) -> impl Iterator> { | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ cannot infer type error: aborting due to 3 previous errors Some errors have detailed explanations: E0107, E0282. For more information about an error, try `rustc --explain E0107`. ", "positive_passages": [{"docid": "doc-en-rust-5d0d21894cfa4d73f8fee73fe4896e60fefd222c19d3497ab7a41e32fe7c9d19", "text": "Repro steps: create a new cargo binary with . Replace with the following code. Then run . Hope this is helpful! I've tried to minimize the code as much as possible. 
: $DIR/const_gen_fn.rs:6:1 | LL | const gen fn a() {} | ^^^^^-^^^---------- | | | | | `gen` because of this | `const` because of this error: functions cannot be both `const` and `async gen` --> $DIR/const_gen_fn.rs:9:1 | LL | const async gen fn b() {} | ^^^^^-^^^^^^^^^---------- | | | | | `async gen` because of this | `const` because of this error: aborting due to 2 previous errors ", "positive_passages": [{"docid": "doc-en-rust-7de2dab580f301c8ce6b2809875c28dcb5912dccad192bd07b5e03d66bc6bf9a", "text": "() Reproduces on the playground using . label +D-incorrect +F-gen_blocks +requires-nightly", "commid": "rust_issue_130232", "tokennum": 24}], "negative_passages": []} {"query_id": "q-en-rust-ba80d5651c5f0bad06b3ef1919624fbf99794661f247901fa98b3e871c150567", "query": "ty::note_and_explain_type_err(tcx, terr); } } // Finally, resolve all regions. This catches wily misuses of lifetime // parameters. infcx.resolve_regions_and_report_errors(); } fn check_cast(fcx: &FnCtxt,", "positive_passages": [{"docid": "doc-en-rust-1cadb871981fa33e195ab1398ce84ed497353dd5701ac299c8824236d27c9763", "text": "The following code compiles: Nominating for 1.0, P-backcompat-lang. I may well just fix this along with , since the solution is likely going to be similar to what pointed out, but I want to get it on the radar.\nHmm, OK. So the code does attempt to handle this, but I don't know if it is handled in the right way. Specifically, it creates fresh region variables for each region and checks to see whether they unify. The trouble is that does unify with \u2014it unifies if and only if and are the same region. But this is precisely what claimed to be a problem in . So I'm not sure what the right thing is to do here.\nI'm going to assign to because I'm blocked on his opinion on what to do here.\nAssigning P-backcompat-lang, 1.0.\nRight thing to do is to use .\nArgh. 
The right fix is to substitute some skolemized lifetimes in for the early bound regions before doing the unification rather than fresh variables.\nActually I think the problem is that we're just not calling on the inference context", "commid": "rust_issue_15517", "tokennum": 244}], "negative_passages": []} {"query_id": "q-en-rust-baa3974c1aafa6eed2bf52d0fffbcc14b6bdf3d4e33fe53ef344d4b0954956d3", "query": "/// # Example /// /// ```rust /// # #![allow(deprecated)] /// use url::get_scheme; /// /// let scheme = match get_scheme(\"https://example.com/\") {", "positive_passages": [{"docid": "doc-en-rust-d5aaca454c4b9e374bf52fd688a570f38314b1b4372f475dd8a89626b2ac7ccb", "text": "Users should use the cargo-ified Deprecate for a release or two then delete. cc\n:+1:\nRelated:\nI tried this: Hoping to deprecate the whole crate, but then building this program does not emit any warning: I expected a deprecation warning on the line. Is there another way to achieve this, or do I need to annotate individual items in the crate?\nThat should have worked, but you may need to remove the annotation above (they may be conflicting). I would also personally find it very helpful to annotate methods in particular what the path forward is (or just a general post about how to migrate).\nRemoving worked, thanks. Should this deprecation go through the RFC process, or can I assume that the team wants it since filed this issue? The documentation at explains how to use rust-url, and in particular the concept of relative v.s. non-relative URL schemes (e.g. vs ) which is a bit unusual. I believe this sufficient to make migration straightforward, but I\u2019m obviously biased. Any feedback on the documentation is welcome. 
(I\u2019ll make the deprecation message include the documentation\u2019s URL, it seems more useful landing page than the github repo.)\nHaving an issue is sometimes considered a \"todo item\" rather than \"everyone has approved this\", you'll find both styles of issues on the issue tracker. We don't have much precedent for deprecating an in-tree crate, so we're forging new territory here. I'd like to start moving crates out of the repo, however, so I hope to have some more on this topic soon.", "commid": "rust_issue_15874", "tokennum": 358}], "negative_passages": []} {"query_id": "q-en-rust-baf02ef8ba73b038f865f4c9b85457654fdd48ca272ad2302de2bd5784581f5a", "query": "} fn struct_all_fields_are_public(tcx: TyCtxt<'_>, id: LocalDefId) -> bool { // treat PhantomData and positional ZST as public, // we don't want to lint types which only have them, // cause it's a common way to use such types to check things like well-formedness tcx.adt_def(id).all_fields().all(|field| { let adt_def = tcx.adt_def(id); // skip types contain fields of unit and never type, // it's usually intentional to make the type not constructible let not_require_constructor = adt_def.all_fields().any(|field| { let field_type = tcx.type_of(field.did).instantiate_identity(); if field_type.is_phantom_data() { return true; } let is_positional = field.name.as_str().starts_with(|c: char| c.is_ascii_digit()); if is_positional && tcx .layout_of(tcx.param_env(field.did).and(field_type)) .map_or(true, |layout| layout.is_zst()) { return true; } field.vis.is_public() }) field_type.is_unit() || field_type.is_never() }); not_require_constructor || adt_def.all_fields().all(|field| { let field_type = tcx.type_of(field.did).instantiate_identity(); // skip fields of PhantomData, // cause it's a common way to check things like well-formedness if field_type.is_phantom_data() { return true; } field.vis.is_public() }) } /// check struct and its fields are public or not,", "positive_passages": [{"docid": 
"doc-en-rust-9546ab9a79e7afb097786fe35f6ec641a95b6f3e21077a1cea80490e41620b4f", "text": "The struct is meant to not be constructible. Also the behavior is incoherent between tuple structs (correct behavior) and named structs (incorrect behavior). Also, this is a regression, this didn't use to be the case. No response No response\nlabel +F-never_Type +D-inconsistent\nWe will also get such warnings for private structs with fields of never type (): The struct is never constructed although it is not constructible. This behavior is expected and is not a regression. Did you meet any cases using this way in real world? For the incoherent behavior, we also have: This because we only skip positional ZST (PhantomData and generics), this policy is also applied for never read fields:\nCan you elaborate in which sense it is expected? I can understand that it is expected given the current implementation of Rust. It seems between stable and nightly a new lint was implemented. This lint is expected for inhabited types. However it has false positives on inhabited types, i.e. types that are only meant for type-level usage (for example to implement a trait). I agree we should probably categorize this issue as a false positive of a new lint rather than a regression (technically). Thanks, I can see the idea behind it. I'm not convinced about it though, but it doesn't bother me either. So let's just keep this issue focused on the false positive rather than the inconsistent behavior.\nHow do you think about private structs with fields of never type?\nI would consider it the same as public structs, because they can still be \"constructed\" at the type-level. So they are dynamic dead--code but not dead-code since they are used statically. It would be dead-code though if the type is never used (both in dynamic code and static code, i.e. types).\nActually, thinking about this issue, I don't think this is a false positive only for the never type. I think the whole lint is wrong. 
The fact that the type is empty is just a proof that constructing it at term-level is not necessary for the type to be \"alive\" (aka usable). This could also be the case with non-empty types. A lot of people just use unit types for type-level usage only (instead of empty types).", "commid": "rust_issue_128053", "tokennum": 509}], "negative_passages": []} {"query_id": "q-en-rust-baf02ef8ba73b038f865f4c9b85457654fdd48ca272ad2302de2bd5784581f5a", "query": "} fn struct_all_fields_are_public(tcx: TyCtxt<'_>, id: LocalDefId) -> bool { // treat PhantomData and positional ZST as public, // we don't want to lint types which only have them, // cause it's a common way to use such types to check things like well-formedness tcx.adt_def(id).all_fields().all(|field| { let adt_def = tcx.adt_def(id); // skip types contain fields of unit and never type, // it's usually intentional to make the type not constructible let not_require_constructor = adt_def.all_fields().any(|field| { let field_type = tcx.type_of(field.did).instantiate_identity(); if field_type.is_phantom_data() { return true; } let is_positional = field.name.as_str().starts_with(|c: char| c.is_ascii_digit()); if is_positional && tcx .layout_of(tcx.param_env(field.did).and(field_type)) .map_or(true, |layout| layout.is_zst()) { return true; } field.vis.is_public() }) field_type.is_unit() || field_type.is_never() }); not_require_constructor || adt_def.all_fields().all(|field| { let field_type = tcx.type_of(field.did).instantiate_identity(); // skip fields of PhantomData, // cause it's a common way to check things like well-formedness if field_type.is_phantom_data() { return true; } field.vis.is_public() }) } /// check struct and its fields are public or not,", "positive_passages": [{"docid": "doc-en-rust-97adf45f9b9e264448b397ec08a8e2a265a07410a46c4023c53938df064ef2be", "text": "So my conclusion would be that linting a type to be dead-code because it is never constructed is brittle as it can't be sure that the type is not 
meant to be constructed at term-level (and thus only used at type-level). Adding is not a solution because if the type is actually not used at all (including in types) then we actually want the warning. Here is an example: The same thing with a private struct: And now actual dead code:\nI think a better way is: is usually used to do such things like just a proof. Maybe we can add a help for this case.\nBut I agree that we shouldn't emit warnings for pub structs with any fields of never type (and also unit type), this is an intentional behavior.\nOh yes good point. So all good to proceed with adding never type to the list of special types.\nThanks a lot for the quick fix! Once it hits nightly, I'll give it a try. I'm a bit afraid though that this won't fix the underlying problem of understanding which types are not meant to exist at runtime. For example, sometimes instead of using the never type, I also use empty enums such that I can give a name and separate identity to that type. Let's see if it's an issue. If yes, I guess ultimately there would be 2 options: Just use in those cases. Introduce a trait (or better name) for types that are not meant to exist at runtime like unit, never, and phantom types, but could be extended with user types like empty enums. I'll ping this thread again if I actually hit this issue.\nI could test it (using nightly-2024-08-01) and . I just had to change a to in a crate where I didn't have already. Ultimately, is going to be a type alias for so maybe it's not worth supporting it. And thinking about empty enums, this should never be an issue for my particular usage, which is to build empty types. Because either the type takes no parameters in which case I use an empty enum directly. Or it takes parameters and I have to use a struct with fields, and to make the struct empty I also have to add a never field. 
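A minimal, hedged sketch of the type-level-only pattern the discussion above is about (the `Meters` name, the `Unit` trait, and `unit_symbol` are hypothetical, not from the issue): the struct is never constructed at run time, yet it is not dead code, because it is used as a type argument to select a trait impl.

```rust
use std::marker::PhantomData;

// Marker type: never constructed at run time, only used at the type level.
// The PhantomData field is the "proof" field pattern the lint special-cases.
struct Meters(PhantomData<()>);

trait Unit {
    const SYMBOL: &'static str;
}

impl Unit for Meters {
    const SYMBOL: &'static str = "m";
}

// The type is "alive" even though no `Meters` value ever exists.
fn unit_symbol<U: Unit>() -> &'static str {
    U::SYMBOL
}

fn main() {
    // `Meters` appears here purely as a type argument.
    println!("{}", unit_symbol::<Meters>());
}
```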
So my concern in the previous message is not an issue for me.", "commid": "rust_issue_128053", "tokennum": 495}], "negative_passages": []} {"query_id": "q-en-rust-bb20c2f7f96ae7ab366133765a659a9f3006cab72a984a28250c2cb811445e07", "query": " error: expected one of `,` or `:`, found `(` --> $DIR/issue-66357-unexpected-unreachable.rs:12:13 | LL | fn f() { |[](* } | ^ expected one of `,` or `:` error: expected one of `)`, `-`, `_`, `box`, `mut`, `ref`, `|`, identifier, or path, found `*` --> $DIR/issue-66357-unexpected-unreachable.rs:12:14 | LL | fn f() { |[](* } | -^ help: `)` may belong here | | | unclosed delimiter error: aborting due to 2 previous errors ", "positive_passages": [{"docid": "doc-en-rust-863bba54f7a41ac696146c740900e5f4a220ca115d49a9429bf54c1698b41480", "text": "When compiling a personal project, I got an error that the compiler panicked and entered unreachable code. I've dumped the output: And I committed my code which caused the bug here: It might be because I had only written half of an anonymous function at at line 92. This is not that urgent as I only saw it with half-written code, but since the compiler suggested I file a bug I did. :\nGood call! Reducing it to is enough to trigger the ICE. p:last-child { p:last-child, .docblock > .warning:last-child { margin: 0; }", "positive_passages": [{"docid": "doc-en-rust-2da5fa86b84bf416d9b670af0c4e7d9c0ccd4cda593660d7d16436e913a6382e", "text": "Following , we realized that rustdoc users would prefer to have a syntax extension rather than using plain HTML to have warning blocks. suggested the following: suggested: What do you think? cc\nPersonal Opinions: The first style will look fine for short forms, but I'm not sure for longer text. A possible syntax: Which to me pushes the text too far right. Not having any spacing, but just separating with newlines, seems hard to parse at a glance what is part of the WARNING, and prevents multiple line breaks inside the warning. Also, it's not as extensible. 
INFO can be , but if we ever want more extensions, they would increasingly possibly conflict with other things. The second style makes it immediately clear what is part of the block, and is unlikely to accidentally conflict if we ever want to add more block styles. It also supports multiple lines or inner blocks trivially. A final advantage is that it may support nested blocks, in case those ever come to be useful. Overall: I prefer the second form, both aesthetically and for future-proofing\nI think I agree with in preferring the second form. I like the first form's simplicity, but it also seems surprising that is treated as a 'keyword' and will change the formatting of the docs.\nThat's why we have this debate. :stuckouttongue:\nI personally actually prefer stabilizing a CSS class, I think we should do more of that, just because parsing can get annoying. But this is not that hard to parse anyway. The problem with is precisely that you can't scope it easily: what if you want to add code blocks? bullet points?\nThe problem with this is it ties rustdoc to the HTML backend; like I said nothing rustdoc currently does guarentees that it will generate HTML from the markdown. Changing that means that to make the JSON backend have the same info, we'd have to add a full HTML parser to rustdoc just to read the HTML in the markdown, because pulldown does not parse HTML for us. I really think we should avoid making the JSON backend a second-class citizen.\nAlso there's not much point to this if it's just a little CSS IMO, you can add that yourself with and it will look just as good and be more flexible since you can control the styles yourself.", "commid": "rust_issue_79710", "tokennum": 504}], "negative_passages": []} {"query_id": "q-en-rust-bb263248d8d44e217808fde8cf6e115f5d2c6ab6d12a74a5000a5c5e9c47736f", "query": "} /* For the last child of a div, the margin will be taken care of by the margin-top of the next item. 
*/ p:last-child { p:last-child, .docblock > .warning:last-child { margin: 0; }", "positive_passages": [{"docid": "doc-en-rust-750e7c681f4102792b3611241ddcbc147822060286ee625b8e928ddde584f36a", "text": "I'm not a giant fan of the syntax because it looks odd when rendered by anything other than rustdoc, which I'd rather avoid, one of the major design concerns for intra-doc links was compatibility with other markdown parsers:\nSo, I agree, but I'm not convinced that the warning class is something that needs to be surfaced in the JSON backend as anything special. Also, bear in mind, HTML is a part of markdown: valid HTML (well a subset of HTML) is valid markdown. Users already can , and they do that in many cases. Markdown parsers already need to be able to handle HTML. There's nothing new there. Whatever consumes the JSON backend will need to be able to handle that too. I mean, yes, in theory, in practice people don't do this because of the barrier to entry (you have to get your tools to use that, including )\nAlso, this is incorrect, pulldown absolutely parses HTML. It has to, like I said valid HTML is valid Markdown. It doesn't parse it thoroughly, I believe you can mess it up with sufficiently complex HTML, but it parses HTML. What this changes is subtler: up till now you did not need any particular stylesheet to acceptably render rustdoc markdown from the JSON backend on your own (just use your favorite markdown renderer's stylesheet), but now you will need other css classes to handle rendering warnings as warnings.\nI do not agree that this is a problem. I've used Phabricator's implementation of this approach happily and it is by design that warning banners are one paragraph. It is not the intention that you would put entire sections of documentation into a warning banner. They are for a brief callout. It's fine to refer to a subsequent section of the documentation from the callout if needed. 
I suppose this is a matter of taste but it's one where I believe the opinionatedness of the syntax is a positive thing.\nSo I discussed this a bit with on the side, we agree that the effect on the JSON backend isn't major since HTML is already a subset of markdown and docs use HTML already (especially for images and tables). My position is as follows: I strictly prefer HTML over some other scoped syntax with nuances people have to learn. I don't think there's as much value in unscoped syntax which makes my position \"HTML or nothing\", basically.", "commid": "rust_issue_79710", "tokennum": 533}], "negative_passages": []} {"query_id": "q-en-rust-bb263248d8d44e217808fde8cf6e115f5d2c6ab6d12a74a5000a5c5e9c47736f", "query": "} /* For the last child of a div, the margin will be taken care of by the margin-top of the next item. */ p:last-child { p:last-child, .docblock > .warning:last-child { margin: 0; }", "positive_passages": [{"docid": "doc-en-rust-dec4ce823c8be44080dd52f4329cd82509f0245db429e2372e378f7f940d6712", "text": "I'm not wholly against unscoped single-line syntax. We and syntax in W3C specs and you invariably need to add more stuff to it (there's an HTML class in these specs for this reason). Furthermore, Rust has a strong culture of wrapping text, so this means we need to work well while wrapped, and that means we're again introducing some form of scoping mechanism (e.g. indenting the text), which editors will have to learn, as well.\nI don't know about admonitions specifically but there seems to be something of a community habit to use \"magic tags\" on fenced code blocks for this sort of features (e.g. github has a tag), maybe that could be an option? I don't know if pulldown_cmark has a hook for manipulating code blocks but\u2026 So a warning would be or maybe the tag would be to leave room for things like note/todo/danger/\u2026 Alternatively, take a gander at how other markdown implementations extend the language for this sort of features? e.g. 
seems to be using for admonitions.\nHmm, but is still for code, even if it has some extra behavior. Aside from the design concerns, this would be feasible to implement: we just extract the language string from the code block with pulldown.\nCould you use something that's already built into markdown? For example, rustdoc could take and render it as an admonition.\nThat's what suggested: to use markdown and make some changes on rustdoc side. I think the codeblock approach is the best personally.\nMy bad, I didn't see that.\nI wonder if it makes more sense to add this as a proc-macro. Rustdoc can still provide the CSS classes and things, and you could use them with HTML directly, but you could use or something and it would expand to the correct HTML tags and classes.\nMight be better to have a then, no?\nIs that going to be a special header similar to the \"unsafe\"/\"experimental\"/etc ones, or is that going to also be used within docs in an order-sensitive way? If the latter we should probably just use a CSS class; mixing and is ugly\nAgreed.", "commid": "rust_issue_79710", "tokennum": 483}], "negative_passages": []} {"query_id": "q-en-rust-bb263248d8d44e217808fde8cf6e115f5d2c6ab6d12a74a5000a5c5e9c47736f", "query": "} /* For the last child of a div, the margin will be taken care of by the margin-top of the next item. */ p:last-child { p:last-child, .docblock > .warning:last-child { margin: 0; }", "positive_passages": [{"docid": "doc-en-rust-0401f4aca5f75b95bccb3c9f0c1382f7d896e23af0bafe46936c5b01ce984214", "text": "Among the suggested ideas, my favourite one remains the code block with \"warning\" as tag:\nYeah, but triple backticks is also for preformatted text without further markup, so you won't be able to e.g. use text formatting (or other code) in there\nTrue... 
Finding the right syntax is much more complicated than expected...\nThat's the issue of using markdown, for all that it has a pretty nice baseline syntax it really was not designed for extensibility.\nWe can always use : it'd allow to have text formatting inside it.\nI would use , it's not quite kosher to invent arbitrary HTML tags (though you can use Custom Elements, which is more heavyweight)\nSounds good to me. It's funny because we kinda came back to the . :rofl:\nI just thought: we could use tag in the markdown and then transform it into in the generated HTML?\nMy main objection to all of these is that rustdoc shouldn't be adding custom markdown syntax that no one else supports. Any new syntax we use should look reasonable even if rendered with the default renderer.\nWell, it remains valid markdown since it's HTML (even though it's custom tag). I'm completely fine with , but it might be simpler/better for users to simply write .\nComing back to this (because of ), it seems like the conversation kinda agreed on adding a CSS class directly and then it would look like this in the docs: . I'll send a PR in the next days and then we can open an FCP to see if everyone is ok with it (and if not, we'll have code to help the debate).\nThis indeed seems like a better solution than the HTML block. It also plays better with something said about not linking it too much too HTML. This will look like a proper warning or note, mostly independent of the HTML generation.\nThey also seemed to not be too favourable to anything that needs some extra parsing from rustdoc iuc. I'm fine with bot the github solution and the CSS class.\nMaybe one more comment to make here. The note-taking app, which uses Markdown also has a similar syntax to create these blocks. It is called . 
It uses a syntax similar to the syntax that is used for doc intra-links currently, and it is fully compatible with them.", "commid": "rust_issue_79710", "tokennum": 506}], "negative_passages": []} {"query_id": "q-en-rust-bb263248d8d44e217808fde8cf6e115f5d2c6ab6d12a74a5000a5c5e9c47736f", "query": "} /* For the last child of a div, the margin will be taken care of by the margin-top of the next item. */ p:last-child { p:last-child, .docblock > .warning:last-child { margin: 0; }", "positive_passages": [{"docid": "doc-en-rust-905d2f3b7c6b013fa7467fd80d5c43168bffe6d3d80604ac80788220d4eb9b1e", "text": "I am uncertain if this is any better than what suggested from the GitHub syntax, but because of the similarity to the doc intra-link syntax. I do feel like this might be interesting because it should go well with backwards compatibility. However, this may look worse than the GitHub syntax solution when not going through cargo doc.\nI thought I'd already left this comment, but I'd really like to avoid adding more markdown syntax where possible. This feature works fine without special syntax, I don't think we need it. edit: copying over the reasoning from the :\nAlthough, I might not agree fully, I can fully appreciate and understand the reasoning you left in the corresponding PR. Still, I do think it might be worth to standardize some kind of syntax for some kind of warning block. This does not actually require customized markdown. Alternatively, I can suggest adding the GitHub flavor with emojis to the . This may help in standardizing short form warnings and notes within documentation. We could for example propose to standardize: which should look like the following I do value people's opinion on this. Is this a possible compromise?\nThat's a valid alternative, I think the yellow color is pretty important though, and it's useful for themes to be able to make styling decisions about warnings so I do think that having some way of marking it is important. 
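For reference, the CSS-class approach the thread converges on looks like this in a doc comment — `warning` is the class name that appears in the stylesheet diff above (`.docblock > .warning`); the `do_thing` function is a placeholder. Blank lines around the tags keep Markdown working inside the block.

```rust
/// Does a thing.
///
/// <div class="warning">
///
/// A blank line after the opening tag keeps Markdown (like `code spans`
/// and [links]) rendering inside the block.
///
/// </div>
///
/// [links]: https://doc.rust-lang.org/
pub fn do_thing() -> u32 {
    42
}

fn main() {
    println!("{}", do_thing());
}
```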
I just don't think the way of marking it should be some magic invocation as opposed to CSS classes. (And I think that the bar for a feature that is adding new semantics to the Rustdoc markdown parser is much, much higher than that of one stabilizing a CSS class)\n100% agree. I will look at the Rust API guidelines issue and possibly an RFC to rust to track the adoption in GitHub and possibly eventually merge. Thank you :smile:\nBeyond all this discussion of third-party tools, I'd really like to make sure that whatever new syntax and presentation we add to rustdoc can also be to [mdBook]. That way, when the standard library starts using callouts, we can make sure the same style also gets used in The Book. Mostly, The Book seems to just use blockquotes, often with headings like [this one about ownership]. mdBook doesn't do anything special with it: it's [just a block quote with a header].", "commid": "rust_issue_79710", "tokennum": 501}], "negative_passages": []} {"query_id": "q-en-rust-bb263248d8d44e217808fde8cf6e115f5d2c6ab6d12a74a5000a5c5e9c47736f", "query": "} /* For the last child of a div, the margin will be taken care of by the margin-top of the next item. */ p:last-child { p:last-child, .docblock > .warning:last-child { margin: 0; }", "positive_passages": [{"docid": "doc-en-rust-c904caf33bd46cec9a84e48d777b4f48ec308eec9cfa53ff08ad48d46f41d2fc", "text": "[mdBook]: [this one about ownership]: [just a block quote with a header]:\nThis is a good point. Once we have something merged here, I'll send a PR there to have an equivalent.\nCC\nI'd be happy to add some CSS rules to mdbook if that is what you are proposing.\nYeah, it is.\nFYI, using a is not semantic and Microsoft GitHub is setting a bad precedence. \u2014 CommonMark Spec v0.30, denotes a in Markdown and is rendered as such. \u2014 W3C HTML spec, Unless you are quoting a source, this is not the correct element to use. 
While I disagree with even using altogether, at least some tools out there are at least translating the blockquote in admonition context into a (div has no semantic meaning which is unfortunately the best HTML option for admonitions). It would be ill-advised to follow Microsoft GitHub\u2019s Markdown syntax fork—especially with their disregard for semantics. Similarly using a block would also be breaking semantics as an admonition is not preformatted text. CommonMark does have that could be used if the solution was preferred. There\u2019s also other reasonable options noted here that don\u2019t break semantics.\nI agree that block quotes for callouts are bad, and would like to work with mdBook to roll out a common syntax (probably ) that will replace the ad-hoc, unsemantic markup that they use right now.\nUsing is what we agreed upon in The debate now is on the look itself.\nUnfortunately isn't great because markdown has a \"boundary\" when HTML tags are used. I wanted to add a warning indicating that users probably wanted another trait, but intra-doc links don't work (which is the correct behavior per markdown spec).\nWorks perfectly fine though (well, minus the UI bug I just discovered haha): ! I used this code: I'll send a PR to improve documentation.\nAh. Well I suppose the issue wasn't with intra-doc links, but with inline code blocks. What I tried was that but with bar`, and it failed.\nWorks too: !\nI opened which adds extra explanation on how to use markdown inside html tags. That should solve your issue.\nThat note is exactly what I was just looking in to. I had it inline, given my familiarity with HTML. TIL that matters.", "commid": "rust_issue_79710", "tokennum": 509}], "negative_passages": []} {"query_id": "q-en-rust-bb263248d8d44e217808fde8cf6e115f5d2c6ab6d12a74a5000a5c5e9c47736f", "query": "} /* For the last child of a div, the margin will be taken care of by the margin-top of the next item. 
*/ p:last-child { p:last-child, .docblock > .warning:last-child { margin: 0; }", "positive_passages": [{"docid": "doc-en-rust-1196006ec2219d0730aa48521bbdc7cae854ede34ccfdf1d5ca56b4ee83cd30c", "text": "It doesn't hurt to have it and if it can helps users, well, why not doing it. :)", "commid": "rust_issue_79710", "tokennum": 24}], "negative_passages": []} {"query_id": "q-en-rust-bb51d2d27094d6e819b787a119373f19343f1b24f4323cdd6800fc620efc492b", "query": "find_best_match_for_name(input.iter(), \"aaaa\", Some(4)), Some(Symbol::intern(\"AAAA\")) ); let input = vec![Symbol::intern(\"a_longer_variable_name\")]; assert_eq!( find_best_match_for_name(input.iter(), \"a_variable_longer_name\", None), Some(Symbol::intern(\"a_longer_variable_name\")) ); }) }", "positive_passages": [{"docid": "doc-en-rust-f0a7e9ef309afa13524b79e1e4eb6b2936bb23e506865eeeb3c4450a51ecd325", "text": "On identifiers that have more than one word, it is relatively common to write them in the wrong order ( \u2192 ). These are normally not found by Levenshtein distance checks, but we could do a basic \"split on ``, sort and join before comparison\" so that we could suggest the right identifier. $DIR/non_ascii_ident.rs:3:13 | LL | let _ = \u8bfb\u6587; | ^^^^ not found in this scope error: aborting due to 1 previous error For more information about this error, try `rustc --explain E0425`. ", "positive_passages": [{"docid": "doc-en-rust-6c4d4f3b496f5ae06ffc89ae4053ef8049e709be468f2006bc9bfacc001716c8", "text": "In I noticed this surprising spelling suggestion: To me and don't seem like they would be similar enough to meet the threshold for showing such a suggestion. Can we calibrate this better for short idents? For comparison, even doesn't assume you mean . rustc 1.45.0-nightly ( 2020-05-23) Mentioning who worked on suggestions most recently in .\nI debugged it a bit. is selected as the candidate because the edit distance is just 2 to . The code to fix is likely in or around . 
I tried some quick tricks but got ICEs and failing tests and gave up. One thing I didn't try that could perhaps work is to consider edit distance between characters of different alphabets/logogram sets as infinite, rather than 1, which is the case right now.\nThank you for investigating! Independent of what we do with different logogram sets (your suggestion sounds plausible), an edit distance of 2 for a string of length 2 should not meet the threshold for showing a suggestion, even within a single logogram set.\nI think edit distance of 1 for a string of length 1 is reasonable (a lot of existing UI tests relies on it), but maybe an edit distance of 2 for a string of length 2 is not reasonable indeed. As you point out, with regular chars, the edit distance is still 2 but is not suggested. So maybe there is a simpler fix to be made for than looking at what alphabets/logogram sets characters belong to.", "commid": "rust_issue_72553", "tokennum": 308}], "negative_passages": []} {"query_id": "q-en-rust-bf4da314a5fc644d1effe1185ac8124997320f02c5e608badd7018d25762b3f5", "query": "plugin_registrar, plugins, pointer, pointer_trait, pointer_trait_fmt, poll, position, post_dash_lto: \"post-lto\",", "positive_passages": [{"docid": "doc-en-rust-bebf3ac373245cb50b8b4c0e41523bb66c080f59a138926bab54cb74481b9855", "text": "I tried this code: I expected to see this happen: The compilation succeed or guide me to (1) cast the to with . (2) borrow the : . But in this case the addresses printed are the same (what?). Instead, this happened*: Cryptic error message: There is : . It took me minutes before I found the workaround. But it still surprised me. : $DIR/issue-51559.rs:4:1 | LL | pub const FOO: usize = unsafe { BAR as usize }; | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered a pointer, but expected initialized plain (non-pointer) bytes | = note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. 
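To make the edit-distance discussion above concrete, here is a hedged sketch — not rustc's actual `find_best_match_for_name` — of a plain Levenshtein distance plus a length-relative cutoff (the `/ 3 + 1` limit is a hypothetical choice): under such a cutoff, a distance of 2 on a 2-character ident like `读文` would not qualify as a suggestion.

```rust
// Classic Wagner-Fischer dynamic programming over chars (not bytes),
// so multi-byte identifiers like `读文` are measured per character.
fn levenshtein(a: &str, b: &str) -> usize {
    let a: Vec<char> = a.chars().collect();
    let b: Vec<char> = b.chars().collect();
    let mut prev: Vec<usize> = (0..=b.len()).collect();
    for (i, ca) in a.iter().enumerate() {
        let mut cur = vec![i + 1];
        for (j, cb) in b.iter().enumerate() {
            let cost = if ca == cb { 0 } else { 1 };
            cur.push((prev[j] + cost).min(prev[j + 1] + 1).min(cur[j] + 1));
        }
        prev = cur;
    }
    prev[b.len()]
}

// Only suggest when the distance is small relative to the ident's length.
fn similar_enough(typed: &str, candidate: &str) -> bool {
    let limit = typed.chars().count() / 3 + 1; // hypothetical cutoff
    levenshtein(typed, candidate) <= limit
}

fn main() {
    println!("{}", similar_enough("读文", "foo")); // short + distant: no suggestion
    println!("{}", similar_enough("aaab", "aaaa")); // close misspelling: suggest
}
```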
Please open an issue on the rust compiler repository if you believe it should not be considered undefined behavior error: aborting due to previous error For more information about this error, try `rustc --explain E0080`. ", "positive_passages": [{"docid": "doc-en-rust-3b97c0446b049871b66af0aacf8d6f5b25ea52a83d1a0d1f1d7d3cfaa0672f04", "text": "Some people were of the opinion that this shouldn't work, but it currently does. : cc\ncc\nit's just super scary. EDIT: it should not and does not work. If we allowed this, various surprising things can happen, especially around implicit promotion. If you want to store pointers and integers, you can use a raw pointer.\nFor my own education, can you elaborate on why function pointers are ok? I'd thought that the runtime addresses of things -- including functions -- shouldn't be accessible at const-time since we don't know where they'll live at runtime. Could I make a const array that usize long?\nWhen compiling to LLVM, the real function handle or pointer address is obtained. If you try doing anything weird with pointers at compile time, you will quickly notice that you are not at runtime. For example, you cannot divide such a usize by anything. You can add or subtract integers, because that's just pointer offsetting, but you can't inspect the address in any way. This also means that no, you cannot use this usize for array lengths or enum discriminants. This is due to the fact that miri pointers are not just addresses, but abstract pointers that are in a separate layer from the bytes of normal memory like the one of integers.\nThanks for the explanation. I guess that means that such a couldn't be passed to a const generic either?\nThe thing is const generics don't know where the usize comes from, so they could try to do something with it that is not allowed if it is just a pointer address. Until monomorphization we can't know if there is an issue. 
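A small sketch of the distinction discussed above (the `ANSWER_FN` name is made up): storing the function *pointer* itself in a const is fine, while the form rejected in the issue — `pub const FOO: usize = unsafe { BAR as usize }` — casts it to `usize` at compile time, before any concrete address exists.

```rust
// Storing the function pointer value in a const is allowed.
const ANSWER_FN: fn() -> i32 = answer;

fn answer() -> i32 {
    42
}

fn main() {
    // At run time the cast to usize is fine: a concrete address exists now.
    let addr = ANSWER_FN as usize;
    println!("{} {}", ANSWER_FN(), addr);
}
```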
Note that converting a pointer to an usize and back just to be able to put it in const, statics, or use it with const generics, is a real pain. We would be better off in this case with an type, such that I can write and avoid the conversion to usize. For const generics, C++ accepts function pointers at the type level and that works just fine with clang, so I expect to work fine as well.\nHow would work? is disallowed but and work? that seems very weird and special cased (especially for fn types that involve higher rank lifetimes, as in my example) or is ~= ?", "commid": "rust_issue_51559", "tokennum": 511}], "negative_passages": []} {"query_id": "q-en-rust-bfb214c2871b33d5ddc9610dcb4451eeea1a3a0443ae3daa416429cd15747ee5", "query": " error[E0080]: it is undefined behavior to use this value --> $DIR/issue-51559.rs:4:1 | LL | pub const FOO: usize = unsafe { BAR as usize }; | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered a pointer, but expected initialized plain (non-pointer) bytes | = note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rust compiler repository if you believe it should not be considered undefined behavior error: aborting due to previous error For more information about this error, try `rustc --explain E0080`. ", "positive_passages": [{"docid": "doc-en-rust-186661af979691142d781fb362764a5d3fbb269992b5769a294bf99eec2f8644", "text": "then is a double indirection and to use it you need to worry about managing the memory it points to (which contains the )\nI haven't thought this through beyond having used the atomic pointers in the C++ standard library: One way to implement this could be where we provide blanket impls for , , , , , , , ... or use some compiler magic to achieve a similar effect...\nThis is a bit offtopic, but regarding , mentioned at least once the possibility of introducing some new type, to make sugar for . 
I guess it can't entirely be \"sugar\", because of existing impls, but at least both could work the same.\nHehe, this is cute. :) And I agree it is working as intended. (Nice that the translation to LLVM actually gets this right, should there be a testcase to make sure it stays that way? :D )\nSo... other than testing this, all we need is a PR that adds -casts in constants under a feature gate (and makes those casts unsafe in constants and )\nI've opened to brainstorm how to either add an type or retrofit to support items.\nThese kind of operations are now unsafe in contexts (impl PR: )\nSo do we have a test that you cannot use such usize as array length? :D\nhmm... apparently not... Test-instructions: (via the feature gate) to turn various kinds of references and function pointers into these expressions in array length, repeat lengths and enum discriminant initializers (e.g. ) Note: Do not use items, everything needs to be inline in the use site to be properly tested\nthis is no longer allowed by any means:\nYes that is intended. 
This issue is for adding a regression test testing what happens when you use a pointer casted to a usize in an array length.", "commid": "rust_issue_51559", "tokennum": 396}], "negative_passages": []} {"query_id": "q-en-rust-bfc1eabfac7834a3e820eced388b8602562a46673f8aee8c6701f0f9a76589c5", "query": "ast_passes_bound_in_context = bounds on `type`s in {$ctx} have no effect ast_passes_const_and_async = functions cannot be both `const` and `async` .const = `const` because of this .async = `async` because of this .label = {\"\"} ast_passes_const_and_c_variadic = functions cannot be both `const` and C-variadic .const = `const` because of this .variadic = C-variadic because of this ast_passes_const_and_coroutine = functions cannot be both `const` and `{$coroutine_kind}` .const = `const` because of this .coroutine = `{$coroutine_kind}` because of this .label = {\"\"} ast_passes_const_bound_trait_object = const trait bounds are not allowed in trait object types ast_passes_const_without_body =", "positive_passages": [{"docid": "doc-en-rust-7de2dab580f301c8ce6b2809875c28dcb5912dccad192bd07b5e03d66bc6bf9a", "text": "() Reproduces on the playground using . label +D-incorrect +F-gen_blocks +requires-nightly", "commid": "rust_issue_130232", "tokennum": 24}], "negative_passages": []} {"query_id": "q-en-rust-bfc7c214f561927c82f2346b6844ea64aba7dc0643418ef769b7b241e862d07a", "query": " error[E0425]: cannot find function `foo` in module `to_reuse` --> $DIR/ice-issue-124342.rs:7:21 | LL | reuse to_reuse::foo { foo } | ^^^ not found in `to_reuse` error[E0425]: cannot find value `foo` in this scope --> $DIR/ice-issue-124342.rs:7:27 | LL | reuse to_reuse::foo { foo } | ^^^ | help: you might have meant to refer to the associated function | LL | reuse to_reuse::foo { Self::foo } | ++++++ error: aborting due to 2 previous errors For more information about this error, try `rustc --explain E0425`. 
", "positive_passages": [{"docid": "doc-en-rust-146312829290b27802b9e72c4c1d00d7d6e734a8e4dd04d06994a15e12da085f", "text": " $DIR/expect_tool_lint_rfc_2383.rs:38:18 | LL | #[expect(invalid_nan_comparisons)] | ^^^^^^^^^^^^^^^^^^^^^^^ | = note: duplicate diagnostic emitted due to `-Z deduplicate-diagnostics=no` warning: 3 warnings emitted ", "positive_passages": [{"docid": "doc-en-rust-9da6e95c335f48d6b06b8a6cd003c89c70bc53bedd6bf2c5c7ac9bd6bf6d5f89", "text": "I tried this code: I expected to catch lint and suppress it, as it should. Instead, compiler issues a diagnostic both about unused value and unfulfilled lint expectation: It does work as expected though if is applied to function. I used the latest stable (Rust 1.81), but it is reproducible both on latest beta (2024-09-04 ) and latest nightly (2024-09-05 ). labels: +F-lintreasons, +A-diagnostics $DIR/expect_tool_lint_rfc_2383.rs:38:18 | LL | #[expect(invalid_nan_comparisons)] | ^^^^^^^^^^^^^^^^^^^^^^^ | = note: duplicate diagnostic emitted due to `-Z deduplicate-diagnostics=no` warning: 3 warnings emitted ", "positive_passages": [{"docid": "doc-en-rust-79ae52ec28826e7fd5d0571036ba651615fc4170373c979f134af2d5a515bd5f", "text": "are present for more than one HIR node in the and if one node is the (direct or indirect) parent of the other node, then add the level only_ for parent node. This will take care of any lints on the child (since we always walk up the HIR hierachy as mentioned earlier) as well as the parent and it will not cause any warnings about missed expectations. I am new to this area so if someone can review this idea or suggest alternatives that will be great. since you seemed to have worked in this area recently, can you please weigh in?\nYour investigation is accurate, and the fix is simple: visit the statement node when building lint levels. Do you mind making such a pr ? 
I don't think we need any special adjustment for cases, duplicated attributes are already taken into account, but a test won't hurt.\nThanks . Yes, I'll put together a PR. My only concern is if all we do is add the level at , although it will work perfectly for , etc., in the case of we will end up with an unfulfilled expectation on the node which will appear to the user as a warning. But anyway let me have a go and see what happens.\nOpened draft PR that adds the lint on . It does suffer from the extra warning problem.", "commid": "rust_issue_130142", "tokennum": 273}], "negative_passages": []} {"query_id": "q-en-rust-c1f7ed062a0fa2c03a2fa2a890f6eedf4e86f4f4e4276b481992bdc0daad4c75", "query": "let projections_b = &place_b.projections; let same_initial_projections = iter::zip(projections_a, projections_b).all(|(proj_a, proj_b)| proj_a == proj_b); iter::zip(projections_a, projections_b).all(|(proj_a, proj_b)| proj_a.kind == proj_b.kind); if same_initial_projections { use std::cmp::Ordering; // First min(n, m) projections are the same // Select Ancestor/Descendant if projections_b.len() >= projections_a.len() { PlaceAncestryRelation::Ancestor } else { PlaceAncestryRelation::Descendant match projections_b.len().cmp(&projections_a.len()) { Ordering::Greater => PlaceAncestryRelation::Ancestor, Ordering::Equal => PlaceAncestryRelation::SamePlace, Ordering::Less => PlaceAncestryRelation::Descendant, } } else { PlaceAncestryRelation::Divergent", "positive_passages": [{"docid": "doc-en-rust-2ca9921ab5caa010237f232ab8f24b1d470bb40f055f188227f07c251c825b2d", "text": " $DIR/promote-no-mut.rs:3:50 | LL | static mut TEST1: Option<&mut [i32]> = Some(&mut [1, 2, 3]); | ----------^^^^^^^^^- | | | | | | | temporary value is freed at the end of this statement | | creates a temporary which is freed while still in use | using this value as a static requires that borrow lasts for `'static` error[E0716]: temporary value dropped while borrowed --> $DIR/promote-no-mut.rs:6:18 | LL | let x 
= &mut [1,2,3]; | ^^^^^^^ creates a temporary which is freed while still in use LL | x | - using this value as a static requires that borrow lasts for `'static` LL | }; | - temporary value is freed at the end of this statement error: aborting due to 2 previous errors For more information about this error, try `rustc --explain E0716`. ", "positive_passages": [{"docid": "doc-en-rust-a8a4ce7a1f7159520cb23f17ce512eac52ede2baae114ca9b5c37a5c766ec511", "text": "Consider the following code: This should just work. But instead it throws an error: The reason for this is that gets lifetime extended via promotion, so this reference now points to a separate mutable static -- but mutable statics cannot be mutated during CTFE. I see no way to fix this, other than stopping to promote mutable references -- which we should IMO do anyway, they are a very strange and unprincipled special case. Just imagine doing this in a loop, suddenly all these different and mutable allocations share the same address... Cc\nUgh, that's an ancient rule (believe it or not, it's a workaround for not being able to write and have the length be inferred!) and it was never meant to get this far, I guess I didn't think it through properly when adding the \"outermost scope\" rule, which is the only position where allowing it makes sense. 
In the current implementation, it should've never been promoted, only allowed, my bad!", "commid": "rust_issue_75556", "tokennum": 217}], "negative_passages": []} {"query_id": "q-en-rust-c280e0dfd2b240d84bb4047bfbe088cd24424c54fcc3657a4994cf725810206e", "query": "( &ty::Dynamic(ref data_a, _, src_dyn_kind), &ty::Dynamic(ref data_b, _, target_dyn_kind), ) => { assert_eq!(src_dyn_kind, target_dyn_kind); ) if src_dyn_kind == target_dyn_kind => { let old_info = old_info.expect(\"unsized_info: missing old info for trait upcasting coercion\"); if data_a.principal_def_id() == data_b.principal_def_id() {", "positive_passages": [{"docid": "doc-en-rust-4018551dac267ef28378021fb7bbc4c5e35bc18503b1e8ef23c1d818eb068b91", "text": " $DIR/issue-104086-suggest-let.rs:2:5 | LL | x = x = x; | ^ | help: you might have meant to introduce a new binding | LL | let x = x = x; | +++ error[E0425]: cannot find value `x` in this scope --> $DIR/issue-104086-suggest-let.rs:2:9 | LL | x = x = x; | ^ not found in this scope error[E0425]: cannot find value `x` in this scope --> $DIR/issue-104086-suggest-let.rs:2:13 | LL | x = x = x; | ^ not found in this scope error[E0425]: cannot find value `x` in this scope --> $DIR/issue-104086-suggest-let.rs:7:5 | LL | x = y = y = y; | ^ | help: you might have meant to introduce a new binding | LL | let x = y = y = y; | +++ error[E0425]: cannot find value `y` in this scope --> $DIR/issue-104086-suggest-let.rs:7:9 | LL | x = y = y = y; | ^ not found in this scope error[E0425]: cannot find value `y` in this scope --> $DIR/issue-104086-suggest-let.rs:7:13 | LL | x = y = y = y; | ^ not found in this scope error[E0425]: cannot find value `y` in this scope --> $DIR/issue-104086-suggest-let.rs:7:17 | LL | x = y = y = y; | ^ not found in this scope error[E0425]: cannot find value `x` in this scope --> $DIR/issue-104086-suggest-let.rs:13:5 | LL | x = y = y; | ^ | help: you might have meant to introduce a new binding | LL | let x = y = y; | +++ error[E0425]: cannot find 
value `y` in this scope --> $DIR/issue-104086-suggest-let.rs:13:9 | LL | x = y = y; | ^ not found in this scope error[E0425]: cannot find value `y` in this scope --> $DIR/issue-104086-suggest-let.rs:13:13 | LL | x = y = y; | ^ not found in this scope error[E0425]: cannot find value `x` in this scope --> $DIR/issue-104086-suggest-let.rs:18:5 | LL | x = x = y; | ^ | help: you might have meant to introduce a new binding | LL | let x = x = y; | +++ error[E0425]: cannot find value `x` in this scope --> $DIR/issue-104086-suggest-let.rs:18:9 | LL | x = x = y; | ^ not found in this scope error[E0425]: cannot find value `y` in this scope --> $DIR/issue-104086-suggest-let.rs:18:13 | LL | x = x = y; | ^ not found in this scope error[E0425]: cannot find value `x` in this scope --> $DIR/issue-104086-suggest-let.rs:23:5 | LL | x = x; // will suggest add `let` | ^ | help: you might have meant to introduce a new binding | LL | let x = x; // will suggest add `let` | +++ error[E0425]: cannot find value `x` in this scope --> $DIR/issue-104086-suggest-let.rs:23:9 | LL | x = x; // will suggest add `let` | ^ not found in this scope error[E0425]: cannot find value `x` in this scope --> $DIR/issue-104086-suggest-let.rs:27:5 | LL | x = y // will suggest add `let` | ^ | help: you might have meant to introduce a new binding | LL | let x = y // will suggest add `let` | +++ error[E0425]: cannot find value `y` in this scope --> $DIR/issue-104086-suggest-let.rs:27:9 | LL | x = y // will suggest add `let` | ^ not found in this scope error: aborting due to 17 previous errors For more information about this error, try `rustc --explain E0425`. 
", "positive_passages": [{"docid": "doc-en-rust-4a2a3ca0578e56aa495d8a238a4a39754613458b174a8b87ffbebf00f635a35a", "text": " $DIR/unnecessary-extern-crate.rs:14:1 | LL | extern crate alloc; | ^^^^^^^^^^^^^^^^^^^ help: remove it | note: lint level defined here --> $DIR/unnecessary-extern-crate.rs:11:9 | LL | #![deny(unnecessary_extern_crate)] | ^^^^^^^^^^^^^^^^^^^^^^^^ error: `extern crate` is unnecessary in the new edition --> $DIR/unnecessary-extern-crate.rs:17:1 | LL | extern crate alloc as x; | ^^^^^^^^^^^^^^^^^^^^^^^^ help: use `use`: `use alloc as x` error: `extern crate` is unnecessary in the new edition --> $DIR/unnecessary-extern-crate.rs:23:1 | LL | pub extern crate test as y; | ^^^^^^^^^^^^^^^^^^^^^^^^^^^ help: use `pub use`: `pub use test as y` error: `extern crate` is unnecessary in the new edition --> $DIR/unnecessary-extern-crate.rs:26:1 | LL | pub extern crate libc; | ^^^^^^^^^^^^^^^^^^^^^^ help: use `pub use`: `pub use libc` error: `extern crate` is unnecessary in the new edition --> $DIR/unnecessary-extern-crate.rs:32:5 | LL | extern crate alloc; | ^^^^^^^^^^^^^^^^^^^ help: use `use`: `use alloc` error: `extern crate` is unnecessary in the new edition --> $DIR/unnecessary-extern-crate.rs:35:5 | LL | extern crate alloc as x; | ^^^^^^^^^^^^^^^^^^^^^^^^ help: use `use`: `use alloc as x` error: `extern crate` is unnecessary in the new edition --> $DIR/unnecessary-extern-crate.rs:38:5 | LL | pub extern crate test; | ^^^^^^^^^^^^^^^^^^^^^^ help: use `pub use`: `pub use test` error: `extern crate` is unnecessary in the new edition --> $DIR/unnecessary-extern-crate.rs:41:5 | LL | pub extern crate test as y; | ^^^^^^^^^^^^^^^^^^^^^^^^^^^ help: use `pub use`: `pub use test as y` error: `extern crate` is unnecessary in the new edition --> $DIR/unnecessary-extern-crate.rs:45:9 | LL | extern crate alloc; | ^^^^^^^^^^^^^^^^^^^ help: use `use`: `use alloc` error: `extern crate` is unnecessary in the new edition --> $DIR/unnecessary-extern-crate.rs:48:9 | LL | 
extern crate alloc as x; | ^^^^^^^^^^^^^^^^^^^^^^^^ help: use `use`: `use alloc as x` error: aborting due to 10 previous errors ", "positive_passages": [{"docid": "doc-en-rust-4797aadb00e43b1e79a06e5417dc4622eb7a7d0ca28b53c427560dfa7aa43aa4", "text": "We need to add an epoch lint for It will lint , suggesting removal in root mods, and replacement with in non-root mods. This is blocked on the ability to elide extern crate in the first place.\nIt is also blocked on a feature to force linkage of a crate, see this comment and the discussion afterwards:\ncc\nI think the lint should be by default off in cases with , such as: This is because of the ergonomics problems with macros and the new macro import system, expressed here: It's bewildering for the migrating users if a lint tells them not to use the syntax, but then finding out that many of the macros out there have downright bad ergonomics with the import syntax. (For clarification: I think this is an undesirable state of affairs, and we should be linting against all , but if we do that, the ergonomics problem with the macro import should be fixed somehow.)\nI don't think we want at all, really. If we want to import macros, we should have another means of doing it. Otherwise we have an inconsistency. So yes, I think I agree with your last paragraph above all,\nI've discussed this a bit in the tracking issue for the macros import system:\nThanks, you expressed the exact concern I had in a concise way.\nAnother thing this needs to lint is unlisted crates that are included in that clash with existing modules. I'm not quite sure on what the story needs to be here cc\nDoes it? I've thought one of the things the path clarity RFC was about was to have separate namespaces for top level modules and extern crates.\nThe current plan has changed a bit so I'm not sure if that is still true. 
(If stuff does change it will probably be RFCd again, there's a bit of experimentation going on)\nSigh I hope we won't discuss all of this crap all over again... I don't enjoy that but it is not me who wants to change things.\nI've hidden your last comment, which I found hostile and unconstructive.\nIDK what has been hearing, but for me, this is the latest state of consensus on that issue, and it seems to have separate namespaces: is that the case? You seem unresponsive on IRC that's why I'm using github.\nThat's mostly the latest state of consensus, yeah. Some tweaks have been made iirc. Everything has been implemented.", "commid": "rust_issue_48719", "tokennum": 543}], "negative_passages": []} {"query_id": "q-en-rust-c302fb32f5fadaf73c3e5f79cae8625bf86486a9d82607233fc0362f104ee188", "query": "#[stable(feature = \"move_cell\", since = \"1.17.0\")] pub fn swap(&self, other: &Self) { if ptr::eq(self, other) { // Swapping wouldn't change anything. return; } if !is_nonoverlapping(self, other, 1) { // See for why we need to stop here. panic!(\"`Cell::swap` on overlapping non-identical `Cell`s\"); } // SAFETY: This can be risky if called from separate threads, but `Cell` // is `!Sync` so this won't happen. This also won't invalidate any // pointers since `Cell` makes sure nothing else will be pointing into // either of these `Cell`s. // either of these `Cell`s. We also excluded shenanigans like partially overlapping `Cell`s, // so `swap` will just properly copy two full values of type `T` back and forth. unsafe { ptr::swap(self.value.get(), other.value.get()); mem::swap(&mut *self.value.get(), &mut *other.value.get()); } }", "positive_passages": [{"docid": "doc-en-rust-2ec5dee6453ac67ff14b24b63e8a1e5a93825fe2dc24753c7e72b30695c7f453", "text": "In , it was uncovered that is making some rather strong assumptions: two s with different address but the same type must not overlap. 
Not only is this a scarily non-local safety invariant, it is also fundamentally incompatible with some APIs that ought to be correct, as demonstrated by this snippet (thanks to and for help with working out the example): to see the issue: will duplicate parts of the memory range when there is overlap, which leads to double-drop (other parts of the memory range are just lost, leading to memory leaks, but that is not the main issue here). This is not itself a soundness issue as it requires unsafe code to trigger UB. But this likely reflects an unintended consequence of . This got stabilized in as part of That commit references and an RFC, but is not mentioned in either of them. was in Accept that is unsound and document \"non-overlap\" as part of the safety invariant. This seems very fragile. Make not misbehave on overlap, either by panicking or by only swapping the non-overlapping parts. Cc\nwas in , which was authored by a \"Charlie Fan\". This is probably\nCc\nThis makes the method sound, but it seems like there could be other seems-like-they-should-be-ok cell projections that break under this behavior.\nI don't think so. This makes the function behave \"strangely\", but since no data is duplicated nor discarded, I think it is sound.\nI share this hunch, although the code, out of nowhere, may look very contrived. Consider: So it seems like -king (at least for non- types) is the only option that would allow us to keep the structural property of s, which, imho, is a very intuitive programmer mindset (and thus, a mistake that many would do should they not know about this caveat), and so a quite important thing to preserve .\nThat is an example of the \"strange\" behavior not being the right thing for this code, but it is not an unsoundness of per se. Your unsafe code is simply making wrong assumptions about the functional behavior of . 
Current favors the first operand in case of overlap (according to documentation); unsafe code that incorrectly assumes it would favor the second operand would be equally wrong. I don't see a bug in either way. IMO it seems rather strange to only panic for non- types.", "commid": "rust_issue_80778", "tokennum": 510}], "negative_passages": []} {"query_id": "q-en-rust-c302fb32f5fadaf73c3e5f79cae8625bf86486a9d82607233fc0362f104ee188", "query": "#[stable(feature = \"move_cell\", since = \"1.17.0\")] pub fn swap(&self, other: &Self) { if ptr::eq(self, other) { // Swapping wouldn't change anything. return; } if !is_nonoverlapping(self, other, 1) { // See for why we need to stop here. panic!(\"`Cell::swap` on overlapping non-identical `Cell`s\"); } // SAFETY: This can be risky if called from separate threads, but `Cell` // is `!Sync` so this won't happen. This also won't invalidate any // pointers since `Cell` makes sure nothing else will be pointing into // either of these `Cell`s. // either of these `Cell`s. We also excluded shenanigans like partially overlapping `Cell`s, // so `swap` will just properly copy two full values of type `T` back and forth. unsafe { ptr::swap(self.value.get(), other.value.get()); mem::swap(&mut *self.value.get(), &mut *other.value.get()); } }", "positive_passages": [{"docid": "doc-en-rust-a4b7c0175f8911b861196ef37fe4f352c3fad5d4d87fe55100f0c4c481d1f11b", "text": "Btw, also exists. does not have array/slice projection functions though and they would be wrong due to the counts, so overlapping probably truly are not a thing.\nAt the very least, an argument that aren't allowed to overlap is easier to defend than that aren't allowed to overlap. Plus also, doesn't have guaranteed layout in the first place, so any example using tuples is (currently) implementation-detail-tied even before hitting . You could still construct an example using only types, of course; just replace the tuple with an actual type. You could even make it . 
As I understand it, there's three main cases here. ping two nonoverlapping is fine. ping two partially overlapping without any bound on has a sound, if surprising, definition, where the nonoverlapping part of the arrays are swapped. ping two partially overlapping unknown is unsound in any definition, unless both has no safety invariants and the overlapping portion is . The third case is obviously the most restricting one, and is defined on . To expound on first of the two subissues: Consider a type with the extra safety invariant that the second must be double the first . could even be . It would be valid to create two from , as you'd have and . There is no possible definition of (other than a no-op) which preserves the safety invariant. And of course the impl doesn't even know about the safety invariant. I could probably create an even worse example using underaligned types and relying on a specific endianness, perhaps even accidentally breaking validity invariants (think around partially overlapping ). The only two resolutions I'm confident in are making always panic for overlapping cells (because there could be safety invariants being broken) or forbidding overlapping s to -compatible types (optionally: that would be broken by ). And I don't like the subtle implications of the latter (e.g. being unsound for unbound ).\nI don't think would be immediately unsound, you would still need a way to create overlapping s.\nIn particular, they certainly can only overlap in restricted ways -- if they overlap such that one's sits on the other's , I can already cause UB without just by storing into that and then reading it as a . Oh I see... 
to only swap the non-overlapping parts we have to make an argument that these parts have the same type (safety+validity invariant).", "commid": "rust_issue_80778", "tokennum": 514}], "negative_passages": []} {"query_id": "q-en-rust-c302fb32f5fadaf73c3e5f79cae8625bf86486a9d82607233fc0362f104ee188", "query": "#[stable(feature = \"move_cell\", since = \"1.17.0\")] pub fn swap(&self, other: &Self) { if ptr::eq(self, other) { // Swapping wouldn't change anything. return; } if !is_nonoverlapping(self, other, 1) { // See for why we need to stop here. panic!(\"`Cell::swap` on overlapping non-identical `Cell`s\"); } // SAFETY: This can be risky if called from separate threads, but `Cell` // is `!Sync` so this won't happen. This also won't invalidate any // pointers since `Cell` makes sure nothing else will be pointing into // either of these `Cell`s. // either of these `Cell`s. We also excluded shenanigans like partially overlapping `Cell`s, // so `swap` will just properly copy two full values of type `T` back and forth. unsafe { ptr::swap(self.value.get(), other.value.get()); mem::swap(&mut *self.value.get(), &mut *other.value.get()); } }", "positive_passages": [{"docid": "doc-en-rust-ca1fb55f9ff9f81dce50946c9294ead6c8b65c0c9f1846bd21a3af78d84c40f8", "text": "Which... seems utterly bogus now that I think about it. Worse, the invariant could correlate the overlapping with the non-overlapping part, as in your example. (Removed a whole bunch of nonsense where I try to find some special cases that work.) Yeah we should make it panic.^^ It is unsound to create an to these though, even without I can already violate the safety invariant here by setting the second reference to . Ergo this is not a problem for , we can already assume such references do not exist. I am confused, is there a negation missing somewhere in this sentence?\nFair. 
I'm fairly confident that it'd be possible to come up with some safety invariant that a partial swap would break, though, with more data-dependent qualities. The parenthetical is an optional relaxation of the potential requirement (but at the cost of a lot of complexity).\nBut shouldn't it be forbidding overlapping s to -incompatible types? Not that I know what either of these things mean, but forbidding incompatible seems to make more sense than forbidding compatible.^^\nYeah, poor verbiage. I meant roughly \"forbidding overlapping s of types that can be used in (i.e. sized types)\", with the optional extension of \"only for those actually broken by \" (however that would be defined).\nthat partially overlapping s through an enum is problematic even without :\nEnums isn't really the same thing. That enum projections are unsound , but as also discussed in that article, struct projections are typically ok, and the array projection discussed here is essentially a struct projection, not an enum projection.\nI would like to bring up my crate which allows projecting through using a macro. example usage: Note: doesn't allow projecting through enums for exactly the reasons that explained. I'm not sure how could interact with to produce bad behavior, because you can only project to fields, which can't be swapped with their parent, but just in case someone else finds something interesting. i.e. This isn't possible, so it's not possible to project through to a field that's the same type as the parent. Alongside the fact that all fields are disjoint means that cell-project should only produce overlapping if they are also .\nI agree with here. 
Sum types and product types are different, and really should commute around product types, i.e., ought to be sound.", "commid": "rust_issue_80778", "tokennum": 517}], "negative_passages": []} {"query_id": "q-en-rust-c302fb32f5fadaf73c3e5f79cae8625bf86486a9d82607233fc0362f104ee188", "query": "#[stable(feature = \"move_cell\", since = \"1.17.0\")] pub fn swap(&self, other: &Self) { if ptr::eq(self, other) { // Swapping wouldn't change anything. return; } if !is_nonoverlapping(self, other, 1) { // See for why we need to stop here. panic!(\"`Cell::swap` on overlapping non-identical `Cell`s\"); } // SAFETY: This can be risky if called from separate threads, but `Cell` // is `!Sync` so this won't happen. This also won't invalidate any // pointers since `Cell` makes sure nothing else will be pointing into // either of these `Cell`s. // either of these `Cell`s. We also excluded shenanigans like partially overlapping `Cell`s, // so `swap` will just properly copy two full values of type `T` back and forth. unsafe { ptr::swap(self.value.get(), other.value.get()); mem::swap(&mut *self.value.get(), &mut *other.value.get()); } }", "positive_passages": [{"docid": "doc-en-rust-b6f6cf0827c572b75d97b4e21eff6feaf65504809c34cb4ad9302d547666ce03", "text": "You can use , which led me to this issue (it'd be good if we could support field projection inside of and ).\nWhile you could use to check that the tuple layout conforms to the C-like layout needed to do such projection, I don't see how your specific use of would lead to two partially overlapping though. Note that the problem is not with projecting to a single field, but to a group/subset containing multiple fields (or like in the array/slice example, to a subarray/subslice with length ).\nOh I see, you're right. 
False alarm.", "commid": "rust_issue_80778", "tokennum": 127}], "negative_passages": []} {"query_id": "q-en-rust-c329264a9cb198dcab1e100beda5bb91281eb5052255108da6e6be97ed9c0ae1", "query": " // issue-54966: ICE returning an unknown type with impl FnMut fn generate_duration() -> Oper {} //~^ ERROR cannot find type `Oper` in this scope fn main() {} ", "positive_passages": [{"docid": "doc-en-rust-27abf7dffdd9fb3420783921a65ebbfb4054467a1c3c0f7240038bbba2872f0e", "text": "I expected just a compiler error . Compiler panicked: Meta\nThis is fixed in the beta version (1.30).", "commid": "rust_issue_54966", "tokennum": 24}], "negative_passages": []} {"query_id": "q-en-rust-c329264a9cb198dcab1e100beda5bb91281eb5052255108da6e6be97ed9c0ae1", "query": " // issue-54966: ICE returning an unknown type with impl FnMut fn generate_duration() -> Oper {} //~^ ERROR cannot find type `Oper` in this scope fn main() {} ", "positive_passages": [{"docid": "doc-en-rust-afa01b1e4a6f7a1fc98bc53710b50fb4a7badf9201d0c207379809b43c80a6cc", "text": "This lets you get a mutable reference to an immutable value under a specific case. Code: From what I can tell, the tuple and at least two depths of Enums are necessary, along with an on the right side. There's probably other variations, but that seems to work well enough. Expected output would be a compile error, instead output is This works on both stable and nightly\nwould fix this as well?\nFixed by . Off: I wonder why we can't ship nll yet. 
It would solve so many issues right now, because of the match ergonomics sigh Match ergonomics produces roughly 10% of all errors/unsoundness/ICEs atm (at least it feels like this)\nYes correctly fixed this issue.", "commid": "rust_issue_52240", "tokennum": 157}], "negative_passages": []} {"query_id": "q-en-rust-c33ef2d3f19dc91a7d0f74be68aafcb1f586aa3ba9e16d8d7ba5b4095e0b8349", "query": " error: `#[target_feature(..)]` can only be applied to `unsafe` functions --> $DIR/issue-68060.rs:8:13 | LL | #[target_feature(enable = \"\")] | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ can only be applied to `unsafe` functions ... LL | |_| (), | ------ not an `unsafe` function error: the feature named `` is not valid for this target --> $DIR/issue-68060.rs:8:30 | LL | #[target_feature(enable = \"\")] | ^^^^^^^^^^^ `` is not valid for this target error[E0737]: `#[track_caller]` requires Rust ABI --> $DIR/issue-68060.rs:11:13 | LL | #[track_caller] | ^^^^^^^^^^^^^^^ error: aborting due to 3 previous errors For more information about this error, try `rustc --explain E0737`. ", "positive_passages": [{"docid": "doc-en-rust-4cabe525e8bd65c1784b54acf748f2e1483eb31241627e1b563551496fc380f1", "text": "This code () Produces ICE on any channel\nThis is the same issue as , attributes on expressions are not converted into HIR properly.\nI checked on latest nightly - isn't produced anymore while this one still throws ICE.\ncould you reopen this issue?\ntriage: P-high. 
Removing nomination.\nReduced somewhat:", "commid": "rust_issue_68060", "tokennum": 65}], "negative_passages": []} {"query_id": "q-en-rust-c344f034772db9ee8713266570beb16d10295a74ba42d6fdfb52790067240204", "query": "pub projection_bounds: Vec>, } impl<'tcx> ExistentialBounds<'tcx> { pub fn new(region_bound: ty::Region, builtin_bounds: BuiltinBounds, projection_bounds: Vec>) -> Self { let mut projection_bounds = projection_bounds; ty::sort_bounds_list(&mut projection_bounds); ExistentialBounds { region_bound: region_bound, builtin_bounds: builtin_bounds, projection_bounds: projection_bounds } } } #[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)] pub struct BuiltinBounds(EnumSet);", "positive_passages": [{"docid": "doc-en-rust-3934a1b92c7e17508c6b88bced70af5d14e1d80fcd45b847830b1281e6361ebc", "text": "It looks like if you go and grab this commit of timely-dataflow from github, you get a nicely reproducible ICE. TravisCI gets it too! In case anything changes in the meantime, the commit you want is Related issues: . The times I've seen this before I haven't be able to reproduce it, but since it seems to happen from a clean pull, I thought I would show you. To repro, just follow the commands here. git clone, cargo build, cargo test. Boom!\nIt seems like I can make the ICE go away by replacing a member variable with one that uses an equivalent trait which doesn't use associated types: Swapping that in removes the ICE, swapping it out brings the ICE back. So, maybe something isn't working great for boxed traits with associated types. The commit that fixes things is\nDoing some testing, and the repro in (not mine) is way simpler. It also explodes on 1.1 stable, with a similar complaint. 
Just boxing a trait with two associated types.\nI run into this problem, too (with stable ).", "commid": "rust_issue_27222", "tokennum": 232}], "negative_passages": []} {"query_id": "q-en-rust-c344f034772db9ee8713266570beb16d10295a74ba42d6fdfb52790067240204", "query": "pub projection_bounds: Vec>, } impl<'tcx> ExistentialBounds<'tcx> { pub fn new(region_bound: ty::Region, builtin_bounds: BuiltinBounds, projection_bounds: Vec>) -> Self { let mut projection_bounds = projection_bounds; ty::sort_bounds_list(&mut projection_bounds); ExistentialBounds { region_bound: region_bound, builtin_bounds: builtin_bounds, projection_bounds: projection_bounds } } } #[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)] pub struct BuiltinBounds(EnumSet);", "positive_passages": [{"docid": "doc-en-rust-59896f2a78933cbb77beb7a499e617bbdcbb69e9c72cd913fbde9be24226cf8c", "text": "Hi, I ran into an interesting ICE while working on a personal project. This seems like it could be related to issues or , but I'm not sure. I set up a . The README includes a description of the bug and the compiler backtrace.\nGetting the same ICE occasionally in (At more specifically, not minimized because I don't think I can produce a smaller example than the package above).\nThis is a somewhat flaky ICE I used to see a bunch back in April, but it seemed like it sorted itself out with the 1.0. I've found it again! It is on stable 1.1, so I thought I would bump this post 1.0 issue rather than my open pre-1.0 issues. I'm really not sure what causes this, but the common features tend to be doing a bunch of builds with light changes to a local crate (path=\"...\") on which the project depends (in this case, local edits to the timely crate). It feels a lot like it is Cargo getting just a bit confused about what has changed and what hasn't, as will sometimes fix the problem. Previously, toggling LTO would tickle/untickle the bug, as would insisting on some inlining. Just now, fixed it up, after would not... 
Maybe someone should just put a on the bounds list... =/", "commid": "rust_issue_25467", "tokennum": 296}], "negative_passages": []} {"query_id": "q-en-rust-c37d141f43358f7e17a80fba2f593e5681b108435484e92f6708f72b0b43acf7", "query": "}); let existential_projections = bounds.projection_bounds.iter().map(|(bound, _)| { bound.map_bound(|b| { if b.projection_ty.self_ty() != dummy_self { tcx.sess.delay_span_bug( DUMMY_SP, &format!(\"trait_ref_to_existential called on {:?} with non-dummy Self\", b), ); bound.map_bound(|mut b| { assert_eq!(b.projection_ty.self_ty(), dummy_self); // Like for trait refs, verify that `dummy_self` did not leak inside default type // parameters. let references_self = b.projection_ty.substs.iter().skip(1).any(|arg| { if arg.walk().any(|arg| arg == dummy_self.into()) { return true; } false }); if references_self { tcx.sess .delay_span_bug(span, \"trait object projection bounds reference `Self`\"); let substs: Vec<_> = b .projection_ty .substs .iter() .map(|arg| { if arg.walk().any(|arg| arg == dummy_self.into()) { return tcx.ty_error().into(); } arg }) .collect(); b.projection_ty.substs = tcx.intern_substs(&substs[..]); } ty::ExistentialProjection::erase_self_ty(tcx, b) }) });", "positive_passages": [{"docid": "doc-en-rust-9efb448391e17c7cc2c5b7508a4f5840904130cac96cf2da07110367068af352", "text": " $DIR/self-in-generics.rs:5:19 --> $DIR/self-in-generics.rs:12:19 | LL | pub fn f(_f: &dyn SelfInput) {} | ^^^^^^^^^", "positive_passages": [{"docid": "doc-en-rust-9efb448391e17c7cc2c5b7508a4f5840904130cac96cf2da07110367068af352", "text": " $DIR/self-in-const-generics.rs:9:16 | LL | fn foo(x: &dyn BB) {} | ^^ | = note: it cannot use `Self` as a type parameter in a supertrait or `where`-clause error: aborting due to previous error For more information about this error, try `rustc --explain E0038`. 
", "positive_passages": [{"docid": "doc-en-rust-9efb448391e17c7cc2c5b7508a4f5840904130cac96cf2da07110367068af352", "text": " $DIR/feature-gate-macro-literal-matcher.rs:14:19 | LL | macro_rules! m { ($lt:literal) => {} } | ^^^^^^^^^^^ | = help: add #![feature(macro_literal_matcher)] to the crate attributes to enable error: aborting due to previous error For more information about this error, try `rustc --explain E0658`. ", "positive_passages": [{"docid": "doc-en-rust-0433e96f1fea1692a6dbda2b304cc4ffe74d1cf8031552073123b81e4ce976c8", "text": "Tracking issue for rust-lang/rfcs.\nIs anyone working on implementation already? If not I can take a crack at it.\nI'll probably do it.\nHuh oh, we have an issue already: do we mean to include potentially minus-prefixed literals? If yes, we have a problem because 1. it means we have to return an Expr instead of a Lit, which means I'm not sure one will be able to use the bound fragment in a non-expr context (for example in a static array type?) and 2. it's no longer always a single-TT, so the benefits in terms of future-proofing are a bit less good. If we don't... well, we loose expressivity. I have no idea what is more intuitive though... Would you, as a Rust user, expect to parse ? I certainly can think of use cases for this...\ngood point. I think I would have expected it, I also agree with the complications.\nCan't we upgrade Lit to support negative literals?\nI'd expect \"literal\" (or a future integer-literal specifier) to include (negative 5), but not (the negation of positive 5). Perhaps that expectation doesn't match reality, though. For instance, based on parsing rules for non-literals, I'd expect to act like . So, if handling both and as literals makes this easier, that works. But \"literal\" definitely needs to include negative numbers, or we'd have a weird user trap to explain.\nI'd expect \"literal\" (or a future integer-literal specifier) to include -5 (negative 5), but not - 5 (the negation of positive 5). 
Unary operators, including minus, are processed purely by the parser at the moment, no lexer involved. So their treatment is whitespace agnostic. (I personally think this is a good thing.)\nI agree with is not syntactically different from and has no reason to be. But I also agree with the fact that the user will probably expect negative numbers to work as literals (even it \u2013 syntactically \u2013 they're not).\nis also supported in literal patterns while arbitrary expressions are not. It'd be reasonable to support it in fragments as well.\nAnother thing that won't match is a macro call, like , or any of the other macros that I think of as expanding to a literal (, ...).", "commid": "rust_issue_35625", "tokennum": 529}], "negative_passages": []} {"query_id": "q-en-rust-c85fb2f26151c01a1e09cc246cd3f041ad9b5b14f0f49c34eb9c949e48b85b25", "query": " error[E0658]: :literal fragment specifier is experimental and subject to change (see issue #35625) --> $DIR/feature-gate-macro-literal-matcher.rs:14:19 | LL | macro_rules! m { ($lt:literal) => {} } | ^^^^^^^^^^^ | = help: add #![feature(macro_literal_matcher)] to the crate attributes to enable error: aborting due to previous error For more information about this error, try `rustc --explain E0658`. ", "positive_passages": [{"docid": "doc-en-rust-9d182356753c860866613db5fde7e24272669928c2adea54b67398cbbb488354", "text": "I dunno, maybe this is not such a good idea after all. Re-reading the RFC in a more skeptical light, I felt like the \"Motivation\" section could use some more concrete use cases. Maybe this is just failure of imagination on my part, though! I think the likelihood of this feature attracting people who really want constant expressions should be acknowledged as a drawback, too.\nThat's fine IMO. Personally I'd like it to match only explicit literals in code to distinguish, for example, between literal and identifier and produce different code for them. 
Right now this is not possible because all of , , are too broad and conflict with each other.\nHi All, I have implemented this feature in my fork of Rust over 1.25.0, under this branch: Following a quick review by a rustc mentor I may have the free time to provide a full pull request for this.\nBTW label has the tooltip \"implemented in nightly and unstable\" but I did not see it there, i.e. commits & pull requests referencing this. Did I miss it?\nThe implementation that I worked on was merged, yay!\nIs there anything stopping an FCP for this?\nIt's currently merged in unstable; do you mean an FCP to declare it stable?\nYes, an FCP to stabilise. Should have clarified.\nCurrently, it is not possible to pass a literal from a macro to another macro. . I think this is a bug.\nInteresting! That is a bug. I opened to track it.\nAgreed, this should be allowed just like passing any other node types.\nIs there anything else preventing this from being stabilized? The issue above has been fixed.\nThis has baked for some time now and I believe it should be ready for stabilization. So let's start the review. merge The feature gate test is here: Unstable book: Tests defined here:\nTeam member has proposed to merge this. The next step is review by the rest of the tagged teams: [x] [x] [x] [x] [x] [ ] [x] [x] [x] [ ] No concerns currently listed. Once a majority of reviewers approve (and none object), this will enter its final comment period. If you spot a major issue that hasn't been raised at any point in this process, please speak up!", "commid": "rust_issue_35625", "tokennum": 486}], "negative_passages": []} {"query_id": "q-en-rust-c85fb2f26151c01a1e09cc246cd3f041ad9b5b14f0f49c34eb9c949e48b85b25", "query": " error[E0658]: :literal fragment specifier is experimental and subject to change (see issue #35625) --> $DIR/feature-gate-macro-literal-matcher.rs:14:19 | LL | macro_rules! 
m { ($lt:literal) => {} } | ^^^^^^^^^^^ | = help: add #![feature(macro_literal_matcher)] to the crate attributes to enable error: aborting due to previous error For more information about this error, try `rustc --explain E0658`. ", "positive_passages": [{"docid": "doc-en-rust-ba319ab491e9a33d298d8cade668ea28bf8dd31af2044c7a4651474f66faed76", "text": "See for info about what commands tagged team members can give me.\nReading through the tests, I see that is a literal. While that makes sense to me, I remember hearing that it wasn't, but was instead a negation operator and a literal. Am I misremembering?\ncc on\nI agree that it makes sense. I would say that even if is interpreted as negation + a literal, I would probably still want the macro matcher to accept since that behavior seems more intuitive. In other words, I would see being more as an implementation detail than what a literal is in the mental model.\nI think that the nuance with causes a bit of a problem, because I don't think it should be treated as a literal, but I also don't think that the current implementation of would allow detecting it either. You could na\u00efvely allow but that would allow which is clearly wrong.\nthere are tests currently implemented that verify that is detected as a literal.\nI worded that wrong; I meant that only having a matcher and not being able to distinguish strings and numbers means that you can't easily make not a literal.\nfor procedural macros the input as lexed by the compiler is (two tokens) but procedural macros can create tokens that are negatives (like ). In that sense matching for makes sense to me.\nAre literals generated by the compiler counted as true literals? A quick example would be something like: This fails to match, as the expansion, I guess, happens after the selection. 
I just naively assumed that would have worked.\nis not a literal generated by the compiler, it is an identifier followed by an exclamation mark followed by a set of parentheses containing some tokens. See for example: If you were to use it in expression position, it would be interpreted as an invocation of which expands to a string literal. In this case and in your macro though, it is never in a syntactic position where it would be treated as an expression.\nI can understand semantically why it doesn't make sense, I just thought it would have, but that being said, I already changed it to , so there's no issue. It just made more sense to me as a literal, because according to the compiler, it could generate a literal from it. I guess it was more a question than an actual issue.\nThis is true, but there's more to it!", "commid": "rust_issue_35625", "tokennum": 504}], "negative_passages": []} {"query_id": "q-en-rust-c85fb2f26151c01a1e09cc246cd3f041ad9b5b14f0f49c34eb9c949e48b85b25", "query": " error[E0658]: :literal fragment specifier is experimental and subject to change (see issue #35625) --> $DIR/feature-gate-macro-literal-matcher.rs:14:19 | LL | macro_rules! m { ($lt:literal) => {} } | ^^^^^^^^^^^ | = help: add #![feature(macro_literal_matcher)] to the crate attributes to enable error: aborting due to previous error For more information about this error, try `rustc --explain E0658`. ", "positive_passages": [{"docid": "doc-en-rust-721bfecc35df475bca4ea1dad1c9601a73cf036bdeb04b87c9488b5714dd1929", "text": "A few built-in macros can expand their arguments eagerly through ~m a g i c~, so they accept and likes even if they expect string literals: Fortunately (or not), this currently cannot be done in user-defined macros.\nI actually just realised that I can't change it to , as it's used in . So, I've just had to manually unroll the macro. ! It doesn't look pretty. :( Ninja edit: I just tested what actually happens if you try to pass in a non-literal with . 
It rejects it the exact same way. Does the already make use of this feature? If not, the seems to propagate the required token. Oh, unless the compiler steps in... That's actually probably the case... I've reverted it back to the type.\n:bell: This is now entering its final comment period, as per the . :bell:\nThe final comment period, with a disposition to merge, as per the , is now complete.\nSince you implemented this, would you be willing to write up the stabilization PR?\nsure\nThis appears to be missing from the Rust reference", "commid": "rust_issue_35625", "tokennum": 242}], "negative_passages": []} {"query_id": "q-en-rust-c8851f733914b45eb50adbb756282aafab4338d99e014072b6653b345aba1f52", "query": "} else { resolution.binding = Some(nonglob_binding); } resolution.shadowed_glob = Some(glob_binding); if let Some(old_binding) = resolution.shadowed_glob { assert!(old_binding.is_glob_import()); if glob_binding.res() != old_binding.res() { resolution.shadowed_glob = Some(this.ambiguity( AmbiguityKind::GlobVsGlob, old_binding, glob_binding, )); } else if !old_binding.vis.is_at_least(binding.vis, this.tcx) { resolution.shadowed_glob = Some(glob_binding); } } else { resolution.shadowed_glob = Some(glob_binding); } } (false, false) => { return Err(old_binding);", "positive_passages": [{"docid": "doc-en-rust-14d87f94d6dd2f60938cc3099809c6fbd53892083d65f782d3f0ae98e232297d", "text": " $DIR/nested-impl-trait-fail.rs:17:5 | LL | fn fail_early_bound<'s, 'a, 'b>(a: &'s u8) -> impl IntoIterator + Cap<'b>> | -- hidden type `[&'s u8; 1]` captures the lifetime `'s` as defined here ... 
LL | [a] | ^^^ | help: to declare that `impl IntoIterator + Cap<'b>>` captures `'s`, you can add an explicit `'s` lifetime bound | LL | fn fail_early_bound<'s, 'a, 'b>(a: &'s u8) -> impl IntoIterator + Cap<'b>> + 's | ++++ help: to declare that `impl Cap<'a> + Cap<'b>` captures `'s`, you can add an explicit `'s` lifetime bound | LL | fn fail_early_bound<'s, 'a, 'b>(a: &'s u8) -> impl IntoIterator + Cap<'b> + 's> | ++++ error[E0700]: hidden type for `impl Cap<'a> + Cap<'b>` captures lifetime that does not appear in bounds --> $DIR/nested-impl-trait-fail.rs:17:5 | LL | fn fail_early_bound<'s, 'a, 'b>(a: &'s u8) -> impl IntoIterator + Cap<'b>> | -- hidden type `&'s u8` captures the lifetime `'s` as defined here ... LL | [a] | ^^^ | help: to declare that `impl IntoIterator + Cap<'b>>` captures `'s`, you can add an explicit `'s` lifetime bound | LL | fn fail_early_bound<'s, 'a, 'b>(a: &'s u8) -> impl IntoIterator + Cap<'b>> + 's | ++++ help: to declare that `impl Cap<'a> + Cap<'b>` captures `'s`, you can add an explicit `'s` lifetime bound | LL | fn fail_early_bound<'s, 'a, 'b>(a: &'s u8) -> impl IntoIterator + Cap<'b> + 's> | ++++ error[E0700]: hidden type for `impl IntoIterator + Cap<'b>>` captures lifetime that does not appear in bounds --> $DIR/nested-impl-trait-fail.rs:28:5 | LL | fn fail_late_bound<'s, 'a, 'b>( | -- hidden type `[&'s u8; 1]` captures the lifetime `'s` as defined here ... 
LL | [a] | ^^^ | help: to declare that `impl IntoIterator + Cap<'b>>` captures `'s`, you can add an explicit `'s` lifetime bound | LL | ) -> impl IntoIterator + Cap<'b>> + 's { | ++++ help: to declare that `impl Cap<'a> + Cap<'b>` captures `'s`, you can add an explicit `'s` lifetime bound | LL | ) -> impl IntoIterator + Cap<'b> + 's> { | ++++ error[E0700]: hidden type for `impl Cap<'a> + Cap<'b>` captures lifetime that does not appear in bounds --> $DIR/nested-impl-trait-fail.rs:28:5 | LL | fn fail_late_bound<'s, 'a, 'b>( | -- hidden type `&'s u8` captures the lifetime `'s` as defined here ... LL | [a] | ^^^ | help: to declare that `impl IntoIterator + Cap<'b>>` captures `'s`, you can add an explicit `'s` lifetime bound | LL | ) -> impl IntoIterator + Cap<'b>> + 's { | ++++ help: to declare that `impl Cap<'a> + Cap<'b>` captures `'s`, you can add an explicit `'s` lifetime bound | LL | ) -> impl IntoIterator + Cap<'b> + 's> { | ++++ error: aborting due to 4 previous errors For more information about this error, try `rustc --explain E0700`. ", "positive_passages": [{"docid": "doc-en-rust-52ff78bb803c348d23865431fcf95aca9114b024ef3ad08fdb44f7d12198f25f", "text": "The following code should pass regardless of the declaration order of lifetimes: This is related to but it's different in that we do have a lower bound region, , but we fail to recognize that because , which is responsible for calculating from , assumes that the outlive relation is a total order, which is not true. Here is the graph of : ! So we need an efficient algorithm that works for partial order relations, but the one that comes to mind is . The perf may not be really important here but I'm curious to see the alternatives. 
label C-bug T-compiler A-borrow-checker E-mentor E-help-wanted $DIR/issue-111220-tuple-struct-fields.rs:8:13 | LL | let Self(x) = self; | ^^^^^^^ error[E0603]: tuple struct constructor `A` is private --> $DIR/issue-111220-tuple-struct-fields.rs:20:13 | LL | let Self(a) = self; | ^^^^^^^ error[E0603]: tuple struct constructor `A` is private --> $DIR/issue-111220-tuple-struct-fields.rs:40:13 | LL | let Self(a) = self; | ^^^^^^^ error: aborting due to 3 previous errors For more information about this error, try `rustc --explain E0603`. ", "positive_passages": [{"docid": "doc-en-rust-0867c49da7efb293e9753113abb91f30e498ac532bb3b84093850d54c0bc3742", "text": "I tried this code: I expected to see this happen: compilation should fail because the trait impl is defined outside of the module for the type but refers to its private field. Instead, this happened: this compiled successfully This reproduces on the playground for stable Rust, version 1.69.0:\nAdding a main function and attempting to call the (playground link: ) raises the compilation error: !\nThis can be reproduced cross-crate (i.e., any other crate can access the private field). This means that all encapsulations that take advantage of the field being private can be bypassed. (For example, if we change Vec to a tuple struct, the same thing as set_len could be done in safe code by abusing this bug.) label +I-unsound Also, this can be reproduced in all versions where is stable (i.e., 1.32+). It's a matter of how you initialize Foo. If you provide a constructor for Foo, you won't get a compile error:\nJust for completeness, turning this into a segfault without external crates/just std:\nlabel A-visibility\nthis has been unsound ever since in version 1.32\nWG-prioritization assigning priority (). 
label -I-prioritize +P-high", "commid": "rust_issue_111220", "tokennum": 283}], "negative_passages": []} {"query_id": "q-en-rust-c9d2dbf45a54a93526db7e64928f9524ec8063471a3553872c7ed9e76da4a762", "query": "#![feature(core)] #![feature(core_intrinsics)] #![feature(core_prelude)] #![feature(core_slice_ext)] #![feature(custom_attribute)] #![feature(fundamental)] #![feature(lang_items)]", "positive_passages": [{"docid": "doc-en-rust-1508071aab5c3ca124f1c55a16ad46d3925f157d86a42f0fd907d4eebd01313a", "text": "We need to find a way to implement Clone for where . In user code, \"cloning\" such a box manually is simple: will do it with a short roundtrip through which should be unproblematic. In the rust distribution, we can't impl Clone easily since the tools to do so are in libcollections but the Box type lives in liballoc.\nYou do not need vec for the implementation. Something like the following would work: Note that because can panic it is required to keep track of the partial result so the correct stuff is dropped.\nYes, that's nice that it can be carefully solved by essentially extracting the needed parts from what Vec does. But the Box code is otherwise so clean, it doesn't feel like the right way to do it.\nYou're welcome to suggest an alternative but I doubt it can be done in a manner that is both correct and significantly cleaner :)\nYour fundamental analysis is great: just isn't an exception safe representation while filling it with elements. It needs to keep track of the fill level, so a representation like Vec or BoxBuild is needed...\nClone is also wanted for , but that can be much simpler: There is no panic issue, and it can use a simple memcpy of the data.", "commid": "rust_issue_25097", "tokennum": 271}], "negative_passages": []} {"query_id": "q-en-rust-c9f23288d747730f8bc41609e334dbdbf9b597c2dfac94c28aa1044f3f5a8759", "query": " //@edition:2021 macro_rules! 
foo { () => { println!('hello world'); //~^ ERROR unterminated character literal //~| ERROR prefix `world` is unknown } } fn main() {} ", "positive_passages": [{"docid": "doc-en-rust-eff82a61886b25b0f1dd4cf2300b1ef5995dc7fc7eccfb7efad8700de136ebcb", "text": " $DIR/auxiliary/issue-30123-aux.rs:14:5 | LL | pub fn new() -> Self { | ^^^^^^^^^^^^^^^^^^^^ = note: the function or associated item was found for - `issue_30123_aux::Graph`", "positive_passages": [{"docid": "doc-en-rust-a9511334498e06a54c6bf0f4bd67d9115d74046b72122463958e7ee6eb1751e7", "text": " $DIR/empty-struct-braces-pat-3.rs:21:9 | LL | XE::XEmpty3() => () | ^^^^^^^^^^^^^ not a tuple struct or tuple variant | help: use the struct variant pattern syntax | LL | XE::XEmpty3 {} => () | ~~ error[E0164]: expected tuple struct or tuple variant, found struct variant `E::Empty3` --> $DIR/empty-struct-braces-pat-3.rs:25:9 | LL | E::Empty3(..) => () | ^^^^^^^^^^^^^ not a tuple struct or tuple variant | help: use the struct variant pattern syntax | LL | E::Empty3 {} => () | ~~ error[E0164]: expected tuple struct or tuple variant, found struct variant `XE::XEmpty3` --> $DIR/empty-struct-braces-pat-3.rs:29:9 | LL | XE::XEmpty3(..) => () | ^^^^^^^^^^^^^^^ not a tuple struct or tuple variant | help: use the struct variant pattern syntax | LL | XE::XEmpty3 {} => () | ~~ error: aborting due to 4 previous errors", "positive_passages": [{"docid": "doc-en-rust-64b23c5c36e9988ec4c5b0f9b27af4a414e6c71c77fec9aaba09da710256fe45", "text": "I wrote code that looked like this while not having my brain turned on 100%. Due to said brain not functioning, it took me a while to understand what rustc was trying to tell me with the message pointing to the pattern with the text . sure looked like a unit variant to me! 
What rustc was trying to tell me was that the definition of the variant was a struct variant (\"found struct variant\") and my pattern wasn't matching the definition (\"this pattern you specified looks like it's trying to match a variant that is not a unit struct, unit variant or constant\"). The change I needed to make was adding to my pattern, because I was trying to distinguish different variants from each other, not use pieces within the variants. Suggesting seems like a decent starting point to me, but there could definitely be cases I'm not considering here and I'm totally open to hearing that No response No response $DIR/issue-35976.rs:20:9 | LL | fn wait(&self) where Self: Sized; | ----- this has a `Sized` requirement ... LL | arg.wait(); | ^^^^ | help: another candidate was found in the following trait, perhaps add a `use` for it: | LL | use private::Future; | error: aborting due to previous error ", "positive_passages": [{"docid": "doc-en-rust-d8977df5e10b75cc49fbb47eb9f2a769467cee6682dc525ce06703efc8a957e5", "text": " $DIR/issue-35976.rs:20:9 | LL | fn wait(&self) where Self: Sized; | ----- this has a `Sized` requirement ... LL | arg.wait(); | ^^^^ | help: another candidate was found in the following trait, perhaps add a `use` for it: | LL | use private::Future; | error: aborting due to previous error ", "positive_passages": [{"docid": "doc-en-rust-5f960f59572946d373b6c33323ee34953a0738c81a0bfbea0a933aa8d60f79bf", "text": "(cc with whom I was discussing this on Mastodon)\nHaven't looked at this yet, maybe I misinterpreted what was achieving, and that commit just needs a revert -- can't recall why I did that.\nYeah, lol, I totally overlooked that the would be load-bearing due to the way we treat trait methods on as inherent. 
I'll revert , modify the test, etc.", "commid": "rust_issue_105159", "tokennum": 89}], "negative_passages": []} {"query_id": "q-en-rust-cb97ccf0434fafa7221037601a666b7919a0e3323b774bca0313909fb5ddfb91", "query": "\"rustc_data_structures\", \"rustc_hir\", \"rustc_hir_pretty\", \"rustc_lexer\", \"rustc_middle\", \"rustc_parse\", \"rustc_session\", \"rustc_span\", \"serde_json\",", "positive_passages": [{"docid": "doc-en-rust-9f7caf72595177bf2822e32eb545f7227354c9856f07d783118015d474032cf3", "text": "contains a function, which allows one to, well, retokenize an existing span: There's only one caller of this function: And there's only one caller of that caller: In other words, the only reason why we need is to get the span of hir::Use which happens to be a glob import. ( rls needs to know the span of use foo:: with explicit list of imported items\" code action. This was the first demoed code action and is still that is implemented ) I'd like to remove the function, as it is one of the few remaining things that requires to be public, and as it is very much a hack anyway. Additionally, I believe I've broken this function accidentally a while ago: Rather than retokenizing a span, it retokenized the file from the beginning until the end of the span (which, as uses are usually on top, worked correctly most of the time by accident). What would be the best way to fix this? Add hir::Use -- this seems to be the \"proper\" solution here Move hack's implementation to where it belongs: in , just fetch the text of the file and look for ('') Remove the deglob functionality from RLS? I don't like this option as it handicaps the RLS before we formally decided to deprecate it. But I don't completely disregard it either, as, as a whole, it seems to be a full-stack one-off hack penetrating every layer of abstraction. 
WDYT?\nI'd suggest the second alternative (move hack's implementation to in librustc_save_analysis); save analysis is a home of many hacks already, and one more until save analysis is deprecated in its entirety won't hurt. Adding span for would be ok if it were useful for anything else at least.", "commid": "rust_issue_76046", "tokennum": 399}], "negative_passages": []} {"query_id": "q-en-rust-cbb81e1248aa72a7269711e23058031217489ee2095f08ffc90f613211d2617f", "query": "// Region folder impl<'tcx> TyCtxt<'tcx> { /// Folds the escaping and free regions in `value` using `f`, and /// sets `skipped_regions` to true if any late-bound region was found /// and skipped. /// Folds the escaping and free regions in `value` using `f`. pub fn fold_regions( self, value: T,", "positive_passages": [{"docid": "doc-en-rust-5403be4736dcdebcacc161ad98cf88b87d02ef27a07e54c42506e3e30d54d00e", "text": " $DIR/type-errors.rs:18:29 | LL | pub fn autoderef_source(e: &TypeError) { | ^^^^^^^^^ not found in this scope error[E0412]: cannot find type `TypeError` in this scope --> $DIR/type-errors.rs:23:29 | LL | pub fn autoderef_target(_: &TypeError) {} | ^^^^^^^^^ not found in this scope error: aborting due to 3 previous errors For more information about this error, try `rustc --explain E0412`.
", "positive_passages": [{"docid": "doc-en-rust-af4a6d734eec2405d68437928d1762b0a1450d4c9315ae958757d9bfdc6d7292", "text": "This is a use-after-free that compiles with 1.74.0-nightly (2023-09-13 ): cc", "commid": "rust_issue_116060", "tokennum": 30}], "negative_passages": []} {"query_id": "q-en-rust-cddbcf8e472471148da5baaaa83fd15183b37ed20bf51db3da3eef6a078f3dc3", "query": " struct X; fn main() {} ", "positive_passages": [{"docid": "doc-en-rust-fab3ca8c9179b639d7db0f30758e18fd05d791de7fbda0ef99ca6ccdc52995f6", "text": "Const generic defaults are being stabilized, $DIR/issue-90113.rs:17:9 | LL | Cons(..) => {} | ^^^^ not found in this scope | help: consider importing this tuple variant | LL | use list::List::Cons; | error: aborting due to previous error For more information about this error, try `rustc --explain E0531`. ", "positive_passages": [{"docid": "doc-en-rust-6f4363a60ee42a8d7dfa81c0b742aa8eb64a04a914b15145776cc0e885c53908", "text": "Bisected to a rollup, but I think it's very likely to be . cc searched nightlies: from nightly-2021-10-01 to nightly-2021-10-19 regressed nightly: nightly-2021-10-02 searched commits: from to regressed commit: #[unstable(feature = \"mutex_unpoison\", issue = \"96469\")] #[stable(feature = \"mutex_unpoison\", since = \"CURRENT_RUSTC_VERSION\")] pub fn clear_poison(&self) { self.poison.clear(); }", "positive_passages": [{"docid": "doc-en-rust-b0f8e8fb091eb29094beeb12734b684ad68e573ed218ab52600f0b38b92f951d", "text": "Feature gate: This is a tracking issue for functions to clear the poisoned flag on and . [x] [x] Implementation: [x] Final comment period (FCP) [ ] Stabilization PR $DIR/location-insensitive-scopes-issue-117146.rs:10:18 | LL | let b = |_| &a; | --- -^ | | || | | |borrowed value does not live long enough | | returning this value requires that `a` is borrowed for `'static` | value captured here ... 
LL | } | - `a` dropped here while still borrowed | note: due to current limitations in the borrow checker, this implies a `'static` lifetime --> $DIR/location-insensitive-scopes-issue-117146.rs:20:22 | LL | fn bad &()>(_: F) {} | ^^^ error: implementation of `Fn` is not general enough --> $DIR/location-insensitive-scopes-issue-117146.rs:13:5 | LL | bad(&b); | ^^^^^^^ implementation of `Fn` is not general enough | = note: closure with signature `fn(&'2 ()) -> &()` must implement `Fn<(&'1 (),)>`, for any lifetime `'1`... = note: ...but it actually implements `Fn<(&'2 (),)>`, for some specific lifetime `'2` error: implementation of `FnOnce` is not general enough --> $DIR/location-insensitive-scopes-issue-117146.rs:13:5 | LL | bad(&b); | ^^^^^^^ implementation of `FnOnce` is not general enough | = note: closure with signature `fn(&'2 ()) -> &()` must implement `FnOnce<(&'1 (),)>`, for any lifetime `'1`... = note: ...but it actually implements `FnOnce<(&'2 (),)>`, for some specific lifetime `'2` error: aborting due to 3 previous errors For more information about this error, try `rustc --explain E0597`. ", "positive_passages": [{"docid": "doc-en-rust-9eeaaa7acbac320ad451a6095a00086a3e46c7d3f164983cd5c4a0b060c733ad", "text": "File: snippet: Version information Command: $DIR/bad-builder.rs:2:15 | LL | Vec::::mew() | ^^^ | | | function or associated item not found in `Vec` | help: there is an associated function with a similar name: `new` | note: if you're trying to build a new `Vec` consider using one of the following associated functions: Vec::::new Vec::::with_capacity Vec::::from_raw_parts Vec::::new_in and 2 others --> $SRC_DIR/alloc/src/vec/mod.rs:LL:COL error: aborting due to 1 previous error For more information about this error, try `rustc --explain E0599`. 
", "positive_passages": [{"docid": "doc-en-rust-a9511334498e06a54c6bf0f4bd67d9115d74046b72122463958e7ee6eb1751e7", "text": " $DIR/start_lang_item_with_target_feature.rs:13:1 | LL | #[target_feature(enable = \"avx2\")] | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ LL | LL | fn start(_main: fn() -> T, _argc: isize, _argv: *const *const u8, _sigpipe: u8) -> isize { | ------------------------------------------------------------------------------------------- `start` language item function is not allowed to have `#[target_feature]` error: aborting due to previous error ", "positive_passages": [{"docid": "doc-en-rust-d6f19833b4d55a19558c344ec77f996c005c4f43f0f3a11acf2c70ec42b32de6", "text": "This works (but it shouldn't): similar to , cc (tracking issue) label T-lang T-compiler C-bug I-unsound F-targetfeature11\ncc , author of Note that this is about the nostd panic handler, not about the function. This fails as it should: gives\nugh we really need a way to make sure that every single \"this is called by rust\" function gets a check... I assume that these functions can't be unsafe, so I guess those checks are a food way to start looking and make sure to deduplicate the logic there?\nYeah sadly we do need to make sure, not just for the constructs that we have now but also it needs to be kept in mind for the future constructs we add or stabilize. A lot thankfully goes through either the trait system (operators or the GlobalAlloc trait used by ) or function pointers (alloc error hook, std panic hook). Functions that you can make callable solely by setting an attribute, those are rare. The best list of lang items I could find was in the , so that would be a great start for such an exhaustive search. Thankfully most of them are unstable, but still it needs to be addressed for each of them before stabilization (if that is a target).\nWG-prioritization assigning priority (). 
label -I-prioritize +P-medium\nlabel +requires-nightly\nThere is also another table of lang items in the source code, .", "commid": "rust_issue_109411", "tokennum": 318}], "negative_passages": []} {"query_id": "q-en-rust-d3553e1c18c24da8fd4bf785335b167c626710a0f472a3484322cb7d9c2ceeaf", "query": "{ debug!(?obligation, \"confirm_fn_pointer_candidate\"); // Okay to skip binder; it is reintroduced below. let self_ty = self.infcx.shallow_resolve(obligation.self_ty().skip_binder()); let self_ty = self .infcx .shallow_resolve(obligation.self_ty().no_bound_vars()) .expect(\"fn pointer should not capture bound vars from predicate\"); let sig = self_ty.fn_sig(self.tcx()); let trait_ref = closure_trait_ref_and_return_type( self.tcx(),", "positive_passages": [{"docid": "doc-en-rust-0e86f5cad76845a9579f9f9606cb740329e6e79be0820e577c6d84ca9b39e498", "text": "An there actually does panic for the test. I only moved this code between the files, the same comment and code pattern exists as well for normal closures: Originally posted by in\nI couldn't repro this failure, maybe was doing instead of ? The ordering is important here.\nI believe I did, yes.", "commid": "rust_issue_104825", "tokennum": 64}], "negative_passages": []} {"query_id": "q-en-rust-d3b4ab2817b051432764410372f06757d87ff82e80fbfb0bc8b36b46de6a4ed6", "query": " // Copyright 2015 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. 
use foo::baz; use bar::baz; //~ ERROR a module named `baz` has already been imported use foo::Quux; use bar::Quux; //~ ERROR a trait named `Quux` has already been imported use foo::blah; use bar::blah; //~ ERROR a type named `blah` has already been imported use foo::WOMP; use bar::WOMP; //~ ERROR a value named `WOMP` has already been imported fn main() {} mod foo { pub mod baz {} pub trait Quux { } pub type blah = (f64, u32); pub const WOMP: u8 = 5; } mod bar { pub mod baz {} pub type Quux = i32; struct blah { x: i8 } pub const WOMP: i8 = -5; } ", "positive_passages": [{"docid": "doc-en-rust-a49a60d983ca263c6df1f1b675bee487e19954dff841c117235ecac4d35e3050", "text": "I found an error message that seems a bit off to me. The code: Clearly it should fail to compile, and it does. But the error message is: I would expect something like \"a module name has already been imported\", instead. Is it accurate to call it a type?", "commid": "rust_issue_25396", "tokennum": 60}], "negative_passages": []} {"query_id": "q-en-rust-d43d198445a768013215536077db097a35c5266b2bdb1d15a53b42cbac22e4c7", "query": "let trait_m = tcx.opt_associated_item(impl_m.trait_item_def_id.unwrap()).unwrap(); let impl_trait_ref = tcx.impl_trait_ref(impl_m.impl_container(tcx).unwrap()).unwrap().instantiate_identity(); let param_env = tcx.param_env(impl_m_def_id); // First, check a few of the same things as `compare_impl_method`, // just so we don't ICE during substitution later. 
check_method_is_structurally_compatible(tcx, impl_m, trait_m, impl_trait_ref, true)?;", "positive_passages": [{"docid": "doc-en-rust-af4a6d734eec2405d68437928d1762b0a1450d4c9315ae958757d9bfdc6d7292", "text": "This is a use-after-free that compiles with 1.74.0-nightly (2023-09-13 ): cc", "commid": "rust_issue_116060", "tokennum": 30}], "negative_passages": []} {"query_id": "q-en-rust-d45e18504f7fb56a6d602dd01cb85faeb374ec4a6bd6c6171be819427597ae53", "query": "ENV RUST_CONFIGURE_ARGS --enable-extended --disable-docs ENV SCRIPT python2.7 ../x.py dist --host $HOSTS --target $HOSTS # FIXME(#36150) this will fail the bootstrap. Probably means something bad is # happening! ENV NO_LLVM_ASSERTIONS 1 ", "positive_passages": [{"docid": "doc-en-rust-8001657e7cfe3991a0e31abfbdd568df54d28d4abce1fb86fb56b15072b47766", "text": "From . This means we do not blanket turn on LLVM assertions on all of our bots, which we would ideally like to do. I'm going to try turning them off for just the cross bot though.\ncc cc\nSeems related to somewhat. We do not use the maxnum intrinsic (totally should!) but from the existence of bug for maxnum seems to suggest that a similar issue for some non-maxnum intrinsic is likely to exist (and libcore is kinda intrinsic-heavy).", "commid": "rust_issue_36150", "tokennum": 105}], "negative_passages": []} {"query_id": "q-en-rust-d485b68aecc027e13818c1f565d410ed64979746064d00f684bd3cc59e06b104", "query": " error[E0669]: invalid value for constraint in inline assembly --> $DIR/issue-51431.rs:7:32 | LL | asm! {\"mov $0,$1\"::\"0\"(\"bx\"),\"1\"(0x00)} | ^^^^ error: aborting due to previous error ", "positive_passages": [{"docid": "doc-en-rust-bad15165f9c17e24bf9acf94720de1600514f6cf5915295f602149400d545451", "text": "Binder(<[type error] as ToBytes) This only happens if there are two erroneous functions, if I take out it works fine. It seems like I was trying to do this wrong in the first place since the first function gives an error . 
Not sure what the right way to do this is. //! ## Case sensitivity //! //! Unless otherwise indicated path methods that do not access the filesystem, //! such as [`Path::starts_with`] and [`Path::ends_with`], are case sensitive no //! matter the platform or filesystem. An exception to this is made for Windows //! drive letters. //! //! ## Simple usage //! //! Path manipulation includes both parsing components from slices and building", "positive_passages": [{"docid": "doc-en-rust-268daca50ce956852981f01701d4faa6e6ddc8fed00a3d27b721f37c53f18ef1", "text": "Case sensitivity is a property of the filesystem, not operating system and these functions are filesystem agnostic. As thus, this issue is not actionable. Closing. If you feel differently please complain and I\u2019ll reopen.\nIt's fine if the functions want to be case-sensitive, but their docs need to be changed to point this out explicitly and noticeably. Every Windows user is going to assume that the functions are case-insensitive just like they are in other languages on the platform. (I imagine OSX users will have the same expectation?) It's made slightly worse because is handled in a special way that makes evaluate to despite the mismatched case. So someone using that as a quick test of their assumption would end up with the wrong idea. Would you mind reopening for this reason?\nI think it would also be good to have a crate like provide a that does hit the filesystem. Such an API would also be able to detect when the same folder is specified but using alternate prefixes or mount points or through symbolic links.\nThe driver letter is a symlink in the Object Manager and lookup is always case-insensitive. Folders are located on the filesystem and may or may not be case sensitive. 
I do think platform specific behaviour should be documented if it can be a source of confusion.", "commid": "rust_issue_66260", "tokennum": 281}], "negative_passages": []} {"query_id": "q-en-rust-d4ce3545c5901bb881591dbb64de2c723a31a1958af51f81191bc2031d9b0f6f", "query": "match expr.node { ast::ExprField(ref base, ident) => { if let ty::ty_struct(id, _) = ty::expr_ty_adjusted(self.tcx, &**base).sty { self.check_field(expr.span, id, NamedField(ident.node)); self.check_field(expr.span, id, NamedField(ident.node.name)); } } ast::ExprTupField(ref base, idx) => {", "positive_passages": [{"docid": "doc-en-rust-cd356b80efe8963569b705c0d082e39ab100fdda7683d49f27a524f808e2b4f1", "text": "Tracking issue for: rust-lang/rfcs Functional record update does not mark the source as consumed, even if does not implement Copy. This means that e.g. the source's destructor will eventually be run, so the following code prints :\nNote that, as discovered, you can also explicitly consume , also allowing code like:\n:bomb:\nIs legal, even if all members are private? It's a bit surprising\nIt is. There's the potentially-useful case where some of the members are private, and FRU allows you to update the public ones. Of course it should consume the source even then.\nP-backcompat-lang, 1.0 beta\nIn my opinion the problem here is the privacy violation. The idea is that is expanded to and so forth. In this case, that does not trigger a move, because the types of those fields are . I think that part is acting as expected. What is surprising is that one can do this from outside the module.\nThat was (but my favored solution isn't minor).\nWhen I initially saw this problem, I did not see it as a privacy violation, for the (obvious?) reason that since no private members of had been mentioned in the code , the client module is not breaking any abstraction introduced by privacy. 
I think this line of thinking is analogous to outline of the useful case where FRU allows one to update the public members of a type and silently move the private ones as well, via . This is useful for the case where the author of wants to reserve the right to add new private members to while still allowing for clients to use the convenient FRU syntax. I originally was going to try to explain my point of view via analogy with something from hygienic macros: If I were to define (and export) a macro within 's defining module: then it should, in principle, not be a privacy violation to use from client code outside 's defining module. (That is, in principle, hygiene should allow macros to expose controlled access to private internals.) Then I realized the crucial problem with that kind of analogy: is something that the designer of has to choose to put into the module; i.e., it is opt-in. But currently in Rust, FRU is not opt-in; it's not even something you can opt out of.", "commid": "rust_issue_21407", "tokennum": 487}], "negative_passages": []} {"query_id": "q-en-rust-d4ce3545c5901bb881591dbb64de2c723a31a1958af51f81191bc2031d9b0f6f", "query": "match expr.node { ast::ExprField(ref base, ident) => { if let ty::ty_struct(id, _) = ty::expr_ty_adjusted(self.tcx, &**base).sty { self.check_field(expr.span, id, NamedField(ident.node)); self.check_field(expr.span, id, NamedField(ident.node.name)); } } ast::ExprTupField(ref base, idx) => {", "positive_passages": [{"docid": "doc-en-rust-d6f89ff452205249e804ee67896c49d22c052f99387d1c0f11af3b93dd1a3665", "text": "Any struct automatically is given support for it, whether it be appropriate for that struct or not. And that seems wrong, at least given examples like the one provided here in . This latter point, that providing support for FRU should not be something that is silently attached to a type like , seems somewhat in line with the thinking given in .
But I am also pretty sure we cannot adopt anything of the scope given in that draft. (Still, maybe there's a hint of solution embedded within, in the sense of having some way of marking a type as data vs abstract; see final two bullet points below.) So, as I see it, we can either: out a way to support the point-of-view outlined here in the description for , in that should somehow be treated as consuming . Such a change seems like it would be quite subtle, since we would need to have some way of expressing that code like is sound, even if the types of the fields and are consumed by their respective and methods. (In other words, need to consume , yet somehow not completely consume it. Maybe there's a way to think of it as consuming \"whatever is left of the type\", I do not know.) Advantages: This would address the bug as described by the issue author, and would not break any code that was not already broken. Drawbacks: It is not clear what the semantics are for this form (in other words: the phrase \"would not break any code\" above is only true if you first accept that this hypothesized semantics exists). Also, there may not be time to do this properly for 1.0. , try to adopt a data/abstract-type distinction along the lines of the one in draft RFC. (I'm not going to write Advantages/Drawbacks for this; I think we clearly do not have time to do it that way for 1.0) , change FRU into a non-hygienic expansion into the record construction form with all fields listed, in the sense that is synonymous with and subject to privacy checks. Advantages: Seems really simple to implement, and is consistent with at least one core team member's mental model of FRU. 
Drawbacks: If we did this, then code like the example above would break.", "commid": "rust_issue_21407", "tokennum": 474}], "negative_passages": []} {"query_id": "q-en-rust-d4ce3545c5901bb881591dbb64de2c723a31a1958af51f81191bc2031d9b0f6f", "query": "match expr.node { ast::ExprField(ref base, ident) => { if let ty::ty_struct(id, _) = ty::expr_ty_adjusted(self.tcx, &**base).sty { self.check_field(expr.span, id, NamedField(ident.node)); self.check_field(expr.span, id, NamedField(ident.node.name)); } } ast::ExprTupField(ref base, idx) => {", "positive_passages": [{"docid": "doc-en-rust-052ce01de26b08a8f737abdfb24caf536f7787a9ed0ef876d1b88ac232477464", "text": ", let FRU keep its current hygienic (aka \"privacy violating\") semantics, but also make FRU something one must opt-in to support on a type. E.g. make a builtin trait that a struct must implement in order to be usable with FRU. (Or maybe its an attribute you attach to the item.) Advantages: Also seems really simple to implement (in the language). Accommodates users of FRU for the example. Drawbacks: If we did this, it would impose a burden on all code today that makes use of FRU, since they would have to start implementing . Thus, not simple to implement for the libraries and the overall ecosystem. , a hybrid between the previous two bullet-points: By default, treat FRU as non-hygienic expansion, but add a builtin trait that one can opt-into to get the hygienic (aka \"privacy violating\") semantics. As a bonus, perhaps implementing on a struct would also imply implementing on that struct, which seems like it would basically give us something very similar to the semantics desired by (Or, again, maybe its an attribute you attach to the item. This detail does not particularly matter to me.) 
Advantage: While this is obviously more complicated than the previous two bullet-points, it has the advantage that it has a staged landing strategy: We could just implement the change to FRU to be non-hygienic expansion for 1.0 beta. We could add at an arbitrary point in the future; it would not have to be in the 1.0 release. Drawback: People have claimed that some of my past proposals were a bit ... baroque. I imagine this one is no exception. :) (In other words, this semantics is a little complicated to explain to a newcomer.)\nI don't want to make a big fuss about it, but I do want to note, again, that our \"deadlines\" are entirely self-imposed, and that under these circumstances \"we don't have time\" is slightly odd, because we can make time, any time we want. The blanket refusal to even contemplate doing that - even in light of new information - and to weigh the tradeoffs on their own merits is something I still don't really understand.", "commid": "rust_issue_21407", "tokennum": 486}], "negative_passages": []} {"query_id": "q-en-rust-d4ce3545c5901bb881591dbb64de2c723a31a1958af51f81191bc2031d9b0f6f", "query": "match expr.node { ast::ExprField(ref base, ident) => { if let ty::ty_struct(id, _) = ty::expr_ty_adjusted(self.tcx, &**base).sty { self.check_field(expr.span, id, NamedField(ident.node)); self.check_field(expr.span, id, NamedField(ident.node.name)); } } ast::ExprTupField(ref base, idx) => {", "positive_passages": [{"docid": "doc-en-rust-f88caec2557592d4606dad39cd734d5f1a3e33cd1274c6919e6120c35705a54", "text": "That out of the way, an additional possibility, let's call it 6., for list: We could just make the \"abstract type\" vs. \"data type\" distinction based on \"does or doesn't the type have a private field\".
This doesn't entirely fill me with satisfaction (as with \"worse is better\" solutions in general), with specific drawbacks being that it doesn't extend cleanly to the nullary case (no fields at all), and that in terms of UX a change of semantics based on the presence or absence of a private field may be unexpected, but on the plus side, it's simpler, and would, I think, work.\nSure, hypothetical show-stopper issues could cause us to revisit our self-imposed schedule. I personally don't think that this issue, on its own at least, provides sufficient motivation for schedule revision. I admit that your draft RFC seems like it addresses a host of issues, so maybe all of them taken in unison would be a different story. (But then again, that's not even my call to make.)\nThere were a couple more options that I should have put on the list, for completeness, but overlooked. 6 note, the \"has any private fields ==abstract-type; all public fields ==data-type\", is one of them. This would (probably) be less work than adopting the draft RFC in full, but I am not sure what the other Advantages/Disadvantages are. I invite others to write an Advantages/Drawbacks for this approach (perhaps in an RFC PR). 7 We could add a way for a struct to opt-out of FRU support entirely. So in this approach, one would attach the appropriate attribute to , , etc. Advantages: won't break any existing code Drawbacks: Seems quite fragile; we may be good about auditing our own stdlib, but clients who are not aware of the attribute may inadvertently expose themselves to the same bug that currently plagues .\nAlso, I just wanted to mention: when I wrote my comment 6 hours ago, I came to the keyboard thinking that the FRU support for adding new private fields to and having still work was an important use case. 
But after some reflection in the hours since, supporting that use case does not seem as important to me.", "commid": "rust_issue_21407", "tokennum": 489}], "negative_passages": []} {"query_id": "q-en-rust-d4ce3545c5901bb881591dbb64de2c723a31a1958af51f81191bc2031d9b0f6f", "query": "match expr.node { ast::ExprField(ref base, ident) => { if let ty::ty_struct(id, _) = ty::expr_ty_adjusted(self.tcx, &**base).sty { self.check_field(expr.span, id, NamedField(ident.node)); self.check_field(expr.span, id, NamedField(ident.node.name)); } } ast::ExprTupField(ref base, idx) => {", "positive_passages": [{"docid": "doc-en-rust-af18126ee9950e4e8ddf5f7113df149b986b16d1a95a5eb29bbef32e79a631f3", "text": "Here is why: It is relatively easy for a developer who wants to provide that sort of extensible abstraction can still do it, by factoring the type into a all-public front-end with a single public field that holds the all-private back-end, like so: So, given that there exists a reasonable pattern to bring back usable FRU even under the option \"(3.) change FRU into a non-hygienic expansion into the record construction form with all fields listed, in the sense that is synonymous with and subject to privacy checks.\", I would not object to us adopting (3.) aka solution.\nI think that \"has any private field\" is a decent way to handle abstractness (you can always add a private unit field). On the other hand, Rust supports trait bounds of types \u2013 these should handle roles just fine \u2013 you shouldn't be able to coerce past a trait bound. By the way, this is somewhat subtle, because coercing to is wrong (as it leaks memory \u2013 it coerces past the implicit bound), but coercing to is fine (as the bound isn't used by ). 
However, this isn't a property of \u2013 you still can't coerce to \u2013 so we probably want a relatively-sophisticated way to occasionally ignore roles.\nI mean, Rust already has a mechanism for ensuring representation-abstraction \u2013 private fields \u2013 which are semantic (they are handled after type-checking, we probably want to handle them during it, but they definitely aren't handled syntactically or during resolution). It is just that FRU currently accesses them improperly. Also, it doesn't have much to do with e.g. roles \u2013 is an abstract type, but it doesn't use any bound on . only uses the implicit bound. A for data-only types would be nice, though.\nAnother alternative came to me while I was drafting an RFC about this. (But it's an alternative that I do not particularly like.) 8a Instead of making \"has any private fields\" the thing that makes FRU unavailable, instead make the rule be \"has all private fields.\" Thus types like and will not be subject to the vulnerability outlined here. Adding a single field to an otherwise private struct is what changes the nature of a struct with respect to abstraction and FRU, rather than adding a single non- field to an otherwise public struct.", "commid": "rust_issue_21407", "tokennum": 511}], "negative_passages": []} {"query_id": "q-en-rust-d4ce3545c5901bb881591dbb64de2c723a31a1958af51f81191bc2031d9b0f6f", "query": "match expr.node { ast::ExprField(ref base, ident) => { if let ty::ty_struct(id, _) = ty::expr_ty_adjusted(self.tcx, &**base).sty { self.check_field(expr.span, id, NamedField(ident.node)); self.check_field(expr.span, id, NamedField(ident.node.name)); } } ast::ExprTupField(ref base, idx) => {", "positive_passages": [{"docid": "doc-en-rust-2970bb6c7ef2c53faa928bc9972b6000d9bf53fad0a8ad5bc68188bb06d21ee9", "text": "8b, which is so similar to 8a that I gave them both the same numeral: Outlaw the trivial FRU form .
That is, to use FRU, you have to use at least one field in the constructing expression. Again, this implies that types like and will not be subject to the vulnerability outlined here. (This may be a decent change to make to the language regardless of whatever else happens on this ticket; the only argument in favor of keeping support for such a trivial form is maybe for supporting certain macro-expansions. But fully-general macro-authors already have to tread very carefully around corner cases on structs, due to issues like the empty braces problem (see )\nThat would leave the vulnerability intact in the case where the type has both public fields and private fields, the fact that no such type currently exists in the standard library notwithstanding (just as a very artificial demonstration, consider a with ).\nyep. Library authors would have to factor their structures very carefully. (I described the drawback you mention in the RFC I posted shortly after you wrote your comment, though I did not include the detail of having the private fields be , which does seem like it could make the situation all the more subtle. Not insurmountable, but a definite drawback.) 
Just to be clear, I'm not saying I prefer (8a) or (8b); I'm just trying to be complete in my exploration of the solution space.\nRepurposed as the tracking issue for rust-lang/rfcs\nI'll just go snag", "commid": "rust_issue_21407", "tokennum": 323}], "negative_passages": []} {"query_id": "q-en-rust-d4f00b74c9c6eeb988d62688c80db79acbf29c34c27c0461f309028c87b32371", "query": "panic!(); } #[test] fn panic_in_write_doesnt_flush_in_drop() { static WRITES: AtomicUsize = AtomicUsize::new(0); struct PanicWriter; impl Write for PanicWriter { fn write(&mut self, _: &[u8]) -> io::Result { WRITES.fetch_add(1, Ordering::SeqCst); panic!(); } fn flush(&mut self) -> io::Result<()> { Ok(()) } } thread::spawn(|| { let mut writer = BufWriter::new(PanicWriter); writer.write(b\"hello world\"); writer.flush(); }).join().err().unwrap(); assert_eq!(WRITES.load(Ordering::SeqCst), 1); } #[bench] fn bench_buffered_reader(b: &mut test::Bencher) { b.iter(|| {", "positive_passages": [{"docid": "doc-en-rust-7eb3c566a4cc705575f4c2ee3217e12bb55b9548ee9b0475cdedd41fa840b6a5", "text": "There is a panic safety issue in : after a call panics, the impl of calls again, which means the buffer contents are potentially written twice. This may cause an application to overwrite parts of a file that it did not mean to overwrite (in a DB engine written in Rust, this could cause unrecoverable data corruption!). Demonstration: The expected output of the demo program is , the actual output is: More generally, we need a story for panic safety in Rust. My takeaway from the related discussions (e.g. , , the trait) was that only code and impls should have to worry about panic safety. The demo app contains none of these, so I'd consider this a bug in . (otherwise all implementations would need to provide the strong exception safety guarantee?) Solution: could use temporarily mark the buffer as empty during the calls; so that the impl doesn't do anything after a panic. 
However, this doesn't help if the panic occurs during a call...\nNote that Rust is normally referring to \"memory safety\" when talking about safety, so the issue here isn't in conflict with the related discussions. It's definitely a bug in , but I want to point out how specific it is here: the only reason you have odd behaviour is because you have a implementation that will panic after successfully writing the data it is given. I'm all for making more robust, but it's not just \"if the call panics\", it's \"if the call panics after successfully writing data\". Panicking before a successful write (or if the write fails), would not have the same issue.\nIf your impl is panicking in the middle of a write, it seems to me like you're in data corruption land with or without a second write. This is not quite the case - and thread locals expand that surface area. What's specifically the expected behavior here? Is the implementation specifically the thing that needs tweaking, or should the totally poison itself? For reference, Java's class has no special handling for exceptions thrown from the inner : , so adjusting alone seems like a reasonable route to take.\nWithout the , a panic in the middle of a write (without ) only corrupts the part of the file that the application meant to overwrite. 
In a DB engine context, that means the resulting corruption can be repaired when the transaction is rolled back.", "commid": "rust_issue_30888", "tokennum": 503}], "negative_passages": []} {"query_id": "q-en-rust-d4f00b74c9c6eeb988d62688c80db79acbf29c34c27c0461f309028c87b32371", "query": "panic!(); } #[test] fn panic_in_write_doesnt_flush_in_drop() { static WRITES: AtomicUsize = AtomicUsize::new(0); struct PanicWriter; impl Write for PanicWriter { fn write(&mut self, _: &[u8]) -> io::Result { WRITES.fetch_add(1, Ordering::SeqCst); panic!(); } fn flush(&mut self) -> io::Result<()> { Ok(()) } } thread::spawn(|| { let mut writer = BufWriter::new(PanicWriter); writer.write(b\"hello world\"); writer.flush(); }).join().err().unwrap(); assert_eq!(WRITES.load(Ordering::SeqCst), 1); } #[bench] fn bench_buffered_reader(b: &mut test::Bencher) { b.iter(|| {", "positive_passages": [{"docid": "doc-en-rust-6614dcbedfbb2f754c1340f1bd29d576b23573a905cee9a4431a1c2203f10d3e", "text": "But with , a panic after some data was written might corrupt parts of the file that are not saved in the rollback journal, so recovery not possible. I think we need some guideline like: If a method call panics, it's a bug to call any additional methods on the same object. Thus, the bug is in . may need something similar to a poison-flag so that the impl can detect whether a previous call panicked, but it doesn't need to protect against it's own methods being called after a panic (that would be considered a bug in the caller). The Java has the same problem, but the expectation in the Java world is different: it's trivially possible to catch an exception and continue using the potentially-invalid state; so in Java's case I would argue that all implementations should provide the strong exception safety guarantee to avoid this kind of issue. 
Essentially, the Rust world needs to decide between a Java/C++-style \"strong exception safety guarantee\" (=failing method call should not have any side effects) or a new \"no-use-after-panic guarantee\" (=after a failing method call, no additional method calls should occur). Some types (like BufWriter) can be compatible with at most one of the guarantees -- strong exception safety requires that a panicking call has no side effects; no-use-after-panic requires that a panicking call has the side effect of suppressing the buffer flush in the impl. The trait seems to be roughly equivalent to basic exception safety. Types that do not implement instead live in the world. Generic types implementing need to be careful not to call any functions on the type parameter from within the implementation if a previous call panicked (unless they use a bound).\ntriage: P-medium is fixing the data corruption in", "commid": "rust_issue_30888", "tokennum": 378}], "negative_passages": []} {"query_id": "q-en-rust-d51a1e608437a95bb1e11700dcd4efb444ba2f3fc2d232940a2e2d1668db58db", "query": "// ignore it. We can't put it on the struct header anyway. RegionKind::ReLateBound(..) => false, // This can appear in `where Self: ` bounds (#64855): // // struct Bar(::Type) where Self: ; // struct Baz<'a>(&'a Self) where Self: ; RegionKind::ReEmpty => false, // These regions don't appear in types from type declarations: RegionKind::ReEmpty | RegionKind::ReErased RegionKind::ReErased | RegionKind::ReClosureBound(..) | RegionKind::ReScope(..) | RegionKind::ReVar(..)", "positive_passages": [{"docid": "doc-en-rust-7985c267ad2f3f852db605f6024dae811735b7f2e909df00229c2dacf6c3338a", "text": "Internal compiler error when missing bound on struct. Reproducible on stable/beta/nightly on . Also, not sure if it already has it's own issue, but is causing an illegal instruction error when you run it with on this code as well, which is also reproducible on playground if you turn the backtrace setting on. 
I tried this code: () I expected to see this happen: Instead, this happened: rustc versions (from playground): LL | || LL | || use std::ptr::null; LL | || use std::task::{Context, RawWaker, RawWakerVTable, Waker}; ... || LL | || drop(cx_ref); LL | || }); | ||_____-^ `&mut Context<'_>` may not be safely transferred across an unwind boundary | |_____| | within this `{async block@$DIR/async-is-unwindsafe.rs:12:19: 30:6}` | within this `{async block@$DIR/async-is-unwindsafe.rs:12:19: 29:6}` | = help: within `{async block@$DIR/async-is-unwindsafe.rs:12:19: 30:6}`, the trait `UnwindSafe` is not implemented for `&mut Context<'_>`, which is required by `{async block@$DIR/async-is-unwindsafe.rs:12:19: 30:6}: UnwindSafe` = help: within `{async block@$DIR/async-is-unwindsafe.rs:12:19: 29:6}`, the trait `UnwindSafe` is not implemented for `&mut Context<'_>`, which is required by `{async block@$DIR/async-is-unwindsafe.rs:12:19: 29:6}: UnwindSafe` = note: `UnwindSafe` is implemented for `&Context<'_>`, but not for `&mut Context<'_>` note: future does not implement `UnwindSafe` as this value is used across an await --> $DIR/async-is-unwindsafe.rs:26:18 --> $DIR/async-is-unwindsafe.rs:25:18 | LL | let cx_ref = &mut cx; | ------ has type `&mut Context<'_>` which does not implement `UnwindSafe`", "positive_passages": [{"docid": "doc-en-rust-c4a5273a933e8a31db83bc0532f1121df06f4a5ea514257a527d1d7d6a098f8b", "text": "https://crater-\nI'll need to re-check later but this seems to bisect to somehow.\nThe UI test changes from that PR appear to back this up, as confusing as it is.\nI'm not sure what's going on in that crate, if it built successfully by mistake, or how that PR interacts with it, so let's cc the PR author and reviewer for validation.\nAfter , no longer implements and , because the context now contains a which doesn't implement those traits. So this is an issue with the standard library and not the crate being built. 
Repro: label -T-compiler +T-libs +S-has-mcve\nWG-prioritization assigning priority (). Noticing this on which says that was meant to be confined on nightly. Unsure about the fallout of this regression but for now marking as critical for tracking purposes. label -I-prioritize +P-critical\nRevert up:\nIsn't not unwind safe?\nThat line was by your PR, so while what you say is technically true, talking about what \"already happened\" when we are traveling back and forth through time via version control may be more confusing than helpful.\nThat line was already there. I the line below it though. Original PR:\nApologies, I guess I can't count numbers today? Hm, so, the full error of the regression is this: Note that it's trying to make a pointer implement . And for what it's worth, stable does indeed document is . Thus unfortunately that line you pointed to, as opposed to the one I thought you were referring to, is effectively just documenting that all are , and that will include regardless of whether it is is . And confusingly, , unlike .\nI have opened as an alternative.", "commid": "rust_issue_125193", "tokennum": 371}], "negative_passages": []} {"query_id": "q-en-rust-d59f032aa8dfb920401f196b28b8b9a7e1885947a134bf877c7e31369c71604e", "query": "Some(&def::DefVariant(_, variant_id, _)) => { for field in fields { self.check_field(pattern.span, variant_id, NamedField(field.node.ident)); NamedField(field.node.ident.name)); } } _ => self.tcx.sess.span_bug(pattern.span,", "positive_passages": [{"docid": "doc-en-rust-cd356b80efe8963569b705c0d082e39ab100fdda7683d49f27a524f808e2b4f1", "text": "Tracking issue for: rust-lang/rfcs Functional record update does not mark the source as consumed, even if does not implement Copy. This means that e.g. the source's destructor will eventually be run, so the following code prints :\nNote that, as discovered, you can also explicitly consume , also allowing code like:\n:bomb:\nIs legal, even if all members are private? 
It's a bit surprising\nIt is. There's the potentially-useful case where some of the members are private, and FRU allows you to update the public ones. Of course it should consume the source even then.\nP-backcompat-lang, 1.0 beta\nIn my opinion the problem here is the privacy violation. The idea is that is expanded to and so forth. In this case, that does not trigger a move, because the types of those fields are . I think that part is acting as expected. What is surprising is that one can do this from outside the module.\nThat was (but my favored solution isn't minor).\nWhen I initially saw this problem, I did not see it as a privacy violation, for the (obvious?) reason that since no private members of had been mentioned in the code , the client module is not breaking any abstraction introduced by privacy. I think this line of thinking is analogous to outline of the useful case where FRU allows one to update the public members of a type and silently move the private ones as well, via . This is useful for the case where the author of wants to reserve the right to add new private members to while still allowing for clients to use the convenient FRU syntax. I originally was going to try to explain my point of view via analogy with something from hygienic macros: If I were to define (and export) a macro within 's defining module: then it should, in principle, not be a privacy violation to use from client code outside 's defining module. (That is, in principle, hygiene should allow macros to expose controlled access to private internals.) Then I realized the crucial problem with that kind of analogy: is something that the designer of has to choose to put into the module; i.e., it is opt-in. 
But currently in Rust, FRU is not opt-in; its not even something you can opt out of.", "commid": "rust_issue_21407", "tokennum": 487}], "negative_passages": []} {"query_id": "q-en-rust-d59f032aa8dfb920401f196b28b8b9a7e1885947a134bf877c7e31369c71604e", "query": "Some(&def::DefVariant(_, variant_id, _)) => { for field in fields { self.check_field(pattern.span, variant_id, NamedField(field.node.ident)); NamedField(field.node.ident.name)); } } _ => self.tcx.sess.span_bug(pattern.span,", "positive_passages": [{"docid": "doc-en-rust-d6f89ff452205249e804ee67896c49d22c052f99387d1c0f11af3b93dd1a3665", "text": "Any struct automatically is given support for it, whether it be appropriate for that struct or not. And that seems wrong, at least given examples like the one provided here in . This latter point, that providing support for FRU should not be something that is silently attached to a type like , seems somewhat in line with the thinking given in . But I am also pretty sure we cannot adopt anything of the scope given in that draft. (Still, maybe there's a hint of solution embedded within, in the sense of having some way of marking a type as data vs abstract; see final two bullet points below.) So, as I see it, we can either: out a way to support the point-of-view outlined here in the description for , in that should somehow be treated as consuming . Such a change seems like it would be quite subtle, since we would need to have some way of expressing that code like is sound, even if the types of the fields and are consumed by their respective and methods. (In other words, need to consume , yet somehow not completely consume it. Maybe there's a way to think of it as consuming \"whatever is left of the type\", I do not know.) Advantages: This would address the bug as described by the issue author, and would not break any code that was not already broken. 
Drawbacks: It is not clear what the semantics are for this form (in other words: the phrase \"would not break any code\" above is only true if you first accept that this hypothesized semantics exists). Also, there may not be time to do this properly for 1.0. , try to adopt a data/abstract-type distinction along the lines of the one in draft RFC. (I'm not going to write Advantages/Drawbacks for this; I think we clearly do not have time to do it that way for 1.0) , change FRU into a non-hygienic expansion into the record construction form with all fields listed, in the sense that is synonymous with and subject to privacy checks. Advantages: Seems really simple to implement, and is consistent with at least one core team member's mental model of FRU. Drawbacks: If we did this, then code like the example above would break.", "commid": "rust_issue_21407", "tokennum": 474}], "negative_passages": []} {"query_id": "q-en-rust-d59f032aa8dfb920401f196b28b8b9a7e1885947a134bf877c7e31369c71604e", "query": "Some(&def::DefVariant(_, variant_id, _)) => { for field in fields { self.check_field(pattern.span, variant_id, NamedField(field.node.ident)); NamedField(field.node.ident.name)); } } _ => self.tcx.sess.span_bug(pattern.span,", "positive_passages": [{"docid": "doc-en-rust-052ce01de26b08a8f737abdfb24caf536f7787a9ed0ef876d1b88ac232477464", "text": ", let FRU keep its current hygienic (aka \"privacy violating\") semantics, but also make FRU something one must opt-in to support on a type. E.g. make a builtin trait that a struct must implement in order to be usable with FRU. (Or maybe its an attribute you attach to the item.) Advantages: Also seems really simple to implement (in the language). Accommodates users of FRU for the example. Drawbacks: If we did this, it would impose a burden on all code today that makes use of FRU, since they would have to start implementing . Thus, not simple to implement for the libraries and the overall ecosystem. 
, a hybrid between the previous two bullet-points: By default, treat FRU as non-hygienic expansion, but add a builtin trait that one can opt-into to get the hygienic (aka \"privacy violating\") semantics. As a bonus, perhaps implementing on a struct would also imply implementing on that struct, which seems like it would basically give us something very similar to the semantics desired by (Or, again, maybe its an attribute you attach to the item. This detail does not particularly matter to me.) Advantage: While this is obviously more complicated than the previous two bullet-points, it has the advantage that it has a staged landing strategy: We could just_ implement the change to FRU to be non-hygienic expansion for 1.0 beta. We could add at an arbitrary point in the future; it would not have to be in the 1.0 release. Drawback: People have claimed that some of my past proposals were a bit ... baroque. I imagine this one is no exception. :) (In other words, this semantics is a little complicated to explain to a newcomer.)\nI don't want to make a big fuss about it, but I do want to note, again, that our \"deadlines\" are entirely self-imposed, and that under these circumstances \"we don't have time\" is slightly odd, because we can make time, any time we want. 
The blanket refusal to even contemplate doing that - even in light of new information - and to weigh the tradeoffs on their own merits is something I still don't really understand.", "commid": "rust_issue_21407", "tokennum": 486}], "negative_passages": []} {"query_id": "q-en-rust-d59f032aa8dfb920401f196b28b8b9a7e1885947a134bf877c7e31369c71604e", "query": "Some(&def::DefVariant(_, variant_id, _)) => { for field in fields { self.check_field(pattern.span, variant_id, NamedField(field.node.ident)); NamedField(field.node.ident.name)); } } _ => self.tcx.sess.span_bug(pattern.span,", "positive_passages": [{"docid": "doc-en-rust-f88caec2557592d4606dad39cd734d5f1a3e33cd1274c6919ee6120c35705a54", "text": "That out of the way, an additional possibility, let's call it 6., for list: We could just make the \"abstract type\" vs. \"data type\" distinction based on \"does or doesn't the type have a private field\". This doesn't entirely fill me with satisfaction (as with \"worse is better\" solutions in general), with specific drawbacks being that it doesn't extend cleanly to the nullary case (no fields at all), and that in terms of UX a change of semantics based on the presence or absence of a private field may be unexpected, but on the plus side, it's simpler, and would, I think, work.\nSure, hypothetical show-stopper issues could cause us to revisit our self-imposed schedule. I personally don't think that this issue, on its own at least, provides sufficient motivation for schedule revision. I admit that your draft RFC seems like it addresses a host of issues, so maybe all of them taken in unison would be a different story. (But then again, that's not even my call to make.)\nThere were a couple more options that I should have put on the list, for completeness, but overlooked. 6 note, the \"has any private fields ==abstract-type; all public fields ==data-type\", is one of them. 
This would (probably) be less work than adopting the draft RFC in full, but I am not sure what the other Advantages/Disadvantages are. I invite others to write an Advantages/Drawbacks for this approach (perhaps in an RFC PR). 7 We could add a way for a struct to opt-out of FRU support entirely. So in this approach, one would attach the appropriate attribute to , , etc. Advantages: won't break any existing code Drawbacks: Seems quite fragile; we may be good about auditing our own stdlib, but clients who are not aware of the attribute may inadvertently expose themselves to the same bug that currently plagues .\nAlso, I just wanted to mention: when I wrote my comment 6 hours ago, I came to the keyboard thinking that the FRU support for adding new private fields to and having still work was an important use case. But after some reflection in the hours since, supporting that use case does not seem as important to me.", "commid": "rust_issue_21407", "tokennum": 489}], "negative_passages": []} {"query_id": "q-en-rust-d59f032aa8dfb920401f196b28b8b9a7e1885947a134bf877c7e31369c71604e", "query": "Some(&def::DefVariant(_, variant_id, _)) => { for field in fields { self.check_field(pattern.span, variant_id, NamedField(field.node.ident)); NamedField(field.node.ident.name)); } } _ => self.tcx.sess.span_bug(pattern.span,", "positive_passages": [{"docid": "doc-en-rust-af18126ee9950e4e8ddf5f7113df149b986b16d1a95a5eb29bbef32e79a631f3", "text": "Here is why: It is relatively easy for a developer who wants to provide that sort of extensible abstraction can still do it, by factoring the type into a all-public front-end with a single public field that holds the all-private back-end, like so: So, given that there exists a reasonable pattern to bring back usable FRU even under the option \"(3.) 
change FRU into a non-hygienic expansion into the record construction form with all fields listed, in the sense that is synonymous with and subject to privacy checks.\", I would not object to us adopting (3.) aka solution.\nI think that \"has any private field\" is a decent way to handle abstractness (you can always add a private unit field). On the other hand, Rust supports trait bounds of types \u2013 these should handle roles just fine \u2013 you shouldn't be able to coerce past a trait bound. By the way, this is somewhat subtle, because coercing to is wrong (as it leaks memory \u2013 it coerces past the implicit bound), but coercing to is fine (as the bound isn't used by ). However, this isn't a property of \u2013 you still can't coerce to \u2013 so we probably want a relatively-sophisticated way to occasionally ignore roles.\nI mean, Rust already has a mechanism for ensuring representation-abstraction \u2013 private fields \u2013 which are semantic (they are handled after type-checking, we probably want to handle them during it, but they definitely aren't handled syntactically or during resolution). It is just that FRU currently accesses them improperly. Also, it doesn't have much to do with e.g. roles \u2013 is an abstract type, but it doesn't use any bound on . only uses the implicit bound. A for data-only types would be nice, through.\nAnother alternative came to me while I was drafting an RFC about this. (But its an alternative that I do not particularly like.) 8a Instead of making \"has any private fields\" the thing that makes FRU unavailable, instead make the rule be \"has all private fields.\" Thus types like and will not be subject to the vulnerability outlined here. 
Adding a single field to an otherwise private struct is what changes the nature of a struct with respect to abstraction and FRU, rather than adding a single non- field to an otherwise public struct.", "commid": "rust_issue_21407", "tokennum": 511}], "negative_passages": []} {"query_id": "q-en-rust-d59f032aa8dfb920401f196b28b8b9a7e1885947a134bf877c7e31369c71604e", "query": "Some(&def::DefVariant(_, variant_id, _)) => { for field in fields { self.check_field(pattern.span, variant_id, NamedField(field.node.ident)); NamedField(field.node.ident.name)); } } _ => self.tcx.sess.span_bug(pattern.span,", "positive_passages": [{"docid": "doc-en-rust-2970bb6c7ef2c53faa928bc9972b6000d9bf53fad0a8ad5bc68188bb06d21ee9", "text": "8b, which is so similar to 8a that I gave them both the same numeral: Outlaw the trivial FRU form . That is, to use FRU, you have to use at least one field in the constructing expression. Again, this implies that types like and will not be subject to the vulnerability outlined here. (This may be a decent change to make to the language regardless of whatever else happens on this ticket; the only argument in favor of keeping support for such a trivial form is maybe for supporting certain macro-expansions. But fully-general macro-authors already have to tread very carefully around corner cases on structs, due to issues like the empty braces problem (see )\nThat would leave the vulnerability intact in the case where the type has both public fields and private fields, the fact that no such type currently exists in the standard library notwithstanding (just as a very artificial demonstration, consider a with ).\nyep. Library authors would have to factor their structures very carefully. (I described the drawback you mention in the RFC I posted shortly after you wrote your comment, though I did not include the detail of having the private fields be , which does seem like it could make the situation all the more subtle. Not insurmountable, but a definite drawback.) 
Just to be clear, I'm not saying I prefer (8a) or (8b); I'm just trying to be complete in my exploration of the solution space.\nRepurposed as the tracking issue for rust-lang/rfcs\nI'll just go snag", "commid": "rust_issue_21407", "tokennum": 323}], "negative_passages": []} {"query_id": "q-en-rust-d614942aa4e737dbbd9da257ae6155c69f597cb5039b2938495dc8677098511e", "query": "/// /// assert_eq!(n.leading_zeros(), 2); /// ``` #[doc = concat!(\"[`ilog2`]: \", stringify!($SelfT), \"::ilog2\")] #[stable(feature = \"rust1\", since = \"1.0.0\")] #[rustc_const_stable(feature = \"const_math\", since = \"1.32.0\")] #[must_use = \"this returns the result of the operation, ", "positive_passages": [{"docid": "doc-en-rust-49ec9f37f3015e9b3b24c7ec2ddf3b4040d6f440722cb64b31b7c6e3ff8dab25", "text": "I was chatting with today, who mentioned it'd be helpful to mention the newly stabilized APIs from the docs. The benefit of using is that even if the type widens, the result doesn't change. It'd be nice if someone could contribute docs mentioning from . This probably just needs to be a one-liner. 
$DIR/unused_parens_json_suggestion.rs:17:14 {\"message\":\"unnecessary parentheses around assigned value\",\"code\":{\"code\":\"unused_parens\",\"explanation\":null},\"level\":\"error\",\"spans\":[{\"file_name\":\"$DIR/unused_parens_json_suggestion.rs\",\"byte_start\":596,\"byte_end\":609,\"line_start\":16,\"line_end\":16,\"column_start\":14,\"column_end\":27,\"is_primary\":true,\"text\":[{\"text\":\" let _a = (1 / (2 + 3)); --> $DIR/unused_parens_json_suggestion.rs:16:14 | LL | let _a = (1 / (2 + 3)); | ^^^^^^^^^^^^^ help: remove these parentheses | note: lint level defined here --> $DIR/unused_parens_json_suggestion.rs:11:9 --> $DIR/unused_parens_json_suggestion.rs:10:9 | LL | #![warn(unused_parens)] LL | #![deny(unused_parens)] | ^^^^^^^^^^^^^ \" } \"} {\"message\":\"aborting due to previous error\",\"code\":null,\"level\":\"error\",\"spans\":[],\"children\":[],\"rendered\":\"error: aborting due to previous error \"} ", "positive_passages": [{"docid": "doc-en-rust-4ec7e73a404ac76d69c671c6f06db99d936ae8f0078fcfaac8411a19baadc5dd", "text": "Update test file to use + annotations instead of (see https://rust-) The files need to update are: - $DIR/gat-dont-ice-on-absent-feature.rs:7:5 | LL | type Item<'b> = &'b Foo; | ^^^^^^^^^^^^^^^^^^^^^^^^ | = note: for more information, see https://github.com/rust-lang/rust/issues/44265 = help: add #![feature(generic_associated_types)] to the crate attributes to enable error: aborting due to previous error For more information about this error, try `rustc --explain E0658`. ", "positive_passages": [{"docid": "doc-en-rust-a307df6afd7fee0359c06b3ec49e368dd69f57ee370ac4b77c4fed4ee685c190", "text": "The following minimal example causes an internal compiler error. Error in question: rustc version: This error was encountered on stable, but I can confirm that it also occurs on the latest nightly ().\nHasn't the feature gate issue been fixed? :|\nI've removed the nomination and this to the tracking issue for GATs . 
The current impl is highly incomplete and ICEs are, sadly, expected. Also, I verified that a feature gate is indeed required.\ntriage: P-medium\nI don't know if I get you wrong, but the example above ICEs on stable, beta and nightly without feature gate\noops, looks like I accidentally left the feature gate in there when I tried it out, but it produces the same ICE when it is omitted as well.\nNo, niko edited your question. ping I think you got something wrong there. Can you please clarify that?\nOh, I guess I was indeed incorrect. I tried it on play but didn't notice that, after the \"correct\" error, it goes on to print an ICE.\nWhat about adding the I-nominated tag again?\nI'm not sure why we would nominate this for discussion. Yes, it is an ICE (and an ICE that occurs without the user adding a feature gate), but the message pretty clearly states that you're using an unstable feature. It would be great to fix it so that it issues the error diagnostic without ICEing, but I wouldn't re-prioritize that work above P-medium.\nassigning to self since it's a clear annoyance\ntriage: P-medium. 
(dupe of )\n(the remaining bug, when the feature gate is enabled, is tracked in issue )", "commid": "rust_issue_60654", "tokennum": 354}], "negative_passages": []} {"query_id": "q-en-rust-db24a937fb2dcab5c8a626f02af123e82bb3d96e1c2606ba55cb976639441a08", "query": "pub fn normal_test_fn() { #[expect(unused_mut, reason = \"this expectation will create a diagnostic with the default lint level\")] //~^ WARNING this lint expectation is unfulfilled //~| WARNING this lint expectation is unfulfilled //~| NOTE this expectation will create a diagnostic with the default lint level //~| NOTE this expectation will create a diagnostic with the default lint level //~| NOTE duplicate diagnostic emitted due to `-Z deduplicate-diagnostics=no` let mut v = vec![1, 1, 2, 3, 5]; v.sort(); // Check that lint lists including `unfulfilled_lint_expectations` are also handled correctly #[expect(unused, unfulfilled_lint_expectations, reason = \"the expectation for `unused` should be fulfilled\")] //~^ WARNING this lint expectation is unfulfilled //~| WARNING this lint expectation is unfulfilled //~| NOTE the expectation for `unused` should be fulfilled //~| NOTE the expectation for `unused` should be fulfilled //~| NOTE the `unfulfilled_lint_expectations` lint can't be expected and will always produce this message //~| NOTE the `unfulfilled_lint_expectations` lint can't be expected and will always produce this message //~| NOTE duplicate diagnostic emitted due to `-Z deduplicate-diagnostics=no` let value = \"I'm unused\"; }", "positive_passages": [{"docid": "doc-en-rust-9da6e95c335f48d6b06b8a6cd003c89c70bc53bedd6bf2c5c7ac9bd6bf6d5f89", "text": "I tried this code: I expected to catch lint and suppress it, as it should. Instead, compiler issues a diagnostic both about unused value and unfulfilled lint expectation: It does work as expected though if is applied to function. 
I used the latest stable (Rust 1.81), but it is reproducible both on latest beta (2024-09-04 ) and latest nightly (2024-09-05 ). labels: +F-lintreasons, +A-diagnostics $DIR/issue-79467.rs:4:5 | LL | dyn 'static: 'static + Copy, | ^^^^^^^^^^^ error: aborting due to previous error For more information about this error, try `rustc --explain E0224`. ", "positive_passages": [{"docid": "doc-en-rust-02a7b28e8c04b401cc1b4b964192dc76569ee3ce2be8257050f62fcea8cacc40", "text": " $DIR/issue-93647.rs:2:5 | LL | (||1usize)() | ^^^^^^^^^^^^ error: aborting due to previous error For more information about this error, try `rustc --explain E0015`. ", "positive_passages": [{"docid": "doc-en-rust-fab3ca8c9179b639d7db0f30758e18fd05d791de7fbda0ef99ca6ccdc52995f6", "text": "Const generic defaults are being stabilized, $DIR/issue-93647.rs:2:5 | LL | (||1usize)() | ^^^^^^^^^^^^ error: aborting due to previous error For more information about this error, try `rustc --explain E0015`. ", "positive_passages": [{"docid": "doc-en-rust-da4b543757f0f59cd2c68f75eec2130e5847519474f60ca31783db1ed199f8fc", "text": "Yeah I think in this case it helped uncover the bug, because of the fact it ICEd and showed us we weren't resolving the closure default.\nI guess this example shows that this did help us uncover a bug, but kind of indirectly: This just didn't uncover a case where \"lost\" bound vars.\nthis is now fixed on beta", "commid": "rust_issue_93647", "tokennum": 71}], "negative_passages": []} {"query_id": "q-en-rust-de190ef5af3b9ac6c353221899d6c84e29a87e5c8e095cb49d3cb3cee5648195", "query": "match self.find_library_crate() { Some(t) => t, None => { self.sess.abort_if_errors(); let message = match root_ident { None => format!(\"can't find crate for `{}`\", self.ident), Some(c) => format!(\"can't find crate for `{}` which `{}` depends on\",", "positive_passages": [{"docid": "doc-en-rust-2768418157edf563d5501b1799ac47f472307da0ad598bb842feb3f59e9eb8f2", "text": "Built glfw-rs using rust 0.10-pre. 
When referencing this crate in another source file and building, I get error: internal compiler error: unexpected failure This message reflects a bug in the Rust compiler. We would appreciate a bug report: note: the compiler hit an unexpected failure path. this is a bug Ok(task 'rustc' failed at 'assertion failed: ()', ) This is on OSX 10.9: rustc 0.10-pre ( 2014-02-18 10:16:48 -0800) host: x8664-apple-darwin\nThis means that you've got two versions of the same lying around, and doesn't like having two versions. This will be fixed by , I just need to land it :(\nI have just build the library for the first time, and I can only find one dylib for it on my system (next to the rlib, but I gues that shouldn't cause any problems). Is there another possibility for this error? Gesendet: Mittwoch, 19. Februar 2014 um 01:09 UhrVon: \"Alex Crichton\" : mozilla/rust hhildebr : Re: [rust] Compiler error when referencing crate \"glfw-rs\" () This means that you've got two versions of the same libglfw- lying around, and rustc doesn't like having two versions. This will be fixed by , I just need to land it :( \u2014 Reply to this email directly or view it on GitHub.\nmy reading of is that having a dylib and an rlib simultaneously is a problem.\nHaving a dylib and an rlib simultaneously should not be a problem, it is in fact quite common. Having 2 dylibs and 1 rlib or 2 rlibs and 1 dylib however is a problem.\nsorry, my mistake in parsing the title of .\nYes, indeed, I had several identical dylibs, but it wasn't the glfw-rs.. there were rust libs in /usr/local/lib and /usr/local/lib/rustlib/... Thanks! 
Gesendet: Mittwoch, 19.", "commid": "rust_issue_12377", "tokennum": 483}], "negative_passages": []} {"query_id": "q-en-rust-de190ef5af3b9ac6c353221899d6c84e29a87e5c8e095cb49d3cb3cee5648195", "query": "match self.find_library_crate() { Some(t) => t, None => { self.sess.abort_if_errors(); let message = match root_ident { None => format!(\"can't find crate for `{}`\", self.ident), Some(c) => format!(\"can't find crate for `{}` which `{}` depends on\",", "positive_passages": [{"docid": "doc-en-rust-eb189aebd03b6ca0bdf6b8ced56904a7aac687ebe49e624c5f51beae46de063d", "text": "Februar 2014 um 17:15 UhrVon: \"Alex Crichton\" : mozilla/rust hhildebr : Re: [rust] Compiler error when referencing crate \"glfw-rs\" () Having a dylib and an rlib simultaneously should not be a problem, it is in fact quite common. Having 2 dylibs and 1 rlib or 2 rlibs and 1 dylib however is a problem. \u2014 Reply to this email directly or view it on GitHub.\nso can we close this ticket now, in that case? Or do you want to wait for alexcrichton's patches to land, and see what things look like for you then?\nNono, the ticket can be closed for now. Thanks for your help! Gesendet: Mittwoch, 19. Februar 2014 um 21:30 UhrVon: \"Felix S Klock II\" : mozilla/rust hhildebr : Re: [rust] Compiler error when referencing crate \"glfw-rs\" () so can we close this ticket now, in that case? Or do you want to wait for alexcrichton's patches to land, and see what things look like for you then? 
\u2014 Reply to this email directly or view it on GitHub.", "commid": "rust_issue_12377", "tokennum": 279}], "negative_passages": []} {"query_id": "q-en-rust-de193326748369fc8525d369772180fbeab56b1d8e2da177da78ac14114687cd", "query": "#[cfg(any(target_os = \"horizon\", target_os = \"hurd\"))] pub fn modified(&self) -> io::Result { Ok(SystemTime::from(self.stat.st_mtim)) SystemTime::new(self.stat.st_mtim.tv_sec as i64, self.stat.st_mtim.tv_nsec as i64) } #[cfg(not(any(", "positive_passages": [{"docid": "doc-en-rust-064e596224ab8b033dcbde195a3c23f89ac66198c13616cc80e4bf158342d6c9", "text": "Compiling project with causes the following compilation error: See this .\ncc target maintainer\nI guess has the same issue? This got broken by (\"unix time module now return result\") which dropped I guess the way forward is to make use instead. I'm wondering if there is a way to make sure that this kind of breakage doesn't happen before a rustc release?\nOur platform support policy is to avoid blocking PRs on whether tier 3 platforms build, because tier 3 platforms are not required to build.\nSure, I'm not saying to block PRs, but I'm wondering about somehow getting notified quickly enough to be able to react before a rustc release.\n(as in: getting the break notice from a getrandom CI build a dozen days after the merge does not look that robust)\nHm. Does the Rust compiler for hurd get built from a stable or beta branch of rustc? My understanding was that tier 3 targets generally used the nightly compiler in any case.\nIn Debian we will use the stable releases to compile all rust-needing Debian packages, as the Debian policy requires.\nThe breaking commit is not in the current 1.77 release but is in the 1.78 beta. It will not be in a stable release for another 5 weeks-ish. 
Would you like your PR that fixes it to be beta-nominated, so that it gets backported and is in the stable release of 1.78?\nYes, please.\nI recommend you set up a daily/weekly job on your side building master to check for regressions. ( may fix this)", "commid": "rust_issue_123032", "tokennum": 330}], "negative_passages": []} {"query_id": "q-en-rust-de367870a194fe263df68964da49d7c320bf1d1526ef855c3545af020fa863df", "query": "let result = match e.node { hir::ExprUnary(hir::UnNeg, ref inner) => { // unary neg literals already got their sign during creation match inner.node { hir::ExprLit(ref lit) => { use syntax::ast::*; use syntax::ast::LitIntType::*; const I8_OVERFLOW: u64 = ::std::i8::MAX as u64 + 1; const I16_OVERFLOW: u64 = ::std::i16::MAX as u64 + 1; const I32_OVERFLOW: u64 = ::std::i32::MAX as u64 + 1; const I64_OVERFLOW: u64 = ::std::i64::MAX as u64 + 1; match (&lit.node, ety.map(|t| &t.sty)) { (&LitKind::Int(I8_OVERFLOW, Unsuffixed), Some(&ty::TyInt(IntTy::I8))) | (&LitKind::Int(I8_OVERFLOW, Signed(IntTy::I8)), _) => { return Ok(Integral(I8(::std::i8::MIN))) }, (&LitKind::Int(I16_OVERFLOW, Unsuffixed), Some(&ty::TyInt(IntTy::I16))) | (&LitKind::Int(I16_OVERFLOW, Signed(IntTy::I16)), _) => { return Ok(Integral(I16(::std::i16::MIN))) }, (&LitKind::Int(I32_OVERFLOW, Unsuffixed), Some(&ty::TyInt(IntTy::I32))) | (&LitKind::Int(I32_OVERFLOW, Signed(IntTy::I32)), _) => { return Ok(Integral(I32(::std::i32::MIN))) }, (&LitKind::Int(I64_OVERFLOW, Unsuffixed), Some(&ty::TyInt(IntTy::I64))) | (&LitKind::Int(I64_OVERFLOW, Signed(IntTy::I64)), _) => { return Ok(Integral(I64(::std::i64::MIN))) }, (&LitKind::Int(n, Unsuffixed), Some(&ty::TyInt(IntTy::Is))) | (&LitKind::Int(n, Signed(IntTy::Is)), _) => { match tcx.sess.target.int_type { IntTy::I16 => if n == I16_OVERFLOW { return Ok(Integral(Isize(Is16(::std::i16::MIN)))); }, IntTy::I32 => if n == I32_OVERFLOW { return Ok(Integral(Isize(Is32(::std::i32::MIN)))); }, IntTy::I64 => if n == I64_OVERFLOW { return 
Ok(Integral(Isize(Is64(::std::i64::MIN)))); }, _ => bug!(), } }, _ => {}, } }, hir::ExprUnary(hir::UnNeg, ref inner) => { // skip `--$expr` return eval_const_expr_partial(tcx, inner, ty_hint, fn_args); }, _ => {}, if let hir::ExprLit(ref lit) = inner.node { use syntax::ast::*; use syntax::ast::LitIntType::*; const I8_OVERFLOW: u64 = ::std::i8::MAX as u64 + 1; const I16_OVERFLOW: u64 = ::std::i16::MAX as u64 + 1; const I32_OVERFLOW: u64 = ::std::i32::MAX as u64 + 1; const I64_OVERFLOW: u64 = ::std::i64::MAX as u64 + 1; match (&lit.node, ety.map(|t| &t.sty)) { (&LitKind::Int(I8_OVERFLOW, Unsuffixed), Some(&ty::TyInt(IntTy::I8))) | (&LitKind::Int(I8_OVERFLOW, Signed(IntTy::I8)), _) => { return Ok(Integral(I8(::std::i8::MIN))) }, (&LitKind::Int(I16_OVERFLOW, Unsuffixed), Some(&ty::TyInt(IntTy::I16))) | (&LitKind::Int(I16_OVERFLOW, Signed(IntTy::I16)), _) => { return Ok(Integral(I16(::std::i16::MIN))) }, (&LitKind::Int(I32_OVERFLOW, Unsuffixed), Some(&ty::TyInt(IntTy::I32))) | (&LitKind::Int(I32_OVERFLOW, Signed(IntTy::I32)), _) => { return Ok(Integral(I32(::std::i32::MIN))) }, (&LitKind::Int(I64_OVERFLOW, Unsuffixed), Some(&ty::TyInt(IntTy::I64))) | (&LitKind::Int(I64_OVERFLOW, Signed(IntTy::I64)), _) => { return Ok(Integral(I64(::std::i64::MIN))) }, (&LitKind::Int(n, Unsuffixed), Some(&ty::TyInt(IntTy::Is))) | (&LitKind::Int(n, Signed(IntTy::Is)), _) => { match tcx.sess.target.int_type { IntTy::I16 => if n == I16_OVERFLOW { return Ok(Integral(Isize(Is16(::std::i16::MIN)))); }, IntTy::I32 => if n == I32_OVERFLOW { return Ok(Integral(Isize(Is32(::std::i32::MIN)))); }, IntTy::I64 => if n == I64_OVERFLOW { return Ok(Integral(Isize(Is64(::std::i64::MIN)))); }, _ => bug!(), } }, _ => {}, } } match eval_const_expr_partial(tcx, &inner, ty_hint, fn_args)? { Float(f) => Float(-f),", "positive_passages": [{"docid": "doc-en-rust-c41efa6acce4e7ec8bfb39cc7fb2535bf145ce5bdbf38b427eff9c3f72ee0d1e", "text": "This is clearly ridiculous, but rustc is perfectly happy with it. 
Regression from 1.9.\nprints . Using the constant in any other way, e.g. errors out:\ncc is that fallout from your new const eval?\nI haven't removed anything, not that I can think of, to cause this.\nThen it's very likely fallout from (skipping double negations)\nThe remaining error in that testcase feels wrong. should, IMO, be - the literal is in range, but the negation overflows.\nActually I think it should be valid code, but yield , at least that's what the original literal out of range lints do\nBut overflows when negating, which is an error. Why should it be different than ?\nI have no particular opinion on this. But the lint has code handling double negations: (it's treating double negations as if they weren't there at all)\nThe double negation is not the issue here. The value of evaluates to , as expected. But consider the following code: This will also compile just fine and print . It goes wrong in the cast . Rust doesn't check if cast overflows, so the value becomes . can be changed to disallow casts that fail. (I'd be happy to implement this myself.) Or we can accept that these casts are can overflow silently.\nI believe the issue is that --128 (aka 128) is an intermediate result of the computation that is out of bounds for the i8. EDIT: The issue is about the fact that the compiler doesn't complain.\njup, that was a misunderstanding of some code on my part + the issue that processing HIR works the wrong way around (you start with the outer most expression). Reverting should do the job\nI thought that it was intended that would succesfully evaluate to . (That rustc knows that it's overflowing, but is smart to know that double negation will fix it in the end.) Pretend I said nothing.", "commid": "rust_issue_34395", "tokennum": 432}], "negative_passages": []} {"query_id": "q-en-rust-de4a7516747ebd0725ec2e0dd975eb2bbadb7839a39466e8298be36f785d89c0", "query": "fn
poll_write_vectored( self, cx: &mut Option<String>, bufs: &[usize] bufs: &[usize], ) -> Option<Result<usize, Error>> { ... } } No newline at end of file", "positive_passages": [{"docid": "doc-en-rust-d86f98e29e96907eb74e2979df8b01395c41a9d6fde295b825e0d20753d1bba3", "text": "Take a look at the docs for : Rustdoc is making good choices for whether to wrap the declarations ! But when it wraps, it doesn't include a trailing comma on the last parameter. I think it should, because the default style guide wants a comma there. (And rustdoc should continue not putting a trailing comma when the declaration is shown as a single line.) $DIR/unaligned_references.rs:90:20 | LL | let _ref = &m1.1.a; | ^^^^^^^ | note: the lint level is defined here --> $DIR/unaligned_references.rs:1:9 | LL | #![deny(unaligned_references)] | ^^^^^^^^^^^^^^^^^^^^ = warning: this was previously accepted by the compiler but is being phased out; it will become a hard error in a future release! = note: for more information, see issue #82523 = note: fields of packed structs are not properly aligned, and creating a misaligned reference is undefined behavior (even if that reference is never dereferenced) = help: copy the field contents to a local variable, or replace the reference with a raw pointer and use `read_unaligned`/`write_unaligned` (loads and stores via `*p` must be properly aligned even when using raw pointers) Future breakage diagnostic: error: reference to packed field is unaligned --> $DIR/unaligned_references.rs:100:20 | LL | let _ref = &m2.1.a; | ^^^^^^^ | note: the lint level is defined here --> $DIR/unaligned_references.rs:1:9 | LL | #![deny(unaligned_references)] | ^^^^^^^^^^^^^^^^^^^^ = warning: this was previously accepted by the compiler but is being phased out; it will become a hard error in a future release! 
= note: for more information, see issue #82523 = note: fields of packed structs are not properly aligned, and creating a misaligned reference is undefined behavior (even if that reference is never dereferenced) = help: copy the field contents to a local variable, or replace the reference with a raw pointer and use `read_unaligned`/`write_unaligned` (loads and stores via `*p` must be properly aligned even when using raw pointers) ", "positive_passages": [{"docid": "doc-en-rust-d380ca66a3c478ea98f8da700dbf73638e54c0836be0e8e46532c0dbb72351f3", "text": "I tried this code: I expected to see this happen: The reference is aligned Instead, this happened: : label I-unsound A-mir\nWe have a pass to add moves for drops in packed structs, but it doesn't fire in this case. That's because there's a bug - this computation cannot just take into account the innermost field, it needs to compute a minimum of some sort.\nis this a regression? What did you exactly expected the output to be? (I'd like to run a bisection). thanks.\nThis was right in . Getting a consistent reproduction of this is slightly difficult because the reference might happen to align correctly by accident. In other words, the fact that the example above prints a pointer that is not 2-aligned means that something is certainly wrong. However, if the address were to be 2-aligned that could either be because the bug is fixed or because we happened to get lucky. The consistent way to check this is by looking at . In 1.52.0 the MIR contains which is correct (the move ensures that the resulting place is aligned) while 1.53.0 contains Note that there is no move.\nWG-prioritization assigning priority (). label -I-prioritize +P-high E-needs-bisection\nI have run a bisection with on your . The bisection could not pinpoint a single commit, maybe it sinked into a rollup merge. But if it helps, the regression should be in and coming from one of these PRs do you see in any of those a possible culprit? 
(I don't have the context to)\nYeah, it's , cc\nNot sure how would have an effect here, it should be a NOP for with . Looks like something else is amiss as well and that PR surfaced the bug? Where is the logic that moves packed structs to a new place before dropping implemented? That's odd, I would have expected this to be inside ... after all calling on a packed struct also has to do this move. Why does MIR building adjust the caller to satisfy a requirement that the callee should be responsible for?\nthe pass is called or something like that.", "commid": "rust_issue_99838", "tokennum": 465}], "negative_passages": []} {"query_id": "q-en-rust-df6eea57ea0b0eb12e599618878fb4af77180e8b4830634293dc84bcb75cb4cd", "query": "= note: fields of packed structs are not properly aligned, and creating a misaligned reference is undefined behavior (even if that reference is never dereferenced) = help: copy the field contents to a local variable, or replace the reference with a raw pointer and use `read_unaligned`/`write_unaligned` (loads and stores via `*p` must be properly aligned even when using raw pointers) Future breakage diagnostic: error: reference to packed field is unaligned --> $DIR/unaligned_references.rs:90:20 | LL | let _ref = &m1.1.a; | ^^^^^^^ | note: the lint level is defined here --> $DIR/unaligned_references.rs:1:9 | LL | #![deny(unaligned_references)] | ^^^^^^^^^^^^^^^^^^^^ = warning: this was previously accepted by the compiler but is being phased out; it will become a hard error in a future release! 
= note: for more information, see issue #82523 = note: fields of packed structs are not properly aligned, and creating a misaligned reference is undefined behavior (even if that reference is never dereferenced) = help: copy the field contents to a local variable, or replace the reference with a raw pointer and use `read_unaligned`/`write_unaligned` (loads and stores via `*p` must be properly aligned even when using raw pointers) Future breakage diagnostic: error: reference to packed field is unaligned --> $DIR/unaligned_references.rs:100:20 | LL | let _ref = &m2.1.a; | ^^^^^^^ | note: the lint level is defined here --> $DIR/unaligned_references.rs:1:9 | LL | #![deny(unaligned_references)] | ^^^^^^^^^^^^^^^^^^^^ = warning: this was previously accepted by the compiler but is being phased out; it will become a hard error in a future release! = note: for more information, see issue #82523 = note: fields of packed structs are not properly aligned, and creating a misaligned reference is undefined behavior (even if that reference is never dereferenced) = help: copy the field contents to a local variable, or replace the reference with a raw pointer and use `read_unaligned`/`write_unaligned` (loads and stores via `*p` must be properly aligned even when using raw pointers) ", "positive_passages": [{"docid": "doc-en-rust-68d7d02aab47f0ccb4315d63eeee38030b338c848906e6471b100c61288b127b", "text": "And this can't take place inside the callee because the callee receives a , and if that reference is unaligned then UB has already occurred and there's nothing the callee can do to fix things anymore\ntakes a raw pointer, so it could very well be the place where the packed struct alignment trouble is handled. And IMO it'd be a much more natural place than doing this by transforming caller MIR.\nI don't understand the suggestion. The example above never calls for a packed struct. The offending drop in the MIR in the example above is . 
The type of that place is - but if were to be responsible for moving, then we would need to emit a move in literally every drop shim for a non-trivial alignment type.\nOh I see, I didn't quite read the example correctly. The problematic drop is the one that happens on field assignment, and we know it's a packed field because of the path we are accessing it with. EDIT: no wait this makes no sense. Now I am confused by your comment.\nNo what I just said makes no sense. What happens is that this is a partial drop -- one field has been moved out () and then the logic for dropping the remaining fields seems to be wrong. So is relevant because it changed . And it looks like somehow the change is wrong; here is also despite the reference being unaligned.\nshould fix this.", "commid": "rust_issue_99838", "tokennum": 299}], "negative_passages": []} {"query_id": "q-en-rust-df763650a9370a411fb171de761c1a1dd1ba249b51a2dbe40c8d18c0ce4284c2", "query": "// + span: $DIR/nrvo-simple.rs:3:20: 3:21 // + literal: Const { ty: u8, val: Value(Scalar(0x00)) } StorageLive(_3); // scope 1 at $DIR/nrvo-simple.rs:4:5: 4:19 StorageLive(_5); // scope 1 at $DIR/nrvo-simple.rs:4:10: 4:18 StorageLive(_6); // scope 1 at $DIR/nrvo-simple.rs:4:10: 4:18 - _6 = &mut _2; // scope 1 at $DIR/nrvo-simple.rs:4:10: 4:18 + _6 = &mut _0; // scope 1 at $DIR/nrvo-simple.rs:4:10: 4:18 _3 = move _1(move _6) -> bb1; // scope 1 at $DIR/nrvo-simple.rs:4:5: 4:19 _5 = &mut (*_6); // scope 1 at $DIR/nrvo-simple.rs:4:10: 4:18 _3 = move _1(move _5) -> bb1; // scope 1 at $DIR/nrvo-simple.rs:4:5: 4:19 } bb1: { StorageDead(_5); // scope 1 at $DIR/nrvo-simple.rs:4:18: 4:19 StorageDead(_6); // scope 1 at $DIR/nrvo-simple.rs:4:19: 4:20 StorageDead(_3); // scope 1 at $DIR/nrvo-simple.rs:4:19: 4:20 - _0 = _2; // scope 1 at $DIR/nrvo-simple.rs:5:5: 5:8 - StorageDead(_2); // scope 0 at $DIR/nrvo-simple.rs:6:1: 6:2", "positive_passages": [{"docid": "doc-en-rust-7ada8d6f69606f3b233f58af8491570449e4c55a0546ecf90df357a91b9c5b9b", 
"text": "Since InstCombine can now introduce where there was none before, without checking that the value is in fact not used afterwards. This may cause a value to be marked as uninitialized by the dataflow, but still be used in that uninitialized state. If InstCombine ran before the generator transform this would be acutely unsound and could probably be exploited directly, but other than that transform I don't think we rely on this property yet. This impacts , which does run after InstCombine and relies on the dataflow. I've observed this on , where it produces this diff: The second use of occurs despite the first move deinitializing the local. cc\nMaybe we should stop making instcombine convert to ? This could be done in the pre-codegen cleanups, but for MIR it is wrong to use or on such values in general. Alternatively instcombine needs to start using dataflow to only insert moves when the site is the last use of the local.", "commid": "rust_issue_72797", "tokennum": 213}], "negative_passages": []} {"query_id": "q-en-rust-df844c1ea3fbe0e52ce01764e5818be01ebd93e9b64fb89e6eba8cf28cadaeed", "query": "this.visit_ty(&ty); } } GenericParamKind::Const { ref ty, .. 
} => { GenericParamKind::Const { ref ty, default } => { let was_in_const_generic = this.is_in_const_generic; this.is_in_const_generic = true; walk_list!(this, visit_param_bound, param.bounds); this.visit_ty(&ty); if let Some(default) = default { this.visit_body(this.tcx.hir().body(default.body)); } this.is_in_const_generic = was_in_const_generic; } }", "positive_passages": [{"docid": "doc-en-rust-fab3ca8c9179b639d7db0f30758e18fd05d791de7fbda0ef99ca6ccdc52995f6", "text": "Const generic defaults are being stabilized, $DIR/feature-gate-unsafe-extern-blocks.rs:9:5", "positive_passages": [{"docid": "doc-en-rust-314973a620198a343d95c6001fa5535e22745954ce83c0927d2a46bf1bfbe699", "text": "With the following code: We get this error: Making the block then gives: cc\n:+1:, this should error in both cases but the diagnostics are misleading. Going to fix it. r?", "commid": "rust_issue_126327", "tokennum": 42}], "negative_passages": []} {"query_id": "q-en-rust-e09eef73ea5b656b10d838d17498b8673e1bcbd7e508654885b3d6af18d462f7", "query": "## missing_doc_code_examples This lint is **allowed by default**. It detects when a documentation block This lint is **allowed by default** and is **nightly-only**. It detects when a documentation block is missing a code example. For example: ```rust", "positive_passages": [{"docid": "doc-en-rust-2f15b74a95299f559142cae3dbc4bb94db05427621a9bd0c13b8207a4a751378", "text": "I wanted to try out the missingdoccodeexamples rustdoc lint, but it seems to be bugged. I tried following the example given in . Steps to reproduce: Create a new crate: Then edit the file: I would now expect to get a linting error when building the crate, since the add function is public, but does not have a code example. However, as well as and completes without any errors.\nThe lint only works on nightly, the docs should probably be updated.\nThese docs live in To fix this, add 'and is nightly-only' to the first sentence.\nI created PR to update the documentation. 
Thanks for clarifying. :-)", "commid": "rust_issue_76194", "tokennum": 144}], "negative_passages": []} {"query_id": "q-en-rust-e0bdbfac134e0eedcd115e79480d4d068c7c3339ccdaf1377bfc98ddacb5f215", "query": "use std::cmp; use std::string::String; use crate::clean::{self, DocFragment, Item}; use crate::clean::{self, DocFragment, DocFragmentKind, Item}; use crate::core::DocContext; use crate::fold::{self, DocFolder}; use crate::passes::Pass;", "positive_passages": [{"docid": "doc-en-rust-8ee58d94fa0ef584ff51898c2e18b4f6db9d2a2f8a32ce61c15bd0598bc08be9", "text": " $DIR/issue-66906.rs:3:12 | LL | #![feature(const_generics)] | ^^^^^^^^^^^^^^ | = note: `#[warn(incomplete_features)]` on by default ", "positive_passages": [{"docid": "doc-en-rust-37acd1f3390bb5eae9106ee301264310bf2b12e6de290416e772a1545c55f554", "text": "results in an ICE (): $DIR/proper-span-for-type-error.rs:8:5 | LL | a().await | ^^^^^^^^^ expected enum `Result`, found `()` | = note: expected enum `Result<(), i32>` found unit type `()` help: try wrapping the expression in `Ok` | LL | Ok(a().await) | +++ + error: aborting due to previous error For more information about this error, try `rustc --explain E0308`. ", "positive_passages": [{"docid": "doc-en-rust-e21c3ba017dc2f8cb73898979cf59e3a56342a283d84799812dd3475df656304", "text": "This code: produces: Which is invalid. Looks like the span of the desugaring of uses only the part, instead of the span of the entire expression. This results in invalid suggestions when used by the code in .\nI'm not entirely sure if this means that desugaring should use a bigger span, or if the suggestion diagnostic needs to be interpreted differently.\nhmm.. is the only case where the desugared expr's span doesn't cover the whole piece of code it comes from? That doesn't sound right. 
I wonder if the expr should have the bigger span by default, and then async-specific code could grab the tighter span for diagnostics that need to point at the operator.\nSome of the desugaring only uses the span, but the parts relevant for diagnostics like this do use the full span. Not sure if there's a problem there too.\nI thought I might have accidentally introduced this bug in , but right after that PR was merged, the output was correct: So something changed in the last two months in what spans are used for await desugarings, it seems.\nactually I think it was introduced in in , cc Do you need someone to fix this? I'm happy to put up a quick PR.\nYeah, looks like it: I just finished bisecting, which pointed at a rollup involving .\nthis was introduced in the PR linked earlier to try and improve other errors. It is tricky because reducing the span to only incurs these kinds of bugs. On the other hand, the errors that got improved there became much less noisy, so I wouldn't want to revert the whole change. My fix above will solve this issue, but I'm sure there will be others in the future :-/", "commid": "rust_issue_93074", "tokennum": 367}], "negative_passages": []} {"query_id": "q-en-rust-e27d5a3e493f6a044711cbef3da1e2b8f7cbfb1e10496ba6d04f69c9caf633ce", "query": "// Because it is `?Sized`, it will always be the last field in memory. // Note: This is a detail of the current implementation of the compiler, // and is not a guaranteed language detail. Do not rely on it outside of std. unsafe { data_offset_align(align_of_val(&*ptr)) } unsafe { data_offset_align(align_of_val_raw(ptr)) } } #[inline]", "positive_passages": [{"docid": "doc-en-rust-07f455ab8d3245e09e11158804f664bc6b9f872353b4bf634000d5ee06f37e90", "text": "I tried this code: When I run it under , I get this: Creating a reference from such a pointer looks suspicious indeed.
I could send a pull request that adds a condition there, but I want to ask first if this is being done because \u201ewe are , we can\u201c, or if this is something that really should be fixed. : (but I've looked at the git sources and the thing is still there) $DIR/unsized6.rs:7:12", "positive_passages": [{"docid": "doc-en-rust-09dba29fa8aa11509d6cd49c561b11d2548cab6627b306896c4b759f3fbc1ad4", "text": "I tried compiling this code: which resulted in the following output of rustc: So the compiler says So I tried adding a borrow and recompiling the code: which resulted in: so the compiler says and so I'm stuck in a loop. The compiler can't decide whether I should borrow or not. As a newbie to Rust, this is very frustrating and the opposite of helpful, it just confuses me further. Please fix this indecisiveness :) :\nFor reference, what you likely want to do is one of the following which means they cannot be the type of a local binding. You can give a binding a slice type (), because that's morally equivalent to a pointer (with sizing info), which has a compile time determined size, as all borrows are .\nCurrent output: After :", "commid": "rust_issue_72742", "tokennum": 172}], "negative_passages": []} {"query_id": "q-en-rust-e2cd955fe8768197d7dda142b9bb44bbdd1771e8992e8292f9afecaf89711a7a", "query": "&Path(\"lib/libstd.so\")); assert_eq!(res.to_str(), ~\"@executable_path/../lib\"); } #[test] fn test_get_absolute_rpath() { let res = get_absolute_rpath(&Path(\"lib/libstd.so\")); debug!(\"test_get_absolute_rpath: %s vs. %s\", res.to_str(), os::make_absolute(&Path(\"lib\")).to_str()); assert_eq!(res, os::make_absolute(&Path(\"lib\"))); } }", "positive_passages": [{"docid": "doc-en-rust-b3127c5b2931668785539d3dc2215a7dc0521bae0c64b183669c2615dbe4f6fc", "text": "These are configured something like\nI am not sure why this happens off hand, but when bootstrapping the non-build-triple compilers artifacts are copied from e.g. 
to .", "commid": "rust_issue_7191", "tokennum": 38}], "negative_passages": []} {"query_id": "q-en-rust-e33f519f317ff6462a62ef68d4e29c0cd2218e6d2ec4b832f87b518ec428624d", "query": "} #[derive(Diagnostic)] #[diag(parse_at_dot_dot_in_struct_pattern)] pub(crate) struct AtDotDotInStructPattern { #[primary_span] pub span: Span, #[suggestion(code = \"\", style = \"verbose\", applicability = \"machine-applicable\")] pub remove: Span, pub ident: Ident, } #[derive(Diagnostic)] #[diag(parse_at_in_struct_pattern)] #[note] #[help] pub(crate) struct AtInStructPattern { #[primary_span] pub span: Span, } #[derive(Diagnostic)] #[diag(parse_dot_dot_dot_for_remaining_fields)] pub(crate) struct DotDotDotForRemainingFields { #[primary_span]", "positive_passages": [{"docid": "doc-en-rust-1fb348516fce42735a2f7d856f66678c531adef3f524310c5b14170cbebf9cf5", "text": "This erroneous code: Produces this error message It would be better if instead it produced an error message like this: Tried the code in these Rust versions and they all produced the same error message: 1.37.0 stable1.45.0 stable1.46.0-beta.1 (2020-07-15 )1.47.0-nightly (2020-07-22 ) $DIR/lex-bad-str-literal-as-char-4.rs:4:25 | LL | println!('hello world'); | ^^^^^ unknown prefix | = note: prefixed identifiers and literals are reserved since Rust 2021 help: if you meant to write a string literal, use double quotes | LL | println!(\"hello world\"); | ~ ~ error[E0762]: unterminated character literal --> $DIR/lex-bad-str-literal-as-char-4.rs:4:30 | LL | println!('hello world'); | ^^^ | help: if you meant to write a string literal, use double quotes | LL | println!(\"hello world\"); | ~ ~ error: aborting due to 2 previous errors For more information about this error, try `rustc --explain E0762`. 
", "positive_passages": [{"docid": "doc-en-rust-eff82a61886b25b0f1dd4cf2300b1ef5995dc7fc7eccfb7efad8700de136ebcb", "text": " $DIR/feature-gate-generic_associated_types.rs:16:5 | LL | type Pointer2: Deref where T: Clone, U: Clone; | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | = help: add #![feature(generic_associated_types)] to the crate attributes to enable error[E0658]: generic associated types are unstable (see issue #44265) --> $DIR/feature-gate-generic_associated_types.rs:22:5 --> $DIR/feature-gate-generic_associated_types.rs:23:5 | LL | type Pointer = Box; | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", "positive_passages": [{"docid": "doc-en-rust-d2696cc37cdbedb80da25c591152044f2e8391cf31101bcf1942427d4cba1626", "text": "clauses on associated types don't work yet so should be feature gated, presumably with . cc . Happens on versions = 1.24", "commid": "rust_issue_49365", "tokennum": 28}], "negative_passages": []} {"query_id": "q-en-rust-e790c255b297e474bd5fee7860e3d869e40db62b7134b583c9f7cea963f7a893", "query": " fn part(_: u16) -> u32 { 1 } fn main() { for n in 100_000.. 
{ //~^ ERROR: literal out of range for `u16` let _ = part(n); } } ", "positive_passages": [{"docid": "doc-en-rust-3db888818268f2e058d840ecf54bc68a793ddb678944677856eed5923b06907c", "text": "An ICE is observable with this code: Changing to take a rather than makes the ICE go away.\nInvestigation shows that will trigger the ICE, while will not\nIntroduced in cc We can probably get away with doing an early return if and no other changes.", "commid": "rust_issue_63364", "tokennum": 52}], "negative_passages": []} {"query_id": "q-en-rust-e7ac36f4474f8c909549840bc2c12bdee67bc0ad3625456d3dec69799d0119bb", "query": " error[E0726]: implicit elided lifetime not allowed here --> $DIR/wf-in-foreign-fn-decls-issue-80468.rs:13:16 | LL | impl Trait for Ref {} | ^^^- help: indicate the anonymous lifetime: `<'_>` error[E0308]: mismatched types --> $DIR/wf-in-foreign-fn-decls-issue-80468.rs:16:21 | LL | pub fn repro(_: Wrapper); | ^^^^^^^^^^^^ lifetime mismatch | = note: expected trait `Trait` found trait `Trait` note: the anonymous lifetime #1 defined on the method body at 16:5... --> $DIR/wf-in-foreign-fn-decls-issue-80468.rs:16:5 | LL | pub fn repro(_: Wrapper); | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ = note: ...does not necessarily outlive the static lifetime error: aborting due to 2 previous errors For more information about this error, try `rustc --explain E0308`. ", "positive_passages": [{"docid": "doc-en-rust-7c709c2f5ae2ada94580a48e8d8ea5a7ce339728a9d7168b9b77fe08cb0eac1d", "text": "Repro: As of current nightly: This is the unreachable line: The same crash reproduces back to 1.47.0. 
Older stable compilers do not crash:\nBisect: searched nightlies: from nightly-2020-07-10 to nightly-2020-08-27 regressed nightly: nightly-2020-07-22 searched commits: from to regressed commit: () Mentioning because appears the most relevant.\nAssigning .", "commid": "rust_issue_80468", "tokennum": 93}], "negative_passages": []} {"query_id": "q-en-rust-e7bbf5f78ef1850d8eb1d35e35864a2fe23e4d2854334bf96f228b0a71142e27", "query": "} // walk type and init value self.visit_ty(typ); if let Some(expr) = expr { self.visit_expr(expr); } self.nest_tables(id, |v| { v.visit_ty(typ); if let Some(expr) = expr { v.visit_expr(expr); } }); } // FIXME tuple structs should generate tuple-specific data.", "positive_passages": [{"docid": "doc-en-rust-bcb716d01e0aa92e792c79ce9116a6404c964666a769ff595f0e8e001f55676e", "text": "When using this branch of bitflags: through a patch in a local directory rls crashes with the following errors:\ni had enabled dev overrides for dependencies in using: removing that out the output from rls is now the normal debug flags but still the same errors:\nThis is now causing widespread ICEs in the RLS. Nominating to get this fixed (maybe has an idea?). See for a simpler reproducer.\nI encountered this issue trying out which causes rustc to crash on when invoked by RLS. I can confirm it's the exact same backtrace as\nI ran into this trying out on Windows 10. RLS inside vscode ends up panicking with this backtrace: (note for repro: I had changed the default feature in tui-rs to be \"crossterm\" because the default feature \"termion\" doesn't compile on windows - but after the termion compile errors, rustc was giving me a similar panic anyway)\nJFYI I could work around this problem by adding to my\nI tried that with the tui-rs (it had ) but same panic. 
[Edit] you're right, I had missed the inside the version.\nThanks !\nRunning into the same issue when trying to build Clap\nSame issue in vscode on linuxmint and rust 1.34.1 when trying to build glib (dependency of gtk).\nThere is no need to list every crate which depends on , it only clutters the thread and makes less visible.\nHere's a small repro, extracted from what is doing: I'm struggling to get a repro without the compile error, though.\nThank you for such a small repro! I\u2019ll take a closer look tomorrow and see what may be causing that. On Tue, 7 May 2019 at 22:32, Sean Gillespie <:\nLooks like isn't written back for a resolution error? That still repros without , right?\nCorrect, it does. Further minimized repro:\nInteresting. I managed to reduce it to: with: Any other combination is okay, it has to be under under and it has to be combined with a field access (bare unknown identifier doesn't ICE). I'm only guessing but it seems that it tries to emplace def 'path' under a which seems to skip a def path segment?", "commid": "rust_issue_59134", "tokennum": 497}], "negative_passages": []} {"query_id": "q-en-rust-e7bbf5f78ef1850d8eb1d35e35864a2fe23e4d2854334bf96f228b0a71142e27", "query": "} // walk type and init value self.visit_ty(typ); if let Some(expr) = expr { self.visit_expr(expr); } self.nest_tables(id, |v| { v.visit_ty(typ); if let Some(expr) = expr { v.visit_expr(expr); } }); } // FIXME tuple structs should generate tuple-specific data.", "positive_passages": [{"docid": "doc-en-rust-1560eee77cf6c7650928cb9ddd0f8e103b6279cb08a15618e7077839aa0fe6b3", "text": "I think this is confirmed by A, what I believe is, more accurate guess is that we somehow don't correctly nest appropriate typeck tables when visiting .\nOh, you need to have one table per body: look in HIR for fields and map all of those cases back to the AST. 
This makes a lot more sense now: so is not wrong, but is looking in the wrong place.\nWhat about this: Seems like we need to before walking the expression of the associated const?\nHm, probably! I imagined the problem is we don't nest it before visiting trait items (hence missing segment) but you may be right! Let's see :sweat_smile:\nWith the fix applied this still ICEs on (this time for type) which means we should nest tables for as well", "commid": "rust_issue_59134", "tokennum": 164}], "negative_passages": []} {"query_id": "q-en-rust-e7c693d7a01ec8d18fb04b551b2a37a5dba71f5fcea97d5c9f9c241e0ca2a14b", "query": "LL - fn f4(x1: Box, x2: Box, x3: Box) { LL + fn f4(x1: Box, x2: Box, x3: Box) { | help: consider borrowing here | LL | let y: &X = *x1; | + error[E0277]: the size for values of type `X` cannot be known at compilation time --> $DIR/unsized6.rs:32:9", "positive_passages": [{"docid": "doc-en-rust-09dba29fa8aa11509d6cd49c561b11d2548cab6627b306896c4b759f3fbc1ad4", "text": "I tried compiling this code: which resulted in the following output of rustc: So the compiler says So I tried adding a borrow and recompiling the code: which resulted in: so the compiler says and so I'm stuck in a loop. The compiler can't decide whether I should borrow or not. As a newbie to Rust, this is very frustrating and the opposite of helpful, it just confuses me further. Please fix this indecisiveness :) :\nFor reference, what you likely want to do is one of the following which means they cannot be the type of a local binding. 
You can give a binding a slice type (), because that's morally equivalent to a pointer (with sizing info), which has a compile time determined size, as all borrows are .\nCurrent output: After :", "commid": "rust_issue_72742", "tokennum": 172}], "negative_passages": []} {"query_id": "q-en-rust-e849e82e1b62b1909fd17f0206582630eff2ff1f9da5e60a64b3dbd00c24f455", "query": " #![deny(unused_must_use)] use std::{ops::Deref, pin::Pin}; #[must_use] struct MustUse; #[must_use] struct MustUsePtr<'a, T>(&'a T); impl<'a, T> Deref for MustUsePtr<'a, T> { type Target = T; fn deref(&self) -> &Self::Target { self.0 } } fn pin_ref() -> Pin<&'static ()> { Pin::new(&()) } fn pin_ref_mut() -> Pin<&'static mut ()> { Pin::new(unimplemented!()) } fn pin_must_use_ptr() -> Pin> { Pin::new(MustUsePtr(&())) } fn pin_box() -> Pin> { Box::pin(()) } fn pin_box_must_use() -> Pin> { Box::pin(MustUse) } fn main() { pin_ref(); pin_ref_mut(); pin_must_use_ptr(); //~ ERROR unused pinned `MustUsePtr` that must be used pin_box(); pin_box_must_use(); //~ ERROR unused pinned boxed `MustUse` that must be used } ", "positive_passages": [{"docid": "doc-en-rust-eb2f9b3bb39caefb93b615bcdd1a7a477ee66bea050804de4650e31a26511576", "text": " $DIR/unsized6.rs:24:9", "positive_passages": [{"docid": "doc-en-rust-09dba29fa8aa11509d6cd49c561b11d2548cab6627b306896c4b759f3fbc1ad4", "text": "I tried compiling this code: which resulted in the following output of rustc: So the compiler says So I tried adding a borrow and recompiling the code: which resulted in: so the compiler says and so I'm stuck in a loop. The compiler can't decide whether I should borrow or not. As a newbie to Rust, this is very frustrating and the opposite of helpful, it just confuses me further. Please fix this indecisiveness :) :\nFor reference, what you likely want to do is one of the following which means they cannot be the type of a local binding. 
You can give a binding a slice type (), because that's morally equivalent to a pointer (with sizing info), which has a compile time determined size, as all borrows are .\nCurrent output: After :", "commid": "rust_issue_72742", "tokennum": 172}], "negative_passages": []} {"query_id": "q-en-rust-e9fe4c232018d65f9e79ed0bde17b2d832dc28d77ac7502b5e0eb990d9bba873", "query": "if c.pgp { pgp::init(c.root); } else { warn(\"command \"gpg\" is not found\"); warn(\"you have to install \"gpg\" from source \" + \" or package manager to get it to work correctly\"); } c", "positive_passages": [{"docid": "doc-en-rust-ded8174e54900f87470131eb9a7413cb6ed146d7d7347f97b8465da4913ff300", "text": "cargo is relying on the to find rustc, but if one runs cargo with , cargo will error out with the unhelpful: cargo should detect this and return a proper error message for this. A proper workaround for this is to make sure that has an absolute path to the binaries set, as in: It needs to be absolute because cargo does a chdir to the build directory, which would break a relative path.\nWe have a function, that cargo can use to determine the path to itself, then use to launch the corresponding rustc.", "commid": "rust_issue_1806", "tokennum": 114}], "negative_passages": []} {"query_id": "q-en-rust-e9fe4c232018d65f9e79ed0bde17b2d832dc28d77ac7502b5e0eb990d9bba873", "query": "if c.pgp { pgp::init(c.root); } else { warn(\"command \"gpg\" is not found\"); warn(\"you have to install \"gpg\" from source \" + \" or package manager to get it to work correctly\"); } c", "positive_passages": [{"docid": "doc-en-rust-9b23404c624087e4b6f505e241c33a30618643f870354e7d6c1e0c1a714a2788", "text": "Instead of just saying the key can't be verified, explain why.\nYeah, it should also not fail! 
Missing gpg is a warning condition, not an error condition.", "commid": "rust_issue_1643", "tokennum": 36}], "negative_passages": []} {"query_id": "q-en-rust-ea0874a951079ef0431383bdf3c1947a28cd2fa5bad1121c8df79fb1128d4cc7", "query": " error[E0425]: cannot find function `foo1` in crate `similar_unstable_method` --> $DIR/issue-109177.rs:7:30 | LL | similar_unstable_method::foo1(); | ^^^^ help: a function with a similar name exists: `foo` | ::: $DIR/auxiliary/similar-unstable-method.rs:5:1 | LL | pub fn foo() {} | ------------ similarly named function `foo` defined here error[E0599]: no method named `foo1` found for struct `Foo` in the current scope --> $DIR/issue-109177.rs:11:9 | LL | foo.foo1(); | ^^^^ method not found in `Foo` error: aborting due to 2 previous errors Some errors have detailed explanations: E0425, E0599. For more information about an error, try `rustc --explain E0425`. ", "positive_passages": [{"docid": "doc-en-rust-ff59e9541a0d52181662510875499f6cbc9629ee449eb6bbf9552e9a4e66d7a1", "text": "It would be more helpful to suggest the method for the typo rather than a nightly-only symbol which isn't actually available. the suggestion doesn't work. method is actually available on a . method has been stable since 1.0.0. had trouble noticing the difference between and and could have used the compiler's help. No response Reproduced with 1.68.0 stable and 1.70 nightly. $DIR/recover-from-bad-variant.rs:12:9", "positive_passages": [{"docid": "doc-en-rust-64b23c5c36e9988ec4c5b0f9b27af4a414e6c71c77fec9aaba09da710256fe45", "text": "I wrote code that looked like this while not having my brain turned on 100%. Due to said brain not functioning, it took me a while to understand what rustc was trying to tell me with the message pointing to the pattern with the text . sure looked like a unit variant to me! 
What rustc was trying to tell me was that the definition of the variant was a struct variant (\"found struct variant\") and my pattern wasn't matching the definition (\"this pattern you specified looks like it's trying to match a variant that is not a unit struct, unit variant or constant\"). The change I needed to make was adding to my pattern, because I was trying to distinguish different variants from each other, not use pieces within the variants. Suggesting seems like a decent starting point to me, but there could definitely be cases I'm not considering here and I'm totally open to hearing that No response No response $DIR/issue-28134.rs:13:1 | LL | #![test] //~ ERROR only functions may be used as tests | ^^^^^^^^ error: aborting due to previous error ", "positive_passages": [{"docid": "doc-en-rust-f73158d12088b85532de80e8993b95975e895b595acb73f97c12468715d91ba8", "text": "This appears to now be fixed.\nNeat! Just needs a test if there isn't one already.\nThere's , but I'm not sure if that qualifies as testing this in particular. If not, I can add one.\nMarking as E-needstest. I don't think the test I linked previously really fits this case.", "commid": "rust_issue_28134", "tokennum": 72}], "negative_passages": []} {"query_id": "q-en-rust-edf4511c5b93f48faea13102c553d078c766b843dfe411cd18c136c508f1da76", "query": "without modifying the original\"] #[inline] #[track_caller] #[rustc_inherit_overflow_checks] #[allow(arithmetic_overflow)] pub const fn ilog2(self) -> u32 { match self.checked_ilog2() { Some(n) => n, None => { // In debug builds, trigger a panic on None. // This should optimize completely out in release builds. let _ = Self::MAX + 1; 0 }, } self.checked_ilog2().expect(\"argument of integer logarithm must be positive\") } /// Returns the base 10 logarithm of the number, rounded down. /// /// # Panics /// /// When the number is negative or zero it panics in debug mode and the return value /// is 0 in release mode. 
/// This function will panic if `self` is less than or equal to zero. /// /// # Example ///", "positive_passages": [{"docid": "doc-en-rust-16845635e3831018a56fbab5a89a516a67a8c8004ce6fe80aff6afd5b41cb770", "text": "Filing since it's in pFCP over at , and I don't think we can reasonably change it after stabilization (especially since it's documented). Currently, in release mode evaluates to . (And same for the other bases.) It's not obvious to me that that's necessarily the best option. For example, the implementation is slightly nicer on CPUs from the past or so (and I think would be just as nice on older ones ), but gives #[rustc_inherit_overflow_checks] #[allow(arithmetic_overflow)] pub const fn ilog2(self) -> u32 { match self.checked_ilog2() { Some(n) => n, None => { // In debug builds, trigger a panic on None. // This should optimize completely out in release builds. let _ = Self::MAX + 1; 0 }, } self.checked_ilog2().expect(\"argument of integer logarithm must be positive\") } /// Returns the base 10 logarithm of the number, rounded down. /// /// # Panics /// /// When the number is negative or zero it panics in debug mode and the return value /// is 0 in release mode. /// This function will panic if `self` is less than or equal to zero. /// /// # Example ///", "positive_passages": [{"docid": "doc-en-rust-fbf7ca2720a5b84d779735158e477e946deaf8928bf458f061749ca37b4e24bf", "text": "I think returning (wrapped if necessary) is a clear distinguish between normal and error result. Returning an is not ideal currently because it will makes the function practically unusable in const context (before const unwrap is stabilized) If getting closest to the \"correct\" math answer, how about choosing ? However it seems weird and still seems better to me\nPerhaps in other languages. Rust is known for being better than that. I have chosen to use Rust because it's a bit better than Python (see above). 
Let's return Option and and wait for const unwrap to be stabilized before making ilog const, then.\nThat's what does already. Is your proposal, then, that the version of just shouldn't exist on ? How do you square that with precedent of other things -- like or -- that have the version return , but the un-prefixed one wrap in release mode?\nwhat about treating it like division by zero and just panicking?\nI'd be fine with -1. The point of these methods is to get fast code (otherwise many won't use them and will prefer to do to keep the code branchless and avoid the ugly ) and a defined value outside of the usual input range. (from above) is a good example. If you want stricter guarantees than that, you can use .\nA minor point: being 0 is actually a reasonable choice - even though 0 is not a power of two, it is congruent to the correct power of two modulo 2^32 (which is 2^32 itself), so exactly what is expected in wrapping arithmetic. But unfortunately there's no \"\u2212\u221e mod 2^32\" that could be used for .\nModular arithmetic is confusing in the context of logarithm, if you want to return the discrete logarithm of 0 in the release mode, then there are a lot of candidates, because (in case of ) therefore in the sense of modular arithmetic, could be any multiple of . If you really want to achieve something close to , then I guess the best value is as I mentioned above. (Interestingly though, = , which is a multiple of 8, 16, 32, 64, 128)\nThat's certainly a possibility. 
suggested not doing that back in the original PR, though:\nDoesn't implicitly include a check for zero in the code generated?", "commid": "rust_issue_100422", "tokennum": 493}], "negative_passages": []} {"query_id": "q-en-rust-edf4511c5b93f48faea13102c553d078c766b843dfe411cd18c136c508f1da76", "query": "without modifying the original\"] #[inline] #[track_caller] #[rustc_inherit_overflow_checks] #[allow(arithmetic_overflow)] pub const fn ilog2(self) -> u32 { match self.checked_ilog2() { Some(n) => n, None => { // In debug builds, trigger a panic on None. // This should optimize completely out in release builds. let _ = Self::MAX + 1; 0 }, } self.checked_ilog2().expect(\"argument of integer logarithm must be positive\") } /// Returns the base 10 logarithm of the number, rounded down. /// /// # Panics /// /// When the number is negative or zero it panics in debug mode and the return value /// is 0 in release mode. /// This function will panic if `self` is less than or equal to zero. /// /// # Example ///", "positive_passages": [{"docid": "doc-en-rust-a6d726bb543a3d79f485fcd8839136433bd3f20a147c51bfd0b422f18c0f9384", "text": "]\nAlso, since the methods are implemented for signed integers as well, another issue would be the log of negative values. For example for floating-point, would return −∞, but would return NaN. So if would return some value for −∞, would panic anyway, or return some value?\nI don't think it makes sense to do anything other than panic in the case of negative values, which in turn is probably an argument for panicking on zero as well. I guess one could argue that \"wrapping\" could be interpreted as wrapping the preimage to fit in the function's domain (rather than just wrapping the image to fit in the range) but IMO that would be entirely unintuitive and unhelpful as well. 
After thinking about it a bit, it seems to me that simply cannot have a meaningful variant, because semantics (as reasonably interpreted) cannot answer the question \"what should the result be for values for which the original function is undefined?\" There are no elements in 's domain for which the function underflows or overflows.\nThat's only the case on old chips. As I linked in the OP (), that happens in the instruction generation very-last-phase of LLVM on old chips, and on newer chips it knows to emit a different instruction instead of the branching code: The good news is that it looks like newer LLVM has gotten smarter here. It used to be that, because of how the intrinsic gets translated, it would emit obviously-redundant assembly on old chips in some code patterns, as you could see in Rust 1.16:\nis supposed to find the largest so that . Since computers use wrapping arithmetic, the equation is better put as . The largest satisfying this equation in the case is (aka ). So if we consider a wrapping to be the inverse of , makes perfect sense (and is very efficient on modern CPUs, as others have pointed out). I would also advocate for to be as a method which always has this behaviour. [Edit: it already behaves that way]\nNote that's already how works . It's just that it wraps to in release right now -- this issue doesn't propose removing the panic in debug behaviour, just to figure out what the best value for release would be.", "commid": "rust_issue_100422", "tokennum": 489}], "negative_passages": []} {"query_id": "q-en-rust-edf4511c5b93f48faea13102c553d078c766b843dfe411cd18c136c508f1da76", "query": "without modifying the original\"] #[inline] #[track_caller] #[rustc_inherit_overflow_checks] #[allow(arithmetic_overflow)] pub const fn ilog2(self) -> u32 { match self.checked_ilog2() { Some(n) => n, None => { // In debug builds, trigger a panic on None. // This should optimize completely out in release builds. 
let _ = Self::MAX + 1; 0 }, } self.checked_ilog2().expect(\"argument of integer logarithm must be positive\") } /// Returns the base 10 logarithm of the number, rounded down. /// /// # Panics /// /// When the number is negative or zero it panics in debug mode and the return value /// is 0 in release mode. /// This function will panic if `self` is less than or equal to zero. /// /// # Example ///", "positive_passages": [{"docid": "doc-en-rust-27c613f3f8527b8326d9aea32362974a9087706fc83216eab7583f2aa3f6c584", "text": "Since, from the notes, it seems like the libs-api meeting today said \"how about we panic instead of wrap\", I'll try to answer my own question from above: I would propose the following rule: Conceptually, the result is first calculated in true \u2124 (aka to infinite precision). Then for the non-prefixed versions If that result is undefined (, , etc), then it panics (regardless of compilation mode) If that result is defined and fits in the result type, it's returned. If that result is defined but not in-range for the result type, it panics if overflow checking is enabled, and wraps (mod 2 #[rustc_inherit_overflow_checks] #[allow(arithmetic_overflow)] pub const fn ilog2(self) -> u32 { match self.checked_ilog2() { Some(n) => n, None => { // In debug builds, trigger a panic on None. // This should optimize completely out in release builds. let _ = Self::MAX + 1; 0 }, } self.checked_ilog2().expect(\"argument of integer logarithm must be positive\") } /// Returns the base 10 logarithm of the number, rounded down. /// /// # Panics /// /// When the number is negative or zero it panics in debug mode and the return value /// is 0 in release mode. /// This function will panic if `self` is less than or equal to zero. 
/// /// # Example ///", "positive_passages": [{"docid": "doc-en-rust-3ff2787f82c1b03a7c890a7f1b06135626f563d95989ca3a5b4fa4123a69a70f", "text": "It looks like llvm isn't smart enough today to turn into simply a count-leading-zeros and a subtract instruction. :(\nWe discussed this again in the libs-api meeting, and we concluded pretty much the same as rule, meaning that these functions should panic on zero in all compilation modes, since there's not a modulo-2 #[cfg(target_arch = \"x86_64\")] fn f16(){ const_assert!((1f16).to_bits(), 0x3c00); const_assert!(u16::from_be_bytes(1f16.to_be_bytes()), 0x3c00); const_assert!((12.5f16).to_bits(), 0x4a40); const_assert!(u16::from_le_bytes(12.5f16.to_le_bytes()), 0x4a40); const_assert!((1337f16).to_bits(), 0x6539); const_assert!(u16::from_ne_bytes(1337f16.to_ne_bytes()), 0x6539); const_assert!((-14.25f16).to_bits(), 0xcb20); const_assert!(f16::from_bits(0x3c00), 1.0); const_assert!(f16::from_be_bytes(0x3c00u16.to_be_bytes()), 1.0); const_assert!(f16::from_bits(0x4a40), 12.5); const_assert!(f16::from_le_bytes(0x4a40u16.to_le_bytes()), 12.5); const_assert!(f16::from_bits(0x5be0), 252.0); const_assert!(f16::from_ne_bytes(0x5be0u16.to_ne_bytes()), 252.0); const_assert!(f16::from_bits(0xcb20), -14.25); // Check that NaNs roundtrip their bits regardless of signalingness // 0xA is 0b1010; 0x5 is 0b0101 -- so these two together clobbers all the mantissa bits // NOTE: These names assume `f{BITS}::NAN` is a quiet NAN and IEEE754-2008's NaN rules apply! 
const QUIET_NAN: u16 = f16::NAN.to_bits() ^ 0x0155; const SIGNALING_NAN: u16 = f16::NAN.to_bits() ^ 0x02AA; const_assert!(f16::from_bits(QUIET_NAN).is_nan()); const_assert!(f16::from_bits(SIGNALING_NAN).is_nan()); const_assert!(f16::from_bits(QUIET_NAN).to_bits(), QUIET_NAN); if !has_broken_floats() { const_assert!(f16::from_bits(SIGNALING_NAN).to_bits(), SIGNALING_NAN); } } fn f32() { const_assert!((1f32).to_bits(), 0x3f800000); const_assert!(u32::from_be_bytes(1f32.to_be_bytes()), 0x3f800000);", "positive_passages": [{"docid": "doc-en-rust-0050025081a22f913cc307b1407faf3d0b71f1113df92b81da1c66feb031f188", "text": "This file is only and : , but we should add the other float types to this test. The change should wait for since that changes some of these semantics (this PR is almost merged). There are probably a handful of tests that only check and but could use updates with the new types. $DIR/issue-52240.rs:9:27 | LL | if let (Some(Foo::Bar(ref mut val)), _) = (&arr.get(0), 0) { | ^^^^^^^^^^^ cannot borrow as mutable error: aborting due to previous error For more information about this error, try `rustc --explain E0596`. ", "positive_passages": [{"docid": "doc-en-rust-27abf7dffdd9fb3420783921a65ebbfb4054467a1c3c0f7240038bbba2872f0e", "text": "I expected just a compiler error . Compiler panicked: Meta\nThis is fixed in the beta version (1.30).", "commid": "rust_issue_54966", "tokennum": 24}], "negative_passages": []} {"query_id": "q-en-rust-ee1b06b1ba120e82d03cda2b54cf96dbd8c01f9424ae71121eef6e2c5a33910c", "query": " error[E0596]: cannot borrow data in a `&` reference as mutable --> $DIR/issue-52240.rs:9:27 | LL | if let (Some(Foo::Bar(ref mut val)), _) = (&arr.get(0), 0) { | ^^^^^^^^^^^ cannot borrow as mutable error: aborting due to previous error For more information about this error, try `rustc --explain E0596`. 
", "positive_passages": [{"docid": "doc-en-rust-afa01b1e4a6f7a1fc98bc53710b50fb4a7badf9201d0c207379809b43c80a6cc", "text": "This lets you get a mutable reference to an immutable value under a specific case. Code: From what I can tell, the tuple and at least two depths of Enums are necessary, along with an on the right side. There's probably other variations, but that seems to work well enough. Expected output would be a compile error, instead output is This works on both stable and nightly\nwould fix this as well?\nFixed by . Off: I wonder why we can't ship nll yet. It would solve so many issues right now, because of the match ergonomics sigh Match ergonomics produces roughly 10% of all errors/unsoundness/ICEs atm (at least it feels like this)\nYes correctly fixed this issue.", "commid": "rust_issue_52240", "tokennum": 157}], "negative_passages": []} {"query_id": "q-en-rust-ee211adc3e710542c6fb4b5ba0a5e11a7c8aabb9de51b97f5297d855630c0eb4", "query": "rscope: &RegionScope, span: Span, principal_trait_ref: ty::PolyTraitRef<'tcx>, mut projection_bounds: Vec>, // Empty for boxed closures projection_bounds: Vec>, // Empty for boxed closures partitioned_bounds: PartitionedBounds) -> ty::ExistentialBounds<'tcx> {", "positive_passages": [{"docid": "doc-en-rust-3934a1b92c7e17508c6b88bced70af5d14e1d80fcd45b847830b1281e6361ebc", "text": "It looks like if you go and grab this commit of timely-dataflow from github, you get a nicely reproducible ICE. TravisCI gets it too! In case anything changes in the meantime, the commit you want is Related issues: . The times I've seen this before I haven't be able to reproduce it, but since it seems to happen from a clean pull, I thought I would show you. To repro, just follow the commands here. git clone, cargo build, cargo test. 
Boom!\nIt seems like I can make the ICE go away by replacing a member variable with one that uses an equivalent trait which doesn't use associated types: Swapping that in removes the ICE, swapping it out brings the ICE back. So, maybe something isn't working great for boxed traits with associated types. The commit that fixes things is\nDoing some testing, and the repro in (not mine) is way simpler. It also explodes on 1.1 stable, with a similar complaint. Just boxing a trait with two associated types.\nI run into this problem, too (with stable ).", "commid": "rust_issue_27222", "tokennum": 232}], "negative_passages": []} {"query_id": "q-en-rust-ee211adc3e710542c6fb4b5ba0a5e11a7c8aabb9de51b97f5297d855630c0eb4", "query": "rscope: &RegionScope, span: Span, principal_trait_ref: ty::PolyTraitRef<'tcx>, mut projection_bounds: Vec>, // Empty for boxed closures projection_bounds: Vec>, // Empty for boxed closures partitioned_bounds: PartitionedBounds) -> ty::ExistentialBounds<'tcx> {", "positive_passages": [{"docid": "doc-en-rust-59896f2a78933cbb77beb7a499e617bbdcbb69e9c72cd913fbde9be24226cf8c", "text": "Hi, I ran into an interesting ICE while working on a personal project. This seems like it could be related to issues or , but I'm not sure. I set up a . The README includes a description of the bug and the compiler backtrace.\nGetting the same ICE occasionally in (At more specifically, not minimized because I don't think I can produce a smaller example than the package above).\nThis is a somewhat flaky ICE I used to see a bunch back in April, but it seemed like it sorted itself out with the 1.0. I've found it again! It is on stable 1.1, so I thought I would bump this post 1.0 issue rather than my open pre-1.0 issues. I'm really not sure what causes this, but the common features tend to be doing a bunch of builds with light changes to a local crate (path=\"...\") on which the project depends (in this case, local edits to the timely crate). 
It feels a lot like it is Cargo getting just a bit confused about what has changed and what hasn't, as will sometimes fix the problem. Previously, toggling LTO would tickle/untickle the bug, as would insisting on some inlining. Just now, fixed it up, after would not... Maybe someone should just put a on the bounds list... =/", "commid": "rust_issue_25467", "tokennum": 296}], "negative_passages": []} {"query_id": "q-en-rust-ee5a133a619c4748305d18b9e2f8f8703076ccc86688f95fb86c0bb9fb5fa318", "query": "/// A free importable items suggested in case of resolution failure. struct ImportSuggestion { did: Option, path: Path, }", "positive_passages": [{"docid": "doc-en-rust-a807c1dab372d105219deafe7e918540761b044ffca4d9b7fabfa5c5e9c806db", "text": "For this code: You get the following error message: The suggestion may be excused to be not 100% perfect, but here it suggests to add an use statement despite that same item being imported already. What I want is this exactly: that it removes stuff from the suggestion list that is already being imported. Note that changing the glob import to a non glob one doesn't change anything.\ncc\nWhy is that even an error?\ntuple structs have private fields by default, you can't construct instances of B outside of the foo module.\noh ^^ maybe we should simply modify this error message then to say that the \"function\" is private?\nIt already says \"constructor is not visible here due to private fields\". That's okay, but maybe you might have meant another B from another module or something. 
However, the candidate offered is not any better as it's the same one you are attempting to use.\nReopening to track \"removal of suggestion\" work for pub tuple structs with private fields.", "commid": "rust_issue_42944", "tokennum": 214}], "negative_passages": []} {"query_id": "q-en-rust-ee65442dc6a3e5a226c1cecd5dbf9af4d222b440e50222c9c7ef08ca5cce71d8", "query": "\"cargo_metadata 0.9.1\", \"clippy-mini-macro-test\", \"clippy_lints\", \"compiletest_rs\", \"compiletest_rs 0.5.0\", \"derive-new\", \"lazy_static 1.4.0\", \"regex\",", "positive_passages": [{"docid": "doc-en-rust-3f3c0f1ce1d31fd93e7d2268e61d172d1e905825bf1bc0665a3ad81021f0ebfc", "text": "Hello, this is your friendly neighborhood mergebot. After merging PR rust-lang/rust, I observed that the tool clippy-driver no longer builds. A follow-up PR to the repository is needed to fix the fallout. cc do you think you would have time to do the follow-up work? If so, that would be great! And nominating for compiler team prioritization.", "commid": "rust_issue_71114", "tokennum": 81}], "negative_passages": []} {"query_id": "q-en-rust-ee7a8565674f1b5bd07fc0bf9a2e8dea24de6353a52dc98e1b4fef236b96b1e5", "query": " error: pretty-print failed to write `/tmp/` due to $ERROR_MESSAGE error: aborting due to previous error ", "positive_passages": [{"docid": "doc-en-rust-ca024e14f49f0dd23c01f572db476d23c938022645617c1074021fced75ee0f8", "text": "If you , you get a normal error like but if you try to at the same time: rust will panic which is a bit weird. Maybe that panic should be demoted to a regular error? $DIR/implied-bounds-unnorm-associated-type-3.rs:19:5 | LL | fn zero_copy_from<'b>(cart: &'b [T]) -> &'b [T] { | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | = help: consider adding an explicit lifetime bound `T: 'static`... = note: ...so that the type `[T]` will meet its required lifetime bounds error: aborting due to previous error For more information about this error, try `rustc --explain E0310`. 
", "positive_passages": [{"docid": "doc-en-rust-d884f00e678538a5565bf540b39e72304f9db1780692eb037221c922782010fb", "text": "Note: The regression is valid/\"allowed\" (there's a hole in the typesystem being patched), I'm just filing this issue now so we can make sure that this regression is tracked in case it got \"fixed\" by accident. Feel free to close if it's a known quantity; I'm just worried that this got \"accidentally\" fixed in some other change. My crate is being fixed to not face this problem, but in general there's a chance other crates are falling afoul of this as well; we should perhaps assess the impact and start off with it being a future incompatibility warning if necessary. I tried this code: () I expected to see this happen: It compiles, like it does on stable, or at least throws a future incompatibility warning. Instead, this happened: It errors: It most recently worked on: Rust 1.56.0\nThis regressed in , which was a rollup. Further testing shows this resulted from . It's marked as being a partial revert, so I'm not entirely sure what the chronology is here. CC Note that we think this \"regression\" probably isn't actually a problem, per the note at the top.\nWhat the....\nOkay so reverting fully also errors. Stable 1.55.0 fails. Checking (a couple commits before ) now, but I'm assuming it's going to error. So, this was actually a bug in 1.56.0 - prior to that, this didn't compile, rightfully. My guess is because of treating unnormalized function args & output as WF, we saw , replaced it with , assumed that was WF, so we didn't actually check that . I'm going to go ahead and make a PR to add a test that closes this issue. This was a bug in 1.56.0, not a regression since.\nGood to know, thanks! 
(And good that this was just a one-version regression; less chance of there being in-the-wild breakage)\nI should figure out an excuse to make yoke a dependency of rustc, it seems really good at finding typesystem corner cases and maybe we should just have it be a test\nIsn't it failing on the wrong line?", "commid": "rust_issue_91899", "tokennum": 487}], "negative_passages": []} {"query_id": "q-en-rust-ef945189dd4e8823523ba99f957006562f23e986183a0ba5ac9ec518f4cac436", "query": " error[E0310]: the parameter type `T` may not live long enough --> $DIR/implied-bounds-unnorm-associated-type-3.rs:19:5 | LL | fn zero_copy_from<'b>(cart: &'b [T]) -> &'b [T] { | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | = help: consider adding an explicit lifetime bound `T: 'static`... = note: ...so that the type `[T]` will meet its required lifetime bounds error: aborting due to previous error For more information about this error, try `rustc --explain E0310`. ", "positive_passages": [{"docid": "doc-en-rust-9f10a579dafcdf8a691d6588be4b9c0309c1ac324fade573757e39f745f1763b", "text": "I would have expected it to fail on the because is the thing that needs to match\nHaha yeah, it's a thorn in my side So, it's interesting. The check is , which is true. What I might expect here is it to fail because you haven't written (which is required for to be well-formed. 
But I think this is implied bounds, since this is fine:\nIn my experience WFness of has always been a bit variable where sometimes you can assume it and sometimes it requires you to spell it out, so maybe this is another instance of that.\nThanks!", "commid": "rust_issue_91899", "tokennum": 123}], "negative_passages": []} {"query_id": "q-en-rust-efa1d12ec444e148f92cbf0828276880d7227bbc44c9bfd9f61d7fddf23d3455", "query": " struct Bug([u8; panic!(1)]); //~ ERROR panicking in constants is unstable // Note: non-`&str` panic arguments gained a separate error in PR #80734 // which is why this doesn't match the issue struct Bug([u8; panic!(\"panic\")]); //~ ERROR panicking in constants is unstable fn main() {}", "positive_passages": [{"docid": "doc-en-rust-7c73542d67b097e4fd1fee1922c673ac020986f94704300c10704cf32974803c", "text": "The panic payload of the macro (as passed on to ) is generic, but the CTFE panic support assumes it will always be an . This leads to ICEs: Cc\nI guess we could support this, but we'd have to find some way to render the errors. Maybe we need to wait for const traits so we can require const Debug?\nAlternatively constck could require the argument to have type .\nalso requires a\nYeah, the libcore panic machinery does not support non-format-string panics.\nAye, left out the latter of my message. is likely the most common argument to , and it's not unusual with e.g. ; so I agree that waiting for const traits before supporting any other arguments is reasonable.\nIm hitting this ICE on stable without feature flag:\nHm indeed, that is interesting: any idea why we go on here after seeing an error? Does this kind of behavior mean any part of CTFE is reachable on stable somehow?\nyes. We're not gating CTFE on whether there were stability (or any other static checks) failing. I've been abusing this since 1.0 to demo things on the stable compiler and never got around to fixing it. 
I don't even know if we have a tracking issue for it.\n:rofl: So what would be a possible approach here? Replace such consts by early enough that their MIR never actually gets evaluated? We might as well use this issue, IMO...\nThat would require us to mutate all MIR and potentially even HIR. Other possible solutions: poison the mir Body if const checks fail (by adding a field like or by just nuking the body by replacing it with ) and thus make allow const evaluation to check if the mir Body failed the checks and bail out before doing anything make the have an field similar to\nOn second thought, let's leave this issue for the ICE, and make a new one for \"we can trigger nightly-only const code on stable because we don't abort compilation early enough\":\nGiven that this ICE is pre-existing, as shown by , can we un-block ? I'd really like to see const_panic stabilized, and this seems like a minor issue to be blocking on.\nThat comment is an example of accidentally partially exposing an unstable feature on stable -- a separate bug tracked in I don't see how that's an argument for shipping that feature on stable when it is broken.", "commid": "rust_issue_66693", "tokennum": 515}], "negative_passages": []} {"query_id": "q-en-rust-efa1d12ec444e148f92cbf0828276880d7227bbc44c9bfd9f61d7fddf23d3455", "query": " struct Bug([u8; panic!(1)]); //~ ERROR panicking in constants is unstable // Note: non-`&str` panic arguments gained a separate error in PR #80734 // which is why this doesn't match the issue struct Bug([u8; panic!(\"panic\")]); //~ ERROR panicking in constants is unstable fn main() {}", "positive_passages": [{"docid": "doc-en-rust-21ce43edd15410cd7321881478b82e77d0a12e87f14b1a6a8e5e03c80674bc9f", "text": "In particular, applies to all unstable const features. Fixing this bug here is not even that hard I think: somewhere around here there needs to be a check that the argument has type . 
Re: , AFAIK that one is also blocked on figuring out how the diagnostics should look like, and a further concern (though maybe not blocking) is .\nI guess that might make this a non-issue, where it is proposed also would require as first argument.\nFor reference: , which was recently merged. The plan is to fix the behaviour of in the Rust 2021 edition. Given that this ICE is listed as a blocker for , does this mean that stabilizing that will have to wait for the next edition?\nNo, because breaking changes are allowed to be made to unstable features without moving to a new edition; that's kinda the whole idea. The 2021 edition will effectively just extend the same restriction to all invocations, const context or not.\nI'd like to tackle this but I'm not very experienced with the CTFE code. I get that we need to check from the that we've matched on in this branch and we need to match on the enum, but: For the first question, I see and that we probably want to check that the type of the tuple and that there's variants of that are , but do we stop the iteration when we get one that is a type? Is the projection list always guaranteed to terminate in a variant of that has a ? And once we have an instance of , do we just check that there's any_ projection in the list that has a ?\nyou can use to get the type of the . Then you can test its to be .", "commid": "rust_issue_66693", "tokennum": 352}], "negative_passages": []} {"query_id": "q-en-rust-efac5abc02554e1d108f26bd0ed4c7f75374265572cc071b27dfba475f1abee4", "query": "asm_args: cvs![\"-mthumb-interwork\", \"-march=armv4t\", \"-mlittle-endian\",], // minimum extra features, these cannot be disabled via -C features: \"+soft-float,+strict-align\".into(), // Also force-enable 32-bit atomics, which allows the use of atomic load/store only. // The resulting atomics are ABI incompatible with atomics backed by libatomic. 
features: \"+soft-float,+strict-align,+atomics-32\".into(), panic_strategy: PanicStrategy::Abort, relocation_model: RelocModel::Static,", "positive_passages": [{"docid": "doc-en-rust-9d3c81fe33119973c08c87410d7d37fcbfdb78fb83e02fe3f92be033b67d8c9f", "text": "If you try to compile the crate with , it fails with the following error: This seems to be a regression from to , and my best guess for the change that caused it is this pull request: $DIR/issue-89064.rs:17:16 | LL | let _ = A::foo::(); | ^^^ expected 0 generic arguments | note: associated function defined here, with 0 generic parameters --> $DIR/issue-89064.rs:4:8 | LL | fn foo() {} | ^^^ help: consider moving this generic argument to the `A` trait, which takes up to 1 argument | LL - let _ = A::foo::(); LL + let _ = A::::foo(); | help: remove these generics | LL - let _ = A::foo::(); LL + let _ = A::foo(); | error[E0107]: this associated function takes 0 generic arguments but 2 generic arguments were supplied --> $DIR/issue-89064.rs:22:16 | LL | let _ = B::bar::(); | ^^^ expected 0 generic arguments | note: associated function defined here, with 0 generic parameters --> $DIR/issue-89064.rs:8:8 | LL | fn bar() {} | ^^^ help: consider moving these generic arguments to the `B` trait, which takes up to 2 arguments | LL - let _ = B::bar::(); LL + let _ = B::::bar(); | help: remove these generics | LL - let _ = B::bar::(); LL + let _ = B::bar(); | error[E0107]: this associated function takes 0 generic arguments but 1 generic argument was supplied --> $DIR/issue-89064.rs:27:21 | LL | let _ = A::::foo::(); | ^^^----- help: remove these generics | | | expected 0 generic arguments | note: associated function defined here, with 0 generic parameters --> $DIR/issue-89064.rs:4:8 | LL | fn foo() {} | ^^^ error[E0107]: this associated function takes 0 generic arguments but 1 generic argument was supplied --> $DIR/issue-89064.rs:31:16 | LL | let _ = 42.into::>(); | ^^^^ expected 0 generic arguments | note: associated function 
defined here, with 0 generic parameters --> $SRC_DIR/core/src/convert/mod.rs:LL:COL | LL | fn into(self) -> T; | ^^^^ help: consider moving this generic argument to the `Into` trait, which takes up to 1 argument | LL | let _ = Into::>::into(42); | ~~~~~~~~~~~~~~~~~~~~~~~~~~~ help: remove these generics | LL - let _ = 42.into::>(); LL + let _ = 42.into(); | error: aborting due to 4 previous errors For more information about this error, try `rustc --explain E0107`. ", "positive_passages": [{"docid": "doc-en-rust-c6674e20ed90b7f60d5a3d2c25149d2f37743c5a187ef4fdd77b1627c50694d6", "text": " $DIR/dont-ice-on-invalid-lifetime-in-macro-definition.rs:5:17 | LL | e:: | ^ unknown prefix | = note: prefixed identifiers and literals are reserved since Rust 2021 help: consider inserting whitespace here | LL | e:: | + error: aborting due to 1 previous error ", "positive_passages": [{"docid": "doc-en-rust-eff82a61886b25b0f1dd4cf2300b1ef5995dc7fc7eccfb7efad8700de136ebcb", "text": " $DIR/issue-72574-1.rs:4:14 | LL | (_a, _x @ ..) => {} | ^^^^^^^ this is only allowed in slice patterns | = help: remove this and bind each tuple field independently help: if you don't need to use the contents of _x, discard the tuple's remaining fields | LL | (_a, ..) => {} | ^^ error: aborting due to previous error ", "positive_passages": [{"docid": "doc-en-rust-a48b7de958c52aaf8f6556c4a1124cd11db50916d01764712aa5be78a9e191d2", "text": "and I are working on diagnostics for the rest idiom in slice patterns () and came across this issue when an attempt is made to use it in tuples and tuple structs. IIUC, this is allowed: and this is not allowed (note the addition of ): The error messages for the tuple and tuple struct cases include the following incorrect information: This should state that the id binding to rest is not allowed. patterns are allowed. is only allowed in slice patterns, not in tuple or tuple struct patterns. This should state that the idiom is only allowed in slice patterns. 
$DIR/async-is-unwindsafe.rs:12:5 | LL | is_unwindsafe(async { | _____^_____________- | |_____| | || LL | || LL | || LL | || use std::ptr::null; ... || LL | || drop(cx_ref); LL | || }); | ||_____-^ `&mut (dyn Any + 'static)` may not be safely transferred across an unwind boundary | |_____| | within this `{async block@$DIR/async-is-unwindsafe.rs:12:19: 30:6}` | = help: within `{async block@$DIR/async-is-unwindsafe.rs:12:19: 30:6}`, the trait `UnwindSafe` is not implemented for `&mut (dyn Any + 'static)`, which is required by `{async block@$DIR/async-is-unwindsafe.rs:12:19: 30:6}: UnwindSafe` note: future does not implement `UnwindSafe` as this value is used across an await --> $DIR/async-is-unwindsafe.rs:26:18 | LL | let mut cx = Context::from_waker(&waker); | ------ has type `Context<'_>` which does not implement `UnwindSafe` ... LL | async {}.await; // this needs an inner await point | ^^^^^ await occurs here, with `mut cx` maybe used later note: required by a bound in `is_unwindsafe` --> $DIR/async-is-unwindsafe.rs:3:26 | LL | fn is_unwindsafe(_: impl std::panic::UnwindSafe) {} | ^^^^^^^^^^^^^^^^^^^^^^ required by this bound in `is_unwindsafe` error: aborting due to 2 previous errors error: aborting due to 1 previous error For more information about this error, try `rustc --explain E0277`.", "positive_passages": [{"docid": "doc-en-rust-c4a5273a933e8a31db83bc0532f1121df06f4a5ea514257a527d1d7d6a098f8b", "text": "https://crater-\nI'll need to re-check later but this seems to bisect to somehow.\nThe UI test changes from that PR appear to back this up, as confusing as it is.\nI'm not sure what's going on in that crate, if it built successfully by mistake, or how that PR interacts with it, so let's cc the PR author and reviewer for validation.\nAfter , no longer implements and , because the context now contains a which doesn't implement those traits. So this is an issue with the standard library and not the crate being built. 
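Until the standard-library side is resolved, the usual user-side workaround for a capture that is not `UnwindSafe` is `AssertUnwindSafe`. A minimal sketch (not the crate from the crater report) using a `&mut` capture, which is likewise not `UnwindSafe`:

```rust
use std::panic::{catch_unwind, AssertUnwindSafe};

fn main() {
    let mut data = vec![1, 2, 3];
    // `&mut Vec<i32>` is not `UnwindSafe`, so a closure capturing it by
    // mutable reference is rejected by `catch_unwind` unless the caller
    // explicitly asserts unwind safety with the wrapper.
    let result = catch_unwind(AssertUnwindSafe(|| {
        data.push(4);
        data.len()
    }));
    assert_eq!(result.unwrap(), 4);
}
```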
Repro: label -T-compiler +T-libs +S-has-mcve\nWG-prioritization assigning priority (). Noticing this on which says that was meant to be confined on nightly. Unsure about the fallout of this regression but for now marking as critical for tracking purposes. label -I-prioritize +P-critical\nRevert up:\nIsn't not unwind safe?\nThat line was by your PR, so while what you say is technically true, talking about what \"already happened\" when we are traveling back and forth through time via version control may be more confusing than helpful.\nThat line was already there. I the line below it though. Original PR:\nApologies, I guess I can't count numbers today? Hm, so, the full error of the regression is this: Note that it's trying to make a pointer implement . And for what it's worth, stable does indeed document is . Thus unfortunately that line you pointed to, as opposed to the one I thought you were referring to, is effectively just documenting that all are , and that will include regardless of whether it is is . And confusingly, , unlike .\nI have opened as an alternative.", "commid": "rust_issue_125193", "tokennum": 371}], "negative_passages": []} {"query_id": "q-en-rust-f1b0fae89bfd07c38c7b05307750bdd1d168d4d8c7858077dd60f3b0297dd2aa", "query": " Subproject commit d30da544a8afc5d78391dee270bdf40e74a215d3 Subproject commit c8a8767c56ad3d3f4eb45c87b95026936fb9aa35 ", "positive_passages": [{"docid": "doc-en-rust-0032656003d49486747818b78387703e152e7892a57c25caee873b32b608d81d", "text": "The AVR-Rust post continues to track AVR-LLVM, which is tracking upstream LLVM. That means we are running into changes from the future LLVM 4.0. I figured we could open up this issue now to track the changes required to stay compatible. This is the current list of upgrade hazards. Note that some of the text may be nonsensical as I don't truly grok all the internals of LLVM so some changes are wild stabs in the dark. 
It's also possible that some failed fix for earlier problems actually caused later problems: [x] DIDescriptorFlags became a real C++ enum and not a bitset () [x] An EH personality is required for every function that can unwind () [x] pass changed header and constructor name () [x] changed API () [x] Bitcode/ReaderWriter.h has been renamed / API changed () () [x] changed API () [x] changed API () [x] prefers over () [x] string type changes () [x] is not guaranteed to be null terminated () [x] changed return types () [x] Archive support changed error handling types () [x] Teach Rust about an LLVM 4.0 API change for creating debug info for global variables () [x] Check all errors in LLVMRustArchiveIterator API () [x] Upgrade emscripten (kripken/emscripten-fastcomp, kripken/emscripten-fastcomp-clang) [x] Alignment now represented with 32 bits instead of 64 () [x] API changed again () [x] now takes an instead of a and () [x] Symbol lookup error when linking dylibs () [x] Undefined behavior in tests () [x] Unoptimised builds of compilerbuiltins on ARM have references to () A WIP PR that performs the upgrade! Some issues whose resolution is believed to be blocked on LLVM 4.0: [ ]\nNext time we do this upgrade we'll need emscripten to do it to. 
It would be good if we could pull in a working llvm wasm backend in the same upgrade, but that may not be ready before the end of the year.\nFor the record, I am also working with a fork of LLVM that is tracking upstream (not AVR-LLVM but similar in spirit) and am starting to work on this now.", "commid": "rust_issue_37609", "tokennum": 547}], "negative_passages": []} {"query_id": "q-en-rust-f1b0fae89bfd07c38c7b05307750bdd1d168d4d8c7858077dd60f3b0297dd2aa", "query": " Subproject commit d30da544a8afc5d78391dee270bdf40e74a215d3 Subproject commit c8a8767c56ad3d3f4eb45c87b95026936fb9aa35 ", "positive_passages": [{"docid": "doc-en-rust-b825ff3dc6ae4176fd7ecfcfa06ff0a46c85d7baa86a087f5f86be03408e3df4", "text": "I'm looking through the AVR-Rust commits and I'd love to coordinate --- feel free to ping me on IRC () or via mail!\nNote that there is a regression reported against llvm 4.0 regarding the powi intrinsic on armhf targets: I'm explicitly mentioning this here, because there were previously problems with it, see and related issues. Also cc rust-lang-nursery/compiler-builtins, an issue about the same problem in the compiler-rt rust port.\nTo clarify what I think the is referring to (correct me if I'm wrong): In LLVM 3.9 and before, the calling convention LLVM used with the powidf2 and powisf2 intrinsics was basically the C calling convention. In LLVM 4.0, the backend was corrected to expect these intrinsics to have the aapcs calling convention. One person claims that aapcs is expected for powisf2 and powidf2, while another claims that the normal C calling convention should be used. So it seems that if the LLVM change is not reverted (expected powisf2 to be aapcs) then we need to update our definition. This is going to be a huge PITA for supporting multiple LLVM versions as well.
We need to select the calling convention depending on the LLVM version as well...\nAny objections to doing that coordination here on GitHub, or on one of our forks?\nI don't have strong preferences for the communication channel.\nshould address what is listed as API changes\" above (the core of the change is in how attributes are represented).\nI would prefer to upstream LLVM 4.0 compatibility patches sooner rather than later. That includes my own work obviously, but I'm also looking at the commits I cherry-picked from AVR-Rust. In particular, looks like a reasonable simplification even independent of LLVM 4.0 compatibility, and looks like a nice little self-contained fix. Could you create PRs for those? (If you don't have the time but are otherwise fine with upstreaming, I'd be happy to manage that, giving proper credit of course.)\nAbsolutely! I wasn't really sure how much we could expect LLVM 4 to change as it goes towards stable / how much churn the Rust maintainers would want to put up with. I'll get those pulled out and submitted soon. I think I'm also starting to understand some of the silly error handling that the new LLVM does.", "commid": "rust_issue_37609", "tokennum": 538}], "negative_passages": []} {"query_id": "q-en-rust-f1b0fae89bfd07c38c7b05307750bdd1d168d4d8c7858077dd60f3b0297dd2aa", "query": " Subproject commit d30da544a8afc5d78391dee270bdf40e74a215d3 Subproject commit c8a8767c56ad3d3f4eb45c87b95026936fb9aa35 ", "positive_passages": [{"docid": "doc-en-rust-d8c508b539f75924a09b687b12d456595d2134b4f4d573dca0ee6458541c00a8", "text": "Awesome, thanks in advance!\nBTW the PR reference for \"llvm::AttrBuilder changed API\" is wrong, it points at this issue instead of\nDoes anyone have ETA? I'd like to know when I can use AVR. :)\nYou should follow along at avr-rust/rust for the AVR-specific issues. 
Upgrading to LLVM 4 will not magically yield AVR support.\nOk, thank you!\nI've created for the API change.\nI've also created for another debug info API change.\nAfter putting all of those patches on top of master, was successfully built. I got several undefined references, most likely due to my local LLVM setup.\nI think the next step after all those patches land is to allow LLVM 4.0 in the configure/rustbuild scripts and then actually update the submodule.\nWe'll also need to fix that AAPCS bug first, I just noticed it\nDon't forget : I think that means that the Rust fork of LLVM is following the Emscripten fork of LLVM, thus Emscripten needs to upgrade to LLVM 4.0 / master before Rust can.\nAnd I'm hoping to supersede it with\nI just ran into which changed the alignment from u64 to u32. Will think about a proper fix later, unless someone beats me to it (but should merge first to avoid further conflicts).\nAnd it says So, that will be \"exciting\".\nIs there much left blocking an upgrade?\nThis: Has to be done too but I don't know if it's currently being worked on. Any info on that ^?\nNo nobody is actively working on an emscripten upgrade. Since even when that is done it will probably take a lot of work to land an LLVM upgrade, there's probably more we can do to prepare. Once we decide we want to do it, somebody can prepare a patch to throw at the bots. It seems likely stuff will not work and it will be a slog to fix it all, which will give us time. We don't need emcc to generate wasm stds, just a fairly small patch on our LLVM branch that will hopefully port without much difficulty. So if somebody feels motivated, it's not too soon to start going after it. 
As to the emscripten upgrade, we should at least let know that we're starting to think seriously about doing the upgrade, and need emscripten to upgrade LLVM too.", "commid": "rust_issue_37609", "tokennum": 539}], "negative_passages": []} {"query_id": "q-en-rust-f1b0fae89bfd07c38c7b05307750bdd1d168d4d8c7858077dd60f3b0297dd2aa", "query": " Subproject commit d30da544a8afc5d78391dee270bdf40e74a215d3 Subproject commit c8a8767c56ad3d3f4eb45c87b95026936fb9aa35 ", "positive_passages": [{"docid": "doc-en-rust-3f7c24e04e18bdd9163f17d7c308a4e17719371a2d0e199a0b9e560bd6cb202e", "text": "It seems like that could be a big effort. Another alternative is to use the LLVM wasm backend, but I still have not heard that it is generally usable yet. cc\nI can look into the effort needed to update emscripten's fastcomp LLVM version, but I need to know a commit to aim for. Sounds like the goal here is 4.0, which hasn't released yet, so I guess I have to wait on that?\nSo far as AVR is concerned, any commit within the last month will do. There will likely be a few more months until 4.0 is actually tagged (every ~6 months a new release is made). It is quite common for projects to track LLVM master, which I'm personally a fan of.\nBut is it common to release snapshots from LLVM master? 
When LLVM itself only releases every 6 months, it seems incongruous for Rust to pick arbitrary snapshots in its 6-week release cycle.\nLLVM 4.0 will branch in a week and the final release will happen in late feburary (schedule says 21st for final tag, but this is flexible IIRC).\nFrom the llvm-dev mailing list:\nbranch has happened\nAn update on the \"powidf2 / powisf2 calling convention changed\" situation: The regression has been fixed on llvm master () however from what I can tell the fixes are not yet in the 4.0 branch, although the fix may be backported ().\nI've started working on updating emscripten kripken/emscripten-fastcomp kripken/emscripten-fastcomp-clang I've also created a new bug requesting that the regression fix be merged into 4.0 .\nsays the emcc LLVM upgrade is ready to go. or are we prepared to begin the Rust LLVM upgrade? I think what needs to happen is to submit the Rust PR and start throwing it against the bots. Once it looks like that is close to landing then we'll have to coordinate both project's upgrades to minimize the breakage window where rustc and emcc have different backends. I'm not sure how best to do that - we might just try to upgrade as simultaneously as possible and accept up to a few days of breakage, or we might ask emscripten to stage their build such that they can swap their incoming branch right after we land to master.", "commid": "rust_issue_37609", "tokennum": 526}], "negative_passages": []} {"query_id": "q-en-rust-f1b0fae89bfd07c38c7b05307750bdd1d168d4d8c7858077dd60f3b0297dd2aa", "query": " Subproject commit d30da544a8afc5d78391dee270bdf40e74a215d3 Subproject commit c8a8767c56ad3d3f4eb45c87b95026936fb9aa35 ", "positive_passages": [{"docid": "doc-en-rust-d08d57ad91dcd821a66e6bf4a09770d40ddb42dd5816a251300a62a967ffb4ed", "text": "needs to know the exact LLVM commit we're moving to as well, and that will become clear once the PR starts working through the queue. The stars are aligning. Let's make this happen. 
Thanks for doing the fastcomp upgrade.\nWe are not fully compatible with LLVM 4.0. I have heard from at least one person (on IRC) that building against stock LLVM 4.0 (RC) does not work. I haven't personally inspected all failures, but there's at least two things still missing: (and possibly ). It shouldn't be a conceptually difficult patch to apply the equivalent changes to rustc, but possibly a big diff. I haven't digested the implications yet, but it might be a relatively simple change localized to the function in that currently calls .\nThe fix has been merged into the 4.0 release (). I'm going to try and build Rust against stock LLVM 4.0 and figure out what needs modifying.\nOn top of what mentioned: now takes an instead of a and ()\nI've got a half-baked patch to use the new signature for I'm not familiar enough with Rust internals to finish it off. I ran into a problem in as it is given a but in order to use we need a . We need to call in order to build an instance of .\nI've opened for the change that mentioned. I've also opened for the 32-bit alignment change. I believe that is the only thing we need to update aside from these two PRs in order to get the compiler to successfully be built with LLVM 4.0.\nSo to summarize where we are with LLVM 4 (correct me if I'm wrong): 4.0 will be released in about 1 week (2/21). are no known issues with supporting LLVM 4.0, but there are probably issues that will be discovered once we try to use it in nightlies. runs on in-tree LLVM + LLVM 3.7 (so PRs haven't actually been tested on LLVM 4.0 yet). 
support LLVM back to ~3.6(?", "commid": "rust_issue_37609", "tokennum": 465}], "negative_passages": []} {"query_id": "q-en-rust-f1b0fae89bfd07c38c7b05307750bdd1d168d4d8c7858077dd60f3b0297dd2aa", "query": " Subproject commit d30da544a8afc5d78391dee270bdf40e74a215d3 Subproject commit c8a8767c56ad3d3f4eb45c87b95026936fb9aa35 ", "positive_passages": [{"docid": "doc-en-rust-6ee5ceb18eabad3b778a81eed6b79cb6832e65aaa7bebd81d106c9ad178e134d", "text": ")~ 3.7 and will continue to support it until either: a. It's so old that it can be removed without bothering package maintainers b. We need at least version X in order to fix an important bug c. The API is so different from version Y that it's a huge pain to support will open a PR to update the submodule shortly after release but it will take a month to land due to: a. Waiting for emscripten ( -) b. Bugs in upstream c. Missed LLVM API changes d. Making bors happy e. All of the above\nThanks for the summary The only correction I'd make is that in your point (4) you can s/3.6/3.7/, but otherwise it's accurate :) If you'd like to start preparing a PR to update to LLVM 4.0 I don't think we need to block waiting for LLVM itself to release 4.0. Currently only one Travis builder runs on each PR but we could temporarily allow more builders running on an LLVM 4.0 PR to start weeding out bugs before it hits\nI gave it a go but I don't think my git-fu (or patience) is strong enough to handle a big merge like this. What I can do is try to run rustc with upstream LLVM and file/fix issues as I find them. There is at least one issue remaining as of (details to come).\nWhoever ends up doing the LLVM upgrade, please also upgrade the compiler-rt submodule so that the sanitizer related LLVM passes don't diverge from what the sanitizer runtimes expect.\nIn fact, if you upgrade before the release, they usually mention your project in the release notes. :)\nSo whats the next issue(s) to subscribe to in the race-to-AVR? 
:)\nAs says on , we can integrate the latest Rust master into the avr-rust compiler. After that, we can spend some time getting compiling for AVR. Once that is working, we could then start looking into upstreaming the AVR target to Rust.\nAVR-specific issues can be raised / tracked in For example, there are two that I'm aware of right now:\nhas been closed, so that can be checked off in the top post.", "commid": "rust_issue_37609", "tokennum": 499}], "negative_passages": []} {"query_id": "q-en-rust-f1bb32c3235f87b1e2ed4e27910efb712bb4519b417a29063c3b5aa3588d91d6", "query": ".map(|res| (res, extra_fragment.clone())), type_ns: match self.resolve( path_str, disambiguator, TypeNS, ¤t_item, base_node,", "positive_passages": [{"docid": "doc-en-rust-61432f56219da73b3d8b56806dcc1faf500f0340871ae45fa63f399bd89a3fb3", "text": "I tried to document this code: I expected to see this happen: charstd::char` points to module page of char type points to char primitive page Instead, this happened: type points to module page of char :\nI think the problem is that modules are in the type namespace, so is doing exactly what you are asking it for (the module, since it has higher priority). Since the built-in types scope is the lowest precedence, perhaps there needs to be a way to access it directly? Maybe something like or ? Or maybe should skip over modules, and keep looking deeper in the scope hierarchy? (cc )\nOh hmm I see what you mean. There are : Types, values, and macros. Since modules and types are in the same namespace, it uses the first one it finds. Maybe we could work around this by first resolving in an empty module (so it has nothing in scope), then only falling back to the current scope if we don't find something there?\nI'm surprised this hasn't caused trouble before now. 
Does the root of never use the primitive?\nI don't know how it works, but the resolver is smart enough to know that in a type context won't match a module (like , the won't match a module named \"char\". Maybe it has something to do with late-resolution which has a to give it some context?\nAdding that would be more of area ... I can give it a shot though.\nIt looks like doesn't even return a resolution and only records it for later use ... I'm going to go with the empty module workaround and we can fix it later if it turns out to cause issues.\nThe double resolve seems hacky. We could probably make type@ specifically not resolve to a module, perhaps? Also, note that modules are in all namespaces\nHow would that work? We could skip any module resolutions, but since modules have precedence over types I don't know how we'd get the primitive type to resolve afterwards.\nOh hold on we have a function already. We can look at directly.\nThis doesn't have anything to do with primitives, this has to do with glob imports vs non-glob imports. You can get the same issue with the following:\nThis also doesn't have to do with modules being \"resolved first\" or anything. Things that are non-glob in-scope are resolved first.", "commid": "rust_issue_74063", "tokennum": 511}], "negative_passages": []} {"query_id": "q-en-rust-f1bb32c3235f87b1e2ed4e27910efb712bb4519b417a29063c3b5aa3588d91d6", "query": ".map(|res| (res, extra_fragment.clone())), type_ns: match self.resolve( path_str, disambiguator, TypeNS, ¤t_item, base_node,", "positive_passages": [{"docid": "doc-en-rust-1bad049a447735444f3ad60bbaefb3440c9d43c0a1ae5f6aabc0f7f34e628fe7", "text": "Actually, my bad: This does have to do with primitives in the sense that primitives are the only thing that still work when shadowed because of some special code in checking the primitive tables. 
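The shadowing behaviour under discussion is easy to reproduce. In this hypothetical sketch (the module and constant are made up), an in-scope item named `char` wins over the primitive in the type namespace, which is why rustdoc needs an explicit disambiguator such as `prim@char` to reach the primitive:

```rust
// A user-defined module that shadows the `char` primitive in the
// type namespace.
pub mod char {
    pub const KIND: &str = "module";
}

/// In rustdoc, a plain [`char`] link here resolves to the module above,
/// because in-scope items take precedence over built-in primitives;
/// `prim@char` would force the primitive instead.
pub fn kind() -> &'static str {
    // This path also resolves to the module, not the primitive.
    char::KIND
}

fn main() {
    assert_eq!(kind(), "module");
}
```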
I think when the string matches one of the primitive table entries for TypeNS lookups that were explicitly annotated with type@ and resolve to a module we can hack this in. In the future we might want to tighten up this code so that e.g. can't resolve to an enum, etc.\ndo you mind if I fix at the same time and resolve primitives even before modules? That seems like the less surprising behavior to me, the module can still be disambiguated with .\nI'm a bit wary of doing this because having the char module in scope is not a common situation, and it will get weirder for people who have a different char module. I think we should resolve to whatever is in scope, but if is explicitly mentioned we can then force resolve to the primitive.", "commid": "rust_issue_74063", "tokennum": 206}], "negative_passages": []} {"query_id": "q-en-rust-f1d239be0dfe5d0e0f474dbc4722998542c559d16af3ddf44e3482eb6bc06c21", "query": "assert_eq!(find_best_match_for_name(&input, Symbol::intern(\"1111111111\"), None), None); let input = vec![Symbol::intern(\"aAAA\")]; let input = vec![Symbol::intern(\"AAAA\")]; assert_eq!( find_best_match_for_name(&input, Symbol::intern(\"AAAA\"), None), Some(Symbol::intern(\"aAAA\")) find_best_match_for_name(&input, Symbol::intern(\"aaaa\"), None), Some(Symbol::intern(\"AAAA\")) ); let input = vec![Symbol::intern(\"AAAA\")]; // Returns None because `lev_distance > max_dist / 3` assert_eq!(find_best_match_for_name(&input, Symbol::intern(\"aaaa\"), None), None); let input = vec![Symbol::intern(\"AAAA\")]; assert_eq!( find_best_match_for_name(&input, Symbol::intern(\"aaaa\"), Some(4)), Some(Symbol::intern(\"AAAA\"))", "positive_passages": [{"docid": "doc-en-rust-68c6f35e025cebcb8286b1360c5297d850ed364c3a6a9f39a1bb805bfd2fed77", "text": "Given the following code: The current output is: $DIR/generics.rs:17:43 --> $DIR/generics.rs:18:43 | LL | f2: extern \"C-cmse-nonsecure-call\" fn(impl Copy, u32, u32, u32) -> u64, | ^^^^^^^^^", "positive_passages": [{"docid": 
"doc-en-rust-47f681e79359d4538a29b4fe6560472bcd9c6a462701bf8783cc73cb7468c995", "text": " $DIR/future-prelude-collision-generic.rs:28:5 | LL | Generic::from_iter(1); | ^^^^^^^^^^^^^^^^^^ help: disambiguate the associated function: ` as MyFromIter>::from_iter` | note: the lint level is defined here --> $DIR/future-prelude-collision-generic.rs:5:9 | LL | #![warn(rust_2021_prelude_collisions)] | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ = warning: this is accepted in the current edition (Rust 2018) but is a hard error in Rust 2021! = note: for more information, see issue #85684 warning: trait-associated function `from_iter` will become ambiguous in Rust 2021 --> $DIR/future-prelude-collision-generic.rs:31:5 | LL | Generic::::from_iter(1); | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ help: disambiguate the associated function: ` as MyFromIter>::from_iter` | = warning: this is accepted in the current edition (Rust 2018) but is a hard error in Rust 2021! = note: for more information, see issue #85684 warning: trait-associated function `from_iter` will become ambiguous in Rust 2021 --> $DIR/future-prelude-collision-generic.rs:34:5 | LL | Generic::<_, _>::from_iter(1); | ^^^^^^^^^^^^^^^^^^^^^^^^^^ help: disambiguate the associated function: ` as MyFromIter>::from_iter` | = warning: this is accepted in the current edition (Rust 2018) but is a hard error in Rust 2021! 
= note: for more information, see issue #85684 warning: 3 warnings emitted ", "positive_passages": [{"docid": "doc-en-rust-165d89f539c3ce756ddff3a662687fb727b2b42ab91cc282c32b5e13623e74e1", "text": "For generic types, the prelude collision lint can produce suggestions that do not compile: Code: Suggestion from the prelude collision lint: Error after applying the suggestion: $DIR/clone-on-unconstrained-borrowed-type-param.rs:3:5 | LL | fn wat(t: &T) -> T { | - - expected `T` because of return type | | | this type parameter LL | t.clone() | ^^^^^^^^^ expected type parameter `T`, found `&T` | = note: expected type parameter `T` found reference `&T` note: `T` does not implement `Clone`, so `&T` was cloned instead --> $DIR/clone-on-unconstrained-borrowed-type-param.rs:3:5 | LL | t.clone() | ^ help: consider restricting type parameter `T` | LL | fn wat(t: &T) -> T { | +++++++ error[E0308]: mismatched types --> $DIR/clone-on-unconstrained-borrowed-type-param.rs:9:5 | LL | fn wut(t: &Foo) -> Foo { | --- expected `Foo` because of return type LL | t.clone() | ^^^^^^^^^ expected struct `Foo`, found `&Foo` | note: `Foo` does not implement `Clone`, so `&Foo` was cloned instead --> $DIR/clone-on-unconstrained-borrowed-type-param.rs:9:5 | LL | t.clone() | ^ help: consider annotating `Foo` with `#[derive(Clone)]` | LL | #[derive(Clone)] | error: aborting due to 2 previous errors For more information about this error, try `rustc --explain E0308`. ", "positive_passages": [{"docid": "doc-en-rust-4fd363e3c2caaa69ef789bf01f16038e09b968a59dfd250a6cc32eb3d0bfb237", "text": "This gives the very unintuitive error: You expect to return , i.e. a . It mysteriously returns an instead. Because has a clone impl, it too, can be cloned (really, copied), returning a reference of the same lifetime. This is an operation you rarely want to do explicitly, except when dealing with generics. 
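The fallback is easy to demonstrate (`Foo` here stands in for the reporter's type): because `Foo` has no `Clone` impl, method resolution falls through to the blanket `impl Clone for &T`, and `.clone()` copies the reference instead of producing an owned value.

```rust
struct Foo(u32);

fn main() {
    let foo = Foo(1);
    let r: &Foo = &foo;
    // `Foo` does not implement `Clone`, so this resolves to
    // `<&Foo as Clone>::clone`, and `cloned` is another `&Foo`,
    // not an owned `Foo`.
    let cloned = r.clone();
    assert!(std::ptr::eq(cloned, r));
    assert_eq!(cloned.0, 1);
}
```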
This error can crop up when you've forgotten to implement on , and have a reference (which could have been silently inserted using , as in the code above). However, it's not very obvious what's happening here. It can also crop up We should hint that itself isn't cloneable and thus we fell back to cloning here. This could be a lint on hitting the clone impl for references, but lints run too late. I'm not sure how this can be implemented, since we need to see the source of a value and that's a bit tricky.\ncc\nInteresting. This does seem like it would require some pretty special-case code to achieve, but I agree it'd be a nice case to capture.\nA relatively easy way to catch most instances of this is to Pick up the expression which has a type error like this If it is a method call, check if that method call is a clone() call resolving to If it is a variable, find the variable's def If the def is a local def, look at the init expr, check if it's a bad clone This will miss things, but I think the majority of such bugs will be due to this. As it stands this feels like an incredibly easy case to hit (we've all forgotten to decorate structs).\ncc\nCC\nAs reported in : This reports:\nCurrent output is even worse for the original report: The last comment's code is marginally different:\nCurrent output: The last case should suggest constraining . Edit: that was easier than I thought (we already had the suggestion machinery): \"Screen // Regression test for #72051, hang when resolving regions. 
// check-pass // edition:2018 pub async fn query<'a>(_: &(), _: &(), _: (&(dyn std::any::Any + 'a),) ) {} fn main() {} ", "positive_passages": [{"docid": "doc-en-rust-5d43d4b51e473d6b281e1d8ec940587a45f47bfe6076e417a10f9c1d62925477", "text": "() Errors: Backtrace: modify labels: +I-hang +T-compiler +C-bug +A-lifetimes\nsomewhat minimized: To replicate this I needed: async two other arguments by ref at least one arg must not be bound to , compiles quickly the problematic argument must be contain an indirection: compiles fine. and also freeze. and a trait object containing at least one non marker trait explicitly bound to . and are both ok\nrepro is a regression in 1.43. It compiles immediately with 1.39 through 1.42.\nI am currently bisecting this using\nregression in modify labels: -E-needs-bisection\nFYI who had context on .", "commid": "rust_issue_72051", "tokennum": 150}], "negative_passages": []} {"query_id": "q-en-rust-f4d6ec07215a1a32663b89912b6987b6ce425def3d306872ad55a6423dedac7f", "query": "``` ptr::read(&v as *const _ as *const SomeType) // `v` transmuted to `SomeType` ``` Note that this does not move `v` (unlike `transmute`), and may need a call to `mem::forget(v)` in case you want to avoid destructors being called. \"##, E0152: r##\"", "positive_passages": [{"docid": "doc-en-rust-c58f7b7660477a36d4f4e0b97257ff6dd013746ba8d66ba6cc4a6a220b7b64ed", "text": "The message for E0139 suggests replacing with . Actually performing such a replacement will result in double drops, since the version consumes but the version does not. 
(It presumably needs a or something.)", "commid": "rust_issue_29922", "tokennum": 41}], "negative_passages": []} {"query_id": "q-en-rust-f4d6ec07215a1a32663b89912b6987b6ce425def3d306872ad55a6423dedac7f", "query": "``` ptr::read(&v as *const _ as *const SomeType) // `v` transmuted to `SomeType` ``` Note that this does not move `v` (unlike `transmute`), and may need a call to `mem::forget(v)` in case you want to avoid destructors being called. \"##, E0152: r##\"", "positive_passages": [{"docid": "doc-en-rust-b42b6ef93bee4d7180bb7d77309475fb36bc4cc78a287c17c9828feb5e3fa4cb", "text": "On currently nightly, I get: () Yet the description for this error was merged weeks ago:\ncc , in the future when adding new diagnostics mods be sure to add a call", "commid": "rust_issue_29665", "tokennum": 38}], "negative_passages": []} {"query_id": "q-en-rust-f51dd86c6fe0650bd4639055873731075bc189ae821d3087bea34461a4b6298a", "query": "} } fn type_bound( &self, ty: Ty<'tcx>, visited: &mut SsoHashSet>, ) -> VerifyBound<'tcx> { match *ty.kind() { ty::Param(p) => self.param_bound(p), ty::Projection(data) => self.projection_bound(data, visited), ty::FnDef(_, substs) => { // HACK(eddyb) ignore lifetimes found shallowly in `substs`. // This is inconsistent with `ty::Adt` (including all substs), // but consistent with previous (accidental) behavior. // See https://github.com/rust-lang/rust/issues/70917 // for further background and discussion. let mut bounds = substs .iter() .filter_map(|child| match child.unpack() { GenericArgKind::Type(ty) => Some(self.type_bound(ty, visited)), GenericArgKind::Lifetime(_) => None, GenericArgKind::Const(_) => Some(self.recursive_bound(child, visited)), }) .filter(|bound| { // Remove bounds that must hold, since they are not interesting. 
!bound.must_hold() }); match (bounds.next(), bounds.next()) { (Some(first), None) => first, (first, second) => VerifyBound::AllBounds( first.into_iter().chain(second).chain(bounds).collect(), ), } } _ => self.recursive_bound(ty.into(), visited), } } fn param_bound(&self, param_ty: ty::ParamTy) -> VerifyBound<'tcx> { debug!(\"param_bound(param_ty={:?})\", param_ty);", "positive_passages": [{"docid": "doc-en-rust-cc977dabed974d4bcbd1cb2e4f8886eb7e1f6cae7a3aa4a4ee14b49a9dba2970", "text": " $DIR/feature-gate-thread_local.rs:18:1 | 18 | #[thread_local] //~ ERROR `#[thread_local]` is an experimental feature", "positive_passages": [{"docid": "doc-en-rust-a286439ec637b6a7e4cb759b48a2616bb0696b1ec7771f286f775eb9bc9b74ea", "text": "I will like to fix this. What should be the better warning message here? Or I just need to remove the last sentence?\nIt seems fine to just remove the sentence.\nFixed by", "commid": "rust_issue_47755", "tokennum": 37}], "negative_passages": []} {"query_id": "q-en-rust-f585893ba90bff36b6dd1bd6682787492633b47be18625c11992edae7bb251a7", "query": "/// let num_trailing = unsafe { cttz_nonzero(x) }; /// assert_eq!(num_trailing, 3); /// ``` #[rustc_const_unstable(feature = \"const_cttz\", issue = \"none\")] #[rustc_const_stable(feature = \"const_cttz\", since = \"1.53.0\")] pub fn cttz_nonzero(x: T) -> T; /// Reverses the bytes in an integer type `T`.", "positive_passages": [{"docid": "doc-en-rust-085409dcef93908a429af332ac7e4f51617f3e573aa37899e533c49777a5f9ea", "text": "Tracking issue for trailingzeros and leadingzeros for non zero types as introduced by . The feature gate for the issue is . On many architectures, this functions can perform better than trailingzeros/leading_zeros on the underlying integer type, as special handling of zero can be avoided.\nIt seems on my (common) architecture it doesn't give an advantage: : : So the question is: is this NonZero method worth keeping? 
Are the architectures where it gives a performance advantage common enough?\nbut as you show in your example, if you want to make a distributable binary and therefore do not set the target-cpu, there is a big difference\nThe architecture I'm using is quite standard. A distributable binary could probably use it as \"universal target\". I don't know how common an instruction like tzcnt is going to be in the future. If you think a similar instruction will be sufficiently uncommon then please ignore my comments here :-)\nmerge cc\nTeam member has proposed to merge this. The next step is review by the rest of the tagged team members: [ ] [x] [x] [x] [x] [x] No concerns currently listed. Once a majority of reviewers approve (and at most 2 approvals are outstanding), this will enter its final comment period. If you spot a major issue that hasn't been raised at any point in this process, please speak up! See for info about what commands tagged team members can give me.\n:bell: This is now entering its final comment period, as per the . :bell:\nThe final comment period, with a disposition to merge, as per the , is now complete. As the automated representative of the governance process, I would like to thank the author for their work and everyone else who contributed. The RFC will be merged soon.\nstarted to make the stabilization PR but hit a problem: I think that constcttz needs to be stabilized for this to be stabilized, from //! If an intrinsic is supposed to be used from a with a attribute, //! the intrinsic's attribute must be , too. Such a change should not be done //! without T-lang consultation, because it bakes a feature into the language that cannot be //!
replicated in user code without compiler support.", "commid": "rust_issue_79143", "tokennum": 482}], "negative_passages": []} {"query_id": "q-en-rust-f5a29efa4da0cfd210a12f483def8dcc3e3f9a2e20830b24a7d2cd887f062964", "query": " error[E0668]: malformed inline assembly --> $DIR/issue-62046.rs:8:9 | LL | asm!(\"nop\" : \"+r\"(\"r15\")); | ^^^^^^^^^^^^^^^^^^^^^^^^^^ | = note: this error originates in a macro (in Nightly builds, run with -Z macro-backtrace for more info) error: aborting due to previous error For more information about this error, try `rustc --explain E0668`. ", "positive_passages": [{"docid": "doc-en-rust-ab673e532e457140214dc1a9d6605ec7269c0e2a977a3bc6ded94c5dc20eb855", "text": "Was working on bug and tried to compile the program below. The inline assembly may be bogus: I make no guarantees of its correctness. crashed with this output: : (but also happened on a nightly from late May)\nThe inline assembly is in fact bogus. Here's a minimal example that crashes the compiler. The problem is likely using the string operand (by mistake, because I'm dumb).\n - [wasm32-unknown-unknown](platform-support/wasm32-unknown-unknown.md) - [wasm64-unknown-unknown](platform-support/wasm64-unknown-unknown.md) - [*-win7-windows-msvc](platform-support/win7-windows-msvc.md) - [x86_64-fortanix-unknown-sgx](platform-support/x86_64-fortanix-unknown-sgx.md)", "positive_passages": [{"docid": "doc-en-rust-9a994ebf459940df871763a70de91670c66bd2d2064028ac59873b47fceaf38d", "text": "This can be replicated with a minimal : Compile it to WASM with Then, check if the generated WASM uses reference types: $DIR/issue-66968-suggest-sorted-words.rs:3:20 | LL | println!(\"{}\", a_variable_longer_name); | ^^^^^^^^^^^^^^^^^^^^^^ help: a local variable with a similar name exists: `a_longer_variable_name` error: aborting due to previous error For more information about this error, try `rustc --explain E0425`. 
", "positive_passages": [{"docid": "doc-en-rust-f0a7e9ef309afa13524b79e1e4eb6b2936bb23e506865eeeb3c4450a51ecd325", "text": "On identifiers that have more than one word, it is relatively common to write them in the wrong order ( \u2192 ). These are normally not found by Levenshtein distance checks, but we could do a basic \"split on ``, sort and join before comparison\" so that we could suggest the right identifier. $DIR/issue-68062-const-extern-fns-dont-need-fn-specifier.rs:5:25 | LL | const extern \"Rust\" PUT_ANYTHING_YOU_WANT_HERE bug() -> usize { 1 } | ^^^^^^^^^^^^^^^^^^^^^^^^^^ expected `fn` error[E0658]: `const extern fn` definitions are unstable --> $DIR/issue-68062-const-extern-fns-dont-need-fn-specifier.rs:5:5 | LL | const extern \"Rust\" PUT_ANYTHING_YOU_WANT_HERE bug() -> usize { 1 } | ^^^^^^^^^^^^ | = note: for more information, see https://github.com/rust-lang/rust/issues/64926 = help: add `#![feature(const_extern_fn)]` to the crate attributes to enable error: aborting due to 2 previous errors For more information about this error, try `rustc --explain E0658`. 
", "positive_passages": [{"docid": "doc-en-rust-4b896e75ea201d5026a72d90704af60b8d8f16ff87fb6c41e963080cb6fd89bb", "text": "The following program compiles, and prints :\nAs pointed out by on discord , this also works on stable:\nRegression introduced in 1.40.0, specifically this line:\nI have a fix in", "commid": "rust_issue_68062", "tokennum": 42}], "negative_passages": []} {"query_id": "q-en-rust-f947d7096af983d401ed027d151f494a1c409e0fcda4f29248055af515354876", "query": "trace!(\"Running RemoveUnneededDrops on {:?}\", body.source); let did = body.source.def_id(); let param_env = tcx.param_env(did); let param_env = tcx.param_env_reveal_all_normalized(did); let mut should_simplify = false; let (basic_blocks, local_decls) = body.basic_blocks_and_local_decls_mut();", "positive_passages": [{"docid": "doc-en-rust-e6a6c868c4a82b9d69b09a43e032398fcfee12d6a72ad15fe0e145f111e0caa3", "text": "Previously: Now: $DIR/double-opaque-parent-predicates.rs:3:12 | LL | #![feature(generic_const_exprs)] | ^^^^^^^^^^^^^^^^^^^ | = note: see issue #76560 for more information = note: `#[warn(incomplete_features)]` on by default warning: 1 warning emitted ", "positive_passages": [{"docid": "doc-en-rust-577b2d31afb6271cb2335741b57a534d2c858d544dfcacd2268b43d5423d6d50", "text": " $DIR/double-opaque-parent-predicates.rs:3:12 | LL | #![feature(generic_const_exprs)] | ^^^^^^^^^^^^^^^^^^^ | = note: see issue #76560 for more information = note: `#[warn(incomplete_features)]` on by default warning: 1 warning emitted ", "positive_passages": [{"docid": "doc-en-rust-66fc9963abdafc9e6929ffd62b615687417c9bbafc1ca99f5237423c8cbe5f5c", "text": " $DIR/issue-119463.rs:11:5 | LL | fn nonexistent() {} | ^^^^^^^^^^^^^^^^^^^ not a member of trait `issue_119463_extern::PrivateTrait` error[E0603]: trait `PrivateTrait` is private --> $DIR/issue-119463.rs:7:27 | LL | impl issue_119463_extern::PrivateTrait for S { | ^^^^^^^^^^^^ private trait | note: the trait `PrivateTrait` is defined here --> 
$DIR/auxiliary/issue-119463-extern.rs:1:1 | LL | trait PrivateTrait { | ^^^^^^^^^^^^^^^^^^ error: aborting due to 2 previous errors Some errors have detailed explanations: E0407, E0603. For more information about an error, try `rustc --explain E0407`. ", "positive_passages": [{"docid": "doc-en-rust-8e5cfb6988ed7e1692af3b99f6a32dfb23f144534166ed1c5c1c1042a808dc43", "text": " $DIR/issue-119463.rs:11:5 | LL | fn nonexistent() {} | ^^^^^^^^^^^^^^^^^^^ not a member of trait `issue_119463_extern::PrivateTrait` error[E0603]: trait `PrivateTrait` is private --> $DIR/issue-119463.rs:7:27 | LL | impl issue_119463_extern::PrivateTrait for S { | ^^^^^^^^^^^^ private trait | note: the trait `PrivateTrait` is defined here --> $DIR/auxiliary/issue-119463-extern.rs:1:1 | LL | trait PrivateTrait { | ^^^^^^^^^^^^^^^^^^ error: aborting due to 2 previous errors Some errors have detailed explanations: E0407, E0603. For more information about an error, try `rustc --explain E0407`. ", "positive_passages": [{"docid": "doc-en-rust-71e7458504b5fce5e43cf3f1535504eed30918b4c3a9049912f4a8419f2d901a", "text": "rustcmiddle[]::query::erase::Erased<[u8; 8usize]23: 0x7f6adef1f8ec - rustcquerysystem[]: :query::plumbing::tryexecutequery:: error[E0407]: method `nonexistent` is not a member of trait `issue_119463_extern::PrivateTrait` --> $DIR/issue-119463.rs:11:5 | LL | fn nonexistent() {} | ^^^^^^^^^^^^^^^^^^^ not a member of trait `issue_119463_extern::PrivateTrait` error[E0603]: trait `PrivateTrait` is private --> $DIR/issue-119463.rs:7:27 | LL | impl issue_119463_extern::PrivateTrait for S { | ^^^^^^^^^^^^ private trait | note: the trait `PrivateTrait` is defined here --> $DIR/auxiliary/issue-119463-extern.rs:1:1 | LL | trait PrivateTrait { | ^^^^^^^^^^^^^^^^^^ error: aborting due to 2 previous errors Some errors have detailed explanations: E0407, E0603. For more information about an error, try `rustc --explain E0407`. 
", "positive_passages": [{"docid": "doc-en-rust-d57947fd37a2380d989893e16e2040584f50321ec93296f960f89bed501f56fd", "text": "Aas core::ops::function::FnOnce