repo (string) | pull_number (int64) | instance_id (string) | issue_numbers (list) | base_commit (string) | patch (string) | test_patch (string) | problem_statement (string) | hints_text (string) | created_at (string) | version (string) | environment_setup_commit (string) | FAIL_TO_PASS (list) | PASS_TO_PASS (list) | FAIL_TO_FAIL (list) | PASS_TO_FAIL (list) | __index_level_0__ (int64) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
sigoden/aichat | 1,056 | sigoden__aichat-1056 | [
"1055"
] | 737580bb87f3b1d2f040dd7a4ddacca3595915a0 | diff --git a/src/cli.rs b/src/cli.rs
--- a/src/cli.rs
+++ b/src/cli.rs
@@ -73,12 +73,7 @@ pub struct Cli {
impl Cli {
pub fn text(&self) -> Option<String> {
- let text = self
- .text
- .iter()
- .map(|x| x.trim().to_string())
- .collect::<Vec<String>>()
- .join(" ");
+ let text = self.text.to_vec().join(" ");
if text.is_empty() {
return None;
}
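The `Cli::text` hunk above drops a per-argument `trim()`. A minimal sketch (hypothetical helper names, not the real `Cli` struct) contrasting the old and new joining behavior, assuming the whole point of the fix is that trimming each argument destroys intentional leading/trailing whitespace:

```rust
// Old behavior: each argument is trimmed before joining, so leading
// indentation in an argument is lost.
fn join_trimmed(args: &[&str]) -> String {
    args.iter().map(|a| a.trim().to_string()).collect::<Vec<_>>().join(" ")
}

// New behavior: arguments are joined verbatim, preserving whitespace.
fn join_raw(args: &[&str]) -> String {
    args.iter().map(|a| a.to_string()).collect::<Vec<_>>().join(" ")
}
```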
diff --git a/src/config/input.rs b/src/config/input.rs
--- a/src/config/input.rs
+++ b/src/config/input.rs
@@ -91,7 +91,7 @@ impl Input {
}
for (path, contents) in files {
texts.push(format!(
- "============ PATH: {path} ============\n\n{contents}\n"
+ "============ PATH: {path} ============\n{contents}\n"
));
}
let (role, with_session, with_agent) = resolve_role(&config.read(), role);
diff --git a/src/repl/mod.rs b/src/repl/mod.rs
--- a/src/repl/mod.rs
+++ b/src/repl/mod.rs
@@ -280,7 +280,7 @@ impl Repl {
Some(args) => match args.split_once(['\n', ' ']) {
Some((name, text)) => {
let role = self.config.read().retrieve_role(name.trim())?;
- let input = Input::from_str(&self.config, text.trim(), Some(role));
+ let input = Input::from_str(&self.config, text, Some(role));
ask(&self.config, self.abort_signal.clone(), input, false).await?;
}
None => {
diff --git a/src/repl/mod.rs b/src/repl/mod.rs
--- a/src/repl/mod.rs
+++ b/src/repl/mod.rs
@@ -718,8 +718,9 @@ fn split_files_text(line: &str, is_win: bool) -> (Vec<String>, &str) {
word.to_string()
}
};
-    for (i, char) in line.chars().enumerate() {
+    let chars: Vec<char> = line.chars().collect();
+    for (i, char) in chars.iter().cloned().enumerate() {
match unbalance {
Some(ub_char) if ub_char == char => {
word.push(char);
diff --git a/src/repl/mod.rs b/src/repl/mod.rs
--- a/src/repl/mod.rs
+++ b/src/repl/mod.rs
@@ -730,12 +731,15 @@ fn split_files_text(line: &str, is_win: bool) -> (Vec<String>, &str) {
}
None => match char {
' ' | '\t' | '\r' | '\n' => {
+ if char == '\r' && chars.get(i + 1) == Some(&'\n') {
+ continue;
+ }
if let Some('\\') = prev_char.filter(|_| !is_win) {
word.push(char);
} else if !word.is_empty() {
if word == "--" {
word.clear();
- text_starts_at = Some(i);
+ text_starts_at = Some(i + 1);
break;
}
words.push(unquote_word(&word));
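Collecting the line into a `Vec<char>` is what makes the `chars.get(i + 1)` lookahead in the hunk above possible: a character index cannot safely index a `&str` (byte offsets vs. char offsets). A standalone sketch of the same CRLF-skipping pattern, using a hypothetical helper rather than the real `split_files_text`:

```rust
// Skip the '\r' of a "\r\n" pair so the pair is handled as one newline.
// Vec<char> gives cheap one-character lookahead via chars.get(i + 1).
fn normalize_crlf(input: &str) -> String {
    let chars: Vec<char> = input.chars().collect();
    let mut out = String::new();
    for (i, c) in chars.iter().enumerate() {
        if *c == '\r' && chars.get(i + 1) == Some(&'\n') {
            continue;
        }
        out.push(*c);
    }
    out
}
```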
diff --git a/src/repl/mod.rs b/src/repl/mod.rs
--- a/src/repl/mod.rs
+++ b/src/repl/mod.rs
@@ -763,7 +767,7 @@ fn split_files_text(line: &str, is_win: bool) -> (Vec<String>, &str) {
words.push(unquote_word(&word));
}
let text = match text_starts_at {
- Some(start) => line[start..].trim(),
+ Some(start) => &line[start..],
None => "",
};
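The two changes above work together: the text now starts one past the whitespace that terminated `--` (`i + 1`), and the final slice is returned without `trim()`, so a tab or space at the start of the message survives. A reduced model under those assumptions (hypothetical helper, not the real parser, and only handling the simple `"-- "` case):

```rust
// Return everything after the "-- " separator, untouched: no trim, so
// leading whitespace in the user's text is preserved.
fn text_after_separator(line: &str) -> &str {
    match line.find("-- ") {
        Some(pos) => &line[pos + 3..],
        None => "",
    }
}
```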
| diff --git a/src/repl/mod.rs b/src/repl/mod.rs
--- a/src/repl/mod.rs
+++ b/src/repl/mod.rs
@@ -807,6 +811,10 @@ mod tests {
split_files_text("file.txt -- hello", false),
(vec!["file.txt".into()], "hello")
);
+ assert_eq!(
+ split_files_text("file.txt -- \thello", false),
+ (vec!["file.txt".into()], "\thello")
+ );
assert_eq!(
split_files_text("file.txt --\nhello", false),
(vec!["file.txt".into()], "hello")
| Unable to define a grammar check that keeps indentation.
**Describe the bug**
When passing a string via argument or STDIN, whitespace is trimmed before it reaches my role.
**To Reproduce**
I edited the grammar role example and tried to make it work with code comments and indentation (I pipe strings from my editor to aichat and need it to keep the formatting). This is my role:
```markdown
---
model: claude:claude-3-5-sonnet-latest
temperature: 0
top_p: 0
---
Your task is to take the text provided and rewrite it into a clear, grammatically
correct version while preserving the original meaning as closely as possible.
Correct any spelling mistakes, punctuation errors, verb tense issues, word choice
problems, and other grammatical mistakes.
It is possible that what you get is not just text, but markdown, code comments,
HTML, or something like that. In any case, I want you to only edit the text,
not the function (that is, do not spell check the function or variable names,
only the comments).
The rest should be returned as-is, including indentation and any
other extra white-space before text.
```
However, when I call it via stdin/args it does not work as expected:
```shell
$ echo ' this is a problem' | aichat --role grammar
This is a problem.
$ aichat --role grammar --code ' this is a porblem'
This is a problem.
```
I assume I'm doing something wrong with `aichat` or with my role definition; sorry if this is documented somewhere that I'm not aware of.
**Expected behavior**
The grammar check should preserve white-space.
**Configuration**
```
aichat --info
model claude:claude-3-5-sonnet-latest
max_output_tokens 8192 (current model)
temperature -
top_p -
dry_run false
stream true
save false
keybindings emacs
wrap no
wrap_code false
function_calling true
use_tools -
agent_prelude -
save_session -
compress_threshold 4000
rag_reranker_model -
rag_top_k 5
highlight true
light_theme false
config_file /Users/denis/.dotfiles/aichat/config.yaml
env_file /Users/denis/.dotfiles/aichat/.env
roles_dir /Users/denis/.dotfiles/aichat/roles
sessions_dir /Users/denis/.dotfiles/aichat/sessions
rags_dir /Users/denis/.dotfiles/aichat/rags
functions_dir /Users/denis/.dotfiles/aichat/functions
messages_file /Users/denis/.dotfiles/aichat/messages.md
```
**Environment (please complete the following information):**
- macOS 15.1
- aichat 0.24.0
- alacritty 0.14.0
| 2024-12-14T06:50:17 | 0.25 | 737580bb87f3b1d2f040dd7a4ddacca3595915a0 | [
"repl::tests::test_split_files_text"
] | [
"config::role::tests::test_merge_prompt_name",
"config::role::tests::test_parse_structure_prompt1",
"config::role::tests::test_parse_structure_prompt2",
"config::role::tests::test_parse_structure_prompt3",
"config::role::tests::test_match_name",
"client::stream::tests::test_json_stream_ndjson",
"rag::sp... | [] | [] | 0 | |
alacritty/alacritty | 8,356 | alacritty__alacritty-8356 | [
"8314",
"8268"
] | 22a447573bbd67c0a5d3946d58d6d61bac3b4ad2 | diff --git a/CHANGELOG.md b/CHANGELOG.md
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -8,6 +8,22 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
Notable changes to the `alacritty_terminal` crate are documented in its
[CHANGELOG](./alacritty_terminal/CHANGELOG.md).
+## 0.14.1-rc1
+
+### Changed
+
+- Always emit `1` for the first parameter when having modifiers in kitty keyboard protocol
+
+### Fixed
+
+- Mouse/Vi cursor hint highlighting broken on the terminal cursor line
+- Hint launcher opening arbitrary text, when terminal content changed while opening
+- `SemanticRight`/`SemanticLeft` vi motions breaking with wide semantic escape characters
+- `alacritty migrate` crashing with recursive toml imports
+- Migrating nonexistent toml import breaking the entire migration
+- Crash when pressing certain modifier keys on macOS 15+
+- First daemon mode window ignoring window options passed through CLI
+
## 0.14.0
### Packaging
diff --git a/Cargo.lock b/Cargo.lock
--- a/Cargo.lock
+++ b/Cargo.lock
@@ -32,7 +32,7 @@ dependencies = [
[[package]]
name = "alacritty"
-version = "0.14.0"
+version = "0.14.1-rc1"
dependencies = [
"ahash",
"alacritty_config",
diff --git a/Cargo.lock b/Cargo.lock
--- a/Cargo.lock
+++ b/Cargo.lock
@@ -92,7 +92,7 @@ dependencies = [
[[package]]
name = "alacritty_terminal"
-version = "0.24.1"
+version = "0.24.2-rc1"
dependencies = [
"base64",
"bitflags 2.6.0",
diff --git a/Cargo.lock b/Cargo.lock
--- a/Cargo.lock
+++ b/Cargo.lock
@@ -966,10 +966,11 @@ dependencies = [
[[package]]
name = "js-sys"
-version = "0.3.69"
+version = "0.3.76"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "29c15563dc2726973df627357ce0c9ddddbea194836909d655df6a75d2cf296d"
+checksum = "6717b6b5b077764fb5966237269cb3c64edddde4b14ce42647430a78ced9e7b7"
dependencies = [
+ "once_cell",
"wasm-bindgen",
]
diff --git a/Cargo.lock b/Cargo.lock
--- a/Cargo.lock
+++ b/Cargo.lock
@@ -2096,23 +2097,23 @@ checksum = "9c8d87e72b64a3b4db28d11ce29237c246188f4f51057d65a7eab63b7987e423"
[[package]]
name = "wasm-bindgen"
-version = "0.2.92"
+version = "0.2.99"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "4be2531df63900aeb2bca0daaaddec08491ee64ceecbee5076636a3b026795a8"
+checksum = "a474f6281d1d70c17ae7aa6a613c87fce69a127e2624002df63dcb39d6cf6396"
dependencies = [
"cfg-if",
+ "once_cell",
"wasm-bindgen-macro",
]
[[package]]
name = "wasm-bindgen-backend"
-version = "0.2.92"
+version = "0.2.99"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "614d787b966d3989fa7bb98a654e369c762374fd3213d212cfc0251257e747da"
+checksum = "5f89bb38646b4f81674e8f5c3fb81b562be1fd936d84320f3264486418519c79"
dependencies = [
"bumpalo",
"log",
- "once_cell",
"proc-macro2",
"quote",
"syn",
diff --git a/Cargo.lock b/Cargo.lock
--- a/Cargo.lock
+++ b/Cargo.lock
@@ -2121,21 +2122,22 @@ dependencies = [
[[package]]
name = "wasm-bindgen-futures"
-version = "0.4.42"
+version = "0.4.49"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "76bc14366121efc8dbb487ab05bcc9d346b3b5ec0eaa76e46594cabbe51762c0"
+checksum = "38176d9b44ea84e9184eff0bc34cc167ed044f816accfe5922e54d84cf48eca2"
dependencies = [
"cfg-if",
"js-sys",
+ "once_cell",
"wasm-bindgen",
"web-sys",
]
[[package]]
name = "wasm-bindgen-macro"
-version = "0.2.92"
+version = "0.2.99"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "a1f8823de937b71b9460c0c34e25f3da88250760bec0ebac694b49997550d726"
+checksum = "2cc6181fd9a7492eef6fef1f33961e3695e4579b9872a6f7c83aee556666d4fe"
dependencies = [
"quote",
"wasm-bindgen-macro-support",
diff --git a/Cargo.lock b/Cargo.lock
--- a/Cargo.lock
+++ b/Cargo.lock
@@ -2143,9 +2145,9 @@ dependencies = [
[[package]]
name = "wasm-bindgen-macro-support"
-version = "0.2.92"
+version = "0.2.99"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "e94f17b526d0a461a191c78ea52bbce64071ed5c04c9ffe424dcb38f74171bb7"
+checksum = "30d7a95b763d3c45903ed6c81f156801839e5ee968bb07e534c44df0fcd330c2"
dependencies = [
"proc-macro2",
"quote",
diff --git a/Cargo.lock b/Cargo.lock
--- a/Cargo.lock
+++ b/Cargo.lock
@@ -2156,9 +2158,9 @@ dependencies = [
[[package]]
name = "wasm-bindgen-shared"
-version = "0.2.92"
+version = "0.2.99"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "af190c94f2773fdb3729c55b007a722abb5384da03bc0986df4c289bf5567e96"
+checksum = "943aab3fdaaa029a6e0271b35ea10b72b943135afe9bffca82384098ad0e06a6"
[[package]]
name = "wayland-backend"
diff --git a/Cargo.lock b/Cargo.lock
--- a/Cargo.lock
+++ b/Cargo.lock
@@ -2271,9 +2273,9 @@ dependencies = [
[[package]]
name = "web-sys"
-version = "0.3.69"
+version = "0.3.76"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "77afa9a11836342370f4817622a2f0f418b134426d91a82dfb48f532d2ec13ef"
+checksum = "04dd7223427d52553d3702c004d3b2fe07c148165faa56313cb00211e31c12bc"
dependencies = [
"js-sys",
"wasm-bindgen",
diff --git a/Cargo.lock b/Cargo.lock
--- a/Cargo.lock
+++ b/Cargo.lock
@@ -2536,9 +2538,9 @@ checksum = "589f6da84c646204747d1270a2a5661ea66ed1cced2631d546fdfb155959f9ec"
[[package]]
name = "winit"
-version = "0.30.4"
+version = "0.30.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "4225ddd8ab67b8b59a2fee4b34889ebf13c0460c1c3fa297c58e21eb87801b33"
+checksum = "dba50bc8ef4b6f1a75c9274fb95aa9a8f63fbc66c56f391bd85cf68d51e7b1a3"
dependencies = [
"ahash",
"android-activity",
diff --git a/INSTALL.md b/INSTALL.md
--- a/INSTALL.md
+++ b/INSTALL.md
@@ -84,7 +84,7 @@ to build Alacritty. Here's an apt command that should install all of them. If
something is still found to be missing, please open an issue.
```sh
-apt install cmake pkg-config libfreetype6-dev libfontconfig1-dev libxcb-xfixes0-dev libxkbcommon-dev python3
+apt install cmake g++ pkg-config libfreetype6-dev libfontconfig1-dev libxcb-xfixes0-dev libxkbcommon-dev python3
```
#### Arch Linux
diff --git a/alacritty/Cargo.toml b/alacritty/Cargo.toml
--- a/alacritty/Cargo.toml
+++ b/alacritty/Cargo.toml
@@ -1,6 +1,6 @@
[package]
name = "alacritty"
-version = "0.14.0"
+version = "0.14.1-rc1"
authors = ["Christian Duerr <contact@christianduerr.com>", "Joe Wilm <joe@jwilm.com>"]
license = "Apache-2.0"
description = "A fast, cross-platform, OpenGL terminal emulator"
diff --git a/alacritty/Cargo.toml b/alacritty/Cargo.toml
--- a/alacritty/Cargo.toml
+++ b/alacritty/Cargo.toml
@@ -12,7 +12,7 @@ rust-version = "1.74.0"
[dependencies.alacritty_terminal]
path = "../alacritty_terminal"
-version = "0.24.1"
+version = "0.24.2-rc1"
[dependencies.alacritty_config_derive]
path = "../alacritty_config_derive"
diff --git a/alacritty/Cargo.toml b/alacritty/Cargo.toml
--- a/alacritty/Cargo.toml
+++ b/alacritty/Cargo.toml
@@ -41,7 +41,7 @@ tempfile = "3.12.0"
toml = "0.8.2"
toml_edit = "0.22.21"
unicode-width = "0.1"
-winit = { version = "0.30.4", default-features = false, features = ["rwh_06", "serde"] }
+winit = { version = "0.30.7", default-features = false, features = ["rwh_06", "serde"] }
[build-dependencies]
gl_generator = "0.14.0"
diff --git a/alacritty/src/config/bindings.rs b/alacritty/src/config/bindings.rs
--- a/alacritty/src/config/bindings.rs
+++ b/alacritty/src/config/bindings.rs
@@ -5,6 +5,7 @@ use std::fmt::{self, Debug, Display};
use bitflags::bitflags;
use serde::de::{self, Error as SerdeError, MapAccess, Unexpected, Visitor};
use serde::{Deserialize, Deserializer};
+use std::rc::Rc;
use toml::Value as SerdeValue;
use winit::event::MouseButton;
use winit::keyboard::{
diff --git a/alacritty/src/config/bindings.rs b/alacritty/src/config/bindings.rs
--- a/alacritty/src/config/bindings.rs
+++ b/alacritty/src/config/bindings.rs
@@ -96,7 +97,7 @@ pub enum Action {
/// Regex keyboard hints.
#[config(skip)]
- Hint(Hint),
+ Hint(Rc<Hint>),
/// Move vi mode cursor.
#[config(skip)]
diff --git a/alacritty/src/config/bindings.rs b/alacritty/src/config/bindings.rs
--- a/alacritty/src/config/bindings.rs
+++ b/alacritty/src/config/bindings.rs
@@ -790,7 +791,7 @@ impl<'a> Deserialize<'a> for ModeWrapper {
{
struct ModeVisitor;
- impl<'a> Visitor<'a> for ModeVisitor {
+ impl Visitor<'_> for ModeVisitor {
type Value = ModeWrapper;
fn expecting(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
diff --git a/alacritty/src/config/bindings.rs b/alacritty/src/config/bindings.rs
--- a/alacritty/src/config/bindings.rs
+++ b/alacritty/src/config/bindings.rs
@@ -844,7 +845,7 @@ impl<'a> Deserialize<'a> for MouseButtonWrapper {
{
struct MouseButtonVisitor;
- impl<'a> Visitor<'a> for MouseButtonVisitor {
+ impl Visitor<'_> for MouseButtonVisitor {
type Value = MouseButtonWrapper;
fn expecting(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
diff --git a/alacritty/src/config/bindings.rs b/alacritty/src/config/bindings.rs
--- a/alacritty/src/config/bindings.rs
+++ b/alacritty/src/config/bindings.rs
@@ -956,7 +957,7 @@ impl<'a> Deserialize<'a> for RawBinding {
{
struct FieldVisitor;
- impl<'a> Visitor<'a> for FieldVisitor {
+ impl Visitor<'_> for FieldVisitor {
type Value = Field;
fn expecting(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
diff --git a/alacritty/src/config/bindings.rs b/alacritty/src/config/bindings.rs
--- a/alacritty/src/config/bindings.rs
+++ b/alacritty/src/config/bindings.rs
@@ -1204,7 +1205,7 @@ impl<'a> de::Deserialize<'a> for ModsWrapper {
{
struct ModsVisitor;
- impl<'a> Visitor<'a> for ModsVisitor {
+ impl Visitor<'_> for ModsVisitor {
type Value = ModsWrapper;
fn expecting(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
diff --git a/alacritty/src/config/font.rs b/alacritty/src/config/font.rs
--- a/alacritty/src/config/font.rs
+++ b/alacritty/src/config/font.rs
@@ -144,7 +144,7 @@ impl<'de> Deserialize<'de> for Size {
D: Deserializer<'de>,
{
struct NumVisitor;
- impl<'v> Visitor<'v> for NumVisitor {
+ impl Visitor<'_> for NumVisitor {
type Value = Size;
fn expecting(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
diff --git a/alacritty/src/config/ui_config.rs b/alacritty/src/config/ui_config.rs
--- a/alacritty/src/config/ui_config.rs
+++ b/alacritty/src/config/ui_config.rs
@@ -253,7 +253,7 @@ pub struct Hints {
alphabet: HintsAlphabet,
/// All configured terminal hints.
- pub enabled: Vec<Hint>,
+ pub enabled: Vec<Rc<Hint>>,
}
impl Default for Hints {
diff --git a/alacritty/src/config/ui_config.rs b/alacritty/src/config/ui_config.rs
--- a/alacritty/src/config/ui_config.rs
+++ b/alacritty/src/config/ui_config.rs
@@ -274,7 +274,7 @@ impl Default for Hints {
});
Self {
- enabled: vec![Hint {
+ enabled: vec![Rc::new(Hint {
content,
action,
persist: false,
diff --git a/alacritty/src/config/ui_config.rs b/alacritty/src/config/ui_config.rs
--- a/alacritty/src/config/ui_config.rs
+++ b/alacritty/src/config/ui_config.rs
@@ -288,7 +288,7 @@ impl Default for Hints {
mods: ModsWrapper(ModifiersState::SHIFT | ModifiersState::CONTROL),
mode: Default::default(),
}),
- }],
+ })],
alphabet: Default::default(),
}
}
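The `Hint` → `Rc<Hint>` change in the hunks above lets the key bindings, the hint state, and each `HintMatch` share one allocation: cloning the `Rc` bumps a reference count instead of deep-copying the hint (regex, action, and all). A self-contained sketch of that sharing, with a stripped-down stand-in for the real `Hint` struct:

```rust
use std::rc::Rc;

// Stand-in for alacritty's Hint; the real one carries regex content,
// an action, modifiers, and more -- exactly what Rc avoids copying.
struct Hint {
    regex: String,
}

// Hand out another owner of the same Hint without cloning its contents.
fn share(hint: &Rc<Hint>) -> Rc<Hint> {
    Rc::clone(hint)
}
```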
diff --git a/alacritty/src/config/ui_config.rs b/alacritty/src/config/ui_config.rs
--- a/alacritty/src/config/ui_config.rs
+++ b/alacritty/src/config/ui_config.rs
@@ -619,7 +619,7 @@ impl SerdeReplace for Program {
}
pub(crate) struct StringVisitor;
-impl<'de> serde::de::Visitor<'de> for StringVisitor {
+impl serde::de::Visitor<'_> for StringVisitor {
type Value = String;
fn expecting(&self, formatter: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
diff --git a/alacritty/src/display/color.rs b/alacritty/src/display/color.rs
--- a/alacritty/src/display/color.rs
+++ b/alacritty/src/display/color.rs
@@ -18,7 +18,7 @@ pub const DIM_FACTOR: f32 = 0.66;
#[derive(Copy, Clone)]
pub struct List([Rgb; COUNT]);
-impl<'a> From<&'a Colors> for List {
+impl From<&'_ Colors> for List {
fn from(colors: &Colors) -> List {
// Type inference fails without this annotation.
let mut list = List([Rgb::default(); COUNT]);
diff --git a/alacritty/src/display/color.rs b/alacritty/src/display/color.rs
--- a/alacritty/src/display/color.rs
+++ b/alacritty/src/display/color.rs
@@ -235,7 +235,7 @@ impl<'de> Deserialize<'de> for Rgb {
b: u8,
}
- impl<'a> Visitor<'a> for RgbVisitor {
+ impl Visitor<'_> for RgbVisitor {
type Value = Rgb;
fn expecting(&self, f: &mut Formatter<'_>) -> fmt::Result {
diff --git a/alacritty/src/display/color.rs b/alacritty/src/display/color.rs
--- a/alacritty/src/display/color.rs
+++ b/alacritty/src/display/color.rs
@@ -331,7 +331,7 @@ impl<'de> Deserialize<'de> for CellRgb {
const EXPECTING: &str = "CellForeground, CellBackground, or hex color like #ff00ff";
struct CellRgbVisitor;
- impl<'a> Visitor<'a> for CellRgbVisitor {
+ impl Visitor<'_> for CellRgbVisitor {
type Value = CellRgb;
fn expecting(&self, f: &mut Formatter<'_>) -> fmt::Result {
diff --git a/alacritty/src/display/content.rs b/alacritty/src/display/content.rs
--- a/alacritty/src/display/content.rs
+++ b/alacritty/src/display/content.rs
@@ -150,7 +150,7 @@ impl<'a> RenderableContent<'a> {
}
}
-impl<'a> Iterator for RenderableContent<'a> {
+impl Iterator for RenderableContent<'_> {
type Item = RenderableCell;
/// Gets the next renderable cell.
diff --git a/alacritty/src/display/content.rs b/alacritty/src/display/content.rs
--- a/alacritty/src/display/content.rs
+++ b/alacritty/src/display/content.rs
@@ -453,7 +453,7 @@ struct Hint<'a> {
labels: &'a Vec<Vec<char>>,
}
-impl<'a> Hint<'a> {
+impl Hint<'_> {
/// Advance the hint iterator.
///
/// If the point is within a hint, the keyboard shortcut character that should be displayed at
diff --git a/alacritty/src/display/content.rs b/alacritty/src/display/content.rs
--- a/alacritty/src/display/content.rs
+++ b/alacritty/src/display/content.rs
@@ -543,7 +543,7 @@ impl<'a> HintMatches<'a> {
}
}
-impl<'a> Deref for HintMatches<'a> {
+impl Deref for HintMatches<'_> {
type Target = [Match];
fn deref(&self) -> &Self::Target {
diff --git a/alacritty/src/display/damage.rs b/alacritty/src/display/damage.rs
--- a/alacritty/src/display/damage.rs
+++ b/alacritty/src/display/damage.rs
@@ -193,9 +193,11 @@ impl FrameDamage {
/// Check if a range is damaged.
#[inline]
pub fn intersects(&self, start: Point<usize>, end: Point<usize>) -> bool {
+ let start_line = &self.lines[start.line];
+ let end_line = &self.lines[end.line];
self.full
- || self.lines[start.line].left <= start.column
- || self.lines[end.line].right >= end.column
+ || (start_line.left..=start_line.right).contains(&start.column)
+ || (end_line.left..=end_line.right).contains(&end.column)
|| (start.line + 1..end.line).any(|line| self.lines[line].is_damaged())
}
}
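The `intersects` fix above replaces two one-sided comparisons (`left <= start.column || right >= end.column`, which report damage for almost any column) with inclusive range membership. The difference in miniature, as a hypothetical helper:

```rust
// A column is damaged only if it falls inside [left, right]; the old
// one-sided check would also flag columns entirely outside the range.
fn column_damaged(left: usize, right: usize, column: usize) -> bool {
    (left..=right).contains(&column)
}
```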
diff --git a/alacritty/src/display/damage.rs b/alacritty/src/display/damage.rs
--- a/alacritty/src/display/damage.rs
+++ b/alacritty/src/display/damage.rs
@@ -249,7 +251,7 @@ impl<'a> RenderDamageIterator<'a> {
}
}
-impl<'a> Iterator for RenderDamageIterator<'a> {
+impl Iterator for RenderDamageIterator<'_> {
type Item = Rect;
fn next(&mut self) -> Option<Rect> {
diff --git a/alacritty/src/display/hint.rs b/alacritty/src/display/hint.rs
--- a/alacritty/src/display/hint.rs
+++ b/alacritty/src/display/hint.rs
@@ -1,6 +1,8 @@
+use std::borrow::Cow;
use std::cmp::Reverse;
use std::collections::HashSet;
use std::iter;
+use std::rc::Rc;
use ahash::RandomState;
use winit::keyboard::ModifiersState;
diff --git a/alacritty/src/display/hint.rs b/alacritty/src/display/hint.rs
--- a/alacritty/src/display/hint.rs
+++ b/alacritty/src/display/hint.rs
@@ -23,7 +25,7 @@ const HINT_SPLIT_PERCENTAGE: f32 = 0.5;
/// Keyboard regex hint state.
pub struct HintState {
/// Hint currently in use.
- hint: Option<Hint>,
+ hint: Option<Rc<Hint>>,
/// Alphabet for hint labels.
alphabet: String,
diff --git a/alacritty/src/display/hint.rs b/alacritty/src/display/hint.rs
--- a/alacritty/src/display/hint.rs
+++ b/alacritty/src/display/hint.rs
@@ -56,7 +58,7 @@ impl HintState {
}
/// Start the hint selection process.
- pub fn start(&mut self, hint: Hint) {
+ pub fn start(&mut self, hint: Rc<Hint>) {
self.hint = Some(hint);
}
diff --git a/alacritty/src/display/hint.rs b/alacritty/src/display/hint.rs
--- a/alacritty/src/display/hint.rs
+++ b/alacritty/src/display/hint.rs
@@ -150,7 +152,7 @@ impl HintState {
// Check if the selected label is fully matched.
if label.len() == 1 {
let bounds = self.matches[index].clone();
- let action = hint.action.clone();
+ let hint = hint.clone();
// Exit hint mode unless it requires explicit dismissal.
if hint.persist {
diff --git a/alacritty/src/display/hint.rs b/alacritty/src/display/hint.rs
--- a/alacritty/src/display/hint.rs
+++ b/alacritty/src/display/hint.rs
@@ -161,7 +163,7 @@ impl HintState {
// Hyperlinks take precedence over regex matches.
let hyperlink = term.grid()[*bounds.start()].hyperlink();
- Some(HintMatch { action, bounds, hyperlink })
+ Some(HintMatch { bounds, hyperlink, hint })
} else {
// Store character to preserve the selection.
self.keys.push(c);
diff --git a/alacritty/src/display/hint.rs b/alacritty/src/display/hint.rs
--- a/alacritty/src/display/hint.rs
+++ b/alacritty/src/display/hint.rs
@@ -192,13 +194,14 @@ impl HintState {
/// Hint match which was selected by the user.
#[derive(PartialEq, Eq, Debug, Clone)]
pub struct HintMatch {
- /// Action for handling the text.
- action: HintAction,
-
/// Terminal range matching the hint.
bounds: Match,
+ /// OSC 8 hyperlink.
hyperlink: Option<Hyperlink>,
+
+ /// Hint which triggered this match.
+ hint: Rc<Hint>,
}
impl HintMatch {
diff --git a/alacritty/src/display/hint.rs b/alacritty/src/display/hint.rs
--- a/alacritty/src/display/hint.rs
+++ b/alacritty/src/display/hint.rs
@@ -210,7 +213,7 @@ impl HintMatch {
#[inline]
pub fn action(&self) -> &HintAction {
- &self.action
+ &self.hint.action
}
#[inline]
diff --git a/alacritty/src/display/hint.rs b/alacritty/src/display/hint.rs
--- a/alacritty/src/display/hint.rs
+++ b/alacritty/src/display/hint.rs
@@ -221,6 +224,29 @@ impl HintMatch {
pub fn hyperlink(&self) -> Option<&Hyperlink> {
self.hyperlink.as_ref()
}
+
+ /// Get the text content of the hint match.
+ ///
+ /// This will always revalidate the hint text, to account for terminal content
+ /// changes since the [`HintMatch`] was constructed. The text of the hint might
+ /// be different from its original value, but it will **always** be a valid
+ /// match for this hint.
+ pub fn text<T>(&self, term: &Term<T>) -> Option<Cow<'_, str>> {
+ // Revalidate hyperlink match.
+ if let Some(hyperlink) = &self.hyperlink {
+ let (validated, bounds) = hyperlink_at(term, *self.bounds.start())?;
+ return (&validated == hyperlink && bounds == self.bounds)
+ .then(|| hyperlink.uri().into());
+ }
+
+ // Revalidate regex match.
+ let regex = self.hint.content.regex.as_ref()?;
+ let bounds = regex.with_compiled(|regex| {
+ regex_match_at(term, *self.bounds.start(), regex, self.hint.post_processing)
+ })??;
+ (bounds == self.bounds)
+ .then(|| term.bounds_to_string(*bounds.start(), *bounds.end()).into())
+ }
}
/// Generator for creating new hint labels.
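The new `HintMatch::text` above leans on the `bool::then` idiom: a revalidation check becomes an `Option`, so a stale match yields `None` instead of launching arbitrary text. The same pattern isolated with hypothetical bounds and helper names (the real code compares regex/hyperlink matches against the live terminal grid):

```rust
// Return the text only if the stored match bounds still equal the
// freshly computed ones; a changed grid produces None.
fn revalidated_text(
    stored_bounds: (usize, usize),
    current_bounds: (usize, usize),
    text: &str,
) -> Option<String> {
    (stored_bounds == current_bounds).then(|| text.to_string())
}
```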
diff --git a/alacritty/src/display/hint.rs b/alacritty/src/display/hint.rs
--- a/alacritty/src/display/hint.rs
+++ b/alacritty/src/display/hint.rs
@@ -382,18 +408,14 @@ pub fn highlighted_at<T>(
if let Some((hyperlink, bounds)) =
hint.content.hyperlinks.then(|| hyperlink_at(term, point)).flatten()
{
- return Some(HintMatch {
- bounds,
- action: hint.action.clone(),
- hyperlink: Some(hyperlink),
- });
+ return Some(HintMatch { bounds, hyperlink: Some(hyperlink), hint: hint.clone() });
}
let bounds = hint.content.regex.as_ref().and_then(|regex| {
regex.with_compiled(|regex| regex_match_at(term, point, regex, hint.post_processing))
});
if let Some(bounds) = bounds.flatten() {
- return Some(HintMatch { bounds, action: hint.action.clone(), hyperlink: None });
+ return Some(HintMatch { bounds, hint: hint.clone(), hyperlink: None });
}
None
diff --git a/alacritty/src/display/hint.rs b/alacritty/src/display/hint.rs
--- a/alacritty/src/display/hint.rs
+++ b/alacritty/src/display/hint.rs
@@ -554,7 +576,7 @@ impl<'a, T> HintPostProcessor<'a, T> {
}
}
-impl<'a, T> Iterator for HintPostProcessor<'a, T> {
+impl<T> Iterator for HintPostProcessor<'_, T> {
type Item = Match;
fn next(&mut self) -> Option<Self::Item> {
diff --git a/alacritty/src/display/meter.rs b/alacritty/src/display/meter.rs
--- a/alacritty/src/display/meter.rs
+++ b/alacritty/src/display/meter.rs
@@ -57,7 +57,7 @@ impl<'a> Sampler<'a> {
}
}
-impl<'a> Drop for Sampler<'a> {
+impl Drop for Sampler<'_> {
fn drop(&mut self) {
self.meter.add_sample(self.alive_duration());
}
diff --git a/alacritty/src/display/mod.rs b/alacritty/src/display/mod.rs
--- a/alacritty/src/display/mod.rs
+++ b/alacritty/src/display/mod.rs
@@ -342,9 +342,13 @@ pub struct Display {
/// Hint highlighted by the mouse.
pub highlighted_hint: Option<HintMatch>,
+ /// Frames since hint highlight was created.
+ highlighted_hint_age: usize,
/// Hint highlighted by the vi mode cursor.
pub vi_highlighted_hint: Option<HintMatch>,
+ /// Frames since hint highlight was created.
+ vi_highlighted_hint_age: usize,
pub raw_window_handle: RawWindowHandle,
diff --git a/alacritty/src/display/mod.rs b/alacritty/src/display/mod.rs
--- a/alacritty/src/display/mod.rs
+++ b/alacritty/src/display/mod.rs
@@ -516,6 +520,8 @@ impl Display {
font_size,
window,
pending_renderer_update: Default::default(),
+ vi_highlighted_hint_age: Default::default(),
+ highlighted_hint_age: Default::default(),
vi_highlighted_hint: Default::default(),
highlighted_hint: Default::default(),
hint_mouse_point: Default::default(),
diff --git a/alacritty/src/display/mod.rs b/alacritty/src/display/mod.rs
--- a/alacritty/src/display/mod.rs
+++ b/alacritty/src/display/mod.rs
@@ -759,7 +765,7 @@ impl Display {
drop(terminal);
// Invalidate highlighted hints if grid has changed.
- self.validate_hints(display_offset);
+ self.validate_hint_highlights(display_offset);
// Add damage from alacritty's UI elements overlapping terminal.
diff --git a/alacritty/src/display/mod.rs b/alacritty/src/display/mod.rs
--- a/alacritty/src/display/mod.rs
+++ b/alacritty/src/display/mod.rs
@@ -1017,9 +1023,10 @@ impl Display {
};
let mut dirty = vi_highlighted_hint != self.vi_highlighted_hint;
self.vi_highlighted_hint = vi_highlighted_hint;
+ self.vi_highlighted_hint_age = 0;
// Force full redraw if the vi mode highlight was cleared.
- if dirty && self.vi_highlighted_hint.is_none() {
+ if dirty {
self.damage_tracker.frame().mark_fully_damaged();
}
diff --git a/alacritty/src/display/mod.rs b/alacritty/src/display/mod.rs
--- a/alacritty/src/display/mod.rs
+++ b/alacritty/src/display/mod.rs
@@ -1055,9 +1062,10 @@ impl Display {
let mouse_highlight_dirty = self.highlighted_hint != highlighted_hint;
dirty |= mouse_highlight_dirty;
self.highlighted_hint = highlighted_hint;
+ self.highlighted_hint_age = 0;
- // Force full redraw if the mouse cursor highlight was cleared.
- if mouse_highlight_dirty && self.highlighted_hint.is_none() {
+ // Force full redraw if the mouse cursor highlight was changed.
+ if mouse_highlight_dirty {
self.damage_tracker.frame().mark_fully_damaged();
}
diff --git a/alacritty/src/display/mod.rs b/alacritty/src/display/mod.rs
--- a/alacritty/src/display/mod.rs
+++ b/alacritty/src/display/mod.rs
@@ -1331,16 +1339,24 @@ impl Display {
}
/// Check whether a hint highlight needs to be cleared.
- fn validate_hints(&mut self, display_offset: usize) {
+ fn validate_hint_highlights(&mut self, display_offset: usize) {
let frame = self.damage_tracker.frame();
- for (hint, reset_mouse) in
- [(&mut self.highlighted_hint, true), (&mut self.vi_highlighted_hint, false)]
- {
+ let hints = [
+ (&mut self.highlighted_hint, &mut self.highlighted_hint_age, true),
+ (&mut self.vi_highlighted_hint, &mut self.vi_highlighted_hint_age, false),
+ ];
+ for (hint, hint_age, reset_mouse) in hints {
let (start, end) = match hint {
Some(hint) => (*hint.bounds().start(), *hint.bounds().end()),
- None => return,
+ None => continue,
};
+ // Ignore hints that were created this frame.
+ *hint_age += 1;
+ if *hint_age == 1 {
+ continue;
+ }
+
// Convert hint bounds to viewport coordinates.
let start = term::point_to_viewport(display_offset, start).unwrap_or_default();
let end = term::point_to_viewport(display_offset, end).unwrap_or_else(|| {
diff --git a/alacritty/src/event.rs b/alacritty/src/event.rs
--- a/alacritty/src/event.rs
+++ b/alacritty/src/event.rs
@@ -140,14 +140,14 @@ impl Processor {
pub fn create_initial_window(
&mut self,
event_loop: &ActiveEventLoop,
+ window_options: WindowOptions,
) -> Result<(), Box<dyn Error>> {
- let options = match self.initial_window_options.take() {
- Some(options) => options,
- None => return Ok(()),
- };
-
- let window_context =
- WindowContext::initial(event_loop, self.proxy.clone(), self.config.clone(), options)?;
+ let window_context = WindowContext::initial(
+ event_loop,
+ self.proxy.clone(),
+ self.config.clone(),
+ window_options,
+ )?;
self.gl_config = Some(window_context.display.gl_context().config());
self.windows.insert(window_context.id(), window_context);
diff --git a/alacritty/src/event.rs b/alacritty/src/event.rs
--- a/alacritty/src/event.rs
+++ b/alacritty/src/event.rs
@@ -225,10 +225,12 @@ impl ApplicationHandler<Event> for Processor {
return;
}
- if let Err(err) = self.create_initial_window(event_loop) {
- self.initial_window_error = Some(err);
- event_loop.exit();
- return;
+ if let Some(window_options) = self.initial_window_options.take() {
+ if let Err(err) = self.create_initial_window(event_loop, window_options) {
+ self.initial_window_error = Some(err);
+ event_loop.exit();
+ return;
+ }
}
info!("Initialisation complete");
diff --git a/alacritty/src/event.rs b/alacritty/src/event.rs
--- a/alacritty/src/event.rs
+++ b/alacritty/src/event.rs
@@ -276,7 +278,7 @@ impl ApplicationHandler<Event> for Processor {
}
// Handle events which don't mandate the WindowId.
- match (&event.payload, event.window_id.as_ref()) {
+ match (event.payload, event.window_id.as_ref()) {
// Process IPC config update.
#[cfg(unix)]
(EventType::IpcConfig(ipc_config), window_id) => {
diff --git a/alacritty/src/event.rs b/alacritty/src/event.rs
--- a/alacritty/src/event.rs
+++ b/alacritty/src/event.rs
@@ -315,7 +317,7 @@ impl ApplicationHandler<Event> for Processor {
}
// Load config and update each terminal.
- if let Ok(config) = config::reload(path, &mut self.cli_options) {
+ if let Ok(config) = config::reload(&path, &mut self.cli_options) {
self.config = Rc::new(config);
// Restart config monitor if imports changed.
diff --git a/alacritty/src/event.rs b/alacritty/src/event.rs
--- a/alacritty/src/event.rs
+++ b/alacritty/src/event.rs
@@ -346,17 +348,17 @@ impl ApplicationHandler<Event> for Processor {
if self.gl_config.is_none() {
// Handle initial window creation in daemon mode.
- if let Err(err) = self.create_initial_window(event_loop) {
+ if let Err(err) = self.create_initial_window(event_loop, options) {
self.initial_window_error = Some(err);
event_loop.exit();
}
- } else if let Err(err) = self.create_window(event_loop, options.clone()) {
+ } else if let Err(err) = self.create_window(event_loop, options) {
error!("Could not open window: {:?}", err);
}
},
// Process events affecting all windows.
- (_, None) => {
- let event = WinitEvent::UserEvent(event);
+ (payload, None) => {
+ let event = WinitEvent::UserEvent(Event::new(payload, None));
for window_context in self.windows.values_mut() {
window_context.handle_event(
#[cfg(target_os = "macos")]
diff --git a/alacritty/src/event.rs b/alacritty/src/event.rs
--- a/alacritty/src/event.rs
+++ b/alacritty/src/event.rs
@@ -405,7 +407,7 @@ impl ApplicationHandler<Event> for Processor {
}
}
},
- (_, Some(window_id)) => {
+ (payload, Some(window_id)) => {
if let Some(window_context) = self.windows.get_mut(window_id) {
window_context.handle_event(
#[cfg(target_os = "macos")]
diff --git a/alacritty/src/event.rs b/alacritty/src/event.rs
--- a/alacritty/src/event.rs
+++ b/alacritty/src/event.rs
@@ -413,7 +415,7 @@ impl ApplicationHandler<Event> for Processor {
&self.proxy,
&mut self.clipboard,
&mut self.scheduler,
- WinitEvent::UserEvent(event),
+ WinitEvent::UserEvent(Event::new(payload, *window_id)),
);
}
},
diff --git a/alacritty/src/event.rs b/alacritty/src/event.rs
--- a/alacritty/src/event.rs
+++ b/alacritty/src/event.rs
@@ -1141,16 +1143,16 @@ impl<'a, N: Notify + 'a, T: EventListener> input::ActionContext<T> for ActionCon
}
let hint_bounds = hint.bounds();
- let text = match hint.hyperlink() {
- Some(hyperlink) => hyperlink.uri().to_owned(),
- None => self.terminal.bounds_to_string(*hint_bounds.start(), *hint_bounds.end()),
+ let text = match hint.text(self.terminal) {
+ Some(text) => text,
+ None => return,
};
match &hint.action() {
// Launch an external program.
HintAction::Command(command) => {
let mut args = command.args().to_vec();
- args.push(text);
+ args.push(text.into());
self.spawn_daemon(command.program(), &args);
},
// Copy the text to the clipboard.
diff --git a/alacritty/src/input/keyboard.rs b/alacritty/src/input/keyboard.rs
--- a/alacritty/src/input/keyboard.rs
+++ b/alacritty/src/input/keyboard.rs
@@ -280,7 +280,7 @@ fn build_sequence(key: KeyEvent, mods: ModifiersState, mode: TermMode) -> Vec<u8
let sequence_base = context
.try_build_numpad(&key)
.or_else(|| context.try_build_named_kitty(&key))
- .or_else(|| context.try_build_named_normal(&key))
+ .or_else(|| context.try_build_named_normal(&key, associated_text.is_some()))
.or_else(|| context.try_build_control_char_or_mod(&key, &mut modifiers))
.or_else(|| context.try_build_textual(&key, associated_text));
diff --git a/alacritty/src/input/keyboard.rs b/alacritty/src/input/keyboard.rs
--- a/alacritty/src/input/keyboard.rs
+++ b/alacritty/src/input/keyboard.rs
@@ -483,14 +483,23 @@ impl SequenceBuilder {
}
/// Try building from [`NamedKey`].
- fn try_build_named_normal(&self, key: &KeyEvent) -> Option<SequenceBase> {
+ fn try_build_named_normal(
+ &self,
+ key: &KeyEvent,
+ has_associated_text: bool,
+ ) -> Option<SequenceBase> {
let named = match key.logical_key {
Key::Named(named) => named,
_ => return None,
};
// The default parameter is 1, so we can omit it.
- let one_based = if self.modifiers.is_empty() && !self.kitty_event_type { "" } else { "1" };
+ let one_based =
+ if self.modifiers.is_empty() && !self.kitty_event_type && !has_associated_text {
+ ""
+ } else {
+ "1"
+ };
let (base, terminator) = match named {
NamedKey::PageUp => ("5", SequenceTerminator::Normal('~')),
NamedKey::PageDown => ("6", SequenceTerminator::Normal('~')),
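The `one_based` change above encodes a CSI quirk: the leading parameter defaults to `1` and may be elided only when the whole parameter list is empty, so once associated text (or modifiers, or a kitty event type) follows, the `1` must be written out. A simplified sketch of just that decision, with the sequence assembly reduced to a plain `format!` (an assumption, not alacritty's full builder):

```rust
// Decide whether the implicit leading `1` of a CSI sequence can be elided.
// It can only be dropped when no further parameters will follow it.
fn one_based(has_modifiers: bool, kitty_event_type: bool, has_associated_text: bool) -> &'static str {
    if !has_modifiers && !kitty_event_type && !has_associated_text {
        ""
    } else {
        "1"
    }
}

fn main() {
    // Plain Up arrow: the default parameter is omitted entirely -> CSI A.
    assert_eq!(format!("\x1b[{}A", one_based(false, false, false)), "\x1b[A");
    // With associated text (the case fixed here), the 1 must be explicit.
    assert_eq!(format!("\x1b[{}A", one_based(false, false, true)), "\x1b[1A");
}
```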
diff --git a/alacritty/src/migrate/mod.rs b/alacritty/src/migrate/mod.rs
--- a/alacritty/src/migrate/mod.rs
+++ b/alacritty/src/migrate/mod.rs
@@ -151,7 +151,15 @@ fn migrate_imports(
// Migrate each import.
for import in imports.into_iter().filter_map(|item| item.as_str()) {
let normalized_path = config::normalize_import(path, import);
- let migration = migrate_config(options, &normalized_path, recursion_limit)?;
+
+ if !normalized_path.exists() {
+ if options.dry_run {
+ println!("Skipping migration for nonexistent path: {}", normalized_path.display());
+ }
+ continue;
+ }
+
+ let migration = migrate_config(options, &normalized_path, recursion_limit - 1)?;
if options.dry_run {
println!("{}", migration.success_message(true));
}
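The migration hunk above adds two guards: nonexistent import paths are skipped instead of aborting, and the recursion budget is decremented on every nested call so cyclic imports terminate. A hedged, self-contained sketch of that control flow (the return value and error type are assumptions, not the real `migrate_config` signature):

```rust
use std::path::Path;

// Skeleton of the import-migration walk with both guards from the patch.
fn migrate(path: &Path, recursion_limit: usize) -> Result<usize, String> {
    if recursion_limit == 0 {
        return Err("import recursion limit exceeded".into());
    }
    if !path.exists() {
        // A nonexistent import is skipped instead of failing the migration.
        return Ok(0);
    }
    // A real implementation would rewrite this file here, then recurse into
    // each of its imports with `recursion_limit - 1`.
    Ok(1)
}

fn main() {
    assert_eq!(migrate(Path::new("."), 5), Ok(1));
    assert_eq!(migrate(Path::new("/nonexistent-import.toml"), 5), Ok(0));
    assert!(migrate(Path::new("."), 0).is_err());
}
```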
diff --git a/alacritty/src/migrate/mod.rs b/alacritty/src/migrate/mod.rs
--- a/alacritty/src/migrate/mod.rs
+++ b/alacritty/src/migrate/mod.rs
@@ -244,7 +252,7 @@ enum Migration<'a> {
Yaml((&'a Path, String)),
}
-impl<'a> Migration<'a> {
+impl Migration<'_> {
/// Get the success message for this migration.
fn success_message(&self, import: bool) -> String {
match self {
diff --git a/alacritty/src/renderer/text/gles2.rs b/alacritty/src/renderer/text/gles2.rs
--- a/alacritty/src/renderer/text/gles2.rs
+++ b/alacritty/src/renderer/text/gles2.rs
@@ -346,7 +346,7 @@ pub struct RenderApi<'a> {
dual_source_blending: bool,
}
-impl<'a> Drop for RenderApi<'a> {
+impl Drop for RenderApi<'_> {
fn drop(&mut self) {
if !self.batch.is_empty() {
self.render_batch();
diff --git a/alacritty/src/renderer/text/gles2.rs b/alacritty/src/renderer/text/gles2.rs
--- a/alacritty/src/renderer/text/gles2.rs
+++ b/alacritty/src/renderer/text/gles2.rs
@@ -354,7 +354,7 @@ impl<'a> Drop for RenderApi<'a> {
}
}
-impl<'a> LoadGlyph for RenderApi<'a> {
+impl LoadGlyph for RenderApi<'_> {
fn load_glyph(&mut self, rasterized: &RasterizedGlyph) -> Glyph {
Atlas::load_glyph(self.active_tex, self.atlas, self.current_atlas, rasterized)
}
diff --git a/alacritty/src/renderer/text/gles2.rs b/alacritty/src/renderer/text/gles2.rs
--- a/alacritty/src/renderer/text/gles2.rs
+++ b/alacritty/src/renderer/text/gles2.rs
@@ -364,7 +364,7 @@ impl<'a> LoadGlyph for RenderApi<'a> {
}
}
-impl<'a> TextRenderApi<Batch> for RenderApi<'a> {
+impl TextRenderApi<Batch> for RenderApi<'_> {
fn batch(&mut self) -> &mut Batch {
self.batch
}
diff --git a/alacritty/src/renderer/text/glsl3.rs b/alacritty/src/renderer/text/glsl3.rs
--- a/alacritty/src/renderer/text/glsl3.rs
+++ b/alacritty/src/renderer/text/glsl3.rs
@@ -215,7 +215,7 @@ pub struct RenderApi<'a> {
program: &'a mut TextShaderProgram,
}
-impl<'a> TextRenderApi<Batch> for RenderApi<'a> {
+impl TextRenderApi<Batch> for RenderApi<'_> {
fn batch(&mut self) -> &mut Batch {
self.batch
}
diff --git a/alacritty/src/renderer/text/glsl3.rs b/alacritty/src/renderer/text/glsl3.rs
--- a/alacritty/src/renderer/text/glsl3.rs
+++ b/alacritty/src/renderer/text/glsl3.rs
@@ -261,7 +261,7 @@ impl<'a> TextRenderApi<Batch> for RenderApi<'a> {
}
}
-impl<'a> LoadGlyph for RenderApi<'a> {
+impl LoadGlyph for RenderApi<'_> {
fn load_glyph(&mut self, rasterized: &RasterizedGlyph) -> Glyph {
Atlas::load_glyph(self.active_tex, self.atlas, self.current_atlas, rasterized)
}
diff --git a/alacritty/src/renderer/text/glsl3.rs b/alacritty/src/renderer/text/glsl3.rs
--- a/alacritty/src/renderer/text/glsl3.rs
+++ b/alacritty/src/renderer/text/glsl3.rs
@@ -271,7 +271,7 @@ impl<'a> LoadGlyph for RenderApi<'a> {
}
}
-impl<'a> Drop for RenderApi<'a> {
+impl Drop for RenderApi<'_> {
fn drop(&mut self) {
if !self.batch.is_empty() {
self.render_batch();
diff --git a/alacritty/src/renderer/text/mod.rs b/alacritty/src/renderer/text/mod.rs
--- a/alacritty/src/renderer/text/mod.rs
+++ b/alacritty/src/renderer/text/mod.rs
@@ -186,7 +186,7 @@ pub struct LoaderApi<'a> {
current_atlas: &'a mut usize,
}
-impl<'a> LoadGlyph for LoaderApi<'a> {
+impl LoadGlyph for LoaderApi<'_> {
fn load_glyph(&mut self, rasterized: &RasterizedGlyph) -> Glyph {
Atlas::load_glyph(self.active_tex, self.atlas, self.current_atlas, rasterized)
}
diff --git a/alacritty/src/string.rs b/alacritty/src/string.rs
--- a/alacritty/src/string.rs
+++ b/alacritty/src/string.rs
@@ -106,7 +106,7 @@ impl<'a> StrShortener<'a> {
}
}
-impl<'a> Iterator for StrShortener<'a> {
+impl Iterator for StrShortener<'_> {
type Item = char;
fn next(&mut self) -> Option<Self::Item> {
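The repeated `impl<'a> Trait for Type<'a>` → `impl Trait for Type<'_>` hunks above are pure lifetime-elision cleanups: when the lifetime parameter is only needed to name the type, Rust 2018's anonymous lifetime `'_` can stand in for it. A tiny standalone illustration of the pattern (toy struct, not alacritty's `StrShortener`):

```rust
// A borrowing iterator over the chars of a string slice.
struct Shortener<'a> {
    chars: std::str::Chars<'a>,
}

// The impl's lifetime is only used to name the type, so it can be elided
// with `'_` -- exactly the rewrite applied throughout the diff above.
impl Iterator for Shortener<'_> {
    type Item = char;

    fn next(&mut self) -> Option<char> {
        self.chars.next()
    }
}

fn main() {
    let s = Shortener { chars: "ab".chars() };
    assert_eq!(s.collect::<String>(), "ab");
}
```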
diff --git a/alacritty/windows/wix/alacritty.wxs b/alacritty/windows/wix/alacritty.wxs
--- a/alacritty/windows/wix/alacritty.wxs
+++ b/alacritty/windows/wix/alacritty.wxs
@@ -1,5 +1,5 @@
<Wix xmlns="http://wixtoolset.org/schemas/v4/wxs" xmlns:ui="http://wixtoolset.org/schemas/v4/wxs/ui">
- <Package Name="Alacritty" UpgradeCode="87c21c74-dbd5-4584-89d5-46d9cd0c40a7" Language="1033" Codepage="1252" Version="0.14.0" Manufacturer="Alacritty" InstallerVersion="200">
+ <Package Name="Alacritty" UpgradeCode="87c21c74-dbd5-4584-89d5-46d9cd0c40a7" Language="1033" Codepage="1252" Version="0.14.1-rc1" Manufacturer="Alacritty" InstallerVersion="200">
<MajorUpgrade AllowSameVersionUpgrades="yes" DowngradeErrorMessage="A newer version of [ProductName] is already installed." />
<Icon Id="AlacrittyIco" SourceFile=".\alacritty\windows\alacritty.ico" />
<WixVariable Id="WixUILicenseRtf" Value=".\alacritty\windows\wix\license.rtf" />
diff --git a/alacritty_terminal/CHANGELOG.md b/alacritty_terminal/CHANGELOG.md
--- a/alacritty_terminal/CHANGELOG.md
+++ b/alacritty_terminal/CHANGELOG.md
@@ -8,6 +8,12 @@ sections should follow the order `Added`, `Changed`, `Deprecated`, `Fixed` and
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
+## 0.24.2
+
+### Fixed
+
+- Semantic escape search across wide characters
+
## 0.24.1
### Changed
diff --git a/alacritty_terminal/Cargo.toml b/alacritty_terminal/Cargo.toml
--- a/alacritty_terminal/Cargo.toml
+++ b/alacritty_terminal/Cargo.toml
@@ -1,6 +1,6 @@
[package]
name = "alacritty_terminal"
-version = "0.24.1"
+version = "0.24.2-rc1"
authors = ["Christian Duerr <contact@christianduerr.com>", "Joe Wilm <joe@jwilm.com>"]
license = "Apache-2.0"
description = "Library for writing terminal emulators"
diff --git a/alacritty_terminal/src/grid/mod.rs b/alacritty_terminal/src/grid/mod.rs
--- a/alacritty_terminal/src/grid/mod.rs
+++ b/alacritty_terminal/src/grid/mod.rs
@@ -615,7 +615,7 @@ pub trait BidirectionalIterator: Iterator {
fn prev(&mut self) -> Option<Self::Item>;
}
-impl<'a, T> BidirectionalIterator for GridIterator<'a, T> {
+impl<T> BidirectionalIterator for GridIterator<'_, T> {
fn prev(&mut self) -> Option<Self::Item> {
let topmost_line = self.grid.topmost_line();
let last_column = self.grid.last_column();
diff --git a/alacritty_terminal/src/term/mod.rs b/alacritty_terminal/src/term/mod.rs
--- a/alacritty_terminal/src/term/mod.rs
+++ b/alacritty_terminal/src/term/mod.rs
@@ -199,7 +199,7 @@ impl<'a> TermDamageIterator<'a> {
}
}
-impl<'a> Iterator for TermDamageIterator<'a> {
+impl Iterator for TermDamageIterator<'_> {
type Item = LineDamageBounds;
fn next(&mut self) -> Option<Self::Item> {
diff --git a/alacritty_terminal/src/term/search.rs b/alacritty_terminal/src/term/search.rs
--- a/alacritty_terminal/src/term/search.rs
+++ b/alacritty_terminal/src/term/search.rs
@@ -656,7 +656,7 @@ impl<'a, T> RegexIter<'a, T> {
}
}
-impl<'a, T> Iterator for RegexIter<'a, T> {
+impl<T> Iterator for RegexIter<'_, T> {
type Item = Match;
fn next(&mut self) -> Option<Self::Item> {
diff --git a/alacritty_terminal/src/vi_mode.rs b/alacritty_terminal/src/vi_mode.rs
--- a/alacritty_terminal/src/vi_mode.rs
+++ b/alacritty_terminal/src/vi_mode.rs
@@ -265,14 +265,14 @@ fn semantic<T: EventListener>(
}
};
- // Make sure we jump above wide chars.
- point = term.expand_wide(point, direction);
-
// Move to word boundary.
if direction != side && !is_boundary(term, point, direction) {
point = expand_semantic(point);
}
+ // Make sure we jump above wide chars.
+ point = term.expand_wide(point, direction);
+
// Skip whitespace.
let mut next_point = advance(term, point, direction);
while !is_boundary(term, point, direction) && is_space(term, next_point) {
diff --git a/alacritty_terminal/src/vi_mode.rs b/alacritty_terminal/src/vi_mode.rs
--- a/alacritty_terminal/src/vi_mode.rs
+++ b/alacritty_terminal/src/vi_mode.rs
@@ -283,6 +283,11 @@ fn semantic<T: EventListener>(
// Assure minimum movement of one cell.
if !is_boundary(term, point, direction) {
point = advance(term, point, direction);
+
+ // Skip over wide cell spacers.
+ if direction == Direction::Left {
+ point = term.expand_wide(point, direction);
+ }
}
// Move to word boundary.
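The two vi_mode hunks above reorder `expand_wide` to run after the boundary expansion and additionally skip wide-cell spacers right after the minimum one-cell move to the left. A toy model of that leftward motion (hand-rolled cells, not alacritty's grid types) reproduces the expectations of the regression test in the test patch:

```rust
// One terminal row: normal glyphs, wide-glyph heads, and the spacer cells
// that pad a wide glyph to two columns.
#[derive(Clone, Copy, PartialEq)]
enum Cell {
    Normal(char),
    WideHead(char),
    WideSpacer,
}

fn glyph(cell: Cell) -> char {
    match cell {
        Cell::Normal(c) | Cell::WideHead(c) => c,
        Cell::WideSpacer => ' ',
    }
}

// Mirror of `Term::expand_wide` for leftward motion: a spacer cell is never
// a valid cursor target, so hop onto the wide glyph's head column.
fn expand_wide_left(row: &[Cell], col: usize) -> usize {
    if row[col] == Cell::WideSpacer { col - 1 } else { col }
}

// Minimal `SemanticLeft`: advance at least one cell, skip spacers right
// after advancing (the upstream fix), then walk left until the previous
// cell is a semantic escape char.
fn semantic_left(row: &[Cell], col: usize, escape_chars: &str) -> usize {
    if col == 0 {
        return 0;
    }
    let mut col = expand_wide_left(row, col - 1);
    // A semantic escape char is a one-cell word of its own.
    if escape_chars.contains(glyph(row[col])) {
        return col;
    }
    while col > 0 {
        let prev = expand_wide_left(row, col - 1);
        if escape_chars.contains(glyph(row[prev])) {
            break;
        }
        col = prev;
    }
    col
}

fn main() {
    // Layout from the regression test: `x x - <spacer> x x`, `-` rendered wide.
    let row = [
        Cell::Normal('x'),
        Cell::Normal('x'),
        Cell::WideHead('-'),
        Cell::WideSpacer,
        Cell::Normal('x'),
        Cell::Normal('x'),
    ];
    assert_eq!(semantic_left(&row, 5, "-"), 4); // start of the trailing "xx"
    assert_eq!(semantic_left(&row, 4, "-"), 2); // spacer skipped, lands on `-`
    assert_eq!(semantic_left(&row, 2, "-"), 0); // start of the leading "xx"
}
```

Without the spacer skip, the cursor would land on the `WideSpacer` cell and stall there, which is the "cursor stops" behavior described in the issue below.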
diff --git a/extra/osx/Alacritty.app/Contents/Info.plist b/extra/osx/Alacritty.app/Contents/Info.plist
--- a/extra/osx/Alacritty.app/Contents/Info.plist
+++ b/extra/osx/Alacritty.app/Contents/Info.plist
@@ -15,7 +15,7 @@
<key>CFBundlePackageType</key>
<string>APPL</string>
<key>CFBundleShortVersionString</key>
- <string>0.14.0</string>
+ <string>0.14.1-rc1</string>
<key>CFBundleSupportedPlatforms</key>
<array>
<string>MacOSX</string>
| diff --git a/alacritty/src/input/mod.rs b/alacritty/src/input/mod.rs
--- a/alacritty/src/input/mod.rs
+++ b/alacritty/src/input/mod.rs
@@ -1136,7 +1136,7 @@ mod tests {
inline_search_state: &'a mut InlineSearchState,
}
- impl<'a, T: EventListener> super::ActionContext<T> for ActionContext<'a, T> {
+ impl<T: EventListener> super::ActionContext<T> for ActionContext<'_, T> {
fn search_next(
&mut self,
_origin: Point,
diff --git a/alacritty_terminal/src/term/mod.rs b/alacritty_terminal/src/term/mod.rs
--- a/alacritty_terminal/src/term/mod.rs
+++ b/alacritty_terminal/src/term/mod.rs
@@ -930,6 +930,11 @@ impl<T> Term<T> {
&self.config.semantic_escape_chars
}
+ #[cfg(test)]
+ pub(crate) fn set_semantic_escape_chars(&mut self, semantic_escape_chars: &str) {
+ self.config.semantic_escape_chars = semantic_escape_chars.into();
+ }
+
/// Active terminal cursor style.
///
/// While vi mode is active, this will automatically return the vi mode cursor style.
diff --git a/alacritty_terminal/src/vi_mode.rs b/alacritty_terminal/src/vi_mode.rs
--- a/alacritty_terminal/src/vi_mode.rs
+++ b/alacritty_terminal/src/vi_mode.rs
@@ -820,4 +825,42 @@ mod tests {
cursor = cursor.scroll(&term, -20);
assert_eq!(cursor.point, Point::new(Line(19), Column(0)));
}
+
+ #[test]
+ fn wide_semantic_char() {
+ let mut term = term();
+ term.set_semantic_escape_chars("-");
+ term.grid_mut()[Line(0)][Column(0)].c = 'x';
+ term.grid_mut()[Line(0)][Column(1)].c = 'x';
+ term.grid_mut()[Line(0)][Column(2)].c = '-';
+ term.grid_mut()[Line(0)][Column(2)].flags.insert(Flags::WIDE_CHAR);
+ term.grid_mut()[Line(0)][Column(3)].c = ' ';
+ term.grid_mut()[Line(0)][Column(3)].flags.insert(Flags::WIDE_CHAR_SPACER);
+ term.grid_mut()[Line(0)][Column(4)].c = 'x';
+ term.grid_mut()[Line(0)][Column(5)].c = 'x';
+
+ // Test motion to the right.
+
+ let mut cursor = ViModeCursor::new(Point::new(Line(0), Column(0)));
+ cursor = cursor.motion(&mut term, ViMotion::SemanticRight);
+ assert_eq!(cursor.point, Point::new(Line(0), Column(2)));
+
+ let mut cursor = ViModeCursor::new(Point::new(Line(0), Column(2)));
+ cursor = cursor.motion(&mut term, ViMotion::SemanticRight);
+ assert_eq!(cursor.point, Point::new(Line(0), Column(4)));
+
+ // Test motion to the left.
+
+ let mut cursor = ViModeCursor::new(Point::new(Line(0), Column(5)));
+ cursor = cursor.motion(&mut term, ViMotion::SemanticLeft);
+ assert_eq!(cursor.point, Point::new(Line(0), Column(4)));
+
+ let mut cursor = ViModeCursor::new(Point::new(Line(0), Column(4)));
+ cursor = cursor.motion(&mut term, ViMotion::SemanticLeft);
+ assert_eq!(cursor.point, Point::new(Line(0), Column(2)));
+
+ let mut cursor = ViModeCursor::new(Point::new(Line(0), Column(2)));
+ cursor = cursor.motion(&mut term, ViMotion::SemanticLeft);
+ assert_eq!(cursor.point, Point::new(Line(0), Column(0)));
+ }
}
| Semantic characters in Vi Mode and moving left
### System
OS: Linux 6.11.9 (arch)
Version: alacritty 0.14.0 (22a44757)
Sway 1:1.10-1
```
[0.000000942s] [INFO ] [alacritty] Welcome to Alacritty
[0.000051958s] [INFO ] [alacritty] Version 0.14.0 (22a44757)
[0.000061095s] [INFO ] [alacritty] Running on Wayland
[0.000525696s] [INFO ] [alacritty] Configuration files loaded from:
".config/alacritty/alacritty.toml"
".config/alacritty/themes/themes/tokyo-night.toml"
[0.032020983s] [INFO ] [alacritty] Using EGL 1.5
[0.032067079s] [DEBUG] [alacritty] Picked GL Config:
buffer_type: Some(Rgb { r_size: 8, g_size: 8, b_size: 8 })
alpha_size: 8
num_samples: 0
hardware_accelerated: true
supports_transparency: Some(true)
config_api: Api(OPENGL | GLES1 | GLES2 | GLES3)
srgb_capable: true
[0.035075269s] [INFO ] [alacritty] Window scale factor: 1
[0.036997955s] [DEBUG] [alacritty] Loading "JetBrains Mono" font
[0.048522765s] [DEBUG] [crossfont] Loaded Face Face { ft_face: Font Face: Regular, load_flags: LoadFlag(FORCE_AUTOHINT | TARGET_LIGHT | TARGET_MONO), render_mode: "Lcd", lcd_filter: 1 }
[0.052494282s] [DEBUG] [crossfont] Loaded Face Face { ft_face: Font Face: Italic, load_flags: LoadFlag(FORCE_AUTOHINT | TARGET_LIGHT | TARGET_MONO), render_mode: "Lcd", lcd_filter: 1 }
[0.056145538s] [DEBUG] [crossfont] Loaded Face Face { ft_face: Font Face: Bold Italic, load_flags: LoadFlag(FORCE_AUTOHINT | TARGET_LIGHT | TARGET_MONO), render_mode: "Lcd", lcd_filter: 1 }
[0.061237837s] [INFO ] [alacritty] Running on AMD Radeon Graphics (radeonsi, renoir, LLVM 18.1.8, DRM 3.59, 6.11.9-zen1-1-zen)
[0.061260339s] [INFO ] [alacritty] OpenGL version 4.6 (Core Profile) Mesa 24.2.7-arch1.1, shader_version 4.60
[0.061267753s] [INFO ] [alacritty] Using OpenGL 3.3 renderer
[0.068799656s] [DEBUG] [alacritty] Enabled debug logging for OpenGL
[0.070534168s] [DEBUG] [alacritty] Filling glyph cache with common glyphs
[0.080691745s] [INFO ] [alacritty] Cell size: 10 x 25
[0.080720799s] [INFO ] [alacritty] Padding: 0 x 0
[0.080725207s] [INFO ] [alacritty] Width: 800, Height: 600
[0.080761626s] [INFO ] [alacritty] PTY dimensions: 24 x 80
[0.084459059s] [INFO ] [alacritty] Initialisation complete
[0.105657768s] [INFO ] [alacritty] Font size changed to 18.666666 px
[0.105695439s] [INFO ] [alacritty] Cell size: 10 x 25
[0.105708323s] [DEBUG] [alacritty_terminal] New num_cols is 191 and num_lines is 40
[0.110029896s] [INFO ] [alacritty] Padding: 0 x 0
[0.110058470s] [INFO ] [alacritty] Width: 1916, Height: 1020
```
When in Vi Mode [Shift]+[Ctrl]+[Space], moving to the left with [B] "SemanticLeft" does not work for all semantic characters:
`this is a test-STAHP.STAHP—yes this is a test-move test•lets go`
In the above, the characters to the left of "STAHP" causes the cursor to... stop. Moving forward with [E] is fine. The only way to move past those characters is to use the cursor keys.
Please be so kind as to look into this if you have the time at hand to spare.
With humble thanks.
macOS specific keyboard input crashes alacritty 0.14.0
edit: changed the topic title to better reflect the issue given what others have commented regarding their own experience with crashing.
I use a piece of software on macOS called [BetterTouchTool](https://folivora.ai/) and one of its features is being able to turn caps lock into a hyper key. Hyper key for the uninitiated is actually a simultaneous key press event for the actual modifier keys `Command|Shift|Control|Option`. I use the hyper key in combination with keyboard bindings within alacritty such that my hyper key essentially handles the standard bash / readline Ctrl-* key sequences--that is to say, Hyper-C acts as Ctrl-c and terminates a process.
```
[[keyboard.bindings]]
chars = "\u0003"
key = "C"
mods = "Command|Shift|Control|Option"
```
Previously, (i.e. the past two years) this was working just fine.
Expected behavior: The character sequence defined in the key binding is sent, the desired effect occurs, and the terminal continues to operate without issue.
Actual behavior: Repeated (although sometimes on the first try) use of the hyper key crashes the terminal causing it to suddenly close.
### System
OS:
```
> sw_vers
ProductName: macOS
ProductVersion: 15.0.1
BuildVersion: 24A348
```
Version:
```
> alacritty --version
alacritty 0.14.0 (22a4475)
```
### Logs
```
[2.758060875s] [INFO ] [alacritty_winit_event] Event { window_id: Some(WindowId(5198973488)), payload: Frame }
[2.758098667s] [INFO ] [alacritty_winit_event] About to wait
[2.758120958s] [INFO ] [alacritty_winit_event] About to wait
[2.765228375s] [INFO ] [alacritty_winit_event] About to wait
[2.771734167s] [INFO ] [alacritty_winit_event] ModifiersChanged(Modifiers { state: ModifiersState(0x0), pressed_mods: ModifiersKeys(0x0) })
[2.771803708s] [INFO ] [alacritty_winit_event] KeyboardInput { device_id: DeviceId(DeviceId), event: KeyEvent { physical_key: Code(ControlLeft), logical_key: Named(Control), text: None, location: Left, state: Released, repeat: false, platform_specific: KeyEventExtra { text_with_all_modifiers: None, key_without_modifiers: Named(Control) } }, is_synthetic: false }
[2.771818125s] [INFO ] [alacritty_winit_event] ModifiersChanged(Modifiers { state: ModifiersState(SHIFT | ALT | SUPER), pressed_mods: ModifiersKeys(0x0) })
[2.771874583s] [INFO ] [alacritty_winit_event] KeyboardInput { device_id: DeviceId(DeviceId), event: KeyEvent { physical_key: Code(ShiftLeft), logical_key: Named(Shift), text: None, location: Left, state: Released, repeat: false, platform_specific: KeyEventExtra { text_with_all_modifiers: None, key_without_modifiers: Named(Shift) } }, is_synthetic: false }
[2.771896125s] [INFO ] [alacritty_winit_event] ModifiersChanged(Modifiers { state: ModifiersState(ALT | SUPER), pressed_mods: ModifiersKeys(0x0) })
[2.771950375s] [INFO ] [alacritty_winit_event] KeyboardInput { device_id: DeviceId(DeviceId), event: KeyEvent { physical_key: Code(AltLeft), logical_key: Named(Alt), text: None, location: Left, state: Released, repeat: false, platform_specific: KeyEventExtra { text_with_all_modifiers: None, key_without_modifiers: Named(Alt) } }, is_synthetic: false }
[2.771962542s] [INFO ] [alacritty_winit_event] ModifiersChanged(Modifiers { state: ModifiersState(SUPER), pressed_mods: ModifiersKeys(0x0) })
[2.772015583s] [INFO ] [alacritty_winit_event] KeyboardInput { device_id: DeviceId(DeviceId), event: KeyEvent { physical_key: Code(SuperLeft), logical_key: Named(Super), text: None, location: Left, state: Released, repeat: false, platform_specific: KeyEventExtra { text_with_all_modifiers: None, key_without_modifiers: Named(Super) } }, is_synthetic: false }
[2.772026875s] [INFO ] [alacritty_winit_event] ModifiersChanged(Modifiers { state: ModifiersState(0x0), pressed_mods: ModifiersKeys(0x0) })
[2.772066917s] [INFO ] [alacritty_winit_event] About to wait
[2.772157250s] [INFO ] [alacritty_winit_event] About to wait
*** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Invalid message sent to event "NSEvent: type=FlagsChanged loc=(0,744) time=3879.1 flags=0x100 win=0x135e20a30 winNum=1167 ctxt=0x0 keyCode=15"'
*** First throw call stack:
(
0 CoreFoundation 0x00000001955b0ec0 __exceptionPreprocess + 176
1 libobjc.A.dylib 0x0000000195096cd8 objc_exception_throw + 88
2 Foundation 0x00000001967ad588 -[NSCalendarDate initWithCoder:] + 0
3 AppKit 0x0000000199225cec -[NSEvent charactersIgnoringModifiers] + 604
4 alacritty 0x0000000104969154 _ZN5winit13platform_impl5macos5event16create_key_event17h5ffd7f45014bc6b9E + 2508
5 alacritty 0x00000001049594a8 _ZN5winit13platform_impl5macos4view9WinitView13flags_changed17h58c32ee31fdc0f1bE + 344
6 AppKit 0x00000001991a80a8 -[NSWindow(NSEventRouting) _reallySendEvent:isDelayedEvent:] + 484
7 AppKit 0x00000001991a7cf4 -[NSWindow(NSEventRouting) sendEvent:] + 284
8 AppKit 0x00000001999a53e0 -[NSApplication(NSEventRouting) sendEvent:] + 1212
9 alacritty 0x0000000104961a3c _ZN5winit13platform_impl5macos3app16WinitApplication10send_event17h9d7b11ed09e187deE + 1040
10 AppKit 0x00000001995b8984 -[NSApplication _handleEvent:] + 60
11 AppKit 0x0000000199073ba4 -[NSApplication run] + 520
12 alacritty 0x000000010469e090 _ZN5winit13platform_impl5macos13event_handler12EventHandler3set17ha6185e593ba31ad4E + 280
13 alacritty 0x000000010474d378 _ZN9alacritty4main17h290211ac615fedbeE + 7624
14 alacritty 0x00000001046c2060 _ZN3std3sys9backtrace28__rust_begin_short_backtrace17h86645a6c57eec1ecE + 12
15 alacritty 0x000000010468392c _ZN3std2rt10lang_start28_$u7b$$u7b$closure$u7d$$u7d$17h7b27810acbbe86f2E + 16
16 alacritty 0x00000001048f1598 _ZN3std2rt19lang_start_internal17h9e88109c8deb8787E + 808
17 alacritty 0x000000010474de4c main + 52
18 dyld 0x00000001950d4274 start + 2840
)
libc++abi: terminating due to uncaught exception of type NSException
[1] 14442 abort alacritty -vv --print-events
```
I cannot confirm whether this hyper key issue applies to other software that adds this functionality (such as Karabiner-Elements) or whether it is specific to how BetterTouchTool implements the hyper key. In the meantime, I've downgraded to [alacritty 0.13.2](https://github.com/Homebrew/homebrew-cask/blob/ee57cb02e89536c4eb0624e7175705c947693ac7/Casks/a/alacritty.rb), which has resolved the issue.
```
curl 'https://raw.githubusercontent.com/Homebrew/homebrew-cask/ee57cb02e89536c4eb0624e7175705c947693ac7/Casks/a/alacritty.rb' -o alacritty.rb
brew install -s alacritty.rb
```
Not sure if there's something I'm overlooking with the latest update. Appreciate any help.
| 2024-12-14T09:10:33 | 0.14 | 22a447573bbd67c0a5d3946d58d6d61bac3b4ad2 | [
"vi_mode::tests::wide_semantic_char"
] | [
"grid::storage::tests::indexing",
"grid::storage::tests::rotate",
"grid::storage::tests::shrink_before_and_after_zero",
"grid::storage::tests::rotate_wrap_zero",
"grid::storage::tests::indexing_above_inner_len - should panic",
"grid::storage::tests::shrink_after_zero",
"grid::storage::tests::shrink_befo... | [] | [] | 1 | |
alacritty/alacritty | 8,315 | alacritty__alacritty-8315 | [
"8314"
] | 4f739a7e2b933f6828ebf64654c8a8c573bf0ec1 | diff --git a/CHANGELOG.md b/CHANGELOG.md
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -18,6 +18,7 @@ Notable changes to the `alacritty_terminal` crate are documented in its
- Mouse/Vi cursor hint highlighting broken on the terminal cursor line
- Hint launcher opening arbitrary text, when terminal content changed while opening
+- `SemanticRight`/`SemanticLeft` vi motions breaking with wide semantic escape characters
## 0.14.0
diff --git a/alacritty_terminal/src/vi_mode.rs b/alacritty_terminal/src/vi_mode.rs
--- a/alacritty_terminal/src/vi_mode.rs
+++ b/alacritty_terminal/src/vi_mode.rs
@@ -265,14 +265,14 @@ fn semantic<T: EventListener>(
}
};
- // Make sure we jump above wide chars.
- point = term.expand_wide(point, direction);
-
// Move to word boundary.
if direction != side && !is_boundary(term, point, direction) {
point = expand_semantic(point);
}
+ // Make sure we jump above wide chars.
+ point = term.expand_wide(point, direction);
+
// Skip whitespace.
let mut next_point = advance(term, point, direction);
while !is_boundary(term, point, direction) && is_space(term, next_point) {
diff --git a/alacritty_terminal/src/vi_mode.rs b/alacritty_terminal/src/vi_mode.rs
--- a/alacritty_terminal/src/vi_mode.rs
+++ b/alacritty_terminal/src/vi_mode.rs
@@ -283,6 +283,11 @@ fn semantic<T: EventListener>(
// Assure minimum movement of one cell.
if !is_boundary(term, point, direction) {
point = advance(term, point, direction);
+
+ // Skip over wide cell spacers.
+ if direction == Direction::Left {
+ point = term.expand_wide(point, direction);
+ }
}
// Move to word boundary.
| diff --git a/alacritty_terminal/src/term/mod.rs b/alacritty_terminal/src/term/mod.rs
--- a/alacritty_terminal/src/term/mod.rs
+++ b/alacritty_terminal/src/term/mod.rs
@@ -930,6 +930,11 @@ impl<T> Term<T> {
&self.config.semantic_escape_chars
}
+ #[cfg(test)]
+ pub(crate) fn set_semantic_escape_chars(&mut self, semantic_escape_chars: &str) {
+ self.config.semantic_escape_chars = semantic_escape_chars.into();
+ }
+
/// Active terminal cursor style.
///
/// While vi mode is active, this will automatically return the vi mode cursor style.
diff --git a/alacritty_terminal/src/vi_mode.rs b/alacritty_terminal/src/vi_mode.rs
--- a/alacritty_terminal/src/vi_mode.rs
+++ b/alacritty_terminal/src/vi_mode.rs
@@ -820,4 +825,42 @@ mod tests {
cursor = cursor.scroll(&term, -20);
assert_eq!(cursor.point, Point::new(Line(19), Column(0)));
}
+
+ #[test]
+ fn wide_semantic_char() {
+ let mut term = term();
+ term.set_semantic_escape_chars("-");
+ term.grid_mut()[Line(0)][Column(0)].c = 'x';
+ term.grid_mut()[Line(0)][Column(1)].c = 'x';
+ term.grid_mut()[Line(0)][Column(2)].c = '-';
+ term.grid_mut()[Line(0)][Column(2)].flags.insert(Flags::WIDE_CHAR);
+ term.grid_mut()[Line(0)][Column(3)].c = ' ';
+ term.grid_mut()[Line(0)][Column(3)].flags.insert(Flags::WIDE_CHAR_SPACER);
+ term.grid_mut()[Line(0)][Column(4)].c = 'x';
+ term.grid_mut()[Line(0)][Column(5)].c = 'x';
+
+ // Test motion to the right.
+
+ let mut cursor = ViModeCursor::new(Point::new(Line(0), Column(0)));
+ cursor = cursor.motion(&mut term, ViMotion::SemanticRight);
+ assert_eq!(cursor.point, Point::new(Line(0), Column(2)));
+
+ let mut cursor = ViModeCursor::new(Point::new(Line(0), Column(2)));
+ cursor = cursor.motion(&mut term, ViMotion::SemanticRight);
+ assert_eq!(cursor.point, Point::new(Line(0), Column(4)));
+
+ // Test motion to the left.
+
+ let mut cursor = ViModeCursor::new(Point::new(Line(0), Column(5)));
+ cursor = cursor.motion(&mut term, ViMotion::SemanticLeft);
+ assert_eq!(cursor.point, Point::new(Line(0), Column(4)));
+
+ let mut cursor = ViModeCursor::new(Point::new(Line(0), Column(4)));
+ cursor = cursor.motion(&mut term, ViMotion::SemanticLeft);
+ assert_eq!(cursor.point, Point::new(Line(0), Column(2)));
+
+ let mut cursor = ViModeCursor::new(Point::new(Line(0), Column(2)));
+ cursor = cursor.motion(&mut term, ViMotion::SemanticLeft);
+ assert_eq!(cursor.point, Point::new(Line(0), Column(0)));
+ }
}
| Semantic characters in Vi Mode and moving left
### System
OS: Linux 6.11.9 (arch)
Version: alacritty 0.14.0 (22a44757)
Sway 1:1.10-1
```
[0.000000942s] [INFO ] [alacritty] Welcome to Alacritty
[0.000051958s] [INFO ] [alacritty] Version 0.14.0 (22a44757)
[0.000061095s] [INFO ] [alacritty] Running on Wayland
[0.000525696s] [INFO ] [alacritty] Configuration files loaded from:
".config/alacritty/alacritty.toml"
".config/alacritty/themes/themes/tokyo-night.toml"
[0.032020983s] [INFO ] [alacritty] Using EGL 1.5
[0.032067079s] [DEBUG] [alacritty] Picked GL Config:
buffer_type: Some(Rgb { r_size: 8, g_size: 8, b_size: 8 })
alpha_size: 8
num_samples: 0
hardware_accelerated: true
supports_transparency: Some(true)
config_api: Api(OPENGL | GLES1 | GLES2 | GLES3)
srgb_capable: true
[0.035075269s] [INFO ] [alacritty] Window scale factor: 1
[0.036997955s] [DEBUG] [alacritty] Loading "JetBrains Mono" font
[0.048522765s] [DEBUG] [crossfont] Loaded Face Face { ft_face: Font Face: Regular, load_flags: LoadFlag(FORCE_AUTOHINT | TARGET_LIGHT | TARGET_MONO), render_mode: "Lcd", lcd_filter: 1 }
[0.052494282s] [DEBUG] [crossfont] Loaded Face Face { ft_face: Font Face: Italic, load_flags: LoadFlag(FORCE_AUTOHINT | TARGET_LIGHT | TARGET_MONO), render_mode: "Lcd", lcd_filter: 1 }
[0.056145538s] [DEBUG] [crossfont] Loaded Face Face { ft_face: Font Face: Bold Italic, load_flags: LoadFlag(FORCE_AUTOHINT | TARGET_LIGHT | TARGET_MONO), render_mode: "Lcd", lcd_filter: 1 }
[0.061237837s] [INFO ] [alacritty] Running on AMD Radeon Graphics (radeonsi, renoir, LLVM 18.1.8, DRM 3.59, 6.11.9-zen1-1-zen)
[0.061260339s] [INFO ] [alacritty] OpenGL version 4.6 (Core Profile) Mesa 24.2.7-arch1.1, shader_version 4.60
[0.061267753s] [INFO ] [alacritty] Using OpenGL 3.3 renderer
[0.068799656s] [DEBUG] [alacritty] Enabled debug logging for OpenGL
[0.070534168s] [DEBUG] [alacritty] Filling glyph cache with common glyphs
[0.080691745s] [INFO ] [alacritty] Cell size: 10 x 25
[0.080720799s] [INFO ] [alacritty] Padding: 0 x 0
[0.080725207s] [INFO ] [alacritty] Width: 800, Height: 600
[0.080761626s] [INFO ] [alacritty] PTY dimensions: 24 x 80
[0.084459059s] [INFO ] [alacritty] Initialisation complete
[0.105657768s] [INFO ] [alacritty] Font size changed to 18.666666 px
[0.105695439s] [INFO ] [alacritty] Cell size: 10 x 25
[0.105708323s] [DEBUG] [alacritty_terminal] New num_cols is 191 and num_lines is 40
[0.110029896s] [INFO ] [alacritty] Padding: 0 x 0
[0.110058470s] [INFO ] [alacritty] Width: 1916, Height: 1020
```
When in Vi Mode [Shift]+[Ctrl]+[Space], moving to the left with [B] "SemanticLeft" does not work for all semantic characters:
`this is a test-STAHP.STAHP—yes this is a test-move test•lets go`
In the above, the characters to the left of "STAHP" cause the cursor to... stop. Moving forward with [E] works fine. The only way to move past those characters is to use the cursor keys.
Please be so kind as to look into this if you have the time at hand to spare.
With humble thanks.
 | I'd assume that you have those wide chars specified in your semantic escape chars, since wide chars are not there by default.
Yes.
```
[selection]
semantic_escape_chars = ",│`|:\"' ()[]{}<>\t•-—-."
```
It seems like `b` isn't the only binding affected. The `w` binding also does not work correctly. | 2024-11-20T09:21:31 | 1.74 | 4f739a7e2b933f6828ebf64654c8a8c573bf0ec1 | [
"vi_mode::tests::wide_semantic_char"
] | [
"grid::storage::tests::indexing",
"grid::storage::tests::rotate",
"grid::storage::tests::indexing_above_inner_len - should panic",
"grid::storage::tests::rotate_wrap_zero",
"grid::storage::tests::shrink_after_zero",
"grid::storage::tests::shrink_before_and_after_zero",
"grid::storage::tests::shrink_befo... | [] | [] | 2 |
alacritty/alacritty | 8,069 | alacritty__alacritty-8069 | [
"8060"
] | 5e6b92db85b3ea7ffd06a7a5ae0d2d62ad5946a6 | diff --git a/CHANGELOG.md b/CHANGELOG.md
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -31,6 +31,7 @@ Notable changes to the `alacritty_terminal` crate are documented in its
- Leaking FDs when closing windows on Unix systems
- Config emitting errors for nonexistent import paths
- Kitty keyboard protocol reporting shifted key codes
+- Broken search with words broken across line boundary on the first character
## 0.13.2
diff --git a/alacritty/src/renderer/text/glyph_cache.rs b/alacritty/src/renderer/text/glyph_cache.rs
--- a/alacritty/src/renderer/text/glyph_cache.rs
+++ b/alacritty/src/renderer/text/glyph_cache.rs
@@ -187,14 +187,9 @@ impl GlyphCache {
///
/// This will fail when the glyph could not be rasterized. Usually this is due to the glyph
/// not being present in any font.
- pub fn get<L: ?Sized>(
- &mut self,
- glyph_key: GlyphKey,
- loader: &mut L,
- show_missing: bool,
- ) -> Glyph
+ pub fn get<L>(&mut self, glyph_key: GlyphKey, loader: &mut L, show_missing: bool) -> Glyph
where
- L: LoadGlyph,
+ L: LoadGlyph + ?Sized,
{
// Try to load glyph from cache.
if let Some(glyph) = self.cache.get(&glyph_key) {
diff --git a/alacritty/src/renderer/text/glyph_cache.rs b/alacritty/src/renderer/text/glyph_cache.rs
--- a/alacritty/src/renderer/text/glyph_cache.rs
+++ b/alacritty/src/renderer/text/glyph_cache.rs
@@ -242,9 +237,9 @@ impl GlyphCache {
/// Load glyph into the atlas.
///
/// This will apply all transforms defined for the glyph cache to the rasterized glyph before
- pub fn load_glyph<L: ?Sized>(&self, loader: &mut L, mut glyph: RasterizedGlyph) -> Glyph
+ pub fn load_glyph<L>(&self, loader: &mut L, mut glyph: RasterizedGlyph) -> Glyph
where
- L: LoadGlyph,
+ L: LoadGlyph + ?Sized,
{
glyph.left += i32::from(self.glyph_offset.x);
glyph.top += i32::from(self.glyph_offset.y);
diff --git a/alacritty_terminal/src/term/mod.rs b/alacritty_terminal/src/term/mod.rs
--- a/alacritty_terminal/src/term/mod.rs
+++ b/alacritty_terminal/src/term/mod.rs
@@ -1445,15 +1445,15 @@ impl<T: EventListener> Handler for Term<T> {
/// edition, in LINE FEED mode,
///
/// > The execution of the formatter functions LINE FEED (LF), FORM FEED
- /// (FF), LINE TABULATION (VT) cause only movement of the active position in
- /// the direction of the line progression.
+ /// > (FF), LINE TABULATION (VT) cause only movement of the active position in
+ /// > the direction of the line progression.
///
/// In NEW LINE mode,
///
/// > The execution of the formatter functions LINE FEED (LF), FORM FEED
- /// (FF), LINE TABULATION (VT) cause movement to the line home position on
- /// the following line, the following form, etc. In the case of LF this is
- /// referred to as the New Line (NL) option.
+ /// > (FF), LINE TABULATION (VT) cause movement to the line home position on
+ /// > the following line, the following form, etc. In the case of LF this is
+ /// > referred to as the New Line (NL) option.
///
/// Additionally, ECMA-48 4th edition says that this option is deprecated.
/// ECMA-48 5th edition only mentions this option (without explanation)
diff --git a/alacritty_terminal/src/term/mod.rs b/alacritty_terminal/src/term/mod.rs
--- a/alacritty_terminal/src/term/mod.rs
+++ b/alacritty_terminal/src/term/mod.rs
@@ -2187,7 +2187,7 @@ impl<T: EventListener> Handler for Term<T> {
fn set_title(&mut self, title: Option<String>) {
trace!("Setting title to '{:?}'", title);
- self.title = title.clone();
+ self.title.clone_from(&title);
let title_event = match title {
Some(title) => Event::Title(title),
diff --git a/alacritty_terminal/src/term/search.rs b/alacritty_terminal/src/term/search.rs
--- a/alacritty_terminal/src/term/search.rs
+++ b/alacritty_terminal/src/term/search.rs
@@ -293,12 +293,12 @@ impl<T> Term<T> {
let mut state = regex.dfa.start_state_forward(&mut regex.cache, &input).unwrap();
let mut iter = self.grid.iter_from(start);
- let mut last_wrapped = false;
let mut regex_match = None;
let mut done = false;
let mut cell = iter.cell();
self.skip_fullwidth(&mut iter, &mut cell, regex.direction);
+ let mut last_wrapped = cell.flags.contains(Flags::WRAPLINE);
let mut c = cell.c;
let mut point = iter.point();
| diff --git a/alacritty_terminal/src/term/search.rs b/alacritty_terminal/src/term/search.rs
--- a/alacritty_terminal/src/term/search.rs
+++ b/alacritty_terminal/src/term/search.rs
@@ -1155,4 +1155,20 @@ mod tests {
assert_eq!(start, Point::new(Line(1), Column(0)));
assert_eq!(end, Point::new(Line(1), Column(2)));
}
+
+ #[test]
+ fn inline_word_search() {
+ #[rustfmt::skip]
+ let term = mock_term("\
+ word word word word w\n\
+ ord word word word\
+ ");
+
+ let mut regex = RegexSearch::new("word").unwrap();
+ let start = Point::new(Line(1), Column(4));
+ let end = Point::new(Line(0), Column(0));
+ let match_start = Point::new(Line(0), Column(20));
+ let match_end = Point::new(Line(1), Column(2));
+ assert_eq!(term.regex_search_left(&mut regex, start, end), Some(match_start..=match_end));
+ }
}
| Bug: backward search on single line goes in cycles
When using backward search, it sometimes goes in cycles without ever moving the cursor to some of the search results. In the example video, everything works fine before resizing, but after resizing the search cycles only through the last few matches on the line instead of through all occurrences.
https://github.com/alacritty/alacritty/assets/37012324/34cbb259-9d3d-4de5-83df-f92ca66735ba
### System
OS: Linux, Wayland, Sway
Version: alacritty 0.13.2
### Logs
No logs
| The provided video cannot be played back.
@chrisduerr, I can play it back. Do you use Firefox? If so, GitHub videos failing to play in Firefox is a known bug. Try downloading the video or using another browser :shrug:
Even without video, try to create a long line with like 30 or 40 entries of the same word and try backward-searching this word. Search will break | 2024-06-29T07:36:44 | 1.70 | 5e6b92db85b3ea7ffd06a7a5ae0d2d62ad5946a6 | [
"term::search::tests::inline_word_search"
] | [
"grid::storage::tests::indexing",
"grid::storage::tests::indexing_above_inner_len - should panic",
"grid::storage::tests::rotate",
"grid::storage::tests::rotate_wrap_zero",
"grid::storage::tests::shrink_after_zero",
"grid::storage::tests::shrink_before_and_after_zero",
"grid::storage::tests::shrink_then... | [] | [] | 3 |
alacritty/alacritty | 7,729 | alacritty__alacritty-7729 | [
"7720"
] | c354f58f421c267cc1472414eaaa9f738509ede0 | diff --git a/CHANGELOG.md b/CHANGELOG.md
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -28,6 +28,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
- Autokey no longer working with alacritty on X11
- Freeze when moving window between monitors on Xfwm
- Mouse cursor not changing on Wayland when cursor theme uses legacy cursor icon names
+- Config keys are available under proper names
### Changed
diff --git a/alacritty_config_derive/src/serde_replace.rs b/alacritty_config_derive/src/serde_replace.rs
--- a/alacritty_config_derive/src/serde_replace.rs
+++ b/alacritty_config_derive/src/serde_replace.rs
@@ -44,7 +44,10 @@ pub fn derive_recursive<T>(
) -> TokenStream2 {
let GenericsStreams { unconstrained, constrained, .. } =
crate::generics_streams(&generics.params);
- let replace_arms = match_arms(&fields);
+ let replace_arms = match match_arms(&fields) {
+ Err(e) => return e.to_compile_error(),
+ Ok(replace_arms) => replace_arms,
+ };
quote! {
#[allow(clippy::extra_unused_lifetimes)]
diff --git a/alacritty_config_derive/src/serde_replace.rs b/alacritty_config_derive/src/serde_replace.rs
--- a/alacritty_config_derive/src/serde_replace.rs
+++ b/alacritty_config_derive/src/serde_replace.rs
@@ -75,7 +78,7 @@ pub fn derive_recursive<T>(
}
/// Create SerdeReplace recursive match arms.
-fn match_arms<T>(fields: &Punctuated<Field, T>) -> TokenStream2 {
+fn match_arms<T>(fields: &Punctuated<Field, T>) -> Result<TokenStream2, syn::Error> {
let mut stream = TokenStream2::default();
let mut flattened_arm = None;
diff --git a/alacritty_config_derive/src/serde_replace.rs b/alacritty_config_derive/src/serde_replace.rs
--- a/alacritty_config_derive/src/serde_replace.rs
+++ b/alacritty_config_derive/src/serde_replace.rs
@@ -88,19 +91,42 @@ fn match_arms<T>(fields: &Punctuated<Field, T>) -> TokenStream2 {
let flatten = field
.attrs
.iter()
+ .filter(|attr| (*attr).path().is_ident("config"))
.filter_map(|attr| attr.parse_args::<Attr>().ok())
.any(|parsed| parsed.ident.as_str() == "flatten");
if flatten && flattened_arm.is_some() {
- return Error::new(ident.span(), MULTIPLE_FLATTEN_ERROR).to_compile_error();
+ return Err(Error::new(ident.span(), MULTIPLE_FLATTEN_ERROR));
} else if flatten {
flattened_arm = Some(quote! {
_ => alacritty_config::SerdeReplace::replace(&mut self.#ident, value)?,
});
} else {
+ // Extract all `#[config(alias = "...")]` attribute values.
+ let aliases = field
+ .attrs
+ .iter()
+ .filter(|attr| (*attr).path().is_ident("config"))
+ .filter_map(|attr| attr.parse_args::<Attr>().ok())
+ .filter(|parsed| parsed.ident.as_str() == "alias")
+ .map(|parsed| {
+ let value = parsed
+ .param
+ .ok_or_else(|| format!("Field \"{}\" has no alias value", ident))?
+ .value();
+
+ if value.trim().is_empty() {
+ return Err(format!("Field \"{}\" has an empty alias value", ident));
+ }
+
+ Ok(value)
+ })
+ .collect::<Result<Vec<String>, String>>()
+ .map_err(|msg| Error::new(ident.span(), msg))?;
+
stream.extend(quote! {
- #literal => alacritty_config::SerdeReplace::replace(&mut self.#ident, next_value)?,
- });
+ #(#aliases)|* | #literal => alacritty_config::SerdeReplace::replace(&mut
+ self.#ident, next_value)?, });
}
}
diff --git a/alacritty_config_derive/src/serde_replace.rs b/alacritty_config_derive/src/serde_replace.rs
--- a/alacritty_config_derive/src/serde_replace.rs
+++ b/alacritty_config_derive/src/serde_replace.rs
@@ -109,5 +135,5 @@ fn match_arms<T>(fields: &Punctuated<Field, T>) -> TokenStream2 {
stream.extend(flattened_arm);
}
- stream
+ Ok(stream)
}
| diff --git a/alacritty_config_derive/tests/config.rs b/alacritty_config_derive/tests/config.rs
--- a/alacritty_config_derive/tests/config.rs
+++ b/alacritty_config_derive/tests/config.rs
@@ -23,7 +23,7 @@ impl Default for TestEnum {
#[derive(ConfigDeserialize)]
struct Test {
- #[config(alias = "noalias")]
+ #[config(alias = "field1_alias")]
#[config(deprecated = "use field2 instead")]
field1: usize,
#[config(deprecated = "shouldn't be hit")]
diff --git a/alacritty_config_derive/tests/config.rs b/alacritty_config_derive/tests/config.rs
--- a/alacritty_config_derive/tests/config.rs
+++ b/alacritty_config_derive/tests/config.rs
@@ -39,6 +39,9 @@ struct Test {
enom_error: TestEnum,
#[config(removed = "it's gone")]
gone: bool,
+ #[config(alias = "multiple_alias1")]
+ #[config(alias = "multiple_alias2")]
+ multiple_alias_field: usize,
}
impl Default for Test {
diff --git a/alacritty_config_derive/tests/config.rs b/alacritty_config_derive/tests/config.rs
--- a/alacritty_config_derive/tests/config.rs
+++ b/alacritty_config_derive/tests/config.rs
@@ -53,6 +56,7 @@ impl Default for Test {
enom_big: TestEnum::default(),
enom_error: TestEnum::default(),
gone: false,
+ multiple_alias_field: 0,
}
}
}
diff --git a/alacritty_config_derive/tests/config.rs b/alacritty_config_derive/tests/config.rs
--- a/alacritty_config_derive/tests/config.rs
+++ b/alacritty_config_derive/tests/config.rs
@@ -70,6 +74,7 @@ struct Test2<T: Default> {
#[derive(ConfigDeserialize, Default)]
struct Test3 {
+ #[config(alias = "flatty_alias")]
flatty: usize,
}
diff --git a/alacritty_config_derive/tests/config.rs b/alacritty_config_derive/tests/config.rs
--- a/alacritty_config_derive/tests/config.rs
+++ b/alacritty_config_derive/tests/config.rs
@@ -189,6 +194,33 @@ fn replace_derive() {
assert_eq!(test.nesting.newtype, NewType(9));
}
+#[test]
+fn replace_derive_using_alias() {
+ let mut test = Test::default();
+
+ assert_ne!(test.field1, 9);
+
+ let value = toml::from_str("field1_alias=9").unwrap();
+ test.replace(value).unwrap();
+
+ assert_eq!(test.field1, 9);
+}
+
+#[test]
+fn replace_derive_using_multiple_aliases() {
+ let mut test = Test::default();
+
+ let toml_value = toml::from_str("multiple_alias1=6").unwrap();
+ test.replace(toml_value).unwrap();
+
+ assert_eq!(test.multiple_alias_field, 6);
+
+ let toml_value = toml::from_str("multiple_alias1=7").unwrap();
+ test.replace(toml_value).unwrap();
+
+ assert_eq!(test.multiple_alias_field, 7);
+}
+
#[test]
fn replace_flatten() {
let mut test = Test::default();
diff --git a/alacritty_config_derive/tests/config.rs b/alacritty_config_derive/tests/config.rs
--- a/alacritty_config_derive/tests/config.rs
+++ b/alacritty_config_derive/tests/config.rs
@@ -198,3 +230,15 @@ fn replace_flatten() {
assert_eq!(test.flatten.flatty, 7);
}
+
+#[test]
+fn replace_flatten_using_alias() {
+ let mut test = Test::default();
+
+ assert_ne!(test.flatten.flatty, 7);
+
+ let value = toml::from_str("flatty_alias=7").unwrap();
+ test.replace(value).unwrap();
+
+ assert_eq!(test.flatten.flatty, 7);
+}
| CLI -o config overrides should accept aliases as documented
### Expected behavior
As indicated by `man alacritty`, the `-o` option can be used to override configuration from the `alacritty.toml` config file with the properties specified by `man 5 alacritty`. One would therefore expect that
```alacritty -o 'colors.cursor.cursor="CellBackground"'```
leads to an invisible cursor.
### Actual behavior
Instead, it gives the error
```Unable to override option 'colors.cursor.cursor="CellBackground"': Field 'cursor' does not exist```
and the cursor color is left unchanged.
### Workaround
Inspecting the source code, this appears to stem from the fact that the `colors.cursor` configuration option is an `InvertedCellColors` struct, for which `cursor` is an alias for a property named `background`. Indeed,
```alacritty -o 'colors.cursor.background="CellBackground"'```
yields the desired result. The `-o` overrides should resolve those aliases since only the aliases are used in the documentation.
### Possible cause
Digging a bit deeper, I think the relevant part of the code is the function `derive_deserialize` in `alacritty/alacritty_config_derive/src/config_deserialize/de_struct.rs` which implements `SerdeReplace` without using the result of `fields_deserializer` which includes the substitution of aliases.
### System
OS: Linux 6.7.4-arch1-1
Version: alacritty 0.13.1 (fe2a3c56)
Compositor: sway
| The problem is [here](https://github.com/alacritty/alacritty/blob/master/alacritty_config_derive/src/serde_replace.rs#L51).
This is what's generated when I look with `cargo expand`:
```rust
#[allow(clippy::extra_unused_lifetimes)]
impl<'de> alacritty_config::SerdeReplace for InvertedCellColors {
fn replace(
&mut self,
value: toml::Value,
) -> Result<(), Box<dyn std::error::Error>> {
match value.as_table() {
Some(table) => {
for (field, next_value) in table {
let next_value = next_value.clone();
let value = value.clone();
match field.as_str() {
"foreground" => {
alacritty_config::SerdeReplace::replace(
&mut self.foreground,
next_value,
)?
}
"background" => {
alacritty_config::SerdeReplace::replace(
&mut self.background,
next_value,
)?
}
_ => {
let error = {
let res = ::alloc::fmt::format(
format_args!("Field \"{0}\" does not exist", field),
);
res
};
return Err(error.into());
}
}
}
}
None => *self = serde::Deserialize::deserialize(value)?,
}
Ok(())
}
}
```
The `alias=cursor` is not used in the replace | 2024-02-14T03:46:11 | 1.72 | 41d2f1df4509148a6f1ffb3de59c2389a3a93283 | [
"replace_derive_using_alias",
"replace_derive_using_multiple_aliases",
"replace_flatten_using_alias"
] | [
"replace_derive",
"field_replacement",
"replace_flatten",
"config_deserialize"
] | [] | [] | 4 |
alacritty/alacritty | 7,204 | alacritty__alacritty-7204 | [
"7097"
] | 77aa9f42bac4377efe26512d71098d21b9b547fd | diff --git a/CHANGELOG.md b/CHANGELOG.md
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -49,6 +49,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
- Cut off wide characters in preedit string
- Scrolling on touchscreens
- Double clicking on CSD titlebar not always maximizing a window on Wayland
+- Excessive memory usage when using regexes with a large number of possible states
### Removed
diff --git a/alacritty/src/config/ui_config.rs b/alacritty/src/config/ui_config.rs
--- a/alacritty/src/config/ui_config.rs
+++ b/alacritty/src/config/ui_config.rs
@@ -485,7 +485,7 @@ impl LazyRegex {
/// Execute a function with the compiled regex DFAs as parameter.
pub fn with_compiled<T, F>(&self, f: F) -> Option<T>
where
- F: FnMut(&RegexSearch) -> T,
+ F: FnMut(&mut RegexSearch) -> T,
{
self.0.borrow_mut().compiled().map(f)
}
diff --git a/alacritty/src/config/ui_config.rs b/alacritty/src/config/ui_config.rs
--- a/alacritty/src/config/ui_config.rs
+++ b/alacritty/src/config/ui_config.rs
@@ -514,7 +514,7 @@ impl LazyRegexVariant {
///
/// If the regex is not already compiled, this will compile the DFAs and store them for future
/// access.
- fn compiled(&mut self) -> Option<&RegexSearch> {
+ fn compiled(&mut self) -> Option<&mut RegexSearch> {
// Check if the regex has already been compiled.
let regex = match self {
Self::Compiled(regex_search) => return Some(regex_search),
diff --git a/alacritty/src/display/content.rs b/alacritty/src/display/content.rs
--- a/alacritty/src/display/content.rs
+++ b/alacritty/src/display/content.rs
@@ -41,7 +41,7 @@ impl<'a> RenderableContent<'a> {
config: &'a UiConfig,
display: &'a mut Display,
term: &'a Term<T>,
- search_state: &'a SearchState,
+ search_state: &'a mut SearchState,
) -> Self {
let search = search_state.dfas().map(|dfas| HintMatches::visible_regex_matches(term, dfas));
let focused_match = search_state.focused_match();
diff --git a/alacritty/src/display/content.rs b/alacritty/src/display/content.rs
--- a/alacritty/src/display/content.rs
+++ b/alacritty/src/display/content.rs
@@ -486,7 +486,7 @@ impl<'a> HintMatches<'a> {
}
/// Create from regex matches on term visable part.
- fn visible_regex_matches<T>(term: &Term<T>, dfas: &RegexSearch) -> Self {
+ fn visible_regex_matches<T>(term: &Term<T>, dfas: &mut RegexSearch) -> Self {
let matches = hint::visible_regex_match_iter(term, dfas).collect::<Vec<_>>();
Self::new(matches)
}
diff --git a/alacritty/src/display/hint.rs b/alacritty/src/display/hint.rs
--- a/alacritty/src/display/hint.rs
+++ b/alacritty/src/display/hint.rs
@@ -90,7 +90,8 @@ impl HintState {
// Apply post-processing and search for sub-matches if necessary.
if hint.post_processing {
- self.matches.extend(matches.flat_map(|rm| {
+ let mut matches = matches.collect::<Vec<_>>();
+ self.matches.extend(matches.drain(..).flat_map(|rm| {
HintPostProcessor::new(term, regex, rm).collect::<Vec<_>>()
}));
} else {
diff --git a/alacritty/src/display/hint.rs b/alacritty/src/display/hint.rs
--- a/alacritty/src/display/hint.rs
+++ b/alacritty/src/display/hint.rs
@@ -289,7 +290,7 @@ impl HintLabels {
/// Iterate over all visible regex matches.
pub fn visible_regex_match_iter<'a, T>(
term: &'a Term<T>,
- regex: &'a RegexSearch,
+ regex: &'a mut RegexSearch,
) -> impl Iterator<Item = Match> + 'a {
let viewport_start = Line(-(term.grid().display_offset() as i32));
let viewport_end = viewport_start + term.bottommost_line();
diff --git a/alacritty/src/display/hint.rs b/alacritty/src/display/hint.rs
--- a/alacritty/src/display/hint.rs
+++ b/alacritty/src/display/hint.rs
@@ -344,7 +345,7 @@ pub fn visible_unique_hyperlinks_iter<T>(term: &Term<T>) -> impl Iterator<Item =
fn regex_match_at<T>(
term: &Term<T>,
point: Point,
- regex: &RegexSearch,
+ regex: &mut RegexSearch,
post_processing: bool,
) -> Option<Match> {
let regex_match = visible_regex_match_iter(term, regex).find(|rm| rm.contains(&point))?;
diff --git a/alacritty/src/display/hint.rs b/alacritty/src/display/hint.rs
--- a/alacritty/src/display/hint.rs
+++ b/alacritty/src/display/hint.rs
@@ -450,7 +451,7 @@ fn hyperlink_at<T>(term: &Term<T>, point: Point) -> Option<(Hyperlink, Match)> {
/// Iterator over all post-processed matches inside an existing hint match.
struct HintPostProcessor<'a, T> {
/// Regex search DFAs.
- regex: &'a RegexSearch,
+ regex: &'a mut RegexSearch,
/// Terminal reference.
term: &'a Term<T>,
diff --git a/alacritty/src/display/hint.rs b/alacritty/src/display/hint.rs
--- a/alacritty/src/display/hint.rs
+++ b/alacritty/src/display/hint.rs
@@ -467,7 +468,7 @@ struct HintPostProcessor<'a, T> {
impl<'a, T> HintPostProcessor<'a, T> {
/// Create a new iterator for an unprocessed match.
- fn new(term: &'a Term<T>, regex: &'a RegexSearch, regex_match: Match) -> Self {
+ fn new(term: &'a Term<T>, regex: &'a mut RegexSearch, regex_match: Match) -> Self {
let mut post_processor = Self {
next_match: None,
start: *regex_match.start(),
diff --git a/alacritty/src/display/mod.rs b/alacritty/src/display/mod.rs
--- a/alacritty/src/display/mod.rs
+++ b/alacritty/src/display/mod.rs
@@ -759,7 +759,7 @@ impl Display {
scheduler: &mut Scheduler,
message_buffer: &MessageBuffer,
config: &UiConfig,
- search_state: &SearchState,
+ search_state: &mut SearchState,
) {
// Collect renderable content before the terminal is dropped.
let mut content = RenderableContent::new(config, self, &terminal, search_state);
diff --git a/alacritty/src/event.rs b/alacritty/src/event.rs
--- a/alacritty/src/event.rs
+++ b/alacritty/src/event.rs
@@ -154,8 +154,8 @@ impl SearchState {
}
/// Active search dfas.
- pub fn dfas(&self) -> Option<&RegexSearch> {
- self.dfas.as_ref()
+ pub fn dfas(&mut self) -> Option<&mut RegexSearch> {
+ self.dfas.as_mut()
}
/// Search regex text if a search is active.
diff --git a/alacritty/src/event.rs b/alacritty/src/event.rs
--- a/alacritty/src/event.rs
+++ b/alacritty/src/event.rs
@@ -637,7 +637,7 @@ impl<'a, N: Notify + 'a, T: EventListener> input::ActionContext<T> for ActionCon
fn search_next(&mut self, origin: Point, direction: Direction, side: Side) -> Option<Match> {
self.search_state
.dfas
- .as_ref()
+ .as_mut()
.and_then(|dfas| self.terminal.search_next(dfas, origin, direction, side, None))
}
diff --git a/alacritty/src/event.rs b/alacritty/src/event.rs
--- a/alacritty/src/event.rs
+++ b/alacritty/src/event.rs
@@ -913,7 +913,7 @@ impl<'a, N: Notify + 'a, T: EventListener> ActionContext<'a, N, T> {
/// Jump to the first regex match from the search origin.
fn goto_match(&mut self, mut limit: Option<usize>) {
- let dfas = match &self.search_state.dfas {
+ let dfas = match &mut self.search_state.dfas {
Some(dfas) => dfas,
None => return,
};
diff --git a/alacritty/src/window_context.rs b/alacritty/src/window_context.rs
--- a/alacritty/src/window_context.rs
+++ b/alacritty/src/window_context.rs
@@ -398,7 +398,7 @@ impl WindowContext {
scheduler,
&self.message_buffer,
&self.config,
- &self.search_state,
+ &mut self.search_state,
);
}
diff --git a/alacritty_terminal/src/term/search.rs b/alacritty_terminal/src/term/search.rs
--- a/alacritty_terminal/src/term/search.rs
+++ b/alacritty_terminal/src/term/search.rs
@@ -1,10 +1,11 @@
use std::cmp::max;
+use std::error::Error;
use std::mem;
use std::ops::RangeInclusive;
-pub use regex_automata::dfa::dense::BuildError;
-use regex_automata::dfa::dense::{Builder, Config, DFA};
-use regex_automata::dfa::Automaton;
+use log::{debug, warn};
+use regex_automata::hybrid::dfa::{Builder, Cache, Config, DFA};
+pub use regex_automata::hybrid::BuildError;
use regex_automata::nfa::thompson::Config as ThompsonConfig;
use regex_automata::util::syntax::Config as SyntaxConfig;
use regex_automata::{Anchored, Input};
diff --git a/alacritty_terminal/src/term/search.rs b/alacritty_terminal/src/term/search.rs
--- a/alacritty_terminal/src/term/search.rs
+++ b/alacritty_terminal/src/term/search.rs
@@ -17,38 +18,59 @@ use crate::term::Term;
/// Used to match equal brackets, when performing a bracket-pair selection.
const BRACKET_PAIRS: [(char, char); 4] = [('(', ')'), ('[', ']'), ('{', '}'), ('<', '>')];
-/// Maximum DFA size to prevent pathological regexes taking down the entire system.
-const MAX_DFA_SIZE: usize = 100_000_000;
-
pub type Match = RangeInclusive<Point>;
/// Terminal regex search state.
#[derive(Clone, Debug)]
pub struct RegexSearch {
- dfa: DFA<Vec<u32>>,
- rdfa: DFA<Vec<u32>>,
+ fdfa: LazyDfa,
+ rdfa: LazyDfa,
}
impl RegexSearch {
/// Build the forward and backward search DFAs.
pub fn new(search: &str) -> Result<RegexSearch, Box<BuildError>> {
// Setup configs for both DFA directions.
+ //
+ // Bounds are based on Regex's meta engine:
+ // https://github.com/rust-lang/regex/blob/061ee815ef2c44101dba7b0b124600fcb03c1912/regex-automata/src/meta/wrappers.rs#L581-L599
let has_uppercase = search.chars().any(|c| c.is_uppercase());
let syntax_config = SyntaxConfig::new().case_insensitive(!has_uppercase);
- let config = Config::new().dfa_size_limit(Some(MAX_DFA_SIZE));
+ let config =
+ Config::new().minimum_cache_clear_count(Some(3)).minimum_bytes_per_state(Some(10));
+ let max_size = config.get_cache_capacity();
+ let mut thompson_config = ThompsonConfig::new().nfa_size_limit(Some(max_size));
// Create Regex DFA for left-to-right search.
- let dfa = Builder::new().configure(config.clone()).syntax(syntax_config).build(search)?;
+ let fdfa = Builder::new()
+ .configure(config.clone())
+ .syntax(syntax_config)
+ .thompson(thompson_config.clone())
+ .build(search)?;
// Create Regex DFA for right-to-left search.
- let thompson_config = ThompsonConfig::new().reverse(true);
+ thompson_config = thompson_config.reverse(true);
let rdfa = Builder::new()
.configure(config)
.syntax(syntax_config)
.thompson(thompson_config)
.build(search)?;
- Ok(RegexSearch { dfa, rdfa })
+ Ok(RegexSearch { fdfa: fdfa.into(), rdfa: rdfa.into() })
+ }
+}
+
+/// Runtime-evaluated DFA.
+#[derive(Clone, Debug)]
+struct LazyDfa {
+ dfa: DFA,
+ cache: Cache,
+}
+
+impl From<DFA> for LazyDfa {
+ fn from(dfa: DFA) -> Self {
+ let cache = dfa.create_cache();
+ Self { dfa, cache }
}
}
diff --git a/alacritty_terminal/src/term/search.rs b/alacritty_terminal/src/term/search.rs
--- a/alacritty_terminal/src/term/search.rs
+++ b/alacritty_terminal/src/term/search.rs
@@ -56,7 +78,7 @@ impl<T> Term<T> {
/// Get next search match in the specified direction.
pub fn search_next(
&self,
- regex: &RegexSearch,
+ regex: &mut RegexSearch,
mut origin: Point,
direction: Direction,
side: Side,
diff --git a/alacritty_terminal/src/term/search.rs b/alacritty_terminal/src/term/search.rs
--- a/alacritty_terminal/src/term/search.rs
+++ b/alacritty_terminal/src/term/search.rs
@@ -75,7 +97,7 @@ impl<T> Term<T> {
/// Find the next match to the right of the origin.
fn next_match_right(
&self,
- regex: &RegexSearch,
+ regex: &mut RegexSearch,
origin: Point,
side: Side,
max_lines: Option<usize>,
diff --git a/alacritty_terminal/src/term/search.rs b/alacritty_terminal/src/term/search.rs
--- a/alacritty_terminal/src/term/search.rs
+++ b/alacritty_terminal/src/term/search.rs
@@ -114,7 +136,7 @@ impl<T> Term<T> {
/// Find the next match to the left of the origin.
fn next_match_left(
&self,
- regex: &RegexSearch,
+ regex: &mut RegexSearch,
origin: Point,
side: Side,
max_lines: Option<usize>,
diff --git a/alacritty_terminal/src/term/search.rs b/alacritty_terminal/src/term/search.rs
--- a/alacritty_terminal/src/term/search.rs
+++ b/alacritty_terminal/src/term/search.rs
@@ -163,14 +185,14 @@ impl<T> Term<T> {
/// The origin is always included in the regex.
pub fn regex_search_left(
&self,
- regex: &RegexSearch,
+ regex: &mut RegexSearch,
start: Point,
end: Point,
) -> Option<Match> {
// Find start and end of match.
- let match_start = self.regex_search(start, end, Direction::Left, false, ®ex.rdfa)?;
+ let match_start = self.regex_search(start, end, Direction::Left, false, &mut regex.rdfa)?;
let match_end =
- self.regex_search(match_start, start, Direction::Right, true, ®ex.dfa)?;
+ self.regex_search(match_start, start, Direction::Right, true, &mut regex.fdfa)?;
Some(match_start..=match_end)
}
diff --git a/alacritty_terminal/src/term/search.rs b/alacritty_terminal/src/term/search.rs
--- a/alacritty_terminal/src/term/search.rs
+++ b/alacritty_terminal/src/term/search.rs
@@ -180,14 +202,14 @@ impl<T> Term<T> {
/// The origin is always included in the regex.
pub fn regex_search_right(
&self,
- regex: &RegexSearch,
+ regex: &mut RegexSearch,
start: Point,
end: Point,
) -> Option<Match> {
// Find start and end of match.
- let match_end = self.regex_search(start, end, Direction::Right, false, ®ex.dfa)?;
+ let match_end = self.regex_search(start, end, Direction::Right, false, &mut regex.fdfa)?;
let match_start =
- self.regex_search(match_end, start, Direction::Left, true, ®ex.rdfa)?;
+ self.regex_search(match_end, start, Direction::Left, true, &mut regex.rdfa)?;
Some(match_start..=match_end)
}
diff --git a/alacritty_terminal/src/term/search.rs b/alacritty_terminal/src/term/search.rs
--- a/alacritty_terminal/src/term/search.rs
+++ b/alacritty_terminal/src/term/search.rs
@@ -201,8 +223,29 @@ impl<T> Term<T> {
end: Point,
direction: Direction,
anchored: bool,
- regex: &impl Automaton,
+ regex: &mut LazyDfa,
) -> Option<Point> {
+ match self.regex_search_internal(start, end, direction, anchored, regex) {
+ Ok(regex_match) => regex_match,
+ Err(err) => {
+ warn!("Regex exceeded complexity limit");
+ debug!(" {err}");
+ None
+ },
+ }
+ }
+
+ /// Find the next regex match.
+ ///
+ /// To automatically log regex complexity errors, use [`Self::regex_search`] instead.
+ fn regex_search_internal(
+ &self,
+ start: Point,
+ end: Point,
+ direction: Direction,
+ anchored: bool,
+ regex: &mut LazyDfa,
+ ) -> Result<Option<Point>, Box<dyn Error>> {
let topmost_line = self.topmost_line();
let screen_lines = self.screen_lines() as i32;
let last_column = self.last_column();
diff --git a/alacritty_terminal/src/term/search.rs b/alacritty_terminal/src/term/search.rs
--- a/alacritty_terminal/src/term/search.rs
+++ b/alacritty_terminal/src/term/search.rs
@@ -216,8 +259,7 @@ impl<T> Term<T> {
// Get start state for the DFA.
let regex_anchored = if anchored { Anchored::Yes } else { Anchored::No };
let input = Input::new(&[]).anchored(regex_anchored);
- let start_state = regex.start_state_forward(&input).unwrap();
- let mut state = start_state;
+ let mut state = regex.dfa.start_state_forward(&mut regex.cache, &input).unwrap();
let mut iter = self.grid.iter_from(start);
let mut last_wrapped = false;
diff --git a/alacritty_terminal/src/term/search.rs b/alacritty_terminal/src/term/search.rs
--- a/alacritty_terminal/src/term/search.rs
+++ b/alacritty_terminal/src/term/search.rs
@@ -244,19 +286,18 @@ impl<T> Term<T> {
Direction::Left => buf[utf8_len - i - 1],
};
- // Since we get the state from the DFA, it doesn't need to be checked.
- state = unsafe { regex.next_state_unchecked(state, byte) };
+ state = regex.dfa.next_state(&mut regex.cache, state, byte)?;
// Matches require one additional BYTE of lookahead, so we check the match state for
// the first byte of every new character to determine if the last character was a
// match.
- if i == 0 && regex.is_match_state(state) {
+ if i == 0 && state.is_match() {
regex_match = Some(last_point);
}
}
// Abort on dead states.
- if regex.is_dead_state(state) {
+ if state.is_dead() {
break;
}
diff --git a/alacritty_terminal/src/term/search.rs b/alacritty_terminal/src/term/search.rs
--- a/alacritty_terminal/src/term/search.rs
+++ b/alacritty_terminal/src/term/search.rs
@@ -264,8 +305,8 @@ impl<T> Term<T> {
if point == end || done {
// When reaching the end-of-input, we need to notify the parser that no look-ahead
// is possible and check if the current state is still a match.
- state = regex.next_eoi_state(state);
- if regex.is_match_state(state) {
+ state = regex.dfa.next_eoi_state(&mut regex.cache, state)?;
+ if state.is_match() {
regex_match = Some(point);
}
diff --git a/alacritty_terminal/src/term/search.rs b/alacritty_terminal/src/term/search.rs
--- a/alacritty_terminal/src/term/search.rs
+++ b/alacritty_terminal/src/term/search.rs
@@ -303,12 +344,12 @@ impl<T> Term<T> {
None => {
// When reaching the end-of-input, we need to notify the parser that no
// look-ahead is possible and check if the current state is still a match.
- state = regex.next_eoi_state(state);
- if regex.is_match_state(state) {
+ state = regex.dfa.next_eoi_state(&mut regex.cache, state)?;
+ if state.is_match() {
regex_match = Some(last_point);
}
- state = start_state;
+ state = regex.dfa.start_state_forward(&mut regex.cache, &input)?;
},
}
}
diff --git a/alacritty_terminal/src/term/search.rs b/alacritty_terminal/src/term/search.rs
--- a/alacritty_terminal/src/term/search.rs
+++ b/alacritty_terminal/src/term/search.rs
@@ -316,7 +357,7 @@ impl<T> Term<T> {
last_wrapped = wrapped;
}
- regex_match
+ Ok(regex_match)
}
/// Advance a grid iterator over fullwidth characters.
diff --git a/alacritty_terminal/src/term/search.rs b/alacritty_terminal/src/term/search.rs
--- a/alacritty_terminal/src/term/search.rs
+++ b/alacritty_terminal/src/term/search.rs
@@ -478,7 +519,7 @@ pub struct RegexIter<'a, T> {
point: Point,
end: Point,
direction: Direction,
- regex: &'a RegexSearch,
+ regex: &'a mut RegexSearch,
term: &'a Term<T>,
done: bool,
}
diff --git a/alacritty_terminal/src/term/search.rs b/alacritty_terminal/src/term/search.rs
--- a/alacritty_terminal/src/term/search.rs
+++ b/alacritty_terminal/src/term/search.rs
@@ -489,7 +530,7 @@ impl<'a, T> RegexIter<'a, T> {
end: Point,
direction: Direction,
term: &'a Term<T>,
- regex: &'a RegexSearch,
+ regex: &'a mut RegexSearch,
) -> Self {
Self { point: start, done: false, end, direction, term, regex }
}
diff --git a/alacritty_terminal/src/term/search.rs b/alacritty_terminal/src/term/search.rs
--- a/alacritty_terminal/src/term/search.rs
+++ b/alacritty_terminal/src/term/search.rs
@@ -505,7 +546,7 @@ impl<'a, T> RegexIter<'a, T> {
}
/// Get the next match in the specified direction.
- fn next_match(&self) -> Option<Match> {
+ fn next_match(&mut self) -> Option<Match> {
match self.direction {
Direction::Right => self.term.regex_search_right(self.regex, self.point, self.end),
Direction::Left => self.term.regex_search_left(self.regex, self.point, self.end),
| diff --git a/alacritty/src/config/ui_config.rs b/alacritty/src/config/ui_config.rs
--- a/alacritty/src/config/ui_config.rs
+++ b/alacritty/src/config/ui_config.rs
@@ -578,8 +578,8 @@ mod tests {
"ftp://ftp.example.org",
] {
let term = mock_term(regular_url);
- let regex = RegexSearch::new(URL_REGEX).unwrap();
- let matches = visible_regex_match_iter(&term, ®ex).collect::<Vec<_>>();
+ let mut regex = RegexSearch::new(URL_REGEX).unwrap();
+ let matches = visible_regex_match_iter(&term, &mut regex).collect::<Vec<_>>();
assert_eq!(
matches.len(),
1,
diff --git a/alacritty/src/config/ui_config.rs b/alacritty/src/config/ui_config.rs
--- a/alacritty/src/config/ui_config.rs
+++ b/alacritty/src/config/ui_config.rs
@@ -599,8 +599,8 @@ mod tests {
"mailto:",
] {
let term = mock_term(url_like);
- let regex = RegexSearch::new(URL_REGEX).unwrap();
- let matches = visible_regex_match_iter(&term, ®ex).collect::<Vec<_>>();
+ let mut regex = RegexSearch::new(URL_REGEX).unwrap();
+ let matches = visible_regex_match_iter(&term, &mut regex).collect::<Vec<_>>();
assert!(
matches.is_empty(),
"Should not match url in string {url_like}, but instead got: {matches:?}"
diff --git a/alacritty/src/display/hint.rs b/alacritty/src/display/hint.rs
--- a/alacritty/src/display/hint.rs
+++ b/alacritty/src/display/hint.rs
@@ -638,11 +639,11 @@ mod tests {
fn closed_bracket_does_not_result_in_infinite_iterator() {
let term = mock_term(" ) ");
- let search = RegexSearch::new("[^/ ]").unwrap();
+ let mut search = RegexSearch::new("[^/ ]").unwrap();
let count = HintPostProcessor::new(
&term,
- &search,
+ &mut search,
Point::new(Line(0), Column(1))..=Point::new(Line(0), Column(1)),
)
.take(1)
diff --git a/alacritty/src/display/hint.rs b/alacritty/src/display/hint.rs
--- a/alacritty/src/display/hint.rs
+++ b/alacritty/src/display/hint.rs
@@ -694,9 +695,9 @@ mod tests {
// The Term returned from this call will have a viewport starting at 0 and ending at 4096.
// That's good enough for this test, since it only cares about visible content.
let term = mock_term(&content);
- let regex = RegexSearch::new("match!").unwrap();
+ let mut regex = RegexSearch::new("match!").unwrap();
+        // The iterator should match everything in the viewport.
- assert_eq!(visible_regex_match_iter(&term, ®ex).count(), 4096);
+ assert_eq!(visible_regex_match_iter(&term, &mut regex).count(), 4096);
}
}
diff --git a/alacritty_terminal/src/term/search.rs b/alacritty_terminal/src/term/search.rs
--- a/alacritty_terminal/src/term/search.rs
+++ b/alacritty_terminal/src/term/search.rs
@@ -561,12 +602,12 @@ mod tests {
");
// Check regex across wrapped and unwrapped lines.
- let regex = RegexSearch::new("Ala.*123").unwrap();
+ let mut regex = RegexSearch::new("Ala.*123").unwrap();
let start = Point::new(Line(1), Column(0));
let end = Point::new(Line(4), Column(2));
let match_start = Point::new(Line(1), Column(0));
let match_end = Point::new(Line(2), Column(2));
- assert_eq!(term.regex_search_right(®ex, start, end), Some(match_start..=match_end));
+ assert_eq!(term.regex_search_right(&mut regex, start, end), Some(match_start..=match_end));
}
#[test]
diff --git a/alacritty_terminal/src/term/search.rs b/alacritty_terminal/src/term/search.rs
--- a/alacritty_terminal/src/term/search.rs
+++ b/alacritty_terminal/src/term/search.rs
@@ -581,12 +622,12 @@ mod tests {
");
// Check regex across wrapped and unwrapped lines.
- let regex = RegexSearch::new("Ala.*123").unwrap();
+ let mut regex = RegexSearch::new("Ala.*123").unwrap();
let start = Point::new(Line(4), Column(2));
let end = Point::new(Line(1), Column(0));
let match_start = Point::new(Line(1), Column(0));
let match_end = Point::new(Line(2), Column(2));
- assert_eq!(term.regex_search_left(®ex, start, end), Some(match_start..=match_end));
+ assert_eq!(term.regex_search_left(&mut regex, start, end), Some(match_start..=match_end));
}
#[test]
diff --git a/alacritty_terminal/src/term/search.rs b/alacritty_terminal/src/term/search.rs
--- a/alacritty_terminal/src/term/search.rs
+++ b/alacritty_terminal/src/term/search.rs
@@ -598,16 +639,16 @@ mod tests {
");
// Greedy stopped at linebreak.
- let regex = RegexSearch::new("Ala.*critty").unwrap();
+ let mut regex = RegexSearch::new("Ala.*critty").unwrap();
let start = Point::new(Line(0), Column(0));
let end = Point::new(Line(0), Column(25));
- assert_eq!(term.regex_search_right(®ex, start, end), Some(start..=end));
+ assert_eq!(term.regex_search_right(&mut regex, start, end), Some(start..=end));
// Greedy stopped at dead state.
- let regex = RegexSearch::new("Ala[^y]*critty").unwrap();
+ let mut regex = RegexSearch::new("Ala[^y]*critty").unwrap();
let start = Point::new(Line(0), Column(0));
let end = Point::new(Line(0), Column(15));
- assert_eq!(term.regex_search_right(®ex, start, end), Some(start..=end));
+ assert_eq!(term.regex_search_right(&mut regex, start, end), Some(start..=end));
}
#[test]
diff --git a/alacritty_terminal/src/term/search.rs b/alacritty_terminal/src/term/search.rs
--- a/alacritty_terminal/src/term/search.rs
+++ b/alacritty_terminal/src/term/search.rs
@@ -619,10 +660,10 @@ mod tests {
third\
");
- let regex = RegexSearch::new("nothing").unwrap();
+ let mut regex = RegexSearch::new("nothing").unwrap();
let start = Point::new(Line(0), Column(0));
let end = Point::new(Line(2), Column(4));
- assert_eq!(term.regex_search_right(®ex, start, end), None);
+ assert_eq!(term.regex_search_right(&mut regex, start, end), None);
}
#[test]
diff --git a/alacritty_terminal/src/term/search.rs b/alacritty_terminal/src/term/search.rs
--- a/alacritty_terminal/src/term/search.rs
+++ b/alacritty_terminal/src/term/search.rs
@@ -634,10 +675,10 @@ mod tests {
third\
");
- let regex = RegexSearch::new("nothing").unwrap();
+ let mut regex = RegexSearch::new("nothing").unwrap();
let start = Point::new(Line(2), Column(4));
let end = Point::new(Line(0), Column(0));
- assert_eq!(term.regex_search_left(®ex, start, end), None);
+ assert_eq!(term.regex_search_left(&mut regex, start, end), None);
}
#[test]
diff --git a/alacritty_terminal/src/term/search.rs b/alacritty_terminal/src/term/search.rs
--- a/alacritty_terminal/src/term/search.rs
+++ b/alacritty_terminal/src/term/search.rs
@@ -649,12 +690,12 @@ mod tests {
");
// Make sure the cell containing the linebreak is not skipped.
- let regex = RegexSearch::new("te.*123").unwrap();
+ let mut regex = RegexSearch::new("te.*123").unwrap();
let start = Point::new(Line(1), Column(0));
let end = Point::new(Line(0), Column(0));
let match_start = Point::new(Line(0), Column(0));
let match_end = Point::new(Line(0), Column(9));
- assert_eq!(term.regex_search_left(®ex, start, end), Some(match_start..=match_end));
+ assert_eq!(term.regex_search_left(&mut regex, start, end), Some(match_start..=match_end));
}
#[test]
diff --git a/alacritty_terminal/src/term/search.rs b/alacritty_terminal/src/term/search.rs
--- a/alacritty_terminal/src/term/search.rs
+++ b/alacritty_terminal/src/term/search.rs
@@ -666,11 +707,11 @@ mod tests {
");
// Make sure the cell containing the linebreak is not skipped.
- let regex = RegexSearch::new("te.*123").unwrap();
+ let mut regex = RegexSearch::new("te.*123").unwrap();
let start = Point::new(Line(0), Column(2));
let end = Point::new(Line(1), Column(9));
let match_start = Point::new(Line(1), Column(0));
- assert_eq!(term.regex_search_right(®ex, start, end), Some(match_start..=end));
+ assert_eq!(term.regex_search_right(&mut regex, start, end), Some(match_start..=end));
}
#[test]
diff --git a/alacritty_terminal/src/term/search.rs b/alacritty_terminal/src/term/search.rs
--- a/alacritty_terminal/src/term/search.rs
+++ b/alacritty_terminal/src/term/search.rs
@@ -678,10 +719,10 @@ mod tests {
let term = mock_term("alacritty");
// Make sure dead state cell is skipped when reversing.
- let regex = RegexSearch::new("alacrit").unwrap();
+ let mut regex = RegexSearch::new("alacrit").unwrap();
let start = Point::new(Line(0), Column(0));
let end = Point::new(Line(0), Column(6));
- assert_eq!(term.regex_search_right(®ex, start, end), Some(start..=end));
+ assert_eq!(term.regex_search_right(&mut regex, start, end), Some(start..=end));
}
#[test]
diff --git a/alacritty_terminal/src/term/search.rs b/alacritty_terminal/src/term/search.rs
--- a/alacritty_terminal/src/term/search.rs
+++ b/alacritty_terminal/src/term/search.rs
@@ -689,68 +730,68 @@ mod tests {
let term = mock_term("zooo lense");
// Make sure the reverse DFA operates the same as a forward DFA.
- let regex = RegexSearch::new("zoo").unwrap();
+ let mut regex = RegexSearch::new("zoo").unwrap();
let start = Point::new(Line(0), Column(9));
let end = Point::new(Line(0), Column(0));
let match_start = Point::new(Line(0), Column(0));
let match_end = Point::new(Line(0), Column(2));
- assert_eq!(term.regex_search_left(®ex, start, end), Some(match_start..=match_end));
+ assert_eq!(term.regex_search_left(&mut regex, start, end), Some(match_start..=match_end));
}
#[test]
fn multibyte_unicode() {
let term = mock_term("testвосибing");
- let regex = RegexSearch::new("te.*ing").unwrap();
+ let mut regex = RegexSearch::new("te.*ing").unwrap();
let start = Point::new(Line(0), Column(0));
let end = Point::new(Line(0), Column(11));
- assert_eq!(term.regex_search_right(®ex, start, end), Some(start..=end));
+ assert_eq!(term.regex_search_right(&mut regex, start, end), Some(start..=end));
- let regex = RegexSearch::new("te.*ing").unwrap();
+ let mut regex = RegexSearch::new("te.*ing").unwrap();
let start = Point::new(Line(0), Column(11));
let end = Point::new(Line(0), Column(0));
- assert_eq!(term.regex_search_left(®ex, start, end), Some(end..=start));
+ assert_eq!(term.regex_search_left(&mut regex, start, end), Some(end..=start));
}
#[test]
fn end_on_multibyte_unicode() {
let term = mock_term("testвосиб");
- let regex = RegexSearch::new("te.*и").unwrap();
+ let mut regex = RegexSearch::new("te.*и").unwrap();
let start = Point::new(Line(0), Column(0));
let end = Point::new(Line(0), Column(8));
let match_end = Point::new(Line(0), Column(7));
- assert_eq!(term.regex_search_right(®ex, start, end), Some(start..=match_end));
+ assert_eq!(term.regex_search_right(&mut regex, start, end), Some(start..=match_end));
}
#[test]
fn fullwidth() {
let term = mock_term("a🦇x🦇");
- let regex = RegexSearch::new("[^ ]*").unwrap();
+ let mut regex = RegexSearch::new("[^ ]*").unwrap();
let start = Point::new(Line(0), Column(0));
let end = Point::new(Line(0), Column(5));
- assert_eq!(term.regex_search_right(®ex, start, end), Some(start..=end));
+ assert_eq!(term.regex_search_right(&mut regex, start, end), Some(start..=end));
- let regex = RegexSearch::new("[^ ]*").unwrap();
+ let mut regex = RegexSearch::new("[^ ]*").unwrap();
let start = Point::new(Line(0), Column(5));
let end = Point::new(Line(0), Column(0));
- assert_eq!(term.regex_search_left(®ex, start, end), Some(end..=start));
+ assert_eq!(term.regex_search_left(&mut regex, start, end), Some(end..=start));
}
#[test]
fn singlecell_fullwidth() {
let term = mock_term("🦇");
- let regex = RegexSearch::new("🦇").unwrap();
+ let mut regex = RegexSearch::new("🦇").unwrap();
let start = Point::new(Line(0), Column(0));
let end = Point::new(Line(0), Column(1));
- assert_eq!(term.regex_search_right(®ex, start, end), Some(start..=end));
+ assert_eq!(term.regex_search_right(&mut regex, start, end), Some(start..=end));
- let regex = RegexSearch::new("🦇").unwrap();
+ let mut regex = RegexSearch::new("🦇").unwrap();
let start = Point::new(Line(0), Column(1));
let end = Point::new(Line(0), Column(0));
- assert_eq!(term.regex_search_left(®ex, start, end), Some(end..=start));
+ assert_eq!(term.regex_search_left(&mut regex, start, end), Some(end..=start));
}
#[test]
diff --git a/alacritty_terminal/src/term/search.rs b/alacritty_terminal/src/term/search.rs
--- a/alacritty_terminal/src/term/search.rs
+++ b/alacritty_terminal/src/term/search.rs
@@ -761,16 +802,16 @@ mod tests {
let end = Point::new(Line(0), Column(4));
// Ensure ending without a match doesn't loop indefinitely.
- let regex = RegexSearch::new("x").unwrap();
- assert_eq!(term.regex_search_right(®ex, start, end), None);
+ let mut regex = RegexSearch::new("x").unwrap();
+ assert_eq!(term.regex_search_right(&mut regex, start, end), None);
- let regex = RegexSearch::new("x").unwrap();
+ let mut regex = RegexSearch::new("x").unwrap();
let match_end = Point::new(Line(0), Column(5));
- assert_eq!(term.regex_search_right(®ex, start, match_end), None);
+ assert_eq!(term.regex_search_right(&mut regex, start, match_end), None);
// Ensure match is captured when only partially inside range.
- let regex = RegexSearch::new("jarr🦇").unwrap();
- assert_eq!(term.regex_search_right(®ex, start, end), Some(start..=match_end));
+ let mut regex = RegexSearch::new("jarr🦇").unwrap();
+ assert_eq!(term.regex_search_right(&mut regex, start, end), Some(start..=match_end));
}
#[test]
diff --git a/alacritty_terminal/src/term/search.rs b/alacritty_terminal/src/term/search.rs
--- a/alacritty_terminal/src/term/search.rs
+++ b/alacritty_terminal/src/term/search.rs
@@ -781,17 +822,17 @@ mod tests {
xxx\
");
- let regex = RegexSearch::new("xxx").unwrap();
+ let mut regex = RegexSearch::new("xxx").unwrap();
let start = Point::new(Line(0), Column(2));
let end = Point::new(Line(1), Column(2));
let match_start = Point::new(Line(1), Column(0));
- assert_eq!(term.regex_search_right(®ex, start, end), Some(match_start..=end));
+ assert_eq!(term.regex_search_right(&mut regex, start, end), Some(match_start..=end));
- let regex = RegexSearch::new("xxx").unwrap();
+ let mut regex = RegexSearch::new("xxx").unwrap();
let start = Point::new(Line(1), Column(0));
let end = Point::new(Line(0), Column(0));
let match_end = Point::new(Line(0), Column(2));
- assert_eq!(term.regex_search_left(®ex, start, end), Some(end..=match_end));
+ assert_eq!(term.regex_search_left(&mut regex, start, end), Some(end..=match_end));
}
#[test]
diff --git a/alacritty_terminal/src/term/search.rs b/alacritty_terminal/src/term/search.rs
--- a/alacritty_terminal/src/term/search.rs
+++ b/alacritty_terminal/src/term/search.rs
@@ -802,19 +843,19 @@ mod tests {
xx🦇\
");
- let regex = RegexSearch::new("🦇x").unwrap();
+ let mut regex = RegexSearch::new("🦇x").unwrap();
let start = Point::new(Line(0), Column(0));
let end = Point::new(Line(1), Column(3));
let match_start = Point::new(Line(0), Column(0));
let match_end = Point::new(Line(0), Column(2));
- assert_eq!(term.regex_search_right(®ex, start, end), Some(match_start..=match_end));
+ assert_eq!(term.regex_search_right(&mut regex, start, end), Some(match_start..=match_end));
- let regex = RegexSearch::new("x🦇").unwrap();
+ let mut regex = RegexSearch::new("x🦇").unwrap();
let start = Point::new(Line(1), Column(2));
let end = Point::new(Line(0), Column(0));
let match_start = Point::new(Line(1), Column(1));
let match_end = Point::new(Line(1), Column(3));
- assert_eq!(term.regex_search_left(®ex, start, end), Some(match_start..=match_end));
+ assert_eq!(term.regex_search_left(&mut regex, start, end), Some(match_start..=match_end));
}
#[test]
diff --git a/alacritty_terminal/src/term/search.rs b/alacritty_terminal/src/term/search.rs
--- a/alacritty_terminal/src/term/search.rs
+++ b/alacritty_terminal/src/term/search.rs
@@ -826,33 +867,33 @@ mod tests {
");
term.grid[Line(0)][Column(3)].flags.insert(Flags::LEADING_WIDE_CHAR_SPACER);
- let regex = RegexSearch::new("🦇x").unwrap();
+ let mut regex = RegexSearch::new("🦇x").unwrap();
let start = Point::new(Line(0), Column(0));
let end = Point::new(Line(1), Column(3));
let match_start = Point::new(Line(0), Column(3));
let match_end = Point::new(Line(1), Column(2));
- assert_eq!(term.regex_search_right(®ex, start, end), Some(match_start..=match_end));
+ assert_eq!(term.regex_search_right(&mut regex, start, end), Some(match_start..=match_end));
- let regex = RegexSearch::new("🦇x").unwrap();
+ let mut regex = RegexSearch::new("🦇x").unwrap();
let start = Point::new(Line(1), Column(3));
let end = Point::new(Line(0), Column(0));
let match_start = Point::new(Line(0), Column(3));
let match_end = Point::new(Line(1), Column(2));
- assert_eq!(term.regex_search_left(®ex, start, end), Some(match_start..=match_end));
+ assert_eq!(term.regex_search_left(&mut regex, start, end), Some(match_start..=match_end));
- let regex = RegexSearch::new("x🦇").unwrap();
+ let mut regex = RegexSearch::new("x🦇").unwrap();
let start = Point::new(Line(0), Column(0));
let end = Point::new(Line(1), Column(3));
let match_start = Point::new(Line(0), Column(2));
let match_end = Point::new(Line(1), Column(1));
- assert_eq!(term.regex_search_right(®ex, start, end), Some(match_start..=match_end));
+ assert_eq!(term.regex_search_right(&mut regex, start, end), Some(match_start..=match_end));
- let regex = RegexSearch::new("x🦇").unwrap();
+ let mut regex = RegexSearch::new("x🦇").unwrap();
let start = Point::new(Line(1), Column(3));
let end = Point::new(Line(0), Column(0));
let match_start = Point::new(Line(0), Column(2));
let match_end = Point::new(Line(1), Column(1));
- assert_eq!(term.regex_search_left(®ex, start, end), Some(match_start..=match_end));
+ assert_eq!(term.regex_search_left(&mut regex, start, end), Some(match_start..=match_end));
}
#[test]
diff --git a/alacritty_terminal/src/term/search.rs b/alacritty_terminal/src/term/search.rs
--- a/alacritty_terminal/src/term/search.rs
+++ b/alacritty_terminal/src/term/search.rs
@@ -863,12 +904,12 @@ mod tests {
term.grid[Line(0)][Column(1)].c = '字';
term.grid[Line(0)][Column(1)].flags = Flags::WIDE_CHAR;
- let regex = RegexSearch::new("test").unwrap();
+ let mut regex = RegexSearch::new("test").unwrap();
let start = Point::new(Line(0), Column(0));
let end = Point::new(Line(0), Column(1));
- let mut iter = RegexIter::new(start, end, Direction::Right, &term, ®ex);
+ let mut iter = RegexIter::new(start, end, Direction::Right, &term, &mut regex);
assert_eq!(iter.next(), None);
}
diff --git a/alacritty_terminal/src/term/search.rs b/alacritty_terminal/src/term/search.rs
--- a/alacritty_terminal/src/term/search.rs
+++ b/alacritty_terminal/src/term/search.rs
@@ -881,19 +922,34 @@ mod tests {
");
// Bottom to top.
- let regex = RegexSearch::new("abc").unwrap();
+ let mut regex = RegexSearch::new("abc").unwrap();
let start = Point::new(Line(1), Column(0));
let end = Point::new(Line(0), Column(2));
let match_start = Point::new(Line(0), Column(0));
let match_end = Point::new(Line(0), Column(2));
- assert_eq!(term.regex_search_right(®ex, start, end), Some(match_start..=match_end));
+ assert_eq!(term.regex_search_right(&mut regex, start, end), Some(match_start..=match_end));
// Top to bottom.
- let regex = RegexSearch::new("def").unwrap();
+ let mut regex = RegexSearch::new("def").unwrap();
let start = Point::new(Line(0), Column(2));
let end = Point::new(Line(1), Column(0));
let match_start = Point::new(Line(1), Column(0));
let match_end = Point::new(Line(1), Column(2));
- assert_eq!(term.regex_search_left(®ex, start, end), Some(match_start..=match_end));
+ assert_eq!(term.regex_search_left(&mut regex, start, end), Some(match_start..=match_end));
+ }
+
+ #[test]
+ fn nfa_compile_error() {
+ assert!(RegexSearch::new("[0-9A-Za-z]{9999999}").is_err());
+ }
+
+ #[test]
+ fn runtime_cache_error() {
+ let term = mock_term(&str::repeat("i", 9999));
+
+ let mut regex = RegexSearch::new("[0-9A-Za-z]{9999}").unwrap();
+ let start = Point::new(Line(0), Column(0));
+ let end = Point::new(Line(0), Column(9999));
+ assert_eq!(term.regex_search_right(&mut regex, start, end), None);
}
}
 | Hint regex with a large number of characters: hang and continuous memory consumption
### Problem
With `Qm[0-9A-Za-z]{44}` as the hint regex pattern, Alacritty hangs upon activating hints and memory consumption starts to grow continuously.
```yml
# ... rest of the config
hints:
enabled:
- regex: 'Qm[0-9A-Za-z]{44}'
action: Copy
binding:
key: U
mods: Control|Shift
```
But there are no issues with the `sha256:([0-9a-f]{64})` pattern.
```yml
# ... rest of the config
hints:
enabled:
- regex: 'sha256:([0-9a-f]{64})'
action: Copy
binding:
key: U
mods: Control|Shift
```
No hangs; hints are highlighted almost immediately.
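The contrast between the two patterns can be checked outside Alacritty. As a small illustrative sketch (using Python's backtracking `re` engine rather than the DFA-based `regex-automata` crate Alacritty uses; variable names are just for illustration), both patterns compile and match instantly in a backtracking engine, which points at the up-front DFA determinization — not the patterns themselves — as the source of the hang:

```python
import re

# Both hint patterns from the report. In a backtracking engine they are
# cheap; the reported hang came from building a full DFA for
# `Qm[0-9A-Za-z]{44}` ahead of time.
hang_pattern = re.compile(r"Qm[0-9A-Za-z]{44}")
ok_pattern = re.compile(r"sha256:([0-9a-f]{64})")

assert hang_pattern.search("Qm" + "a" * 44) is not None
assert ok_pattern.search("sha256:" + "0" * 64) is not None
assert hang_pattern.search("Qm" + "a" * 10) is None  # too short to match
```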
### System
Version: 0.12.2
OS: Ubuntu 20.04
Compositor: compton
WM: xmonad
### Logs
Crashes: NONE
Font/Terminal size:
[screenshot](https://github.com/alacritty/alacritty/assets/29537473/e70f66a4-9ef0-4e26-b012-0c8c985db0b0)
Keyboard and bindings:
After activating hints with the mentioned pattern, Alacritty hangs and stops reporting any events with the `--report-events` option.
 | Going to remove high priority from this since it shouldn't cause complete destruction anymore. But I still think we can probably do better. A lazily evaluated regex might perform more reasonably. | 2023-09-09T05:15:52 | 1.65 | 77aa9f42bac4377efe26512d71098d21b9b547fd | [
"term::search::tests::runtime_cache_error"
] | [
"grid::storage::tests::indexing",
"grid::storage::tests::rotate_wrap_zero",
"grid::storage::tests::rotate",
"grid::storage::tests::shrink_after_zero",
"grid::storage::tests::indexing_above_inner_len - should panic",
"grid::storage::tests::shrink_before_and_after_zero",
"grid::storage::tests::shrink_befo... | [] | [] | 5 |
alacritty/alacritty | 5,870 | alacritty__alacritty-5870 | [
"5840"
] | 8a26dee0a9167777709935789b95758e36885617 | diff --git a/CHANGELOG.md b/CHANGELOG.md
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -5,6 +5,27 @@ The sections should follow the order `Packaging`, `Added`, `Changed`, `Fixed` an
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
+## 0.10.1-rc1
+
+### Added
+
+ - Option `font.builtin_box_drawing` to disable the built-in font for drawing box characters
+
+### Changed
+
+- Builtin font thickness is now based on cell width instead of underline thickness
+
+### Fixed
+
+- OSC 4 not handling `?`
+- `?` in OSC strings reporting default colors instead of modified ones
+- OSC 104 not clearing colors when second parameter is empty
+- Builtin font lines not contiguous when `font.offset` is used
+- `font.glyph_offset` is no longer applied on builtin font
+- Built-in font arcs alignment
+- Repeated permission prompts on M1 macs
+- Colors being slightly off when using `colors.transparent_background_colors`
+
## 0.10.0
### Packaging
diff --git a/Cargo.lock b/Cargo.lock
--- a/Cargo.lock
+++ b/Cargo.lock
@@ -10,7 +10,7 @@ checksum = "aae1277d39aeec15cb388266ecc24b11c80469deae6067e17a1a7aa9e5c1f234"
[[package]]
name = "alacritty"
-version = "0.10.0"
+version = "0.10.1-rc1"
dependencies = [
"alacritty_config_derive",
"alacritty_terminal",
diff --git a/Cargo.lock b/Cargo.lock
--- a/Cargo.lock
+++ b/Cargo.lock
@@ -56,7 +56,7 @@ dependencies = [
[[package]]
name = "alacritty_terminal"
-version = "0.16.0"
+version = "0.16.1-rc1"
dependencies = [
"alacritty_config_derive",
"base64",
diff --git a/Makefile b/Makefile
--- a/Makefile
+++ b/Makefile
@@ -52,6 +52,8 @@ $(APP_NAME)-%: $(TARGET)-%
@cp -fp $(APP_BINARY) $(APP_BINARY_DIR)
@cp -fp $(COMPLETIONS) $(APP_COMPLETIONS_DIR)
@touch -r "$(APP_BINARY)" "$(APP_DIR)/$(APP_NAME)"
+ @codesign --remove-signature "$(APP_DIR)/$(APP_NAME)"
+ @codesign --force --deep --sign - "$(APP_DIR)/$(APP_NAME)"
@echo "Created '$(APP_NAME)' in '$(APP_DIR)'"
dmg: $(DMG_NAME)-native ## Create an Alacritty.dmg
diff --git a/alacritty.yml b/alacritty.yml
--- a/alacritty.yml
+++ b/alacritty.yml
@@ -178,6 +178,13 @@
# it is recommended to set `use_thin_strokes` to `false`.
#use_thin_strokes: true
+ # Use built-in font for box drawing characters.
+ #
+ # If `true`, Alacritty will use a custom built-in font for box drawing
+ # characters (Unicode points 2500 - 259f).
+ #
+ #builtin_box_drawing: true
+
# If `true`, bold text is drawn using the bright color variants.
#draw_bold_text_with_bright_colors: false
diff --git a/alacritty/Cargo.toml b/alacritty/Cargo.toml
--- a/alacritty/Cargo.toml
+++ b/alacritty/Cargo.toml
@@ -1,6 +1,6 @@
[package]
name = "alacritty"
-version = "0.10.0"
+version = "0.10.1-rc1"
authors = ["Christian Duerr <contact@christianduerr.com>", "Joe Wilm <joe@jwilm.com>"]
license = "Apache-2.0"
description = "A fast, cross-platform, OpenGL terminal emulator"
diff --git a/alacritty/Cargo.toml b/alacritty/Cargo.toml
--- a/alacritty/Cargo.toml
+++ b/alacritty/Cargo.toml
@@ -10,7 +10,7 @@ edition = "2018"
[dependencies.alacritty_terminal]
path = "../alacritty_terminal"
-version = "0.16.0"
+version = "0.16.1-rc1"
default-features = false
[dependencies.alacritty_config_derive]
diff --git a/alacritty/res/text.f.glsl b/alacritty/res/text.f.glsl
--- a/alacritty/res/text.f.glsl
+++ b/alacritty/res/text.f.glsl
@@ -18,7 +18,9 @@ void main() {
}
alphaMask = vec4(1.0);
- color = vec4(bg.rgb, bg.a);
+
+ // Premultiply background color by alpha.
+ color = vec4(bg.rgb * bg.a, bg.a);
} else if ((int(fg.a) & COLORED) != 0) {
// Color glyphs, like emojis.
vec4 glyphColor = texture(mask, TexCoords);
diff --git a/alacritty/src/config/font.rs b/alacritty/src/config/font.rs
--- a/alacritty/src/config/font.rs
+++ b/alacritty/src/config/font.rs
@@ -14,7 +14,7 @@ use crate::config::ui_config::Delta;
/// field in this struct. It might be nice in the future to have defaults for
/// each value independently. Alternatively, maybe erroring when the user
/// doesn't provide complete config is Ok.
-#[derive(ConfigDeserialize, Default, Debug, Clone, PartialEq, Eq)]
+#[derive(ConfigDeserialize, Debug, Clone, PartialEq, Eq)]
pub struct Font {
/// Extra spacing per character.
pub offset: Delta<i8>,
diff --git a/alacritty/src/config/font.rs b/alacritty/src/config/font.rs
--- a/alacritty/src/config/font.rs
+++ b/alacritty/src/config/font.rs
@@ -38,6 +38,9 @@ pub struct Font {
/// Font size in points.
size: Size,
+
+ /// Whether to use the built-in font for box drawing characters.
+ pub builtin_box_drawing: bool,
}
impl Font {
diff --git a/alacritty/src/config/font.rs b/alacritty/src/config/font.rs
--- a/alacritty/src/config/font.rs
+++ b/alacritty/src/config/font.rs
@@ -72,6 +75,22 @@ impl Font {
}
}
+impl Default for Font {
+ fn default() -> Font {
+ Self {
+ builtin_box_drawing: true,
+ use_thin_strokes: Default::default(),
+ glyph_offset: Default::default(),
+ bold_italic: Default::default(),
+ italic: Default::default(),
+ offset: Default::default(),
+ normal: Default::default(),
+ bold: Default::default(),
+ size: Default::default(),
+ }
+ }
+}
+
/// Description of the normal font.
#[derive(ConfigDeserialize, Debug, Clone, PartialEq, Eq)]
pub struct FontDescription {
diff --git a/alacritty/src/display/mod.rs b/alacritty/src/display/mod.rs
--- a/alacritty/src/display/mod.rs
+++ b/alacritty/src/display/mod.rs
@@ -12,7 +12,7 @@ use std::{f64, mem};
use glutin::dpi::{PhysicalPosition, PhysicalSize};
use glutin::event::ModifiersState;
use glutin::event_loop::EventLoopWindowTarget;
-#[cfg(not(any(target_os = "macos", windows)))]
+#[cfg(all(feature = "x11", not(any(target_os = "macos", windows))))]
use glutin::platform::unix::EventLoopWindowTargetExtUnix;
use glutin::window::CursorIcon;
use log::{debug, info};
diff --git a/alacritty/src/event.rs b/alacritty/src/event.rs
--- a/alacritty/src/event.rs
+++ b/alacritty/src/event.rs
@@ -1076,8 +1076,9 @@ impl input::Processor<EventProxy, ActionContext<'_, Notifier, EventProxy>> {
self.ctx.write_to_pty(text.into_bytes());
},
TerminalEvent::ColorRequest(index, format) => {
- let text = format(self.ctx.display.colors[index]);
- self.ctx.write_to_pty(text.into_bytes());
+ let color = self.ctx.terminal().colors()[index]
+ .unwrap_or(self.ctx.display.colors[index]);
+ self.ctx.write_to_pty(format(color).into_bytes());
},
TerminalEvent::PtyWrite(text) => self.ctx.write_to_pty(text.into_bytes()),
TerminalEvent::MouseCursorDirty => self.reset_mouse_cursor(),
diff --git a/alacritty/src/input.rs b/alacritty/src/input.rs
--- a/alacritty/src/input.rs
+++ b/alacritty/src/input.rs
@@ -151,7 +151,10 @@ impl<T: EventListener> Execute<T> for Action {
ctx.display().hint_state.start(hint.clone());
ctx.mark_dirty();
},
- Action::ToggleViMode => ctx.toggle_vi_mode(),
+ Action::ToggleViMode => {
+ ctx.on_typing_start();
+ ctx.toggle_vi_mode()
+ },
Action::ViMotion(motion) => {
ctx.on_typing_start();
ctx.terminal_mut().vi_motion(*motion);
diff --git a/alacritty/src/main.rs b/alacritty/src/main.rs
--- a/alacritty/src/main.rs
+++ b/alacritty/src/main.rs
@@ -20,6 +20,8 @@ use std::string::ToString;
use std::{fs, process};
use glutin::event_loop::EventLoop as GlutinEventLoop;
+#[cfg(all(feature = "x11", not(any(target_os = "macos", windows))))]
+use glutin::platform::unix::EventLoopWindowTargetExtUnix;
use log::info;
#[cfg(windows)]
use winapi::um::wincon::{AttachConsole, FreeConsole, ATTACH_PARENT_PROCESS};
diff --git a/alacritty/src/main.rs b/alacritty/src/main.rs
--- a/alacritty/src/main.rs
+++ b/alacritty/src/main.rs
@@ -126,8 +128,6 @@ impl Drop for TemporaryFiles {
/// Creates a window, the terminal state, PTY, I/O event loop, input processor,
/// config change monitor, and runs the main display loop.
fn alacritty(options: Options) -> Result<(), String> {
- info!("Welcome to Alacritty");
-
// Setup glutin event loop.
let window_event_loop = GlutinEventLoop::<Event>::with_user_event();
diff --git a/alacritty/src/main.rs b/alacritty/src/main.rs
--- a/alacritty/src/main.rs
+++ b/alacritty/src/main.rs
@@ -135,6 +135,14 @@ fn alacritty(options: Options) -> Result<(), String> {
let log_file = logging::initialize(&options, window_event_loop.create_proxy())
.expect("Unable to initialize logger");
+ info!("Welcome to Alacritty");
+ info!("Version {}", env!("VERSION"));
+
+ #[cfg(all(feature = "x11", not(any(target_os = "macos", windows))))]
+ info!("Running on {}", if window_event_loop.is_x11() { "X11" } else { "Wayland" });
+ #[cfg(not(any(feature = "x11", target_os = "macos", windows)))]
+ info!("Running on Wayland");
+
// Load configuration file.
let config = config::load(&options);
log_config_path(&config);
diff --git a/alacritty/src/renderer/builtin_font.rs b/alacritty/src/renderer/builtin_font.rs
--- a/alacritty/src/renderer/builtin_font.rs
+++ b/alacritty/src/renderer/builtin_font.rs
@@ -1,10 +1,12 @@
//! Hand-rolled drawing of unicode [box drawing](http://www.unicode.org/charts/PDF/U2500.pdf)
//! and [block elements](https://www.unicode.org/charts/PDF/U2580.pdf).
-use std::{cmp, mem};
+use std::{cmp, mem, ops};
use crossfont::{BitmapBuffer, Metrics, RasterizedGlyph};
+use crate::config::ui_config::Delta;
+
// Colors which are used for filling shade variants.
const COLOR_FILL_ALPHA_STEP_1: Pixel = Pixel { _r: 192, _g: 192, _b: 192 };
const COLOR_FILL_ALPHA_STEP_2: Pixel = Pixel { _r: 128, _g: 128, _b: 128 };
diff --git a/alacritty/src/renderer/builtin_font.rs b/alacritty/src/renderer/builtin_font.rs
--- a/alacritty/src/renderer/builtin_font.rs
+++ b/alacritty/src/renderer/builtin_font.rs
@@ -14,19 +16,33 @@ const COLOR_FILL_ALPHA_STEP_3: Pixel = Pixel { _r: 64, _g: 64, _b: 64 };
const COLOR_FILL: Pixel = Pixel { _r: 255, _g: 255, _b: 255 };
/// Returns the rasterized glyph if the character is part of the built-in font.
-pub fn builtin_glyph(character: char, metrics: &Metrics) -> Option<RasterizedGlyph> {
- match character {
+pub fn builtin_glyph(
+ character: char,
+ metrics: &Metrics,
+ offset: &Delta<i8>,
+ glyph_offset: &Delta<i8>,
+) -> Option<RasterizedGlyph> {
+ let mut glyph = match character {
// Box drawing characters and block elements.
- '\u{2500}'..='\u{259f}' => Some(box_drawing(character, metrics)),
- _ => None,
- }
+ '\u{2500}'..='\u{259f}' => box_drawing(character, metrics, offset),
+ _ => return None,
+ };
+
+ // Since we want to ignore `glyph_offset` for the built-in font, subtract it to compensate its
+ // addition when loading glyphs in the renderer.
+ glyph.left -= glyph_offset.x as i32;
+ glyph.top -= glyph_offset.y as i32;
+
+ Some(glyph)
}
-fn box_drawing(character: char, metrics: &Metrics) -> RasterizedGlyph {
- let height = metrics.line_height as usize;
- let width = metrics.average_advance as usize;
- let stroke_size = cmp::max(metrics.underline_thickness as usize, 1);
- let heavy_stroke_size = stroke_size * 3;
+fn box_drawing(character: char, metrics: &Metrics, offset: &Delta<i8>) -> RasterizedGlyph {
+ let height = (metrics.line_height as i32 + offset.y as i32) as usize;
+ let width = (metrics.average_advance as i32 + offset.x as i32) as usize;
+ // Use one eight of the cell width, since this is used as a step size for block elemenets.
+ let stroke_size = cmp::max((width as f32 / 8.).round() as usize, 1);
+ let heavy_stroke_size = stroke_size * 2;
+
// Certain symbols require larger canvas than the cell itself, since for proper contiguous
// lines they require drawing on neighbour cells. So treat them specially early on and handle
// 'normal' characters later.
diff --git a/alacritty/src/renderer/builtin_font.rs b/alacritty/src/renderer/builtin_font.rs
--- a/alacritty/src/renderer/builtin_font.rs
+++ b/alacritty/src/renderer/builtin_font.rs
@@ -52,7 +68,7 @@ fn box_drawing(character: char, metrics: &Metrics) -> RasterizedGlyph {
let from_x = 0.;
let to_x = x_end + 1.;
- for stroke_size in stroke_size..2 * stroke_size {
+ for stroke_size in 0..2 * stroke_size {
let stroke_size = stroke_size as f32 / 2.;
if character == '\u{2571}' || character == '\u{2573}' {
let h = y_end - stroke_size as f32;
diff --git a/alacritty/src/renderer/builtin_font.rs b/alacritty/src/renderer/builtin_font.rs
--- a/alacritty/src/renderer/builtin_font.rs
+++ b/alacritty/src/renderer/builtin_font.rs
@@ -327,7 +343,9 @@ fn box_drawing(character: char, metrics: &Metrics) -> RasterizedGlyph {
// Mirror `X` axis.
if character == '\u{256d}' || character == '\u{2570}' {
let center = canvas.x_center() as usize;
- let extra_offset = if width % 2 == 0 { 1 } else { 0 };
+
+ let extra_offset = if stroke_size % 2 == width % 2 { 0 } else { 1 };
+
let buffer = canvas.buffer_mut();
for y in 1..height {
let left = (y - 1) * width;
diff --git a/alacritty/src/renderer/builtin_font.rs b/alacritty/src/renderer/builtin_font.rs
--- a/alacritty/src/renderer/builtin_font.rs
+++ b/alacritty/src/renderer/builtin_font.rs
@@ -343,7 +361,9 @@ fn box_drawing(character: char, metrics: &Metrics) -> RasterizedGlyph {
// Mirror `Y` axis.
if character == '\u{256d}' || character == '\u{256e}' {
let center = canvas.y_center() as usize;
- let extra_offset = if height % 2 == 0 { 1 } else { 0 };
+
+ let extra_offset = if stroke_size % 2 == height % 2 { 0 } else { 1 };
+
let buffer = canvas.buffer_mut();
if extra_offset != 0 {
let bottom_row = (height - 1) * width;
diff --git a/alacritty/src/renderer/builtin_font.rs b/alacritty/src/renderer/builtin_font.rs
--- a/alacritty/src/renderer/builtin_font.rs
+++ b/alacritty/src/renderer/builtin_font.rs
@@ -365,7 +385,7 @@ fn box_drawing(character: char, metrics: &Metrics) -> RasterizedGlyph {
'\u{2580}'..='\u{2587}' | '\u{2589}'..='\u{2590}' | '\u{2594}' | '\u{2595}' => {
let width = width as f32;
let height = height as f32;
- let rect_width = match character {
+ let mut rect_width = match character {
'\u{2589}' => width * 7. / 8.,
'\u{258a}' => width * 6. / 8.,
'\u{258b}' => width * 5. / 8.,
diff --git a/alacritty/src/renderer/builtin_font.rs b/alacritty/src/renderer/builtin_font.rs
--- a/alacritty/src/renderer/builtin_font.rs
+++ b/alacritty/src/renderer/builtin_font.rs
@@ -377,7 +397,8 @@ fn box_drawing(character: char, metrics: &Metrics) -> RasterizedGlyph {
'\u{2595}' => width * 1. / 8.,
_ => width,
};
- let (rect_height, y) = match character {
+
+ let (mut rect_height, mut y) = match character {
'\u{2580}' => (height * 4. / 8., height * 8. / 8.),
'\u{2581}' => (height * 1. / 8., height * 1. / 8.),
'\u{2582}' => (height * 2. / 8., height * 2. / 8.),
diff --git a/alacritty/src/renderer/builtin_font.rs b/alacritty/src/renderer/builtin_font.rs
--- a/alacritty/src/renderer/builtin_font.rs
+++ b/alacritty/src/renderer/builtin_font.rs
@@ -389,12 +410,18 @@ fn box_drawing(character: char, metrics: &Metrics) -> RasterizedGlyph {
'\u{2594}' => (height * 1. / 8., height * 8. / 8.),
_ => (height, height),
};
+
// Fix `y` coordinates.
- let y = height - y;
+ y = height - y;
+
+ // Ensure that resulted glyph will be visible and also round sizes instead of straight
+ // flooring them.
+ rect_width = cmp::max(rect_width.round() as i32, 1) as f32;
+ rect_height = cmp::max(rect_height.round() as i32, 1) as f32;
let x = match character {
'\u{2590}' => canvas.x_center(),
- '\u{2595}' => width as f32 - width / 8.,
+ '\u{2595}' => width as f32 - rect_width,
_ => 0.,
};
diff --git a/alacritty/src/renderer/builtin_font.rs b/alacritty/src/renderer/builtin_font.rs
--- a/alacritty/src/renderer/builtin_font.rs
+++ b/alacritty/src/renderer/builtin_font.rs
@@ -469,6 +496,28 @@ impl Pixel {
}
}
+impl ops::Add for Pixel {
+ type Output = Pixel;
+
+ fn add(self, rhs: Pixel) -> Self::Output {
+ let _r = self._r.saturating_add(rhs._r);
+ let _g = self._g.saturating_add(rhs._g);
+ let _b = self._b.saturating_add(rhs._b);
+ Pixel { _r, _g, _b }
+ }
+}
+
+impl ops::Div<u8> for Pixel {
+ type Output = Pixel;
+
+ fn div(self, rhs: u8) -> Self::Output {
+ let _r = self._r / rhs;
+ let _g = self._g / rhs;
+ let _b = self._b / rhs;
+ Pixel { _r, _g, _b }
+ }
+}
+
/// Canvas which is used for simple line drawing operations.
///
/// The coordinate system is the following:
diff --git a/alacritty/src/renderer/builtin_font.rs b/alacritty/src/renderer/builtin_font.rs
--- a/alacritty/src/renderer/builtin_font.rs
+++ b/alacritty/src/renderer/builtin_font.rs
@@ -702,6 +751,24 @@ impl Canvas {
// Ensure the part closer to edges is properly filled.
self.draw_h_line(0., self.y_center(), stroke_size as f32, stroke_size);
self.draw_v_line(self.x_center(), 0., stroke_size as f32, stroke_size);
+
+ // Fill the resulted arc, since it could have gaps in-between.
+ for y in 0..self.height {
+ let row = y * self.width;
+ let left = match self.buffer[row..row + self.width].iter().position(|p| p._r != 0) {
+ Some(left) => row + left,
+ _ => continue,
+ };
+ let right = match self.buffer[row..row + self.width].iter().rposition(|p| p._r != 0) {
+ Some(right) => row + right,
+ _ => continue,
+ };
+
+ for index in left + 1..right {
+ self.buffer[index] =
+ self.buffer[index] + self.buffer[index - 1] / 2 + self.buffer[index + 1] / 2;
+ }
+ }
}
/// Fills the `Canvas` with the given `Color`.
diff --git a/alacritty/src/renderer/mod.rs b/alacritty/src/renderer/mod.rs
--- a/alacritty/src/renderer/mod.rs
+++ b/alacritty/src/renderer/mod.rs
@@ -132,11 +132,17 @@ pub struct GlyphCache {
/// Font size.
font_size: crossfont::Size,
+ /// Font offset.
+ font_offset: Delta<i8>,
+
/// Glyph offset.
glyph_offset: Delta<i8>,
/// Font metrics.
metrics: crossfont::Metrics,
+
+ /// Whether to use the built-in font for box drawing characters.
+ builtin_box_drawing: bool,
}
impl GlyphCache {
diff --git a/alacritty/src/renderer/mod.rs b/alacritty/src/renderer/mod.rs
--- a/alacritty/src/renderer/mod.rs
+++ b/alacritty/src/renderer/mod.rs
@@ -165,8 +171,10 @@ impl GlyphCache {
bold_key: bold,
italic_key: italic,
bold_italic_key: bold_italic,
+ font_offset: font.offset,
glyph_offset: font.glyph_offset,
metrics,
+ builtin_box_drawing: font.builtin_box_drawing,
};
cache.load_common_glyphs(loader);
diff --git a/alacritty/src/renderer/mod.rs b/alacritty/src/renderer/mod.rs
--- a/alacritty/src/renderer/mod.rs
+++ b/alacritty/src/renderer/mod.rs
@@ -268,7 +276,17 @@ impl GlyphCache {
// Rasterize the glyph using the built-in font for special characters or the user's font
// for everything else.
- let rasterized = builtin_font::builtin_glyph(glyph_key.character, &self.metrics)
+ let rasterized = self
+ .builtin_box_drawing
+ .then(|| {
+ builtin_font::builtin_glyph(
+ glyph_key.character,
+ &self.metrics,
+ &self.font_offset,
+ &self.glyph_offset,
+ )
+ })
+ .flatten()
.map_or_else(|| self.rasterizer.get_glyph(glyph_key), Ok);
let glyph = match rasterized {
diff --git a/alacritty/src/renderer/mod.rs b/alacritty/src/renderer/mod.rs
--- a/alacritty/src/renderer/mod.rs
+++ b/alacritty/src/renderer/mod.rs
@@ -335,6 +353,7 @@ impl GlyphCache {
) -> Result<(), crossfont::Error> {
// Update dpi scaling.
self.rasterizer.update_dpr(dpr as f32);
+ self.font_offset = font.offset;
// Recompute font keys.
let (regular, bold, italic, bold_italic) =
diff --git a/alacritty/src/renderer/mod.rs b/alacritty/src/renderer/mod.rs
--- a/alacritty/src/renderer/mod.rs
+++ b/alacritty/src/renderer/mod.rs
@@ -355,6 +374,7 @@ impl GlyphCache {
self.italic_key = italic;
self.bold_italic_key = bold_italic;
self.metrics = metrics;
+ self.builtin_box_drawing = font.builtin_box_drawing;
self.clear_glyph_cache(loader);
diff --git a/alacritty/windows/wix/alacritty.wxs b/alacritty/windows/wix/alacritty.wxs
--- a/alacritty/windows/wix/alacritty.wxs
+++ b/alacritty/windows/wix/alacritty.wxs
@@ -1,6 +1,6 @@
<?xml version="1.0" encoding="windows-1252"?>
<Wix xmlns="http://schemas.microsoft.com/wix/2006/wi" xmlns:util="http://schemas.microsoft.com/wix/UtilExtension">
- <Product Name="Alacritty" Id="*" UpgradeCode="87c21c74-dbd5-4584-89d5-46d9cd0c40a7" Language="1033" Codepage="1252" Version="0.10.0" Manufacturer="Alacritty">
+ <Product Name="Alacritty" Id="*" UpgradeCode="87c21c74-dbd5-4584-89d5-46d9cd0c40a7" Language="1033" Codepage="1252" Version="0.10.1-rc1" Manufacturer="Alacritty">
<Package InstallerVersion="200" Compressed="yes" InstallScope="perMachine"/>
<MajorUpgrade AllowSameVersionUpgrades="yes" DowngradeErrorMessage="A newer version of [ProductName] is already installed."/>
<Icon Id="AlacrittyIco" SourceFile=".\extra\windows\alacritty.ico"/>
diff --git a/alacritty_terminal/Cargo.toml b/alacritty_terminal/Cargo.toml
--- a/alacritty_terminal/Cargo.toml
+++ b/alacritty_terminal/Cargo.toml
@@ -1,6 +1,6 @@
[package]
name = "alacritty_terminal"
-version = "0.16.0"
+version = "0.16.1-rc1"
authors = ["Christian Duerr <contact@christianduerr.com>", "Joe Wilm <joe@jwilm.com>"]
license = "Apache-2.0"
description = "Library for writing terminal emulators"
diff --git a/alacritty_terminal/src/ansi.rs b/alacritty_terminal/src/ansi.rs
--- a/alacritty_terminal/src/ansi.rs
+++ b/alacritty_terminal/src/ansi.rs
@@ -972,17 +972,28 @@ where
// Set color index.
b"4" => {
- if params.len() > 1 && params.len() % 2 != 0 {
- for chunk in params[1..].chunks(2) {
- let index = parse_number(chunk[0]);
- let color = xparse_color(chunk[1]);
- if let (Some(i), Some(c)) = (index, color) {
- self.handler.set_color(i as usize, c);
- return;
- }
+ if params.len() <= 1 || params.len() % 2 == 0 {
+ unhandled(params);
+ return;
+ }
+
+ for chunk in params[1..].chunks(2) {
+ let index = match parse_number(chunk[0]) {
+ Some(index) => index,
+ None => {
+ unhandled(params);
+ continue;
+ },
+ };
+
+ if let Some(c) = xparse_color(chunk[1]) {
+ self.handler.set_color(index as usize, c);
+ } else if chunk[1] == b"?" {
+ self.handler.dynamic_color_sequence(index, index as usize, terminator);
+ } else {
+ unhandled(params);
}
}
- unhandled(params);
},
// Get/set Foreground, Background, Cursor colors.
diff --git a/alacritty_terminal/src/ansi.rs b/alacritty_terminal/src/ansi.rs
--- a/alacritty_terminal/src/ansi.rs
+++ b/alacritty_terminal/src/ansi.rs
@@ -1053,7 +1064,7 @@ where
// Reset color index.
b"104" => {
// Reset all color indexes when no parameters are given.
- if params.len() == 1 {
+ if params.len() == 1 || params[1].is_empty() {
for i in 0..256 {
self.handler.reset_color(i);
}
diff --git a/alacritty_terminal/src/term/mod.rs b/alacritty_terminal/src/term/mod.rs
--- a/alacritty_terminal/src/term/mod.rs
+++ b/alacritty_terminal/src/term/mod.rs
@@ -818,6 +818,10 @@ impl<T> Term<T> {
cursor_cell.bg = bg;
cursor_cell.flags = flags;
}
+
+ pub fn colors(&self) -> &Colors {
+ &self.colors
+ }
}
impl<T> Dimensions for Term<T> {
diff --git a/extra/alacritty-msg.man b/extra/alacritty-msg.man
--- a/extra/alacritty-msg.man
+++ b/extra/alacritty-msg.man
@@ -1,4 +1,4 @@
-.TH ALACRITTY-MSG "1" "October 2021" "alacritty 0.10.0" "User Commands"
+.TH ALACRITTY-MSG "1" "October 2021" "alacritty 0.10.1-rc1" "User Commands"
.SH NAME
alacritty-msg \- Send messages to Alacritty
.SH "SYNOPSIS"
diff --git a/extra/alacritty.man b/extra/alacritty.man
--- a/extra/alacritty.man
+++ b/extra/alacritty.man
@@ -1,4 +1,4 @@
-.TH ALACRITTY "1" "August 2018" "alacritty 0.10.0" "User Commands"
+.TH ALACRITTY "1" "August 2018" "alacritty 0.10.1-rc1" "User Commands"
.SH NAME
Alacritty \- A fast, cross-platform, OpenGL terminal emulator
.SH "SYNOPSIS"
diff --git a/extra/osx/Alacritty.app/Contents/Info.plist b/extra/osx/Alacritty.app/Contents/Info.plist
--- a/extra/osx/Alacritty.app/Contents/Info.plist
+++ b/extra/osx/Alacritty.app/Contents/Info.plist
@@ -15,7 +15,7 @@
<key>CFBundlePackageType</key>
<string>APPL</string>
<key>CFBundleShortVersionString</key>
- <string>0.10.0</string>
+ <string>0.10.1-rc1</string>
<key>CFBundleSupportedPlatforms</key>
<array>
<string>MacOSX</string>
| diff --git a/.builds/freebsd.yml b/.builds/freebsd.yml
--- a/.builds/freebsd.yml
+++ b/.builds/freebsd.yml
@@ -33,7 +33,7 @@ tasks:
cargo clippy --all-targets
- feature-wayland: |
cd alacritty/alacritty
- cargo test --no-default-features --features=wayland
+ RUSTFLAGS="-D warnings" cargo test --no-default-features --features=wayland
- feature-x11: |
cd alacritty/alacritty
- cargo test --no-default-features --features=x11
+ RUSTFLAGS="-D warnings" cargo test --no-default-features --features=x11
diff --git a/.builds/linux.yml b/.builds/linux.yml
--- a/.builds/linux.yml
+++ b/.builds/linux.yml
@@ -36,7 +36,7 @@ tasks:
cargo clippy --all-targets
- feature-wayland: |
cd alacritty/alacritty
- cargo test --no-default-features --features=wayland
+ RUSTFLAGS="-D warnings" cargo test --no-default-features --features=wayland
- feature-x11: |
cd alacritty/alacritty
- cargo test --no-default-features --features=x11
+ RUSTFLAGS="-D warnings" cargo test --no-default-features --features=x11
diff --git a/alacritty/src/renderer/builtin_font.rs b/alacritty/src/renderer/builtin_font.rs
--- a/alacritty/src/renderer/builtin_font.rs
+++ b/alacritty/src/renderer/builtin_font.rs
@@ -730,7 +797,7 @@ mod test {
#[test]
fn builtin_line_drawing_glyphs_coverage() {
- // Dummy metrics values to test builtin glyphs coverage.
+ // Dummy metrics values to test built-in glyphs coverage.
let metrics = Metrics {
average_advance: 6.,
line_height: 16.,
diff --git a/alacritty/src/renderer/builtin_font.rs b/alacritty/src/renderer/builtin_font.rs
--- a/alacritty/src/renderer/builtin_font.rs
+++ b/alacritty/src/renderer/builtin_font.rs
@@ -741,13 +808,16 @@ mod test {
strikeout_thickness: 2.,
};
+ let offset = Default::default();
+ let glyph_offset = Default::default();
+
// Test coverage of box drawing characters.
for character in '\u{2500}'..='\u{259f}' {
- assert!(builtin_glyph(character, &metrics).is_some());
+ assert!(builtin_glyph(character, &metrics, &offset, &glyph_offset).is_some());
}
for character in ('\u{2450}'..'\u{2500}').chain('\u{25a0}'..'\u{2600}') {
- assert!(builtin_glyph(character, &metrics).is_none());
+ assert!(builtin_glyph(character, &metrics, &offset, &glyph_offset).is_none());
}
}
}
diff --git a/alacritty_terminal/src/ansi.rs b/alacritty_terminal/src/ansi.rs
--- a/alacritty_terminal/src/ansi.rs
+++ b/alacritty_terminal/src/ansi.rs
@@ -1494,6 +1505,8 @@ mod tests {
charset: StandardCharset,
attr: Option<Attr>,
identity_reported: bool,
+ color: Option<Rgb>,
+ reset_colors: Vec<usize>,
}
impl Handler for MockHandler {
diff --git a/alacritty_terminal/src/ansi.rs b/alacritty_terminal/src/ansi.rs
--- a/alacritty_terminal/src/ansi.rs
+++ b/alacritty_terminal/src/ansi.rs
@@ -1517,6 +1530,14 @@ mod tests {
fn reset_state(&mut self) {
*self = Self::default();
}
+
+ fn set_color(&mut self, _: usize, c: Rgb) {
+ self.color = Some(c);
+ }
+
+ fn reset_color(&mut self, index: usize) {
+ self.reset_colors.push(index)
+ }
}
impl Default for MockHandler {
diff --git a/alacritty_terminal/src/ansi.rs b/alacritty_terminal/src/ansi.rs
--- a/alacritty_terminal/src/ansi.rs
+++ b/alacritty_terminal/src/ansi.rs
@@ -1526,6 +1547,8 @@ mod tests {
charset: StandardCharset::Ascii,
attr: None,
identity_reported: false,
+ color: None,
+ reset_colors: Vec::new(),
}
}
}
diff --git a/alacritty_terminal/src/ansi.rs b/alacritty_terminal/src/ansi.rs
--- a/alacritty_terminal/src/ansi.rs
+++ b/alacritty_terminal/src/ansi.rs
@@ -1724,4 +1747,62 @@ mod tests {
fn parse_number_too_large() {
assert_eq!(parse_number(b"321"), None);
}
+
+ #[test]
+ fn parse_osc4_set_color() {
+ let bytes: &[u8] = b"\x1b]4;0;#fff\x1b\\";
+
+ let mut parser = Processor::new();
+ let mut handler = MockHandler::default();
+
+ for byte in bytes {
+ parser.advance(&mut handler, *byte);
+ }
+
+ assert_eq!(handler.color, Some(Rgb { r: 0xf0, g: 0xf0, b: 0xf0 }));
+ }
+
+ #[test]
+ fn parse_osc104_reset_color() {
+ let bytes: &[u8] = b"\x1b]104;1;\x1b\\";
+
+ let mut parser = Processor::new();
+ let mut handler = MockHandler::default();
+
+ for byte in bytes {
+ parser.advance(&mut handler, *byte);
+ }
+
+ assert_eq!(handler.reset_colors, vec![1]);
+ }
+
+ #[test]
+ fn parse_osc104_reset_all_colors() {
+ let bytes: &[u8] = b"\x1b]104;\x1b\\";
+
+ let mut parser = Processor::new();
+ let mut handler = MockHandler::default();
+
+ for byte in bytes {
+ parser.advance(&mut handler, *byte);
+ }
+
+ let expected: Vec<usize> = (0..256).collect();
+ assert_eq!(handler.reset_colors, expected);
+ }
+
+ #[test]
+ fn parse_osc104_reset_all_colors_no_semicolon() {
+ let bytes: &[u8] = b"\x1b]104\x1b\\";
+
+ let mut parser = Processor::new();
+ let mut handler = MockHandler::default();
+
+ for byte in bytes {
+ parser.advance(&mut handler, *byte);
+ }
+
+ let expected: Vec<usize> = (0..256).collect();
+ assert_eq!(handler.reset_colors, expected);
+ }
}
| Alacritty keeps asking for permissions
### System
OS: MacOS 12.0.1
Version: 0.10.0
### Expected Behavior
Alacritty asks for permissions to access folders once and remembers.
### Actual Behavior
With every keystroke in a protected macOS folder (e.g. Desktop, Downloads, Documents), I get a prompt asking me to give it access. This makes the terminal unusable. Pictures below.
<img width="312" alt="Screen Shot 2022-01-31 at 2 30 26 PM" src="https://user-images.githubusercontent.com/22417141/151861069-6e4cd773-643f-4715-9b4b-25238ce74dc3.png">
I have given Alacritty permission to access the folders and subfolders in question.
<img width="394" alt="Screen Shot 2022-01-31 at 2 30 35 PM" src="https://user-images.githubusercontent.com/22417141/151861071-186e9855-533b-4503-868e-1cd9f13c499d.png">
The permissions prompt appears every time I enter one of the above folders, perform any action, and even on each keystroke.
| 2022-02-10T01:07:06 | 0.10 | 8a26dee0a9167777709935789b95758e36885617 | [
"ansi::tests::parse_osc104_reset_all_colors"
] | [
"ansi::tests::parse_control_attribute",
"ansi::tests::parse_designate_g0_as_line_drawing",
"ansi::tests::parse_designate_g1_as_line_drawing_and_invoke",
"ansi::tests::parse_invalid_legacy_rgb_colors",
"ansi::tests::parse_invalid_number",
"ansi::tests::parse_invalid_rgb_colors",
"ansi::tests::parse_numbe... | [] | [] | 6 | |
alacritty/alacritty | 5,788 | alacritty__alacritty-5788 | [
"5542"
] | 60ef17e8e98b0ed219a145881d10ecd6b9f26e85 | diff --git a/CHANGELOG.md b/CHANGELOG.md
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -19,6 +19,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
- OSC 4 not handling `?`
- `?` in OSC strings reporting default colors instead of modified ones
+- OSC 104 not clearing colors when second parameter is empty
## 0.10.0
diff --git a/alacritty_terminal/src/ansi.rs b/alacritty_terminal/src/ansi.rs
--- a/alacritty_terminal/src/ansi.rs
+++ b/alacritty_terminal/src/ansi.rs
@@ -1064,7 +1064,7 @@ where
// Reset color index.
b"104" => {
// Reset all color indexes when no parameters are given.
- if params.len() == 1 {
+ if params.len() == 1 || params[1].is_empty() {
for i in 0..256 {
self.handler.reset_color(i);
}
| diff --git a/alacritty_terminal/src/ansi.rs b/alacritty_terminal/src/ansi.rs
--- a/alacritty_terminal/src/ansi.rs
+++ b/alacritty_terminal/src/ansi.rs
@@ -1506,6 +1506,7 @@ mod tests {
attr: Option<Attr>,
identity_reported: bool,
color: Option<Rgb>,
+ reset_colors: Vec<usize>,
}
impl Handler for MockHandler {
diff --git a/alacritty_terminal/src/ansi.rs b/alacritty_terminal/src/ansi.rs
--- a/alacritty_terminal/src/ansi.rs
+++ b/alacritty_terminal/src/ansi.rs
@@ -1533,6 +1534,10 @@ mod tests {
fn set_color(&mut self, _: usize, c: Rgb) {
self.color = Some(c);
}
+
+ fn reset_color(&mut self, index: usize) {
+ self.reset_colors.push(index)
+ }
}
impl Default for MockHandler {
diff --git a/alacritty_terminal/src/ansi.rs b/alacritty_terminal/src/ansi.rs
--- a/alacritty_terminal/src/ansi.rs
+++ b/alacritty_terminal/src/ansi.rs
@@ -1543,6 +1548,7 @@ mod tests {
attr: None,
identity_reported: false,
color: None,
+ reset_colors: Vec::new(),
}
}
}
diff --git a/alacritty_terminal/src/ansi.rs b/alacritty_terminal/src/ansi.rs
--- a/alacritty_terminal/src/ansi.rs
+++ b/alacritty_terminal/src/ansi.rs
@@ -1755,4 +1761,48 @@ mod tests {
assert_eq!(handler.color, Some(Rgb { r: 0xf0, g: 0xf0, b: 0xf0 }));
}
+
+ #[test]
+ fn parse_osc104_reset_color() {
+ let bytes: &[u8] = b"\x1b]104;1;\x1b\\";
+
+ let mut parser = Processor::new();
+ let mut handler = MockHandler::default();
+
+ for byte in bytes {
+ parser.advance(&mut handler, *byte);
+ }
+
+ assert_eq!(handler.reset_colors, vec![1]);
+ }
+
+ #[test]
+ fn parse_osc104_reset_all_colors() {
+ let bytes: &[u8] = b"\x1b]104;\x1b\\";
+
+ let mut parser = Processor::new();
+ let mut handler = MockHandler::default();
+
+ for byte in bytes {
+ parser.advance(&mut handler, *byte);
+ }
+
+ let expected: Vec<usize> = (0..256).collect();
+ assert_eq!(handler.reset_colors, expected);
+ }
+
+ #[test]
+ fn parse_osc104_reset_all_colors_no_semicolon() {
+ let bytes: &[u8] = b"\x1b]104\x1b\\";
+
+ let mut parser = Processor::new();
+ let mut handler = MockHandler::default();
+
+ for byte in bytes {
+ parser.advance(&mut handler, *byte);
+ }
+
+ let expected: Vec<usize> = (0..256).collect();
+ assert_eq!(handler.reset_colors, expected);
+ }
}
| problems with OSC 4 and 104
[Documentation claims that OSC 4 and OSC 104 are supported](https://github.com/alacritty/alacritty/blob/master/docs/escape_support.md). However, testing reveals some issues.
If the OSC is fully supported, we should be able to query the state of colors. Here's a working query using OSC 11:
```
> echo -ne '\e]11;?\a'; cat
^[]11;rgb:ffff/ffff/ffff^G^C⏎
```
That tells me I have a white background.
Attempting to learn the state of a color in my color palette using OSC 4 does not work, however:
```
> echo -ne '\e]4;1;?\a'; cat
^C⏎
```
No information is returned. And in the log, we see:
> [2021-10-13 11:37:43.348998800] [DEBUG] [alacritty_terminal] [unhandled osc_dispatch]: [['4',],['1',],['?',],] at line 947
Strangely, we are able to change the colors with OSC 4:
```
> printf '\e]4;3;#490101\a'
```
That sets my dark yellow to a dark red, easy to see with e.g. `neofetch`.
However, attempting to undo this with OSC 104 does not work:
```
> printf '\e]104;\a'
```
My yellow remains red. The log reports:
> [2021-10-13 11:45:18.472954361] [DEBUG] [alacritty_terminal] [unhandled osc_dispatch]: [['1','0','4',],[],] at line 947
Using a different ending has no effect on results:
```
> echo -ne '\e]11;?\e\\'; cat
^[]11;rgb:ffff/ffff/ffff^[\^C⏎
> echo -ne '\e]4;1;?\e\\'; cat
^C⏎
```
I should be able to query my color palette with OSC 4, and reset my color palette using OSC 104. Currently I can only use OSC 4 to change colors. The above commands work fine in other terminal emulators such as iTerm2 and foot. Similar commands for OSC 10, 11, 110, and 111 all work in Alacritty as expected.
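To make the expected query/response cycle concrete, here is a minimal sketch (hypothetical illustration, not Alacritty's actual code) of the reply an xterm-compatible terminal sends back for an OSC 4 color query. The `osc4_reply` helper name is an assumption; the byte-repetition scaling (each 8-bit channel widened to 16 bits, so `0xff` becomes `0xffff`) matches the `rgb:ffff/ffff/ffff` replies shown above.

```rust
/// Hypothetical helper: build the reply a terminal sends for `\e]4;<index>;?\e\\`,
/// reporting palette color `index` as 16-bit-per-channel hex.
fn osc4_reply(index: usize, r: u8, g: u8, b: u8) -> String {
    // Widen each 8-bit channel to 16 bits by repeating the byte (0xff -> 0xffff).
    let scale = |c: u8| (c as u16) * 0x0101;
    format!(
        "\x1b]4;{};rgb:{:04x}/{:04x}/{:04x}\x1b\\",
        index,
        scale(r),
        scale(g),
        scale(b)
    )
}
```

For a white palette entry 1, this produces the same `rgb:ffff/ffff/ffff` payload the working OSC 11 query returns above.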
### System
OS: macOS Catalina 10.15.7
Version: alacritty 0.9.0 (fed349a)
OS: Manjaro ARM Linux
Version: alacritty 0.10.0-dev (b6e05d2d)
DE: Wayland, sway
### Logs
> alacritty -vv
Created log file at "/tmp/Alacritty-25654.log"
[2021-10-13 11:36:58.858691035] [INFO ] [alacritty] Welcome to Alacritty
[2021-10-13 11:36:58.858972769] [INFO ] [alacritty] Configuration files loaded from:
"/home/nelson/.config/alacritty/alacritty.yml"
[2021-10-13 11:36:58.900859460] [DEBUG] [crossfont] Loaded Face Face { ft_face: Font Face: Regular, load_flags: TARGET_LIGHT, render_mode: "Normal", lcd_filter: 1 }
[2021-10-13 11:36:58.910606092] [DEBUG] [alacritty] Estimated DPR: 1
[2021-10-13 11:36:58.910720127] [DEBUG] [alacritty] Estimated window size: None
[2021-10-13 11:36:58.910755125] [DEBUG] [alacritty] Estimated cell size: 9 x 20
[2021-10-13 11:36:59.019037732] [INFO ] [alacritty] Device pixel ratio: 1
[2021-10-13 11:36:59.049712562] [INFO ] [alacritty] Initializing glyph cache...
[2021-10-13 11:36:59.054074182] [DEBUG] [crossfont] Loaded Face Face { ft_face: Font Face: Regular, load_flags: TARGET_LIGHT, render_mode: "Normal", lcd_filter: 1 }
[2021-10-13 11:36:59.063504963] [DEBUG] [crossfont] Loaded Face Face { ft_face: Font Face: Bold, load_flags: TARGET_LIGHT, render_mode: "Normal", lcd_filter: 1 }
[2021-10-13 11:36:59.073139607] [DEBUG] [crossfont] Loaded Face Face { ft_face: Font Face: Italic, load_flags: TARGET_LIGHT, render_mode: "Normal", lcd_filter: 1 }
[2021-10-13 11:36:59.082763169] [DEBUG] [crossfont] Loaded Face Face { ft_face: Font Face: Bold Italic, load_flags: TARGET_LIGHT, render_mode: "Normal", lcd_filter: 1 }
[2021-10-13 11:36:59.430697721] [INFO ] [alacritty] ... finished initializing glyph cache in 0.380882498s
[2021-10-13 11:36:59.430961080] [INFO ] [alacritty] Cell size: 9 x 20
[2021-10-13 11:36:59.431031076] [INFO ] [alacritty] Padding: 4 x 10
[2021-10-13 11:36:59.431070157] [INFO ] [alacritty] Width: 800, Height: 600
[2021-10-13 11:36:59.434250888] [INFO ] [alacritty] PTY dimensions: 29 x 88
[2021-10-13 11:36:59.452195504] [INFO ] [alacritty] Initialisation complete
[2021-10-13 11:36:59.481600616] [DEBUG] [alacritty_terminal] New num_cols is 105 and num_lines is 51
[2021-10-13 11:36:59.490114743] [INFO ] [alacritty] Padding: 5 x 7
[2021-10-13 11:36:59.490217112] [INFO ] [alacritty] Width: 956, Height: 1035
[2021-10-13 11:36:59.645425319] [DEBUG] [alacritty_terminal] [unhandled osc_dispatch]: [['5','0',],['c','o','l','o','r','s','=','l','i','g','h','t',],] at line 947
[2021-10-13 11:37:05.464752222] [DEBUG] [crossfont] Loaded Face Face { ft_face: Font Face: Regular, load_flags: TARGET_LIGHT, render_mode: "Normal", lcd_filter: 1 }
[2021-10-13 11:37:43.348998800] [DEBUG] [alacritty_terminal] [unhandled osc_dispatch]: [['4',],['1',],['?',],] at line 947
[2021-10-13 11:45:18.472954361] [DEBUG] [alacritty_terminal] [unhandled osc_dispatch]: [['1','0','4',],[],] at line 947
| Querying the current value with `?` simply isn't implemented, should be trivial to do though if someone is interested.
I would also like support for this. An additional problem is that the colors currently reported by `\e]11;?\a` are the original term colors rather than the (potentially) modified ones.
I'm looking into the issue with OSC 104 and it seems to happen here:
https://github.com/alacritty/alacritty/blob/ed35d033aeae0fa054756dac7f91ea8527203a4e/alacritty_terminal/src/ansi.rs#L1055-L1059
`\e]104;\a` is parsed as `[['1', '0', '4'], []]`. Omitting the `;` causes it to be parsed as `[['1', '0', '4']]`, which behaves as expected. Is this an issue with the vte crate's parser, or should we work around it?
> An additional problem is that the colors currently reported by `\e]11;?\a` are the original term colors rather than the (potentially) modified ones.
I can confirm this.
```
> echo -ne '\e]11;?\a'; cat
^[]11;rgb:ffff/ffff/ffff^G^C⏎
> printf '\e]11;#490101\a'
> echo -ne '\e]11;?\a'; cat
^[]11;rgb:ffff/ffff/ffff^G^C⏎
```
Should I open a separate bug report for OSC 11 not reporting the current value, but the initial value?
Apologies if I've misunderstood what those commands are supposed to do, I'm going entirely off the descriptions in [foot's list of OSCs](https://codeberg.org/dnkl/foot#supported-oscs) since that's the clearest source I've found.
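Fixing the OSC 11 reply presumably means the terminal has to track the *current* dynamic colors separately from the startup defaults. Here's a hypothetical std-only sketch of that idea — the struct and method names are made up, not Alacritty's API:

```rust
/// Hypothetical dynamic-color store: queries report the current value,
/// while reset (OSC 104 / OSC 111) restores the startup default.
struct DynamicBg {
    default: (u8, u8, u8),
    current: (u8, u8, u8),
}

impl DynamicBg {
    fn new(default: (u8, u8, u8)) -> Self {
        Self { default, current: default }
    }
    /// OSC 11;#rrggbb — set the background color.
    fn set(&mut self, rgb: (u8, u8, u8)) {
        self.current = rgb;
    }
    /// OSC 111 / OSC 104 — reset to the startup default.
    fn reset(&mut self) {
        self.current = self.default;
    }
    /// OSC 11;? — reply with the *current* color, not the default.
    fn reply(&self) -> String {
        let (r, g, b) = self.current;
        format!("\x1b]11;rgb:{r:02x}/{g:02x}/{b:02x}\x07")
    }
}
```

With this shape, the `printf '\e]11;#490101\a'` example above would make a subsequent `\e]11;?\a` query report `rgb:49/01/01` instead of the startup white.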
> \e]104;\a is parsed as [['1', '0', '4'], []]. Omitting the ; causes it to be parsed as [['1', '0', '4']] and behave as expected. Is this an issue with the vte crate's parser, or should we work around it?
Why would the result by the VTE crate be problematic? It's returning the appropriate things.
> Why would the result by the VTE crate be problematic? It's returning the appropriate things.
I'm not familiar with these escape sequences so I got a bit confused. I'm working on it and will submit a PR soon.
Let me know if you run into any troubles. | 2022-01-16T06:10:04 | 1.56 | 589c1e9c6b8830625162af14a9a7aee32c7aade0 | [
"ansi::tests::parse_osc104_reset_all_colors"
] | [
"ansi::tests::parse_invalid_legacy_rgb_colors",
"ansi::tests::parse_designate_g0_as_line_drawing",
"ansi::tests::parse_designate_g1_as_line_drawing_and_invoke",
"ansi::tests::parse_control_attribute",
"ansi::tests::parse_invalid_number",
"ansi::tests::parse_number_too_large",
"ansi::tests::parse_invalid... | [] | [] | 7 |
dtolnay/anyhow | 34 | dtolnay__anyhow-34 | [
"32"
] | 5e04e776efae33b311a850a0b6bae3104b90e1d4 | diff --git a/src/error.rs b/src/error.rs
--- a/src/error.rs
+++ b/src/error.rs
@@ -5,7 +5,7 @@ use std::error::Error as StdError;
use std::fmt::{self, Debug, Display};
use std::mem::{self, ManuallyDrop};
use std::ops::{Deref, DerefMut};
-use std::ptr;
+use std::ptr::{self, NonNull};
impl Error {
/// Create a new error object from any error type.
diff --git a/src/error.rs b/src/error.rs
--- a/src/error.rs
+++ b/src/error.rs
@@ -29,11 +29,11 @@ impl Error {
{
let vtable = &ErrorVTable {
object_drop: object_drop::<E>,
- object_drop_front: object_drop_front::<E>,
object_ref: object_ref::<E>,
object_mut: object_mut::<E>,
object_boxed: object_boxed::<E>,
- object_is: object_is::<E>,
+ object_downcast: object_downcast::<E>,
+ object_drop_rest: object_drop_front::<E>,
};
// Safety: passing vtable that operates on the right type E.
diff --git a/src/error.rs b/src/error.rs
--- a/src/error.rs
+++ b/src/error.rs
@@ -47,11 +47,11 @@ impl Error {
let error: MessageError<M> = MessageError(message);
let vtable = &ErrorVTable {
object_drop: object_drop::<MessageError<M>>,
- object_drop_front: object_drop_front::<MessageError<M>>,
object_ref: object_ref::<MessageError<M>>,
object_mut: object_mut::<MessageError<M>>,
object_boxed: object_boxed::<MessageError<M>>,
- object_is: object_is::<M>,
+ object_downcast: object_downcast::<M>,
+ object_drop_rest: object_drop_front::<M>,
};
// Safety: MessageError is repr(transparent) so it is okay for the
diff --git a/src/error.rs b/src/error.rs
--- a/src/error.rs
+++ b/src/error.rs
@@ -66,11 +66,11 @@ impl Error {
let error: DisplayError<M> = DisplayError(message);
let vtable = &ErrorVTable {
object_drop: object_drop::<DisplayError<M>>,
- object_drop_front: object_drop_front::<DisplayError<M>>,
object_ref: object_ref::<DisplayError<M>>,
object_mut: object_mut::<DisplayError<M>>,
object_boxed: object_boxed::<DisplayError<M>>,
- object_is: object_is::<M>,
+ object_downcast: object_downcast::<M>,
+ object_drop_rest: object_drop_front::<M>,
};
// Safety: DisplayError is repr(transparent) so it is okay for the
diff --git a/src/error.rs b/src/error.rs
--- a/src/error.rs
+++ b/src/error.rs
@@ -87,11 +87,11 @@ impl Error {
let vtable = &ErrorVTable {
object_drop: object_drop::<ContextError<C, E>>,
- object_drop_front: object_drop_front::<ContextError<C, E>>,
object_ref: object_ref::<ContextError<C, E>>,
object_mut: object_mut::<ContextError<C, E>>,
object_boxed: object_boxed::<ContextError<C, E>>,
- object_is: object_is::<ContextError<C, E>>,
+ object_downcast: context_downcast::<C, E>,
+ object_drop_rest: context_drop_rest::<C, E>,
};
// Safety: passing vtable that operates on the right type.
diff --git a/src/error.rs b/src/error.rs
--- a/src/error.rs
+++ b/src/error.rs
@@ -114,7 +114,7 @@ impl Error {
let inner = Box::new(ErrorImpl {
vtable,
backtrace,
- _error: error,
+ _object: error,
});
let erased = mem::transmute::<Box<ErrorImpl<E>>, Box<ErrorImpl<()>>>(inner);
let inner = ManuallyDrop::new(erased);
diff --git a/src/error.rs b/src/error.rs
--- a/src/error.rs
+++ b/src/error.rs
@@ -186,11 +186,11 @@ impl Error {
let vtable = &ErrorVTable {
object_drop: object_drop::<ContextError<C, Error>>,
- object_drop_front: object_drop_front::<ContextError<C, Error>>,
object_ref: object_ref::<ContextError<C, Error>>,
object_mut: object_mut::<ContextError<C, Error>>,
object_boxed: object_boxed::<ContextError<C, Error>>,
- object_is: object_is::<ContextError<C, Error>>,
+ object_downcast: context_chain_downcast::<C>,
+ object_drop_rest: context_chain_drop_rest::<C>,
};
// As the cause is anyhow::Error, we already have a backtrace for it.
diff --git a/src/error.rs b/src/error.rs
--- a/src/error.rs
+++ b/src/error.rs
@@ -255,13 +255,19 @@ impl Error {
root_cause
}
- /// Returns `true` if `E` is the type wrapped by this error object.
+ /// Returns true if `E` is the type held by this error object.
+ ///
+ /// For errors with context, this method returns true if `E` matches the
+ /// type of the context `C` **or** the type of the error on which the
+ /// context has been attached. For details about the interaction between
+ /// context and downcasting, [see here].
+ ///
+ /// [see here]: trait.Context.html#effect-on-downcasting
pub fn is<E>(&self) -> bool
where
E: Display + Debug + Send + Sync + 'static,
{
- let target = TypeId::of::<E>();
- unsafe { (self.inner.vtable.object_is)(target) }
+ self.downcast_ref::<E>().is_some()
}
/// Attempt to downcast the error object to a concrete type.
diff --git a/src/error.rs b/src/error.rs
--- a/src/error.rs
+++ b/src/error.rs
@@ -269,19 +275,18 @@ impl Error {
where
E: Display + Debug + Send + Sync + 'static,
{
- if self.is::<E>() {
+ let target = TypeId::of::<E>();
+ unsafe {
+ let addr = match (self.inner.vtable.object_downcast)(&self.inner, target) {
+ Some(addr) => addr,
+ None => return Err(self),
+ };
let outer = ManuallyDrop::new(self);
- unsafe {
- let error = ptr::read(
- outer.inner.error() as *const (dyn StdError + Send + Sync) as *const E
- );
- let inner = ptr::read(&outer.inner);
- let erased = ManuallyDrop::into_inner(inner);
- (erased.vtable.object_drop_front)(erased);
- Ok(error)
- }
- } else {
- Err(self)
+ let error = ptr::read(addr.cast::<E>().as_ptr());
+ let inner = ptr::read(&outer.inner);
+ let erased = ManuallyDrop::into_inner(inner);
+ (erased.vtable.object_drop_rest)(erased, target);
+ Ok(error)
}
}
diff --git a/src/error.rs b/src/error.rs
--- a/src/error.rs
+++ b/src/error.rs
@@ -325,12 +330,10 @@ impl Error {
where
E: Display + Debug + Send + Sync + 'static,
{
- if self.is::<E>() {
- Some(unsafe {
- &*(self.inner.error() as *const (dyn StdError + Send + Sync) as *const E)
- })
- } else {
- None
+ let target = TypeId::of::<E>();
+ unsafe {
+ let addr = (self.inner.vtable.object_downcast)(&self.inner, target)?;
+ Some(&*addr.cast::<E>().as_ptr())
}
}
diff --git a/src/error.rs b/src/error.rs
--- a/src/error.rs
+++ b/src/error.rs
@@ -339,12 +342,10 @@ impl Error {
where
E: Display + Debug + Send + Sync + 'static,
{
- if self.is::<E>() {
- Some(unsafe {
- &mut *(self.inner.error_mut() as *mut (dyn StdError + Send + Sync) as *mut E)
- })
- } else {
- None
+ let target = TypeId::of::<E>();
+ unsafe {
+ let addr = (self.inner.vtable.object_downcast)(&self.inner, target)?;
+ Some(&mut *addr.cast::<E>().as_ptr())
}
}
}
diff --git a/src/error.rs b/src/error.rs
--- a/src/error.rs
+++ b/src/error.rs
@@ -400,11 +401,11 @@ impl Drop for Error {
struct ErrorVTable {
object_drop: unsafe fn(Box<ErrorImpl<()>>),
- object_drop_front: unsafe fn(Box<ErrorImpl<()>>),
object_ref: unsafe fn(&ErrorImpl<()>) -> &(dyn StdError + Send + Sync + 'static),
object_mut: unsafe fn(&mut ErrorImpl<()>) -> &mut (dyn StdError + Send + Sync + 'static),
object_boxed: unsafe fn(Box<ErrorImpl<()>>) -> Box<dyn StdError + Send + Sync + 'static>,
- object_is: unsafe fn(TypeId) -> bool,
+ object_downcast: unsafe fn(&ErrorImpl<()>, TypeId) -> Option<NonNull<()>>,
+ object_drop_rest: unsafe fn(Box<ErrorImpl<()>>, TypeId),
}
unsafe fn object_drop<E>(e: Box<ErrorImpl<()>>) {
diff --git a/src/error.rs b/src/error.rs
--- a/src/error.rs
+++ b/src/error.rs
@@ -414,10 +415,11 @@ unsafe fn object_drop<E>(e: Box<ErrorImpl<()>>) {
drop(unerased);
}
-unsafe fn object_drop_front<E>(e: Box<ErrorImpl<()>>) {
+unsafe fn object_drop_front<E>(e: Box<ErrorImpl<()>>, target: TypeId) {
// Drop the fields of ErrorImpl other than E as well as the Box allocation,
// without dropping E itself. This is used by downcast after doing a
// ptr::read to take ownership of the E.
+ let _ = target;
let unerased = mem::transmute::<Box<ErrorImpl<()>>, Box<ErrorImpl<ManuallyDrop<E>>>>(e);
drop(unerased);
}
diff --git a/src/error.rs b/src/error.rs
--- a/src/error.rs
+++ b/src/error.rs
@@ -426,14 +428,14 @@ unsafe fn object_ref<E>(e: &ErrorImpl<()>) -> &(dyn StdError + Send + Sync + 'st
where
E: StdError + Send + Sync + 'static,
{
- &(*(e as *const ErrorImpl<()> as *const ErrorImpl<E>))._error
+ &(*(e as *const ErrorImpl<()> as *const ErrorImpl<E>))._object
}
unsafe fn object_mut<E>(e: &mut ErrorImpl<()>) -> &mut (dyn StdError + Send + Sync + 'static)
where
E: StdError + Send + Sync + 'static,
{
- &mut (*(e as *mut ErrorImpl<()> as *mut ErrorImpl<E>))._error
+ &mut (*(e as *mut ErrorImpl<()> as *mut ErrorImpl<E>))._object
}
unsafe fn object_boxed<E>(e: Box<ErrorImpl<()>>) -> Box<dyn StdError + Send + Sync + 'static>
diff --git a/src/error.rs b/src/error.rs
--- a/src/error.rs
+++ b/src/error.rs
@@ -443,22 +445,111 @@ where
mem::transmute::<Box<ErrorImpl<()>>, Box<ErrorImpl<E>>>(e)
}
-unsafe fn object_is<T>(target: TypeId) -> bool
+unsafe fn object_downcast<E>(e: &ErrorImpl<()>, target: TypeId) -> Option<NonNull<()>>
+where
+ E: 'static,
+{
+ if TypeId::of::<E>() == target {
+ let unerased = e as *const ErrorImpl<()> as *const ErrorImpl<E>;
+ let addr = &(*unerased)._object as *const E as *mut ();
+ Some(NonNull::new_unchecked(addr))
+ } else {
+ None
+ }
+}
+
+unsafe fn context_downcast<C, E>(e: &ErrorImpl<()>, target: TypeId) -> Option<NonNull<()>>
where
- T: 'static,
+ C: 'static,
+ E: 'static,
{
- TypeId::of::<T>() == target
+ if TypeId::of::<C>() == target {
+ let unerased = e as *const ErrorImpl<()> as *const ErrorImpl<ContextError<C, E>>;
+ let addr = &(*unerased)._object.context as *const C as *mut ();
+ Some(NonNull::new_unchecked(addr))
+ } else if TypeId::of::<E>() == target {
+ let unerased = e as *const ErrorImpl<()> as *const ErrorImpl<ContextError<C, E>>;
+ let addr = &(*unerased)._object.error as *const E as *mut ();
+ Some(NonNull::new_unchecked(addr))
+ } else {
+ None
+ }
}
-// repr C to ensure that `E` remains in the final position
+unsafe fn context_drop_rest<C, E>(e: Box<ErrorImpl<()>>, target: TypeId)
+where
+ C: 'static,
+ E: 'static,
+{
+ // Called after downcasting by value to either the C or the E and doing a
+ // ptr::read to take ownership of that value.
+ if TypeId::of::<C>() == target {
+ let unerased = mem::transmute::<
+ Box<ErrorImpl<()>>,
+ Box<ErrorImpl<ContextError<ManuallyDrop<C>, E>>>,
+ >(e);
+ drop(unerased);
+ } else {
+ let unerased = mem::transmute::<
+ Box<ErrorImpl<()>>,
+ Box<ErrorImpl<ContextError<C, ManuallyDrop<E>>>>,
+ >(e);
+ drop(unerased);
+ }
+}
+
+unsafe fn context_chain_downcast<C>(e: &ErrorImpl<()>, target: TypeId) -> Option<NonNull<()>>
+where
+ C: 'static,
+{
+ if TypeId::of::<C>() == target {
+ let unerased = e as *const ErrorImpl<()> as *const ErrorImpl<ContextError<C, Error>>;
+ let addr = &(*unerased)._object.context as *const C as *mut ();
+ Some(NonNull::new_unchecked(addr))
+ } else {
+ let unerased = e as *const ErrorImpl<()> as *const ErrorImpl<ContextError<C, Error>>;
+ let source = &(*unerased)._object.error;
+ (source.inner.vtable.object_downcast)(&source.inner, target)
+ }
+}
+
+unsafe fn context_chain_drop_rest<C>(e: Box<ErrorImpl<()>>, target: TypeId)
+where
+ C: 'static,
+{
+ // Called after downcasting by value to either the C or one of the causes
+ // and doing a ptr::read to take ownership of that value.
+ if TypeId::of::<C>() == target {
+ let unerased = mem::transmute::<
+ Box<ErrorImpl<()>>,
+ Box<ErrorImpl<ContextError<ManuallyDrop<C>, Error>>>,
+ >(e);
+ drop(unerased);
+ } else {
+ let unerased = mem::transmute::<
+ Box<ErrorImpl<()>>,
+ Box<ErrorImpl<ContextError<C, ManuallyDrop<Error>>>>,
+ >(e);
+ let inner = ptr::read(&unerased._object.error.inner);
+ drop(unerased);
+ let erased = ManuallyDrop::into_inner(inner);
+ (erased.vtable.object_drop_rest)(erased, target);
+ }
+}
+
+// repr C to ensure that E remains in the final position.
#[repr(C)]
pub(crate) struct ErrorImpl<E> {
vtable: &'static ErrorVTable,
backtrace: Option<Backtrace>,
- // NOTE: Don't use directly. Use only through vtable. Erased type may have different alignment.
- _error: E,
+ // NOTE: Don't use directly. Use only through vtable. Erased type may have
+ // different alignment.
+ _object: E,
}
+// repr C to ensure that ContextError<C, E> has the same layout as
+// ContextError<ManuallyDrop<C>, E> and ContextError<C, ManuallyDrop<E>>.
+#[repr(C)]
pub(crate) struct ContextError<C, E> {
pub context: C,
pub error: E,
diff --git a/src/lib.rs b/src/lib.rs
--- a/src/lib.rs
+++ b/src/lib.rs
@@ -286,6 +286,8 @@ pub type Result<T, E = Error> = std::result::Result<T, E>;
/// This trait is sealed and cannot be implemented for types outside of
/// `anyhow`.
///
+/// <br>
+///
/// # Example
///
/// ```
diff --git a/src/lib.rs b/src/lib.rs
--- a/src/lib.rs
+++ b/src/lib.rs
@@ -326,6 +328,100 @@ pub type Result<T, E = Error> = std::result::Result<T, E>;
/// Caused by:
/// No such file or directory (os error 2)
/// ```
+///
+/// <br>
+///
+/// # Effect on downcasting
+///
+/// After attaching context of type `C` onto an error of type `E`, the resulting
+/// `anyhow::Error` may be downcast to `C` **or** to `E`.
+///
+/// That is, in codebases that rely on downcasting, Anyhow's context supports
+/// both of the following use cases:
+///
+/// - **Attaching context whose type is insignificant onto errors whose type
+/// is used in downcasts.**
+///
+/// In other error libraries whose context is not designed this way, it can
+/// be risky to introduce context to existing code because new context might
+/// break existing working downcasts. In Anyhow, any downcast that worked
+/// before adding context will continue to work after you add a context, so
+/// you should freely add human-readable context to errors wherever it would
+/// be helpful.
+///
+/// ```
+/// # use anyhow::bail;
+/// # use thiserror::Error;
+/// #
+/// # #[derive(Error, Debug)]
+/// # #[error("???")]
+/// # struct SuspiciousError;
+/// #
+/// # fn helper() -> Result<()> {
+/// # bail!(SuspiciousError);
+/// # }
+/// #
+/// use anyhow::{Context, Result};
+///
+/// fn do_it() -> Result<()> {
+/// helper().context("failed to complete the work")?;
+/// # const IGNORE: &str = stringify! {
+/// ...
+/// # };
+/// # unreachable!()
+/// }
+///
+/// fn main() {
+/// let err = do_it().unwrap_err();
+/// if let Some(e) = err.downcast_ref::<SuspiciousError>() {
+/// // If helper() returned SuspiciousError, this downcast will
+/// // correctly succeed even with the context in between.
+/// # return;
+/// }
+/// # panic!("expected downcast to succeed");
+/// }
+/// ```
+///
+/// - **Attaching context whose type is used in downcasts onto errors whose
+/// type is insignificant.**
+///
+/// Some codebases prefer to use machine-readable context to categorize
+/// lower level errors in a way that will be actionable to higher levels of
+/// the application.
+///
+/// ```
+/// # use anyhow::bail;
+/// # use thiserror::Error;
+/// #
+/// # #[derive(Error, Debug)]
+/// # #[error("???")]
+/// # struct HelperFailed;
+/// #
+/// # fn helper() -> Result<()> {
+/// # bail!("no such file or directory");
+/// # }
+/// #
+/// use anyhow::{Context, Result};
+///
+/// fn do_it() -> Result<()> {
+/// helper().context(HelperFailed)?;
+/// # const IGNORE: &str = stringify! {
+/// ...
+/// # };
+/// # unreachable!()
+/// }
+///
+/// fn main() {
+/// let err = do_it().unwrap_err();
+/// if let Some(e) = err.downcast_ref::<HelperFailed>() {
+/// // If helper failed, this downcast will succeed because
+/// // HelperFailed is the context that has been attached to
+/// // that error.
+/// # return;
+/// }
+/// # panic!("expected downcast to succeed");
+/// }
+/// ```
pub trait Context<T, E>: context::private::Sealed {
/// Wrap the error value with additional context.
fn context<C>(self, context: C) -> Result<T, Error>
| diff --git a/tests/test_context.rs b/tests/test_context.rs
--- a/tests/test_context.rs
+++ b/tests/test_context.rs
@@ -1,4 +1,9 @@
-use anyhow::{Context, Result};
+mod drop;
+
+use crate::drop::{DetectDrop, Flag};
+use anyhow::{Context, Error, Result};
+use std::fmt::{self, Display};
+use thiserror::Error;
// https://github.com/dtolnay/anyhow/issues/18
#[test]
diff --git a/tests/test_context.rs b/tests/test_context.rs
--- a/tests/test_context.rs
+++ b/tests/test_context.rs
@@ -8,3 +13,147 @@ fn test_inference() -> Result<()> {
assert_eq!(y, 1);
Ok(())
}
+
+macro_rules! context_type {
+ ($name:ident) => {
+ #[derive(Debug)]
+ struct $name {
+ message: &'static str,
+ drop: DetectDrop,
+ }
+
+ impl Display for $name {
+ fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
+ f.write_str(self.message)
+ }
+ }
+ };
+}
+
+context_type!(HighLevel);
+context_type!(MidLevel);
+
+#[derive(Error, Debug)]
+#[error("{message}")]
+struct LowLevel {
+ message: &'static str,
+ drop: DetectDrop,
+}
+
+struct Dropped {
+ low: Flag,
+ mid: Flag,
+ high: Flag,
+}
+
+impl Dropped {
+ fn none(&self) -> bool {
+ !self.low.get() && !self.mid.get() && !self.high.get()
+ }
+
+ fn all(&self) -> bool {
+ self.low.get() && self.mid.get() && self.high.get()
+ }
+}
+
+fn make_chain() -> (Error, Dropped) {
+ let dropped = Dropped {
+ low: Flag::new(),
+ mid: Flag::new(),
+ high: Flag::new(),
+ };
+
+ let low = LowLevel {
+ message: "no such file or directory",
+ drop: DetectDrop::new(&dropped.low),
+ };
+
+ // impl Context for Result<T, E>
+ let mid = Err::<(), LowLevel>(low)
+ .context(MidLevel {
+ message: "failed to load config",
+ drop: DetectDrop::new(&dropped.mid),
+ })
+ .unwrap_err();
+
+ // impl Context for Result<T, Error>
+ let high = Err::<(), Error>(mid)
+ .context(HighLevel {
+ message: "failed to start server",
+ drop: DetectDrop::new(&dropped.high),
+ })
+ .unwrap_err();
+
+ (high, dropped)
+}
+
+#[test]
+fn test_downcast_ref() {
+ let (err, dropped) = make_chain();
+
+ assert!(!err.is::<String>());
+ assert!(err.downcast_ref::<String>().is_none());
+
+ assert!(err.is::<HighLevel>());
+ let high = err.downcast_ref::<HighLevel>().unwrap();
+ assert_eq!(high.to_string(), "failed to start server");
+
+ assert!(err.is::<MidLevel>());
+ let mid = err.downcast_ref::<MidLevel>().unwrap();
+ assert_eq!(mid.to_string(), "failed to load config");
+
+ assert!(err.is::<LowLevel>());
+ let low = err.downcast_ref::<LowLevel>().unwrap();
+ assert_eq!(low.to_string(), "no such file or directory");
+
+ assert!(dropped.none());
+ drop(err);
+ assert!(dropped.all());
+}
+
+#[test]
+fn test_downcast_high() {
+ let (err, dropped) = make_chain();
+
+ let err = err.downcast::<HighLevel>().unwrap();
+ assert!(!dropped.high.get());
+ assert!(dropped.low.get() && dropped.mid.get());
+
+ drop(err);
+ assert!(dropped.all());
+}
+
+#[test]
+fn test_downcast_mid() {
+ let (err, dropped) = make_chain();
+
+ let err = err.downcast::<MidLevel>().unwrap();
+ assert!(!dropped.mid.get());
+ assert!(dropped.low.get() && dropped.high.get());
+
+ drop(err);
+ assert!(dropped.all());
+}
+
+#[test]
+fn test_downcast_low() {
+ let (err, dropped) = make_chain();
+
+ let err = err.downcast::<LowLevel>().unwrap();
+ assert!(!dropped.low.get());
+ assert!(dropped.mid.get() && dropped.high.get());
+
+ drop(err);
+ assert!(dropped.all());
+}
+
+#[test]
+fn test_unsuccessful_downcast() {
+ let (err, dropped) = make_chain();
+
+ let err = err.downcast::<String>().unwrap_err();
+ assert!(dropped.none());
+
+ drop(err);
+ assert!(dropped.all());
+}
| Figure out how context should interact with downcasting
For example if we have:
```rust
let e = fs::read("/...").context(ReadFailed).unwrap_err();
match e.downcast_ref::<ReadFailed>() {
```
should this downcast succeed or fail?
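The patch resolves this so that both downcasts succeed: an error with context attached can be downcast to the context type *or* the underlying error type. A rough std-only illustration of the idea behind the new `context_downcast` vtable function (using `dyn Any` here instead of the raw-pointer vtable in the real patch):

```rust
use std::any::{Any, TypeId};

/// Mirrors anyhow's ContextError<C, E>: a context value attached to an error.
struct ContextError<C, E> {
    context: C,
    error: E,
}

/// Try the context first, then the wrapped error — conceptually what
/// `context_downcast` in the patch does with raw pointers.
fn downcast_ref<C: 'static, E: 'static, T: 'static>(
    ce: &ContextError<C, E>,
) -> Option<&T> {
    if TypeId::of::<C>() == TypeId::of::<T>() {
        (&ce.context as &dyn Any).downcast_ref::<T>()
    } else if TypeId::of::<E>() == TypeId::of::<T>() {
        (&ce.error as &dyn Any).downcast_ref::<T>()
    } else {
        None
    }
}
```

In the real implementation the by-value `downcast` additionally needs the paired `context_drop_rest` function so that whichever half was *not* moved out still gets dropped exactly once.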
| For that matter, should `e.downcast_ref::<io::Error>()` succeed? | 2019-10-28T12:27:06 | 1.0 | 2737bbeb59f50651ff54ca3d879a3f5d659a98ab | [
"test_downcast_low",
"test_downcast_high",
"test_downcast_mid"
] | [
"test_downcast_ref",
"test_inference",
"test_unsuccessful_downcast",
"test_convert",
"test_question_mark",
"test_downcast",
"test_downcast_mut",
"test_drop",
"test_large_alignment",
"test_display",
"test_ensure",
"test_messages",
"test_autotraits",
"test_null_pointer_optimization",
"test... | [] | [] | 8 |
ratatui/ratatui | 1,226 | ratatui__ratatui-1226 | [
"1211"
] | 935a7187c273e0efc876d094d6247d50e28677a3 | diff --git a/src/buffer/buffer.rs b/src/buffer/buffer.rs
--- a/src/buffer/buffer.rs
+++ b/src/buffer/buffer.rs
@@ -192,7 +192,7 @@ impl Buffer {
}
/// Print at most the first n characters of a string if enough space is available
- /// until the end of the line.
+ /// until the end of the line. Skips zero-width graphemes and control characters.
///
/// Use [`Buffer::set_string`] when the maximum amount of characters can be printed.
pub fn set_stringn<T, S>(
diff --git a/src/buffer/buffer.rs b/src/buffer/buffer.rs
--- a/src/buffer/buffer.rs
+++ b/src/buffer/buffer.rs
@@ -210,6 +210,7 @@ impl Buffer {
let max_width = max_width.try_into().unwrap_or(u16::MAX);
let mut remaining_width = self.area.right().saturating_sub(x).min(max_width);
let graphemes = UnicodeSegmentation::graphemes(string.as_ref(), true)
+ .filter(|symbol| !symbol.contains(|char: char| char.is_control()))
.map(|symbol| (symbol, symbol.width() as u16))
.filter(|(_symbol, width)| *width > 0)
.map_while(|(symbol, width)| {
| diff --git a/src/buffer/buffer.rs b/src/buffer/buffer.rs
--- a/src/buffer/buffer.rs
+++ b/src/buffer/buffer.rs
@@ -931,4 +932,35 @@ mod tests {
buf.set_string(0, 1, "bar", Style::new().blue());
assert_eq!(buf, Buffer::with_lines(["foo".red(), "bar".blue()]));
}
+
+ #[test]
+ fn control_sequence_rendered_full() {
+ let text = "I \x1b[0;36mwas\x1b[0m here!";
+
+ let mut buffer = Buffer::filled(Rect::new(0, 0, 25, 3), Cell::new("x"));
+ buffer.set_string(1, 1, text, Style::new());
+
+ let expected = Buffer::with_lines([
+ "xxxxxxxxxxxxxxxxxxxxxxxxx",
+ "xI [0;36mwas[0m here!xxxx",
+ "xxxxxxxxxxxxxxxxxxxxxxxxx",
+ ]);
+ assert_eq!(buffer, expected);
+ }
+
+ #[test]
+ fn control_sequence_rendered_partially() {
+ let text = "I \x1b[0;36mwas\x1b[0m here!";
+
+ let mut buffer = Buffer::filled(Rect::new(0, 0, 11, 3), Cell::new("x"));
+ buffer.set_string(1, 1, text, Style::new());
+
+ #[rustfmt::skip]
+ let expected = Buffer::with_lines([
+ "xxxxxxxxxxx",
+ "xI [0;36mwa",
+ "xxxxxxxxxxx",
+ ]);
+ assert_eq!(buffer, expected);
+ }
}
| Visual glitch - Null character inside block wrapped component alters borders
## Description
G'day,
This one took me a while to investigate in a project so thought I'd report here in case it helps someone else.
When a block-wrapped widget contains text with a null character (e.g. `\0`), and is styled (e.g. colour or bold), it can cause a visual glitch where the outer block borders do not align correctly.
## To Reproduce
Reproducible example located here: https://github.com/nick42d/ratatuirepro
## Expected behavior
Expect inner widget contents not to impact the outer block; however, note that the cause could be a technical limitation of the terminal that I'm not aware of.
## Screenshots

## Environment
- OS: Linux
- Terminal Emulator: tried Kitty, Alacritty, Xterm
- Font: JetBrains Nerd Font Mono
- Crate version: 0.27
- Backend: crossterm, also tried termwiz
| Thanks for reporting! oof, that must have been annoying to find.
I think the solution for this is that `Line` or `Span` should probably strip all null characters?
This is related to the recent unicode-width change in 0.1.13, which makes these characters now be measured as one character wide instead of 0 characters. See https://github.com/unicode-rs/unicode-width/issues/55 (closed as WONTFIX)
Locking unicode-width version =0.1.12 (and reverting to Ratatui 0.26.3) makes the issue in the repro go away. This might be a viable approach if you need an immediate workaround, but it's likely a better idea to strip the null characters (and probably any control characters) until this is fixed.
> I think the solution for this is that `Line` or `Span` should probably strip all null characters?
I think this is probably more complex than only filtering characters out:
- it's not just null characters that are affected, it's all the control characters and a bunch of other things that we haven't until now considered
- we might want to turn off this filtering (it's a perf problem for very large string sizes)
- we definitely want to be able to pass through ANSI escape codes and measure that differently (they'd be in the control characters mentioned above)
- We have other things that probably need to go with this (e.g. tab characters)
> This is related to the recent unicode-width change in 0.1.13, which makes these characters now be measured as one character wide instead of 0 characters. See https://github.com/unicode-rs/unicode-width/issues/55 (closed as WONTFIX)
>
> Locking unicode-width version =0.1.12 (and reverting to Ratatui 0.26.3) makes the issue in the repro go away. This might be a viable approach if you need an immediate workaround, but it's likely a better idea to strip the null characters (and probably any control characters) until this is fixed.
>
> > I think the solution for this is that `Line` or `Span` should probably strip all null characters?
>
> I think this is probably more complex than only filtering characters out:
> - it's not just null characters that are affected, it's all the control characters and a bunch of other things that we haven't until now considered
> - we might want to turn off this filtering (it's a perf problem for very large string sizes)
> - we definitely want to be able to pass through ANSI escape codes and measure that differently (they'd be in the control characters mentioned above)
> - We have other things that probably need to go with this (e.g. tab characters)
@joshka, that was an interesting read! For now I took the approach of stripping the characters; in any case, I was doing something incorrect on my part, calling a function like `string.push(optional_char.unwrap_or_default())`. Good luck in finding a solution!
ANSI escape codes result in horrible output glitches, which prevent me from updating to ratatui 0.27 as it enforces `unicode-width` 0.1.13. (https://github.com/EdJoPaTo/mqttui/pull/172)
The following tests work with `unicode-width` < 0.1.13, but don't work on `unicode-width` 0.1.13 (and therefore `ratatui` 0.27.0):
```rust
#[test]
fn control_sequence_rendered_full() {
let text = "I \x1b[0;36mwas\x1b[0m here!";
let mut buffer = Buffer::filled(Rect::new(0, 0, 25, 3), Cell::new("x"));
buffer.set_string(1, 1, text, Style::new());
let expected = Buffer::with_lines([
"xxxxxxxxxxxxxxxxxxxxxxxxx",
"xI [0;36mwas[0m here!xxxx",
"xxxxxxxxxxxxxxxxxxxxxxxxx",
]);
assert_eq!(buffer, expected);
}
#[test]
fn control_sequence_rendered_partially() {
let text = "I \x1b[0;36mwas\x1b[0m here!";
let mut buffer = Buffer::filled(Rect::new(0, 0, 11, 3), Cell::new("x"));
buffer.set_string(1, 1, text, Style::new());
#[rustfmt::skip]
let expected = Buffer::with_lines([
"xxxxxxxxxxx",
"xI [0;36mwa",
"xxxxxxxxxxx",
]);
assert_eq!(buffer, expected);
}
```
The second test colors the whole (failing) test output cyan until some other place resets it. The same behavior happens while rendering a TUI. A lot of places end up with wrong styles, wrong positions, and shifted lines, and characters get stuck at some positions until something is drawn over them again. There are glitches all over the place. Looks funny, but definitely isn't usable.
Simple reproduction:


Also notice that the right border is at the wrong position when the end escape isn't in view (second screenshot). As this is a simple example, it doesn't look as bad as a more complex TUI.
<details>
<summary>More extreme example</summary>
(This provides more details on what things are broken. But it doesn't really provide a way forward. Therefore, I put this into a collapsible. Collapse it again for the actual useful stuff.)

This example does not include the mentioned artifacts of previous renders but includes a mess of shifted and wrongly colored renders. I can't even explain why some areas have wrong styles or wrong characters. The line starting with debug = is shifted to the left completely. The scrollbar is also shifted left. (Also notice: the right scrollbar is not in focus and normally rendered gray. But it's also green here for some reason.)
Where does the `ens` with the green background come from? Is it mixing up the timestamp (from the right) and `sensor:093` (from the left)? How can this even happen here?
And at the end some random °C appears which is also in the middle of some other string.
Also, the payload at the top: it starts with a cyan escape code which is why the `[D]…` is cyan. But then it ends too early. Then randomly the `t` and the `2` are cyan again. Then the border and the following line. But it stops in the middle of `espDht-et`?

Here the border of the right is stuck behind from the last render. Notice it's above the line of the `debug = ` on the left, so it's not a direct result of the ANSI escape code.

All the values should be completely cyan according to the escape codes contained in the Strings. (The end ANSI escape code is never rendered, as it's trimmed.) And whatever happened here, the cyan stops in the middle at some random column. Some lines could even be the same (sensor in cyan and sensor in black on the same line), but some are definitely mixed up, probably with ones from previous renders, as sensor and scd30 are on the same line, which never happens (see the reference screenshot at the end).

Here the cyan randomly starts and stops again.
I suspect there is not only the issue of control characters being rendered, but also a problem with positioning due to different width calculations, or wrong detection of which parts need to be rendered again.
For reference: this is the behavior with `unicode-width` 0.1.11 in the current release of `mqttui`:

Even here you can see that the truncation on the right side isn't correct, as there is an extra one-column gap before the border. Not sure where that comes from. (Doesn't happen with only displayable characters.) Maybe there is a bug in `unicode-truncation` too?
</details>
---
One attempt to fix this was to `Style::reset()` the border in order to contain this behavior inside the Block, which doesn't work.
Something that actually works is a filter of what should be printed in the first place. This restores the behavior of `unicode-width` < 0.1.13 at least for the ANSI escape codes and the reported `\0` character:
```diff
index 4a4c365..ad8b88f 100644
--- a/src/buffer/buffer.rs
+++ b/src/buffer/buffer.rs
@@ -210,6 +210,7 @@ impl Buffer {
let max_width = max_width.try_into().unwrap_or(u16::MAX);
let mut remaining_width = self.area.right().saturating_sub(x).min(max_width);
let graphemes = UnicodeSegmentation::graphemes(string.as_ref(), true)
+ .filter(|symbol| !symbol.starts_with('\x1b') && !symbol.starts_with('\0'))
.map(|symbol| (symbol, symbol.width() as u16))
.filter(|(_symbol, width)| *width > 0)
.map_while(|(symbol, width)| {
```
(alternatively filter for control characters, but that might be too much):
```rust
.filter(|symbol| !symbol.contains(|char: char| char.is_control()))
```
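As a standalone illustration of the control-character variant above (a minimal sketch, not ratatui internals; the function name is made up): filtering on `char::is_control` drops the ESC and NUL bytes, but note that the SGR parameter bytes (`[36m`) are not control characters and survive the filter.

```rust
// Minimal sketch (not ratatui internals): drop every control character
// before the string reaches the buffer, as in the alternative filter
// above. Only ESC and NUL are control characters in this example; the
// SGR parameter bytes ("[36m") pass through.
fn printable_only(s: &str) -> String {
    s.chars().filter(|c| !c.is_control()).collect()
}

fn main() {
    let styled = "\x1b[36msensor\x1b[0m\0";
    assert_eq!(printable_only(styled), "[36msensor[0m");
}
```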
---
I suggest a bug-fix release that either filters these characters (are there more?) out before rendering, or drops the requirement of `unicode-width` 0.1.13, which would allow pinning to <0.1.13 until there is an actual fix. | 2024-07-12T20:26:33 | 0.27 | 935a7187c273e0efc876d094d6247d50e28677a3 | [
"buffer::buffer::tests::control_sequence_rendered_partially",
"buffer::buffer::tests::control_sequence_rendered_full"
] | [
"backend::crossterm::tests::from_crossterm_color",
"backend::crossterm::tests::from_crossterm_content_style_underline",
"backend::crossterm::tests::from_crossterm_content_style",
"backend::crossterm::tests::modifier::from_crossterm_attribute",
"backend::crossterm::tests::modifier::from_crossterm_attributes"... | [
"backend::test::tests::buffer_view_with_overwrites"
] | [] | 9 |
ratatui/ratatui | 518 | ratatui__ratatui-518 | [
"499"
] | c5ea656385843c880b3bef45dccbe8ea57431d10 | diff --git a/src/widgets/barchart.rs b/src/widgets/barchart.rs
--- a/src/widgets/barchart.rs
+++ b/src/widgets/barchart.rs
@@ -335,6 +335,26 @@ impl<'a> BarChart<'a> {
}
fn render_horizontal_bars(self, buf: &mut Buffer, bars_area: Rect, max: u64) {
+ // get the longest label
+ let label_size = self
+ .data
+ .iter()
+ .flat_map(|group| group.bars.iter().map(|bar| &bar.label))
+ .flatten() // bar.label is an Option<Line>
+ .map(|label| label.width())
+ .max()
+ .unwrap_or(0) as u16;
+
+ let label_x = bars_area.x;
+ let bars_area = {
+ let margin = if label_size == 0 { 0 } else { 1 };
+ Rect {
+ x: bars_area.x + label_size + margin,
+ width: bars_area.width - label_size - margin,
+ ..bars_area
+ }
+ };
+
// convert the bar values to ratatui::symbols::bar::Set
let groups: Vec<Vec<u16>> = self
.data
diff --git a/src/widgets/barchart.rs b/src/widgets/barchart.rs
--- a/src/widgets/barchart.rs
+++ b/src/widgets/barchart.rs
@@ -348,7 +368,7 @@ impl<'a> BarChart<'a> {
})
.collect();
- // print all visible bars
+ // print all visible bars, label and values
let mut bar_y = bars_area.top();
for (group_data, mut group) in groups.into_iter().zip(self.data) {
let bars = std::mem::take(&mut group.bars);
diff --git a/src/widgets/barchart.rs b/src/widgets/barchart.rs
--- a/src/widgets/barchart.rs
+++ b/src/widgets/barchart.rs
@@ -374,6 +394,12 @@ impl<'a> BarChart<'a> {
y: bar_y + (self.bar_width >> 1),
..bars_area
};
+
+ // label
+ if let Some(label) = &bar.label {
+ buf.set_line(label_x, bar_value_area.top(), label, label_size);
+ }
+
bar.render_value_with_different_styles(
buf,
bar_value_area,
| diff --git a/src/widgets/barchart.rs b/src/widgets/barchart.rs
--- a/src/widgets/barchart.rs
+++ b/src/widgets/barchart.rs
@@ -1093,6 +1119,21 @@ mod tests {
test_horizontal_bars_label_width_greater_than_bar(Some(Color::White))
}
+ /// Tests horizontal bars label are presents
+ #[test]
+ fn test_horizontal_label() {
+ let chart = BarChart::default()
+ .direction(Direction::Horizontal)
+ .bar_gap(0)
+ .data(&[("Jan", 10), ("Feb", 20), ("Mar", 5)]);
+
+ let mut buffer = Buffer::empty(Rect::new(0, 0, 10, 3));
+ chart.render(buffer.area, &mut buffer);
+ let expected = Buffer::with_lines(vec!["Jan 10█ ", "Feb 20████", "Mar 5 "]);
+
+ assert_buffer_eq!(buffer, expected);
+ }
+
#[test]
fn test_group_label_style() {
let chart: BarChart<'_> = BarChart::default()
| Horizontal Barchart does not print labels
## Description
Horizontal Barcharts were added recently (0.23.0), but labels were not implemented.
Barcharts have two label types: label and text_value (a formatted version of the value); only the latter is rendered.
## To Reproduce
```rust
let data = vec![
("Jan", 10),
("Feb", 20),
("Mar", 30),
("Apr", 40),
("May", 50),
("Jun", 60),
];
BarChart::default()
.direction(Direction::Horizontal)
.data(&data)
.render(area, buf);
```
```plain
10█████
20█████████████
30█████████████████████
40████████████████████████████
50████████████████████████████████████
60████████████████████████████████████████████
```
## Expected behavior
```plain
Jan 10█████
Feb 20█████████████
Mar 30█████████████████████
Apr 40████████████████████████████
May 50████████████████████████████████████
Jun 60████████████████████████████████████████████
```
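To get the aligned layout shown above, the renderer needs the width of the longest label up front. A minimal sketch of that computation (plain `String` padding, not the actual ratatui rendering; `chars().count()` stands in for true display-width handling):

```rust
// Sketch: compute a shared label column width from the longest label,
// then left-pad every row to it.
fn label_column_width(data: &[(&str, u64)]) -> usize {
    data.iter().map(|(label, _)| label.chars().count()).max().unwrap_or(0)
}

fn main() {
    let data = [("Jan", 10u64), ("Feb", 20), ("Mar", 30)];
    let width = label_column_width(&data);
    assert_eq!(width, 3);
    for (label, value) in data {
        // one bar cell per 10 units, just for the demo
        let bar = "█".repeat((value / 10) as usize);
        println!("{label:<width$} {value}{bar}");
    }
}
```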
## Screenshots
## Environment
version 0.23-alpha (main branch as of today)
## Additional context
| I can work on that.
I'm wondering how we should handle something like this.
```rust
let data = vec![
("January", 10),
("February", 20),
("March", 30),
("April", 40),
("May", 50),
("June", 60),
];
```
Bars should always be aligned, so we need a maximum label length somehow.
I'd suggest having an option to configure the `max_label_len` (feel free to propose a better naming). If not defined, we'd use a default value - either an arbitrary value like `3`, the longest label length or maybe the average length. | 2023-09-19T23:43:13 | 0.23 | 3bda37284781b62560cde2a7fa774211f651ec25 | [
"widgets::barchart::tests::test_horizontal_label"
] | [
"backend::test::tests::assert_buffer_panics - should panic",
"backend::test::tests::assert_buffer",
"backend::test::tests::resize",
"backend::test::tests::get_cursor",
"backend::test::tests::test_buffer_view",
"backend::test::tests::draw",
"backend::tests::clear_type_from_str",
"backend::tests::clear_... | [
"backend::test::tests::buffer_view_with_overwrites",
"buffer::tests::buffer_set_string_zero_width",
"widgets::paragraph::test::zero_width_char_at_end_of_line",
"widgets::reflow::test::line_composer_zero_width_at_end"
] | [] | 10 |
ratatui/ratatui | 323 | ratatui__ratatui-323 | [
"322"
] | 446efae185a5bb02a9ab6481542d1dc3024b66a9 | diff --git a/src/title.rs b/src/title.rs
--- a/src/title.rs
+++ b/src/title.rs
@@ -50,8 +50,8 @@ impl<'a> Default for Title<'a> {
fn default() -> Self {
Self {
content: Line::from(""),
- alignment: Some(Alignment::Left),
- position: Some(Position::Top),
+ alignment: None,
+ position: None,
}
}
}
| diff --git a/src/widgets/block.rs b/src/widgets/block.rs
--- a/src/widgets/block.rs
+++ b/src/widgets/block.rs
@@ -515,6 +515,7 @@ impl<'a> Styled for Block<'a> {
mod tests {
use super::*;
use crate::{
+ assert_buffer_eq,
layout::Rect,
style::{Color, Modifier, Stylize},
};
diff --git a/src/widgets/block.rs b/src/widgets/block.rs
--- a/src/widgets/block.rs
+++ b/src/widgets/block.rs
@@ -887,4 +888,38 @@ mod tests {
.remove_modifier(Modifier::DIM)
)
}
+
+ #[test]
+ fn title_alignment() {
+ let tests = vec![
+ (Alignment::Left, "test "),
+ (Alignment::Center, " test "),
+ (Alignment::Right, " test"),
+ ];
+ for (alignment, expected) in tests {
+ let mut buffer = Buffer::empty(Rect::new(0, 0, 8, 1));
+ Block::default()
+ .title("test")
+ .title_alignment(alignment)
+ .render(buffer.area, &mut buffer);
+ assert_buffer_eq!(buffer, Buffer::with_lines(vec![expected]));
+ }
+ }
+
+ #[test]
+ fn title_alignment_overrides_block_title_alignment() {
+ let tests = vec![
+ (Alignment::Right, Alignment::Left, "test "),
+ (Alignment::Left, Alignment::Center, " test "),
+ (Alignment::Center, Alignment::Right, " test"),
+ ];
+ for (block_title_alignment, alignment, expected) in tests {
+ let mut buffer = Buffer::empty(Rect::new(0, 0, 8, 1));
+ Block::default()
+ .title(Title::from("test").alignment(alignment))
+ .title_alignment(block_title_alignment)
+ .render(buffer.area, &mut buffer);
+ assert_buffer_eq!(buffer, Buffer::with_lines(vec![expected]));
+ }
+ }
}
diff --git a/tests/widgets_block.rs b/tests/widgets_block.rs
--- a/tests/widgets_block.rs
+++ b/tests/widgets_block.rs
@@ -295,10 +295,15 @@ fn widgets_block_title_alignment() {
let backend = TestBackend::new(15, 2);
let mut terminal = Terminal::new(backend).unwrap();
- let block = Block::default()
+ let block1 = Block::default()
.title(Title::from(Span::styled("Title", Style::default())).alignment(alignment))
.borders(borders);
+ let block2 = Block::default()
+ .title("Title")
+ .title_alignment(alignment)
+ .borders(borders);
+
let area = Rect {
x: 1,
y: 0,
diff --git a/tests/widgets_block.rs b/tests/widgets_block.rs
--- a/tests/widgets_block.rs
+++ b/tests/widgets_block.rs
@@ -306,13 +311,15 @@ fn widgets_block_title_alignment() {
height: 2,
};
- terminal
- .draw(|f| {
- f.render_widget(block, area);
- })
- .unwrap();
+ for block in [block1, block2] {
+ terminal
+ .draw(|f| {
+ f.render_widget(block, area);
+ })
+ .unwrap();
- terminal.backend().assert_buffer(&expected);
+ terminal.backend().assert_buffer(&expected);
+ }
};
// title top-left with all borders
| `title_alignment` does not work as expected
## Description
`title_alignment` does not have an effect on the `Block` title when used without `Title`.
## To Reproduce
```rs
fn ui<B: Backend>(f: &mut Frame<B>) {
let size = f.size();
let block = Block::default()
.borders(Borders::ALL)
.title("this should be on center")
.title_alignment(Alignment::Center);
f.render_widget(block, size);
}
```
```
┌this should be on center────────────────────────────────┐
│ │
│ │
│ │
│ │
│ │
│ │
└────────────────────────────────────────────────────────┘
```
## Expected behavior
```
┌────────────────this should be on center────────────────┐
│ │
│ │
│ │
│ │
│ │
│ │
└────────────────────────────────────────────────────────┘
```
## Environment
- OS: Arch Linux
- Terminal Emulator: Alacritty
- Font: Ubuntu Mono
- Crate version: git
- Backend: crossterm
## Additional context
This code works:
```rs
fn ui<B: Backend>(f: &mut Frame<B>) {
let size = f.size();
let block = Block::default()
.borders(Borders::ALL)
.title(Title::from("this should be on center").alignment(Alignment::Center))
.title_alignment(Alignment::Center);
f.render_widget(block, size);
}
```
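The patch makes `Title`'s default alignment `None` so that the block-level `title_alignment` can take over. A hedged sketch of that fallback logic (the type and function names here are illustrative, not the ratatui API):

```rust
// A title's own alignment, if set, wins; otherwise fall back to the
// block-level title_alignment.
#[derive(Clone, Copy, Debug, PartialEq)]
enum Alignment {
    Left,
    Center,
    Right,
}

fn effective_alignment(title: Option<Alignment>, block: Alignment) -> Alignment {
    title.unwrap_or(block)
}

fn main() {
    // No per-title alignment: block-level setting applies.
    assert_eq!(effective_alignment(None, Alignment::Center), Alignment::Center);
    // Per-title alignment overrides the block-level one.
    assert_eq!(
        effective_alignment(Some(Alignment::Right), Alignment::Center),
        Alignment::Right
    );
}
```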
| 2023-07-16T23:26:44 | 0.21 | 446efae185a5bb02a9ab6481542d1dc3024b66a9 | [
"widgets::block::tests::title_alignment",
"widgets_block_title_alignment"
] | [
"buffer::tests::buffer_merge3",
"buffer::tests::buffer_merge2",
"buffer::tests::buffer_diffing_multi_width_offset",
"buffer::tests::buffer_set_string",
"buffer::tests::buffer_set_string_double_width",
"buffer::tests::buffer_set_string_multi_width_overwrite",
"buffer::tests::buffer_diffing_multi_width",
... | [
"buffer::tests::buffer_set_string_zero_width",
"widgets::paragraph::test::zero_width_char_at_end_of_line",
"widgets::reflow::test::line_composer_zero_width_at_end",
"src/backend/termwiz.rs - backend::termwiz::TermwizBackend (line 27)"
] | [] | 11 | |
ratatui/ratatui | 310 | ratatui__ratatui-310 | [
"308"
] | 2889c7d08448a5bd786329fdaeedbb37f77d55b9 | diff --git a/src/backend/crossterm.rs b/src/backend/crossterm.rs
--- a/src/backend/crossterm.rs
+++ b/src/backend/crossterm.rs
@@ -12,7 +12,7 @@ use crossterm::{
execute, queue,
style::{
Attribute as CAttribute, Color as CColor, Print, SetAttribute, SetBackgroundColor,
- SetForegroundColor,
+ SetForegroundColor, SetUnderlineColor,
},
terminal::{self, Clear},
};
diff --git a/src/backend/crossterm.rs b/src/backend/crossterm.rs
--- a/src/backend/crossterm.rs
+++ b/src/backend/crossterm.rs
@@ -81,6 +81,7 @@ where
{
let mut fg = Color::Reset;
let mut bg = Color::Reset;
+ let mut underline_color = Color::Reset;
let mut modifier = Modifier::empty();
let mut last_pos: Option<(u16, u16)> = None;
for (x, y, cell) in content {
diff --git a/src/backend/crossterm.rs b/src/backend/crossterm.rs
--- a/src/backend/crossterm.rs
+++ b/src/backend/crossterm.rs
@@ -107,6 +108,11 @@ where
map_error(queue!(self.buffer, SetBackgroundColor(color)))?;
bg = cell.bg;
}
+ if cell.underline_color != underline_color {
+ let color = CColor::from(cell.underline_color);
+ map_error(queue!(self.buffer, SetUnderlineColor(color)))?;
+ underline_color = cell.underline_color;
+ }
map_error(queue!(self.buffer, Print(&cell.symbol)))?;
}
diff --git a/src/backend/crossterm.rs b/src/backend/crossterm.rs
--- a/src/backend/crossterm.rs
+++ b/src/backend/crossterm.rs
@@ -115,6 +121,7 @@ where
self.buffer,
SetForegroundColor(CColor::Reset),
SetBackgroundColor(CColor::Reset),
+ SetUnderlineColor(CColor::Reset),
SetAttribute(CAttribute::Reset)
))
}
diff --git a/src/buffer.rs b/src/buffer.rs
--- a/src/buffer.rs
+++ b/src/buffer.rs
@@ -19,6 +19,8 @@ pub struct Cell {
pub symbol: String,
pub fg: Color,
pub bg: Color,
+ #[cfg(feature = "crossterm")]
+ pub underline_color: Color,
pub modifier: Modifier,
}
diff --git a/src/buffer.rs b/src/buffer.rs
--- a/src/buffer.rs
+++ b/src/buffer.rs
@@ -52,11 +54,25 @@ impl Cell {
if let Some(c) = style.bg {
self.bg = c;
}
+ #[cfg(feature = "crossterm")]
+ if let Some(c) = style.underline_color {
+ self.underline_color = c;
+ }
self.modifier.insert(style.add_modifier);
self.modifier.remove(style.sub_modifier);
self
}
+ #[cfg(feature = "crossterm")]
+ pub fn style(&self) -> Style {
+ Style::default()
+ .fg(self.fg)
+ .bg(self.bg)
+ .underline_color(self.underline_color)
+ .add_modifier(self.modifier)
+ }
+
+ #[cfg(not(feature = "crossterm"))]
pub fn style(&self) -> Style {
Style::default()
.fg(self.fg)
diff --git a/src/buffer.rs b/src/buffer.rs
--- a/src/buffer.rs
+++ b/src/buffer.rs
@@ -69,6 +85,10 @@ impl Cell {
self.symbol.push(' ');
self.fg = Color::Reset;
self.bg = Color::Reset;
+ #[cfg(feature = "crossterm")]
+ {
+ self.underline_color = Color::Reset;
+ }
self.modifier = Modifier::empty();
}
}
diff --git a/src/buffer.rs b/src/buffer.rs
--- a/src/buffer.rs
+++ b/src/buffer.rs
@@ -79,6 +99,8 @@ impl Default for Cell {
symbol: " ".into(),
fg: Color::Reset,
bg: Color::Reset,
+ #[cfg(feature = "crossterm")]
+ underline_color: Color::Reset,
modifier: Modifier::empty(),
}
}
diff --git a/src/buffer.rs b/src/buffer.rs
--- a/src/buffer.rs
+++ b/src/buffer.rs
@@ -106,6 +128,8 @@ impl Default for Cell {
/// symbol: String::from("r"),
/// fg: Color::Red,
/// bg: Color::White,
+/// #[cfg(feature = "crossterm")]
+/// underline_color: Color::Reset,
/// modifier: Modifier::empty()
/// });
/// buf.get_mut(5, 0).set_char('x');
diff --git a/src/buffer.rs b/src/buffer.rs
--- a/src/buffer.rs
+++ b/src/buffer.rs
@@ -559,10 +583,21 @@ impl Debug for Buffer {
overwritten.push((x, &c.symbol));
}
skip = std::cmp::max(skip, c.symbol.width()).saturating_sub(1);
- let style = (c.fg, c.bg, c.modifier);
- if last_style != Some(style) {
- last_style = Some(style);
- styles.push((x, y, c.fg, c.bg, c.modifier));
+ #[cfg(feature = "crossterm")]
+ {
+ let style = (c.fg, c.bg, c.underline_color, c.modifier);
+ if last_style != Some(style) {
+ last_style = Some(style);
+ styles.push((x, y, c.fg, c.bg, c.underline_color, c.modifier));
+ }
+ }
+ #[cfg(not(feature = "crossterm"))]
+ {
+ let style = (c.fg, c.bg, c.modifier);
+ if last_style != Some(style) {
+ last_style = Some(style);
+ styles.push((x, y, c.fg, c.bg, c.modifier));
+ }
}
}
if !overwritten.is_empty() {
diff --git a/src/buffer.rs b/src/buffer.rs
--- a/src/buffer.rs
+++ b/src/buffer.rs
@@ -574,6 +609,12 @@ impl Debug for Buffer {
}
f.write_str(" ],\n styles: [\n")?;
for s in styles {
+ #[cfg(feature = "crossterm")]
+ f.write_fmt(format_args!(
+ " x: {}, y: {}, fg: {:?}, bg: {:?}, underline: {:?}, modifier: {:?},\n",
+ s.0, s.1, s.2, s.3, s.4, s.5
+ ))?;
+ #[cfg(not(feature = "crossterm"))]
f.write_fmt(format_args!(
" x: {}, y: {}, fg: {:?}, bg: {:?}, modifier: {:?},\n",
s.0, s.1, s.2, s.3, s.4
diff --git a/src/style.rs b/src/style.rs
--- a/src/style.rs
+++ b/src/style.rs
@@ -200,7 +200,9 @@ impl fmt::Debug for Modifier {
/// # use ratatui::layout::Rect;
/// let styles = [
/// Style::default().fg(Color::Blue).add_modifier(Modifier::BOLD | Modifier::ITALIC),
-/// Style::default().bg(Color::Red),
+/// Style::default().bg(Color::Red).add_modifier(Modifier::UNDERLINED),
+/// #[cfg(feature = "crossterm")]
+/// Style::default().underline_color(Color::Green),
/// Style::default().fg(Color::Yellow).remove_modifier(Modifier::ITALIC),
/// ];
/// let mut buffer = Buffer::empty(Rect::new(0, 0, 1, 1));
diff --git a/src/style.rs b/src/style.rs
--- a/src/style.rs
+++ b/src/style.rs
@@ -211,7 +213,9 @@ impl fmt::Debug for Modifier {
/// Style {
/// fg: Some(Color::Yellow),
/// bg: Some(Color::Red),
-/// add_modifier: Modifier::BOLD,
+/// #[cfg(feature = "crossterm")]
+/// underline_color: Some(Color::Green),
+/// add_modifier: Modifier::BOLD | Modifier::UNDERLINED,
/// sub_modifier: Modifier::empty(),
/// },
/// buffer.get(0, 0).style(),
diff --git a/src/style.rs b/src/style.rs
--- a/src/style.rs
+++ b/src/style.rs
@@ -237,6 +241,8 @@ impl fmt::Debug for Modifier {
/// Style {
/// fg: Some(Color::Yellow),
/// bg: Some(Color::Reset),
+/// #[cfg(feature = "crossterm")]
+/// underline_color: Some(Color::Reset),
/// add_modifier: Modifier::empty(),
/// sub_modifier: Modifier::empty(),
/// },
diff --git a/src/style.rs b/src/style.rs
--- a/src/style.rs
+++ b/src/style.rs
@@ -248,6 +254,8 @@ impl fmt::Debug for Modifier {
pub struct Style {
pub fg: Option<Color>,
pub bg: Option<Color>,
+ #[cfg(feature = "crossterm")]
+ pub underline_color: Option<Color>,
pub add_modifier: Modifier,
pub sub_modifier: Modifier,
}
diff --git a/src/style.rs b/src/style.rs
--- a/src/style.rs
+++ b/src/style.rs
@@ -263,6 +271,8 @@ impl Style {
Style {
fg: None,
bg: None,
+ #[cfg(feature = "crossterm")]
+ underline_color: None,
add_modifier: Modifier::empty(),
sub_modifier: Modifier::empty(),
}
diff --git a/src/style.rs b/src/style.rs
--- a/src/style.rs
+++ b/src/style.rs
@@ -273,6 +283,8 @@ impl Style {
Style {
fg: Some(Color::Reset),
bg: Some(Color::Reset),
+ #[cfg(feature = "crossterm")]
+ underline_color: Some(Color::Reset),
add_modifier: Modifier::empty(),
sub_modifier: Modifier::all(),
}
diff --git a/src/style.rs b/src/style.rs
--- a/src/style.rs
+++ b/src/style.rs
@@ -308,6 +320,27 @@ impl Style {
self
}
+ /// Changes the underline color. The text must be underlined with a modifier for this to work.
+ ///
+ /// This uses a non-standard ANSI escape sequence. It is supported by most terminal emulators,
+ /// but is only implemented in the crossterm backend.
+ ///
+ /// See [Wikipedia](https://en.wikipedia.org/wiki/ANSI_escape_code#SGR_(Select_Graphic_Rendition)_parameters) code `58` and `59` for more information.
+ ///
+ /// ## Examples
+ ///
+ /// ```rust
+ /// # use ratatui::style::{Color, Modifier, Style};
+ /// let style = Style::default().underline_color(Color::Blue).add_modifier(Modifier::UNDERLINED);
+ /// let diff = Style::default().underline_color(Color::Red).add_modifier(Modifier::UNDERLINED);
+ /// assert_eq!(style.patch(diff), Style::default().underline_color(Color::Red).add_modifier(Modifier::UNDERLINED));
+ /// ```
+ #[cfg(feature = "crossterm")]
+ pub const fn underline_color(mut self, color: Color) -> Style {
+ self.underline_color = Some(color);
+ self
+ }
+
/// Changes the text emphasis.
///
/// When applied, it adds the given modifier to the `Style` modifiers.
diff --git a/src/style.rs b/src/style.rs
--- a/src/style.rs
+++ b/src/style.rs
@@ -365,6 +398,11 @@ impl Style {
self.fg = other.fg.or(self.fg);
self.bg = other.bg.or(self.bg);
+ #[cfg(feature = "crossterm")]
+ {
+ self.underline_color = other.underline_color.or(self.underline_color);
+ }
+
self.add_modifier.remove(other.sub_modifier);
self.add_modifier.insert(other.add_modifier);
self.sub_modifier.remove(other.add_modifier);
diff --git a/src/text.rs b/src/text.rs
--- a/src/text.rs
+++ b/src/text.rs
@@ -141,6 +141,8 @@ impl<'a> Span<'a> {
/// style: Style {
/// fg: Some(Color::Yellow),
/// bg: Some(Color::Black),
+ /// #[cfg(feature = "crossterm")]
+ /// underline_color: None,
/// add_modifier: Modifier::empty(),
/// sub_modifier: Modifier::empty(),
/// },
diff --git a/src/text.rs b/src/text.rs
--- a/src/text.rs
+++ b/src/text.rs
@@ -150,6 +152,8 @@ impl<'a> Span<'a> {
/// style: Style {
/// fg: Some(Color::Yellow),
/// bg: Some(Color::Black),
+ /// #[cfg(feature = "crossterm")]
+ /// underline_color: None,
/// add_modifier: Modifier::empty(),
/// sub_modifier: Modifier::empty(),
/// },
diff --git a/src/text.rs b/src/text.rs
--- a/src/text.rs
+++ b/src/text.rs
@@ -159,6 +163,8 @@ impl<'a> Span<'a> {
/// style: Style {
/// fg: Some(Color::Yellow),
/// bg: Some(Color::Black),
+ /// #[cfg(feature = "crossterm")]
+ /// underline_color: None,
/// add_modifier: Modifier::empty(),
/// sub_modifier: Modifier::empty(),
/// },
diff --git a/src/text.rs b/src/text.rs
--- a/src/text.rs
+++ b/src/text.rs
@@ -168,6 +174,8 @@ impl<'a> Span<'a> {
/// style: Style {
/// fg: Some(Color::Yellow),
/// bg: Some(Color::Black),
+ /// #[cfg(feature = "crossterm")]
+ /// underline_color: None,
/// add_modifier: Modifier::empty(),
/// sub_modifier: Modifier::empty(),
/// },
diff --git a/src/widgets/gauge.rs b/src/widgets/gauge.rs
--- a/src/widgets/gauge.rs
+++ b/src/widgets/gauge.rs
@@ -279,6 +279,8 @@ impl<'a> Widget for LineGauge<'a> {
.set_style(Style {
fg: self.gauge_style.fg,
bg: None,
+ #[cfg(feature = "crossterm")]
+ underline_color: self.gauge_style.underline_color,
add_modifier: self.gauge_style.add_modifier,
sub_modifier: self.gauge_style.sub_modifier,
});
diff --git a/src/widgets/gauge.rs b/src/widgets/gauge.rs
--- a/src/widgets/gauge.rs
+++ b/src/widgets/gauge.rs
@@ -289,6 +291,8 @@ impl<'a> Widget for LineGauge<'a> {
.set_style(Style {
fg: self.gauge_style.bg,
bg: None,
+ #[cfg(feature = "crossterm")]
+ underline_color: self.gauge_style.underline_color,
add_modifier: self.gauge_style.add_modifier,
sub_modifier: self.gauge_style.sub_modifier,
});
| diff --git a/src/buffer.rs b/src/buffer.rs
--- a/src/buffer.rs
+++ b/src/buffer.rs
@@ -607,6 +648,25 @@ mod tests {
.bg(Color::Yellow)
.add_modifier(Modifier::BOLD),
);
+ #[cfg(feature = "crossterm")]
+ assert_eq!(
+ format!("{buf:?}"),
+ indoc::indoc!(
+ "
+ Buffer {
+ area: Rect { x: 0, y: 0, width: 12, height: 2 },
+ content: [
+ \"Hello World!\",
+ \"G'day World!\",
+ ],
+ styles: [
+ x: 0, y: 0, fg: Reset, bg: Reset, underline: Reset, modifier: NONE,
+ x: 0, y: 1, fg: Green, bg: Yellow, underline: Reset, modifier: BOLD,
+ ]
+ }"
+ )
+ );
+ #[cfg(not(feature = "crossterm"))]
assert_eq!(
format!("{buf:?}"),
indoc::indoc!(
| Enable setting the underline color
## Problem
There is a nonstandard ansi escape code that allows setting the underline color.
This is done with the code `58`. It uses the same arguments as the `38` and `48` codes do for setting their colors.
The underline color can be reset with the code `59`.
See [kitty documentation](https://sw.kovidgoyal.net/kitty/underlines/) for more info.
This is implemented in at least kitty, wezterm, alacritty, gnome terminal and konsole.
## Solution
Allow this to be possible
```rust
Style::default().underline(Color::Blue).add_modifier(Modifier::UNDERLINE),
```
I can make a PR if needed.
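For illustration, the escape sequences involved can be emitted directly. This is a sketch of the raw SGR 58/59 codes from the kitty docs, not the crossterm or ratatui API, and the RGB form shown assumes a truecolor-capable terminal:

```rust
// Raw SGR 58 (set underline color, RGB form) and SGR 59 (reset it).
fn set_underline_color_rgb(r: u8, g: u8, b: u8) -> String {
    format!("\x1b[58;2;{r};{g};{b}m")
}

const RESET_UNDERLINE_COLOR: &str = "\x1b[59m";

fn main() {
    let blue = set_underline_color_rgb(0, 0, 255);
    assert_eq!(blue, "\x1b[58;2;0;0;255m");
    // Underline on (SGR 4), colored underline, underline off (SGR 24),
    // then reset the underline color.
    println!("{blue}\x1b[4mblue underline\x1b[24m{RESET_UNDERLINE_COLOR}");
}
```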
| Sounds like a nice feature, feel free to submit a PR! 🐻
I've noticed only crossterm supports setting the underline color; termion and termwiz do not.
Is putting it behind the crossterm feature flag the option you prefer?
The proposed function name will conflict with https://github.com/tui-rs-revival/ratatui/pull/289, which adds an `underline()` method to Style via the Stylize trait. Perhaps `underline_color`? Might there be other extended escape codes that could be added in the future?
It would be nice to generally keep feature parity between the backends, so perhaps submitting PRs to termion / termwiz for this feature would be a good in addition to feature flagging behind the crossterm flag here.
We recently made it possible to build Ratatui with all backend feature flags enabled together in order to make it easier to develop changes to backends without having to disable / enable features, so the implementation of this might end up being a no-op on the other backends? | 2023-07-14T08:22:00 | 0.21 | 446efae185a5bb02a9ab6481542d1dc3024b66a9 | [
"buffer::tests::it_implements_debug"
] | [
"buffer::tests::buffer_diffing_empty_empty",
"buffer::tests::buffer_diffing_empty_filled",
"buffer::tests::buffer_diffing_multi_width_offset",
"buffer::tests::buffer_merge",
"buffer::tests::buffer_set_string",
"buffer::tests::buffer_with_lines",
"buffer::tests::it_translates_to_and_from_coordinates",
... | [
"buffer::tests::buffer_set_string_zero_width",
"widgets::paragraph::test::zero_width_char_at_end_of_line",
"widgets::reflow::test::line_composer_zero_width_at_end",
"src/backend/termwiz.rs - backend::termwiz::TermwizBackend (line 27)"
] | [] | 12 |
ratatui/ratatui | 911 | ratatui__ratatui-911 | [
"910"
] | d2d91f754c87458c6d07863eca20f3ea8ae319ce | diff --git a/src/widgets/scrollbar.rs b/src/widgets/scrollbar.rs
--- a/src/widgets/scrollbar.rs
+++ b/src/widgets/scrollbar.rs
@@ -454,7 +454,7 @@ impl<'a> StatefulWidget for Scrollbar<'a> {
let area = self.scollbar_area(area);
for x in area.left()..area.right() {
for y in area.top()..area.bottom() {
- if let Some((symbol, style)) = bar.next() {
+ if let Some(Some((symbol, style))) = bar.next() {
buf.set_string(x, y, symbol, style);
}
}
diff --git a/src/widgets/scrollbar.rs b/src/widgets/scrollbar.rs
--- a/src/widgets/scrollbar.rs
+++ b/src/widgets/scrollbar.rs
@@ -468,23 +468,23 @@ impl Scrollbar<'_> {
&self,
area: Rect,
state: &mut ScrollbarState,
- ) -> impl Iterator<Item = (&str, Style)> {
+ ) -> impl Iterator<Item = Option<(&str, Style)>> {
let (track_start_len, thumb_len, track_end_len) = self.part_lengths(area, state);
- let begin = self.begin_symbol.map(|s| (s, self.begin_style));
- let track = self.track_symbol.map(|s| (s, self.track_style));
- let thumb = Some((self.thumb_symbol, self.thumb_style));
- let end = self.end_symbol.map(|s| (s, self.end_style));
+ let begin = self.begin_symbol.map(|s| Some((s, self.begin_style)));
+ let track = Some(self.track_symbol.map(|s| (s, self.track_style)));
+ let thumb = Some(Some((self.thumb_symbol, self.thumb_style)));
+ let end = self.end_symbol.map(|s| Some((s, self.end_style)));
- // `<``
+ // `<`
iter::once(begin)
// `<═══`
.chain(iter::repeat(track).take(track_start_len))
// `<═══█████`
.chain(iter::repeat(thumb).take(thumb_len))
- // `<═══█████═══════``
+ // `<═══█████═══════`
.chain(iter::repeat(track).take(track_end_len))
- // `<═══█████═══════>``
+ // `<═══█████═══════>`
.chain(iter::once(end))
.flatten()
}
| diff --git a/src/widgets/scrollbar.rs b/src/widgets/scrollbar.rs
--- a/src/widgets/scrollbar.rs
+++ b/src/widgets/scrollbar.rs
@@ -769,6 +769,83 @@ mod tests {
);
}
+ #[rstest]
+ #[case("█████ ", 0, 10, "position_0")]
+ #[case(" █████ ", 1, 10, "position_1")]
+ #[case(" █████ ", 2, 10, "position_2")]
+ #[case(" █████ ", 3, 10, "position_3")]
+ #[case(" █████ ", 4, 10, "position_4")]
+ #[case(" █████ ", 5, 10, "position_5")]
+ #[case(" █████ ", 6, 10, "position_6")]
+ #[case(" █████ ", 7, 10, "position_7")]
+ #[case(" █████ ", 8, 10, "position_8")]
+ #[case(" █████", 9, 10, "position_9")]
+ #[case(" █████", 100, 10, "position_out_of_bounds")]
+ fn render_scrollbar_without_track_symbols(
+ #[case] expected: &str,
+ #[case] position: usize,
+ #[case] content_length: usize,
+ #[case] assertion_message: &str,
+ ) {
+ let size = expected.width() as u16;
+ let mut buffer = Buffer::empty(Rect::new(0, 0, size, 1));
+ let mut state = ScrollbarState::default()
+ .position(position)
+ .content_length(content_length);
+ Scrollbar::default()
+ .orientation(ScrollbarOrientation::HorizontalBottom)
+ .track_symbol(None)
+ .begin_symbol(None)
+ .end_symbol(None)
+ .render(buffer.area, &mut buffer, &mut state);
+ assert_eq!(
+ buffer,
+ Buffer::with_lines(vec![expected]),
+ "{}",
+ assertion_message
+ );
+ }
+
+ #[rstest]
+ #[case("█████-----", 0, 10, "position_0")]
+ #[case("-█████----", 1, 10, "position_1")]
+ #[case("-█████----", 2, 10, "position_2")]
+ #[case("--█████---", 3, 10, "position_3")]
+ #[case("--█████---", 4, 10, "position_4")]
+ #[case("---█████--", 5, 10, "position_5")]
+ #[case("---█████--", 6, 10, "position_6")]
+ #[case("----█████-", 7, 10, "position_7")]
+ #[case("----█████-", 8, 10, "position_8")]
+ #[case("-----█████", 9, 10, "position_9")]
+ #[case("-----█████", 100, 10, "position_out_of_bounds")]
+ fn render_scrollbar_without_track_symbols_over_content(
+ #[case] expected: &str,
+ #[case] position: usize,
+ #[case] content_length: usize,
+ #[case] assertion_message: &str,
+ ) {
+ let size = expected.width() as u16;
+ let mut buffer = Buffer::empty(Rect::new(0, 0, size, 1));
+ let width = buffer.area.width as usize;
+ let s = "";
+ Text::from(format!("{s:-^width$}")).render(buffer.area, &mut buffer);
+ let mut state = ScrollbarState::default()
+ .position(position)
+ .content_length(content_length);
+ Scrollbar::default()
+ .orientation(ScrollbarOrientation::HorizontalBottom)
+ .track_symbol(None)
+ .begin_symbol(None)
+ .end_symbol(None)
+ .render(buffer.area, &mut buffer, &mut state);
+ assert_eq!(
+ buffer,
+ Buffer::with_lines(vec![expected]),
+ "{}",
+ assertion_message
+ );
+ }
+
#[rstest]
#[case("<####---->", 0, 10, "position_0")]
#[case("<#####--->", 1, 10, "position_1")]
| Scrollbar with `.track_symbol(None)` renders incorrectly
## Description
https://github.com/ratatui-org/ratatui/assets/1813121/03646e25-aa2a-41de-ba0b-d0397862de3a
Using code like this causes the problem:
```rust
f.render_stateful_widget(Scrollbar::default().begin_symbol(None).end_symbol(None), area, &mut state);
```
The issue is that when `None` is passed, the repeated `None` values are removed by the final `flatten`, so no cells are "taken" for the track, which results in an incorrect `track_start_len`.
https://github.com/ratatui-org/ratatui/blob/d2d91f754c87458c6d07863eca20f3ea8ae319ce/src/widgets/scrollbar.rs#L472-L489
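The dropped-`None` behavior can be reproduced in isolation. This is a minimal sketch mirroring the iterator shape from the linked code and the fix in the patch, not the actual widget:

```rust
use std::iter;

fn main() {
    let track: Option<&str> = None; // .track_symbol(None)
    let thumb: Option<&str> = Some("█");

    // Buggy shape: `flatten` removes the `None` track cells entirely,
    // so the thumb is drawn starting at the first cell.
    let buggy: Vec<&str> = iter::repeat(track)
        .take(3)
        .chain(iter::repeat(thumb).take(2))
        .flatten()
        .collect();
    assert_eq!(buggy, ["█", "█"]);

    // Fixed shape (as in the patch): a second `Option` layer keeps a
    // placeholder for every track cell, preserving the thumb position.
    let fixed: Vec<Option<&str>> = iter::repeat(Some(track))
        .take(3)
        .chain(iter::repeat(Some(thumb)).take(2))
        .flatten()
        .collect();
    assert_eq!(fixed, [None, None, None, Some("█"), Some("█")]);
}
```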
| 2024-02-03T18:02:50 | 0.26 | 943c0431d968a82b23a2f31527f32e57f86f8a7c | [
"widgets::scrollbar::tests::render_scrollbar_without_track_symbols::case_04",
"widgets::scrollbar::tests::render_scrollbar_without_track_symbols::case_03",
"widgets::scrollbar::tests::render_scrollbar_without_track_symbols::case_05",
"widgets::scrollbar::tests::render_scrollbar_without_track_symbols::case_02"... | [
"backend::crossterm::tests::from_crossterm_color",
"backend::termwiz::tests::into_color::from_ansicolor",
"backend::crossterm::tests::modifier::from_crossterm_attributes",
"backend::termion::tests::from_termion_bg",
"backend::crossterm::tests::from_crossterm_content_style",
"backend::termwiz::tests::into_... | [
"backend::test::tests::buffer_view_with_overwrites",
"buffer::buffer::tests::set_string_zero_width",
"widgets::paragraph::test::zero_width_char_at_end_of_line",
"widgets::reflow::test::line_composer_zero_width_at_end"
] | [] | 13 | |
ron-rs/ron | 114 | ron-rs__ron-114 | [
"113"
] | 1d86009a18c62600f811b8596f585f2e02f5f1dc | diff --git a/src/de/mod.rs b/src/de/mod.rs
--- a/src/de/mod.rs
+++ b/src/de/mod.rs
@@ -122,7 +122,7 @@ impl<'de, 'a> de::Deserializer<'de> for &'a mut Deserializer<'de> {
b'[' => self.deserialize_seq(visitor),
b'{' => self.deserialize_map(visitor),
b'0'...b'9' | b'+' | b'-' | b'.' => self.deserialize_f64(visitor),
- b'"' => self.deserialize_string(visitor),
+ b'"' | b'r' => self.deserialize_string(visitor),
b'\'' => self.deserialize_char(visitor),
other => self.bytes.err(ParseError::UnexpectedByte(other as char)),
}
diff --git a/src/parse.rs b/src/parse.rs
--- a/src/parse.rs
+++ b/src/parse.rs
@@ -277,9 +277,18 @@ impl<'a> Bytes<'a> {
}
pub fn identifier(&mut self) -> Result<&'a [u8]> {
- if IDENT_FIRST.contains(&self.peek_or_eof()?) {
- let bytes = self.next_bytes_contained_in(IDENT_CHAR);
+ let next = self.peek_or_eof()?;
+ if IDENT_FIRST.contains(&next) {
+ // If the next two bytes signify the start of a raw string literal,
+ // return an error.
+ if next == b'r' {
+ let second = self.bytes.get(1).ok_or(self.error(ParseError::Eof))?;
+ if *second == b'"' || *second == b'#' {
+ return self.err(ParseError::ExpectedIdentifier);
+ }
+ }
+ let bytes = self.next_bytes_contained_in(IDENT_CHAR);
let ident = &self.bytes[..bytes];
let _ = self.advance(bytes);
| diff --git /dev/null b/tests/value.rs
new file mode 100644
--- /dev/null
+++ b/tests/value.rs
@@ -0,0 +1,65 @@
+extern crate ron;
+
+use std::collections::BTreeMap;
+
+use ron::value::{Number, Value};
+
+#[test]
+fn bool() {
+ assert_eq!(Value::from_str("true"), Ok(Value::Bool(true)));
+ assert_eq!(Value::from_str("false"), Ok(Value::Bool(false)));
+}
+
+#[test]
+fn char() {
+ assert_eq!(Value::from_str("'a'"), Ok(Value::Char('a')));
+}
+
+#[test]
+fn map() {
+ let mut map = BTreeMap::new();
+ map.insert(Value::Char('a'), Value::Number(Number::new(1f64)));
+ map.insert(Value::Char('b'), Value::Number(Number::new(2f64)));
+ assert_eq!(Value::from_str("{ 'a': 1, 'b': 2 }"), Ok(Value::Map(map)));
+}
+
+#[test]
+fn number() {
+ assert_eq!(Value::from_str("42"), Ok(Value::Number(Number::new(42f64))));
+ assert_eq!(Value::from_str("3.1415"), Ok(Value::Number(Number::new(3.1415f64))));
+}
+
+#[test]
+fn option() {
+ let opt = Some(Box::new(Value::Char('c')));
+ assert_eq!(Value::from_str("Some('c')"), Ok(Value::Option(opt)));
+}
+
+#[test]
+fn string() {
+ let normal = "\"String\"";
+ assert_eq!(Value::from_str(normal), Ok(Value::String("String".into())));
+
+ let raw = "r\"Raw String\"";
+ assert_eq!(Value::from_str(raw), Ok(Value::String("Raw String".into())));
+
+ let raw_hashes = "r#\"Raw String\"#";
+ assert_eq!(Value::from_str(raw_hashes), Ok(Value::String("Raw String".into())));
+
+ let raw_escaped = "r##\"Contains \"#\"##";
+ assert_eq!(Value::from_str(raw_escaped), Ok(Value::String("Contains \"#".into())));
+
+ let raw_multi_line = "r\"Multi\nLine\"";
+ assert_eq!(Value::from_str(raw_multi_line), Ok(Value::String("Multi\nLine".into())));
+}
+
+#[test]
+fn seq() {
+ let seq = vec![Value::Number(Number::new(1f64)), Value::Number(Number::new(2f64))];
+ assert_eq!(Value::from_str("[1, 2]"), Ok(Value::Seq(seq)));
+}
+
+#[test]
+fn unit() {
+ assert_eq!(Value::from_str("()"), Ok(Value::Unit));
+}
| Forgot to hook up raw string literals with deserialize_any
I made a mistake in #112 where I forgot to update [the appropriate branch](https://github.com/ron-rs/ron/blob/master/src/de/mod.rs#L125) in `de/mod.rs` to work with raw string literals. The match pattern should be `b'"' | b'r'` in order for raw string literal deserialization to work in `deserialize_any`. I will prepare a PR to fix this.
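The one-byte lookahead in question can be illustrated with a small std-only sketch (hypothetical names; the real dispatch lives in `de/mod.rs`): a leading `r`, which starts a raw string literal, must take the same branch as a leading `"`.

```rust
// Hypothetical sketch of the `deserialize_any` dispatch on the first byte of
// the input. This is not the actual RON code, only the shape of the fix.
fn classify(first: u8) -> &'static str {
    match first {
        b'[' => "seq",
        b'{' => "map",
        b'0'..=b'9' | b'+' | b'-' | b'.' => "number",
        b'"' | b'r' => "string", // the fix: `r` also starts a string
        b'\'' => "char",
        _ => "unknown",
    }
}
```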
| 2018-06-06T22:32:54 | 0.2 | 1d86009a18c62600f811b8596f585f2e02f5f1dc | [
"string"
] | [
"de::tests::expected_attribute",
"de::tests::expected_attribute_end",
"de::tests::forgot_apostrophes",
"de::tests::implicit_some",
"de::tests::invalid_attribute",
"de::tests::multiple_attributes",
"de::tests::test_array",
"de::tests::test_char",
"de::tests::test_comment",
"de::tests::test_empty_st... | [
"test_nul_in_string"
] | [] | 14 | |
ron-rs/ron | 112 | ron-rs__ron-112 | [
"111"
] | 82cc3a2015f3c5f0de328ce0fb6629d807f3cd47 | diff --git a/src/parse.rs b/src/parse.rs
--- a/src/parse.rs
+++ b/src/parse.rs
@@ -342,11 +342,17 @@ impl<'a> Bytes<'a> {
}
pub fn string(&mut self) -> Result<ParsedStr> {
- use std::iter::repeat;
-
- if !self.consume("\"") {
- return self.err(ParseError::ExpectedString);
+ if self.consume("\"") {
+ self.escaped_string()
+ } else if self.consume("r") {
+ self.raw_string()
+ } else {
+ self.err(ParseError::ExpectedString)
}
+ }
+
+ fn escaped_string(&mut self) -> Result<ParsedStr> {
+ use std::iter::repeat;
let (i, end_or_escape) = self.bytes
.iter()
| diff --git a/src/de/tests.rs b/src/de/tests.rs
--- a/src/de/tests.rs
+++ b/src/de/tests.rs
@@ -93,8 +93,19 @@ fn test_map() {
#[test]
fn test_string() {
let s: String = from_str("\"String\"").unwrap();
-
assert_eq!("String", s);
+
+ let raw: String = from_str("r\"String\"").unwrap();
+ assert_eq!("String", raw);
+
+ let raw_hashes: String = from_str("r#\"String\"#").unwrap();
+ assert_eq!("String", raw_hashes);
+
+ let raw_hashes_multiline: String = from_str("r#\"String with\nmultiple\nlines\n\"#").unwrap();
+ assert_eq!("String with\nmultiple\nlines\n", raw_hashes_multiline);
+
+ let raw_hashes_quote: String = from_str("r##\"String with \"#\"##").unwrap();
+ assert_eq!("String with \"#", raw_hashes_quote);
}
#[test]
diff --git a/src/parse.rs b/src/parse.rs
--- a/src/parse.rs
+++ b/src/parse.rs
@@ -398,6 +404,30 @@ impl<'a> Bytes<'a> {
}
}
+ fn raw_string(&mut self) -> Result<ParsedStr> {
+ let num_hashes = self.bytes.iter().take_while(|&&b| b == b'#').count();
+ let hashes = &self.bytes[..num_hashes];
+ let _ = self.advance(num_hashes);
+
+ if !self.consume("\"") {
+ return self.err(ParseError::ExpectedString);
+ }
+
+ let ending = [&[b'"'], hashes].concat();
+ let i = self.bytes
+ .windows(num_hashes + 1)
+ .position(|window| window == ending.as_slice())
+ .ok_or(self.error(ParseError::ExpectedStringEnd))?;
+
+ let s = from_utf8(&self.bytes[..i]).map_err(|e| self.error(e.into()))?;
+
+ // Advance by the number of bytes of the string
+ // + `num_hashes` + 1 for the `"`.
+ let _ = self.advance(i + num_hashes + 1);
+
+ Ok(ParsedStr::Slice(s))
+ }
+
fn test_for(&self, s: &str) -> bool {
s.bytes()
.enumerate()
| Support for multi-line string literals
Will RON ever support multi-line string literals? This is a feature that is missing from JSON, but is present in YAML, TOML, and in Rust proper, like so:
```yaml
# YAML multi-line string literals
key: >
This is a very long sentence
that spans several lines in the YAML
but which will be rendered as a string
without carriage returns.
```
```toml
# TOML multi-line string literals
key = '''
This is a very long sentence
that spans several lines in the TOML
but which will be rendered as a string
without carriage returns.
'''
```
```rust
// Rust multi-line string literals
let key = r#"
This is a very long sentence
that spans several lines in Rust
but which will be rendered as a string
without carriage returns.
"#;
let key = r##"
This is a sentence which happens
to contain a '#' character inside.
"##;
```
This could be useful for storing localized strings for dialogue boxes, for example, or other complex strings that are too tedious and error-prone to escape by hand. Here is some pseudo-RON syntax showing how this feature could be used:
```ron
System(
arch: "x86_64-linux",
shell: "/usr/bin/bash",
packages: ["firefox", "git", "rustup"],
startup: r##"
#!/usr/bin/bash
export PS1="..."
echo "Done booting!"
"##,
)
```
Given that RON already mimics Rust's notation quite well, adopting its syntax for raw string literals would make this feature feel right at home. See the [official reference page](https://doc.rust-lang.org/reference/tokens.html#raw-string-literals) and [Rust By Example](https://rustbyexample.com/std/str.html#literals-and-escapes) for details.
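As a rough illustration of how such literals can be parsed (a std-only sketch, not the actual RON implementation): after the leading `r`, count the `#` characters, expect a `"`, then scan for the first `"` followed by the same number of `#`s.

```rust
// Hypothetical sketch of parsing a Rust-style raw string literal.
// Returns the literal's contents and the remaining input, or None on error.
fn parse_raw_string(input: &str) -> Option<(&str, &str)> {
    let rest = input.strip_prefix('r')?;
    let num_hashes = rest.bytes().take_while(|&b| b == b'#').count();
    let rest = rest[num_hashes..].strip_prefix('"')?;
    // The literal ends at a `"` followed by `num_hashes` `#` characters.
    let ending: String = std::iter::once('"')
        .chain(std::iter::repeat('#').take(num_hashes))
        .collect();
    let end = rest.find(ending.as_str())?;
    Some((&rest[..end], &rest[end + ending.len()..]))
}
```

Because no escape processing happens between the delimiters, the contents (including newlines and embedded `"#` sequences) come back untouched.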
| Sounds like a useful feature to me @ebkalderon! Do you want to work on this?
@torkleyy Sure! I can try. | 2018-06-06T00:06:05 | 0.2 | 1d86009a18c62600f811b8596f585f2e02f5f1dc | [
"de::tests::test_string"
] | [
"de::tests::expected_attribute",
"de::tests::expected_attribute_end",
"de::tests::forgot_apostrophes",
"de::tests::invalid_attribute",
"de::tests::multiple_attributes",
"de::tests::implicit_some",
"de::tests::test_array",
"de::tests::test_char",
"de::tests::test_comment",
"de::tests::test_enum",
... | [
"test_nul_in_string"
] | [] | 15 |
ron-rs/ron | 108 | ron-rs__ron-108 | [
"107"
] | 5efdf123de373ec466b3dbfd0c61a4f6991a0127 | diff --git a/Cargo.toml b/Cargo.toml
--- a/Cargo.toml
+++ b/Cargo.toml
@@ -1,6 +1,6 @@
[package]
name = "ron"
-version = "0.2.1"
+version = "0.2.2"
license = "MIT/Apache-2.0"
keywords = ["parser", "serde", "serialization"]
authors = [
diff --git a/src/de/mod.rs b/src/de/mod.rs
--- a/src/de/mod.rs
+++ b/src/de/mod.rs
@@ -578,6 +578,8 @@ impl<'de, 'a> de::VariantAccess<'de> for Enum<'a, 'de> {
self.de.bytes.skip_ws()?;
if self.de.bytes.consume("(") {
+ self.de.bytes.skip_ws()?;
+
let val = seed.deserialize(&mut *self.de)?;
self.de.bytes.comma()?;
| diff --git a/src/de/tests.rs b/src/de/tests.rs
--- a/src/de/tests.rs
+++ b/src/de/tests.rs
@@ -258,3 +258,8 @@ fn implicit_some() {
// Not concise
assert_eq!(de::<Option<Option<char>>>("None"), None);
}
+
+#[test]
+fn ws_tuple_newtype_variant() {
+ assert_eq!(Ok(MyEnum::B(true)), from_str("B ( \n true \n ) "));
+}
diff --git a/tests/roundtrip.rs b/tests/roundtrip.rs
--- a/tests/roundtrip.rs
+++ b/tests/roundtrip.rs
@@ -79,3 +79,51 @@ fn roundtrip_pretty() {
assert_eq!(Ok(value), deserial);
}
+
+#[test]
+fn roundtrip_sep_tuple_members() {
+ #[derive(Debug, Deserialize, PartialEq, Serialize)]
+ pub enum FileOrMem {
+ File(String),
+ Memory,
+ }
+
+ #[derive(Debug, Deserialize, PartialEq, Serialize)]
+ struct Both {
+ a: Struct,
+ b: FileOrMem,
+ }
+
+ let a = Struct {
+ tuple: ((), NewType(0.5), TupleStruct(UnitStruct, -5)),
+ vec: vec![None, Some(UnitStruct)],
+ map: vec![
+ (Key(5), Enum::Unit),
+ (Key(6), Enum::Bool(false)),
+ (Key(7), Enum::Bool(true)),
+ (Key(9), Enum::Chars('x', "".to_string())),
+ ].into_iter()
+ .collect(),
+ };
+ let b = FileOrMem::File("foo".to_owned());
+
+ let value = Both {
+ a,
+ b,
+ };
+
+ let pretty = ron::ser::PrettyConfig {
+ depth_limit: !0,
+ new_line: "\n".to_owned(),
+ indentor: " ".to_owned(),
+ separate_tuple_members: true,
+ enumerate_arrays: false,
+ };
+ let serial = ron::ser::to_string_pretty(&value, pretty).unwrap();
+
+ println!("Serialized: {}", serial);
+
+ let deserial = ron::de::from_str(&serial);
+
+ assert_eq!(Ok(value), deserial);
+}
| `to_string` produces text that `from_str` does not accept (with tuple variants)
```
# Cargo.toml
ron = "0.2.1"
```
---
Given the following type:
```rust
pub enum FileOrMem {
File(String),
Memory,
}
```
`ser::to_string` produces the following RON:
```
data: Node (
location: Location (
file_or_mem: File(
"foo/bar"
),
column: 1,
line: 1
),
key: "Player",
value: None,
comment: None
)
```
Which fails deserialization with:
```
Parser(ExpectedString, Position { col: 39, line: 19 })
```
The fix, IME, is to change the text to be inline. But this should be fixed in parsing IMO (what I mean is that this syntax is reasonable and should be accepted).
---
For the sake of completeness, this error is not restricted to string literals on a new line.
It also occurs if there is a space between the `(` and `"`.
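The eventual fix in the patch above boils down to skipping whitespace after the opening `(` before reading the value. A minimal std-only sketch of such a whitespace skip over a byte cursor (hypothetical helper, not RON's API):

```rust
// Hypothetical sketch of whitespace tolerance: advance a byte cursor past any
// ASCII whitespace (spaces, tabs, newlines) so that `File(\n "foo"\n)` is
// treated the same as `File("foo")`.
fn skip_ws(cursor: &mut &[u8]) {
    while let Some((&b, rest)) = cursor.split_first() {
        if b.is_ascii_whitespace() {
            *cursor = rest;
        } else {
            break;
        }
    }
}
```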
| Was this fixed in https://github.com/ron-rs/ron/commit/239cb7919ff06abdf2bd40514b16f39d6c3786f4?
Possibly, at least there was a similar bug.
Or in https://github.com/ron-rs/ron/commit/713284f17ab1b0883f0206c50e08d2c88b80e061 | 2018-05-20T12:41:47 | 0.2 | 1d86009a18c62600f811b8596f585f2e02f5f1dc | [
"de::tests::ws_tuple_newtype_variant"
] | [
"de::tests::expected_attribute",
"de::tests::expected_attribute_end",
"de::tests::forgot_apostrophes",
"de::tests::invalid_attribute",
"de::tests::implicit_some",
"de::tests::multiple_attributes",
"de::tests::test_array",
"de::tests::test_char",
"de::tests::test_empty_struct",
"de::tests::test_com... | [
"test_nul_in_string"
] | [] | 16 |
ron-rs/ron | 103 | ron-rs__ron-103 | [
"102"
] | a5a4b2ee21f5be0b07c55844e2689ae8a521da3b | diff --git a/src/de/mod.rs b/src/de/mod.rs
--- a/src/de/mod.rs
+++ b/src/de/mod.rs
@@ -313,6 +313,7 @@ impl<'de, 'a> de::Deserializer<'de> for &'a mut Deserializer<'de> {
self.bytes.skip_ws()?;
if self.bytes.consume("(") {
+ self.bytes.skip_ws()?;
let value = visitor.visit_newtype_struct(&mut *self)?;
self.bytes.comma()?;
| diff --git a/tests/big_struct.rs b/tests/big_struct.rs
--- a/tests/big_struct.rs
+++ b/tests/big_struct.rs
@@ -39,8 +39,12 @@ pub struct ImGuiStyleSave {
pub anti_aliased_shapes: bool,
pub curve_tessellation_tol: f32,
pub colors: ImColorsSave,
+ pub new_type: NewType,
}
+#[derive(Serialize, Deserialize)]
+pub struct NewType(i32);
+
const CONFIG: &str = "(
alpha: 1.0,
window_padding: (x: 8, y: 8),
diff --git a/tests/big_struct.rs b/tests/big_struct.rs
--- a/tests/big_struct.rs
+++ b/tests/big_struct.rs
@@ -66,6 +70,7 @@ const CONFIG: &str = "(
anti_aliased_shapes: true,
curve_tessellation_tol: 1.25,
colors: (text: 4),
+ new_type: NewType( 1 ),
ignored_field: \"Totally ignored, not causing a panic. Hopefully.\",
)";
| Parser doesn't handle whitespace ...
between `(` and `[` of a `newtype_struct` wrapping a `seq`:
```rust
#[derive(Debug, Deserialize)]
struct Newtype(Vec<String>);
fn main() {
let serialized = stringify!(
Newtype([
"a",
"b",
"c",
])
);
println!("{}", serialized);
let deserialized: Result<Newtype, _> = ron::de::from_str(serialized);
println!("{:#?}", deserialized);
let serialized = r#"
Newtype([
"a",
"b",
"c",
])
"#;
println!("{}", serialized);
let deserialized: Result<Newtype, _> = ron::de::from_str(serialized);
println!("{:#?}", deserialized);
}
```
```
Newtype ( [ "a" , "b" , "c" , ] )
Err(
Parser(
ExpectedArray,
Position {
col: 10,
line: 1
}
)
)
Newtype([
"a",
"b",
"c",
])
Ok(
Newtype(
[
"a",
"b",
"c"
]
)
)
```
(This is what I actually came here to report when I got sidetracked by the linguist+ronn history dive 😆)
| 2018-04-26T19:22:39 | 0.2 | 1d86009a18c62600f811b8596f585f2e02f5f1dc | [
"deserialize_big_struct"
] | [
"de::tests::forgot_apostrophes",
"de::tests::expected_attribute_end",
"de::tests::expected_attribute",
"de::tests::invalid_attribute",
"de::tests::multiple_attributes",
"de::tests::implicit_some",
"de::tests::test_array",
"de::tests::test_char",
"de::tests::test_empty_struct",
"de::tests::test_com... | [
"test_nul_in_string"
] | [] | 17 | |
ron-rs/ron | 80 | ron-rs__ron-80 | [
"49"
] | 83f67a083bb22c000164052d80b9a559084f3f80 | diff --git a/src/parse.rs b/src/parse.rs
--- a/src/parse.rs
+++ b/src/parse.rs
@@ -81,29 +81,52 @@ impl<'a> Bytes<'a> {
}
pub fn char(&mut self) -> Result<char> {
+ use std::cmp::min;
+
if !self.consume("'") {
return self.err(ParseError::ExpectedChar);
}
- let c = self.eat_byte()?;
+ let c = self.peek_or_eof()?;
let c = if c == b'\\' {
+ let _ = self.advance(1);
let c = self.eat_byte()?;
if c != b'\\' && c != b'\'' {
return self.err(ParseError::InvalidEscape);
}
- c
+ c as char
} else {
- c
+ // Check where the end of the char (') is and try to
+ // interpret the rest as UTF-8
+
+ let max = min(5, self.bytes.len());
+ let pos: usize = self.bytes[..max]
+ .iter()
+ .position(|&x| x == b'\'')
+ .ok_or_else(|| self.error(ParseError::ExpectedChar))?;
+ let s = from_utf8(&self.bytes[0..pos]).map_err(|e| self.error(e.into()))?;
+ let mut chars = s.chars();
+
+ let first = chars
+ .next()
+ .ok_or_else(|| self.error(ParseError::ExpectedChar))?;
+ if chars.next().is_some() {
+ return self.err(ParseError::ExpectedChar);
+ }
+
+ let _ = self.advance(pos);
+
+ first
};
if !self.consume("'") {
return self.err(ParseError::ExpectedChar);
}
- Ok(c as char)
+ Ok(c)
}
pub fn comma(&mut self) -> bool {
| diff --git /dev/null b/tests/unicode.rs
new file mode 100644
--- /dev/null
+++ b/tests/unicode.rs
@@ -0,0 +1,15 @@
+extern crate ron;
+
+use ron::de::from_str;
+
+#[test]
+fn test_char() {
+ let de: char = from_str("'Փ'").unwrap();
+ assert_eq!(de, 'Փ');
+}
+
+#[test]
+fn test_string() {
+ let de: String = from_str("\"My string: ऄ\"").unwrap();
+ assert_eq!(de, "My string: ऄ");
+}
| Properly handle unicode
I'm pretty sure unicode support isn't fully implemented, so we would need tests and full support in serializer and deserializer. Additionally, rules should go into the spec (probably in a text document in this repository, as suggested somewhere).
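One concrete gap on the deserializer side is multi-byte `char` literals: reading a single byte between the apostrophes breaks for anything outside ASCII. A std-only sketch of a UTF-8-aware approach (hypothetical, mirroring the shape of the eventual fix):

```rust
// Hypothetical sketch: find the closing apostrophe within the next few bytes
// (a char is at most 4 bytes in UTF-8), decode that slice as UTF-8, and
// require exactly one char between the quotes.
fn parse_char_literal(bytes: &[u8]) -> Option<char> {
    let rest = bytes.strip_prefix(b"'")?;
    let max = rest.len().min(5);
    let end = rest[..max].iter().position(|&b| b == b'\'')?;
    let s = std::str::from_utf8(&rest[..end]).ok()?;
    let mut chars = s.chars();
    let c = chars.next()?;
    if chars.next().is_some() {
        return None; // more than one char between the quotes
    }
    Some(c)
}
```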
| 2018-01-20T21:46:43 | 0.2 | 1d86009a18c62600f811b8596f585f2e02f5f1dc | [
"test_char"
] | [
"de::tests::expected_attribute",
"de::tests::expected_attribute_end",
"de::tests::forgot_apostrophes",
"de::tests::multiple_attributes",
"de::tests::invalid_attribute",
"de::tests::implicit_some",
"de::tests::test_array",
"de::tests::test_char",
"de::tests::test_comment",
"de::tests::test_empty_st... | [] | [] | 18 | |
ron-rs/ron | 512 | ron-rs__ron-512 | [
"511"
] | 1ff0efa8c87590b14af8a9d61e1fa7288c7455f1 | diff --git a/CHANGELOG.md b/CHANGELOG.md
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -40,6 +40,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Fix serialising reserved identifiers `true`, `false`, `Some`, `None`, `inf`[`f32`|`f64`], and `Nan`[`f32`|`f64`] ([#487](https://github.com/ron-rs/ron/pull/487))
- Disallow unclosed line comments at the end of `ron::value::RawValue` ([#489](https://github.com/ron-rs/ron/pull/489))
- Fix parsing of struct/variant names starting in `None`, `Some`, `true`, or `false` ([#499](https://github.com/ron-rs/ron/pull/499))
+- Fix deserialising owned string field names in structs, allowing deserializing into `serde_json::Value`s ([#511](https://github.com/ron-rs/ron/pull/512))
### Miscellaneous
diff --git a/src/de/id.rs b/src/de/id.rs
--- a/src/de/id.rs
+++ b/src/de/id.rs
@@ -144,11 +144,11 @@ impl<'a, 'b: 'a, 'c> de::Deserializer<'b> for &'c mut Deserializer<'a, 'b> {
Err(Error::ExpectedIdentifier)
}
- fn deserialize_string<V>(self, _: V) -> Result<V::Value>
+ fn deserialize_string<V>(self, visitor: V) -> Result<V::Value>
where
V: Visitor<'b>,
{
- Err(Error::ExpectedIdentifier)
+ self.deserialize_identifier(visitor)
}
fn deserialize_bytes<V>(self, _: V) -> Result<V::Value>
| diff --git /dev/null b/tests/511_deserialize_any_map_string_key.rs
new file mode 100644
--- /dev/null
+++ b/tests/511_deserialize_any_map_string_key.rs
@@ -0,0 +1,76 @@
+#[test]
+fn test_map_custom_deserialize() {
+ use std::collections::HashMap;
+
+ #[derive(PartialEq, Debug)]
+ struct CustomMap(HashMap<String, String>);
+
+ // Use a custom deserializer for CustomMap in order to extract String
+ // keys in the visit_map method.
+ impl<'de> serde::de::Deserialize<'de> for CustomMap {
+ fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
+ where
+ D: serde::Deserializer<'de>,
+ {
+ struct CVisitor;
+ impl<'de> serde::de::Visitor<'de> for CVisitor {
+ type Value = CustomMap;
+
+ // GRCOV_EXCL_START
+ fn expecting(&self, formatter: &mut std::fmt::Formatter) -> std::fmt::Result {
+ write!(formatter, "a map with string keys and values")
+ }
+ // GRCOV_EXCL_STOP
+
+ fn visit_map<A>(self, mut map: A) -> Result<Self::Value, A::Error>
+ where
+ A: serde::de::MapAccess<'de>,
+ {
+ let mut inner = HashMap::new();
+ while let Some((k, v)) = map.next_entry::<String, String>()? {
+ inner.insert(k, v);
+ }
+ Ok(CustomMap(inner))
+ }
+ }
+ // Note: This method will try to deserialize any value. In this test, it will
+ // invoke the visit_map method in the visitor.
+ deserializer.deserialize_any(CVisitor)
+ }
+ }
+
+ let mut map = HashMap::<String, String>::new();
+ map.insert("key1".into(), "value1".into());
+ map.insert("key2".into(), "value2".into());
+
+ let result: Result<CustomMap, _> = ron::from_str(
+ r#"(
+ key1: "value1",
+ key2: "value2",
+ )"#,
+ );
+
+ assert_eq!(result, Ok(CustomMap(map)));
+}
+
+#[test]
+fn test_ron_struct_as_json_map() {
+ let json: serde_json::Value = ron::from_str("(f1: 0, f2: 1)").unwrap();
+ assert_eq!(
+ json,
+ serde_json::Value::Object(
+ [
+ (
+ String::from("f1"),
+ serde_json::Value::Number(serde_json::Number::from(0))
+ ),
+ (
+ String::from("f2"),
+ serde_json::Value::Number(serde_json::Number::from(1))
+ ),
+ ]
+ .into_iter()
+ .collect()
+ )
+ );
+}
diff --git a/tests/non_identifier_identifier.rs b/tests/non_identifier_identifier.rs
--- a/tests/non_identifier_identifier.rs
+++ b/tests/non_identifier_identifier.rs
@@ -79,7 +79,11 @@ test_non_identifier! { test_u128 => deserialize_u128() }
test_non_identifier! { test_f32 => deserialize_f32() }
test_non_identifier! { test_f64 => deserialize_f64() }
test_non_identifier! { test_char => deserialize_char() }
-test_non_identifier! { test_string => deserialize_string() }
+// Removed due to fix for #511 - string keys are allowed.
+// test_non_identifier! { test_string => deserialize_string() }
+// See comment above. If deserialize_str is to be added, it should give the same expected result as
+// deserialize_string. deserialize_str and deserialize_string should be consistently implemented.
+// test_non_identifier! { test_str => deserialize_str() }
test_non_identifier! { test_bytes => deserialize_bytes() }
test_non_identifier! { test_byte_buf => deserialize_byte_buf() }
test_non_identifier! { test_option => deserialize_option() }
| Problem deserializing maps when deserialization target expects owned String keys
I just encountered this somewhat strange behavior. Trying to deserialize a very simple RON string into a `serde_json::Value` fails with an `ExpectedIdentifier` error:
```rust
let json: serde_json::Value = ron::from_str("(f1: 0, f2: 1)").unwrap();
```
I know that RON is not meant to be a self-describing format and supports more types than JSON, but I expected this simple structure to be deserialized correctly; the identifiers should support deserializing to string keys.
The trigger of the error is that the `serde_json::Value` type expects owned `String` keys. After some investigating, it seems that the problem is related to the `id::Deserializer` variant, which supports `deserialize_str(..)`, but NOT `deserialize_string(..)`, which I find somewhat inconsistent. These methods should give the same result; the reason for having two methods is to give hints about expected string ownership (for performance reasons).
The following patch in the `de/id.rs` file makes the code example work as expected:
```rust
// de::id.rs, line 147:
fn deserialize_string<V>(self, visitor: V) -> Result<V::Value>
where
V: Visitor<'b>,
{
// Err(Error::ExpectedIdentifier)
self.deserialize_identifier(visitor) // Same as deserialize_str behavior
}
```
| Thank you @grindvoll for reporting and investigating this issue! This sounds like an easy-enough change to me. Would you like to file a PR yourself that also adds a test with your usecase? I'm unfortunately quite swamped with uni at the moment.
Ok, I will try to file a PR with unit test as soon as possible. | 2023-10-02T22:33:22 | 0.9 | d3e9ebee2cb9dd7e1ed1ae3d6ed4bea6e05a139d | [
"test_map_custom_deserialize",
"test_ron_struct_as_json_map"
] | [
"de::tests::expected_attribute",
"de::tests::expected_attribute_end",
"de::tests::boolean_struct_name",
"de::tests::forgot_apostrophes",
"de::tests::invalid_attribute",
"de::tests::implicit_some",
"de::tests::multiple_attributes",
"de::tests::rename",
"de::tests::test_char",
"de::tests::test_comme... | [] | [] | 19 |
ron-rs/ron | 78 | ron-rs__ron-78 | [
"76"
] | cb8bd4d32727c6aadeaea561910d45d7df0a0673 | diff --git a/Cargo.toml b/Cargo.toml
--- a/Cargo.toml
+++ b/Cargo.toml
@@ -1,6 +1,6 @@
[package]
name = "ron"
-version = "0.1.5"
+version = "0.2.0"
license = "MIT/Apache-2.0"
keywords = ["parser", "serde", "serialization"]
authors = [
diff --git a/docs/grammar.md b/docs/grammar.md
--- a/docs/grammar.md
+++ b/docs/grammar.md
@@ -46,7 +46,7 @@ value = unsigned | signed | float | string | char | bool | option | list | map |
```ebnf
digit = "0" | "1" | "2" | "3" | "4" | "5" | "6" | "7" | "8" | "9";
-unsigned = digit, { digit };
+unsigned = ["0", ("x" | "b" | "o")], digit, { digit };
signed = ["+" | "-"], unsigned;
float = float_std | float_frac;
float_std = ["+" | "-"], digit, { digit }, ".", {digit}, [float_exp];
diff --git a/src/parse.rs b/src/parse.rs
--- a/src/parse.rs
+++ b/src/parse.rs
@@ -1,10 +1,11 @@
use std::fmt::{Display, Formatter, Result as FmtResult};
use std::ops::Neg;
+use std::result::Result as StdResult;
use std::str::{FromStr, from_utf8, from_utf8_unchecked};
use de::{Error, ParseError, Result};
-const DIGITS: &[u8] = b"0123456789";
+const DIGITS: &[u8] = b"0123456789ABCDEFabcdef";
const FLOAT_CHARS: &[u8] = b"0123456789.+-eE";
const IDENT_FIRST: &[u8] = b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz_";
const IDENT_CHAR: &[u8] = b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz_0123456789";
diff --git a/src/parse.rs b/src/parse.rs
--- a/src/parse.rs
+++ b/src/parse.rs
@@ -276,7 +277,7 @@ impl<'a> Bytes<'a> {
}
pub fn signed_integer<T>(&mut self) -> Result<T>
- where T: FromStr + Neg<Output=T>
+ where T: Neg<Output=T> + Num,
{
match self.peek_or_eof()? {
b'+' => {
diff --git a/src/parse.rs b/src/parse.rs
--- a/src/parse.rs
+++ b/src/parse.rs
@@ -344,14 +345,31 @@ impl<'a> Bytes<'a> {
s.bytes().enumerate().all(|(i, b)| self.bytes.get(i).map(|t| *t == b).unwrap_or(false))
}
- pub fn unsigned_integer<T>(&mut self) -> Result<T> where T: FromStr {
+ pub fn unsigned_integer<T: Num>(&mut self) -> Result<T> {
+ let base = if self.peek() == Some(b'0') {
+ match self.bytes.get(1).cloned() {
+ Some(b'x') => 16,
+ Some(b'b') => 2,
+ Some(b'o') => 8,
+ _ => 10,
+ }
+ } else {
+ 10
+ };
+
+ if base != 10 {
+ // If we have `0x45A` for example,
+ // cut it to `45A`.
+ let _ = self.advance(2);
+ }
+
let num_bytes = self.next_bytes_contained_in(DIGITS);
if num_bytes == 0 {
return self.err(ParseError::ExpectedInteger);
}
- let res = FromStr::from_str(unsafe { from_utf8_unchecked(&self.bytes[0..num_bytes]) })
+ let res = Num::from_str(unsafe { from_utf8_unchecked(&self.bytes[0..num_bytes]) }, base)
.map_err(|_| self.error(ParseError::ExpectedInteger));
let _ = self.advance(num_bytes);
diff --git a/src/parse.rs b/src/parse.rs
--- a/src/parse.rs
+++ b/src/parse.rs
@@ -474,6 +492,25 @@ impl Extensions {
}
}
+pub trait Num: Sized {
+ fn from_str(src: &str, radix: u32) -> StdResult<Self, ()>;
+}
+
+macro_rules! impl_num {
+ ($ty:ident) => {
+ impl Num for $ty {
+ fn from_str(src: &str, radix: u32) -> StdResult<Self, ()> {
+ $ty::from_str_radix(src, radix).map_err(|_| ())
+ }
+ }
+ };
+ ($($tys:ident)*) => {
+ $( impl_num!($tys); )*
+ };
+}
+
+impl_num!(u8 u16 u32 u64 i8 i16 i32 i64);
+
#[derive(Clone, Debug)]
pub enum ParsedStr<'a> {
Allocated(String),
| diff --git /dev/null b/tests/numbers.rs
new file mode 100644
--- /dev/null
+++ b/tests/numbers.rs
@@ -0,0 +1,22 @@
+extern crate ron;
+
+#[test]
+fn test_hex() {
+ assert_eq!(ron::de::from_str("0x507"), Ok(0x507));
+ assert_eq!(ron::de::from_str("0x1A5"), Ok(0x1A5));
+ assert_eq!(ron::de::from_str("0x53C537"), Ok(0x53C537));
+}
+
+#[test]
+fn test_bin() {
+ assert_eq!(ron::de::from_str("0b101"), Ok(0b101));
+ assert_eq!(ron::de::from_str("0b001"), Ok(0b001));
+ assert_eq!(ron::de::from_str("0b100100"), Ok(0b100100));
+}
+
+#[test]
+fn test_oct() {
+ assert_eq!(ron::de::from_str("0o1461"), Ok(0o1461));
+ assert_eq!(ron::de::from_str("0o051"), Ok(0o051));
+ assert_eq!(ron::de::from_str("0o150700"), Ok(0o150700));
+}
| Add hexadecimal numbers
And maybe it's a good idea to add octal and binary (`0o...` and `0b...`) notations too.
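A std-only sketch of what prefix-aware parsing could look like, using `from_str_radix` (hypothetical helper; not the actual RON implementation):

```rust
// Hypothetical sketch: dispatch on an optional `0x`/`0o`/`0b` prefix and
// parse the remaining digits in the corresponding radix.
fn parse_unsigned(s: &str) -> Option<u64> {
    let (radix, digits) = match s.get(..2) {
        Some("0x") => (16, &s[2..]),
        Some("0o") => (8, &s[2..]),
        Some("0b") => (2, &s[2..]),
        _ => (10, s),
    };
    u64::from_str_radix(digits, radix).ok()
}
```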
| Yep, @kvark mentioned this in a recent PR.
We happen to need this for gfx-warden as well. Even so, native support for `bitflags` would have been even better. Man can dream...
So I assume we use `0x` for hexadecimal and `0b` for binary? | 2018-01-19T23:37:44 | 0.1 | cb8bd4d32727c6aadeaea561910d45d7df0a0673 | [
"test_bin",
"test_hex",
"test_oct"
] | [
"de::tests::expected_attribute",
"de::tests::expected_attribute_end",
"de::tests::forgot_apostrophes",
"de::tests::invalid_attribute",
"de::tests::implicit_some",
"de::tests::multiple_attributes",
"de::tests::test_char",
"de::tests::test_array",
"de::tests::test_comment",
"de::tests::test_empty_st... | [] | [] | 20 |
ron-rs/ron | 75 | ron-rs__ron-75 | [
"55"
] | 78c7aae1306a8df5c70ad4d30ecfaaf13bacd11c | diff --git a/src/de/mod.rs b/src/de/mod.rs
--- a/src/de/mod.rs
+++ b/src/de/mod.rs
@@ -228,23 +228,28 @@ impl<'de, 'a> de::Deserializer<'de> for &'a mut Deserializer<'de> {
fn deserialize_option<V>(self, visitor: V) -> Result<V::Value>
where V: Visitor<'de>
{
- if self.bytes.consume("Some") && { self.bytes.skip_ws(); self.bytes.consume("(") } {
- self.bytes.skip_ws();
+ if self.bytes.consume("None") {
+ visitor.visit_none()
+ } else {
+ if self.bytes.exts.contains(Extensions::IMPLICIT_SOME) {
+ visitor.visit_some(&mut *self)
+ } else {
+ if self.bytes.consume("Some") && { self.bytes.skip_ws(); self.bytes.consume("(") } {
+ self.bytes.skip_ws();
- let v = visitor.visit_some(&mut *self)?;
+ let v = visitor.visit_some(&mut *self)?;
- self.bytes.skip_ws();
+ self.bytes.skip_ws();
- if self.bytes.consume(")") {
- Ok(v)
- } else {
- self.bytes.err(ParseError::ExpectedOptionEnd)
+ if self.bytes.consume(")") {
+ Ok(v)
+ } else {
+ self.bytes.err(ParseError::ExpectedOptionEnd)
+ }
+ } else {
+ self.bytes.err(ParseError::ExpectedOption)
+ }
}
-
- } else if self.bytes.consume("None") {
- visitor.visit_none()
- } else {
- self.bytes.err(ParseError::ExpectedOption)
}
}
diff --git a/src/parse.rs b/src/parse.rs
--- a/src/parse.rs
+++ b/src/parse.rs
@@ -459,6 +459,7 @@ impl<'a> Bytes<'a> {
bitflags! {
pub struct Extensions: usize {
const UNWRAP_NEWTYPES = 0x1;
+ const IMPLICIT_SOME = 0x2;
}
}
diff --git a/src/parse.rs b/src/parse.rs
--- a/src/parse.rs
+++ b/src/parse.rs
@@ -467,6 +468,7 @@ impl Extensions {
pub fn from_ident(ident: &[u8]) -> Option<Extensions> {
match ident {
b"unwrap_newtypes" => Some(Extensions::UNWRAP_NEWTYPES),
+ b"implicit_some" => Some(Extensions::IMPLICIT_SOME),
_ => None,
}
}
| diff --git a/src/de/tests.rs b/src/de/tests.rs
--- a/src/de/tests.rs
+++ b/src/de/tests.rs
@@ -209,3 +209,28 @@ fn uglified_attribute() {
assert_eq!(de, Ok(()));
}
+
+#[test]
+fn implicit_some() {
+ use serde::de::DeserializeOwned;
+
+ fn de<T: DeserializeOwned>(s: &str) -> Option<T> {
+ let enable = "#![enable(implicit_some)]\n".to_string();
+
+ from_str::<Option<T>>(&(enable + s)).unwrap()
+ }
+
+ assert_eq!(de("'c'"), Some('c'));
+ assert_eq!(de("5"), Some(5));
+ assert_eq!(de("\"Hello\""), Some("Hello".to_owned()));
+ assert_eq!(de("false"), Some(false));
+ assert_eq!(de("MyStruct(x: .4, y: .5)"), Some(MyStruct {
+ x: 0.4,
+ y: 0.5,
+ }));
+
+ assert_eq!(de::<char>("None"), None);
+
+ // Not concise
+ assert_eq!(de::<Option<Option<char>>>("None"), None);
+}
diff --git a/tests/extensions.rs b/tests/extensions.rs
--- a/tests/extensions.rs
+++ b/tests/extensions.rs
@@ -54,3 +54,31 @@ fn unwrap_newtypes() {
println!("unwrap_newtypes: {:#?}", d);
}
+
+const CONFIG_I_S: &str = "
+#![enable(implicit_some)]
+
+(
+ tuple: ((), (0.5), ((), -5)),
+ vec: [
+ None,
+ (),
+ UnitStruct,
+ None,
+ (),
+ ],
+ map: {
+ (7): Bool(true),
+ (9): Chars('x', \"\"),
+ (6): Bool(false),
+ (5): Unit,
+ },
+)
+";
+
+#[test]
+fn implicit_some() {
+ let d: Struct = ron::de::from_str(&CONFIG_I_S).expect("Failed to deserialize");
+
+ println!("implicit_some: {:#?}", d);
+}
| Ergonomics of optional fields
If I have a struct like
```rust
#[derive(Debug, Clone, Deserialize)]
pub struct Settings {
pub font: Option<PathBuf>, // <- optional field
pub other_value: u8,
}
```
I can omit the field in the `.ron` file to get `None`:
```ron
(
other_value: 0,
)
```
But I'm forced to write `Some()` around the actual value which clutters up the config:
```ron
(
font: Some("OpenSans-Regular.ttf"), // :(
// font: "OpenSans-Regular.ttf", // I want this
other_value: 0,
)
```
Seems like an ergonomic problem to me :(
See https://gitter.im/ron-rs/ron?at=59d228bd614889d47565733d
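For illustration, the desired "implicit `Some`" semantics could look like this std-only sketch for a single optional `bool` value (hypothetical helper, not RON's API): `None` stays `None`, an explicit `Some(..)` still works, and with the behavior enabled a bare value is promoted to `Some`.

```rust
// Hypothetical sketch of implicit-Some promotion for an Option<bool> value.
// Returns None on a parse error, Some(parsed) otherwise.
fn parse_opt_bool(input: &str, implicit_some: bool) -> Option<Option<bool>> {
    let input = input.trim();
    if input == "None" {
        return Some(None);
    }
    if let Some(inner) = input
        .strip_prefix("Some(")
        .and_then(|r| r.strip_suffix(')'))
    {
        return inner.trim().parse().ok().map(Some);
    }
    if implicit_some {
        // Promote a bare value to Some(value).
        return input.parse().ok().map(Some);
    }
    None // without the extension, a bare value is an error
}
```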
| @ozkriff If you don't specify a font, you'll use some fallback, right?
Correct.
```rust
#[derive(Deserialize)]
struct Settings {
#[serde(default = "default_font")]
font: String,
}
fn default_font() -> String {
"default.ttf".to_owned()
}
```
The problem in this specific case is that if there's no font in the config, I want to use embedded font:
```rust
let font = if let Some(ref fontpath) = settings.font {
new_font_from_vec(fs::load(fontpath))
} else {
let data = include_bytes!("Karla-Regular.ttf");
new_font_from_vec(data.to_vec())
};
```
@ozkriff isn't it easy to work around? set default to something like `"<embedded>"`, then check for it and load your font correspondingly
@kvark yep, that's what I'm going to do. Though it's still a hack and I think I would prefer some kind of "(configurable) option promotion" from RON or something.
@kvark Do we want an option for implicit `Some`? And if so, should it be on by default?
@torkleyy I don't see a strong case for it yet. The serde defaults should cover it mostly, and help keep `ron` simpler and more consistent.
I understand how it makes sense to work around it from the Rust side of things, but it does feel really counter-intuitive to have to wrap lots of values with Some() on the scripting side.
Even though the scripter should know whether a field is optional or not, I feel like it expects too much understanding of Rust. I've done something similar with TOML in a previous project and it handled options implicitly. I don't mean to imply that you should do it just because they did, but it just felt more like the right way to do things from the scripting side of things.
Could the parser check to see if an option is expected and wrap the value with Some() automatically?
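A sketch of what such implicit wrapping looks like from the config side — the `#![enable(implicit_some)]` pragma appears in the serializer patch below, but treat the exact file syntax here as illustrative, reusing the `font` field from the example above:

```ron
#![enable(implicit_some)]
(
    font: "OpenSans-Regular.ttf", // deserialized as Some("OpenSans-Regular.ttf")
    other_value: 0,
)
```

Omitting `font` entirely would still deserialize as `None`, so the extension only changes how present values are read.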
The current strategy was chosen because it is more concise. E.g. `None` could be `None` in Rust or `Some(None)`.
I would like the current concise scheme to be default, but I don't mind an option in the new pragmas. There are more clients who want it (@ozkriff).
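The ambiguity can be stated concretely in plain Rust (a minimal sketch, independent of ron itself): for a nested `Option`, the two values below are distinct, yet both could be spelled `None` in a file if `Some(..)` wrappers were always implicit.

```rust
fn main() {
    // With implicit Some, the token `None` becomes ambiguous for nested
    // options: it could mean the outer None or Some(None).
    let outer: Option<Option<u32>> = None;
    let inner: Option<Option<u32>> = Some(None);
    assert_ne!(outer, inner); // distinct Rust values, same textual spelling
    println!("{:?} vs {:?}", outer, inner); // prints: None vs Some(None)
}
```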
I already started working on a solution, but couldn't finish it yet.
@kvark @torkleyy It just feels like the default behavior is a little inconsistent:
- I have `optional_field: Option<String>` in Rust.
- If I omit the field in a .ron file, it accepts it as `None`, no "expected option" error message.
- If I supply a `String`, I get an "expected option" error.
Shouldn't it see that I correctly supplied a `String` and accept it (same as with `None`) since `String` and `None` are both acceptable values for `optional_field`?
You can only skip the field if you added `#[serde(skip)]`.
If missing fields become None, and existing fields become Some(val), that would mean the format won't be [self-describing](https://github.com/ron-rs/ron/issues/48) any longer, right?
I don't have a problem with that though, as it's intended to be written by humans..
But how to distinguish Some(None) from None etc? The same way as serde_json? | 2018-01-18T02:33:21 | 0.1 | cb8bd4d32727c6aadeaea561910d45d7df0a0673 | [
"de::tests::implicit_some",
"implicit_some"
] | [
"de::tests::expected_attribute",
"de::tests::forgot_apostrophes",
"de::tests::expected_attribute_end",
"de::tests::invalid_attribute",
"de::tests::multiple_attributes",
"de::tests::test_char",
"de::tests::test_array",
"de::tests::test_empty_struct",
"de::tests::test_comment",
"de::tests::test_enum... | [] | [] | 21 |
ron-rs/ron | 206 | ron-rs__ron-206 | [
"202"
] | ecdcd83b79ce9124fbf6ea1ce0611e896ec7bbe6 | diff --git a/src/de/mod.rs b/src/de/mod.rs
--- a/src/de/mod.rs
+++ b/src/de/mod.rs
@@ -5,10 +5,11 @@ pub use crate::parse::Position;
use serde::de::{self, DeserializeSeed, Deserializer as SerdeError, Visitor};
use std::{borrow::Cow, io, str};
-use self::id::IdDeserializer;
-use self::tag::TagDeserializer;
-use crate::extensions::Extensions;
-use crate::parse::{AnyNum, Bytes, ParsedStr};
+use self::{id::IdDeserializer, tag::TagDeserializer};
+use crate::{
+ extensions::Extensions,
+ parse::{AnyNum, Bytes, ParsedStr},
+};
mod id;
mod tag;
diff --git a/src/parse.rs b/src/parse.rs
--- a/src/parse.rs
+++ b/src/parse.rs
@@ -4,8 +4,10 @@ use std::{
str::{from_utf8, from_utf8_unchecked, FromStr},
};
-use crate::error::{Error, ErrorCode, Result};
-use crate::extensions::Extensions;
+use crate::{
+ error::{Error, ErrorCode, Result},
+ extensions::Extensions,
+};
const DIGITS: &[u8] = b"0123456789ABCDEFabcdef_";
const FLOAT_CHARS: &[u8] = b"0123456789.+-eE";
diff --git a/src/ser/mod.rs b/src/ser/mod.rs
--- a/src/ser/mod.rs
+++ b/src/ser/mod.rs
@@ -1,11 +1,21 @@
use serde::{ser, Deserialize, Serialize};
-use std::fmt::Write;
+use std::io;
use crate::error::{Error, Result};
use crate::extensions::Extensions;
mod value;
+/// Serializes `value` into `writer`
+pub fn to_writer<W, T>(writer: W, value: &T) -> Result<()>
+where
+ W: io::Write,
+ T: Serialize,
+{
+ let mut s = Serializer::new(writer, None, false)?;
+ value.serialize(&mut s)
+}
+
/// Serializes `value` and returns it as string.
///
/// This function does not generate any newlines or nice formatting;
diff --git a/src/ser/mod.rs b/src/ser/mod.rs
--- a/src/ser/mod.rs
+++ b/src/ser/mod.rs
@@ -14,9 +24,10 @@ pub fn to_string<T>(value: &T) -> Result<String>
where
T: Serialize,
{
- let mut s = Serializer::new(None, false);
+ let buf = Vec::new();
+ let mut s = Serializer::new(buf, None, false)?;
value.serialize(&mut s)?;
- Ok(s.output)
+ Ok(String::from_utf8(s.output).expect("Ron should be utf-8"))
}
/// Serializes `value` in the recommended RON layout in a pretty way.
diff --git a/src/ser/mod.rs b/src/ser/mod.rs
--- a/src/ser/mod.rs
+++ b/src/ser/mod.rs
@@ -24,9 +35,10 @@ pub fn to_string_pretty<T>(value: &T, config: PrettyConfig) -> Result<String>
where
T: Serialize,
{
- let mut s = Serializer::new(Some(config), false);
+ let buf = Vec::new();
+ let mut s = Serializer::new(buf, Some(config), false)?;
value.serialize(&mut s)?;
- Ok(s.output)
+ Ok(String::from_utf8(s.output).expect("Ron should be utf-8"))
}
/// Pretty serializer state
diff --git a/src/ser/mod.rs b/src/ser/mod.rs
--- a/src/ser/mod.rs
+++ b/src/ser/mod.rs
@@ -78,8 +90,9 @@ impl PrettyConfig {
}
/// Limits the pretty-formatting based on the number of indentations.
- /// I.e., with a depth limit of 5, starting with an element of depth (indentation level) 6,
- /// everything will be put into the same line, without pretty formatting.
+ /// I.e., with a depth limit of 5, starting with an element of depth
+ /// (indentation level) 6, everything will be put into the same line,
+ /// without pretty formatting.
///
/// Default: [std::usize::MAX]
pub fn with_depth_limit(mut self, depth_limit: usize) -> Self {
diff --git a/src/ser/mod.rs b/src/ser/mod.rs
--- a/src/ser/mod.rs
+++ b/src/ser/mod.rs
@@ -107,8 +120,9 @@ impl PrettyConfig {
}
/// Configures whether tuples are single- or multi-line.
- /// If set to `true`, tuples will have their fields indented and in new lines.
- /// If set to `false`, tuples will be serialized without any newlines or indentations.
+ /// If set to `true`, tuples will have their fields indented and in new
+ /// lines. If set to `false`, tuples will be serialized without any
+ /// newlines or indentations.
///
/// Default: `false`
pub fn with_separate_tuple_members(mut self, separate_tuple_members: bool) -> Self {
diff --git a/src/ser/mod.rs b/src/ser/mod.rs
--- a/src/ser/mod.rs
+++ b/src/ser/mod.rs
@@ -117,8 +131,8 @@ impl PrettyConfig {
self
}
- /// Configures whether a comment shall be added to every array element, indicating
- /// the index.
+ /// Configures whether a comment shall be added to every array element,
+ /// indicating the index.
///
/// Default: `false`
pub fn with_enumerate_arrays(mut self, enumerate_arrays: bool) -> Self {
diff --git a/src/ser/mod.rs b/src/ser/mod.rs
--- a/src/ser/mod.rs
+++ b/src/ser/mod.rs
@@ -180,29 +194,26 @@ impl Default for PrettyConfig {
///
/// You can just use `to_string` for deserializing a value.
/// If you want it pretty-printed, take a look at the `pretty` module.
-pub struct Serializer {
- output: String,
+pub struct Serializer<W: io::Write> {
+ output: W,
pretty: Option<(PrettyConfig, Pretty)>,
struct_names: bool,
is_empty: Option<bool>,
}
-impl Serializer {
+impl<W: io::Write> Serializer<W> {
/// Creates a new `Serializer`.
///
/// Most of the time you can just use `to_string` or `to_string_pretty`.
- pub fn new(config: Option<PrettyConfig>, struct_names: bool) -> Self {
- let initial_output = if let Some(conf) = &config {
+ pub fn new(mut writer: W, config: Option<PrettyConfig>, struct_names: bool) -> Result<Self> {
+ if let Some(conf) = &config {
if conf.extensions.contains(Extensions::IMPLICIT_SOME) {
- "#![enable(implicit_some)]".to_string() + &conf.new_line
- } else {
- String::new()
- }
- } else {
- String::new()
+ writer.write_all(b"#![enable(implicit_some)]")?;
+ writer.write_all(conf.new_line.as_bytes())?;
+ };
};
- Serializer {
- output: initial_output,
+ Ok(Serializer {
+ output: writer,
pretty: config.map(|conf| {
(
conf,
diff --git a/src/ser/mod.rs b/src/ser/mod.rs
--- a/src/ser/mod.rs
+++ b/src/ser/mod.rs
@@ -214,12 +225,7 @@ impl Serializer {
}),
struct_names,
is_empty: None,
- }
- }
-
- /// Consumes `self` and returns the built `String`.
- pub fn into_output_string(self) -> String {
- self.output
+ })
}
fn is_pretty(&self) -> bool {
diff --git a/src/ser/mod.rs b/src/ser/mod.rs
--- a/src/ser/mod.rs
+++ b/src/ser/mod.rs
@@ -241,65 +247,74 @@ impl Serializer {
.map_or(Extensions::empty(), |&(ref config, _)| config.extensions)
}
- fn start_indent(&mut self) {
+ fn start_indent(&mut self) -> Result<()> {
if let Some((ref config, ref mut pretty)) = self.pretty {
pretty.indent += 1;
if pretty.indent <= config.depth_limit {
let is_empty = self.is_empty.unwrap_or(false);
if !is_empty {
- self.output += &config.new_line;
+ self.output.write_all(config.new_line.as_bytes())?;
}
}
}
+ Ok(())
}
- fn indent(&mut self) {
+ fn indent(&mut self) -> io::Result<()> {
if let Some((ref config, ref pretty)) = self.pretty {
if pretty.indent <= config.depth_limit {
- self.output
- .extend((0..pretty.indent).map(|_| config.indentor.as_str()));
+ for _ in 0..pretty.indent {
+ self.output.write_all(config.indentor.as_bytes())?;
+ }
}
}
+ Ok(())
}
- fn end_indent(&mut self) {
+ fn end_indent(&mut self) -> io::Result<()> {
if let Some((ref config, ref mut pretty)) = self.pretty {
if pretty.indent <= config.depth_limit {
let is_empty = self.is_empty.unwrap_or(false);
if !is_empty {
- self.output
- .extend((1..pretty.indent).map(|_| config.indentor.as_str()));
+ for _ in 1..pretty.indent {
+ self.output.write_all(config.indentor.as_bytes())?;
+ }
}
}
pretty.indent -= 1;
self.is_empty = None;
}
+ Ok(())
}
- fn serialize_escaped_str(&mut self, value: &str) {
- let value = value.chars().flat_map(|c| c.escape_debug());
- self.output += "\"";
- self.output.extend(value);
- self.output += "\"";
+ fn serialize_escaped_str(&mut self, value: &str) -> io::Result<()> {
+ self.output.write_all(b"\"")?;
+ let mut scalar = [0u8; 4];
+ for c in value.chars().flat_map(|c| c.escape_debug()) {
+ self.output
+ .write_all(c.encode_utf8(&mut scalar).as_bytes())?;
+ }
+ self.output.write_all(b"\"")?;
+ Ok(())
}
}
-impl<'a> ser::Serializer for &'a mut Serializer {
+impl<'a, W: io::Write> ser::Serializer for &'a mut Serializer<W> {
type Error = Error;
type Ok = ();
- type SerializeMap = Self;
- type SerializeSeq = Self;
- type SerializeStruct = Self;
- type SerializeStructVariant = Self;
- type SerializeTuple = Self;
- type SerializeTupleStruct = Self;
- type SerializeTupleVariant = Self;
+ type SerializeMap = Compound<'a, W>;
+ type SerializeSeq = Compound<'a, W>;
+ type SerializeStruct = Compound<'a, W>;
+ type SerializeStructVariant = Compound<'a, W>;
+ type SerializeTuple = Compound<'a, W>;
+ type SerializeTupleStruct = Compound<'a, W>;
+ type SerializeTupleVariant = Compound<'a, W>;
fn serialize_bool(self, v: bool) -> Result<()> {
- self.output += if v { "true" } else { "false" };
+ self.output.write_all(if v { b"true" } else { b"false" })?;
Ok(())
}
diff --git a/src/ser/mod.rs b/src/ser/mod.rs
--- a/src/ser/mod.rs
+++ b/src/ser/mod.rs
@@ -321,7 +336,7 @@ impl<'a> ser::Serializer for &'a mut Serializer {
fn serialize_i128(self, v: i128) -> Result<()> {
// TODO optimize
- self.output += &v.to_string();
+ write!(self.output, "{}", v)?;
Ok(())
}
diff --git a/src/ser/mod.rs b/src/ser/mod.rs
--- a/src/ser/mod.rs
+++ b/src/ser/mod.rs
@@ -342,32 +357,32 @@ impl<'a> ser::Serializer for &'a mut Serializer {
}
fn serialize_u128(self, v: u128) -> Result<()> {
- self.output += &v.to_string();
+ write!(self.output, "{}", v)?;
Ok(())
}
fn serialize_f32(self, v: f32) -> Result<()> {
- self.output += &v.to_string();
+ write!(self.output, "{}", v)?;
Ok(())
}
fn serialize_f64(self, v: f64) -> Result<()> {
- self.output += &v.to_string();
+ write!(self.output, "{}", v)?;
Ok(())
}
fn serialize_char(self, v: char) -> Result<()> {
- self.output += "'";
+ self.output.write_all(b"'")?;
if v == '\\' || v == '\'' {
- self.output.push('\\');
+ self.output.write_all(b"\\")?;
}
- self.output.push(v);
- self.output += "'";
+ write!(self.output, "{}", v)?;
+ self.output.write_all(b"'")?;
Ok(())
}
fn serialize_str(self, v: &str) -> Result<()> {
- self.serialize_escaped_str(v);
+ self.serialize_escaped_str(v)?;
Ok(())
}
diff --git a/src/ser/mod.rs b/src/ser/mod.rs
--- a/src/ser/mod.rs
+++ b/src/ser/mod.rs
@@ -377,7 +392,7 @@ impl<'a> ser::Serializer for &'a mut Serializer {
}
fn serialize_none(self) -> Result<()> {
- self.output += "None";
+ self.output.write_all(b"None")?;
Ok(())
}
diff --git a/src/ser/mod.rs b/src/ser/mod.rs
--- a/src/ser/mod.rs
+++ b/src/ser/mod.rs
@@ -388,25 +403,25 @@ impl<'a> ser::Serializer for &'a mut Serializer {
{
let implicit_some = self.extensions().contains(Extensions::IMPLICIT_SOME);
if !implicit_some {
- self.output += "Some(";
+ self.output.write_all(b"Some(")?;
}
value.serialize(&mut *self)?;
if !implicit_some {
- self.output += ")";
+ self.output.write_all(b")")?;
}
Ok(())
}
fn serialize_unit(self) -> Result<()> {
- self.output += "()";
+ self.output.write_all(b"()")?;
Ok(())
}
fn serialize_unit_struct(self, name: &'static str) -> Result<()> {
if self.struct_names {
- self.output += name;
+ self.output.write_all(name.as_bytes())?;
Ok(())
} else {
diff --git a/src/ser/mod.rs b/src/ser/mod.rs
--- a/src/ser/mod.rs
+++ b/src/ser/mod.rs
@@ -415,7 +430,7 @@ impl<'a> ser::Serializer for &'a mut Serializer {
}
fn serialize_unit_variant(self, _: &'static str, _: u32, variant: &'static str) -> Result<()> {
- self.output += variant;
+ self.output.write_all(variant.as_bytes())?;
Ok(())
}
diff --git a/src/ser/mod.rs b/src/ser/mod.rs
--- a/src/ser/mod.rs
+++ b/src/ser/mod.rs
@@ -425,12 +440,12 @@ impl<'a> ser::Serializer for &'a mut Serializer {
T: ?Sized + Serialize,
{
if self.struct_names {
- self.output += name;
+ self.output.write_all(name.as_bytes())?;
}
- self.output += "(";
+ self.output.write_all(b"(")?;
value.serialize(&mut *self)?;
- self.output += ")";
+ self.output.write_all(b")")?;
Ok(())
}
diff --git a/src/ser/mod.rs b/src/ser/mod.rs
--- a/src/ser/mod.rs
+++ b/src/ser/mod.rs
@@ -444,41 +459,47 @@ impl<'a> ser::Serializer for &'a mut Serializer {
where
T: ?Sized + Serialize,
{
- self.output += variant;
- self.output += "(";
+ self.output.write_all(variant.as_bytes())?;
+ self.output.write_all(b"(")?;
value.serialize(&mut *self)?;
- self.output += ")";
+ self.output.write_all(b")")?;
Ok(())
}
fn serialize_seq(self, len: Option<usize>) -> Result<Self::SerializeSeq> {
- self.output += "[";
+ self.output.write_all(b"[")?;
if let Some(len) = len {
self.is_empty = Some(len == 0);
}
- self.start_indent();
+ self.start_indent()?;
if let Some((_, ref mut pretty)) = self.pretty {
pretty.sequence_index.push(0);
}
- Ok(self)
+ Ok(Compound::Map {
+ ser: self,
+ state: State::First,
+ })
}
fn serialize_tuple(self, len: usize) -> Result<Self::SerializeTuple> {
- self.output += "(";
+ self.output.write_all(b"(")?;
if self.separate_tuple_members() {
self.is_empty = Some(len == 0);
- self.start_indent();
+ self.start_indent()?;
}
- Ok(self)
+ Ok(Compound::Map {
+ ser: self,
+ state: State::First,
+ })
}
fn serialize_tuple_struct(
diff --git a/src/ser/mod.rs b/src/ser/mod.rs
--- a/src/ser/mod.rs
+++ b/src/ser/mod.rs
@@ -487,7 +508,7 @@ impl<'a> ser::Serializer for &'a mut Serializer {
len: usize,
) -> Result<Self::SerializeTupleStruct> {
if self.struct_names {
- self.output += name;
+ self.output.write_all(name.as_bytes())?;
}
self.serialize_tuple(len)
diff --git a/src/ser/mod.rs b/src/ser/mod.rs
--- a/src/ser/mod.rs
+++ b/src/ser/mod.rs
@@ -500,40 +521,49 @@ impl<'a> ser::Serializer for &'a mut Serializer {
variant: &'static str,
len: usize,
) -> Result<Self::SerializeTupleVariant> {
- self.output += variant;
- self.output += "(";
+ self.output.write_all(variant.as_bytes())?;
+ self.output.write_all(b"(")?;
if self.separate_tuple_members() {
self.is_empty = Some(len == 0);
- self.start_indent();
+ self.start_indent()?;
}
- Ok(self)
+ Ok(Compound::Map {
+ ser: self,
+ state: State::First,
+ })
}
fn serialize_map(self, len: Option<usize>) -> Result<Self::SerializeMap> {
- self.output += "{";
+ self.output.write_all(b"{")?;
if let Some(len) = len {
self.is_empty = Some(len == 0);
}
- self.start_indent();
+ self.start_indent()?;
- Ok(self)
+ Ok(Compound::Map {
+ ser: self,
+ state: State::First,
+ })
}
fn serialize_struct(self, name: &'static str, len: usize) -> Result<Self::SerializeStruct> {
if self.struct_names {
- self.output += name;
+ self.output.write_all(name.as_bytes())?;
}
- self.output += "(";
+ self.output.write_all(b"(")?;
self.is_empty = Some(len == 0);
- self.start_indent();
+ self.start_indent()?;
- Ok(self)
+ Ok(Compound::Map {
+ ser: self,
+ state: State::First,
+ })
}
fn serialize_struct_variant(
diff --git a/src/ser/mod.rs b/src/ser/mod.rs
--- a/src/ser/mod.rs
+++ b/src/ser/mod.rs
@@ -543,17 +573,20 @@ impl<'a> ser::Serializer for &'a mut Serializer {
variant: &'static str,
len: usize,
) -> Result<Self::SerializeStructVariant> {
- self.output += variant;
- self.output += "(";
+ self.output.write_all(variant.as_bytes())?;
+ self.output.write_all(b"(")?;
self.is_empty = Some(len == 0);
- self.start_indent();
+ self.start_indent()?;
- Ok(self)
+ Ok(Compound::Map {
+ ser: self,
+ state: State::First,
+ })
}
}
-impl<'a> ser::SerializeSeq for &'a mut Serializer {
+impl<'a, W: io::Write> ser::SerializeSeq for Compound<'a, W> {
type Error = Error;
type Ok = ();
diff --git a/src/ser/mod.rs b/src/ser/mod.rs
--- a/src/ser/mod.rs
+++ b/src/ser/mod.rs
@@ -561,41 +594,78 @@ impl<'a> ser::SerializeSeq for &'a mut Serializer {
where
T: ?Sized + Serialize,
{
- self.indent();
-
- value.serialize(&mut **self)?;
- self.output += ",";
-
- if let Some((ref config, ref mut pretty)) = self.pretty {
+ let ser = match self {
+ Compound::Map {
+ state: ref mut s @ State::First,
+ ser,
+ } => {
+ *s = State::Rest;
+ ser
+ }
+ Compound::Map {
+ state: State::Rest,
+ ser,
+ } => {
+ ser.output.write_all(b",")?;
+ ser
+ }
+ };
+ if let Some((ref config, ref mut pretty)) = ser.pretty {
if pretty.indent <= config.depth_limit {
if config.enumerate_arrays {
assert!(config.new_line.contains('\n'));
let index = pretty.sequence_index.last_mut().unwrap();
//TODO: when /**/ comments are supported, prepend the index
// to an element instead of appending it.
- write!(self.output, "// [{}]", index).unwrap();
+ write!(ser.output, "// [{}]", index).unwrap();
*index += 1;
}
- self.output += &config.new_line;
+ ser.output.write_all(config.new_line.as_bytes())?;
}
}
+ ser.indent()?;
+
+ value.serialize(&mut **ser)?;
Ok(())
}
fn end(self) -> Result<()> {
- self.end_indent();
+ let ser = match self {
+ Compound::Map {
+ ser,
+ state: State::Rest,
+ } => {
+ if let Some((ref config, ref mut pretty)) = ser.pretty {
+ if pretty.indent <= config.depth_limit {
+ ser.output.write_all(b",")?;
+ }
+ }
+ ser
+ }
+ Compound::Map { ser, .. } => ser,
+ };
+ ser.end_indent()?;
- if let Some((_, ref mut pretty)) = self.pretty {
+ if let Some((_, ref mut pretty)) = ser.pretty {
pretty.sequence_index.pop();
}
- self.output += "]";
+ ser.output.write_all(b"]")?;
Ok(())
}
}
-
-impl<'a> ser::SerializeTuple for &'a mut Serializer {
+pub enum State {
+ First,
+ Rest,
+}
+pub enum Compound<'a, W: io::Write> {
+ Map {
+ ser: &'a mut Serializer<W>,
+ state: State,
+ },
+}
+impl<'a, W: io::Write> ser::SerializeTuple for Compound<'a, W> {
type Error = Error;
type Ok = ();
diff --git a/src/ser/mod.rs b/src/ser/mod.rs
--- a/src/ser/mod.rs
+++ b/src/ser/mod.rs
@@ -603,42 +673,66 @@ impl<'a> ser::SerializeTuple for &'a mut Serializer {
where
T: ?Sized + Serialize,
{
- if self.separate_tuple_members() {
- self.indent();
- }
-
- value.serialize(&mut **self)?;
- self.output += ",";
-
- if let Some((ref config, ref pretty)) = self.pretty {
- if pretty.indent <= config.depth_limit {
- self.output += if self.separate_tuple_members() {
- &config.new_line
- } else {
- " "
- };
+ let ser = match self {
+ Compound::Map {
+ ser,
+ state: ref mut s @ State::First,
+ } => {
+ *s = State::Rest;
+ ser
}
+ Compound::Map { ser, .. } => {
+ ser.output.write_all(b",")?;
+ if let Some((ref config, ref pretty)) = ser.pretty {
+ if pretty.indent <= config.depth_limit {
+ ser.output.write_all(if ser.separate_tuple_members() {
+ config.new_line.as_bytes()
+ } else {
+ b" "
+ })?;
+ }
+ }
+ ser
+ }
+ };
+
+ if ser.separate_tuple_members() {
+ ser.indent()?;
}
+ value.serialize(&mut **ser)?;
+
Ok(())
}
fn end(self) -> Result<()> {
- if self.separate_tuple_members() {
- self.end_indent();
- } else if self.is_pretty() {
- self.output.pop();
- self.output.pop();
+ let ser = match self {
+ Compound::Map {
+ ser,
+ state: State::Rest,
+ } => {
+ if let Some((ref config, ref pretty)) = ser.pretty {
+ if ser.separate_tuple_members() && pretty.indent <= config.depth_limit {
+ ser.output.write_all(b",")?;
+ ser.output.write_all(config.new_line.as_bytes())?;
+ }
+ }
+ ser
+ }
+ Compound::Map { ser, .. } => ser,
+ };
+ if ser.separate_tuple_members() {
+ ser.end_indent()?;
}
- self.output += ")";
+ ser.output.write_all(b")")?;
Ok(())
}
}
// Same thing but for tuple structs.
-impl<'a> ser::SerializeTupleStruct for &'a mut Serializer {
+impl<'a, W: io::Write> ser::SerializeTupleStruct for Compound<'a, W> {
type Error = Error;
type Ok = ();
diff --git a/src/ser/mod.rs b/src/ser/mod.rs
--- a/src/ser/mod.rs
+++ b/src/ser/mod.rs
@@ -654,7 +748,7 @@ impl<'a> ser::SerializeTupleStruct for &'a mut Serializer {
}
}
-impl<'a> ser::SerializeTupleVariant for &'a mut Serializer {
+impl<'a, W: io::Write> ser::SerializeTupleVariant for Compound<'a, W> {
type Error = Error;
type Ok = ();
diff --git a/src/ser/mod.rs b/src/ser/mod.rs
--- a/src/ser/mod.rs
+++ b/src/ser/mod.rs
@@ -670,7 +764,7 @@ impl<'a> ser::SerializeTupleVariant for &'a mut Serializer {
}
}
-impl<'a> ser::SerializeMap for &'a mut Serializer {
+impl<'a, W: io::Write> ser::SerializeMap for Compound<'a, W> {
type Error = Error;
type Ok = ();
diff --git a/src/ser/mod.rs b/src/ser/mod.rs
--- a/src/ser/mod.rs
+++ b/src/ser/mod.rs
@@ -678,27 +772,45 @@ impl<'a> ser::SerializeMap for &'a mut Serializer {
where
T: ?Sized + Serialize,
{
- self.indent();
-
- key.serialize(&mut **self)
+ let ser = match self {
+ Compound::Map {
+ ser,
+ state: ref mut s @ State::First,
+ } => {
+ *s = State::Rest;
+ ser
+ }
+ Compound::Map {
+ ser,
+ state: State::Rest,
+ } => {
+ ser.output.write_all(b",")?;
+
+ if let Some((ref config, ref pretty)) = ser.pretty {
+ if pretty.indent <= config.depth_limit {
+ ser.output.write_all(config.new_line.as_bytes())?;
+ }
+ }
+ ser
+ }
+ };
+ ser.indent()?;
+ key.serialize(&mut **ser)
}
fn serialize_value<T>(&mut self, value: &T) -> Result<()>
where
T: ?Sized + Serialize,
{
- self.output += ":";
+ match self {
+ Compound::Map { ser, .. } => {
+ ser.output.write_all(b":")?;
- if self.is_pretty() {
- self.output += " ";
- }
-
- value.serialize(&mut **self)?;
- self.output += ",";
+ if ser.is_pretty() {
+ ser.output.write_all(b" ")?;
+ }
- if let Some((ref config, ref pretty)) = self.pretty {
- if pretty.indent <= config.depth_limit {
- self.output += &config.new_line;
+ value.serialize(&mut **ser)?;
}
}
diff --git a/src/ser/mod.rs b/src/ser/mod.rs
--- a/src/ser/mod.rs
+++ b/src/ser/mod.rs
@@ -706,14 +818,29 @@ impl<'a> ser::SerializeMap for &'a mut Serializer {
}
fn end(self) -> Result<()> {
- self.end_indent();
+ let ser = match self {
+ Compound::Map {
+ ser,
+ state: State::Rest,
+ } => {
+ if let Some((ref config, ref pretty)) = ser.pretty {
+ if pretty.indent <= config.depth_limit {
+ ser.output.write_all(b",")?;
+ ser.output.write_all(config.new_line.as_bytes())?;
+ }
+ }
- self.output += "}";
+ ser
+ }
+ Compound::Map { ser, .. } => ser,
+ };
+ ser.end_indent()?;
+ ser.output.write_all(b"}")?;
Ok(())
}
}
-impl<'a> ser::SerializeStruct for &'a mut Serializer {
+impl<'a, W: io::Write> ser::SerializeStruct for Compound<'a, W> {
type Error = Error;
type Ok = ();
diff --git a/src/ser/mod.rs b/src/ser/mod.rs
--- a/src/ser/mod.rs
+++ b/src/ser/mod.rs
@@ -721,36 +848,61 @@ impl<'a> ser::SerializeStruct for &'a mut Serializer {
where
T: ?Sized + Serialize,
{
- self.indent();
+ let ser = match self {
+ Compound::Map {
+ ser,
+ state: ref mut s @ State::First,
+ } => {
+ *s = State::Rest;
+ ser
+ }
+ Compound::Map { ser, .. } => {
+ ser.output.write_all(b",")?;
- self.output += key;
- self.output += ":";
+ if let Some((ref config, ref pretty)) = ser.pretty {
+ if pretty.indent <= config.depth_limit {
+ ser.output.write_all(config.new_line.as_bytes())?;
+ }
+ }
+ ser
+ }
+ };
+ ser.indent()?;
+ ser.output.write_all(key.as_bytes())?;
+ ser.output.write_all(b":")?;
- if self.is_pretty() {
- self.output += " ";
+ if ser.is_pretty() {
+ ser.output.write_all(b" ")?;
}
- value.serialize(&mut **self)?;
- self.output += ",";
-
- if let Some((ref config, ref pretty)) = self.pretty {
- if pretty.indent <= config.depth_limit {
- self.output += &config.new_line;
- }
- }
+ value.serialize(&mut **ser)?;
Ok(())
}
fn end(self) -> Result<()> {
- self.end_indent();
-
- self.output += ")";
+ let ser = match self {
+ Compound::Map {
+ ser,
+ state: State::Rest,
+ } => {
+ if let Some((ref config, ref pretty)) = ser.pretty {
+ if pretty.indent <= config.depth_limit {
+ ser.output.write_all(b",")?;
+ ser.output.write_all(config.new_line.as_bytes())?;
+ }
+ }
+ ser
+ }
+ Compound::Map { ser, .. } => ser,
+ };
+ ser.end_indent()?;
+ ser.output.write_all(b")")?;
Ok(())
}
}
-impl<'a> ser::SerializeStructVariant for &'a mut Serializer {
+impl<'a, W: io::Write> ser::SerializeStructVariant for Compound<'a, W> {
type Error = Error;
type Ok = ();
| diff --git a/src/ser/mod.rs b/src/ser/mod.rs
--- a/src/ser/mod.rs
+++ b/src/ser/mod.rs
@@ -800,7 +952,7 @@ mod tests {
fn test_struct() {
let my_struct = MyStruct { x: 4.0, y: 7.0 };
- assert_eq!(to_string(&my_struct).unwrap(), "(x:4,y:7,)");
+ assert_eq!(to_string(&my_struct).unwrap(), "(x:4,y:7)");
#[derive(Serialize)]
struct NewType(i32);
diff --git a/src/ser/mod.rs b/src/ser/mod.rs
--- a/src/ser/mod.rs
+++ b/src/ser/mod.rs
@@ -810,7 +962,7 @@ mod tests {
#[derive(Serialize)]
struct TupleStruct(f32, f32);
- assert_eq!(to_string(&TupleStruct(2.0, 5.0)).unwrap(), "(2,5,)");
+ assert_eq!(to_string(&TupleStruct(2.0, 5.0)).unwrap(), "(2,5)");
}
#[test]
diff --git a/src/ser/mod.rs b/src/ser/mod.rs
--- a/src/ser/mod.rs
+++ b/src/ser/mod.rs
@@ -823,8 +975,8 @@ mod tests {
fn test_enum() {
assert_eq!(to_string(&MyEnum::A).unwrap(), "A");
assert_eq!(to_string(&MyEnum::B(true)).unwrap(), "B(true)");
- assert_eq!(to_string(&MyEnum::C(true, 3.5)).unwrap(), "C(true,3.5,)");
- assert_eq!(to_string(&MyEnum::D { a: 2, b: 3 }).unwrap(), "D(a:2,b:3,)");
+ assert_eq!(to_string(&MyEnum::C(true, 3.5)).unwrap(), "C(true,3.5)");
+ assert_eq!(to_string(&MyEnum::D { a: 2, b: 3 }).unwrap(), "D(a:2,b:3)");
}
#[test]
diff --git a/src/ser/mod.rs b/src/ser/mod.rs
--- a/src/ser/mod.rs
+++ b/src/ser/mod.rs
@@ -834,8 +986,8 @@ mod tests {
let empty_ref: &[i32] = ∅
assert_eq!(to_string(&empty_ref).unwrap(), "[]");
- assert_eq!(to_string(&[2, 3, 4i32]).unwrap(), "(2,3,4,)");
- assert_eq!(to_string(&(&[2, 3, 4i32] as &[i32])).unwrap(), "[2,3,4,]");
+ assert_eq!(to_string(&[2, 3, 4i32]).unwrap(), "(2,3,4)");
+ assert_eq!(to_string(&(&[2, 3, 4i32] as &[i32])).unwrap(), "[2,3,4]");
}
#[test]
diff --git a/src/ser/mod.rs b/src/ser/mod.rs
--- a/src/ser/mod.rs
+++ b/src/ser/mod.rs
@@ -848,8 +1000,8 @@ mod tests {
let s = to_string(&map).unwrap();
s.starts_with("{");
- s.contains("(true,false,):4");
- s.contains("(false,false,):123");
+ s.contains("(true,false):4");
+ s.contains("(false,false):123");
s.ends_with("}");
}
diff --git a/src/ser/mod.rs b/src/ser/mod.rs
--- a/src/ser/mod.rs
+++ b/src/ser/mod.rs
@@ -875,7 +1027,7 @@ mod tests {
let small: [u8; 16] = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15];
assert_eq!(
to_string(&small).unwrap(),
- "(0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,)"
+ "(0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15)"
);
let large = vec![255u8; 64];
diff --git a/src/value.rs b/src/value.rs
--- a/src/value.rs
+++ b/src/value.rs
@@ -371,8 +371,7 @@ impl<'de> SeqAccess<'de> for Seq {
mod tests {
use super::*;
use serde::Deserialize;
- use std::collections::BTreeMap;
- use std::fmt::Debug;
+ use std::{collections::BTreeMap, fmt::Debug};
fn assert_same<'de, T>(s: &'de str)
where
diff --git a/tests/117_untagged_tuple_variant.rs b/tests/117_untagged_tuple_variant.rs
--- a/tests/117_untagged_tuple_variant.rs
+++ b/tests/117_untagged_tuple_variant.rs
@@ -54,6 +54,6 @@ enum Foo {
fn test_vessd_case() {
let foo_vec = vec![Foo::Bar(0); 5];
let foo_str = to_string(&foo_vec).unwrap();
- assert_eq!(foo_str.as_str(), "[0,0,0,0,0,]");
+ assert_eq!(foo_str.as_str(), "[0,0,0,0,0]");
assert_eq!(from_str::<Vec<Foo>>(&foo_str).unwrap(), foo_vec);
}
diff --git a/tests/123_enum_representation.rs b/tests/123_enum_representation.rs
--- a/tests/123_enum_representation.rs
+++ b/tests/123_enum_representation.rs
@@ -1,8 +1,6 @@
-use ron::de::from_str;
-use ron::ser::to_string;
+use ron::{de::from_str, ser::to_string};
use serde::{Deserialize, Serialize};
-use std::cmp::PartialEq;
-use std::fmt::Debug;
+use std::{cmp::PartialEq, fmt::Debug};
#[derive(Debug, PartialEq, Eq, Serialize, Deserialize)]
enum Inner {
diff --git a/tests/123_enum_representation.rs b/tests/123_enum_representation.rs
--- a/tests/123_enum_representation.rs
+++ b/tests/123_enum_representation.rs
@@ -73,14 +71,14 @@ fn test_externally_a_ser() {
bar: 2,
different: 3,
};
- let e = "VariantA(foo:1,bar:2,different:3,)";
+ let e = "VariantA(foo:1,bar:2,different:3)";
test_ser(&v, e);
}
#[test]
fn test_externally_b_ser() {
let v = EnumStructExternally::VariantB { foo: 1, bar: 2 };
- let e = "VariantB(foo:1,bar:2,)";
+ let e = "VariantB(foo:1,bar:2)";
test_ser(&v, e);
}
diff --git a/tests/123_enum_representation.rs b/tests/123_enum_representation.rs
--- a/tests/123_enum_representation.rs
+++ b/tests/123_enum_representation.rs
@@ -91,14 +89,14 @@ fn test_internally_a_ser() {
bar: 2,
different: 3,
};
- let e = "(type:\"VariantA\",foo:1,bar:2,different:3,)";
+ let e = "(type:\"VariantA\",foo:1,bar:2,different:3)";
test_ser(&v, e);
}
#[test]
fn test_internally_b_ser() {
let v = EnumStructInternally::VariantB { foo: 1, bar: 2 };
- let e = "(type:\"VariantB\",foo:1,bar:2,)";
+ let e = "(type:\"VariantB\",foo:1,bar:2)";
test_ser(&v, e);
}
diff --git a/tests/123_enum_representation.rs b/tests/123_enum_representation.rs
--- a/tests/123_enum_representation.rs
+++ b/tests/123_enum_representation.rs
@@ -109,14 +107,14 @@ fn test_adjacently_a_ser() {
bar: 2,
different: Inner::Foo,
};
- let e = "(type:\"VariantA\",content:(foo:1,bar:2,different:Foo,),)";
+ let e = "(type:\"VariantA\",content:(foo:1,bar:2,different:Foo))";
test_ser(&v, e);
}
#[test]
fn test_adjacently_b_ser() {
let v = EnumStructAdjacently::VariantB { foo: 1, bar: 2 };
- let e = "(type:\"VariantB\",content:(foo:1,bar:2,),)";
+ let e = "(type:\"VariantB\",content:(foo:1,bar:2))";
test_ser(&v, e);
}
diff --git a/tests/123_enum_representation.rs b/tests/123_enum_representation.rs
--- a/tests/123_enum_representation.rs
+++ b/tests/123_enum_representation.rs
@@ -127,20 +125,20 @@ fn test_untagged_a_ser() {
bar: 2,
different: 3,
};
- let e = "(foo:1,bar:2,different:3,)";
+ let e = "(foo:1,bar:2,different:3)";
test_ser(&v, e);
}
#[test]
fn test_untagged_b_ser() {
let v = EnumStructUntagged::VariantB { foo: 1, bar: 2 };
- let e = "(foo:1,bar:2,)";
+ let e = "(foo:1,bar:2)";
test_ser(&v, e);
}
#[test]
fn test_externally_a_de() {
- let s = "VariantA(foo:1,bar:2,different:3,)";
+ let s = "VariantA(foo:1,bar:2,different:3)";
let e = EnumStructExternally::VariantA {
foo: 1,
bar: 2,
diff --git a/tests/123_enum_representation.rs b/tests/123_enum_representation.rs
--- a/tests/123_enum_representation.rs
+++ b/tests/123_enum_representation.rs
@@ -151,14 +149,14 @@ fn test_externally_a_de() {
#[test]
fn test_externally_b_de() {
- let s = "VariantB(foo:1,bar:2,)";
+ let s = "VariantB(foo:1,bar:2)";
let e = EnumStructExternally::VariantB { foo: 1, bar: 2 };
test_de(s, e);
}
#[test]
fn test_internally_a_de() {
- let s = "(type:\"VariantA\",foo:1,bar:2,different:3,)";
+ let s = "(type:\"VariantA\",foo:1,bar:2,different:3)";
let e = EnumStructInternally::VariantA {
foo: 1,
bar: 2,
diff --git a/tests/123_enum_representation.rs b/tests/123_enum_representation.rs
--- a/tests/123_enum_representation.rs
+++ b/tests/123_enum_representation.rs
@@ -169,14 +167,14 @@ fn test_internally_a_de() {
#[test]
fn test_internally_b_de() {
- let s = "(type:\"VariantB\",foo:1,bar:2,)";
+ let s = "(type:\"VariantB\",foo:1,bar:2)";
let e = EnumStructInternally::VariantB { foo: 1, bar: 2 };
test_de(s, e);
}
#[test]
fn test_adjacently_a_de() {
- let s = "(type:\"VariantA\",content:(foo:1,bar:2,different:Foo,),)";
+ let s = "(type:\"VariantA\",content:(foo:1,bar:2,different:Foo))";
let e = EnumStructAdjacently::VariantA {
foo: 1,
bar: 2,
diff --git a/tests/123_enum_representation.rs b/tests/123_enum_representation.rs
--- a/tests/123_enum_representation.rs
+++ b/tests/123_enum_representation.rs
@@ -187,14 +185,14 @@ fn test_adjacently_a_de() {
#[test]
fn test_adjacently_b_de() {
- let s = "(type:\"VariantB\",content:(foo:1,bar:2,),)";
+ let s = "(type:\"VariantB\",content:(foo:1,bar:2))";
let e = EnumStructAdjacently::VariantB { foo: 1, bar: 2 };
test_de(s, e);
}
#[test]
fn test_untagged_a_de() {
- let s = "(foo:1,bar:2,different:3,)";
+ let s = "(foo:1,bar:2,different:3)";
let e = EnumStructUntagged::VariantA {
foo: 1,
bar: 2,
diff --git a/tests/123_enum_representation.rs b/tests/123_enum_representation.rs
--- a/tests/123_enum_representation.rs
+++ b/tests/123_enum_representation.rs
@@ -205,7 +203,7 @@ fn test_untagged_a_de() {
#[test]
fn test_untagged_b_de() {
- let s = "(foo:1,bar:2,)";
+ let s = "(foo:1,bar:2)";
let e = EnumStructUntagged::VariantB { foo: 1, bar: 2 };
test_de(s, e);
}
diff --git a/tests/129_indexmap.rs b/tests/129_indexmap.rs
--- a/tests/129_indexmap.rs
+++ b/tests/129_indexmap.rs
@@ -15,9 +15,9 @@ tasks: {
"-l",
"-h",
]),
- ch_dir: Some("/")
+ ch_dir: Some("/"),
),
-}
+},
)
"#;
diff --git a/tests/depth_limit.rs b/tests/depth_limit.rs
--- a/tests/depth_limit.rs
+++ b/tests/depth_limit.rs
@@ -26,12 +26,12 @@ struct Nested {
}
const EXPECTED: &str = "(
- float: (2.18,-1.1,),
- tuple: ((),false,),
- map: {8:'1',},
- nested: (a:\"a\",b:'b',),
- var: A(255,\"\",),
- array: [(),(),(),],
+ float: (2.18,-1.1),
+ tuple: ((),false),
+ map: {8:'1'},
+ nested: (a:\"a\",b:'b'),
+ var: A(255,\"\"),
+ array: [(),(),()],
)";
#[test]
diff --git a/tests/preserve_sequence.rs b/tests/preserve_sequence.rs
--- a/tests/preserve_sequence.rs
+++ b/tests/preserve_sequence.rs
@@ -1,7 +1,8 @@
-use ron::de::from_str;
-use ron::ser::{to_string_pretty, PrettyConfig};
-use serde::Deserialize;
-use serde::Serialize;
+use ron::{
+ de::from_str,
+ ser::{to_string_pretty, PrettyConfig},
+};
+use serde::{Deserialize, Serialize};
use std::collections::BTreeMap;
#[derive(Debug, Deserialize, Serialize)]
| Add function to serialize to a Writer
Bincode has [`serialize_into`](https://docs.rs/bincode/1.2.1/bincode/fn.serialize_into.html), which supports an arbitrary Writer; this would be very useful for RON as well, especially with `BufWriter`, because the data gets very large.
I'm seeing huge RAM spikes because of RON serialization: From ~90 MB baseline RAM usage to over 700 MB when it's serializing. Btw, the resulting ron file is 20 MB large.

---
In the above application, I'm serializing app state to disk every 10s.
Btw, the CPU usage is also quite high. This is the CPU & RAM usage when serializing to bincode instead. With RON, the CPU usage is almost always at ~27% (probably because it takes so long), with bincode, it only shows short spikes (to 24%) when serializing:

---
Anyway, with a function like `serialize_into`+`BufWriter` we could at least reduce the RAM spikes.
This would also allow chaining it with e.g. a Gzip encoder for streaming compression, e.g. like with bincode:
```rust
bincode::serialize_into(
GzBuilder::new()
.write(BufWriter::new(File::create(path)?), Compression::best()),
&data,
)?;
```
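The streaming shape being requested can be sketched with a toy writer-based serializer. This is only an illustration of the `serialize_into` idea using a hypothetical function, not ron's actual API:

```rust
use std::io::{self, Write};

// Toy writer-based serializer: streams a slice directly into any `Write`
// sink instead of building the whole output `String` in memory first.
// `serialize_into` here is a hypothetical name mirroring bincode's API.
fn serialize_into<W: Write>(mut writer: W, values: &[i32]) -> io::Result<()> {
    writer.write_all(b"[")?;
    for (i, value) in values.iter().enumerate() {
        if i > 0 {
            writer.write_all(b",")?;
        }
        write!(writer, "{}", value)?;
    }
    writer.write_all(b"]")
}
```

Wrapping the sink in `io::BufWriter` (or a Gzip encoder, as above) keeps only a small fixed buffer in memory, which is what bounds the RAM spike.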
| 2020-03-28T07:55:23 | 0.5 | c6873c6f8a953c5532174baa02937b00558996f5 | [
"ser::tests::test_array",
"ser::tests::test_byte_stream",
"ser::tests::test_enum",
"ser::tests::test_struct",
"test_vessd_case",
"test_externally_a_ser",
"test_externally_b_ser",
"test_internally_a_ser",
"test_internally_b_ser",
"test_untagged_a_ser",
"test_untagged_b_ser",
"depth_limit"
] | [
"de::tests::expected_attribute",
"de::tests::forgot_apostrophes",
"de::tests::expected_attribute_end",
"de::tests::invalid_attribute",
"de::tests::multiple_attributes",
"de::tests::implicit_some",
"de::tests::test_array",
"de::tests::test_any_number_precision",
"de::tests::test_byte_stream",
"de::... | [
"test_adjacently_a_de",
"test_adjacently_a_ser",
"test_adjacently_b_de",
"test_adjacently_b_ser",
"test_nul_in_string"
] | [] | 22 | |
ron-rs/ron | 150 | ron-rs__ron-150 | [
"147"
] | 64ba4a541204defb8218e75b7bea7c3a7c771226 | diff --git a/src/ser/mod.rs b/src/ser/mod.rs
--- a/src/ser/mod.rs
+++ b/src/ser/mod.rs
@@ -18,6 +18,7 @@ where
output: String::new(),
pretty: None,
struct_names: false,
+ is_empty: None,
};
value.serialize(&mut s)?;
Ok(s.output)
diff --git a/src/ser/mod.rs b/src/ser/mod.rs
--- a/src/ser/mod.rs
+++ b/src/ser/mod.rs
@@ -38,6 +39,7 @@ where
},
)),
struct_names: false,
+ is_empty: None,
};
value.serialize(&mut s)?;
Ok(s.output)
diff --git a/src/ser/mod.rs b/src/ser/mod.rs
--- a/src/ser/mod.rs
+++ b/src/ser/mod.rs
@@ -119,6 +121,7 @@ pub struct Serializer {
output: String,
pretty: Option<(PrettyConfig, Pretty)>,
struct_names: bool,
+ is_empty: Option<bool>,
}
impl Serializer {
diff --git a/src/ser/mod.rs b/src/ser/mod.rs
--- a/src/ser/mod.rs
+++ b/src/ser/mod.rs
@@ -138,6 +141,7 @@ impl Serializer {
)
}),
struct_names,
+ is_empty: None,
}
}
diff --git a/src/ser/mod.rs b/src/ser/mod.rs
--- a/src/ser/mod.rs
+++ b/src/ser/mod.rs
@@ -163,7 +167,11 @@ impl Serializer {
if let Some((ref config, ref mut pretty)) = self.pretty {
pretty.indent += 1;
if pretty.indent < config.depth_limit {
- self.output += &config.new_line;
+ let is_empty = self.is_empty.unwrap_or(false);
+
+ if !is_empty {
+ self.output += &config.new_line;
+ }
}
}
}
diff --git a/src/ser/mod.rs b/src/ser/mod.rs
--- a/src/ser/mod.rs
+++ b/src/ser/mod.rs
@@ -180,10 +188,16 @@ impl Serializer {
fn end_indent(&mut self) {
if let Some((ref config, ref mut pretty)) = self.pretty {
if pretty.indent < config.depth_limit {
- self.output
- .extend((1..pretty.indent).map(|_| config.indentor.as_str()));
+ let is_empty = self.is_empty.unwrap_or(false);
+
+ if !is_empty {
+ self.output
+ .extend((1..pretty.indent).map(|_| config.indentor.as_str()));
+ }
}
pretty.indent -= 1;
+
+ self.is_empty = None;
}
}
diff --git a/src/ser/mod.rs b/src/ser/mod.rs
--- a/src/ser/mod.rs
+++ b/src/ser/mod.rs
@@ -349,9 +363,13 @@ impl<'a> ser::Serializer for &'a mut Serializer {
Ok(())
}
- fn serialize_seq(self, _: Option<usize>) -> Result<Self::SerializeSeq> {
+ fn serialize_seq(self, len: Option<usize>) -> Result<Self::SerializeSeq> {
self.output += "[";
+ if let Some(len) = len {
+ self.is_empty = Some(len == 0);
+ }
+
self.start_indent();
if let Some((_, ref mut pretty)) = self.pretty {
diff --git a/src/ser/mod.rs b/src/ser/mod.rs
--- a/src/ser/mod.rs
+++ b/src/ser/mod.rs
@@ -361,10 +379,12 @@ impl<'a> ser::Serializer for &'a mut Serializer {
Ok(self)
}
- fn serialize_tuple(self, _: usize) -> Result<Self::SerializeTuple> {
+ fn serialize_tuple(self, len: usize) -> Result<Self::SerializeTuple> {
self.output += "(";
if self.separate_tuple_members() {
+ self.is_empty = Some(len == 0);
+
self.start_indent();
}
diff --git a/src/ser/mod.rs b/src/ser/mod.rs
--- a/src/ser/mod.rs
+++ b/src/ser/mod.rs
@@ -388,32 +408,39 @@ impl<'a> ser::Serializer for &'a mut Serializer {
_: &'static str,
_: u32,
variant: &'static str,
- _: usize,
+ len: usize,
) -> Result<Self::SerializeTupleVariant> {
self.output += variant;
self.output += "(";
if self.separate_tuple_members() {
+ self.is_empty = Some(len == 0);
+
self.start_indent();
}
Ok(self)
}
- fn serialize_map(self, _len: Option<usize>) -> Result<Self::SerializeMap> {
+ fn serialize_map(self, len: Option<usize>) -> Result<Self::SerializeMap> {
self.output += "{";
+ if let Some(len) = len {
+ self.is_empty = Some(len == 0);
+ }
+
self.start_indent();
Ok(self)
}
- fn serialize_struct(self, name: &'static str, _: usize) -> Result<Self::SerializeStruct> {
+ fn serialize_struct(self, name: &'static str, len: usize) -> Result<Self::SerializeStruct> {
if self.struct_names {
self.output += name;
}
self.output += "(";
+ self.is_empty = Some(len == 0);
self.start_indent();
Ok(self)
diff --git a/src/ser/mod.rs b/src/ser/mod.rs
--- a/src/ser/mod.rs
+++ b/src/ser/mod.rs
@@ -424,11 +451,12 @@ impl<'a> ser::Serializer for &'a mut Serializer {
_: &'static str,
_: u32,
variant: &'static str,
- _: usize,
+ len: usize,
) -> Result<Self::SerializeStructVariant> {
self.output += variant;
self.output += "(";
+ self.is_empty = Some(len == 0);
self.start_indent();
Ok(self)
| diff --git /dev/null b/tests/147_empty_sets_serialisation.rs
new file mode 100644
--- /dev/null
+++ b/tests/147_empty_sets_serialisation.rs
@@ -0,0 +1,73 @@
+use serde::{Deserialize, Serialize};
+use std::collections::HashMap;
+
+#[derive(Debug, PartialEq, Deserialize, Serialize)]
+struct UnitStruct;
+
+#[derive(Debug, PartialEq, Deserialize, Serialize)]
+struct NewType(f32);
+
+#[derive(Debug, PartialEq, Deserialize, Serialize)]
+struct TupleStruct(UnitStruct, i8);
+
+#[derive(Debug, PartialEq, Eq, Hash, Deserialize, Serialize)]
+struct Key(u32);
+
+#[derive(Debug, PartialEq, Eq, Hash, Deserialize, Serialize)]
+enum Enum {
+ Unit,
+ Bool(bool),
+ Chars(char, String),
+}
+
+#[derive(Debug, PartialEq, Deserialize, Serialize)]
+struct Struct {
+ tuple: ((), NewType, TupleStruct),
+ vec: Vec<Option<UnitStruct>>,
+ map: HashMap<Key, Enum>,
+ deep_vec: HashMap<Key, Vec<()>>,
+ deep_map: HashMap<Key, HashMap<Key, Enum>>,
+}
+
+#[test]
+fn empty_sets_arrays() {
+ let value = Struct {
+ tuple: ((), NewType(0.5), TupleStruct(UnitStruct, -5)),
+ vec: vec![],
+ map: vec![]
+ .into_iter()
+ .collect(),
+ deep_vec: vec![
+ (Key(0), vec![]),
+ ]
+ .into_iter()
+ .collect(),
+ deep_map: vec![
+ (Key(0), vec![].into_iter().collect(),),
+ ].into_iter().collect(),
+ };
+
+ let pretty = ron::ser::PrettyConfig {
+ enumerate_arrays: true,
+ ..Default::default()
+ };
+ let serial = ron::ser::to_string_pretty(&value, pretty).unwrap();
+
+ println!("Serialized: {}", serial);
+
+ assert_eq!("(
+ tuple: ((), (0.5), ((), -5)),
+ vec: [],
+ map: {},
+ deep_vec: {
+ (0): [],
+ },
+ deep_map: {
+ (0): {},
+ },
+)", serial);
+
+ let deserial = ron::de::from_str(&serial);
+
+ assert_eq!(Ok(value), deserial);
+}
| Don't split empty sets/arrays into multiple lines
I noticed a lot of sections like this:
```ron
vertical_blurs: [
],
horizontal_blurs: [
],
scalings: [
],
blits: [
],
outputs: [
],
```
I think it would be strictly an improvement if we detect that a set/map/array is empty and avoid inserting a new line in this case.
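The proposed rule can be modeled with a tiny stand-in pretty-printer: only break onto a new line after the opening bracket when the collection is non-empty. This sketches the idea only; it is not ron's serializer:

```rust
// Toy pretty-printer for a flat list of leaf values. An empty collection
// collapses to `[]` on one line; a non-empty one gets one indented line
// per element, matching the behavior requested in the issue.
fn pretty_list(items: &[&str], indent: &str) -> String {
    if items.is_empty() {
        return "[]".to_string();
    }
    let mut out = String::from("[\n");
    for item in items {
        out.push_str(indent);
        out.push_str(item);
        out.push_str(",\n");
    }
    out.push(']');
    out
}
```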
| 2019-03-05T17:08:59 | 0.4 | 697de994d642adf0f856d1a4d4e99b0e288367b2 | [
"empty_sets_arrays"
] | [
"de::tests::expected_attribute",
"de::tests::expected_attribute_end",
"de::tests::forgot_apostrophes",
"de::tests::invalid_attribute",
"de::tests::implicit_some",
"de::tests::multiple_attributes",
"de::tests::test_array",
"de::tests::test_byte_stream",
"de::tests::test_comment",
"de::tests::test_c... | [
"test_nul_in_string"
] | [] | 23 | |
ron-rs/ron | 225 | ron-rs__ron-225 | [
"174",
"174"
] | c6873c6f8a953c5532174baa02937b00558996f5 | diff --git a/src/ser/mod.rs b/src/ser/mod.rs
--- a/src/ser/mod.rs
+++ b/src/ser/mod.rs
@@ -224,7 +224,7 @@ impl Serializer {
fn is_pretty(&self) -> bool {
match self.pretty {
- Some((ref config, ref pretty)) => pretty.indent < config.depth_limit,
+ Some((ref config, ref pretty)) => pretty.indent <= config.depth_limit,
None => false,
}
}
diff --git a/src/ser/mod.rs b/src/ser/mod.rs
--- a/src/ser/mod.rs
+++ b/src/ser/mod.rs
@@ -244,7 +244,7 @@ impl Serializer {
fn start_indent(&mut self) {
if let Some((ref config, ref mut pretty)) = self.pretty {
pretty.indent += 1;
- if pretty.indent < config.depth_limit {
+ if pretty.indent <= config.depth_limit {
let is_empty = self.is_empty.unwrap_or(false);
if !is_empty {
diff --git a/src/ser/mod.rs b/src/ser/mod.rs
--- a/src/ser/mod.rs
+++ b/src/ser/mod.rs
@@ -256,7 +256,7 @@ impl Serializer {
fn indent(&mut self) {
if let Some((ref config, ref pretty)) = self.pretty {
- if pretty.indent < config.depth_limit {
+ if pretty.indent <= config.depth_limit {
self.output
.extend((0..pretty.indent).map(|_| config.indentor.as_str()));
}
diff --git a/src/ser/mod.rs b/src/ser/mod.rs
--- a/src/ser/mod.rs
+++ b/src/ser/mod.rs
@@ -265,7 +265,7 @@ impl Serializer {
fn end_indent(&mut self) {
if let Some((ref config, ref mut pretty)) = self.pretty {
- if pretty.indent < config.depth_limit {
+ if pretty.indent <= config.depth_limit {
let is_empty = self.is_empty.unwrap_or(false);
if !is_empty {
diff --git a/src/ser/mod.rs b/src/ser/mod.rs
--- a/src/ser/mod.rs
+++ b/src/ser/mod.rs
@@ -567,7 +567,7 @@ impl<'a> ser::SerializeSeq for &'a mut Serializer {
self.output += ",";
if let Some((ref config, ref mut pretty)) = self.pretty {
- if pretty.indent < config.depth_limit {
+ if pretty.indent <= config.depth_limit {
if config.enumerate_arrays {
assert!(config.new_line.contains('\n'));
let index = pretty.sequence_index.last_mut().unwrap();
diff --git a/src/ser/mod.rs b/src/ser/mod.rs
--- a/src/ser/mod.rs
+++ b/src/ser/mod.rs
@@ -611,7 +611,7 @@ impl<'a> ser::SerializeTuple for &'a mut Serializer {
self.output += ",";
if let Some((ref config, ref pretty)) = self.pretty {
- if pretty.indent < config.depth_limit {
+ if pretty.indent <= config.depth_limit {
self.output += if self.separate_tuple_members() {
&config.new_line
} else {
diff --git a/src/ser/mod.rs b/src/ser/mod.rs
--- a/src/ser/mod.rs
+++ b/src/ser/mod.rs
@@ -697,7 +697,7 @@ impl<'a> ser::SerializeMap for &'a mut Serializer {
self.output += ",";
if let Some((ref config, ref pretty)) = self.pretty {
- if pretty.indent < config.depth_limit {
+ if pretty.indent <= config.depth_limit {
self.output += &config.new_line;
}
}
diff --git a/src/ser/mod.rs b/src/ser/mod.rs
--- a/src/ser/mod.rs
+++ b/src/ser/mod.rs
@@ -734,7 +734,7 @@ impl<'a> ser::SerializeStruct for &'a mut Serializer {
self.output += ",";
if let Some((ref config, ref pretty)) = self.pretty {
- if pretty.indent < config.depth_limit {
+ if pretty.indent <= config.depth_limit {
self.output += &config.new_line;
}
}
| diff --git a/tests/depth_limit.rs b/tests/depth_limit.rs
--- a/tests/depth_limit.rs
+++ b/tests/depth_limit.rs
@@ -49,7 +49,7 @@ fn depth_limit() {
};
let pretty = ron::ser::PrettyConfig::new()
- .with_depth_limit(2)
+ .with_depth_limit(1)
.with_separate_tuple_members(true)
.with_enumerate_arrays(true)
.with_new_line("\n".to_string());
| Depth limit seems to be off by one
Currently, a depth limit of `1` means that nothing gets indented (same for `0`). That seems wrong, as the limit should be the (inclusive) maximum depth.
This can be fixed by changing the `pretty.indent < config.depth_limit` check to use `<=`.
@kvark Do you agree this is a bug?
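A minimal model of the reported off-by-one: ron increments `indent` before comparing it against `depth_limit`, so the outermost container sits at indent 1. With a `<` comparison, a limit of 1 therefore pretty-prints nothing; with `<=` it covers the first level as intended. This helper only models the comparison, not the serializer itself:

```rust
// Returns whether a container at the given (1-based) indent depth should
// be pretty-printed, for both the exclusive (`<`) and inclusive (`<=`)
// readings of `depth_limit`.
fn pretty_at(indent: usize, depth_limit: usize, inclusive: bool) -> bool {
    if inclusive {
        indent <= depth_limit
    } else {
        indent < depth_limit
    }
}
```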
| yeah, let's fix this
Bump :)
PR's welcome!
 | 2020-05-03T07:28:48 | 0.5 | c6873c6f8a953c5532174baa02937b00558996f5 | [
"depth_limit"
] | [
"de::tests::expected_attribute",
"de::tests::expected_attribute_end",
"de::tests::forgot_apostrophes",
"de::tests::invalid_attribute",
"de::tests::implicit_some",
"de::tests::multiple_attributes",
"de::tests::test_any_number_precision",
"de::tests::test_char",
"de::tests::test_array",
"de::tests::... | [
"test_adjacently_a_de",
"test_adjacently_a_ser",
"test_adjacently_b_de",
"test_adjacently_b_ser",
"test_nul_in_string"
] | [] | 24 |
ron-rs/ron | 324 | ron-rs__ron-324 | [
"322"
] | 4993b34e5bc3da8d5977acbf3d89b7032a45ead9 | diff --git a/CHANGELOG.md b/CHANGELOG.md
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -4,6 +4,11 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/)
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
+## [0.6.6] - 2020-10-21
+
+- Fix serialization of raw identifiers ([#323](https://github.com/ron-rs/ron/pull/323))
+- Add `unwrap_variant_newtypes` extension ([#319](https://github.com/ron-rs/ron/pull/319))
+
## [0.6.5] - 2021-09-09
- support serde renames that start with a digit
diff --git a/CHANGELOG.md b/CHANGELOG.md
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -11,6 +16,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- bump `base64` dependency to 0.13
## [0.6.2] - 2020-09-09
+
- Added `decimal_floats` PrettyConfig option, which always includes decimals in floats (`1.0` vs `1`) ([#237](https://github.com/ron-rs/ron/pull/237))
- Fixed EBNF grammar for raw strings ([#236](https://github.com/ron-rs/ron/pull/236), unsigned integers ([#248](https://github.com/ron-rs/ron/pull/248)), and nested comments ([#272](https://github.com/ron-rs/ron/pull/272))
- Added `ser::to_writer_pretty` ([#269](https://github.com/ron-rs/ron/pull/269))
diff --git a/Cargo.toml b/Cargo.toml
--- a/Cargo.toml
+++ b/Cargo.toml
@@ -1,7 +1,7 @@
[package]
name = "ron"
# Memo: update version in src/lib.rs too (doc link)
-version = "0.6.5"
+version = "0.6.6"
license = "MIT/Apache-2.0"
keywords = ["parser", "serde", "serialization"]
authors = [
diff --git a/src/lib.rs b/src/lib.rs
--- a/src/lib.rs
+++ b/src/lib.rs
@@ -57,7 +57,7 @@ Serializing / Deserializing is as simple as calling `to_string` / `from_str`.
!*/
-#![doc(html_root_url = "https://docs.rs/ron/0.6.5")]
+#![doc(html_root_url = "https://docs.rs/ron/0.6.6")]
pub mod de;
pub mod ser;
diff --git a/src/ser/mod.rs b/src/ser/mod.rs
--- a/src/ser/mod.rs
+++ b/src/ser/mod.rs
@@ -928,7 +928,7 @@ impl<'a, W: io::Write> ser::SerializeStruct for Compound<'a, W> {
}
};
ser.indent()?;
- ser.output.write_all(key.as_bytes())?;
+ ser.write_identifier(key)?;
ser.output.write_all(b":")?;
if ser.is_pretty() {
| diff --git /dev/null b/tests/322_escape_idents.rs
new file mode 100644
--- /dev/null
+++ b/tests/322_escape_idents.rs
@@ -0,0 +1,35 @@
+use serde::{Deserialize, Serialize};
+use std::collections::HashMap;
+
+#[derive(Debug, Deserialize, PartialEq, Serialize)]
+#[serde(rename_all = "kebab-case")]
+enum MyEnumWithDashes {
+ ThisIsMyUnitVariant,
+ ThisIsMyTupleVariant(bool, i32),
+}
+
+#[derive(Debug, Deserialize, PartialEq, Serialize)]
+#[serde(rename_all = "kebab-case")]
+struct MyStructWithDashes {
+ my_enum: MyEnumWithDashes,
+ #[serde(rename = "2nd")]
+ my_enum2: MyEnumWithDashes,
+ will_be_renamed: u32,
+}
+
+#[test]
+fn roundtrip_ident_with_dash() {
+ let value = MyStructWithDashes {
+ my_enum: MyEnumWithDashes::ThisIsMyUnitVariant,
+ my_enum2: MyEnumWithDashes::ThisIsMyTupleVariant(false, -3),
+ will_be_renamed: 32,
+ };
+
+ let serial = ron::ser::to_string(&value).unwrap();
+
+ println!("Serialized: {}", serial);
+
+ let deserial = ron::de::from_str(&serial);
+
+ assert_eq!(Ok(value), deserial);
+}
| Identifiers get serialized without raw ident escape
Minimal example:
```rust
#[derive(Debug, Deserialize, PartialEq, Serialize)]
#[serde(rename_all = "kebab-case")]
struct MyStructWithDash {
will_be_renamed: u32,
}
#[test]
fn roundtrip_ident_with_dash() {
let value = MyStructWithDash {
will_be_renamed: 32
};
let serial = ron::ser::to_string(&value).unwrap();
println!("Serialized: {}", serial);
let deserial = ron::de::from_str(&serial);
assert_eq!(Ok(value), deserial);
}
```
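Conceptually, the fix routes struct field keys through the same identifier writer used elsewhere, which falls back to RON's raw-identifier escape (`r#...`) when a serde rename is not a plain identifier. A self-contained sketch of that check (the predicate details here are an assumption, not ron's exact rules):

```rust
// Escapes a field name as a RON raw identifier when it is not a plain
// identifier (letter or `_` start, then letters/digits/`_`). Names like
// `will-be-renamed` or `2nd` would otherwise fail to round-trip.
fn write_identifier(name: &str) -> String {
    let mut chars = name.chars();
    let plain_start = chars
        .next()
        .map_or(false, |c| c.is_ascii_alphabetic() || c == '_');
    let plain_rest = chars.all(|c| c.is_ascii_alphanumeric() || c == '_');
    if plain_start && plain_rest {
        name.to_string()
    } else {
        format!("r#{}", name)
    }
}
```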
| 2021-10-22T15:39:40 | 0.6 | 4993b34e5bc3da8d5977acbf3d89b7032a45ead9 | [
"roundtrip_ident_with_dash"
] | [
"de::tests::expected_attribute",
"de::tests::expected_attribute_end",
"de::tests::forgot_apostrophes",
"de::tests::invalid_attribute",
"de::tests::implicit_some",
"de::tests::multiple_attributes",
"de::tests::rename",
"de::tests::test_any_number_precision",
"de::tests::test_array",
"de::tests::tes... | [
"test_adjacently_a_de",
"test_adjacently_a_ser",
"test_adjacently_b_de",
"test_adjacently_b_ser",
"test_nul_in_string"
] | [] | 25 | |
artichoke/artichoke | 2,073 | artichoke__artichoke-2073 | [
"2072"
] | b3284a42fb0d175da1a822bda73bd300ddab72c3 | diff --git a/spinoso-time/src/time/tzrs/parts.rs b/spinoso-time/src/time/tzrs/parts.rs
--- a/spinoso-time/src/time/tzrs/parts.rs
+++ b/spinoso-time/src/time/tzrs/parts.rs
@@ -316,7 +316,7 @@ impl Time {
self.inner.local_time_type().is_dst()
}
- /// Returns an integer representing the day of the week, 0..6, with Sunday
+ /// Returns an integer representing the day of the week, `0..=6`, with Sunday
/// == 0.
///
/// Can be used to implement [`Time#wday`].
diff --git a/spinoso-time/src/time/tzrs/parts.rs b/spinoso-time/src/time/tzrs/parts.rs
--- a/spinoso-time/src/time/tzrs/parts.rs
+++ b/spinoso-time/src/time/tzrs/parts.rs
@@ -508,7 +508,7 @@ impl Time {
self.day_of_week() == 6
}
- /// Returns an integer representing the day of the year, 1..366.
+ /// Returns an integer representing the day of the year, `1..=366`.
///
/// Can be used to implement [`Time#yday`].
///
diff --git a/spinoso-time/src/time/tzrs/parts.rs b/spinoso-time/src/time/tzrs/parts.rs
--- a/spinoso-time/src/time/tzrs/parts.rs
+++ b/spinoso-time/src/time/tzrs/parts.rs
@@ -518,7 +518,7 @@ impl Time {
/// # use spinoso_time::tzrs::{Time, TimeError};
/// # fn example() -> Result<(), TimeError> {
/// let now = Time::utc(2022, 7, 8, 12, 34, 56, 0)?;
- /// assert_eq!(now.day_of_year(), 188);
+ /// assert_eq!(now.day_of_year(), 189);
/// # Ok(())
/// # }
/// # example().unwrap()
diff --git a/spinoso-time/src/time/tzrs/parts.rs b/spinoso-time/src/time/tzrs/parts.rs
--- a/spinoso-time/src/time/tzrs/parts.rs
+++ b/spinoso-time/src/time/tzrs/parts.rs
@@ -528,7 +528,7 @@ impl Time {
#[inline]
#[must_use]
pub fn day_of_year(&self) -> u16 {
- self.inner.year_day()
+ self.inner.year_day() + 1
}
}
| diff --git a/spinoso-time/src/time/tzrs/parts.rs b/spinoso-time/src/time/tzrs/parts.rs
--- a/spinoso-time/src/time/tzrs/parts.rs
+++ b/spinoso-time/src/time/tzrs/parts.rs
@@ -550,4 +550,29 @@ mod tests {
assert_eq!("UTC", dt.time_zone());
assert!(dt.is_utc());
}
+
+ #[test]
+ fn yday() {
+ // ```
+ // [3.1.2] > Time.new(2022, 7, 8, 12, 34, 56, 0).yday
+ // => 189
+ // [3.1.2] > Time.new(2022, 1, 1, 12, 34, 56, 0).yday
+ // => 1
+ // [3.1.2] > Time.new(2022, 12, 31, 12, 34, 56, 0).yday
+ // => 365
+ // [3.1.2] > Time.new(2020, 12, 31, 12, 34, 56, 0).yday
+ // => 366
+ // ```
+ let time = Time::utc(2022, 7, 8, 12, 34, 56, 0).unwrap();
+ assert_eq!(time.day_of_year(), 189);
+
+ let start_2022 = Time::utc(2022, 1, 1, 12, 34, 56, 0).unwrap();
+ assert_eq!(start_2022.day_of_year(), 1);
+
+ let end_2022 = Time::utc(2022, 12, 31, 12, 34, 56, 0).unwrap();
+ assert_eq!(end_2022.day_of_year(), 365);
+
+ let end_leap_year = Time::utc(2020, 12, 31, 12, 34, 56, 0).unwrap();
+ assert_eq!(end_leap_year.day_of_year(), 366);
+ }
}
| The doc of `spinoso_time::tzrs::Time::day_of_year()` is inconsistent with its implementation
The doc of `spinoso_time::tzrs::Time::day_of_year()` is inconsistent with its implementation:
https://github.com/artichoke/artichoke/blob/b3284a42fb0d175da1a822bda73bd300ddab72c3/spinoso-time/src/time/tzrs/parts.rs#L511-L533.
`tz::datetime::DateTime` returns a year day in `[0, 365]`, but the spinoso_time doc says it should be in `1..366` (`1..=366` is more correct I think).
_Originally posted by @x-hgg-x in https://github.com/artichoke/strftime-ruby/issues/12#issuecomment-1212120502_
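The 1-based values Ruby's `Time#yday` expects can be checked with a self-contained day-of-year computation (written here only to illustrate the expected values; spinoso-time itself derives this from tz-rs's 0-based `year_day()`):

```rust
// Gregorian leap-year rule.
fn is_leap_year(year: i32) -> bool {
    (year % 4 == 0 && year % 100 != 0) || year % 400 == 0
}

// 1-based day of year, matching Ruby's `Time#yday`: sums the lengths of
// the preceding months and adds one extra day after February in leap years.
fn day_of_year(year: i32, month: u16, day: u16) -> u16 {
    const DAYS: [u16; 12] = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31];
    let mut yday = day;
    for m in 0..(month - 1) as usize {
        yday += DAYS[m];
    }
    if month > 2 && is_leap_year(year) {
        yday += 1;
    }
    yday
}
```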
| 2022-08-11T23:52:40 | 1.62 | b3284a42fb0d175da1a822bda73bd300ddab72c3 | [
"time::tzrs::parts::tests::yday"
] | [
"time::chrono::build::tests::time_at_with_max_i64_overflow",
"time::chrono::build::tests::time_at_with_min_i64_overflow",
"time::chrono::build::tests::time_at_with_overflowing_negative_sub_second_nanos",
"time::chrono::build::tests::time_at_with_negative_sub_second_nanos",
"time::chrono::build::tests::time_... | [] | [] | 26 | |
async-graphql/async-graphql | 1,228 | async-graphql__async-graphql-1228 | [
"1223"
] | dc140a55012ab4fd671fc4974f3aed7b0de23c7a | diff --git a/src/dynamic/resolve.rs b/src/dynamic/resolve.rs
--- a/src/dynamic/resolve.rs
+++ b/src/dynamic/resolve.rs
@@ -336,7 +336,13 @@ fn collect_fields<'a>(
type_condition.map(|condition| condition.node.on.node.as_str());
let introspection_type_name = &object.name;
- if type_condition.is_none() || type_condition == Some(introspection_type_name) {
+ let type_condition_matched = match type_condition {
+ None => true,
+ Some(type_condition) if type_condition == introspection_type_name => true,
+ Some(type_condition) if object.implements.contains(type_condition) => true,
+ _ => false,
+ };
+ if type_condition_matched {
collect_fields(
fields,
schema,
| diff --git a/src/dynamic/interface.rs b/src/dynamic/interface.rs
--- a/src/dynamic/interface.rs
+++ b/src/dynamic/interface.rs
@@ -389,4 +389,47 @@ mod tests {
}]
);
}
+ #[tokio::test]
+ async fn query_type_condition() {
+ struct MyObjA;
+ let obj_a = Object::new("MyObjA")
+ .implement("MyInterface")
+ .field(Field::new("a", TypeRef::named(TypeRef::INT), |_| {
+ FieldFuture::new(async { Ok(Some(Value::from(100))) })
+ }))
+ .field(Field::new("b", TypeRef::named(TypeRef::INT), |_| {
+ FieldFuture::new(async { Ok(Some(Value::from(200))) })
+ }));
+ let interface = Interface::new("MyInterface")
+ .field(InterfaceField::new("a", TypeRef::named(TypeRef::INT)));
+ let query = Object::new("Query");
+ let query = query.field(Field::new(
+ "valueA",
+ TypeRef::named_nn(obj_a.type_name()),
+ |_| FieldFuture::new(async { Ok(Some(FieldValue::owned_any(MyObjA))) }),
+ ));
+ let schema = Schema::build(query.type_name(), None, None)
+ .register(obj_a)
+ .register(interface)
+ .register(query)
+ .finish()
+ .unwrap();
+ let query = r#"
+ {
+ valueA { __typename
+ b
+ ... on MyInterface { a } }
+ }
+ "#;
+ assert_eq!(
+ schema.execute(query).await.into_result().unwrap().data,
+ value!({
+ "valueA": {
+ "__typename": "MyObjA",
+ "b": 200,
+ "a": 100,
+ }
+ })
+ );
+ }
}
diff --git a/src/dynamic/union.rs b/src/dynamic/union.rs
--- a/src/dynamic/union.rs
+++ b/src/dynamic/union.rs
@@ -137,7 +137,7 @@ impl Union {
mod tests {
use async_graphql_parser::Pos;
- use crate::{dynamic::*, value, PathSegment, ServerError, Value};
+ use crate::{dynamic::*, value, PathSegment, Request, ServerError, Value};
#[tokio::test]
async fn basic_union() {
diff --git a/src/dynamic/union.rs b/src/dynamic/union.rs
--- a/src/dynamic/union.rs
+++ b/src/dynamic/union.rs
@@ -258,4 +258,136 @@ mod tests {
}]
);
}
+
+ #[tokio::test]
+ async fn test_query() {
+ struct Dog;
+ struct Cat;
+ struct Snake;
+ // enum
+ #[allow(dead_code)]
+ enum Animal {
+ Dog(Dog),
+ Cat(Cat),
+ Snake(Snake),
+ }
+ struct Query {
+ pet: Animal,
+ }
+
+ impl Animal {
+ fn to_field_value(&self) -> FieldValue {
+ match self {
+ Animal::Dog(dog) => FieldValue::borrowed_any(dog).with_type("Dog"),
+ Animal::Cat(cat) => FieldValue::borrowed_any(cat).with_type("Cat"),
+ Animal::Snake(snake) => FieldValue::borrowed_any(snake).with_type("Snake"),
+ }
+ }
+ }
+ fn create_schema() -> Schema {
+ // interface
+ let named = Interface::new("Named");
+ let named = named.field(InterfaceField::new(
+ "name",
+ TypeRef::named_nn(TypeRef::STRING),
+ ));
+ // dog
+ let dog = Object::new("Dog");
+ let dog = dog.field(Field::new(
+ "name",
+ TypeRef::named_nn(TypeRef::STRING),
+ |_ctx| FieldFuture::new(async move { Ok(Some(Value::from("dog"))) }),
+ ));
+ let dog = dog.field(Field::new(
+ "power",
+ TypeRef::named_nn(TypeRef::INT),
+ |_ctx| FieldFuture::new(async move { Ok(Some(Value::from(100))) }),
+ ));
+ let dog = dog.implement("Named");
+ // cat
+ let cat = Object::new("Cat");
+ let cat = cat.field(Field::new(
+ "name",
+ TypeRef::named_nn(TypeRef::STRING),
+ |_ctx| FieldFuture::new(async move { Ok(Some(Value::from("cat"))) }),
+ ));
+ let cat = cat.field(Field::new(
+ "life",
+ TypeRef::named_nn(TypeRef::INT),
+ |_ctx| FieldFuture::new(async move { Ok(Some(Value::from(9))) }),
+ ));
+ let cat = cat.implement("Named");
+ // snake
+ let snake = Object::new("Snake");
+ let snake = snake.field(Field::new(
+ "length",
+ TypeRef::named_nn(TypeRef::INT),
+ |_ctx| FieldFuture::new(async move { Ok(Some(Value::from(200))) }),
+ ));
+ // animal
+ let animal = Union::new("Animal");
+ let animal = animal.possible_type("Dog");
+ let animal = animal.possible_type("Cat");
+ let animal = animal.possible_type("Snake");
+ // query
+
+ let query = Object::new("Query");
+ let query = query.field(Field::new("pet", TypeRef::named_nn("Animal"), |ctx| {
+ FieldFuture::new(async move {
+ let query = ctx.parent_value.try_downcast_ref::<Query>()?;
+ Ok(Some(query.pet.to_field_value()))
+ })
+ }));
+
+ let schema = Schema::build(query.type_name(), None, None);
+ let schema = schema
+ .register(query)
+ .register(named)
+ .register(dog)
+ .register(cat)
+ .register(snake)
+ .register(animal);
+
+ schema.finish().unwrap()
+ }
+
+ let schema = create_schema();
+ let query = r#"
+ query {
+ dog: pet {
+ ... on Dog {
+ __dog_typename: __typename
+ name
+ power
+ }
+ }
+ named: pet {
+ ... on Named {
+ __named_typename: __typename
+ name
+ }
+ }
+ }
+ "#;
+ let root = Query {
+ pet: Animal::Dog(Dog),
+ };
+ let req = Request::new(query).root_value(FieldValue::owned_any(root));
+ let res = schema.execute(req).await;
+
+ assert_eq!(
+ res.data.into_json().unwrap(),
+ serde_json::json!({
+ "dog": {
+ "__dog_typename": "Dog",
+ "name": "dog",
+ "power": 100
+ },
+ "named": {
+ "__named_typename": "Dog",
+ "name": "dog"
+ }
+ })
+ );
+ }
}
| dynamic schema: query interface on union not working
## Expected Behavior
when we have a union of objects, some of which implement an interface, we should be able to query the union by the interface. In the following example, if the `pet` is queried by the `Named` interface, we expect the `name` field and `__typename` to be in the response when the value is `Dog` or `Cat`
## Actual Behavior
querying by the interface returns an empty object
## Steps to Reproduce the Problem
all code is in [here][code]
schema:
```gql
interface Named {
name: String!
}
type Cat implements Named {
name: String!
life: Int!
}
type Dog implements Named {
name: String!
power: Int!
}
type Snake {
length: Int!
}
union Animal = Dog | Cat | Snake
type Query {
pet: Animal!
}
schema {
query: Query
}
```
query:
```graphql
query {
dog: pet {
... on Dog {
__dog_typename: __typename
name
power
}
}
named: pet {
... on Named {
__named_typename: __typename
name
}
}
}
```
actual response:
```json
{
"dog": {
"__dog_typename": "Dog",
"name": "dog",
"power": 100
},
"named": {}
}
```
expected response:
```json
{
"dog": {
"__dog_typename": "Dog",
"name": "dog",
"power": 100
},
"named": {
"__named_typename": "Dog",
"name": "dog"
}
}
```
## Specifications
- Version: 5.0.5
- Platform:
- Subsystem:
[code]: https://github.com/smmoosavi/async-graphql-dynamic-extend/blob/4663f78a629cf88703222a81ba1ff99b704aefd7/src/schema/union_with_interface.rs
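The fix in `collect_fields` shown earlier in this file amounts to widening the type-condition check: a fragment's `... on X` should match a concrete object when `X` is the object itself or any interface the object implements. A minimal stand-alone sketch of that predicate:

```rust
// Mirrors the patched match in `collect_fields`: no condition always
// matches; a named condition matches the object's own type name or any
// interface name in its `implements` set.
fn type_condition_matches(condition: Option<&str>, type_name: &str, implements: &[&str]) -> bool {
    match condition {
        None => true,
        Some(c) => c == type_name || implements.contains(&c),
    }
}
```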
| 2023-02-07T17:56:55 | 5.0 | dc140a55012ab4fd671fc4974f3aed7b0de23c7a | [
"dynamic::union::tests::test_query",
"dynamic::interface::tests::query_type_condition"
] | [
"dataloader::tests::test_dataloader",
"dynamic::interface::tests::basic_interface",
"dataloader::tests::test_duplicate_keys",
"dataloader::tests::test_dataloader_disable_all_cache",
"dynamic::input_object::tests::oneof_input_object",
"dynamic::scalar::tests::custom_scalar",
"dynamic::input_object::tests... | [] | [] | 27 | |
async-graphql/async-graphql | 1,559 | async-graphql__async-graphql-1559 | [
"1558"
] | faa78d071cfae795dacc39fd46900fdfb6157d0d | diff --git a/src/registry/export_sdl.rs b/src/registry/export_sdl.rs
--- a/src/registry/export_sdl.rs
+++ b/src/registry/export_sdl.rs
@@ -259,10 +259,8 @@ impl Registry {
write!(sdl, " @tag(name: \"{}\")", tag.replace('"', "\\\"")).ok();
}
}
- }
- for (_, meta_input_value) in &field.args {
- for directive in &meta_input_value.directive_invocations {
+ for directive in &arg.directive_invocations {
write!(sdl, " {}", directive.sdl()).ok();
}
}
| diff --git a/tests/schemas/test_fed2_compose_2.schema.graphql b/tests/schemas/test_fed2_compose_2.schema.graphql
--- a/tests/schemas/test_fed2_compose_2.schema.graphql
+++ b/tests/schemas/test_fed2_compose_2.schema.graphql
@@ -3,7 +3,7 @@
type Query {
- testArgument(arg: String! @type_directive_argument_definition(description: "This is ARGUMENT_DEFINITION in Object")): String!
+ testArgument(arg1: String! @type_directive_argument_definition(description: "This is ARGUMENT_DEFINITION in Object.arg1"), arg2: String! @type_directive_argument_definition(description: "This is ARGUMENT_DEFINITION in Object.arg2")): String!
testInputObject(arg: TestInput!): String!
testComplexObject: TestComplexObject!
testSimpleObject: TestSimpleObject!
diff --git a/tests/schemas/test_fed2_compose_2.schema.graphql b/tests/schemas/test_fed2_compose_2.schema.graphql
--- a/tests/schemas/test_fed2_compose_2.schema.graphql
+++ b/tests/schemas/test_fed2_compose_2.schema.graphql
@@ -15,7 +15,7 @@ type Query {
type TestComplexObject @type_directive_object(description: "This is OBJECT in (Complex / Simple)Object") {
field: String! @type_directive_field_definition(description: "This is FIELD_DEFINITION in (Complex / Simple)Object")
- test(arg: String! @type_directive_argument_definition(description: "This is ARGUMENT_DEFINITION in ComplexObject")): String!
+ test(arg1: String! @type_directive_argument_definition(description: "This is ARGUMENT_DEFINITION in ComplexObject.arg1"), arg2: String! @type_directive_argument_definition(description: "This is ARGUMENT_DEFINITION in ComplexObject.arg2")): String!
}
enum TestEnum @type_directive_enum(description: "This is ENUM in Enum") {
diff --git a/tests/schemas/test_fed2_compose_2.schema.graphql b/tests/schemas/test_fed2_compose_2.schema.graphql
--- a/tests/schemas/test_fed2_compose_2.schema.graphql
+++ b/tests/schemas/test_fed2_compose_2.schema.graphql
@@ -28,11 +28,11 @@ input TestInput @type_directive_input_object(description: "This is INPUT_OBJECT
}
interface TestInterface @type_directive_interface(description: "This is INTERFACE in Interface") {
- field(arg: String! @type_directive_argument_definition(description: "This is ARGUMENT_DEFINITION in Interface")): String! @type_directive_field_definition(description: "This is INTERFACE in Interface")
+ field(arg1: String! @type_directive_argument_definition(description: "This is ARGUMENT_DEFINITION in Interface.arg1"), arg2: String! @type_directive_argument_definition(description: "This is ARGUMENT_DEFINITION in Interface.arg2")): String! @type_directive_field_definition(description: "This is INTERFACE in Interface")
}
type TestObjectForInterface implements TestInterface {
- field(arg: String! @type_directive_argument_definition(description: "This is ARGUMENT_DEFINITION in Interface")): String!
+ field(arg1: String! @type_directive_argument_definition(description: "This is ARGUMENT_DEFINITION in Interface.arg1"), arg2: String! @type_directive_argument_definition(description: "This is ARGUMENT_DEFINITION in Interface.arg2")): String!
}
input TestOneOfObject @oneOf @type_directive_input_object(description: "This is INPUT_OBJECT in OneofObject") {
diff --git a/tests/type_directive.rs b/tests/type_directive.rs
--- a/tests/type_directive.rs
+++ b/tests/type_directive.rs
@@ -118,8 +118,10 @@ fn test_type_directive_2() {
impl TestComplexObject {
async fn test(
&self,
- #[graphql(directive = type_directive_argument_definition::apply("This is ARGUMENT_DEFINITION in ComplexObject".to_string()))]
- _arg: String,
+ #[graphql(directive = type_directive_argument_definition::apply("This is ARGUMENT_DEFINITION in ComplexObject.arg1".to_string()))]
+ _arg1: String,
+ #[graphql(directive = type_directive_argument_definition::apply("This is ARGUMENT_DEFINITION in ComplexObject.arg2".to_string()))]
+ _arg2: String,
) -> &'static str {
"test"
}
diff --git a/tests/type_directive.rs b/tests/type_directive.rs
--- a/tests/type_directive.rs
+++ b/tests/type_directive.rs
@@ -140,8 +142,10 @@ fn test_type_directive_2() {
impl TestObjectForInterface {
async fn field(
&self,
- #[graphql(directive = type_directive_argument_definition::apply("This is ARGUMENT_DEFINITION in Interface".to_string()))]
- _arg: String,
+ #[graphql(directive = type_directive_argument_definition::apply("This is ARGUMENT_DEFINITION in Interface.arg1".to_string()))]
+ _arg1: String,
+ #[graphql(directive = type_directive_argument_definition::apply("This is ARGUMENT_DEFINITION in Interface.arg2".to_string()))]
+ _arg2: String,
) -> &'static str {
"hello"
}
diff --git a/tests/type_directive.rs b/tests/type_directive.rs
--- a/tests/type_directive.rs
+++ b/tests/type_directive.rs
@@ -153,9 +157,14 @@ fn test_type_directive_2() {
ty = "String",
directive = type_directive_field_definition::apply("This is INTERFACE in Interface".to_string()),
arg(
- name = "_arg",
+ name = "_arg1",
ty = "String",
- directive = type_directive_argument_definition::apply("This is ARGUMENT_DEFINITION in Interface".to_string())
+ directive = type_directive_argument_definition::apply("This is ARGUMENT_DEFINITION in Interface.arg1".to_string())
+ ),
+ arg(
+ name = "_arg2",
+ ty = "String",
+ directive = type_directive_argument_definition::apply("This is ARGUMENT_DEFINITION in Interface.arg2".to_string())
)
),
directive = type_directive_interface::apply("This is INTERFACE in Interface".to_string())
diff --git a/tests/type_directive.rs b/tests/type_directive.rs
--- a/tests/type_directive.rs
+++ b/tests/type_directive.rs
@@ -170,8 +179,10 @@ fn test_type_directive_2() {
impl Query {
pub async fn test_argument(
&self,
- #[graphql(directive = type_directive_argument_definition::apply("This is ARGUMENT_DEFINITION in Object".to_string()))]
- _arg: String,
+ #[graphql(directive = type_directive_argument_definition::apply("This is ARGUMENT_DEFINITION in Object.arg1".to_string()))]
+ _arg1: String,
+ #[graphql(directive = type_directive_argument_definition::apply("This is ARGUMENT_DEFINITION in Object.arg2".to_string()))]
+ _arg2: String,
) -> &'static str {
"hello"
}
| The custom directive ARGUMENT_DEFINITION is not being output at the appropriate location in SDL
## Rust Code
```rust
use async_graphql::*;
#[TypeDirective(location = "ArgumentDefinition")]
fn testDirective(desc: String) {}
struct Query;
#[Object]
impl Query {
async fn add(
&self,
#[graphql(directive = testDirective::apply("Desc for a".into()))] a: i32,
#[graphql(directive = testDirective::apply("Desc for b".into()))] b: i32,
) -> i32 {
a + b
}
}
fn main() {
let schema = Schema::build(Query, EmptyMutation, EmptySubscription).finish();
std::fs::write("./schema.graphql", schema.sdl()).unwrap();
}
```
## Expected Behavior
The @testDirective is expected to be applied to each of the arguments (a and b)
```graphql
type Query {
add(a: Int! @testDirective(desc: "Desc for a"), b: Int! @testDirective(desc: "Desc for b")): Int!
}
directive @include(if: Boolean!) on FIELD | FRAGMENT_SPREAD | INLINE_FRAGMENT
directive @skip(if: Boolean!) on FIELD | FRAGMENT_SPREAD | INLINE_FRAGMENT
directive @testDirective(desc: String!) on ARGUMENT_DEFINITION
schema {
query: Query
}
```
## Actual Behavior
Actually, the @testDirective gets applied collectively to the last argument (b)
```graphql
type Query {
add(a: Int!, b: Int! @testDirective(desc: "Desc for a") @testDirective(desc: "Desc for b")): Int!
}
directive @include(if: Boolean!) on FIELD | FRAGMENT_SPREAD | INLINE_FRAGMENT
directive @skip(if: Boolean!) on FIELD | FRAGMENT_SPREAD | INLINE_FRAGMENT
directive @testDirective(desc: String!) on ARGUMENT_DEFINITION
schema {
query: Query
}
```
## Steps to Reproduce the Problem
1. Execute the code under "## Rust Code" above.
## Specifications
- Version:
- rust: 1.75.0
- async-graphql: 7.0.6
- Platform:
- arm64 (MacOS)
- Subsystem:
| 2024-07-09T18:56:37 | 7.0 | faa78d071cfae795dacc39fd46900fdfb6157d0d | [
"test_type_directive_2"
] | [
"dataloader::tests::test_dataloader_load_empty",
"dynamic::check::tests::test_recursive_input_objects_local_cycle",
"dynamic::check::tests::test_recursive_input_objects",
"dynamic::check::tests::test_recursive_input_objects_bad",
"dataloader::tests::test_dataloader_disable_all_cache",
"dataloader::tests::... | [] | [] | 28 | |
async-graphql/async-graphql | 1,542 | async-graphql__async-graphql-1542 | [
"1541",
"1541"
] | faa78d071cfae795dacc39fd46900fdfb6157d0d | diff --git a/src/types/external/bson.rs b/src/types/external/bson.rs
--- a/src/types/external/bson.rs
+++ b/src/types/external/bson.rs
@@ -11,6 +11,13 @@ impl ScalarType for ObjectId {
fn parse(value: Value) -> InputValueResult<Self> {
match value {
Value::String(s) => Ok(ObjectId::parse_str(s)?),
+ Value::Object(o) => {
+ let json = Value::Object(o).into_json()?;
+ let bson = Bson::try_from(json)?;
+ bson.as_object_id().ok_or(InputValueError::custom(
+ "could not parse the value as a BSON ObjectId",
+ ))
+ }
_ => Err(InputValueError::expected_type(value)),
}
}
diff --git a/src/types/external/bson.rs b/src/types/external/bson.rs
--- a/src/types/external/bson.rs
+++ b/src/types/external/bson.rs
@@ -25,6 +32,17 @@ impl ScalarType for Uuid {
fn parse(value: Value) -> InputValueResult<Self> {
match value {
Value::String(s) => Ok(Uuid::parse_str(s)?),
+ Value::Object(o) => {
+ let json = Value::Object(o).into_json()?;
+ let Bson::Binary(binary) = Bson::try_from(json)? else {
+ return Err(InputValueError::custom(
+ "could not parse the value as BSON Binary",
+ ));
+ };
+ binary.to_uuid().map_err(|_| {
+ InputValueError::custom("could not deserialize BSON Binary to Uuid")
+ })
+ }
_ => Err(InputValueError::expected_type(value)),
}
}
| diff --git a/src/types/external/bson.rs b/src/types/external/bson.rs
--- a/src/types/external/bson.rs
+++ b/src/types/external/bson.rs
@@ -69,3 +87,34 @@ impl ScalarType for Document {
bson::from_document(self.clone()).unwrap_or_default()
}
}
+
+#[cfg(test)]
+mod tests {
+ use serde_json::json;
+
+ use super::*;
+
+ #[test]
+ fn test_parse_bson_uuid() {
+ let id = Uuid::new();
+ let bson_value = bson::bson!(id);
+ let extended_json_value = json!(bson_value);
+ let gql_value = Value::from_json(extended_json_value).expect("valid json");
+ assert_eq!(
+ id,
+ <Uuid as ScalarType>::parse(gql_value).expect("parsing succeeds")
+ );
+ }
+
+ #[test]
+ fn test_parse_bson_object_id() {
+ let id = ObjectId::from_bytes([42; 12]);
+ let bson_value = bson::bson!(id);
+ let extended_json_value = json!(bson_value);
+ let gql_value = Value::from_json(extended_json_value).expect("valid json");
+ assert_eq!(
+ id,
+ <ObjectId as ScalarType>::parse(gql_value).expect("parsing succeeds")
+ );
+ }
+}
| Support for extended JSON representations of BSON
## Description of the feature
BSON allows for a typed representation of its data in an [extended JSON representation](https://www.mongodb.com/docs/manual/reference/mongodb-extended-json/#mongodb-extended-json--v2-). To properly integrate with BSON, `async-graphql` should have the functionality to handle these representations.
To be specific, the types `bson::ObjectId` and `bson::Uuid` implement `async_graphql::ScalarType` but do not currently support the alternative, more strongly typed representation.
## Code example (if possible)
This conversion chain will fail currently
```rust
let id = bson::Uuid::new();
let bson_value = bson::bson!(id);
let extended_json_value = serde_json::json!(bson_value);
let gql_value = async_graphql::ConstValue::from_json(extended_json_value).expect("valid json");
assert_eq!(
id,
<bson::Uuid as ScalarType>::parse(gql_value).expect("parsing succeeds") // panics
);
```
The reason for the failure are the different implementations of `serde::Serialize` for `bson::Uuid` and `bson::Bson`:
- `bson::Uuid` is serialized as a hex string with dashes
```json
"f136c009-e465-4f69-9170-8e898b1f9547"
```
- `bson::Bson::Binary` has the form described [here](https://www.mongodb.com/docs/manual/reference/mongodb-extended-json/#mongodb-bsontype-Binary)
```json
{ "$binary": { "base64": "8TbACeRlT2mRcI6Jix+VRw==", "subType": "04" } }
```
The exact same happens with `bson::ObjectId` and `bson::Bson::ObjectId`.
| 2024-06-16T20:13:02 | 7.0 | faa78d071cfae795dacc39fd46900fdfb6157d0d | [
"types::external::bson::tests::test_parse_bson_object_id",
"types::external::bson::tests::test_parse_bson_uuid"
] | [
"dataloader::tests::test_dataloader_load_empty",
"dynamic::check::tests::test_recursive_input_objects_local_cycle",
"dynamic::check::tests::test_recursive_input_objects",
"dataloader::tests::test_dataloader_disable_cache",
"dynamic::interface::tests::basic_interface",
"dataloader::tests::test_dataloader_w... | [] | [] | 29 | |
async-graphql/async-graphql | 1,524 | async-graphql__async-graphql-1524 | [
"1187"
] | 9d1befde1444a3d6a9dfdc62750ba271159c7173 | diff --git a/src/registry/export_sdl.rs b/src/registry/export_sdl.rs
--- a/src/registry/export_sdl.rs
+++ b/src/registry/export_sdl.rs
@@ -95,15 +95,6 @@ impl Registry {
pub(crate) fn export_sdl(&self, options: SDLExportOptions) -> String {
let mut sdl = String::new();
- let has_oneof = self
- .types
- .values()
- .any(|ty| matches!(ty, MetaType::InputObject { oneof: true, .. }));
-
- if has_oneof {
- sdl.write_str("directive @oneOf on INPUT_OBJECT\n\n").ok();
- }
-
for ty in self.types.values() {
if ty.name().starts_with("__") {
continue;
diff --git a/src/registry/export_sdl.rs b/src/registry/export_sdl.rs
--- a/src/registry/export_sdl.rs
+++ b/src/registry/export_sdl.rs
@@ -121,6 +112,46 @@ impl Registry {
}
self.directives.values().for_each(|directive| {
+ // Filter out deprecated directive from SDL if it is not used
+ if directive.name == "deprecated"
+ && !self.types.values().any(|ty| match ty {
+ MetaType::Object { fields, .. } => fields
+ .values()
+ .any(|field| field.deprecation.is_deprecated()),
+ MetaType::Enum { enum_values, .. } => enum_values
+ .values()
+ .any(|value| value.deprecation.is_deprecated()),
+ _ => false,
+ })
+ {
+ return;
+ }
+
+ // Filter out specifiedBy directive from SDL if it is not used
+ if directive.name == "specifiedBy"
+ && !self.types.values().any(|ty| {
+ matches!(
+ ty,
+ MetaType::Scalar {
+ specified_by_url: Some(_),
+ ..
+ }
+ )
+ })
+ {
+ return;
+ }
+
+ // Filter out oneOf directive from SDL if it is not used
+ if directive.name == "oneOf"
+ && !self
+ .types
+ .values()
+ .any(|ty| matches!(ty, MetaType::InputObject { oneof: true, .. }))
+ {
+ return;
+ }
+
writeln!(sdl, "{}", directive.sdl()).ok();
});
diff --git a/src/registry/mod.rs b/src/registry/mod.rs
--- a/src/registry/mod.rs
+++ b/src/registry/mod.rs
@@ -769,8 +769,8 @@ pub struct Registry {
impl Registry {
pub(crate) fn add_system_types(&mut self) {
self.add_directive(MetaDirective {
- name: "include".into(),
- description: Some("Directs the executor to include this field or fragment only when the `if` argument is true.".to_string()),
+ name: "skip".into(),
+ description: Some("Directs the executor to skip this field or fragment when the `if` argument is true.".to_string()),
locations: vec![
__DirectiveLocation::FIELD,
__DirectiveLocation::FRAGMENT_SPREAD,
diff --git a/src/registry/mod.rs b/src/registry/mod.rs
--- a/src/registry/mod.rs
+++ b/src/registry/mod.rs
@@ -780,7 +780,7 @@ impl Registry {
let mut args = IndexMap::new();
args.insert("if".to_string(), MetaInputValue {
name: "if".to_string(),
- description: Some("Included when true.".to_string()),
+ description: Some("Skipped when true.".to_string()),
ty: "Boolean!".to_string(),
default_value: None,
visible: None,
diff --git a/src/registry/mod.rs b/src/registry/mod.rs
--- a/src/registry/mod.rs
+++ b/src/registry/mod.rs
@@ -797,8 +797,8 @@ impl Registry {
});
self.add_directive(MetaDirective {
- name: "skip".into(),
- description: Some("Directs the executor to skip this field or fragment when the `if` argument is true.".to_string()),
+ name: "include".into(),
+ description: Some("Directs the executor to include this field or fragment only when the `if` argument is true.".to_string()),
locations: vec![
__DirectiveLocation::FIELD,
__DirectiveLocation::FRAGMENT_SPREAD,
diff --git a/src/registry/mod.rs b/src/registry/mod.rs
--- a/src/registry/mod.rs
+++ b/src/registry/mod.rs
@@ -808,7 +808,7 @@ impl Registry {
let mut args = IndexMap::new();
args.insert("if".to_string(), MetaInputValue {
name: "if".to_string(),
- description: Some("Skipped when true.".to_string()),
+ description: Some("Included when true.".to_string()),
ty: "Boolean!".to_string(),
default_value: None,
visible: None,
diff --git a/src/registry/mod.rs b/src/registry/mod.rs
--- a/src/registry/mod.rs
+++ b/src/registry/mod.rs
@@ -824,6 +824,84 @@ impl Registry {
composable: None,
});
+ self.add_directive(MetaDirective {
+ name: "deprecated".into(),
+ description: Some(
+ "Marks an element of a GraphQL schema as no longer supported.".into(),
+ ),
+ locations: vec![
+ __DirectiveLocation::FIELD_DEFINITION,
+ __DirectiveLocation::ARGUMENT_DEFINITION,
+ __DirectiveLocation::INPUT_FIELD_DEFINITION,
+ __DirectiveLocation::ENUM_VALUE,
+ ],
+ args: {
+ let mut args = IndexMap::new();
+ args.insert(
+ "reason".into(),
+ MetaInputValue {
+ name: "reason".into(),
+ description: Some(
+ "A reason for why it is deprecated, formatted using Markdown syntax"
+ .into(),
+ ),
+ ty: "String".into(),
+ default_value: Some(r#""No longer supported""#.into()),
+ visible: None,
+ inaccessible: false,
+ tags: Default::default(),
+ is_secret: false,
+ directive_invocations: vec![],
+ },
+ );
+ args
+ },
+ is_repeatable: false,
+ visible: None,
+ composable: None,
+ });
+
+ self.add_directive(MetaDirective {
+ name: "specifiedBy".into(),
+ description: Some("Provides a scalar specification URL for specifying the behavior of custom scalar types.".into()),
+ locations: vec![__DirectiveLocation::SCALAR],
+ args: {
+ let mut args = IndexMap::new();
+ args.insert(
+ "url".into(),
+ MetaInputValue {
+ name: "url".into(),
+ description: Some("URL that specifies the behavior of this scalar.".into()),
+ ty: "String!".into(),
+ default_value: None,
+ visible: None,
+ inaccessible: false,
+ tags: Default::default(),
+ is_secret: false,
+ directive_invocations: vec![],
+ },
+ );
+ args
+ },
+ is_repeatable: false,
+ visible: None,
+ composable: None,
+ });
+
+ self.add_directive(MetaDirective {
+ name: "oneOf".into(),
+ description: Some(
+ "Indicates that an Input Object is a OneOf Input Object (and thus requires
+ exactly one of its field be provided)"
+ .to_string(),
+ ),
+ locations: vec![__DirectiveLocation::INPUT_OBJECT],
+ args: Default::default(),
+ is_repeatable: false,
+ visible: None,
+ composable: None,
+ });
+
// create system scalars
<bool as InputType>::create_type_info(self);
<i32 as InputType>::create_type_info(self);
| diff --git a/tests/directive.rs b/tests/directive.rs
--- a/tests/directive.rs
+++ b/tests/directive.rs
@@ -1,4 +1,5 @@
use async_graphql::*;
+use serde::{Deserialize, Serialize};
#[tokio::test]
pub async fn test_directive_skip() {
diff --git a/tests/directive.rs b/tests/directive.rs
--- a/tests/directive.rs
+++ b/tests/directive.rs
@@ -127,3 +128,95 @@ pub async fn test_custom_directive() {
value!({ "value": "&abc*" })
);
}
+
+#[tokio::test]
+pub async fn test_no_unused_directives() {
+ struct Query;
+
+ #[Object]
+ impl Query {
+ pub async fn a(&self) -> String {
+ "a".into()
+ }
+ }
+
+ let sdl = Schema::new(Query, EmptyMutation, EmptySubscription).sdl();
+
+ assert!(!sdl.contains("directive @deprecated"));
+ assert!(!sdl.contains("directive @specifiedBy"));
+ assert!(!sdl.contains("directive @oneOf"));
+}
+
+#[tokio::test]
+pub async fn test_includes_deprecated_directive() {
+ #[derive(SimpleObject)]
+ struct A {
+ #[graphql(deprecation = "Use `Foo` instead")]
+ a: String,
+ }
+
+ struct Query;
+
+ #[Object]
+ impl Query {
+ pub async fn a(&self) -> A {
+ A { a: "a".into() }
+ }
+ }
+
+ let schema = Schema::new(Query, EmptyMutation, EmptySubscription);
+
+ assert!(schema.sdl().contains(r#"directive @deprecated(reason: String = "No longer supported") on FIELD_DEFINITION | ARGUMENT_DEFINITION | INPUT_FIELD_DEFINITION | ENUM_VALUE"#))
+}
+
+#[tokio::test]
+pub async fn test_includes_specified_by_directive() {
+ #[derive(Serialize, Deserialize)]
+ struct A {
+ a: String,
+ }
+
+ scalar!(
+ A,
+ "A",
+ "This is A",
+ "https://www.youtube.com/watch?v=dQw4w9WgXcQ"
+ );
+
+ struct Query;
+
+ #[Object]
+ impl Query {
+ pub async fn a(&self) -> A {
+ A { a: "a".into() }
+ }
+ }
+
+ let schema = Schema::new(Query, EmptyMutation, EmptySubscription);
+
+ assert!(schema
+ .sdl()
+ .contains(r#"directive @specifiedBy(url: String!) on SCALAR"#))
+}
+
+#[tokio::test]
+pub async fn test_includes_one_of_directive() {
+ #[derive(OneofObject)]
+ enum AB {
+ A(String),
+ B(i64),
+ }
+
+ struct Query;
+
+ #[Object]
+ impl Query {
+ pub async fn ab(&self, _input: AB) -> bool {
+ true
+ }
+ }
+
+ let schema = Schema::new(Query, EmptyMutation, EmptySubscription);
+
+ assert!(schema.sdl().contains(r#"directive @oneOf on INPUT_OBJECT"#))
+}
diff --git a/tests/introspection.rs b/tests/introspection.rs
--- a/tests/introspection.rs
+++ b/tests/introspection.rs
@@ -1514,3 +1514,146 @@ pub async fn test_introspection_default() {
let res = schema.execute(query).await.into_result().unwrap().data;
assert_eq!(res, res_json);
}
+
+#[tokio::test]
+pub async fn test_introspection_directives() {
+ struct Query;
+
+ #[Object]
+ impl Query {
+ pub async fn a(&self) -> String {
+ "a".into()
+ }
+ }
+
+ let schema = Schema::new(Query, EmptyMutation, EmptySubscription);
+ let query = r#"
+ query IntrospectionQuery {
+ __schema {
+ directives {
+ name
+ locations
+ args {
+ ...InputValue
+ }
+ }
+ }
+ }
+
+ fragment InputValue on __InputValue {
+ name
+ type {
+ ...TypeRef
+ }
+ defaultValue
+ }
+
+ fragment TypeRef on __Type {
+ kind
+ name
+ ofType {
+ kind
+ name
+ }
+ }
+ "#;
+
+ let res_json = value!({"__schema": {
+ "directives": [
+ {
+ "name": "deprecated",
+ "locations": [
+ "FIELD_DEFINITION",
+ "ARGUMENT_DEFINITION",
+ "INPUT_FIELD_DEFINITION",
+ "ENUM_VALUE"
+ ],
+ "args": [
+ {
+ "name": "reason",
+ "type": {
+ "kind": "SCALAR",
+ "name": "String",
+ "ofType": null
+ },
+ "defaultValue": "\"No longer supported\""
+ }
+ ]
+ },
+ {
+ "name": "include",
+ "locations": [
+ "FIELD",
+ "FRAGMENT_SPREAD",
+ "INLINE_FRAGMENT"
+ ],
+ "args": [
+ {
+ "name": "if",
+ "type": {
+ "kind": "NON_NULL",
+ "name": null,
+ "ofType": {
+ "kind": "SCALAR",
+ "name": "Boolean"
+ }
+ },
+ "defaultValue": null
+ }
+ ]
+ },
+ {
+ "name": "oneOf",
+ "locations": [
+ "INPUT_OBJECT"
+ ],
+ "args": []
+ },
+ {
+ "name": "skip",
+ "locations": [
+ "FIELD",
+ "FRAGMENT_SPREAD",
+ "INLINE_FRAGMENT"
+ ],
+ "args": [
+ {
+ "name": "if",
+ "type": {
+ "kind": "NON_NULL",
+ "name": null,
+ "ofType": {
+ "kind": "SCALAR",
+ "name": "Boolean"
+ }
+ },
+ "defaultValue": null
+ }
+ ]
+ },
+ {
+ "name": "specifiedBy",
+ "locations": [
+ "SCALAR"
+ ],
+ "args": [
+ {
+ "name": "url",
+ "type": {
+ "kind": "NON_NULL",
+ "name": null,
+ "ofType": {
+ "kind": "SCALAR",
+ "name": "String"
+ }
+ },
+ "defaultValue": null
+ }
+ ]
+ }
+ ]
+ }});
+ let res = schema.execute(query).await.into_result().unwrap().data;
+
+ assert_eq!(res, res_json);
+}
diff --git a/tests/schemas/test_fed2_compose_2.schema.graphql b/tests/schemas/test_fed2_compose_2.schema.graphql
--- a/tests/schemas/test_fed2_compose_2.schema.graphql
+++ b/tests/schemas/test_fed2_compose_2.schema.graphql
@@ -1,5 +1,3 @@
-directive @oneOf on INPUT_OBJECT
-
diff --git a/tests/schemas/test_fed2_compose_2.schema.graphql b/tests/schemas/test_fed2_compose_2.schema.graphql
--- a/tests/schemas/test_fed2_compose_2.schema.graphql
+++ b/tests/schemas/test_fed2_compose_2.schema.graphql
@@ -47,6 +45,7 @@ type TestSimpleObject @type_directive_object(description: "This is OBJECT in Sim
}
directive @include(if: Boolean!) on FIELD | FRAGMENT_SPREAD | INLINE_FRAGMENT
+directive @oneOf on INPUT_OBJECT
directive @skip(if: Boolean!) on FIELD | FRAGMENT_SPREAD | INLINE_FRAGMENT
directive @type_directive_argument_definition(description: String!) on ARGUMENT_DEFINITION
directive @type_directive_enum(description: String!) on ENUM
| Oneof directive not included in introspection result
## Expected Behavior
I would expect the `@oneof` directive to be included in the result of an introspection query whenever the `#[derive(OneofObject)]` macro is used on a struct.
## Actual Behavior
The directive is not listed in the introspection query result:

## Steps to Reproduce the Problem
1. Clone this minimal example: https://github.com/bvanneerven/introspection-example
2. Run with `cargo run`
3. Go to http://localhost:8001 and open the network tab in the browser developer tools
4. Trigger an introspection query by clicking the 'Re-fetch GraphQL schema' button
5. Verify that the `@oneof` directive is not listed in the response from the server
## Specifications
- Version: `{ git = "https://github.com/async-graphql/async-graphql" }`
- Platform: WSL (Ubuntu-20.04)
| GraphiQL doesn't seem to support oneof.
`Type` contains a `oneOf` field, but GraphiQL is not querying it.
https://github.com/async-graphql/async-graphql/blob/e2e35ce93f1032d6467144c7fa8f0783e1dde04b/src/model/type.rs#L241
Yes, that is true. Probably because it is still in RFC2 (Draft) phase. They renamed it to `isOneOf` by the way (https://github.com/graphql/graphql-spec/pull/825#issuecomment-1138335897). I opened this small PR to change it: https://github.com/async-graphql/async-graphql/pull/1188.
When sending the query below, I would however expect the *directive itself* to be returned. According to the current [draft spec](https://spec.graphql.org/draft/#sec-Type-System.Directives.Built-in-Directives), all directives, including built-in ones, should be returned during introspection (but may be omitted when exporting SDL). Right now, only 'include' and 'skip' are returned.
(As a sidenote, I also think that the 'deprecated' directive is not returned during introspection right now.)
```gql
query IntrospectionQuery {
__schema {
directives {
name
description
locations
args {
...InputValue
}
}
}
}
fragment InputValue on __InputValue {
name
description
type {
...TypeRef
}
defaultValue
}
fragment TypeRef on __Type {
kind
name
ofType {
kind
name
}
}
```
> According to the current draft spec, all directives, including built-in ones, should be returned during introspection (but may be omitted when exporting SDL). Right now, only 'include' and 'skip' are returned.
Hey @sunli829, I am not familiar enough with the codebase to know how I can fix this. I might be able to try if you could give me some directions for it 🙂. | 2024-05-18T22:33:38 | 7.0 | faa78d071cfae795dacc39fd46900fdfb6157d0d | [
"test_includes_specified_by_directive",
"test_includes_deprecated_directive",
"test_introspection_directives",
"test_type_directive_2"
] | [
"dataloader::tests::test_dataloader_disable_cache",
"dataloader::tests::test_dataloader_load_empty",
"dataloader::tests::test_dataloader",
"dynamic::check::tests::test_recursive_input_objects_local_cycle",
"dataloader::tests::test_dataloader_disable_all_cache",
"dynamic::check::tests::test_recursive_input... | [] | [] | 30 |
tokio-rs/axum | 868 | tokio-rs__axum-868 | [
"865"
] | 422a883cb2a81fa6fbd2f2a1affa089304b7e47b | diff --git a/axum/CHANGELOG.md b/axum/CHANGELOG.md
--- a/axum/CHANGELOG.md
+++ b/axum/CHANGELOG.md
@@ -71,6 +71,10 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
be accepted but most likely result in bugs ([#823])
- **breaking:** `Headers` has been removed. Arrays of tuples directly implement
`IntoResponseParts` so `([("x-foo", "foo")], response)` now works ([#797])
+- **breaking:** `InvalidJsonBody` has been replaced with `JsonDataError` to clearly signal that the
+ request body was syntactically valid JSON but couldn't be deserialized into the target type
+- **changed:** New `JsonSyntaxError` variant added to `JsonRejection`. This is returned when the
+ request body contains syntactically invalid JSON
- **fixed:** Set `Allow` header when responding with `405 Method Not Allowed` ([#733])
- **fixed:** Correctly set the `Content-Length` header for response to `HEAD`
requests ([#734])
diff --git a/axum/src/docs/extract.md b/axum/src/docs/extract.md
--- a/axum/src/docs/extract.md
+++ b/axum/src/docs/extract.md
@@ -209,9 +209,12 @@ async fn create_user(payload: Result<Json<Value>, JsonRejection>) {
// Request didn't have `Content-Type: application/json`
// header
}
- Err(JsonRejection::InvalidJsonBody(_)) => {
+ Err(JsonRejection::JsonDataError(_)) => {
// Couldn't deserialize the body into the target type
}
+ Err(JsonRejection::JsonSyntaxError(_)) => {
+ // Syntax error in the body
+ }
Err(JsonRejection::BytesRejection(_)) => {
// Failed to extract the request body
}
diff --git a/axum/src/docs/extract.md b/axum/src/docs/extract.md
--- a/axum/src/docs/extract.md
+++ b/axum/src/docs/extract.md
@@ -249,7 +252,7 @@ breaking the public API.
For example that means while [`Json`] is implemented using [`serde_json`] it
doesn't directly expose the [`serde_json::Error`] thats contained in
-[`JsonRejection::InvalidJsonBody`]. However it is still possible to access via
+[`JsonRejection::JsonDataError`]. However it is still possible to access via
methods from [`std::error::Error`]:
```rust
diff --git a/axum/src/docs/extract.md b/axum/src/docs/extract.md
--- a/axum/src/docs/extract.md
+++ b/axum/src/docs/extract.md
@@ -267,21 +270,11 @@ async fn handler(result: Result<Json<Value>, JsonRejection>) -> impl IntoRespons
Ok(Json(payload)) => Ok(Json(json!({ "payload": payload }))),
Err(err) => match err {
- // attempt to extract the inner `serde_json::Error`, if that
- // succeeds we can provide a more specific error
- JsonRejection::InvalidJsonBody(err) => {
- if let Some(serde_json_err) = find_error_source::<serde_json::Error>(&err) {
- Err((
- StatusCode::BAD_REQUEST,
- format!(
- "Invalid JSON at line {} column {}",
- serde_json_err.line(),
- serde_json_err.column()
- ),
- ))
- } else {
- Err((StatusCode::BAD_REQUEST, "Unknown error".to_string()))
- }
+ JsonRejection::JsonDataError(err) => {
+ Err(serde_json_error_response(err))
+ }
+ JsonRejection::JsonSyntaxError(err) => {
+ Err(serde_json_error_response(err))
}
// handle other rejections from the `Json` extractor
JsonRejection::MissingJsonContentType(_) => Err((
diff --git a/axum/src/docs/extract.md b/axum/src/docs/extract.md
--- a/axum/src/docs/extract.md
+++ b/axum/src/docs/extract.md
@@ -302,6 +295,26 @@ async fn handler(result: Result<Json<Value>, JsonRejection>) -> impl IntoRespons
}
}
+// attempt to extract the inner `serde_json::Error`, if that succeeds we can
+// provide a more specific error
+fn serde_json_error_response<E>(err: E) -> (StatusCode, String)
+where
+ E: Error + 'static,
+{
+ if let Some(serde_json_err) = find_error_source::<serde_json::Error>(&err) {
+ (
+ StatusCode::BAD_REQUEST,
+ format!(
+ "Invalid JSON at line {} column {}",
+ serde_json_err.line(),
+ serde_json_err.column()
+ ),
+ )
+ } else {
+ (StatusCode::BAD_REQUEST, "Unknown error".to_string())
+ }
+}
+
// attempt to downcast `err` into a `T` and if that fails recursively try and
// downcast `err`'s source
fn find_error_source<'a, T>(err: &'a (dyn Error + 'static)) -> Option<&'a T>
diff --git a/axum/src/extract/rejection.rs b/axum/src/extract/rejection.rs
--- a/axum/src/extract/rejection.rs
+++ b/axum/src/extract/rejection.rs
@@ -9,10 +9,24 @@ pub use axum_core::extract::rejection::*;
#[cfg(feature = "json")]
define_rejection! {
#[status = UNPROCESSABLE_ENTITY]
+ #[body = "Failed to deserialize the JSON body into the target type"]
+ #[cfg_attr(docsrs, doc(cfg(feature = "json")))]
+ /// Rejection type for [`Json`](super::Json).
+ ///
+ /// This rejection is used if the request body is syntactically valid JSON but couldn't be
+ /// deserialized into the target type.
+ pub struct JsonDataError(Error);
+}
+
+#[cfg(feature = "json")]
+define_rejection! {
+ #[status = BAD_REQUEST]
#[body = "Failed to parse the request body as JSON"]
#[cfg_attr(docsrs, doc(cfg(feature = "json")))]
/// Rejection type for [`Json`](super::Json).
- pub struct InvalidJsonBody(Error);
+ ///
+ /// This rejection is used if the request body didn't contain syntactically valid JSON.
+ pub struct JsonSyntaxError(Error);
}
#[cfg(feature = "json")]
diff --git a/axum/src/extract/rejection.rs b/axum/src/extract/rejection.rs
--- a/axum/src/extract/rejection.rs
+++ b/axum/src/extract/rejection.rs
@@ -141,7 +155,8 @@ composite_rejection! {
/// can fail.
#[cfg_attr(docsrs, doc(cfg(feature = "json")))]
pub enum JsonRejection {
- InvalidJsonBody,
+ JsonDataError,
+ JsonSyntaxError,
MissingJsonContentType,
BytesRejection,
}
diff --git a/axum/src/json.rs b/axum/src/json.rs
--- a/axum/src/json.rs
+++ b/axum/src/json.rs
@@ -99,7 +99,27 @@ where
if json_content_type(req) {
let bytes = Bytes::from_request(req).await?;
- let value = serde_json::from_slice(&bytes).map_err(InvalidJsonBody::from_err)?;
+ let value = match serde_json::from_slice(&bytes) {
+ Ok(value) => value,
+ Err(err) => {
+ let rejection = match err.classify() {
+ serde_json::error::Category::Data => JsonDataError::from_err(err).into(),
+ serde_json::error::Category::Syntax | serde_json::error::Category::Eof => {
+ JsonSyntaxError::from_err(err).into()
+ }
+ serde_json::error::Category::Io => {
+ if cfg!(debug_assertions) {
+ // we don't use `serde_json::from_reader` and instead always buffer
+ // bodies first, so we shouldn't encounter any IO errors
+ unreachable!()
+ } else {
+ JsonSyntaxError::from_err(err).into()
+ }
+ }
+ };
+ return Err(rejection);
+ }
+ };
Ok(Json(value))
} else {
diff --git a/examples/customize-extractor-error/src/main.rs b/examples/customize-extractor-error/src/main.rs
--- a/examples/customize-extractor-error/src/main.rs
+++ b/examples/customize-extractor-error/src/main.rs
@@ -69,7 +69,7 @@ where
Err(rejection) => {
// convert the error from `axum::Json` into whatever we want
let (status, body): (_, Cow<'_, str>) = match rejection {
- JsonRejection::InvalidJsonBody(err) => (
+ JsonRejection::JsonDataError(err) => (
StatusCode::BAD_REQUEST,
format!("Invalid JSON request: {}", err).into(),
),
| diff --git a/axum/src/json.rs b/axum/src/json.rs
--- a/axum/src/json.rs
+++ b/axum/src/json.rs
@@ -244,4 +264,19 @@ mod tests {
assert!(valid_json_content_type("application/cloudevents+json").await);
assert!(!valid_json_content_type("text/json").await);
}
+
+ #[tokio::test]
+ async fn invalid_json_syntax() {
+ let app = Router::new().route("/", post(|_: Json<serde_json::Value>| async {}));
+
+ let client = TestClient::new(app);
+ let res = client
+ .post("/")
+ .body("{")
+ .header("content-type", "application/json")
+ .send()
+ .await;
+
+ assert_eq!(res.status(), StatusCode::BAD_REQUEST);
+ }
}
| HTTP Status on Invalid JSON Request
When I send a request where the `content-type` is not `application/json`, I get the answer `415 Unsupported Media Type`; if the JSON parsing fails, I get `422 Unprocessable Entity`.
Three issues:
1. This is not documented correctly: https://docs.rs/axum/latest/axum/struct.Json.html In this documentation both of them would result in `400 Bad Request`
2. `422 Unprocessable Entity` seems to be part of an HTTP extension intended for WebDAV. Is it appropriate to use this in a non-WebDAV server?
3. The description of `422 Unprocessable Entity` https://datatracker.ietf.org/doc/html/rfc4918#section-11.2 says that this should be used for syntactically correct, but semantically wrong requests. I also get that with a syntactically wrong JSON like `}`
| 👍🏼 for distinguishing syntactic errors and content errors. `serde_json::Error` has a `classify()` method that allows to check what failed exactly.
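The status mapping that the `axum/src/json.rs` change above settles on can be sketched independently of axum. This is a minimal standalone sketch — the `Category` enum here is a hand-rolled stand-in mirroring `serde_json::error::Category`, not the real type:

```rust
// Stand-in for serde_json::error::Category (hypothetical, for illustration only).
#[derive(Debug, Clone, Copy, PartialEq)]
enum Category {
    Io,
    Syntax,
    Data,
    Eof,
}

// Map an error category to the HTTP status chosen in the patch above.
fn status_for(category: Category) -> u16 {
    match category {
        // Syntactically valid JSON that doesn't fit the target type.
        Category::Data => 422,
        // Malformed JSON, including a body that ends too early.
        Category::Syntax | Category::Eof => 400,
        // Bodies are buffered before parsing, so IO errors shouldn't occur;
        // fall back to 400 rather than panic in release builds.
        Category::Io => 400,
    }
}

fn main() {
    assert_eq!(status_for(Category::Data), 422);
    assert_eq!(status_for(Category::Syntax), 400);
    assert_eq!(status_for(Category::Eof), 400);
    println!("ok");
}
```

This keeps 422 reserved for the "syntactically correct, but semantically wrong" case that RFC 4918 describes, which is exactly the distinction the reporter asked for.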
> This is not documented correctly: https://docs.rs/axum/latest/axum/struct.Json.html In this documentation both of them would result in 400 Bad Request
Good catch. That's easy to fix!
> 422 Unprocessable Entity seems to be part of an HTTP extension intended for WebDAV. Is it appropriate to use this in a non-WebDAV server?
Yeah, it's common. I know that Ruby on Rails uses it for a bunch of default things. MDN also doesn't mention that it's special: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/422.
> The description of 422 Unprocessable Entity https://datatracker.ietf.org/doc/html/rfc4918#section-11.2 says that this should be used for syntactically correct, but semantically wrong requests. I also get that with a syntactically wrong JSON like }
Yeah we should fix that. Shouldn't be hard using what jplatte is hinting at.
@kamulos Are you up for making a PR for this? | 2022-03-18T01:15:56 | 0.4 | 2e5d56a9b125142817378883b098470d05ab0597 | [
"json::tests::invalid_json_syntax"
] | [
"body::stream_body::stream_body_traits",
"extract::connect_info::traits",
"error_handling::traits",
"extract::extractor_middleware::traits",
"extract::form::tests::test_form_query",
"extract::form::tests::test_form_body",
"extract::form::tests::test_incorrect_content_type",
"extract::path::de::tests::... | [] | [] | 31 |
tokio-rs/axum | 529 | tokio-rs__axum-529 | [
"488"
] | f9a437d0813c0e2f784ca90eccccb4fbad9126cb | diff --git a/axum/CHANGELOG.md b/axum/CHANGELOG.md
--- a/axum/CHANGELOG.md
+++ b/axum/CHANGELOG.md
@@ -28,10 +28,15 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- **breaking:** The `Handler<B, T>` trait is now defined as `Handler<T, B =
Body>`. That is the type parameters have been swapped and `B` defaults to
`axum::body::Body` ([#527])
+- **breaking:** `Router::merge` will panic if both routers have fallbacks.
+ Previously the left side fallback would be silently discarded ([#529])
+- **breaking:** `Router::nest` will panic if the nested router has a fallback.
+ Previously it would be silently discarded ([#529])
- Update WebSockets to use tokio-tungstenite 0.16 ([#525])
[#525]: https://github.com/tokio-rs/axum/pull/525
[#527]: https://github.com/tokio-rs/axum/pull/527
+[#529]: https://github.com/tokio-rs/axum/pull/529
[#534]: https://github.com/tokio-rs/axum/pull/534
# 0.3.3 (13. November, 2021)
diff --git a/axum/src/docs/routing/fallback.md b/axum/src/docs/routing/fallback.md
--- a/axum/src/docs/routing/fallback.md
+++ b/axum/src/docs/routing/fallback.md
@@ -26,45 +26,3 @@ async fn fallback(uri: Uri) -> impl IntoResponse {
Fallbacks only apply to routes that aren't matched by anything in the
router. If a handler is matched by a request but returns 404 the
fallback is not called.
-
-## When used with `Router::merge`
-
-If a router with a fallback is merged with another router that also has
-a fallback the fallback of the second router takes precedence:
-
-```rust
-use axum::{
- Router,
- routing::get,
- handler::Handler,
- response::IntoResponse,
- http::{StatusCode, Uri},
-};
-
-let one = Router::new()
- .route("/one", get(|| async {}))
- .fallback(fallback_one.into_service());
-
-let two = Router::new()
- .route("/two", get(|| async {}))
- .fallback(fallback_two.into_service());
-
-let app = one.merge(two);
-
-async fn fallback_one() -> impl IntoResponse {}
-async fn fallback_two() -> impl IntoResponse {}
-
-// the fallback for `app` is `fallback_two`
-# async {
-# hyper::Server::bind(&"".parse().unwrap()).serve(app.into_make_service()).await.unwrap();
-# };
-```
-
-If only one of the routers have a fallback that will be used in the
-merged router.
-
-## When used with `Router::nest`
-
-If a router with a fallback is nested inside another router the fallback
-of the nested router will be discarded and not used. This is such that
-the outer router's fallback takes precedence.
diff --git a/axum/src/docs/routing/merge.md b/axum/src/docs/routing/merge.md
--- a/axum/src/docs/routing/merge.md
+++ b/axum/src/docs/routing/merge.md
@@ -36,3 +36,8 @@ let app = Router::new()
# hyper::Server::bind(&"".parse().unwrap()).serve(app.into_make_service()).await.unwrap();
# };
```
+
+## Panics
+
+- If two routers that each have a [fallback](Router::fallback) are merged. This
+ is because `Router` only allows a single fallback.
diff --git a/axum/src/docs/routing/nest.md b/axum/src/docs/routing/nest.md
--- a/axum/src/docs/routing/nest.md
+++ b/axum/src/docs/routing/nest.md
@@ -121,5 +121,7 @@ let app = Router::new()
for more details.
- If the route contains a wildcard (`*`).
- If `path` is empty.
+- If the nested router has a [fallback](Router::fallback). This is because
+ `Router` only allows a single fallback.
[`OriginalUri`]: crate::extract::OriginalUri
diff --git a/axum/src/routing/mod.rs b/axum/src/routing/mod.rs
--- a/axum/src/routing/mod.rs
+++ b/axum/src/routing/mod.rs
@@ -189,14 +189,17 @@ where
let Router {
mut routes,
node,
- // discard the fallback of the nested router
- fallback: _,
+ fallback,
// nesting a router that has something nested at root
// doesn't mean something is nested at root in _this_ router
// thus we don't need to propagate that
nested_at_root: _,
} = router;
+ if let Fallback::Custom(_) = fallback {
+ panic!("Cannot nest `Router`s that has a fallback");
+ }
+
for (id, nested_path) in node.route_id_to_path {
let route = routes.remove(&id).unwrap();
let full_path = if &*nested_path == "/" {
diff --git a/axum/src/routing/mod.rs b/axum/src/routing/mod.rs
--- a/axum/src/routing/mod.rs
+++ b/axum/src/routing/mod.rs
@@ -253,7 +256,9 @@ where
(Fallback::Default(_), pick @ Fallback::Default(_)) => pick,
(Fallback::Default(_), pick @ Fallback::Custom(_)) => pick,
(pick @ Fallback::Custom(_), Fallback::Default(_)) => pick,
- (Fallback::Custom(_), pick @ Fallback::Custom(_)) => pick,
+ (Fallback::Custom(_), Fallback::Custom(_)) => {
+ panic!("Cannot merge two `Router`s that both have a fallback")
+ }
};
self.nested_at_root = self.nested_at_root || nested_at_root;
| diff --git a/axum/src/routing/tests/fallback.rs b/axum/src/routing/tests/fallback.rs
--- a/axum/src/routing/tests/fallback.rs
+++ b/axum/src/routing/tests/fallback.rs
@@ -49,25 +49,3 @@ async fn or() {
assert_eq!(res.status(), StatusCode::OK);
assert_eq!(res.text().await, "fallback");
}
-
-#[tokio::test]
-async fn fallback_on_or() {
- let one = Router::new()
- .route("/one", get(|| async {}))
- .fallback((|| async { "fallback one" }).into_service());
-
- let two = Router::new()
- .route("/two", get(|| async {}))
- .fallback((|| async { "fallback two" }).into_service());
-
- let app = one.merge(two);
-
- let client = TestClient::new(app);
-
- assert_eq!(client.get("/one").send().await.status(), StatusCode::OK);
- assert_eq!(client.get("/two").send().await.status(), StatusCode::OK);
-
- let res = client.get("/does-not-exist").send().await;
- assert_eq!(res.status(), StatusCode::OK);
- assert_eq!(res.text().await, "fallback two");
-}
diff --git a/axum/src/routing/tests/mod.rs b/axum/src/routing/tests/mod.rs
--- a/axum/src/routing/tests/mod.rs
+++ b/axum/src/routing/tests/mod.rs
@@ -566,3 +566,21 @@ async fn different_methods_added_in_different_routes_deeply_nested() {
let body = res.text().await;
assert_eq!(body, "POST");
}
+
+#[tokio::test]
+#[should_panic(expected = "Cannot merge two `Router`s that both have a fallback")]
+async fn merging_routers_with_fallbacks_panics() {
+ async fn fallback() {}
+ let one = Router::new().fallback(fallback.into_service());
+ let two = Router::new().fallback(fallback.into_service());
+ TestClient::new(one.merge(two));
+}
+
+#[tokio::test]
+#[should_panic(expected = "Cannot nest `Router`s that has a fallback")]
+async fn nesting_router_with_fallbacks_panics() {
+ async fn fallback() {}
+ let one = Router::new().fallback(fallback.into_service());
+ let app = Router::new().nest("/", one);
+ TestClient::new(app);
+}
| Panic if merging two routers that each have a fallback
Currently, when you `Router::merge` two routers that each have a fallback, it'll pick the fallback of the router on the right-hand side; if only one has a fallback, it'll pick that one. This might lead users to think that multiple fallbacks are supported (see https://github.com/tokio-rs/axum/discussions/480#discussioncomment-1601938), which they aren't.
I'm wondering if, in 0.4, we should change it such that merging two routers that each have a fallback causes a panic telling you that multiple fallbacks are not supported. This would also be consistent with the general routing which panics on overlapping routes.
The reason multiple fallbacks are not allowed is that `Router` tries hard to merge all your routes in one `matchit::Node` rather than nesting them somehow. This leads to significantly nicer internals and better performance.
@jplatte what do you think?
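The rule the patch adopts can be sketched with a toy model, independent of axum's real `Fallback` type (the function name and the `Option`-based model here are hypothetical):

```rust
use std::panic;

// Toy model of merging two routers' fallbacks: keep the one that exists,
// panic if both routers define one (Router only supports a single fallback).
fn merge_fallbacks<T>(left: Option<T>, right: Option<T>) -> Option<T> {
    match (left, right) {
        (Some(_), Some(_)) => panic!("Cannot merge two `Router`s that both have a fallback"),
        (Some(f), None) | (None, Some(f)) => Some(f),
        (None, None) => None,
    }
}

fn main() {
    // Only one side has a custom fallback: it is kept.
    assert_eq!(merge_fallbacks(Some("custom"), None), Some("custom"));
    assert_eq!(merge_fallbacks(None, Some("custom")), Some("custom"));

    // Both sides have one: merging panics instead of silently discarding.
    let result = panic::catch_unwind(|| merge_fallbacks(Some("a"), Some("b")));
    assert!(result.is_err());
    println!("ok");
}
```

Panicking at build time of the router is consistent with how overlapping routes are already handled.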
| 👍 for panicking. Silently discarding a fallback service will almost definitely end up causing issue for someone. | 2021-11-17T06:44:32 | 0.3 | 939995e80edaf4e882a6e2decb07db5a7c0a2c06 | [
"routing::tests::merging_routers_with_fallbacks_panics - should panic",
"routing::tests::nesting_router_with_fallbacks_panics - should panic"
] | [
"body::stream_body::stream_body_traits",
"error_handling::traits",
"extract::connect_info::traits",
"extract::extractor_middleware::traits",
"extract::path::de::tests::test_parse_map",
"extract::path::de::tests::test_parse_struct",
"extract::path::de::tests::test_parse_seq",
"extract::form::tests::tes... | [] | [] | 32 |
sharkdp/bat | 1,556 | sharkdp__bat-1556 | [
"1550"
] | c569774e1a8528fab91225dabdadf53fde3916ea | diff --git a/CHANGELOG.md b/CHANGELOG.md
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -16,6 +16,7 @@
- If `PAGER` (but not `BAT_PAGER` or `--pager`) is `more` or `most`, silently use `less` instead to ensure support for colors, see #1063 (@Enselic)
- If `PAGER` is `bat`, silently use `less` to prevent recursion. For `BAT_PAGER` or `--pager`, exit with error, see #1413 (@Enselic)
- Manpage highlighting fix, see #1511 (@keith-hall)
+- `BAT_CONFIG_PATH` ignored by `bat` if non-existent, see #1550 (@sharkdp)
## Other
diff --git a/src/bin/bat/config.rs b/src/bin/bat/config.rs
--- a/src/bin/bat/config.rs
+++ b/src/bin/bat/config.rs
@@ -10,13 +10,12 @@ pub fn config_file() -> PathBuf {
env::var("BAT_CONFIG_PATH")
.ok()
.map(PathBuf::from)
- .filter(|config_path| config_path.is_file())
.unwrap_or_else(|| PROJECT_DIRS.config_dir().join("config"))
}
pub fn generate_config_file() -> bat::error::Result<()> {
let config_file = config_file();
- if config_file.exists() {
+ if config_file.is_file() {
println!(
"A config file already exists at: {}",
config_file.to_string_lossy()
diff --git a/src/bin/bat/config.rs b/src/bin/bat/config.rs
--- a/src/bin/bat/config.rs
+++ b/src/bin/bat/config.rs
@@ -71,7 +70,14 @@ pub fn generate_config_file() -> bat::error::Result<()> {
#--map-syntax ".ignore:Git Ignore"
"#;
- fs::write(&config_file, default_config)?;
+ fs::write(&config_file, default_config).map_err(|e| {
+ format!(
+ "Failed to create config file at '{}': {}",
+ config_file.to_string_lossy(),
+ e
+ )
+ })?;
+
println!(
"Success! Config file written to {}",
config_file.to_string_lossy()
| diff --git a/tests/integration_tests.rs b/tests/integration_tests.rs
--- a/tests/integration_tests.rs
+++ b/tests/integration_tests.rs
@@ -652,6 +652,13 @@ fn config_location_test() {
.assert()
.success()
.stdout("bat.conf\n");
+
+ bat_with_config()
+ .env("BAT_CONFIG_PATH", "not-existing.conf")
+ .arg("--config-file")
+ .assert()
+ .success()
+ .stdout("not-existing.conf\n");
}
#[test]
| BAT_CONFIG_PATH ignored by --generate-config-file
**What version of `bat` are you using?**
Newest
**Describe the bug you encountered:**

**What did you expect to happen instead?**
use custom location instead
**Windows version:**

| Thank you for reporting this. Looks like a bug indeed.
I can confirm this bug, which is generic. It is caused by
```
.filter(|config_path| config_path.is_file())
```
in
```
pub fn config_file() -> PathBuf {
env::var("BAT_CONFIG_PATH")
.ok()
.map(PathBuf::from)
.filter(|config_path| config_path.is_file())
.unwrap_or_else(|| PROJECT_DIRS.config_dir().join("config"))
}
```
which makes sense when _reading_ config values, but of course not when _creating it_.
Made a proof of concept fix (see above), but am not happy with how it turned out. Plus, it needs integration tests. Will take care of that at some later point when I get time, unless someone beats me to it (which I encourage).
> which makes sense when _reading_ config values, but of course not when _creating it_.
Let's discuss this first. Maybe it doesn't make sense for _reading_ either.
Agreed: when *writing* with `--generate-config-file`, we should always simply write to the path returned by `config_file()` (maybe rename that function to `config_file_path()`). We should print an error if that path can not be written to (e.g. missing permission or path exists and is a directory).
When *reading* the config file, we could distinguish between two cases:
1. `BAT_CONFIG_PATH` has not been set => read from the default path, but ignore if it is not existent. Print an error in case of permission / file-type problems.
2. `BAT_CONFIG_PATH` has been set => I think we should always try to read from that path and maybe print an error/warning if it does not exist. Maybe we should even ignore it, just like in case 1. In case of permission / file-type problems, we print an error.
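A minimal sketch of that resolution order with the `.filter(…)` call removed — the `default_dir` parameter here is a hypothetical stand-in for `PROJECT_DIRS.config_dir()`:

```rust
use std::env;
use std::path::PathBuf;

// BAT_CONFIG_PATH wins even when the file does not exist yet, so that
// --generate-config-file can create the file at that path.
fn config_file(default_dir: &str) -> PathBuf {
    env::var("BAT_CONFIG_PATH")
        .ok()
        .map(PathBuf::from)
        .unwrap_or_else(|| PathBuf::from(default_dir).join("config"))
}

fn main() {
    // With the variable set, the custom path is returned as-is.
    env::set_var("BAT_CONFIG_PATH", "not-existing.conf");
    assert_eq!(config_file("/tmp"), PathBuf::from("not-existing.conf"));

    // Without it, fall back to the default config directory.
    env::remove_var("BAT_CONFIG_PATH");
    assert_eq!(config_file("/tmp"), PathBuf::from("/tmp/config"));
    println!("ok");
}
```

Existence and permission checks then become the caller's concern: reading ignores a missing file, while generating reports a write error at that path.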
I guess `config_file` would be most useful if it always just returns the path where `bat` expects the config to be. Even if that path does not exist. That would mean that we could simply remove the `.filter(…)` call. | 2021-02-27T19:22:32 | 0.17 | c569774e1a8528fab91225dabdadf53fde3916ea | [
"config_location_test"
] | [
"config::default_config_should_highlight_no_lines",
"config::default_config_should_include_all_lines",
"input::basic",
"input::utf16le",
"less::test_parse_less_version_487",
"less::test_parse_less_version_wrong_program",
"less::test_parse_less_version_551",
"less::test_parse_less_version_529",
"line... | [] | [] | 33 |
sharkdp/bat | 2,698 | sharkdp__bat-2698 | [
"2674"
] | 8e35a567121a7186d77e12e249490210c6eb75a9 | diff --git a/CHANGELOG.md b/CHANGELOG.md
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -10,6 +10,8 @@
- Fix `more` not being found on Windows when provided via `BAT_PAGER`, see #2570, #2580, and #2651 (@mataha)
- Switched default behavior of `--map-syntax` to be case insensitive #2520
- Updated version of `serde_yaml` to `0.9`. See #2627 (@Raghav-Bell)
+- Fixed arithmetic overflow in `LineRange::from`, see #2674 (@skoriop)
+
## Other
- Output directory for generated assets (completion, manual) can be customized, see #2515 (@tranzystorek-io)
diff --git a/src/line_range.rs b/src/line_range.rs
--- a/src/line_range.rs
+++ b/src/line_range.rs
@@ -53,7 +53,7 @@ impl LineRange {
let more_lines = &line_numbers[1][1..]
.parse()
.map_err(|_| "Invalid character after +")?;
- new_range.lower + more_lines
+ new_range.lower.saturating_add(*more_lines)
} else if first_byte == Some(b'-') {
// this will prevent values like "-+5" even though "+5" is valid integer
if line_numbers[1][1..].bytes().next() == Some(b'+') {
| diff --git a/src/line_range.rs b/src/line_range.rs
--- a/src/line_range.rs
+++ b/src/line_range.rs
@@ -128,6 +128,13 @@ fn test_parse_plus() {
assert_eq!(50, range.upper);
}
+#[test]
+fn test_parse_plus_overflow() {
+ let range = LineRange::from(&format!("{}:+1", usize::MAX)).expect("Shouldn't fail on test!");
+ assert_eq!(usize::MAX, range.lower);
+ assert_eq!(usize::MAX, range.upper);
+}
+
#[test]
fn test_parse_plus_fail() {
let range = LineRange::from("40:+z");
| Arithmetic overflow occurs while using API LineRange::from()
**What steps will reproduce the bug?**
I executed fuzz testing against `bat`'s public APIs and found a crash case.
~~~
let fuzz_arg1 = "18446744073709551615:+1";
LineRange::from(fuzz_arg1);
~~~
**What happens?**
~~~
Thread '<unnamed>' panicked at 'attempt to add with overflow', /rustc/871b5952023139738f72eba235063575062bc2e9/library/core/src/ops/arith.rs:109
~~~
The overflow occurs with this statement:
https://github.com/sharkdp/bat/blob/5a240f36b9251fd6513316333acb969a4d347b9d/src/line_range.rs#L56
**What did you expect to happen instead?**
An assertion is needed to prevent the arithmetic overflow, or the panic condition should be documented explicitly.
| Maybe `new_range.lower.saturating_add(more_lines)` can be done here to avoid panicking.
> Maybe `new_range.lower.saturating_add(more_lines)` can be done here to avoid panicking.
Looks nice, better than `assert!`.
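The difference is easy to demonstrate with plain `usize` arithmetic — this sketch reproduces the failing input from the report:

```rust
fn main() {
    // The reported crash input: the lower bound is usize::MAX
    // ("18446744073709551615") and the "+1" suffix asks for one line more.
    let lower: usize = usize::MAX;
    let more_lines: usize = 1;

    // `lower + more_lines` panics in debug builds with
    // "attempt to add with overflow"; saturating_add clamps instead.
    let upper = lower.saturating_add(more_lines);
    assert_eq!(upper, usize::MAX);

    // checked_add makes the overflow observable without panicking.
    assert_eq!(lower.checked_add(more_lines), None);
    println!("ok");
}
```

Saturation is the right fit here because a line range clamped to `usize::MAX` still means "to the end of the file", so no error path is needed.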
"line_range::test_parse_plus_overflow"
] | [
"assets::build_assets::acknowledgements::tests::test_append_to_acknowledgements_adds_newline_if_missing",
"assets::build_assets::acknowledgements::tests::test_normalize_license_text",
"assets::build_assets::acknowledgements::tests::test_normalize_license_text_with_windows_line_endings",
"config::default_confi... | [
"long_help",
"short_help"
] | [] | 34 |
bevyengine/bevy | 10,211 | bevyengine__bevy-10211 | [
"10207"
] | 6f27e0e35faffbf2b77807bb222d3d3a9a529210 | diff --git a/crates/bevy_ecs/macros/src/lib.rs b/crates/bevy_ecs/macros/src/lib.rs
--- a/crates/bevy_ecs/macros/src/lib.rs
+++ b/crates/bevy_ecs/macros/src/lib.rs
@@ -201,6 +201,10 @@ pub fn impl_param_set(_input: TokenStream) -> TokenStream {
#param::init_state(world, &mut #meta);
let #param = #param::init_state(world, &mut system_meta.clone());
)*
+ // Make the ParamSet non-send if any of its parameters are non-send.
+ if false #(|| !#meta.is_send())* {
+ system_meta.set_non_send();
+ }
#(
system_meta
.component_access_set
| diff --git a/crates/bevy_ecs/src/system/system_param.rs b/crates/bevy_ecs/src/system/system_param.rs
--- a/crates/bevy_ecs/src/system/system_param.rs
+++ b/crates/bevy_ecs/src/system/system_param.rs
@@ -1739,4 +1739,34 @@ mod tests {
schedule.add_systems(non_sync_system);
schedule.run(&mut world);
}
+
+ // Regression test for https://github.com/bevyengine/bevy/issues/10207.
+ #[test]
+ fn param_set_non_send_first() {
+ fn non_send_param_set(mut p: ParamSet<(NonSend<*mut u8>, ())>) {
+ let _ = p.p0();
+ p.p1();
+ }
+
+ let mut world = World::new();
+ world.insert_non_send_resource(std::ptr::null_mut::<u8>());
+ let mut schedule = crate::schedule::Schedule::default();
+ schedule.add_systems((non_send_param_set, non_send_param_set, non_send_param_set));
+ schedule.run(&mut world);
+ }
+
+ // Regression test for https://github.com/bevyengine/bevy/issues/10207.
+ #[test]
+ fn param_set_non_send_second() {
+ fn non_send_param_set(mut p: ParamSet<((), NonSendMut<*mut u8>)>) {
+ p.p0();
+ let _ = p.p1();
+ }
+
+ let mut world = World::new();
+ world.insert_non_send_resource(std::ptr::null_mut::<u8>());
+ let mut schedule = crate::schedule::Schedule::default();
+ schedule.add_systems((non_send_param_set, non_send_param_set, non_send_param_set));
+ schedule.run(&mut world);
+ }
}
| `NonSendMut` is allowed to run on the wrong thread when used in a `ParamSet`
## Bevy version
0.11
## What you did
I put a `NonSendMut<T>` type in a `ParamSet`.
```rust
use bevy::prelude::*;
fn main() {
App::new()
.add_plugins(DefaultPlugins)
.insert_non_send_resource(T(std::ptr::null_mut()))
.add_systems(Update, param_set_crash)
.run();
}
struct T(*mut T);
fn param_set_crash(mut p: ParamSet<(NonSend<T>, NonSendMut<T>)>) {
let _ = p.p0();
}
```
## What went wrong
```
thread 'Compute Task Pool (14)' panicked at 'Attempted to access or drop non-send resource debug_ui_crash::T from thread Some(ThreadId(1)) on a thread ThreadId(24). This is not allowed. Aborting.', /home/joseph/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ecs-0.11.3/src/storage/resource.rs:56:13
```
Bevy crashed due to the resource being accessed from the wrong thread. Likely, the `ParamSet` made the scheduler unaware that there was a `NonSend` resource in the system.
## Additional information
You can easily work around this by adding a `NonSend<()>` marker parameter to the system outside of the ParamSet.
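The macro fix above ORs together the `is_send` flags of the member params when building the `ParamSet`'s metadata; the effect can be sketched as follows (the function here is hypothetical, not Bevy API):

```rust
// A ParamSet is send only if every member parameter is send; one non-send
// member (e.g. NonSendMut<T>) forces the whole set onto the main thread.
fn param_set_is_send(member_params_are_send: &[bool]) -> bool {
    member_params_are_send.iter().all(|&is_send| is_send)
}

fn main() {
    // e.g. (Query<...>, Res<...>) — all members send, set stays send.
    assert!(param_set_is_send(&[true, true]));

    // e.g. (NonSend<T>, ()) — one non-send member marks the whole set non-send.
    assert!(!param_set_is_send(&[false, true]));
    println!("ok");
}
```

With that flag set, the scheduler pins the system to the main thread instead of letting the panic in `bevy_ecs`'s resource storage fire at access time.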
| 2023-10-21T10:17:22 | 1.70 | 6f27e0e35faffbf2b77807bb222d3d3a9a529210 | [
"system::system_param::tests::param_set_non_send_first",
"system::system_param::tests::param_set_non_send_second"
] | [
"change_detection::tests::map_mut",
"change_detection::tests::mut_from_non_send_mut",
"change_detection::tests::mut_new",
"change_detection::tests::as_deref_mut",
"change_detection::tests::mut_untyped_to_reflect",
"change_detection::tests::mut_from_res_mut",
"change_detection::tests::change_tick_wraparo... | [] | [] | 35 | |
bevyengine/bevy | 12,574 | bevyengine__bevy-12574 | [
"12570"
] | 7c7d1e8a6442a4258896b6c605beb1bf50399396 | diff --git a/crates/bevy_math/src/cubic_splines.rs b/crates/bevy_math/src/cubic_splines.rs
--- a/crates/bevy_math/src/cubic_splines.rs
+++ b/crates/bevy_math/src/cubic_splines.rs
@@ -2,6 +2,7 @@
use std::{
fmt::Debug,
+ iter::once,
ops::{Add, Div, Mul, Sub},
};
diff --git a/crates/bevy_math/src/cubic_splines.rs b/crates/bevy_math/src/cubic_splines.rs
--- a/crates/bevy_math/src/cubic_splines.rs
+++ b/crates/bevy_math/src/cubic_splines.rs
@@ -172,7 +173,8 @@ impl<P: Point> CubicGenerator<P> for CubicHermite<P> {
}
/// A spline interpolated continuously across the nearest four control points, with the position of
-/// the curve specified at every control point and the tangents computed automatically.
+/// the curve specified at every control point and the tangents computed automatically. The associated [`CubicCurve`]
+/// has one segment between each pair of adjacent control points.
///
/// **Note** the Catmull-Rom spline is a special case of Cardinal spline where the tension is 0.5.
///
diff --git a/crates/bevy_math/src/cubic_splines.rs b/crates/bevy_math/src/cubic_splines.rs
--- a/crates/bevy_math/src/cubic_splines.rs
+++ b/crates/bevy_math/src/cubic_splines.rs
@@ -183,8 +185,8 @@ impl<P: Point> CubicGenerator<P> for CubicHermite<P> {
/// Tangents are automatically computed based on the positions of control points.
///
/// ### Continuity
-/// The curve is at minimum C0 continuous, meaning it has no holes or jumps. It is also C1, meaning the
-/// tangent vector has no sudden jumps.
+/// The curve is at minimum C1, meaning that it is continuous (it has no holes or jumps), and its tangent
+/// vector is also well-defined everywhere, without sudden jumps.
///
/// ### Usage
///
diff --git a/crates/bevy_math/src/cubic_splines.rs b/crates/bevy_math/src/cubic_splines.rs
--- a/crates/bevy_math/src/cubic_splines.rs
+++ b/crates/bevy_math/src/cubic_splines.rs
@@ -232,10 +234,28 @@ impl<P: Point> CubicGenerator<P> for CubicCardinalSpline<P> {
[-s, 2. - s, s - 2., s],
];
- let segments = self
- .control_points
+ let length = self.control_points.len();
+
+ // Early return to avoid accessing an invalid index
+ if length < 2 {
+ return CubicCurve { segments: vec![] };
+ }
+
+ // Extend the list of control points by mirroring the last second-to-last control points on each end;
+ // this allows tangents for the endpoints to be provided, and the overall effect is that the tangent
+ // at an endpoint is proportional to twice the vector between it and its adjacent control point.
+ //
+ // The expression used here is P_{-1} := P_0 - (P_1 - P_0) = 2P_0 - P_1. (Analogously at the other end.)
+ let mirrored_first = self.control_points[0] * 2. - self.control_points[1];
+ let mirrored_last = self.control_points[length - 1] * 2. - self.control_points[length - 2];
+ let extended_control_points = once(&mirrored_first)
+ .chain(self.control_points.iter())
+ .chain(once(&mirrored_last))
+ .collect::<Vec<_>>();
+
+ let segments = extended_control_points
.windows(4)
- .map(|p| CubicSegment::coefficients([p[0], p[1], p[2], p[3]], char_matrix))
+ .map(|p| CubicSegment::coefficients([*p[0], *p[1], *p[2], *p[3]], char_matrix))
.collect();
CubicCurve { segments }
| diff --git a/crates/bevy_math/src/cubic_splines.rs b/crates/bevy_math/src/cubic_splines.rs
--- a/crates/bevy_math/src/cubic_splines.rs
+++ b/crates/bevy_math/src/cubic_splines.rs
@@ -1275,6 +1295,37 @@ mod tests {
assert_eq!(bezier.ease(1.0), 1.0);
}
+ /// Test that a simple cardinal spline passes through all of its control points with
+ /// the correct tangents.
+ #[test]
+ fn cardinal_control_pts() {
+ use super::CubicCardinalSpline;
+
+ let tension = 0.2;
+ let [p0, p1, p2, p3] = [vec2(-1., -2.), vec2(0., 1.), vec2(1., 2.), vec2(-2., 1.)];
+ let curve = CubicCardinalSpline::new(tension, [p0, p1, p2, p3]).to_curve();
+
+ // Positions at segment endpoints
+ assert!(curve.position(0.).abs_diff_eq(p0, FLOAT_EQ));
+ assert!(curve.position(1.).abs_diff_eq(p1, FLOAT_EQ));
+ assert!(curve.position(2.).abs_diff_eq(p2, FLOAT_EQ));
+ assert!(curve.position(3.).abs_diff_eq(p3, FLOAT_EQ));
+
+ // Tangents at segment endpoints
+ assert!(curve
+ .velocity(0.)
+ .abs_diff_eq((p1 - p0) * tension * 2., FLOAT_EQ));
+ assert!(curve
+ .velocity(1.)
+ .abs_diff_eq((p2 - p0) * tension, FLOAT_EQ));
+ assert!(curve
+ .velocity(2.)
+ .abs_diff_eq((p3 - p1) * tension, FLOAT_EQ));
+ assert!(curve
+ .velocity(3.)
+ .abs_diff_eq((p3 - p2) * tension * 2., FLOAT_EQ));
+ }
+
/// Test that [`RationalCurve`] properly generalizes [`CubicCurve`]. A Cubic upgraded to a rational
/// should produce pretty much the same output.
#[test]
| Somethin' ain't right with cardinal splines
## The issue
There is a mismatch between what the [documentation says](https://docs.rs/bevy/latest/bevy/math/cubic_splines/struct.CubicCardinalSpline.html) and how `CubicCardinalSpline` actually works. Namely, this:
> ### Interpolation
>
> The curve passes through every control point.
As implemented, this is not true. Instead, the curve passes through only the interior control points — that is, it doesn't actually go through the points at the ends.
## Example
Here is a complete example (courtesy of @afonsolage):
```rust
use bevy::math::prelude::*;
use bevy::math::vec2;
fn main() {
let control_points = [
vec2(-1.0, 50.0),
vec2(0.3, 100.0),
vec2(0.4, 150.0),
vec2(1.0, 150.0),
];
let curve = CubicCardinalSpline::new(0.1, control_points).to_curve();
for position in curve.iter_positions(10) {
println!("Position: {position:?}");
}
}
```
This produces the following output:
```
Position: Vec2(0.3, 100.0)
Position: Vec2(0.31351003, 102.165)
Position: Vec2(0.32608, 106.32)
Position: Vec2(0.33777002, 111.955)
Position: Vec2(0.34864, 118.56)
Position: Vec2(0.35875, 125.625)
Position: Vec2(0.36816, 132.64)
Position: Vec2(0.37693, 139.095)
Position: Vec2(0.38511997, 144.48)
Position: Vec2(0.39279, 148.28499)
Position: Vec2(0.39999998, 150.0)
```
As one can see, the curve only extends between 0.3 and 0.4, where the interior control points are defined. If you want it to actually pass through all those points, you need to extend your control sequence to a length of six, so that your desired endpoints become interior.
## Background
Cardinal splines connect a series of points `P_i` with a sequence of cubic segments between `P_i` and `P_{i+1}` so that:
1. The curve segment `B_i` from `P_i` to `P_{i+1}` starts at `P_i` and ends at `P_{i+1}`
2. The derivatives of `B_i` and `B_{i+1}` at `P_{i+1}` coincide
Property (1) happens by using `P_i` and `P_{i+1}` as the outermost control points of the Bézier curve `B_i`. Property (2) is ensured by making the third control point of `B_i`, the second control point of `B_{i+1}`, and `P_{i+1}` all collinear; the point of doing this is that it makes the union of cubic segments differentiable (i.e. it doesn't have "kinks").
The general construction is such that the derivative at `P_i` is given by the difference between `P_{i-1}` and `P_{i+1}` — i.e., the tangent to the curve at `P_i` is parallel to the line between the points preceding and following it. I believe this part is implemented at present; however, this aspect of the construction leaves one thing open: what do we do at the endpoints (where `P_{i+1}` or `P_{i-1}` doesn't exist)?
A standard choice is to just use the line from `P_0` to `P_1` to determine the tangent at `P_0` (and analogously for the other endpoint). Of course, there are other choices that one could make as well (e.g. just requiring the tangents for those to be specified manually).
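The endpoint rule described here can be sketched in dependency-free Rust. The scaling matches the velocity assertions in the test at the top of this patch (tangents doubled at the two ends, neighbor differences scaled by `tension` in the interior); the function name and sample points are made up for illustration:

```rust
/// Tangents for a cardinal spline through `points`: interior tangents are
/// `tension * (p[i+1] - p[i-1])`; at the two ends, where one neighbor is
/// missing, the single adjacent chord is used, doubled so the derivative
/// magnitude matches the interior convention.
fn cardinal_tangents(points: &[(f64, f64)], tension: f64) -> Vec<(f64, f64)> {
    let n = points.len();
    assert!(n >= 2, "need at least two control points");
    let diff = |a: (f64, f64), b: (f64, f64), k: f64| ((b.0 - a.0) * k, (b.1 - a.1) * k);
    (0..n)
        .map(|i| {
            if i == 0 {
                // No p[i-1]: use the chord p0 -> p1, doubled.
                diff(points[0], points[1], tension * 2.0)
            } else if i == n - 1 {
                // No p[i+1]: use the chord p[n-2] -> p[n-1], doubled.
                diff(points[n - 2], points[n - 1], tension * 2.0)
            } else {
                // Interior: tangent parallel to the line p[i-1] -> p[i+1].
                diff(points[i - 1], points[i + 1], tension)
            }
        })
        .collect()
}

fn main() {
    let pts = [(-1.0, -2.0), (0.0, 2.0), (1.0, 2.0), (-2.0, 0.0)];
    for t in cardinal_tangents(&pts, 0.5) {
        println!("{t:?}");
    }
}
```

With this rule the resulting curve passes through all four points instead of only the interior two.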
Presently, either by accident or on purpose, we just exclude those points. Either way, the documentation is currently misleading about it.
## Which way, spline implementor?
There are basically two paths forward here:
1. Update the documentation so that it is truthful about how the endpoints are only used to determine the tangents for the interior points.
2. Update the cardinal spline implementation so that it actually passes through all the control points.
I think it's pretty clear that I would not have written such a long issue if I did not prefer (2) (which I believe does a good job of meeting expectations and providing ergonomics for curve interpolation between a number of points), but there is a downside: in principle, the current implementation allows the user to indirectly specify the tangent at the endpoints by choosing their control points carefully. (2) is also a breaking change.
I am unsure if this is a bug or if the implementation is this way intentionally, so I have written it as a docs issue.
| Additional context which originated the issue: [on discord help channel](https://discordapp.com/channels/691052431525675048/1219580704824889424/1219580704824889424).
Looks like a bug to me. Docs state the classic properties of cardinals. I will need to review the history to see if this issue existed before I reworded the spline docs; it's possible I introduced this.
Let's fix the implementation. | 2024-03-20T00:25:28 | 1.76 | c9ec95d7827297528d0779e3fd232dfc2e3cbed7 | [
"cubic_splines::tests::cardinal_control_pts"
] | [
"bounding::bounded2d::aabb2d_tests::area",
"bounding::bounded2d::aabb2d_tests::center",
"bounding::bounded2d::aabb2d_tests::closest_point",
"bounding::bounded2d::aabb2d_tests::grow",
"bounding::bounded2d::aabb2d_tests::contains",
"bounding::bounded2d::aabb2d_tests::half_size",
"bounding::bounded2d::aabb... | [] | [] | 36 |
bevyengine/bevy | 10,103 | bevyengine__bevy-10103 | [
"10086"
] | 88599d7fa06dff6bba434f3a91b57bf8f483f158 | diff --git a/crates/bevy_reflect/src/serde/ser.rs b/crates/bevy_reflect/src/serde/ser.rs
--- a/crates/bevy_reflect/src/serde/ser.rs
+++ b/crates/bevy_reflect/src/serde/ser.rs
@@ -68,7 +68,22 @@ impl<'a> Serialize for ReflectSerializer<'a> {
{
let mut state = serializer.serialize_map(Some(1))?;
state.serialize_entry(
- self.value.reflect_type_path(),
+ self.value
+ .get_represented_type_info()
+ .ok_or_else(|| {
+ if self.value.is_dynamic() {
+ Error::custom(format_args!(
+ "cannot serialize dynamic value without represented type: {}",
+ self.value.reflect_type_path()
+ ))
+ } else {
+ Error::custom(format_args!(
+ "cannot get type info for {}",
+ self.value.reflect_type_path()
+ ))
+ }
+ })?
+ .type_path(),
&TypedReflectSerializer::new(self.value, self.registry),
)?;
state.end()
diff --git a/crates/bevy_reflect/src/type_path.rs b/crates/bevy_reflect/src/type_path.rs
--- a/crates/bevy_reflect/src/type_path.rs
+++ b/crates/bevy_reflect/src/type_path.rs
@@ -183,6 +183,10 @@ impl fmt::Debug for TypePathTable {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_struct("TypePathVtable")
.field("type_path", &self.type_path)
+ .field("short_type_path", &(self.short_type_path)())
+ .field("type_ident", &(self.type_ident)())
+ .field("crate_name", &(self.crate_name)())
+ .field("module_path", &(self.module_path)())
.finish()
}
}
| diff --git a/crates/bevy_reflect/src/serde/mod.rs b/crates/bevy_reflect/src/serde/mod.rs
--- a/crates/bevy_reflect/src/serde/mod.rs
+++ b/crates/bevy_reflect/src/serde/mod.rs
@@ -8,7 +8,7 @@ pub use type_data::*;
#[cfg(test)]
mod tests {
- use crate::{self as bevy_reflect, DynamicTupleStruct};
+ use crate::{self as bevy_reflect, DynamicTupleStruct, Struct};
use crate::{
serde::{ReflectSerializer, UntypedReflectDeserializer},
type_registry::TypeRegistry,
diff --git a/crates/bevy_reflect/src/serde/mod.rs b/crates/bevy_reflect/src/serde/mod.rs
--- a/crates/bevy_reflect/src/serde/mod.rs
+++ b/crates/bevy_reflect/src/serde/mod.rs
@@ -94,8 +94,10 @@ mod tests {
}
#[test]
- #[should_panic(expected = "cannot get type info for bevy_reflect::DynamicStruct")]
- fn unproxied_dynamic_should_not_serialize() {
+ #[should_panic(
+ expected = "cannot serialize dynamic value without represented type: bevy_reflect::DynamicStruct"
+ )]
+ fn should_not_serialize_unproxied_dynamic() {
let registry = TypeRegistry::default();
let mut value = DynamicStruct::default();
diff --git a/crates/bevy_reflect/src/serde/mod.rs b/crates/bevy_reflect/src/serde/mod.rs
--- a/crates/bevy_reflect/src/serde/mod.rs
+++ b/crates/bevy_reflect/src/serde/mod.rs
@@ -104,4 +106,36 @@ mod tests {
let serializer = ReflectSerializer::new(&value, ®istry);
ron::ser::to_string(&serializer).unwrap();
}
+
+ #[test]
+ fn should_roundtrip_proxied_dynamic() {
+ #[derive(Reflect)]
+ struct TestStruct {
+ a: i32,
+ b: i32,
+ }
+
+ let mut registry = TypeRegistry::default();
+ registry.register::<TestStruct>();
+
+ let value: DynamicStruct = TestStruct { a: 123, b: 456 }.clone_dynamic();
+
+ let serializer = ReflectSerializer::new(&value, ®istry);
+
+ let expected = r#"{"bevy_reflect::serde::tests::TestStruct":(a:123,b:456)}"#;
+ let result = ron::ser::to_string(&serializer).unwrap();
+ assert_eq!(expected, result);
+
+ let mut deserializer = ron::de::Deserializer::from_str(&result).unwrap();
+ let reflect_deserializer = UntypedReflectDeserializer::new(®istry);
+
+ let expected = value.clone_value();
+ let result = reflect_deserializer
+ .deserialize(&mut deserializer)
+ .unwrap()
+ .take::<DynamicStruct>()
+ .unwrap();
+
+ assert!(expected.reflect_partial_eq(&result).unwrap());
+ }
}
| Potential reflect issue: DynamicStruct not registered
## Bevy version
version: current main branch
## What you did
While migrating my crate to the current bevy main, I am facing the following error: "No registration found for bevy_reflect::DynamicStruct" when deserializing a reflected component.
This worked in bevy 0.11.
The current branch where I'm working, and my test failing is: https://github.com/raffaeleragni/bevy_sync/blob/bevy_0_12/src/proto_serde.rs#L122
| @MrGVSV I don't follow exactly what's going on / broke here, but this looks like a regression.
Yeah this seems to be an issue with the recent changes to how `TypePath` is used in the reflection serializer. I'll make a PR! | 2023-10-13T07:39:42 | 1.70 | 6f27e0e35faffbf2b77807bb222d3d3a9a529210 | [
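The fix that landed (visible in the patch above) keys serialization on the path of the type a dynamic value *represents*, and errors out when an unproxied dynamic value has none. A dependency-free toy model of that logic follows; apart from the error message, every name here is invented and is not bevy_reflect's real API:

```rust
// Toy stand-in for a dynamic value: it may or may not carry the type path
// of the concrete type it represents (a proxied vs. unproxied dynamic).
struct DynamicStruct {
    represented_type_path: Option<&'static str>,
}

/// Pick the map key used during serialization: the represented type's path,
/// or a descriptive error when no represented type is available.
fn serialize_key(value: &DynamicStruct) -> Result<&'static str, String> {
    value.represented_type_path.ok_or_else(|| {
        "cannot serialize dynamic value without represented type: \
         bevy_reflect::DynamicStruct"
            .to_string()
    })
}

fn main() {
    let proxied = DynamicStruct { represented_type_path: Some("my_crate::TestStruct") };
    let unproxied = DynamicStruct { represented_type_path: None };
    println!("{:?}", serialize_key(&proxied));
    println!("{:?}", serialize_key(&unproxied));
}
```

Serializing under the represented path is what lets the deserializer find a registration for `my_crate::TestStruct` instead of looking up (and failing on) `DynamicStruct` itself.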
"serde::tests::should_not_serialize_unproxied_dynamic - should panic",
"serde::tests::should_roundtrip_proxied_dynamic"
] | [
"enums::tests::applying_non_enum_should_panic - should panic",
"enums::tests::dynamic_enum_should_apply_dynamic_enum",
"enums::tests::enum_should_allow_struct_fields",
"enums::tests::dynamic_enum_should_change_variant",
"enums::tests::dynamic_enum_should_set_variant_fields",
"enums::tests::enum_should_ret... | [] | [] | 37 |
bincode-org/bincode | 217 | bincode-org__bincode-217 | [
"215"
] | 882224363716621d4c9bbbbcc4bde8b4f7e1ba68 | diff --git a/Cargo.toml b/Cargo.toml
--- a/Cargo.toml
+++ b/Cargo.toml
@@ -18,8 +18,9 @@ description = "A binary serialization / deserialization strategy that uses Serde
[dependencies]
byteorder = "1.0.0"
-serde = "1.*.*"
+serde = "1.0.16"
[dev-dependencies]
serde_bytes = "0.10.*"
-serde_derive = "1.*.*"
+serde_derive = "1.0.16"
+
diff --git a/src/de/mod.rs b/src/de/mod.rs
--- a/src/de/mod.rs
+++ b/src/de/mod.rs
@@ -407,6 +407,10 @@ where
let message = "Bincode does not support Deserializer::deserialize_ignored_any";
Err(Error::custom(message))
}
+
+ fn is_human_readable(&self) -> bool {
+ false
+ }
}
impl<'de, 'a, R, S, E> serde::de::VariantAccess<'de> for &'a mut Deserializer<R, S, E>
diff --git a/src/ser/mod.rs b/src/ser/mod.rs
--- a/src/ser/mod.rs
+++ b/src/ser/mod.rs
@@ -203,6 +203,10 @@ impl<'a, W: Write, E: ByteOrder> serde::Serializer for &'a mut Serializer<W, E>
) -> Result<()> {
self.serialize_u32(variant_index)
}
+
+ fn is_human_readable(&self) -> bool {
+ false
+ }
}
pub struct SizeChecker<S: SizeLimit> {
diff --git a/src/ser/mod.rs b/src/ser/mod.rs
--- a/src/ser/mod.rs
+++ b/src/ser/mod.rs
@@ -392,6 +396,10 @@ impl<'a, S: SizeLimit> serde::Serializer for &'a mut SizeChecker<S> {
try!(self.add_value(variant_index));
value.serialize(self)
}
+
+ fn is_human_readable(&self) -> bool {
+ false
+ }
}
#[doc(hidden)]
| diff --git a/tests/test.rs b/tests/test.rs
--- a/tests/test.rs
+++ b/tests/test.rs
@@ -457,3 +457,12 @@ fn test_zero_copy_parse() {
assert_eq!(out, f);
}
}
+
+#[test]
+fn not_human_readable() {
+ use std::net::Ipv4Addr;
+ let ip = Ipv4Addr::new(1, 2, 3, 4);
+ the_same(ip);
+ assert_eq!(&ip.octets()[..], &serialize_little(&ip, Infinite).unwrap()[..]);
+ assert_eq!(::std::mem::size_of::<Ipv4Addr>() as u64, serialized_size(&ip));
+}
| Set is_human_readable to false
I have not released this yet but I will in the next few days. https://github.com/serde-rs/serde/pull/1044
```rust
trait Serializer {
/* ... */
fn is_human_readable(&self) -> bool { true }
}
trait Deserializer<'de> {
/* ... */
fn is_human_readable(&self) -> bool { true }
}
```
Bincode should override these to false to get more compact representation for types like IpAddr and Datetime.
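The mechanism is just a defaulted trait method that a binary format overrides. A dependency-free toy (only `is_human_readable` comes from the serde snippet above; every other name is invented) shows why the override buys a more compact representation for types like `IpAddr`:

```rust
// A format type overrides a defaulted trait method, and data types branch
// on it to pick a representation. Stand-in for serde's Serializer.
trait Format {
    fn is_human_readable(&self) -> bool {
        true // serde's default: text formats need no override
    }
}

struct Json;
struct Binary;

impl Format for Json {}
impl Format for Binary {
    fn is_human_readable(&self) -> bool {
        false // what bincode's Serializer/Deserializer should return
    }
}

/// An IPv4 address picks dotted-quad text or raw octets based on the format.
fn encode_ip(fmt: &dyn Format, octets: [u8; 4]) -> Vec<u8> {
    if fmt.is_human_readable() {
        format!("{}.{}.{}.{}", octets[0], octets[1], octets[2], octets[3]).into_bytes()
    } else {
        octets.to_vec() // 4 bytes instead of up to 15 characters
    }
}

fn main() {
    println!("{:?}", encode_ip(&Json, [1, 2, 3, 4]));
    println!("{:?}", encode_ip(&Binary, [1, 2, 3, 4]));
}
```

This is why the accompanying test asserts that a serialized `Ipv4Addr` is exactly its four octets.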
| 2017-11-02T18:16:55 | 0.9 | 882224363716621d4c9bbbbcc4bde8b4f7e1ba68 | [
"not_human_readable"
] | [
"bytes",
"deserializing_errors",
"char_serialization",
"encode_box",
"endian_difference",
"serde_bytes",
"path_buf",
"test_bool",
"test_basic_struct",
"test_enum",
"test_cow_serialize",
"test_multi_strings_serialize",
"test_map",
"test_fixed_size_array",
"test_nested_struct",
"test_pro... | [] | [] | 38 | |
bincode-org/bincode | 198 | bincode-org__bincode-198 | [
"193"
] | 131b2b213dedc34bea0edc98298770eb4eb0c74a | diff --git a/src/config/legacy.rs b/src/config/legacy.rs
--- a/src/config/legacy.rs
+++ b/src/config/legacy.rs
@@ -49,6 +49,7 @@ macro_rules! config_map {
(Unlimited, Little) => {
let $opts = DefaultOptions::new()
.with_fixint_encoding()
+ .allow_trailing_bytes()
.with_no_limit()
.with_little_endian();
$call
diff --git a/src/config/legacy.rs b/src/config/legacy.rs
--- a/src/config/legacy.rs
+++ b/src/config/legacy.rs
@@ -56,6 +57,7 @@ macro_rules! config_map {
(Unlimited, Big) => {
let $opts = DefaultOptions::new()
.with_fixint_encoding()
+ .allow_trailing_bytes()
.with_no_limit()
.with_big_endian();
$call
diff --git a/src/config/legacy.rs b/src/config/legacy.rs
--- a/src/config/legacy.rs
+++ b/src/config/legacy.rs
@@ -63,6 +65,7 @@ macro_rules! config_map {
(Unlimited, Native) => {
let $opts = DefaultOptions::new()
.with_fixint_encoding()
+ .allow_trailing_bytes()
.with_no_limit()
.with_native_endian();
$call
diff --git a/src/config/legacy.rs b/src/config/legacy.rs
--- a/src/config/legacy.rs
+++ b/src/config/legacy.rs
@@ -71,6 +74,7 @@ macro_rules! config_map {
(Limited(l), Little) => {
let $opts = DefaultOptions::new()
.with_fixint_encoding()
+ .allow_trailing_bytes()
.with_limit(l)
.with_little_endian();
$call
diff --git a/src/config/legacy.rs b/src/config/legacy.rs
--- a/src/config/legacy.rs
+++ b/src/config/legacy.rs
@@ -78,6 +82,7 @@ macro_rules! config_map {
(Limited(l), Big) => {
let $opts = DefaultOptions::new()
.with_fixint_encoding()
+ .allow_trailing_bytes()
.with_limit(l)
.with_big_endian();
$call
diff --git a/src/config/legacy.rs b/src/config/legacy.rs
--- a/src/config/legacy.rs
+++ b/src/config/legacy.rs
@@ -85,6 +90,7 @@ macro_rules! config_map {
(Limited(l), Native) => {
let $opts = DefaultOptions::new()
.with_fixint_encoding()
+ .allow_trailing_bytes()
.with_limit(l)
.with_native_endian();
$call
diff --git a/src/config/mod.rs b/src/config/mod.rs
--- a/src/config/mod.rs
+++ b/src/config/mod.rs
@@ -8,16 +8,19 @@ pub(crate) use self::endian::BincodeByteOrder;
pub(crate) use self::int::IntEncoding;
pub(crate) use self::internal::*;
pub(crate) use self::limit::SizeLimit;
+pub(crate) use self::trailing::TrailingBytes;
pub use self::endian::{BigEndian, LittleEndian, NativeEndian};
pub use self::int::{FixintEncoding, VarintEncoding};
pub use self::legacy::*;
pub use self::limit::{Bounded, Infinite};
+pub use self::trailing::{AllowTrailing, RejectTrailing};
mod endian;
mod int;
mod legacy;
mod limit;
+mod trailing;
/// The default options for bincode serialization/deserialization.
///
diff --git a/src/config/mod.rs b/src/config/mod.rs
--- a/src/config/mod.rs
+++ b/src/config/mod.rs
@@ -50,6 +53,7 @@ impl InternalOptions for DefaultOptions {
type Limit = Infinite;
type Endian = LittleEndian;
type IntEncoding = VarintEncoding;
+ type Trailing = RejectTrailing;
#[inline(always)]
fn limit(&mut self) -> &mut Infinite {
diff --git a/src/config/mod.rs b/src/config/mod.rs
--- a/src/config/mod.rs
+++ b/src/config/mod.rs
@@ -111,6 +115,16 @@ pub trait Options: InternalOptions + Sized {
WithOtherIntEncoding::new(self)
}
+ /// Sets the deserializer to reject trailing bytes
+ fn reject_trailing_bytes(self) -> WithOtherTrailing<Self, RejectTrailing> {
+ WithOtherTrailing::new(self)
+ }
+
+ /// Sets the deserializer to allow trailing bytes
+ fn allow_trailing_bytes(self) -> WithOtherTrailing<Self, AllowTrailing> {
+ WithOtherTrailing::new(self)
+ }
+
/// Serializes a serializable object into a `Vec` of bytes using this configuration
#[inline(always)]
fn serialize<S: ?Sized + serde::Serialize>(self, t: &S) -> Result<Vec<u8>> {
diff --git a/src/config/mod.rs b/src/config/mod.rs
--- a/src/config/mod.rs
+++ b/src/config/mod.rs
@@ -229,6 +243,12 @@ pub struct WithOtherIntEncoding<O: Options, I: IntEncoding> {
_length: PhantomData<I>,
}
+/// A configuration struct with a user-specified trailing bytes behavior.
+pub struct WithOtherTrailing<O: Options, T: TrailingBytes> {
+ options: O,
+ _trailing: PhantomData<T>,
+}
+
impl<O: Options, L: SizeLimit> WithOtherLimit<O, L> {
#[inline(always)]
pub(crate) fn new(options: O, limit: L) -> WithOtherLimit<O, L> {
diff --git a/src/config/mod.rs b/src/config/mod.rs
--- a/src/config/mod.rs
+++ b/src/config/mod.rs
@@ -259,11 +279,21 @@ impl<O: Options, I: IntEncoding> WithOtherIntEncoding<O, I> {
}
}
+impl<O: Options, T: TrailingBytes> WithOtherTrailing<O, T> {
+ #[inline(always)]
+ pub(crate) fn new(options: O) -> WithOtherTrailing<O, T> {
+ WithOtherTrailing {
+ options,
+ _trailing: PhantomData,
+ }
+ }
+}
+
impl<O: Options, E: BincodeByteOrder + 'static> InternalOptions for WithOtherEndian<O, E> {
type Limit = O::Limit;
type Endian = E;
type IntEncoding = O::IntEncoding;
-
+ type Trailing = O::Trailing;
#[inline(always)]
fn limit(&mut self) -> &mut O::Limit {
self.options.limit()
diff --git a/src/config/mod.rs b/src/config/mod.rs
--- a/src/config/mod.rs
+++ b/src/config/mod.rs
@@ -274,7 +304,7 @@ impl<O: Options, L: SizeLimit + 'static> InternalOptions for WithOtherLimit<O, L
type Limit = L;
type Endian = O::Endian;
type IntEncoding = O::IntEncoding;
-
+ type Trailing = O::Trailing;
fn limit(&mut self) -> &mut L {
&mut self.new_limit
}
diff --git a/src/config/mod.rs b/src/config/mod.rs
--- a/src/config/mod.rs
+++ b/src/config/mod.rs
@@ -284,6 +314,18 @@ impl<O: Options, I: IntEncoding + 'static> InternalOptions for WithOtherIntEncod
type Limit = O::Limit;
type Endian = O::Endian;
type IntEncoding = I;
+ type Trailing = O::Trailing;
+
+ fn limit(&mut self) -> &mut O::Limit {
+ self.options.limit()
+ }
+}
+
+impl<O: Options, T: TrailingBytes + 'static> InternalOptions for WithOtherTrailing<O, T> {
+ type Limit = O::Limit;
+ type Endian = O::Endian;
+ type IntEncoding = O::IntEncoding;
+ type Trailing = T;
fn limit(&mut self) -> &mut O::Limit {
self.options.limit()
diff --git a/src/config/mod.rs b/src/config/mod.rs
--- a/src/config/mod.rs
+++ b/src/config/mod.rs
@@ -297,6 +339,7 @@ mod internal {
type Limit: SizeLimit + 'static;
type Endian: BincodeByteOrder + 'static;
type IntEncoding: IntEncoding + 'static;
+ type Trailing: TrailingBytes + 'static;
fn limit(&mut self) -> &mut Self::Limit;
}
diff --git a/src/config/mod.rs b/src/config/mod.rs
--- a/src/config/mod.rs
+++ b/src/config/mod.rs
@@ -305,6 +348,7 @@ mod internal {
type Limit = O::Limit;
type Endian = O::Endian;
type IntEncoding = O::IntEncoding;
+ type Trailing = O::Trailing;
#[inline(always)]
fn limit(&mut self) -> &mut Self::Limit {
diff --git /dev/null b/src/config/trailing.rs
new file mode 100644
--- /dev/null
+++ b/src/config/trailing.rs
@@ -0,0 +1,37 @@
+use de::read::SliceReader;
+use {ErrorKind, Result};
+
+/// A trait for erroring deserialization if not all bytes were read.
+pub trait TrailingBytes {
+ /// Checks a given slice reader to determine if deserialization used all bytes in the slice.
+ fn check_end(reader: &SliceReader) -> Result<()>;
+}
+
+/// A TrailingBytes config that will allow trailing bytes in slices after deserialization.
+#[derive(Copy, Clone)]
+pub struct AllowTrailing;
+
+/// A TrailingBytes config that will cause bincode to produce an error if bytes are left over in the slice when deserialization is complete.
+
+#[derive(Copy, Clone)]
+pub struct RejectTrailing;
+
+impl TrailingBytes for AllowTrailing {
+ #[inline(always)]
+ fn check_end(_reader: &SliceReader) -> Result<()> {
+ Ok(())
+ }
+}
+
+impl TrailingBytes for RejectTrailing {
+ #[inline(always)]
+ fn check_end(reader: &SliceReader) -> Result<()> {
+ if reader.is_finished() {
+ Ok(())
+ } else {
+ Err(Box::new(ErrorKind::Custom(
+ "Slice had bytes remaining after deserialization".to_string(),
+ )))
+ }
+ }
+}
diff --git a/src/de/mod.rs b/src/de/mod.rs
--- a/src/de/mod.rs
+++ b/src/de/mod.rs
@@ -26,7 +26,7 @@ pub mod read;
/// let bytes_read = d.bytes_read();
/// ```
pub struct Deserializer<R, O: Options> {
- reader: R,
+ pub(crate) reader: R,
options: O,
}
diff --git a/src/de/read.rs b/src/de/read.rs
--- a/src/de/read.rs
+++ b/src/de/read.rs
@@ -52,6 +52,10 @@ impl<'storage> SliceReader<'storage> {
self.slice = remaining;
Ok(read_slice)
}
+
+ pub(crate) fn is_finished(&self) -> bool {
+ self.slice.is_empty()
+ }
}
impl<R> IoReader<R> {
diff --git a/src/internal.rs b/src/internal.rs
--- a/src/internal.rs
+++ b/src/internal.rs
@@ -2,7 +2,7 @@ use serde;
use std::io::{Read, Write};
use std::marker::PhantomData;
-use config::{Infinite, InternalOptions, Options, SizeLimit};
+use config::{Infinite, InternalOptions, Options, SizeLimit, TrailingBytes};
use de::read::BincodeRead;
use Result;
diff --git a/src/internal.rs b/src/internal.rs
--- a/src/internal.rs
+++ b/src/internal.rs
@@ -111,7 +111,14 @@ where
T: serde::de::DeserializeSeed<'a>,
O: InternalOptions,
{
- let reader = ::de::read::SliceReader::new(bytes);
let options = ::config::WithOtherLimit::new(options, Infinite);
- deserialize_from_custom_seed(seed, reader, options)
+
+ let reader = ::de::read::SliceReader::new(bytes);
+ let mut deserializer = ::de::Deserializer::with_bincode_read(reader, options);
+ let val = seed.deserialize(&mut deserializer)?;
+
+ match O::Trailing::check_end(&deserializer.reader) {
+ Ok(_) => Ok(val),
+ Err(err) => Err(err),
+ }
}
diff --git a/src/lib.rs b/src/lib.rs
--- a/src/lib.rs
+++ b/src/lib.rs
@@ -94,6 +94,7 @@ where
{
DefaultOptions::new()
.with_fixint_encoding()
+ .allow_trailing_bytes()
.serialize(value)
}
diff --git a/src/lib.rs b/src/lib.rs
--- a/src/lib.rs
+++ b/src/lib.rs
@@ -107,6 +108,7 @@ where
{
DefaultOptions::new()
.with_fixint_encoding()
+ .allow_trailing_bytes()
.deserialize_from(reader)
}
diff --git a/src/lib.rs b/src/lib.rs
--- a/src/lib.rs
+++ b/src/lib.rs
@@ -122,6 +124,7 @@ where
{
DefaultOptions::new()
.with_fixint_encoding()
+ .allow_trailing_bytes()
.deserialize_from_custom(reader)
}
diff --git a/src/lib.rs b/src/lib.rs
--- a/src/lib.rs
+++ b/src/lib.rs
@@ -136,6 +139,7 @@ where
{
DefaultOptions::new()
.with_fixint_encoding()
+ .allow_trailing_bytes()
.deserialize_in_place(reader, place)
}
diff --git a/src/lib.rs b/src/lib.rs
--- a/src/lib.rs
+++ b/src/lib.rs
@@ -146,6 +150,7 @@ where
{
DefaultOptions::new()
.with_fixint_encoding()
+ .allow_trailing_bytes()
.deserialize(bytes)
}
diff --git a/src/lib.rs b/src/lib.rs
--- a/src/lib.rs
+++ b/src/lib.rs
@@ -156,5 +161,6 @@ where
{
DefaultOptions::new()
.with_fixint_encoding()
+ .allow_trailing_bytes()
.serialized_size(value)
}
| diff --git a/tests/test.rs b/tests/test.rs
--- a/tests/test.rs
+++ b/tests/test.rs
@@ -277,6 +277,17 @@ fn deserializing_errors() {
}
}
+#[test]
+fn trailing_bytes() {
+ match DefaultOptions::new()
+ .deserialize::<char>(b"1x")
+ .map_err(|e| *e)
+ {
+ Err(ErrorKind::Custom(_)) => {}
+ other => panic!("Expecting TrailingBytes, got {:?}", other),
+ }
+}
+
#[test]
fn too_big_deserialize() {
let serialized = vec![0, 0, 0, 3];
| Reject trailing bytes when deserializing from &[u8]
In `deserialize_from`, it makes sense to allow deserializing multiple values from the same io::Read. But in `deserialize`, I think trailing data should be an error.
I would expect this to be an error rather than ignoring `b"x"`:
```rust
extern crate bincode;
fn main() {
println!("{:?}", bincode::deserialize::<char>(b"1x"));
}
```
| @dtolnay Is this issue in response to my question: https://github.com/TyOverby/bincode/issues/192 ?
Does it mean that multiple serialised messages (differing by trailing bytes) can deserialise to the same content?
https://github.com/TyOverby/bincode/issues/192#issuecomment-313660286 gives a workaround to detect trailing bytes.
I'm wondering if this is possible to achieve without incurring a performance penalty. The first method that came to mind was to modify the `SliceReader` struct, but the read methods on this struct may get called multiple times and there is no way to know if this is the last call, so that method is out. The only other method that is immediately obvious to me is to keep track of bytes read during the deserialization and error out at the end if bytes read and buffer length don't match. The issue with this method is that we would have to add code (quite possibly to the `SizeLimit` structs) to check this, which seems like it would be an unwanted amount of overhead, especially in the common case of `SizeLimit::Infinite`.
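The shape of the solution that eventually landed in the patch (a slice-backed reader exposing `is_finished`, checked once after deserialization completes rather than on every read) can be sketched without bincode itself; the types below are simplified stand-ins:

```rust
// Simplified stand-in for bincode's SliceReader: a slice that shrinks from
// the front as bytes are consumed.
struct SliceReader<'a> {
    slice: &'a [u8],
}

impl<'a> SliceReader<'a> {
    fn new(slice: &'a [u8]) -> Self {
        SliceReader { slice }
    }

    /// Consume `n` bytes from the front, as a deserializer would.
    fn read(&mut self, n: usize) -> Result<&'a [u8], String> {
        if n > self.slice.len() {
            return Err("unexpected end of input".into());
        }
        let (head, rest) = self.slice.split_at(n);
        self.slice = rest;
        Ok(head)
    }

    /// True once every input byte has been consumed.
    fn is_finished(&self) -> bool {
        self.slice.is_empty()
    }
}

/// Run once after the value has been deserialized: a reject-trailing config
/// errors if anything is left over; allow-trailing keeps the old behavior.
fn check_end(reader: &SliceReader<'_>, reject_trailing: bool) -> Result<(), String> {
    if !reject_trailing || reader.is_finished() {
        Ok(())
    } else {
        Err("Slice had bytes remaining after deserialization".into())
    }
}

fn main() {
    let mut r = SliceReader::new(b"1x");
    r.read(1).unwrap(); // pretend we deserialized a one-byte value
    println!("allow:  {:?}", check_end(&r, false));
    println!("reject: {:?}", check_end(&r, true));
}
```

Because the check runs only once at the end, per-read calls stay untouched and the cost in the allow-trailing case is a single branch.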
Edit: I've figured out a way to do this but the code ends up being much nicer if we can use `pub(restricted)`. @dtolnay or @TyOverby are there any policies on the minimum Rust version bincode needs to compile on? | 2017-07-24T08:07:44 | 1.2 | 263fb948aca86d3ac5dfa5bc4f345b44ed8bb8f8 | [
"trailing_bytes"
] | [
"config::int::test::test_zigzag_edge_cases",
"config::int::test::test_zigzag_decode",
"config::int::test::test_zigzag_encode",
"bytes",
"char_serialization",
"deserializing_errors",
"endian_difference",
"encode_box",
"not_human_readable",
"path_buf",
"serde_bytes",
"test_basic_struct",
"test... | [] | [] | 39 |
bincode-org/bincode | 336 | bincode-org__bincode-336 | [
"335"
] | f5fa06e761646ba083c1317a4ba0dcd58eefe99b | diff --git a/Cargo.toml b/Cargo.toml
--- a/Cargo.toml
+++ b/Cargo.toml
@@ -1,6 +1,6 @@
[package]
name = "bincode"
-version = "1.3.0" # remember to update html_root_url
+version = "1.3.1" # remember to update html_root_url
authors = ["Ty Overby <ty@pre-alpha.com>", "Francesco Mazzoli <f@mazzo.li>", "David Tolnay <dtolnay@gmail.com>", "Zoey Riordan <zoey@dos.cafe>"]
exclude = ["logo.png", "examples/*", ".gitignore", ".travis.yml"]
diff --git a/src/de/read.rs b/src/de/read.rs
--- a/src/de/read.rs
+++ b/src/de/read.rs
@@ -141,12 +141,7 @@ where
R: io::Read,
{
fn fill_buffer(&mut self, length: usize) -> Result<()> {
- // Reserve and fill extra space if needed
- let current_length = self.temp_buffer.len();
- if length > current_length {
- self.temp_buffer.reserve_exact(length - current_length);
- self.temp_buffer.resize(length, 0);
- }
+ self.temp_buffer.resize(length, 0);
self.reader.read_exact(&mut self.temp_buffer)?;
diff --git a/src/lib.rs b/src/lib.rs
--- a/src/lib.rs
+++ b/src/lib.rs
@@ -24,7 +24,7 @@
//! Support for `i128` and `u128` is automatically enabled on Rust toolchains
//! greater than or equal to `1.26.0` and disabled for targets which do not support it
-#![doc(html_root_url = "https://docs.rs/bincode/1.3.0")]
+#![doc(html_root_url = "https://docs.rs/bincode/1.3.1")]
#![crate_name = "bincode"]
#![crate_type = "rlib"]
#![crate_type = "dylib"]
| diff --git a/src/de/read.rs b/src/de/read.rs
--- a/src/de/read.rs
+++ b/src/de/read.rs
@@ -185,3 +180,23 @@ where
visitor.visit_bytes(&self.temp_buffer[..])
}
}
+
+#[cfg(test)]
+mod test {
+ use super::IoReader;
+
+ #[test]
+ fn test_fill_buffer() {
+ let buffer = vec![0u8; 64];
+ let mut reader = IoReader::new(buffer.as_slice());
+
+ reader.fill_buffer(20).unwrap();
+ assert_eq!(20, reader.temp_buffer.len());
+
+ reader.fill_buffer(30).unwrap();
+ assert_eq!(30, reader.temp_buffer.len());
+
+ reader.fill_buffer(5).unwrap();
+ assert_eq!(5, reader.temp_buffer.len());
+ }
+}
diff --git a/tests/test.rs b/tests/test.rs
--- a/tests/test.rs
+++ b/tests/test.rs
@@ -880,3 +880,21 @@ fn test_varint_length_prefixes() {
(1 + std::mem::size_of::<u32>()) as u64
); // 2 ** 16 + 1
}
+
+#[test]
+fn test_byte_vec_struct() {
+ #[derive(PartialEq, Eq, Clone, Serialize, Deserialize, Debug)]
+ struct ByteVecs {
+ a: Vec<u8>,
+ b: Vec<u8>,
+ c: Vec<u8>,
+ };
+
+ let byte_struct = ByteVecs {
+ a: vec![2; 20],
+ b: vec![3; 30],
+ c: vec![1; 10],
+ };
+
+ the_same(byte_struct);
+}
| Regression in 1.3.0: memory allocation of 8391155392347897856 bytes failed
Can be reproduced by adding the following test to the bottom of src/lib.rs in https://github.com/sharkdp/bat/tree/b09d245dea7e2f45cf461fda978f3256e7162465 and running `cargo update` and `cargo test --lib bincode_repro`. It succeeds with bincode 1.2.1 and aborts with 1.3.0.
```rust
#[test]
fn bincode_repro() {
use syntect::{dumps::from_binary, parsing::SyntaxSet};
from_binary::<SyntaxSet>(include_bytes!("../assets/syntaxes.bin"));
}
```
```console
memory allocation of 8391155392347897856 bytes failederror: test failed, to rerun pass '--lib'
Caused by:
process didn't exit successfully: `/git/tmp/bat/target/debug/deps/bat-3446bc567e169f19 repro` (signal: 6, SIGABRT: process abort signal)
```
I've yanked 1.3.0 for now since it's important to keep old serialized data deserializing across minor versions.
| The regression seems to start with https://github.com/servo/bincode/pull/309. I'm uncertain, but it seems like [`fill_buffer`](https://github.com/servo/bincode/blob/f5fa06e761646ba083c1317a4ba0dcd58eefe99b/src/de/read.rs#L143-L154) was written to assume that it will only add at most `length` bytes to the buffer. However, the new implementation allows reading more than `length` if the size of the temp_buffer is already larger than `length`.
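That suspicion is easy to confirm with std alone: a grow-only buffer makes a later, smaller `fill_buffer` call consume extra bytes from the stream, which is exactly what the unconditional `resize(length, 0)` in the fix prevents. The helper names below are stand-ins for the real `IoReader` internals:

```rust
use std::io::Read;

/// Grow-only version from the regression: the buffer never shrinks, so once
/// it has grown, `read_exact` pulls *more* than `length` bytes off the reader.
fn fill_buffer_buggy<R: Read>(reader: &mut R, buf: &mut Vec<u8>, length: usize) -> std::io::Result<()> {
    if length > buf.len() {
        buf.resize(length, 0);
    }
    reader.read_exact(buf) // reads buf.len() bytes, which may exceed `length`
}

/// Fixed version: always resize to exactly `length` first.
fn fill_buffer_fixed<R: Read>(reader: &mut R, buf: &mut Vec<u8>, length: usize) -> std::io::Result<()> {
    buf.resize(length, 0);
    reader.read_exact(buf)
}

fn main() {
    let data = vec![0u8; 64];

    // Fixed: a 20-byte fill then a 5-byte fill consume 25 bytes in total.
    let mut reader: &[u8] = &data;
    let mut buf = Vec::new();
    fill_buffer_fixed(&mut reader, &mut buf, 20).unwrap();
    fill_buffer_fixed(&mut reader, &mut buf, 5).unwrap();
    println!("fixed leaves {} bytes unread", reader.len()); // 64 - 25 = 39

    // Buggy: the second call still reads 20 bytes, desynchronizing the stream.
    let mut reader: &[u8] = &data;
    let mut buf = Vec::new();
    fill_buffer_buggy(&mut reader, &mut buf, 20).unwrap();
    fill_buffer_buggy(&mut reader, &mut buf, 5).unwrap();
    println!("buggy leaves {} bytes unread", reader.len()); // 64 - 40 = 24
}
```

Once the stream is desynchronized like this, later length prefixes are read from the wrong offsets, which is how an absurd allocation size such as the one in the report can arise.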
@dtolnay I'm not sure why this needed a version yank. Especially when the only replacement version has undefined behavior. It looks to me like this is an edge case regression, which normally wouldn't require a yank.
I also can't even repro this issue locally.
Regarding the yank -- it's because taking correct code that used to work (bat, cargo-expand, etc) and making them not work is extremely disruptive. That disruption is not offset by making things work that didn't used to work, or even removing UB. The latter things can be re-released in a way where no disruption needs to occur. Yanks are intended for this.
> It looks to me like this is an edge case regression
I'm not sure how much you got a chance to investigate the regression but this sounds premature to me. I don't know how to tell that without understanding the bug better. In particular I don't know whether *all* or *most* past serialized bincode data might be broken.
It doesn't repro at all on my machines. On other's machines where it does repro the "fix" ends up causing a panic inside of syntect. (The fix being replacing line 151 with `self.reader.read_exact(&mut self.temp_buffer[..length])?;`). Since the reader in this case comes from ffi of the `flate2` crate. Are we sure there isn't some UB causing this? | 2020-06-24T13:33:27 | 1.3 | f5fa06e761646ba083c1317a4ba0dcd58eefe99b | [
"de::read::test::test_fill_buffer"
] | [
"config::int::test::test_zigzag_decode",
"config::int::test::test_zigzag_edge_cases",
"config::int::test::test_zigzag_encode",
"char_serialization",
"bytes",
"deserializing_errors",
"endian_difference",
"path_buf",
"encode_box",
"not_human_readable",
"serde_bytes",
"test_basic_struct",
"test... | [] | [] | 40 |
boa-dev/boa | 245 | boa-dev__boa-245 | [
"204"
] | 448835295a1cb2cbb216c0459759f208e132606c | diff --git a/src/lib/syntax/lexer.rs b/src/lib/syntax/lexer.rs
--- a/src/lib/syntax/lexer.rs
+++ b/src/lib/syntax/lexer.rs
@@ -426,11 +426,25 @@ impl<'a> Lexer<'a> {
None => break,
};
- if !c.is_digit(10) {
- break 'digitloop;
+ match c {
+ 'e' | 'E' => {
+ match self.preview_multiple_next(2).unwrap_or_default().to_digit(10) {
+ Some(0..=9) | None => {
+ buf.push(self.next());
+ }
+ _ => {
+ break 'digitloop;
+ }
+ }
+ }
+ _ => {
+ if !c.is_digit(10) {
+ break 'digitloop;
+ }
+ }
}
},
- 'e' => {
+ 'e' | 'E' => {
match self.preview_multiple_next(2).unwrap_or_default().to_digit(10) {
Some(0..=9) | None => {
buf.push(self.next());
| diff --git a/src/lib/syntax/lexer.rs b/src/lib/syntax/lexer.rs
--- a/src/lib/syntax/lexer.rs
+++ b/src/lib/syntax/lexer.rs
@@ -1002,7 +1016,9 @@ mod tests {
#[test]
fn numbers() {
- let mut lexer = Lexer::new("1 2 0x34 056 7.89 42. 5e3 5e+3 5e-3 0b10 0O123 0999");
+ let mut lexer = Lexer::new(
+ "1 2 0x34 056 7.89 42. 5e3 5e+3 5e-3 0b10 0O123 0999 1.0e1 1.0e-1 1.0E1 1E1",
+ );
lexer.lex().expect("failed to lex");
assert_eq!(lexer.tokens[0].data, TokenData::NumericLiteral(1.0));
assert_eq!(lexer.tokens[1].data, TokenData::NumericLiteral(2.0));
diff --git a/src/lib/syntax/lexer.rs b/src/lib/syntax/lexer.rs
--- a/src/lib/syntax/lexer.rs
+++ b/src/lib/syntax/lexer.rs
@@ -1016,6 +1032,10 @@ mod tests {
assert_eq!(lexer.tokens[9].data, TokenData::NumericLiteral(2.0));
assert_eq!(lexer.tokens[10].data, TokenData::NumericLiteral(83.0));
assert_eq!(lexer.tokens[11].data, TokenData::NumericLiteral(999.0));
+ assert_eq!(lexer.tokens[12].data, TokenData::NumericLiteral(10.0));
+ assert_eq!(lexer.tokens[13].data, TokenData::NumericLiteral(0.1));
+ assert_eq!(lexer.tokens[14].data, TokenData::NumericLiteral(10.0));
+ assert_eq!(lexer.tokens[14].data, TokenData::NumericLiteral(10.0));
}
#[test]
diff --git a/src/lib/syntax/lexer.rs b/src/lib/syntax/lexer.rs
--- a/src/lib/syntax/lexer.rs
+++ b/src/lib/syntax/lexer.rs
@@ -1105,7 +1125,7 @@ mod tests {
assert_eq!(lexer.tokens[1].data, TokenData::Punctuator(Punctuator::Add));
assert_eq!(
lexer.tokens[2].data,
- TokenData::NumericLiteral(100000000000.0)
+ TokenData::NumericLiteral(100_000_000_000.0)
);
}
}
| Number(<float>e<int>) should work
It looks like scientific-notation numbers with a float base evaluate to `undefined` (for a positive float base), and passing one to `Number()` triggers a panic.
For example:
```js
Number(1e1) // Works
Number(-1e1) // Works
Number(1e-1) // Works
Number(-1e-1) // Works
Number(1.0e1) // Fails
Number(-1.0e1) // Fails
Number(1.0e-1) // Fails
Number(-1.0e-1) // Fails
```
This behavior works in Node.js as expected:
```js
> Number(1.0e1)
10
> Number(1.0e-1)
0.1
> Number(-1.0e1)
-10
> Number(-1.0e-1)
-0.1
```
`Number()` is able to parse String inputs correctly, but raw scientific notation numbers with a float base trigger a panic. This may be a parsing issue as the errors look like this:
```
$ cargo run # Parsing `Number(1.0e1)`
Finished dev [unoptimized + debuginfo] target(s) in 0.03s
Running `target/debug/boa`
thread 'main' panicked at 'parsing failed: Expected([Punctuator(Comma), Punctuator(CloseParen)], Token { data: Identifier("e1"), pos: Position { column_number: 9, line_number: 5 } }, "function call arguments")', src/libcore/result.rs:1084:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace.
```
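The panic message suggests the failure happens before evaluation: Rust's standard float parser accepts these literals just fine, so the conversion step cannot be at fault. A quick sketch confirming that, and illustrating what the parser error implies about how the lexer splits the input:

```rust
fn main() {
    // Rust's `f64::from_str` accepts the full scientific-notation literal,
    // so `Number(1.0e1)` cannot be failing at the string-to-float step.
    assert_eq!("1.0e1".parse::<f64>().unwrap(), 10.0);
    assert_eq!("1.0e-1".parse::<f64>().unwrap(), 0.1);
    // The parser error above (`Identifier("e1")`) instead indicates the lexer
    // split `1.0e1` into the number `1.0` followed by the identifier `e1`.
}
```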
refs #182 implementing #34
| I ran into this while trying to fix some panics in the lexer. It looks like the lexer is trying to lex the number as two separate ones. The above panic will no longer occur, but the parser throws an error.
I'm going to try fixing this | 2020-02-05T11:36:17 | 0.5 | cb850fc13e94e1baec09267bd010a4cd4565d73d | [
"syntax::lexer::tests::numbers"
] | [
"builtins::array::tests::concat",
"builtins::array::tests::find",
"builtins::array::tests::push",
"builtins::array::tests::reverse",
"builtins::array::tests::join",
"builtins::array::tests::pop",
"builtins::boolean::tests::check_boolean_constructor_is_function",
"builtins::array::tests::find_index",
... | [] | [] | 41 |
boa-dev/boa | 235 | boa-dev__boa-235 | [
"224"
] | 6947122815f33b57b51062720380ca9ae68b47ad | diff --git a/src/lib/syntax/lexer.rs b/src/lib/syntax/lexer.rs
--- a/src/lib/syntax/lexer.rs
+++ b/src/lib/syntax/lexer.rs
@@ -171,6 +171,20 @@ impl<'a> Lexer<'a> {
fn preview_next(&mut self) -> Option<char> {
self.buffer.peek().copied()
}
+ /// Preview a char x indexes further in buf, without incrementing
+ fn preview_multiple_next(&mut self, nb_next: usize) -> Option<char> {
+ let mut next_peek = None;
+
+ for (i, x) in self.buffer.clone().enumerate() {
+ if i >= nb_next {
+ break;
+ }
+
+ next_peek = Some(x);
+ }
+
+ next_peek
+ }
/// Utility Function, while ``f(char)`` is true, read chars and move curser.
/// All chars are returned as a string
diff --git a/src/lib/syntax/lexer.rs b/src/lib/syntax/lexer.rs
--- a/src/lib/syntax/lexer.rs
+++ b/src/lib/syntax/lexer.rs
@@ -416,8 +430,19 @@ impl<'a> Lexer<'a> {
break 'digitloop;
}
},
- 'e' | '+' | '-' => {
- buf.push(self.next());
+ 'e' => {
+ match self.preview_multiple_next(2).unwrap_or_default().to_digit(10) {
+ Some(0..=9) | None => {
+ buf.push(self.next());
+ }
+ _ => {
+ break;
+ }
+ }
+ buf.push(self.next());
+ }
+ '+' | '-' => {
+ break;
}
_ if ch.is_digit(10) => {
buf.push(self.next());
| diff --git a/src/lib/syntax/lexer.rs b/src/lib/syntax/lexer.rs
--- a/src/lib/syntax/lexer.rs
+++ b/src/lib/syntax/lexer.rs
@@ -1026,4 +1051,61 @@ mod tests {
TokenData::RegularExpressionLiteral("\\/[^\\/]*\\/*".to_string(), "gmi".to_string())
);
}
+
+ #[test]
+ fn test_addition_no_spaces() {
+ let mut lexer = Lexer::new("1+1");
+ lexer.lex().expect("failed to lex");
+ assert_eq!(lexer.tokens[0].data, TokenData::NumericLiteral(1.0));
+ assert_eq!(lexer.tokens[1].data, TokenData::Punctuator(Punctuator::Add));
+ assert_eq!(lexer.tokens[2].data, TokenData::NumericLiteral(1.0));
+ }
+
+ #[test]
+ fn test_addition_no_spaces_left_side() {
+ let mut lexer = Lexer::new("1+ 1");
+ lexer.lex().expect("failed to lex");
+ assert_eq!(lexer.tokens[0].data, TokenData::NumericLiteral(1.0));
+ assert_eq!(lexer.tokens[1].data, TokenData::Punctuator(Punctuator::Add));
+ assert_eq!(lexer.tokens[2].data, TokenData::NumericLiteral(1.0));
+ }
+
+ #[test]
+ fn test_addition_no_spaces_right_side() {
+ let mut lexer = Lexer::new("1 +1");
+ lexer.lex().expect("failed to lex");
+ assert_eq!(lexer.tokens[0].data, TokenData::NumericLiteral(1.0));
+ assert_eq!(lexer.tokens[1].data, TokenData::Punctuator(Punctuator::Add));
+ assert_eq!(lexer.tokens[2].data, TokenData::NumericLiteral(1.0));
+ }
+
+ #[test]
+ fn test_addition_no_spaces_e_number_left_side() {
+ let mut lexer = Lexer::new("1e2+ 1");
+ lexer.lex().expect("failed to lex");
+ assert_eq!(lexer.tokens[0].data, TokenData::NumericLiteral(100.0));
+ assert_eq!(lexer.tokens[1].data, TokenData::Punctuator(Punctuator::Add));
+ assert_eq!(lexer.tokens[2].data, TokenData::NumericLiteral(1.0));
+ }
+
+ #[test]
+ fn test_addition_no_spaces_e_number_right_side() {
+ let mut lexer = Lexer::new("1 +1e3");
+ lexer.lex().expect("failed to lex");
+ assert_eq!(lexer.tokens[0].data, TokenData::NumericLiteral(1.0));
+ assert_eq!(lexer.tokens[1].data, TokenData::Punctuator(Punctuator::Add));
+ assert_eq!(lexer.tokens[2].data, TokenData::NumericLiteral(1000.0));
+ }
+
+ #[test]
+ fn test_addition_no_spaces_e_number() {
+ let mut lexer = Lexer::new("1e3+1e11");
+ lexer.lex().expect("failed to lex");
+ assert_eq!(lexer.tokens[0].data, TokenData::NumericLiteral(1000.0));
+ assert_eq!(lexer.tokens[1].data, TokenData::Punctuator(Punctuator::Add));
+ assert_eq!(
+ lexer.tokens[2].data,
+ TokenData::NumericLiteral(100000000000.0)
+ );
+ }
}
| Simple assignment with Number+"+" or Number+"-" does not work
Simple assignment with addition does not work when there is no space between the first operand and "+" or "-":
```
let a = 1+ 2;
```
This works though:
```
let a = 1 + 2;
```
The same happens with the "-" binary operator.
```
thread 'main' panicked at 'Could not convert value to f64: ParseFloatError { kind: Invalid }', src/libcore/result.rs:1165:5
stack backtrace:
0: backtrace::backtrace::libunwind::trace
at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.37/src/backtrace/libunwind.rs:88
1: backtrace::backtrace::trace_unsynchronized
at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.37/src/backtrace/mod.rs:66
2: std::sys_common::backtrace::_print_fmt
at src/libstd/sys_common/backtrace.rs:76
3: <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt
at src/libstd/sys_common/backtrace.rs:60
4: core::fmt::write
at src/libcore/fmt/mod.rs:1030
5: std::io::Write::write_fmt
at src/libstd/io/mod.rs:1412
6: std::sys_common::backtrace::_print
at src/libstd/sys_common/backtrace.rs:64
7: std::sys_common::backtrace::print
at src/libstd/sys_common/backtrace.rs:49
8: std::panicking::default_hook::{{closure}}
at src/libstd/panicking.rs:196
9: std::panicking::default_hook
at src/libstd/panicking.rs:210
10: std::panicking::rust_panic_with_hook
at src/libstd/panicking.rs:473
11: std::panicking::continue_panic_fmt
at src/libstd/panicking.rs:380
12: rust_begin_unwind
at src/libstd/panicking.rs:307
13: core::panicking::panic_fmt
at src/libcore/panicking.rs:85
14: core::result::unwrap_failed
at src/libcore/result.rs:1165
15: core::result::Result<T,E>::expect
at /rustc/4560ea788cb760f0a34127156c78e2552949f734/src/libcore/result.rs:960
16: boa::syntax::lexer::Lexer::lex
at src/lib/syntax/lexer.rs:401
17: boa::parser_expr
at src/lib/lib.rs:52
18: boa::forward
at src/lib/lib.rs:61
19: boa::exec
at src/lib/lib.rs:84
20: boa::main
at src/bin/bin.rs:57```
| This is coming from here https://github.com/jasonwilliams/boa/blob/master/src/lib/syntax/lexer.rs#L430
It seems like once you add a space, the `+` is pushed to `buf`, so we make a call to the `f64::from_str` with `1+`.
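That hypothesis is easy to check directly: if the `+` ends up in `buf`, the later `f64::from_str` call receives `"1+"`, which is not a valid float and matches the reported `ParseFloatError`:

```rust
fn main() {
    // `f64::from_str` (what the lexer's number path ultimately calls)
    // rejects a trailing operator, matching the panic in the issue.
    assert!("1+".parse::<f64>().is_err());
    // Stopping at the digit boundary instead leaves a valid literal.
    assert_eq!("1".parse::<f64>().unwrap(), 1.0);
}
```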
I'll have a look, I wanted to get started contributing to boa since the talk at JSConf EU so that would be an amazing first task for it! | 2020-01-25T19:56:58 | 0.5 | cb850fc13e94e1baec09267bd010a4cd4565d73d | [
"syntax::lexer::tests::test_addition_no_spaces",
"syntax::lexer::tests::test_addition_no_spaces_e_number",
"syntax::lexer::tests::test_addition_no_spaces_e_number_left_side",
"syntax::lexer::tests::test_addition_no_spaces_left_side"
] | [
"builtins::array::tests::concat",
"builtins::array::tests::join",
"builtins::array::tests::find",
"builtins::array::tests::reverse",
"builtins::array::tests::pop",
"builtins::array::tests::push",
"builtins::array::tests::find_index",
"builtins::array::tests::inclues_value",
"builtins::boolean::tests... | [] | [] | 42 |
boa-dev/boa | 199 | boa-dev__boa-199 | [
"152"
] | 7567aacd77f18ab23860e76e6bae362cade02e72 | diff --git a/.vscode/launch.json b/.vscode/launch.json
--- a/.vscode/launch.json
+++ b/.vscode/launch.json
@@ -1,29 +1,27 @@
{
- // Use IntelliSense to learn about possible attributes.
- // Hover to view descriptions of existing attributes.
- // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
- "version": "0.2.0",
- "configurations": [
- {
- "type": "lldb",
- "request": "launch",
- "name": "Launch",
- "args": [],
- "program": "./target/debug/bin",
- "cwd": "${workspaceRoot}",
- "stopOnEntry": false,
- "sourceLanguages": [
- "rust"
- ]
- },
- {
- "name": "(Windows) Launch",
- "type": "cppvsdbg",
- "request": "launch",
- "program": "${workspaceFolder}/target/debug/bin.exe",
- "stopAtEntry": false,
- "cwd": "${workspaceFolder}",
- "environment": [],
- }
- ]
-}
\ No newline at end of file
+ // Use IntelliSense to learn about possible attributes.
+ // Hover to view descriptions of existing attributes.
+ // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
+ "version": "0.2.0",
+ "configurations": [
+ {
+ "type": "lldb",
+ "request": "launch",
+ "name": "Launch",
+ "args": [],
+ "program": "./target/debug/boa",
+ "cwd": "${workspaceRoot}",
+ "stopOnEntry": false,
+ "sourceLanguages": ["rust"]
+ },
+ {
+ "name": "(Windows) Launch",
+ "type": "cppvsdbg",
+ "request": "launch",
+ "program": "${workspaceFolder}/target/debug/boa.exe",
+ "stopAtEntry": false,
+ "cwd": "${workspaceFolder}",
+ "environment": []
+ }
+ ]
+}
diff --git a/src/lib/syntax/parser.rs b/src/lib/syntax/parser.rs
--- a/src/lib/syntax/parser.rs
+++ b/src/lib/syntax/parser.rs
@@ -530,11 +530,37 @@ impl Parser {
}
};
self.pos += 1;
- self.expect(
- TokenData::Punctuator(Punctuator::Colon),
- "object declaration",
- )?;
- let value = self.parse()?;
+ let value = match self.get_token(self.pos)?.data {
+ TokenData::Punctuator(Punctuator::Colon) => {
+ self.pos += 1;
+ self.parse()?
+ }
+ TokenData::Punctuator(Punctuator::OpenParen) => {
+ self.pos += 1;
+ self.expect(
+ TokenData::Punctuator(Punctuator::CloseParen),
+ "Method Block",
+ )?;
+ self.pos += 1; // {
+ let expr = self.parse()?;
+ self.pos += 1;
+ mk!(
+ self,
+ ExprDef::FunctionDecl(None, vec!(), Box::new(expr)),
+ token
+ )
+ }
+ _ => {
+ return Err(ParseError::Expected(
+ vec![
+ TokenData::Punctuator(Punctuator::Colon),
+ TokenData::Punctuator(Punctuator::OpenParen),
+ ],
+ tk,
+ "object declaration",
+ ))
+ }
+ };
map.insert(name, value);
self.pos += 1;
}
diff --git a/src/lib/syntax/parser.rs b/src/lib/syntax/parser.rs
--- a/src/lib/syntax/parser.rs
+++ b/src/lib/syntax/parser.rs
@@ -554,6 +580,13 @@ impl Parser {
self.pos += 1;
mk!(self, ExprDef::Block(exprs), token)
}
+ // Empty Block
+ TokenData::Punctuator(Punctuator::CloseBlock)
+ if self.get_token(self.pos.wrapping_sub(2))?.data
+ == TokenData::Punctuator(Punctuator::OpenBlock) =>
+ {
+ mk!(self, ExprDef::Block(vec!()), token)
+ }
TokenData::Punctuator(Punctuator::Sub) => mk!(
self,
ExprDef::UnaryOp(UnaryOp::Minus, Box::new(self.parse()?))
| diff --git a/src/lib/syntax/parser.rs b/src/lib/syntax/parser.rs
--- a/src/lib/syntax/parser.rs
+++ b/src/lib/syntax/parser.rs
@@ -889,6 +922,32 @@ mod tests {
))))],
);
}
+ #[test]
+ fn check_object() {
+ // Testing short function syntax
+ let mut object_properties: BTreeMap<String, Expr> = BTreeMap::new();
+ object_properties.insert(
+ String::from("a"),
+ Expr::new(ExprDef::Const(Const::Bool(true))),
+ );
+ object_properties.insert(
+ String::from("b"),
+ Expr::new(ExprDef::FunctionDecl(
+ None,
+ vec![],
+ Box::new(Expr::new(ExprDef::Block(vec![]))),
+ )),
+ );
+
+ check_parser(
+ "{
+ a: true,
+ b() {}
+ };
+ ",
+ &[Expr::new(ExprDef::ObjectDecl(Box::new(object_properties)))],
+ );
+ }
#[test]
fn check_array() {
diff --git a/tests/js/test.js b/tests/js/test.js
--- a/tests/js/test.js
+++ b/tests/js/test.js
@@ -1,6 +1,8 @@
-///
-// Use this file to test your changes to boa.
-///
+let a = {
+ a: true,
+ b() {
+ return 2;
+ }
+};
-let a = Boolean(0);
-typeof a;
+console.log(a["b"]());
| Shorthand method syntax
Is needed
| Do you mean something like:
```javascript
const o = {
f() {
// ...
}
};
```
Yep! | 2019-11-01T08:21:08 | 0.4 | 7567aacd77f18ab23860e76e6bae362cade02e72 | [
"syntax::parser::tests::check_object"
] | [
"builtins::array::tests::find",
"builtins::array::tests::reverse",
"builtins::array::tests::push",
"builtins::array::tests::join",
"builtins::array::tests::concat",
"builtins::array::tests::pop",
"builtins::boolean::tests::check_boolean_constructor_is_function",
"builtins::array::tests::shift",
"bui... | [] | [] | 43 |
boa-dev/boa | 58 | boa-dev__boa-58 | [
"45"
] | bf9b78954a7763411ecb3c3586e3e973bbcf062b | diff --git a/src/lib/syntax/parser.rs b/src/lib/syntax/parser.rs
--- a/src/lib/syntax/parser.rs
+++ b/src/lib/syntax/parser.rs
@@ -407,41 +407,40 @@ impl Parser {
}
}
TokenData::Punctuator(Punctuator::OpenBracket) => {
- let mut array: Vec<Expr> = Vec::new();
- let mut expect_comma_or_end = self.get_token(self.pos)?.data
- == TokenData::Punctuator(Punctuator::CloseBracket);
+ let mut array: Vec<Expr> = vec![];
+ let mut saw_expr_last = false;
loop {
let token = self.get_token(self.pos)?;
- if token.data == TokenData::Punctuator(Punctuator::CloseBracket)
- && expect_comma_or_end
- {
- self.pos += 1;
- break;
- } else if token.data == TokenData::Punctuator(Punctuator::Comma)
- && expect_comma_or_end
- {
- expect_comma_or_end = false;
- } else if token.data == TokenData::Punctuator(Punctuator::Comma)
- && !expect_comma_or_end
- {
- array.push(mk!(self, ExprDef::ConstExpr(Const::Null)));
- expect_comma_or_end = false;
- } else if expect_comma_or_end {
- return Err(ParseError::Expected(
- vec![
- TokenData::Punctuator(Punctuator::Comma),
- TokenData::Punctuator(Punctuator::CloseBracket),
- ],
- token.clone(),
- "array declaration",
- ));
- } else {
- let parsed = self.parse()?;
- self.pos -= 1;
- array.push(parsed);
- expect_comma_or_end = true;
+ match token.data {
+ TokenData::Punctuator(Punctuator::CloseBracket) => {
+ self.pos += 1;
+ break;
+ }
+ TokenData::Punctuator(Punctuator::Comma) => {
+ if !saw_expr_last {
+ // An elision indicates that a space is saved in the array
+ array.push(mk!(self, ExprDef::ConstExpr(Const::Undefined)))
+ }
+ saw_expr_last = false;
+ self.pos += 1;
+ }
+ _ if saw_expr_last => {
+ // Two expressions in a row is not allowed, they must be comma-separated
+ return Err(ParseError::Expected(
+ vec![
+ TokenData::Punctuator(Punctuator::Comma),
+ TokenData::Punctuator(Punctuator::CloseBracket),
+ ],
+ token.clone(),
+ "array declaration",
+ ));
+ }
+ _ => {
+ let parsed = self.parse()?;
+ saw_expr_last = true;
+ array.push(parsed);
+ }
}
- self.pos += 1;
}
mk!(self, ExprDef::ArrayDeclExpr(array), token)
}
| diff --git a/src/lib/syntax/parser.rs b/src/lib/syntax/parser.rs
--- a/src/lib/syntax/parser.rs
+++ b/src/lib/syntax/parser.rs
@@ -810,9 +809,12 @@ mod tests {
check_parser("[]", &[Expr::new(ExprDef::ArrayDeclExpr(vec![]))]);
// Check array with empty slot
- // FIXME: This does not work, it should ignore the comma:
- // <https://tc39.es/ecma262/#prod-ArrayAssignmentPattern>
- // check_parser("[,]", &[Expr::new(ExprDef::ArrayDeclExpr(vec![]))]);
+ check_parser(
+ "[,]",
+ &[Expr::new(ExprDef::ArrayDeclExpr(vec![Expr::new(
+ ExprDef::ConstExpr(Const::Undefined),
+ )]))],
+ );
// Check numeric array
check_parser(
diff --git a/src/lib/syntax/parser.rs b/src/lib/syntax/parser.rs
--- a/src/lib/syntax/parser.rs
+++ b/src/lib/syntax/parser.rs
@@ -825,13 +827,37 @@ mod tests {
);
// Check numeric array with trailing comma
- // FIXME: This does not work, it should ignore the trailing comma:
- // <https://tc39.es/ecma262/#prod-ArrayAssignmentPattern>
- // check_parser("[1, 2, 3,]", &[Expr::new(ExprDef::ArrayDeclExpr(vec![
- // Expr::new(ExprDef::ConstExpr(Const::Num(1.0))),
- // Expr::new(ExprDef::ConstExpr(Const::Num(2.0))),
- // Expr::new(ExprDef::ConstExpr(Const::Num(3.0))),
- // ]))]);
+ check_parser(
+ "[1, 2, 3,]",
+ &[Expr::new(ExprDef::ArrayDeclExpr(vec![
+ Expr::new(ExprDef::ConstExpr(Const::Num(1.0))),
+ Expr::new(ExprDef::ConstExpr(Const::Num(2.0))),
+ Expr::new(ExprDef::ConstExpr(Const::Num(3.0))),
+ ]))],
+ );
+
+ // Check numeric array with an elision
+ check_parser(
+ "[1, 2, , 3]",
+ &[Expr::new(ExprDef::ArrayDeclExpr(vec![
+ Expr::new(ExprDef::ConstExpr(Const::Num(1.0))),
+ Expr::new(ExprDef::ConstExpr(Const::Num(2.0))),
+ Expr::new(ExprDef::ConstExpr(Const::Undefined)),
+ Expr::new(ExprDef::ConstExpr(Const::Num(3.0))),
+ ]))],
+ );
+
+ // Check numeric array with repeated elision
+ check_parser(
+ "[1, 2, ,, 3]",
+ &[Expr::new(ExprDef::ArrayDeclExpr(vec![
+ Expr::new(ExprDef::ConstExpr(Const::Num(1.0))),
+ Expr::new(ExprDef::ConstExpr(Const::Num(2.0))),
+ Expr::new(ExprDef::ConstExpr(Const::Undefined)),
+ Expr::new(ExprDef::ConstExpr(Const::Undefined)),
+ Expr::new(ExprDef::ConstExpr(Const::Num(3.0))),
+ ]))],
+ );
// Check combined array
check_parser(
| Trailing commas in array assignments make the parser fail
### Specification Link
https://tc39.es/ecma262/#prod-ArrayAssignmentPattern
### Example JS
```js
let foo = [1, 2, 3,];
```
### Expected
Parser should ignore the trailing comma
### Related to
https://github.com/jasonwilliams/boa/pull/42/files#diff-9f9158af1c9e00d4a93319c5918d9b97R816
### Contributing
https://github.com/jasonwilliams/boa/blob/master/CONTRIBUTING.md
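The spec behavior the issue asks for can be sketched independently of Boa's parser. Below is a minimal, hypothetical element loop (the `Tok` type and `parse_elements` are illustration only, not Boa's API) that stops on `]` and treats commas purely as separators, so a trailing comma is harmless; note this simplified version ignores elisions rather than inserting `undefined` for them:

```rust
// Hypothetical simplified token type for an array literal's contents.
#[derive(Debug, PartialEq, Clone, Copy)]
enum Tok {
    Num(f64),
    Comma,
    CloseBracket,
}

// Collect array elements, tolerating a trailing comma before `]`.
fn parse_elements(toks: &[Tok]) -> Vec<f64> {
    let mut out = Vec::new();
    let mut i = 0;
    while i < toks.len() {
        match toks[i] {
            Tok::CloseBracket => break,
            // Skip separators, including a trailing one right before `]`.
            Tok::Comma => i += 1,
            Tok::Num(n) => {
                out.push(n);
                i += 1;
            }
        }
    }
    out
}

fn main() {
    // Tokens for `[1, 2, 3,]` — the trailing comma adds no element.
    let toks = [
        Tok::Num(1.0),
        Tok::Comma,
        Tok::Num(2.0),
        Tok::Comma,
        Tok::Num(3.0),
        Tok::Comma,
        Tok::CloseBracket,
    ];
    assert_eq!(parse_elements(&toks), vec![1.0, 2.0, 3.0]);
}
```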
| 2019-07-07T18:04:53 | 0.2 | 71340e6becc85fb660d9017fd8a21f035a6d39d7 | [
"syntax::parser::tests::check_array"
] | [
"js::value::tests::check_get_set_field",
"js::value::tests::check_integer_is_true",
"js::value::tests::check_is_object",
"js::value::tests::check_number_is_true",
"js::value::tests::check_string_to_value",
"js::string::tests::check_string_constructor_is_function",
"js::value::tests::check_undefined",
... | [] | [] | 44 | |
boa-dev/boa | 3,508 | boa-dev__boa-3508 | [
"1848"
] | 279caabf796ae22322d91dd0f79f63c53e99007f | diff --git a/core/engine/src/builtins/function/mod.rs b/core/engine/src/builtins/function/mod.rs
--- a/core/engine/src/builtins/function/mod.rs
+++ b/core/engine/src/builtins/function/mod.rs
@@ -142,7 +142,7 @@ pub enum ClassFieldDefinition {
}
unsafe impl Trace for ClassFieldDefinition {
- custom_trace! {this, {
+ custom_trace! {this, mark, {
match this {
Self::Public(_key, func) => {
mark(func);
diff --git a/core/engine/src/builtins/generator/mod.rs b/core/engine/src/builtins/generator/mod.rs
--- a/core/engine/src/builtins/generator/mod.rs
+++ b/core/engine/src/builtins/generator/mod.rs
@@ -45,7 +45,7 @@ pub(crate) enum GeneratorState {
// Need to manually implement, since `Trace` adds a `Drop` impl which disallows destructuring.
unsafe impl Trace for GeneratorState {
- custom_trace!(this, {
+ custom_trace!(this, mark, {
match &this {
Self::SuspendedStart { context } | Self::SuspendedYield { context } => mark(context),
Self::Executing | Self::Completed => {}
diff --git a/core/engine/src/builtins/intl/collator/mod.rs b/core/engine/src/builtins/intl/collator/mod.rs
--- a/core/engine/src/builtins/intl/collator/mod.rs
+++ b/core/engine/src/builtins/intl/collator/mod.rs
@@ -57,7 +57,7 @@ pub(crate) struct Collator {
// SAFETY: only `bound_compare` is a traceable object.
unsafe impl Trace for Collator {
- custom_trace!(this, mark(&this.bound_compare));
+ custom_trace!(this, mark, mark(&this.bound_compare));
}
impl Collator {
diff --git a/core/engine/src/builtins/map/ordered_map.rs b/core/engine/src/builtins/map/ordered_map.rs
--- a/core/engine/src/builtins/map/ordered_map.rs
+++ b/core/engine/src/builtins/map/ordered_map.rs
@@ -42,7 +42,7 @@ pub struct OrderedMap<V> {
}
unsafe impl<V: Trace> Trace for OrderedMap<V> {
- custom_trace!(this, {
+ custom_trace!(this, mark, {
for (k, v) in &this.map {
if let MapKey::Key(key) = k {
mark(key);
diff --git a/core/engine/src/builtins/promise/mod.rs b/core/engine/src/builtins/promise/mod.rs
--- a/core/engine/src/builtins/promise/mod.rs
+++ b/core/engine/src/builtins/promise/mod.rs
@@ -42,7 +42,7 @@ pub enum PromiseState {
}
unsafe impl Trace for PromiseState {
- custom_trace!(this, {
+ custom_trace!(this, mark, {
match this {
Self::Fulfilled(v) | Self::Rejected(v) => mark(v),
Self::Pending => {}
diff --git a/core/engine/src/builtins/promise/mod.rs b/core/engine/src/builtins/promise/mod.rs
--- a/core/engine/src/builtins/promise/mod.rs
+++ b/core/engine/src/builtins/promise/mod.rs
@@ -121,7 +121,7 @@ pub struct ResolvingFunctions {
// Manually implementing `Trace` to allow destructuring.
unsafe impl Trace for ResolvingFunctions {
- custom_trace!(this, {
+ custom_trace!(this, mark, {
mark(&this.resolve);
mark(&this.reject);
});
diff --git a/core/engine/src/builtins/promise/mod.rs b/core/engine/src/builtins/promise/mod.rs
--- a/core/engine/src/builtins/promise/mod.rs
+++ b/core/engine/src/builtins/promise/mod.rs
@@ -176,7 +176,7 @@ pub(crate) struct PromiseCapability {
// SAFETY: manually implementing `Trace` to allow destructuring.
unsafe impl Trace for PromiseCapability {
- custom_trace!(this, {
+ custom_trace!(this, mark, {
mark(&this.promise);
mark(&this.functions);
});
diff --git a/core/engine/src/builtins/set/ordered_set.rs b/core/engine/src/builtins/set/ordered_set.rs
--- a/core/engine/src/builtins/set/ordered_set.rs
+++ b/core/engine/src/builtins/set/ordered_set.rs
@@ -14,7 +14,7 @@ pub struct OrderedSet {
}
unsafe impl Trace for OrderedSet {
- custom_trace!(this, {
+ custom_trace!(this, mark, {
for v in &this.inner {
if let MapKey::Key(v) = v {
mark(v);
diff --git a/core/engine/src/environments/runtime/declarative/function.rs b/core/engine/src/environments/runtime/declarative/function.rs
--- a/core/engine/src/environments/runtime/declarative/function.rs
+++ b/core/engine/src/environments/runtime/declarative/function.rs
@@ -161,7 +161,7 @@ pub(crate) enum ThisBindingStatus {
}
unsafe impl Trace for ThisBindingStatus {
- custom_trace!(this, {
+ custom_trace!(this, mark, {
match this {
Self::Initialized(obj) => mark(obj),
Self::Lexical | Self::Uninitialized => {}
diff --git a/core/engine/src/error.rs b/core/engine/src/error.rs
--- a/core/engine/src/error.rs
+++ b/core/engine/src/error.rs
@@ -53,7 +53,7 @@ pub struct JsError {
// SAFETY: just mirroring the default derive to allow destructuring.
unsafe impl Trace for JsError {
- custom_trace!(this, mark(&this.inner));
+ custom_trace!(this, mark, mark(&this.inner));
}
/// Internal representation of a [`JsError`].
diff --git a/core/engine/src/error.rs b/core/engine/src/error.rs
--- a/core/engine/src/error.rs
+++ b/core/engine/src/error.rs
@@ -76,6 +76,7 @@ enum Repr {
unsafe impl Trace for Repr {
custom_trace!(
this,
+ mark,
match &this {
Self::Native(err) => mark(err),
Self::Opaque(val) => mark(val),
diff --git a/core/engine/src/error.rs b/core/engine/src/error.rs
--- a/core/engine/src/error.rs
+++ b/core/engine/src/error.rs
@@ -549,7 +550,7 @@ impl fmt::Display for JsNativeError {
// SAFETY: just mirroring the default derive to allow destructuring.
unsafe impl Trace for JsNativeError {
- custom_trace!(this, {
+ custom_trace!(this, mark, {
mark(&this.kind);
mark(&this.cause);
mark(&this.realm);
diff --git a/core/engine/src/error.rs b/core/engine/src/error.rs
--- a/core/engine/src/error.rs
+++ b/core/engine/src/error.rs
@@ -1108,6 +1109,7 @@ pub enum JsNativeErrorKind {
unsafe impl Trace for JsNativeErrorKind {
custom_trace!(
this,
+ mark,
match &this {
Self::Aggregate(errors) => mark(errors),
Self::Error
diff --git a/core/engine/src/module/source.rs b/core/engine/src/module/source.rs
--- a/core/engine/src/module/source.rs
+++ b/core/engine/src/module/source.rs
@@ -104,7 +104,7 @@ enum Status {
// useful to have for state machines like `Status`. This is solved by manually implementing
// `Trace`.
unsafe impl Trace for Status {
- custom_trace!(this, {
+ custom_trace!(this, mark, {
match this {
Self::Unlinked | Self::Linking { info: _ } => {}
Self::PreLinked { context, info: _ } | Self::Linked { context, info: _ } => {
diff --git a/core/engine/src/module/source.rs b/core/engine/src/module/source.rs
--- a/core/engine/src/module/source.rs
+++ b/core/engine/src/module/source.rs
@@ -239,7 +239,7 @@ impl std::fmt::Debug for SourceTextContext {
}
unsafe impl Trace for SourceTextContext {
- custom_trace!(this, {
+ custom_trace!(this, mark, {
mark(&this.codeblock);
mark(&this.environments);
mark(&this.realm);
diff --git a/core/engine/src/native_function.rs b/core/engine/src/native_function.rs
--- a/core/engine/src/native_function.rs
+++ b/core/engine/src/native_function.rs
@@ -71,7 +71,7 @@ pub struct NativeFunctionObject {
// SAFETY: this traces all fields that need to be traced by the GC.
unsafe impl Trace for NativeFunctionObject {
- custom_trace!(this, {
+ custom_trace!(this, mark, {
mark(&this.f);
mark(&this.realm);
});
diff --git a/core/engine/src/native_function.rs b/core/engine/src/native_function.rs
--- a/core/engine/src/native_function.rs
+++ b/core/engine/src/native_function.rs
@@ -125,7 +125,7 @@ enum Inner {
// Manual implementation because deriving `Trace` triggers the `single_use_lifetimes` lint.
// SAFETY: Only closures can contain `Trace` captures, so this implementation is safe.
unsafe impl Trace for NativeFunction {
- custom_trace!(this, {
+ custom_trace!(this, mark, {
if let Inner::Closure(c) = &this.inner {
mark(c);
}
diff --git a/core/engine/src/object/mod.rs b/core/engine/src/object/mod.rs
--- a/core/engine/src/object/mod.rs
+++ b/core/engine/src/object/mod.rs
@@ -178,7 +178,9 @@ impl dyn NativeObject {
}
/// The internal representation of a JavaScript object.
-#[derive(Debug, Finalize)]
+#[derive(Debug, Finalize, Trace)]
+// SAFETY: This does not implement drop, so this is safe.
+#[boa_gc(unsafe_no_drop)]
pub struct Object<T: ?Sized> {
/// The collection of properties contained in the object
pub(crate) properties: PropertyMap,
diff --git a/core/engine/src/object/mod.rs b/core/engine/src/object/mod.rs
--- a/core/engine/src/object/mod.rs
+++ b/core/engine/src/object/mod.rs
@@ -201,16 +203,6 @@ impl<T: Default> Default for Object<T> {
}
}
-unsafe impl<T: Trace + ?Sized> Trace for Object<T> {
- boa_gc::custom_trace!(this, {
- mark(&this.data);
- mark(&this.properties);
- for (_, element) in &this.private_elements {
- mark(element);
- }
- });
-}
-
/// A Private Name.
#[derive(Clone, Debug, PartialEq, Eq, Trace, Finalize)]
pub struct PrivateName {
diff --git a/core/engine/src/object/property_map.rs b/core/engine/src/object/property_map.rs
--- a/core/engine/src/object/property_map.rs
+++ b/core/engine/src/object/property_map.rs
@@ -25,7 +25,7 @@ impl<K: Trace> Default for OrderedHashMap<K> {
}
unsafe impl<K: Trace> Trace for OrderedHashMap<K> {
- custom_trace!(this, {
+ custom_trace!(this, mark, {
for (k, v) in &this.0 {
mark(k);
mark(v);
diff --git a/core/engine/src/value/mod.rs b/core/engine/src/value/mod.rs
--- a/core/engine/src/value/mod.rs
+++ b/core/engine/src/value/mod.rs
@@ -81,7 +81,7 @@ pub enum JsValue {
}
unsafe impl Trace for JsValue {
- custom_trace! {this, {
+ custom_trace! {this, mark, {
if let Self::Object(o) = this {
mark(o);
}
diff --git a/core/engine/src/vm/completion_record.rs b/core/engine/src/vm/completion_record.rs
--- a/core/engine/src/vm/completion_record.rs
+++ b/core/engine/src/vm/completion_record.rs
@@ -18,7 +18,7 @@ pub(crate) enum CompletionRecord {
// SAFETY: this matches all possible variants and traces
// their inner contents, which makes this safe.
unsafe impl Trace for CompletionRecord {
- custom_trace!(this, {
+ custom_trace!(this, mark, {
match this {
Self::Normal(v) => mark(v),
Self::Return(r) => mark(r),
diff --git a/core/engine/src/vm/mod.rs b/core/engine/src/vm/mod.rs
--- a/core/engine/src/vm/mod.rs
+++ b/core/engine/src/vm/mod.rs
@@ -86,7 +86,7 @@ pub(crate) enum ActiveRunnable {
}
unsafe impl Trace for ActiveRunnable {
- custom_trace!(this, {
+ custom_trace!(this, mark, {
match this {
Self::Script(script) => mark(script),
Self::Module(module) => mark(module),
diff --git a/core/gc/src/cell.rs b/core/gc/src/cell.rs
--- a/core/gc/src/cell.rs
+++ b/core/gc/src/cell.rs
@@ -1,6 +1,9 @@
//! A garbage collected cell implementation
-use crate::trace::{Finalize, Trace};
+use crate::{
+ trace::{Finalize, Trace},
+ Tracer,
+};
use std::{
cell::{Cell, UnsafeCell},
cmp::Ordering,
diff --git a/core/gc/src/cell.rs b/core/gc/src/cell.rs
--- a/core/gc/src/cell.rs
+++ b/core/gc/src/cell.rs
@@ -218,15 +221,15 @@ impl<T: Trace + ?Sized> Finalize for GcRefCell<T> {}
// Implementing a Trace while the cell is being written to or incorrectly implementing Trace
// on GcCell's value may cause Undefined Behavior
unsafe impl<T: Trace + ?Sized> Trace for GcRefCell<T> {
- unsafe fn trace(&self) {
+ unsafe fn trace(&self, tracer: &mut Tracer) {
match self.flags.get().borrowed() {
BorrowState::Writing => (),
// SAFETY: Please see GcCell's Trace impl Safety note.
- _ => unsafe { (*self.cell.get()).trace() },
+ _ => unsafe { (*self.cell.get()).trace(tracer) },
}
}
- fn trace_non_roots(&self) {
+ unsafe fn trace_non_roots(&self) {
match self.flags.get().borrowed() {
BorrowState::Writing => (),
// SAFETY: Please see GcCell's Trace impl Safety note.
diff --git a/core/gc/src/internals/ephemeron_box.rs b/core/gc/src/internals/ephemeron_box.rs
--- a/core/gc/src/internals/ephemeron_box.rs
+++ b/core/gc/src/internals/ephemeron_box.rs
@@ -1,94 +1,14 @@
-use crate::{trace::Trace, Gc, GcBox};
+use crate::{trace::Trace, Gc, GcBox, Tracer};
use std::{
- cell::{Cell, UnsafeCell},
+ cell::UnsafeCell,
ptr::{self, NonNull},
};
-const MARK_MASK: u32 = 1 << (u32::BITS - 1);
-const NON_ROOTS_MASK: u32 = !MARK_MASK;
-const NON_ROOTS_MAX: u32 = NON_ROOTS_MASK;
-
-/// The `EphemeronBoxHeader` contains the `EphemeronBoxHeader`'s current state for the `Collector`'s
-/// Mark/Sweep as well as a pointer to the next ephemeron in the heap.
-///
-/// `ref_count` is the number of Gc instances, and `non_root_count` is the number of
-/// Gc instances in the heap. `non_root_count` also includes Mark Flag bit.
-///
-/// The next node is set by the `Allocator` during initialization and by the
-/// `Collector` during the sweep phase.
-pub(crate) struct EphemeronBoxHeader {
- ref_count: Cell<u32>,
- non_root_count: Cell<u32>,
-}
-
-impl EphemeronBoxHeader {
- /// Creates a new `EphemeronBoxHeader` with a root of 1 and next set to None.
- pub(crate) fn new() -> Self {
- Self {
- ref_count: Cell::new(1),
- non_root_count: Cell::new(0),
- }
- }
-
- /// Returns the `EphemeronBoxHeader`'s current ref count
- pub(crate) fn get_ref_count(&self) -> u32 {
- self.ref_count.get()
- }
-
- /// Returns a count for non-roots.
- pub(crate) fn get_non_root_count(&self) -> u32 {
- self.non_root_count.get() & NON_ROOTS_MASK
- }
-
- /// Increments `EphemeronBoxHeader`'s non-roots count.
- pub(crate) fn inc_non_root_count(&self) {
- let non_root_count = self.non_root_count.get();
-
- if (non_root_count & NON_ROOTS_MASK) < NON_ROOTS_MAX {
- self.non_root_count.set(non_root_count.wrapping_add(1));
- } else {
- // TODO: implement a better way to handle root overload.
- panic!("non roots counter overflow");
- }
- }
-
- /// Reset non-roots count to zero.
- pub(crate) fn reset_non_root_count(&self) {
- self.non_root_count
- .set(self.non_root_count.get() & !NON_ROOTS_MASK);
- }
-
- /// Returns a bool for whether `GcBoxHeader`'s mark bit is 1.
- pub(crate) fn is_marked(&self) -> bool {
- self.non_root_count.get() & MARK_MASK != 0
- }
-
- /// Sets `GcBoxHeader`'s mark bit to 1.
- pub(crate) fn mark(&self) {
- self.non_root_count
- .set(self.non_root_count.get() | MARK_MASK);
- }
-
- /// Sets `GcBoxHeader`'s mark bit to 0.
- pub(crate) fn unmark(&self) {
- self.non_root_count
- .set(self.non_root_count.get() & !MARK_MASK);
- }
-}
-
-impl core::fmt::Debug for EphemeronBoxHeader {
- fn fmt(&self, f: &mut core::fmt::Formatter<'_>) -> core::fmt::Result {
- f.debug_struct("EphemeronBoxHeader")
- .field("marked", &self.is_marked())
- .field("ref_count", &self.get_ref_count())
- .field("non_root_count", &self.get_non_root_count())
- .finish()
- }
-}
+use super::GcHeader;
/// The inner allocation of an [`Ephemeron`][crate::Ephemeron] pointer.
pub(crate) struct EphemeronBox<K: Trace + ?Sized + 'static, V: Trace + 'static> {
- pub(crate) header: EphemeronBoxHeader,
+ pub(crate) header: GcHeader,
data: UnsafeCell<Option<Data<K, V>>>,
}
diff --git a/core/gc/src/internals/ephemeron_box.rs b/core/gc/src/internals/ephemeron_box.rs
--- a/core/gc/src/internals/ephemeron_box.rs
+++ b/core/gc/src/internals/ephemeron_box.rs
@@ -101,7 +21,7 @@ impl<K: Trace + ?Sized, V: Trace> EphemeronBox<K, V> {
/// Creates a new `EphemeronBox` that tracks `key` and has `value` as its inner data.
pub(crate) fn new(key: &Gc<K>, value: V) -> Self {
Self {
- header: EphemeronBoxHeader::new(),
+ header: GcHeader::new(),
data: UnsafeCell::new(Some(Data {
key: key.inner_ptr(),
value,
diff --git a/core/gc/src/internals/ephemeron_box.rs b/core/gc/src/internals/ephemeron_box.rs
--- a/core/gc/src/internals/ephemeron_box.rs
+++ b/core/gc/src/internals/ephemeron_box.rs
@@ -112,7 +32,7 @@ impl<K: Trace + ?Sized, V: Trace> EphemeronBox<K, V> {
/// Creates a new `EphemeronBox` with its inner data in the invalidated state.
pub(crate) fn new_empty() -> Self {
Self {
- header: EphemeronBoxHeader::new(),
+ header: GcHeader::new(),
data: UnsafeCell::new(None),
}
}
diff --git a/core/gc/src/internals/ephemeron_box.rs b/core/gc/src/internals/ephemeron_box.rs
--- a/core/gc/src/internals/ephemeron_box.rs
+++ b/core/gc/src/internals/ephemeron_box.rs
@@ -190,12 +110,12 @@ impl<K: Trace + ?Sized, V: Trace> EphemeronBox<K, V> {
#[inline]
pub(crate) fn inc_ref_count(&self) {
- self.header.ref_count.set(self.header.ref_count.get() + 1);
+ self.header.inc_ref_count();
}
#[inline]
pub(crate) fn dec_ref_count(&self) {
- self.header.ref_count.set(self.header.ref_count.get() - 1);
+ self.header.dec_ref_count();
}
#[inline]
diff --git a/core/gc/src/internals/ephemeron_box.rs b/core/gc/src/internals/ephemeron_box.rs
--- a/core/gc/src/internals/ephemeron_box.rs
+++ b/core/gc/src/internals/ephemeron_box.rs
@@ -206,13 +126,13 @@ impl<K: Trace + ?Sized, V: Trace> EphemeronBox<K, V> {
pub(crate) trait ErasedEphemeronBox {
/// Gets the header of the `EphemeronBox`.
- fn header(&self) -> &EphemeronBoxHeader;
+ fn header(&self) -> &GcHeader;
/// Traces through the `EphemeronBox`'s held value, but only if it's marked and its key is also
/// marked. Returns `true` if the ephemeron successfully traced through its value. This also
/// considers ephemerons that are marked but don't have their value anymore as
/// "successfully traced".
- unsafe fn trace(&self) -> bool;
+ unsafe fn trace(&self, tracer: &mut Tracer) -> bool;
fn trace_non_roots(&self);
diff --git a/core/gc/src/internals/ephemeron_box.rs b/core/gc/src/internals/ephemeron_box.rs
--- a/core/gc/src/internals/ephemeron_box.rs
+++ b/core/gc/src/internals/ephemeron_box.rs
@@ -222,11 +142,11 @@ pub(crate) trait ErasedEphemeronBox {
}
impl<K: Trace + ?Sized, V: Trace> ErasedEphemeronBox for EphemeronBox<K, V> {
- fn header(&self) -> &EphemeronBoxHeader {
+ fn header(&self) -> &GcHeader {
&self.header
}
- unsafe fn trace(&self) -> bool {
+ unsafe fn trace(&self, tracer: &mut Tracer) -> bool {
if !self.header.is_marked() {
return false;
}
diff --git a/core/gc/src/internals/ephemeron_box.rs b/core/gc/src/internals/ephemeron_box.rs
--- a/core/gc/src/internals/ephemeron_box.rs
+++ b/core/gc/src/internals/ephemeron_box.rs
@@ -247,7 +167,7 @@ impl<K: Trace + ?Sized, V: Trace> ErasedEphemeronBox for EphemeronBox<K, V> {
if is_key_marked {
// SAFETY: this is safe to call, since we want to trace all reachable objects
// from a marked ephemeron that holds a live `key`.
- unsafe { data.value.trace() }
+ unsafe { data.value.trace(tracer) }
}
is_key_marked
diff --git a/core/gc/src/internals/gc_box.rs b/core/gc/src/internals/gc_box.rs
--- a/core/gc/src/internals/gc_box.rs
+++ b/core/gc/src/internals/gc_box.rs
@@ -1,95 +1,24 @@
use crate::Trace;
-use std::{cell::Cell, fmt, ptr};
-
-const MARK_MASK: u32 = 1 << (u32::BITS - 1);
-const NON_ROOTS_MASK: u32 = !MARK_MASK;
-const NON_ROOTS_MAX: u32 = NON_ROOTS_MASK;
-
-/// The `GcBoxheader` contains the `GcBox`'s current state for the `Collector`'s
-/// Mark/Sweep as well as a pointer to the next node in the heap.
-///
-/// `ref_count` is the number of Gc instances, and `non_root_count` is the number of
-/// Gc instances in the heap. `non_root_count` also includes Mark Flag bit.
-///
-/// The next node is set by the `Allocator` during initialization and by the
-/// `Collector` during the sweep phase.
-pub(crate) struct GcBoxHeader {
- ref_count: Cell<u32>,
- non_root_count: Cell<u32>,
-}
-
-impl GcBoxHeader {
- /// Creates a new `GcBoxHeader` with a root of 1 and next set to None.
- pub(crate) fn new() -> Self {
- Self {
- ref_count: Cell::new(1),
- non_root_count: Cell::new(0),
- }
- }
-
- /// Returns the `GcBoxHeader`'s current non-roots count
- pub(crate) fn get_non_root_count(&self) -> u32 {
- self.non_root_count.get() & NON_ROOTS_MASK
- }
-
- /// Increments `GcBoxHeader`'s non-roots count.
- pub(crate) fn inc_non_root_count(&self) {
- let non_root_count = self.non_root_count.get();
-
- if (non_root_count & NON_ROOTS_MASK) < NON_ROOTS_MAX {
- self.non_root_count.set(non_root_count.wrapping_add(1));
- } else {
- // TODO: implement a better way to handle root overload.
- panic!("non-roots counter overflow");
- }
- }
-
- /// Decreases `GcBoxHeader`'s current non-roots count.
- pub(crate) fn reset_non_root_count(&self) {
- self.non_root_count
- .set(self.non_root_count.get() & !NON_ROOTS_MASK);
- }
-
- /// Returns a bool for whether `GcBoxHeader`'s mark bit is 1.
- pub(crate) fn is_marked(&self) -> bool {
- self.non_root_count.get() & MARK_MASK != 0
- }
-
- /// Sets `GcBoxHeader`'s mark bit to 1.
- pub(crate) fn mark(&self) {
- self.non_root_count
- .set(self.non_root_count.get() | MARK_MASK);
- }
-
- /// Sets `GcBoxHeader`'s mark bit to 0.
- pub(crate) fn unmark(&self) {
- self.non_root_count
- .set(self.non_root_count.get() & !MARK_MASK);
- }
-}
+use std::ptr::{self};
-impl fmt::Debug for GcBoxHeader {
- fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
- f.debug_struct("GcBoxHeader")
- .field("marked", &self.is_marked())
- .field("ref_count", &self.ref_count.get())
- .field("non_root_count", &self.get_non_root_count())
- .finish_non_exhaustive()
- }
-}
+use super::{vtable_of, DropFn, GcHeader, RunFinalizerFn, TraceFn, TraceNonRootsFn, VTable};
/// A garbage collected allocation.
#[derive(Debug)]
+#[repr(C)]
pub struct GcBox<T: Trace + ?Sized + 'static> {
- pub(crate) header: GcBoxHeader,
+ pub(crate) header: GcHeader,
+ pub(crate) vtable: &'static VTable,
value: T,
}
impl<T: Trace> GcBox<T> {
/// Returns a new `GcBox` with a rooted `GcBoxHeader`.
pub(crate) fn new(value: T) -> Self {
+ let vtable = vtable_of::<T>();
Self {
- header: GcBoxHeader::new(),
+ header: GcHeader::new(),
+ vtable,
value,
}
}
diff --git a/core/gc/src/internals/gc_box.rs b/core/gc/src/internals/gc_box.rs
--- a/core/gc/src/internals/gc_box.rs
+++ b/core/gc/src/internals/gc_box.rs
@@ -103,20 +32,6 @@ impl<T: Trace + ?Sized> GcBox<T> {
ptr::eq(&this.header, &other.header)
}
- /// Marks this `GcBox` and traces its value.
- pub(crate) unsafe fn mark_and_trace(&self) {
- if !self.header.is_marked() {
- self.header.mark();
- // SAFETY: if `GcBox::trace_inner()` has been called, then,
- // this box must have been deemed as reachable via tracing
- // from a root, which by extension means that value has not
- // been dropped either.
- unsafe {
- self.value.trace();
- }
- }
- }
-
/// Returns a reference to the `GcBox`'s value.
pub(crate) const fn value(&self) -> &T {
&self.value
diff --git a/core/gc/src/internals/gc_box.rs b/core/gc/src/internals/gc_box.rs
--- a/core/gc/src/internals/gc_box.rs
+++ b/core/gc/src/internals/gc_box.rs
@@ -127,24 +42,14 @@ impl<T: Trace + ?Sized> GcBox<T> {
self.header.is_marked()
}
- #[inline]
- pub(crate) fn get_ref_count(&self) -> u32 {
- self.header.ref_count.get()
- }
-
#[inline]
pub(crate) fn inc_ref_count(&self) {
- self.header.ref_count.set(self.header.ref_count.get() + 1);
+ self.header.inc_ref_count();
}
#[inline]
pub(crate) fn dec_ref_count(&self) {
- self.header.ref_count.set(self.header.ref_count.get() - 1);
- }
-
- #[inline]
- pub(crate) fn get_non_root_count(&self) -> u32 {
- self.header.get_non_root_count()
+ self.header.dec_ref_count();
}
#[inline]
diff --git a/core/gc/src/internals/gc_box.rs b/core/gc/src/internals/gc_box.rs
--- a/core/gc/src/internals/gc_box.rs
+++ b/core/gc/src/internals/gc_box.rs
@@ -155,4 +60,34 @@ impl<T: Trace + ?Sized> GcBox<T> {
pub(crate) fn reset_non_root_count(&self) {
self.header.reset_non_root_count();
}
+
+ /// Check if the gc object is rooted.
+ ///
+ /// # Note
+ ///
+ /// This only gives a valid result if we have run through the
+ /// tracing non-roots phase.
+ pub(crate) fn is_rooted(&self) -> bool {
+ self.header.is_rooted()
+ }
+
+ pub(crate) fn trace_fn(&self) -> TraceFn {
+ self.vtable.trace_fn()
+ }
+
+ pub(crate) fn trace_non_roots_fn(&self) -> TraceNonRootsFn {
+ self.vtable.trace_non_roots_fn()
+ }
+
+ pub(crate) fn run_finalizer_fn(&self) -> RunFinalizerFn {
+ self.vtable.run_finalizer_fn()
+ }
+
+ pub(crate) fn drop_fn(&self) -> DropFn {
+ self.vtable.drop_fn()
+ }
+
+ pub(crate) fn size(&self) -> usize {
+ self.vtable.size()
+ }
}
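The new `is_rooted` accessor above encodes the collector's rootedness test: after the trace-non-roots phase, every handle reachable *from the heap* has bumped `non_root_count`, so any surplus in `ref_count` must come from stack handles. A minimal, self-contained sketch of that comparison (the `Counts` type is illustrative, not the crate's real header):

```rust
use std::cell::Cell;

// Illustrative stand-in for the header's two counters.
struct Counts {
    ref_count: Cell<u32>,
    non_root_count: Cell<u32>,
}

impl Counts {
    // Rooted iff some handle was NOT found inside another heap object,
    // i.e. it must live on the stack (or in a global).
    fn is_rooted(&self) -> bool {
        self.non_root_count.get() < self.ref_count.get()
    }
}

fn main() {
    // Three handles total, two of them found inside other heap objects:
    // one handle remains on the stack, so the object is a root.
    let rooted = Counts { ref_count: Cell::new(3), non_root_count: Cell::new(2) };
    assert!(rooted.is_rooted());

    // Every handle was found in the heap: nothing on the stack roots it.
    let unrooted = Counts { ref_count: Cell::new(2), non_root_count: Cell::new(2) };
    assert!(!unrooted.is_rooted());
}
```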
diff --git /dev/null b/core/gc/src/internals/gc_header.rs
new file mode 100644
--- /dev/null
+++ b/core/gc/src/internals/gc_header.rs
@@ -0,0 +1,104 @@
+use std::{cell::Cell, fmt};
+
+const MARK_MASK: u32 = 1 << (u32::BITS - 1);
+const NON_ROOTS_MASK: u32 = !MARK_MASK;
+const NON_ROOTS_MAX: u32 = NON_ROOTS_MASK;
+
+/// The `GcHeader` contains the `GcBox`'s and `EphemeronBox`'s current state for the `Collector`'s
+/// Mark/Sweep as well as a pointer to the next node in the heap.
+///
+/// `ref_count` is the number of Gc instances, and `non_root_count` is the number of
+/// Gc instances in the heap. `non_root_count` also includes Mark Flag bit.
+///
+/// The next node is set by the `Allocator` during initialization and by the
+/// `Collector` during the sweep phase.
+pub(crate) struct GcHeader {
+ ref_count: Cell<u32>,
+ non_root_count: Cell<u32>,
+}
+
+impl GcHeader {
+ /// Creates a new [`GcHeader`] with a ref count of 1.
+ pub(crate) fn new() -> Self {
+ Self {
+ ref_count: Cell::new(1),
+ non_root_count: Cell::new(0),
+ }
+ }
+
+ /// Returns the [`GcHeader`]'s current ref count.
+ pub(crate) fn ref_count(&self) -> u32 {
+ self.ref_count.get()
+ }
+
+ /// Returns the [`GcHeader`]'s current non-roots count
+ pub(crate) fn non_root_count(&self) -> u32 {
+ self.non_root_count.get() & NON_ROOTS_MASK
+ }
+
+ /// Increments [`GcHeader`]'s non-roots count.
+ pub(crate) fn inc_non_root_count(&self) {
+ let non_root_count = self.non_root_count.get();
+
+ if (non_root_count & NON_ROOTS_MASK) < NON_ROOTS_MAX {
+ self.non_root_count.set(non_root_count.wrapping_add(1));
+ } else {
+ // TODO: implement a better way to handle root overload.
+ panic!("non-roots counter overflow");
+ }
+ }
+
+ /// Decreases [`GcHeader`]'s current non-roots count.
+ pub(crate) fn reset_non_root_count(&self) {
+ self.non_root_count
+ .set(self.non_root_count.get() & !NON_ROOTS_MASK);
+ }
+
+ /// Returns a bool for whether [`GcHeader`]'s mark bit is 1.
+ pub(crate) fn is_marked(&self) -> bool {
+ self.non_root_count.get() & MARK_MASK != 0
+ }
+
+ #[inline]
+ pub(crate) fn inc_ref_count(&self) {
+ self.ref_count.set(self.ref_count.get() + 1);
+ }
+
+ #[inline]
+ pub(crate) fn dec_ref_count(&self) {
+ self.ref_count.set(self.ref_count.get() - 1);
+ }
+
+ /// Check if the gc object is rooted.
+ ///
+ /// # Note
+ ///
+ /// This only gives a valid result if we have run through the
+ /// tracing non-roots phase.
+ #[inline]
+ pub(crate) fn is_rooted(&self) -> bool {
+ self.non_root_count() < self.ref_count()
+ }
+
+ /// Sets [`GcHeader`]'s mark bit to 1.
+ pub(crate) fn mark(&self) {
+ self.non_root_count
+ .set(self.non_root_count.get() | MARK_MASK);
+ }
+
+ /// Sets [`GcHeader`]'s mark bit to 0.
+ pub(crate) fn unmark(&self) {
+ self.non_root_count
+ .set(self.non_root_count.get() & !MARK_MASK);
+ }
+}
+
+impl fmt::Debug for GcHeader {
+ fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+ f.debug_struct("GcHeader")
+ .field("marked", &self.is_marked())
+ .field("ref_count", &self.ref_count.get())
+ .field("non_root_count", &self.non_root_count())
+ .finish_non_exhaustive()
+ }
+}
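In the `GcHeader` above, the collector's mark flag shares a single `u32` with the non-root counter: the top bit is the mark, the remaining 31 bits count non-roots. A minimal sketch of that bit packing, assuming a simplified `Header` in place of the real `GcHeader`:

```rust
use std::cell::Cell;

const MARK_MASK: u32 = 1 << (u32::BITS - 1); // top bit = mark flag
const NON_ROOTS_MASK: u32 = !MARK_MASK;      // low 31 bits = counter

struct Header {
    non_root_count: Cell<u32>,
}

impl Header {
    fn new() -> Self {
        Self { non_root_count: Cell::new(0) }
    }
    // Reading the counter masks the mark bit out.
    fn non_root_count(&self) -> u32 {
        self.non_root_count.get() & NON_ROOTS_MASK
    }
    fn mark(&self) {
        self.non_root_count.set(self.non_root_count.get() | MARK_MASK);
    }
    fn is_marked(&self) -> bool {
        self.non_root_count.get() & MARK_MASK != 0
    }
}

fn main() {
    let h = Header::new();
    h.non_root_count.set(h.non_root_count.get() + 3); // three non-root refs
    h.mark();
    // Marking does not disturb the count, and vice versa.
    assert_eq!(h.non_root_count(), 3);
    assert!(h.is_marked());
}
```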
diff --git a/core/gc/src/internals/mod.rs b/core/gc/src/internals/mod.rs
--- a/core/gc/src/internals/mod.rs
+++ b/core/gc/src/internals/mod.rs
@@ -1,8 +1,12 @@
mod ephemeron_box;
mod gc_box;
+mod gc_header;
+mod vtable;
mod weak_map_box;
pub(crate) use self::ephemeron_box::{EphemeronBox, ErasedEphemeronBox};
+pub(crate) use self::gc_header::GcHeader;
pub(crate) use self::weak_map_box::{ErasedWeakMapBox, WeakMapBox};
+pub(crate) use vtable::{vtable_of, DropFn, RunFinalizerFn, TraceFn, TraceNonRootsFn, VTable};
pub use self::gc_box::GcBox;
diff --git /dev/null b/core/gc/src/internals/vtable.rs
new file mode 100644
--- /dev/null
+++ b/core/gc/src/internals/vtable.rs
@@ -0,0 +1,92 @@
+use crate::{GcBox, GcErasedPointer, Trace, Tracer};
+
+// Workaround: https://users.rust-lang.org/t/custom-vtables-with-integers/78508
+pub(crate) const fn vtable_of<T: Trace + 'static>() -> &'static VTable {
+ trait HasVTable: Trace + Sized + 'static {
+ const VTABLE: &'static VTable;
+
+ unsafe fn trace_fn(this: GcErasedPointer, tracer: &mut Tracer) {
+ // SAFETY: The caller must ensure that the passed erased pointer is `GcBox<Self>`.
+ let value = unsafe { this.cast::<GcBox<Self>>().as_ref().value() };
+
+ // SAFETY: The implementor must ensure that `trace` is correctly implemented.
+ unsafe {
+ Trace::trace(value, tracer);
+ }
+ }
+
+ unsafe fn trace_non_roots_fn(this: GcErasedPointer) {
+ // SAFETY: The caller must ensure that the passed erased pointer is `GcBox<Self>`.
+ let value = unsafe { this.cast::<GcBox<Self>>().as_ref().value() };
+
+ // SAFETY: The implementor must ensure that `trace_non_roots` is correctly implemented.
+ unsafe {
+ Self::trace_non_roots(value);
+ }
+ }
+
+ unsafe fn run_finalizer_fn(this: GcErasedPointer) {
+ // SAFETY: The caller must ensure that the passed erased pointer is `GcBox<Self>`.
+ let value = unsafe { this.cast::<GcBox<Self>>().as_ref().value() };
+
+ Self::run_finalizer(value);
+ }
+
+ // SAFETY: The caller must ensure that the passed erased pointer is `GcBox<Self>`.
+ unsafe fn drop_fn(this: GcErasedPointer) {
+ // SAFETY: The caller must ensure that the passed erased pointer is `GcBox<Self>`.
+ let this = this.cast::<GcBox<Self>>();
+
+ // SAFETY: The caller must ensure the erased pointer is not dropped or deallocated.
+ let _value = unsafe { Box::from_raw(this.as_ptr()) };
+ }
+ }
+
+ impl<T: Trace + 'static> HasVTable for T {
+ const VTABLE: &'static VTable = &VTable {
+ trace_fn: T::trace_fn,
+ trace_non_roots_fn: T::trace_non_roots_fn,
+ run_finalizer_fn: T::run_finalizer_fn,
+ drop_fn: T::drop_fn,
+ size: std::mem::size_of::<GcBox<T>>(),
+ };
+ }
+
+ T::VTABLE
+}
+
+pub(crate) type TraceFn = unsafe fn(this: GcErasedPointer, tracer: &mut Tracer);
+pub(crate) type TraceNonRootsFn = unsafe fn(this: GcErasedPointer);
+pub(crate) type RunFinalizerFn = unsafe fn(this: GcErasedPointer);
+pub(crate) type DropFn = unsafe fn(this: GcErasedPointer);
+
+#[derive(Debug)]
+pub(crate) struct VTable {
+ trace_fn: TraceFn,
+ trace_non_roots_fn: TraceNonRootsFn,
+ run_finalizer_fn: RunFinalizerFn,
+ drop_fn: DropFn,
+ size: usize,
+}
+
+impl VTable {
+ pub(crate) fn trace_fn(&self) -> TraceFn {
+ self.trace_fn
+ }
+
+ pub(crate) fn trace_non_roots_fn(&self) -> TraceNonRootsFn {
+ self.trace_non_roots_fn
+ }
+
+ pub(crate) fn run_finalizer_fn(&self) -> RunFinalizerFn {
+ self.run_finalizer_fn
+ }
+
+ pub(crate) fn drop_fn(&self) -> DropFn {
+ self.drop_fn
+ }
+
+ pub(crate) fn size(&self) -> usize {
+ self.size
+ }
+}
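The `vtable_of` workaround above builds a `&'static` table of plain function pointers per type, so erased pointers can stay thin instead of carrying Rust's `dyn` fat-pointer metadata. A reduced sketch of the same trick with only a `size` entry (`Vtbl` and `vtbl_of` here are illustrative names, not the crate's API):

```rust
struct Vtbl {
    size: fn() -> usize,
}

const fn vtbl_of<T: 'static>() -> &'static Vtbl {
    // The helper trait lets each T promote its own &'static table,
    // mirroring the workaround used in the patch.
    trait HasVtbl: Sized + 'static {
        const V: &'static Vtbl;
        fn size() -> usize {
            std::mem::size_of::<Self>()
        }
    }

    impl<T: 'static> HasVtbl for T {
        const V: &'static Vtbl = &Vtbl { size: T::size };
    }

    T::V
}

fn main() {
    // Each monomorphization gets its own table entry.
    assert_eq!((vtbl_of::<u64>().size)(), 8);
    assert_eq!((vtbl_of::<[u8; 3]>().size)(), 3);
}
```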
diff --git a/core/gc/src/internals/weak_map_box.rs b/core/gc/src/internals/weak_map_box.rs
--- a/core/gc/src/internals/weak_map_box.rs
+++ b/core/gc/src/internals/weak_map_box.rs
@@ -1,4 +1,4 @@
-use crate::{pointers::RawWeakMap, GcRefCell, Trace, WeakGc};
+use crate::{pointers::RawWeakMap, GcRefCell, Trace, Tracer, WeakGc};
/// A box that is used to track [`WeakMap`][`crate::WeakMap`]s.
pub(crate) struct WeakMapBox<K: Trace + ?Sized + 'static, V: Trace + Sized + 'static> {
diff --git a/core/gc/src/internals/weak_map_box.rs b/core/gc/src/internals/weak_map_box.rs
--- a/core/gc/src/internals/weak_map_box.rs
+++ b/core/gc/src/internals/weak_map_box.rs
@@ -14,7 +14,7 @@ pub(crate) trait ErasedWeakMapBox {
fn is_live(&self) -> bool;
/// Traces the weak reference inside of the [`WeakMapBox`] if the weak map is live.
- unsafe fn trace(&self);
+ unsafe fn trace(&self, tracer: &mut Tracer);
}
impl<K: Trace + ?Sized, V: Trace + Clone> ErasedWeakMapBox for WeakMapBox<K, V> {
diff --git a/core/gc/src/internals/weak_map_box.rs b/core/gc/src/internals/weak_map_box.rs
--- a/core/gc/src/internals/weak_map_box.rs
+++ b/core/gc/src/internals/weak_map_box.rs
@@ -30,10 +30,10 @@ impl<K: Trace + ?Sized, V: Trace + Clone> ErasedWeakMapBox for WeakMapBox<K, V>
self.map.upgrade().is_some()
}
- unsafe fn trace(&self) {
+ unsafe fn trace(&self, tracer: &mut Tracer) {
if self.map.upgrade().is_some() {
// SAFETY: When the weak map is live, the weak reference should be traced.
- unsafe { self.map.trace() }
+ unsafe { self.map.trace(tracer) }
}
}
}
diff --git a/core/gc/src/lib.rs b/core/gc/src/lib.rs
--- a/core/gc/src/lib.rs
+++ b/core/gc/src/lib.rs
@@ -25,20 +25,20 @@ pub(crate) mod internals;
use boa_profiler::Profiler;
use internals::{EphemeronBox, ErasedEphemeronBox, ErasedWeakMapBox, WeakMapBox};
-use pointers::RawWeakMap;
+use pointers::{NonTraceable, RawWeakMap};
use std::{
cell::{Cell, RefCell},
mem,
ptr::NonNull,
};
-pub use crate::trace::{Finalize, Trace};
+pub use crate::trace::{Finalize, Trace, Tracer};
pub use boa_macros::{Finalize, Trace};
pub use cell::{GcRef, GcRefCell, GcRefMut};
pub use internals::GcBox;
pub use pointers::{Ephemeron, Gc, WeakGc, WeakMap};
-type GcPointer = NonNull<GcBox<dyn Trace>>;
+type GcErasedPointer = NonNull<GcBox<NonTraceable>>;
type EphemeronPointer = NonNull<dyn ErasedEphemeronBox>;
type ErasedWeakMapBoxPointer = NonNull<dyn ErasedWeakMapBox>;
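The `GcErasedPointer` cast above is sound because `GcBox` is now `#[repr(C)]` with the header first, so an erased pointer can still read the header without knowing `T`. A hedged sketch of that layout guarantee (types here are illustrative, not the crate's real `GcBox`):

```rust
use std::ptr::NonNull;

#[repr(C)]
struct Header {
    marked: bool,
}

// `#[repr(C)]` guarantees `header` sits at offset 0 in every TypedBox<T>.
#[repr(C)]
struct TypedBox<T> {
    header: Header,
    value: T,
}

fn is_marked(erased: NonNull<Header>) -> bool {
    // SAFETY: caller guarantees `erased` came from a live `TypedBox<T>`,
    // whose first field is a `Header` thanks to `#[repr(C)]`.
    unsafe { erased.as_ref().marked }
}

fn main() {
    let mut boxed = TypedBox { header: Header { marked: true }, value: 42u64 };
    let typed = NonNull::from(&mut boxed);
    // Erase the value type; the header stays accessible.
    let erased: NonNull<Header> = typed.cast();
    assert!(is_marked(erased));
}
```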
diff --git a/core/gc/src/lib.rs b/core/gc/src/lib.rs
--- a/core/gc/src/lib.rs
+++ b/core/gc/src/lib.rs
@@ -79,7 +79,7 @@ struct GcRuntimeData {
struct BoaGc {
config: GcConfig,
runtime: GcRuntimeData,
- strongs: Vec<GcPointer>,
+ strongs: Vec<GcErasedPointer>,
weaks: Vec<EphemeronPointer>,
weak_maps: Vec<ErasedWeakMapBoxPointer>,
}
diff --git a/core/gc/src/lib.rs b/core/gc/src/lib.rs
--- a/core/gc/src/lib.rs
+++ b/core/gc/src/lib.rs
@@ -135,7 +135,7 @@ impl Allocator {
Self::manage_state(&mut gc);
// Safety: value cannot be a null pointer, since `Box` cannot return null pointers.
let ptr = unsafe { NonNull::new_unchecked(Box::into_raw(Box::new(value))) };
- let erased: NonNull<GcBox<dyn Trace>> = ptr;
+ let erased: NonNull<GcBox<NonTraceable>> = ptr.cast();
gc.strongs.push(erased);
gc.runtime.bytes_allocated += element_size;
diff --git a/core/gc/src/lib.rs b/core/gc/src/lib.rs
--- a/core/gc/src/lib.rs
+++ b/core/gc/src/lib.rs
@@ -202,7 +202,7 @@ impl Allocator {
}
struct Unreachables {
- strong: Vec<NonNull<GcBox<dyn Trace>>>,
+ strong: Vec<GcErasedPointer>,
weak: Vec<NonNull<dyn ErasedEphemeronBox>>,
}
diff --git a/core/gc/src/lib.rs b/core/gc/src/lib.rs
--- a/core/gc/src/lib.rs
+++ b/core/gc/src/lib.rs
@@ -228,7 +228,11 @@ impl Collector {
Self::trace_non_roots(gc);
- let unreachables = Self::mark_heap(&gc.strongs, &gc.weaks, &gc.weak_maps);
+ let mut tracer = Tracer::new();
+
+ let unreachables = Self::mark_heap(&mut tracer, &gc.strongs, &gc.weaks, &gc.weak_maps);
+
+ assert!(tracer.is_empty(), "The queue should be empty");
// Only finalize if there are any unreachable nodes.
if !unreachables.strong.is_empty() || !unreachables.weak.is_empty() {
diff --git a/core/gc/src/lib.rs b/core/gc/src/lib.rs
--- a/core/gc/src/lib.rs
+++ b/core/gc/src/lib.rs
@@ -236,7 +240,9 @@ impl Collector {
// SAFETY: All passed pointers are valid, since we won't deallocate until `Self::sweep`.
unsafe { Self::finalize(unreachables) };
- let _final_unreachables = Self::mark_heap(&gc.strongs, &gc.weaks, &gc.weak_maps);
+ // Reuse the tracer's already allocated capacity.
+ let _final_unreachables =
+ Self::mark_heap(&mut tracer, &gc.strongs, &gc.weaks, &gc.weak_maps);
}
// SAFETY: The head of our linked list is always valid per the invariants of our GC.
diff --git a/core/gc/src/lib.rs b/core/gc/src/lib.rs
--- a/core/gc/src/lib.rs
+++ b/core/gc/src/lib.rs
@@ -277,8 +283,12 @@ impl Collector {
// Then, we can find whether there is a reference from other places, and they are the roots.
for node in &gc.strongs {
// SAFETY: node must be valid as this phase cannot drop any node.
- let node_ref = unsafe { node.as_ref() };
- node_ref.value().trace_non_roots();
+ let trace_non_roots_fn = unsafe { node.as_ref() }.trace_non_roots_fn();
+
+ // SAFETY: The function pointer is appropriate for this node type because we extract it from its VTable.
+ unsafe {
+ trace_non_roots_fn(*node);
+ }
}
for eph in &gc.weaks {
diff --git a/core/gc/src/lib.rs b/core/gc/src/lib.rs
--- a/core/gc/src/lib.rs
+++ b/core/gc/src/lib.rs
@@ -290,7 +300,8 @@ impl Collector {
/// Walk the heap and mark any nodes deemed reachable
fn mark_heap(
- strongs: &[GcPointer],
+ tracer: &mut Tracer,
+ strongs: &[GcErasedPointer],
weaks: &[EphemeronPointer],
weak_maps: &[ErasedWeakMapBoxPointer],
) -> Unreachables {
diff --git a/core/gc/src/lib.rs b/core/gc/src/lib.rs
--- a/core/gc/src/lib.rs
+++ b/core/gc/src/lib.rs
@@ -306,10 +317,26 @@ impl Collector {
for node in strongs {
// SAFETY: node must be valid as this phase cannot drop any node.
let node_ref = unsafe { node.as_ref() };
- if node_ref.get_non_root_count() < node_ref.get_ref_count() {
- // SAFETY: the gc heap object should be alive if there is a root.
- unsafe {
- node_ref.mark_and_trace();
+ if node_ref.is_rooted() {
+ tracer.enqueue(*node);
+
+ while let Some(node) = tracer.next() {
+ // SAFETY: the gc heap object should be alive if there is a root.
+ let node_ref = unsafe { node.as_ref() };
+
+ if !node_ref.header.is_marked() {
+ node_ref.header.mark();
+
+ // SAFETY: if `GcBox::trace_inner()` has been called, then,
+ // this box must have been deemed as reachable via tracing
+ // from a root, which by extension means that value has not
+ // been dropped either.
+
+ let trace_fn = node_ref.trace_fn();
+
+ // SAFETY: The function pointer is appropriate for this node type because we extract it from its VTable.
+ unsafe { trace_fn(node, tracer) }
+ }
}
} else if !node_ref.is_marked() {
strong_dead.push(*node);
diff --git a/core/gc/src/lib.rs b/core/gc/src/lib.rs
--- a/core/gc/src/lib.rs
+++ b/core/gc/src/lib.rs
@@ -338,13 +365,23 @@ impl Collector {
// SAFETY: node must be valid as this phase cannot drop any node.
let eph_ref = unsafe { eph.as_ref() };
let header = eph_ref.header();
- if header.get_non_root_count() < header.get_ref_count() {
+ if header.is_rooted() {
header.mark();
}
// SAFETY: the garbage collector ensures `eph_ref` always points to valid data.
- if unsafe { !eph_ref.trace() } {
+ if unsafe { !eph_ref.trace(tracer) } {
pending_ephemerons.push(*eph);
}
+
+ while let Some(node) = tracer.next() {
+ // SAFETY: node must be valid as this phase cannot drop any node.
+ let trace_fn = unsafe { node.as_ref() }.trace_fn();
+
+ // SAFETY: The function pointer is appropriate for this node type because we extract it from its VTable.
+ unsafe {
+ trace_fn(node, tracer);
+ }
+ }
}
// 2. Trace all the weak pointers in the live weak maps to make sure they do not get swept.
diff --git a/core/gc/src/lib.rs b/core/gc/src/lib.rs
--- a/core/gc/src/lib.rs
+++ b/core/gc/src/lib.rs
@@ -353,7 +390,17 @@ impl Collector {
let node_ref = unsafe { w.as_ref() };
// SAFETY: The garbage collector ensures that all nodes are valid.
- unsafe { node_ref.trace() };
+ unsafe { node_ref.trace(tracer) };
+
+ while let Some(node) = tracer.next() {
+ // SAFETY: node must be valid as this phase cannot drop any node.
+ let trace_fn = unsafe { node.as_ref() }.trace_fn();
+
+ // SAFETY: The function pointer is appropriate for this node type because we extract it from its VTable.
+ unsafe {
+ trace_fn(node, tracer);
+ }
+ }
}
// 3. Iterate through all pending ephemerons, removing the ones which have been successfully
diff --git a/core/gc/src/lib.rs b/core/gc/src/lib.rs
--- a/core/gc/src/lib.rs
+++ b/core/gc/src/lib.rs
@@ -365,7 +412,19 @@ impl Collector {
// SAFETY: node must be valid as this phase cannot drop any node.
let eph_ref = unsafe { eph.as_ref() };
// SAFETY: the garbage collector ensures `eph_ref` always points to valid data.
- unsafe { !eph_ref.trace() }
+ let is_key_marked = unsafe { !eph_ref.trace(tracer) };
+
+ while let Some(node) = tracer.next() {
+ // SAFETY: node must be valid as this phase cannot drop any node.
+ let trace_fn = unsafe { node.as_ref() }.trace_fn();
+
+ // SAFETY: The function pointer is appropriate for this node type because we extract it from its VTable.
+ unsafe {
+ trace_fn(node, tracer);
+ }
+ }
+
+ is_key_marked
});
if previous_len == pending_ephemerons.len() {
diff --git a/core/gc/src/lib.rs b/core/gc/src/lib.rs
--- a/core/gc/src/lib.rs
+++ b/core/gc/src/lib.rs
@@ -396,8 +455,13 @@ impl Collector {
let _timer = Profiler::global().start_event("Gc Finalization", "gc");
for node in unreachables.strong {
// SAFETY: The caller must ensure all pointers inside `unreachables.strong` are valid.
- let node = unsafe { node.as_ref() };
- Trace::run_finalizer(node.value());
+ let node_ref = unsafe { node.as_ref() };
+ let run_finalizer_fn = node_ref.run_finalizer_fn();
+
+ // SAFETY: The function pointer is appropriate for this node type because we extract it from its VTable.
+ unsafe {
+ run_finalizer_fn(node);
+ }
}
for node in unreachables.weak {
// SAFETY: The caller must ensure all pointers inside `unreachables.weak` are valid.
diff --git a/core/gc/src/lib.rs b/core/gc/src/lib.rs
--- a/core/gc/src/lib.rs
+++ b/core/gc/src/lib.rs
@@ -413,7 +477,7 @@ impl Collector {
/// - Providing a list of pointers that weren't allocated by `Box::into_raw(Box::new(..))`
/// will result in Undefined Behaviour.
unsafe fn sweep(
- strong: &mut Vec<GcPointer>,
+ strong: &mut Vec<GcErasedPointer>,
weak: &mut Vec<EphemeronPointer>,
total_allocated: &mut usize,
) {
diff --git a/core/gc/src/lib.rs b/core/gc/src/lib.rs
--- a/core/gc/src/lib.rs
+++ b/core/gc/src/lib.rs
@@ -431,9 +495,14 @@ impl Collector {
} else {
// SAFETY: The algorithm ensures only unmarked/unreachable pointers are dropped.
// The caller must ensure all pointers were allocated by `Box::into_raw(Box::new(..))`.
- let unmarked_node = unsafe { Box::from_raw(node.as_ptr()) };
- let unallocated_bytes = mem::size_of_val(&*unmarked_node);
- *total_allocated -= unallocated_bytes;
+ let drop_fn = node_ref.drop_fn();
+ let size = node_ref.size();
+ *total_allocated -= size;
+
+ // SAFETY: The function pointer is appropriate for this node type because we extract it from its VTable.
+ unsafe {
+ drop_fn(*node);
+ }
false
}
diff --git a/core/gc/src/lib.rs b/core/gc/src/lib.rs
--- a/core/gc/src/lib.rs
+++ b/core/gc/src/lib.rs
@@ -478,7 +547,12 @@ impl Collector {
// SAFETY:
// The `Allocator` must always ensure its start node is a valid, non-null pointer that
// was allocated by `Box::from_raw(Box::new(..))`.
- let _unmarked_node = unsafe { Box::from_raw(node.as_ptr()) };
+ let drop_fn = unsafe { node.as_ref() }.drop_fn();
+
+ // SAFETY: The function pointer is appropriate for this node type because we extract it from its VTable.
+ unsafe {
+ drop_fn(node);
+ }
}
for node in std::mem::take(&mut gc.weaks) {
diff --git a/core/gc/src/pointers/ephemeron.rs b/core/gc/src/pointers/ephemeron.rs
--- a/core/gc/src/pointers/ephemeron.rs
+++ b/core/gc/src/pointers/ephemeron.rs
@@ -2,7 +2,7 @@ use crate::{
finalizer_safe,
internals::EphemeronBox,
trace::{Finalize, Trace},
- Allocator, Gc,
+ Allocator, Gc, Tracer,
};
use std::ptr::NonNull;
diff --git a/core/gc/src/pointers/ephemeron.rs b/core/gc/src/pointers/ephemeron.rs
--- a/core/gc/src/pointers/ephemeron.rs
+++ b/core/gc/src/pointers/ephemeron.rs
@@ -108,7 +108,7 @@ impl<K: Trace + ?Sized, V: Trace> Finalize for Ephemeron<K, V> {
// SAFETY: `Ephemeron`s trace implementation only marks its inner box because we want to stop
// tracing through weakly held pointers.
unsafe impl<K: Trace + ?Sized, V: Trace> Trace for Ephemeron<K, V> {
- unsafe fn trace(&self) {
+ unsafe fn trace(&self, _tracer: &mut Tracer) {
// SAFETY: We need to mark the inner box of the `Ephemeron` since it is reachable
// from a root and this means it cannot be dropped.
unsafe {
diff --git a/core/gc/src/pointers/ephemeron.rs b/core/gc/src/pointers/ephemeron.rs
--- a/core/gc/src/pointers/ephemeron.rs
+++ b/core/gc/src/pointers/ephemeron.rs
@@ -116,7 +116,7 @@ unsafe impl<K: Trace + ?Sized, V: Trace> Trace for Ephemeron<K, V> {
}
}
- fn trace_non_roots(&self) {
+ unsafe fn trace_non_roots(&self) {
self.inner().inc_non_root_count();
}
diff --git a/core/gc/src/pointers/gc.rs b/core/gc/src/pointers/gc.rs
--- a/core/gc/src/pointers/gc.rs
+++ b/core/gc/src/pointers/gc.rs
@@ -2,7 +2,7 @@ use crate::{
finalizer_safe,
internals::{EphemeronBox, GcBox},
trace::{Finalize, Trace},
- Allocator, Ephemeron, WeakGc,
+ Allocator, Ephemeron, GcErasedPointer, Tracer, WeakGc,
};
use std::{
cmp::Ordering,
diff --git a/core/gc/src/pointers/gc.rs b/core/gc/src/pointers/gc.rs
--- a/core/gc/src/pointers/gc.rs
+++ b/core/gc/src/pointers/gc.rs
@@ -14,6 +14,39 @@ use std::{
rc::Rc,
};
+/// Zero-sized struct that is used to ensure that we do not call its trace methods,
+/// call its finalization method, or drop it.
+///
+/// This can only happen if we are accessing a [`GcErasedPointer`] directly, which is a bug.
+/// Panics if any of its methods are called.
+///
+/// Note: Accessing the [`crate::internals::GcHeader`] of [`GcErasedPointer`] is fine.
+pub(crate) struct NonTraceable(());
+
+impl Finalize for NonTraceable {
+ fn finalize(&self) {
+ unreachable!()
+ }
+}
+
+unsafe impl Trace for NonTraceable {
+ unsafe fn trace(&self, _tracer: &mut Tracer) {
+ unreachable!()
+ }
+ unsafe fn trace_non_roots(&self) {
+ unreachable!()
+ }
+ fn run_finalizer(&self) {
+ unreachable!()
+ }
+}
+
+impl Drop for NonTraceable {
+ fn drop(&mut self) {
+ unreachable!()
+ }
+}
+
/// A garbage-collected pointer type over an immutable value.
pub struct Gc<T: Trace + ?Sized + 'static> {
pub(crate) inner_ptr: NonNull<GcBox<T>>,
diff --git a/core/gc/src/pointers/gc.rs b/core/gc/src/pointers/gc.rs
--- a/core/gc/src/pointers/gc.rs
+++ b/core/gc/src/pointers/gc.rs
@@ -98,6 +131,10 @@ impl<T: Trace + ?Sized> Gc<T> {
marker: PhantomData,
}
}
+
+ pub(crate) fn as_erased(&self) -> GcErasedPointer {
+ self.inner_ptr.cast()
+ }
}
impl<T: Trace + ?Sized> Gc<T> {
diff --git a/core/gc/src/pointers/gc.rs b/core/gc/src/pointers/gc.rs
--- a/core/gc/src/pointers/gc.rs
+++ b/core/gc/src/pointers/gc.rs
@@ -125,14 +162,11 @@ impl<T: Trace + ?Sized> Finalize for Gc<T> {
// SAFETY: `Gc` maintains it's own rootedness and implements all methods of
// Trace. It is not possible to root an already rooted `Gc` and vice versa.
unsafe impl<T: Trace + ?Sized> Trace for Gc<T> {
- unsafe fn trace(&self) {
- // SAFETY: Inner must be live and allocated GcBox.
- unsafe {
- self.inner().mark_and_trace();
- }
+ unsafe fn trace(&self, tracer: &mut Tracer) {
+ tracer.enqueue(self.as_erased());
}
- fn trace_non_roots(&self) {
+ unsafe fn trace_non_roots(&self) {
self.inner().inc_non_root_count();
}
diff --git a/core/gc/src/pointers/mod.rs b/core/gc/src/pointers/mod.rs
--- a/core/gc/src/pointers/mod.rs
+++ b/core/gc/src/pointers/mod.rs
@@ -10,4 +10,5 @@ pub use gc::Gc;
pub use weak::WeakGc;
pub use weak_map::WeakMap;
+pub(crate) use gc::NonTraceable;
pub(crate) use weak_map::RawWeakMap;
diff --git a/core/gc/src/pointers/weak_map.rs b/core/gc/src/pointers/weak_map.rs
--- a/core/gc/src/pointers/weak_map.rs
+++ b/core/gc/src/pointers/weak_map.rs
@@ -17,7 +17,7 @@ pub struct WeakMap<K: Trace + ?Sized + 'static, V: Trace + 'static> {
}
unsafe impl<K: Trace + ?Sized + 'static, V: Trace + 'static> Trace for WeakMap<K, V> {
- custom_trace!(this, {
+ custom_trace!(this, mark, {
mark(&this.inner);
});
}
diff --git a/core/gc/src/pointers/weak_map.rs b/core/gc/src/pointers/weak_map.rs
--- a/core/gc/src/pointers/weak_map.rs
+++ b/core/gc/src/pointers/weak_map.rs
@@ -84,7 +84,7 @@ where
K: Trace + ?Sized + 'static,
V: Trace + 'static,
{
- custom_trace!(this, {
+ custom_trace!(this, mark, {
for eph in this.iter() {
mark(eph);
}
diff --git a/core/gc/src/trace.rs b/core/gc/src/trace.rs
--- a/core/gc/src/trace.rs
+++ b/core/gc/src/trace.rs
@@ -14,6 +14,35 @@ use std::{
sync::atomic,
};
+use crate::GcErasedPointer;
+
+/// A queue used to trace [`crate::Gc<T>`] non-recursively.
+#[doc(hidden)]
+#[allow(missing_debug_implementations)]
+pub struct Tracer {
+ queue: VecDeque<GcErasedPointer>,
+}
+
+impl Tracer {
+ pub(crate) fn new() -> Self {
+ Self {
+ queue: VecDeque::default(),
+ }
+ }
+
+ pub(crate) fn enqueue(&mut self, node: GcErasedPointer) {
+ self.queue.push_back(node);
+ }
+
+ pub(crate) fn next(&mut self) -> Option<GcErasedPointer> {
+ self.queue.pop_front()
+ }
+
+ pub(crate) fn is_empty(&mut self) -> bool {
+ self.queue.is_empty()
+ }
+}
+
/// Substitute for the [`Drop`] trait for garbage collected types.
pub trait Finalize {
/// Cleanup logic for a type.
diff --git a/core/gc/src/trace.rs b/core/gc/src/trace.rs
--- a/core/gc/src/trace.rs
+++ b/core/gc/src/trace.rs
@@ -35,10 +64,14 @@ pub unsafe trait Trace: Finalize {
/// # Safety
///
/// See [`Trace`].
- unsafe fn trace(&self);
+ unsafe fn trace(&self, tracer: &mut Tracer);
/// Trace handles located in GC heap, and mark them as non root.
- fn trace_non_roots(&self);
+ ///
+ /// # Safety
+ ///
+ /// See [`Trace`].
+ unsafe fn trace_non_roots(&self);
/// Runs [`Finalize::finalize`] on this object and all
/// contained subobjects.
diff --git a/core/gc/src/trace.rs b/core/gc/src/trace.rs
--- a/core/gc/src/trace.rs
+++ b/core/gc/src/trace.rs
@@ -52,9 +85,9 @@ pub unsafe trait Trace: Finalize {
macro_rules! empty_trace {
() => {
#[inline]
- unsafe fn trace(&self) {}
+ unsafe fn trace(&self, _tracer: &mut $crate::Tracer) {}
#[inline]
- fn trace_non_roots(&self) {}
+ unsafe fn trace_non_roots(&self) {}
#[inline]
fn run_finalizer(&self) {
$crate::Finalize::finalize(self)
diff --git a/core/gc/src/trace.rs b/core/gc/src/trace.rs
--- a/core/gc/src/trace.rs
+++ b/core/gc/src/trace.rs
@@ -73,29 +106,32 @@ macro_rules! empty_trace {
/// Misusing the `mark` function may result in Undefined Behaviour.
#[macro_export]
macro_rules! custom_trace {
- ($this:ident, $body:expr) => {
+ ($this:ident, $marker:ident, $body:expr) => {
#[inline]
- unsafe fn trace(&self) {
- fn mark<T: $crate::Trace + ?Sized>(it: &T) {
+ unsafe fn trace(&self, tracer: &mut $crate::Tracer) {
+ let mut $marker = |it: &dyn $crate::Trace| {
// SAFETY: The implementor must ensure that `trace` is correctly implemented.
unsafe {
- $crate::Trace::trace(it);
+ $crate::Trace::trace(it, tracer);
}
- }
+ };
let $this = self;
$body
}
#[inline]
- fn trace_non_roots(&self) {
- fn mark<T: $crate::Trace + ?Sized>(it: &T) {
- $crate::Trace::trace_non_roots(it);
+ unsafe fn trace_non_roots(&self) {
+ fn $marker<T: $crate::Trace + ?Sized>(it: &T) {
+ // SAFETY: The implementor must ensure that `trace` is correctly implemented.
+ unsafe {
+ $crate::Trace::trace_non_roots(it);
+ }
}
let $this = self;
$body
}
#[inline]
fn run_finalizer(&self) {
- fn mark<T: $crate::Trace + ?Sized>(it: &T) {
+ fn $marker<T: $crate::Trace + ?Sized>(it: &T) {
$crate::Trace::run_finalizer(it);
}
$crate::Finalize::finalize(self);
diff --git a/core/gc/src/trace.rs b/core/gc/src/trace.rs
--- a/core/gc/src/trace.rs
+++ b/core/gc/src/trace.rs
@@ -180,7 +216,7 @@ impl<T: Trace, const N: usize> Finalize for [T; N] {}
// SAFETY:
// All elements inside the array are correctly marked.
unsafe impl<T: Trace, const N: usize> Trace for [T; N] {
- custom_trace!(this, {
+ custom_trace!(this, mark, {
for v in this {
mark(v);
}
diff --git a/core/gc/src/trace.rs b/core/gc/src/trace.rs
--- a/core/gc/src/trace.rs
+++ b/core/gc/src/trace.rs
@@ -219,12 +255,12 @@ macro_rules! tuple_finalize_trace {
// SAFETY:
// All elements inside the tuple are correctly marked.
unsafe impl<$($args: $crate::Trace),*> Trace for ($($args,)*) {
- custom_trace!(this, {
- #[allow(non_snake_case, unused_unsafe)]
- fn avoid_lints<$($args: $crate::Trace),*>(&($(ref $args,)*): &($($args,)*)) {
+ custom_trace!(this, mark, {
+ #[allow(non_snake_case, unused_unsafe, unused_mut)]
+ let mut avoid_lints = |&($(ref $args,)*): &($($args,)*)| {
// SAFETY: The implementor must ensure a correct implementation.
unsafe { $(mark($args);)* }
- }
+ };
avoid_lints(this)
});
}
diff --git a/core/gc/src/trace.rs b/core/gc/src/trace.rs
--- a/core/gc/src/trace.rs
+++ b/core/gc/src/trace.rs
@@ -259,15 +295,31 @@ type_arg_tuple_based_finalize_trace_impls![
impl<T: Trace + ?Sized> Finalize for Box<T> {}
// SAFETY: The inner value of the `Box` is correctly marked.
unsafe impl<T: Trace + ?Sized> Trace for Box<T> {
- custom_trace!(this, {
- mark(&**this);
- });
+ #[inline]
+ unsafe fn trace(&self, tracer: &mut Tracer) {
+ // SAFETY: The implementor must ensure that `trace` is correctly implemented.
+ unsafe {
+ Trace::trace(&**self, tracer);
+ }
+ }
+ #[inline]
+ unsafe fn trace_non_roots(&self) {
+ // SAFETY: The implementor must ensure that `trace_non_roots` is correctly implemented.
+ unsafe {
+ Trace::trace_non_roots(&**self);
+ }
+ }
+ #[inline]
+ fn run_finalizer(&self) {
+ Finalize::finalize(self);
+ Trace::run_finalizer(&**self);
+ }
}
impl<T: Trace> Finalize for Box<[T]> {}
// SAFETY: All the inner elements of the `Box` array are correctly marked.
unsafe impl<T: Trace> Trace for Box<[T]> {
- custom_trace!(this, {
+ custom_trace!(this, mark, {
for e in &**this {
mark(e);
}
diff --git a/core/gc/src/trace.rs b/core/gc/src/trace.rs
--- a/core/gc/src/trace.rs
+++ b/core/gc/src/trace.rs
@@ -277,7 +329,7 @@ unsafe impl<T: Trace> Trace for Box<[T]> {
impl<T: Trace> Finalize for Vec<T> {}
// SAFETY: All the inner elements of the `Vec` are correctly marked.
unsafe impl<T: Trace> Trace for Vec<T> {
- custom_trace!(this, {
+ custom_trace!(this, mark, {
for e in this {
mark(e);
}
diff --git a/core/gc/src/trace.rs b/core/gc/src/trace.rs
--- a/core/gc/src/trace.rs
+++ b/core/gc/src/trace.rs
@@ -290,7 +342,7 @@ impl<T: Trace> Finalize for thin_vec::ThinVec<T> {}
#[cfg(feature = "thin-vec")]
// SAFETY: All the inner elements of the `Vec` are correctly marked.
unsafe impl<T: Trace> Trace for thin_vec::ThinVec<T> {
- custom_trace!(this, {
+ custom_trace!(this, mark, {
for e in this {
mark(e);
}
diff --git a/core/gc/src/trace.rs b/core/gc/src/trace.rs
--- a/core/gc/src/trace.rs
+++ b/core/gc/src/trace.rs
@@ -300,7 +352,7 @@ unsafe impl<T: Trace> Trace for thin_vec::ThinVec<T> {
impl<T: Trace> Finalize for Option<T> {}
// SAFETY: The inner value of the `Option` is correctly marked.
unsafe impl<T: Trace> Trace for Option<T> {
- custom_trace!(this, {
+ custom_trace!(this, mark, {
if let Some(ref v) = *this {
mark(v);
}
diff --git a/core/gc/src/trace.rs b/core/gc/src/trace.rs
--- a/core/gc/src/trace.rs
+++ b/core/gc/src/trace.rs
@@ -310,7 +362,7 @@ unsafe impl<T: Trace> Trace for Option<T> {
impl<T: Trace, E: Trace> Finalize for Result<T, E> {}
// SAFETY: Both inner values of the `Result` are correctly marked.
unsafe impl<T: Trace, E: Trace> Trace for Result<T, E> {
- custom_trace!(this, {
+ custom_trace!(this, mark, {
match *this {
Ok(ref v) => mark(v),
Err(ref v) => mark(v),
diff --git a/core/gc/src/trace.rs b/core/gc/src/trace.rs
--- a/core/gc/src/trace.rs
+++ b/core/gc/src/trace.rs
@@ -321,7 +373,7 @@ unsafe impl<T: Trace, E: Trace> Trace for Result<T, E> {
impl<T: Ord + Trace> Finalize for BinaryHeap<T> {}
// SAFETY: All the elements of the `BinaryHeap` are correctly marked.
unsafe impl<T: Ord + Trace> Trace for BinaryHeap<T> {
- custom_trace!(this, {
+ custom_trace!(this, mark, {
for v in this {
mark(v);
}
diff --git a/core/gc/src/trace.rs b/core/gc/src/trace.rs
--- a/core/gc/src/trace.rs
+++ b/core/gc/src/trace.rs
@@ -331,7 +383,7 @@ unsafe impl<T: Ord + Trace> Trace for BinaryHeap<T> {
impl<K: Trace, V: Trace> Finalize for BTreeMap<K, V> {}
// SAFETY: All the elements of the `BTreeMap` are correctly marked.
unsafe impl<K: Trace, V: Trace> Trace for BTreeMap<K, V> {
- custom_trace!(this, {
+ custom_trace!(this, mark, {
for (k, v) in this {
mark(k);
mark(v);
diff --git a/core/gc/src/trace.rs b/core/gc/src/trace.rs
--- a/core/gc/src/trace.rs
+++ b/core/gc/src/trace.rs
@@ -342,7 +394,7 @@ unsafe impl<K: Trace, V: Trace> Trace for BTreeMap<K, V> {
impl<T: Trace> Finalize for BTreeSet<T> {}
// SAFETY: All the elements of the `BTreeSet` are correctly marked.
unsafe impl<T: Trace> Trace for BTreeSet<T> {
- custom_trace!(this, {
+ custom_trace!(this, mark, {
for v in this {
mark(v);
}
diff --git a/core/gc/src/trace.rs b/core/gc/src/trace.rs
--- a/core/gc/src/trace.rs
+++ b/core/gc/src/trace.rs
@@ -357,7 +409,7 @@ impl<K: Eq + Hash + Trace, V: Trace, S: BuildHasher> Finalize
unsafe impl<K: Eq + Hash + Trace, V: Trace, S: BuildHasher> Trace
for hashbrown::hash_map::HashMap<K, V, S>
{
- custom_trace!(this, {
+ custom_trace!(this, mark, {
for (k, v) in this {
mark(k);
mark(v);
diff --git a/core/gc/src/trace.rs b/core/gc/src/trace.rs
--- a/core/gc/src/trace.rs
+++ b/core/gc/src/trace.rs
@@ -368,7 +420,7 @@ unsafe impl<K: Eq + Hash + Trace, V: Trace, S: BuildHasher> Trace
impl<K: Eq + Hash + Trace, V: Trace, S: BuildHasher> Finalize for HashMap<K, V, S> {}
// SAFETY: All the elements of the `HashMap` are correctly marked.
unsafe impl<K: Eq + Hash + Trace, V: Trace, S: BuildHasher> Trace for HashMap<K, V, S> {
- custom_trace!(this, {
+ custom_trace!(this, mark, {
for (k, v) in this {
mark(k);
mark(v);
diff --git a/core/gc/src/trace.rs b/core/gc/src/trace.rs
--- a/core/gc/src/trace.rs
+++ b/core/gc/src/trace.rs
@@ -379,7 +431,7 @@ unsafe impl<K: Eq + Hash + Trace, V: Trace, S: BuildHasher> Trace for HashMap<K,
impl<T: Eq + Hash + Trace, S: BuildHasher> Finalize for HashSet<T, S> {}
// SAFETY: All the elements of the `HashSet` are correctly marked.
unsafe impl<T: Eq + Hash + Trace, S: BuildHasher> Trace for HashSet<T, S> {
- custom_trace!(this, {
+ custom_trace!(this, mark, {
for v in this {
mark(v);
}
diff --git a/core/gc/src/trace.rs b/core/gc/src/trace.rs
--- a/core/gc/src/trace.rs
+++ b/core/gc/src/trace.rs
@@ -389,7 +441,7 @@ unsafe impl<T: Eq + Hash + Trace, S: BuildHasher> Trace for HashSet<T, S> {
impl<T: Eq + Hash + Trace> Finalize for LinkedList<T> {}
// SAFETY: All the elements of the `LinkedList` are correctly marked.
unsafe impl<T: Eq + Hash + Trace> Trace for LinkedList<T> {
- custom_trace!(this, {
+ custom_trace!(this, mark, {
#[allow(clippy::explicit_iter_loop)]
for v in this.iter() {
mark(v);
diff --git a/core/gc/src/trace.rs b/core/gc/src/trace.rs
--- a/core/gc/src/trace.rs
+++ b/core/gc/src/trace.rs
@@ -406,7 +458,7 @@ unsafe impl<T> Trace for PhantomData<T> {
impl<T: Trace> Finalize for VecDeque<T> {}
// SAFETY: All the elements of the `VecDeque` are correctly marked.
unsafe impl<T: Trace> Trace for VecDeque<T> {
- custom_trace!(this, {
+ custom_trace!(this, mark, {
for v in this {
mark(v);
}
diff --git a/core/gc/src/trace.rs b/core/gc/src/trace.rs
--- a/core/gc/src/trace.rs
+++ b/core/gc/src/trace.rs
@@ -420,7 +472,7 @@ unsafe impl<T: ToOwned + Trace + ?Sized> Trace for Cow<'static, T>
where
T::Owned: Trace,
{
- custom_trace!(this, {
+ custom_trace!(this, mark, {
if let Cow::Owned(ref v) = this {
mark(v);
}
diff --git a/core/gc/src/trace.rs b/core/gc/src/trace.rs
--- a/core/gc/src/trace.rs
+++ b/core/gc/src/trace.rs
@@ -431,7 +483,7 @@ impl<T: Trace> Finalize for Cell<Option<T>> {}
// SAFETY: Taking and setting is done in a single action, and recursive traces should find a `None`
// value instead of the original `T`, making this safe.
unsafe impl<T: Trace> Trace for Cell<Option<T>> {
- custom_trace!(this, {
+ custom_trace!(this, mark, {
if let Some(v) = this.take() {
mark(&v);
this.set(Some(v));
diff --git a/core/macros/src/lib.rs b/core/macros/src/lib.rs
--- a/core/macros/src/lib.rs
+++ b/core/macros/src/lib.rs
@@ -179,25 +179,29 @@ decl_derive! {
fn derive_trace(mut s: Structure<'_>) -> proc_macro2::TokenStream {
struct EmptyTrace {
copy: bool,
+ drop: bool,
}
impl Parse for EmptyTrace {
fn parse(input: ParseStream<'_>) -> syn::Result<Self> {
let i: Ident = input.parse()?;
- if i != "empty_trace" && i != "unsafe_empty_trace" {
+ if i != "empty_trace" && i != "unsafe_empty_trace" && i != "unsafe_no_drop" {
let msg = format!(
- "expected token \"empty_trace\" or \"unsafe_empty_trace\", found {i:?}"
+ "expected token \"empty_trace\", \"unsafe_empty_trace\" or \"unsafe_no_drop\", found {i:?}"
);
return Err(syn::Error::new_spanned(i.clone(), msg));
}
Ok(Self {
copy: i == "empty_trace",
+ drop: i == "empty_trace" || i != "unsafe_no_drop",
})
}
}
+ let mut drop = true;
+
for attr in &s.ast().attrs {
if attr.path().is_ident("boa_gc") {
let trace = match attr.parse_args::<EmptyTrace>() {
diff --git a/core/macros/src/lib.rs b/core/macros/src/lib.rs
--- a/core/macros/src/lib.rs
+++ b/core/macros/src/lib.rs
@@ -209,13 +213,18 @@ fn derive_trace(mut s: Structure<'_>) -> proc_macro2::TokenStream {
s.add_where_predicate(syn::parse_quote!(Self: Copy));
}
+ if !trace.drop {
+ drop = false;
+ continue;
+ }
+
return s.unsafe_bound_impl(
quote!(::boa_gc::Trace),
quote! {
#[inline(always)]
- unsafe fn trace(&self) {}
+ unsafe fn trace(&self, _tracer: &mut ::boa_gc::Tracer) {}
#[inline(always)]
- fn trace_non_roots(&self) {}
+ unsafe fn trace_non_roots(&self) {}
#[inline]
fn run_finalizer(&self) {
::boa_gc::Finalize::finalize(self)
diff --git a/core/macros/src/lib.rs b/core/macros/src/lib.rs
--- a/core/macros/src/lib.rs
+++ b/core/macros/src/lib.rs
@@ -231,31 +240,34 @@ fn derive_trace(mut s: Structure<'_>) -> proc_macro2::TokenStream {
.iter()
.any(|attr| attr.path().is_ident("unsafe_ignore_trace"))
});
- let trace_body = s.each(|bi| quote!(mark(#bi)));
+ let trace_body = s.each(|bi| quote!(::boa_gc::Trace::trace(#bi, tracer)));
+ let trace_other_body = s.each(|bi| quote!(mark(#bi)));
s.add_bounds(AddBounds::Fields);
let trace_impl = s.unsafe_bound_impl(
quote!(::boa_gc::Trace),
quote! {
#[inline]
- unsafe fn trace(&self) {
+ unsafe fn trace(&self, tracer: &mut ::boa_gc::Tracer) {
#[allow(dead_code)]
- fn mark<T: ::boa_gc::Trace + ?Sized>(it: &T) {
+ let mut mark = |it: &dyn ::boa_gc::Trace| {
+ // SAFETY: The implementor must ensure that `trace` is correctly implemented.
unsafe {
- ::boa_gc::Trace::trace(it);
+ ::boa_gc::Trace::trace(it, tracer);
}
- }
+ };
match *self { #trace_body }
}
#[inline]
- fn trace_non_roots(&self) {
+ unsafe fn trace_non_roots(&self) {
#[allow(dead_code)]
fn mark<T: ::boa_gc::Trace + ?Sized>(it: &T) {
+ // SAFETY: The implementor must ensure that `trace_non_roots` is correctly implemented.
unsafe {
::boa_gc::Trace::trace_non_roots(it);
}
}
- match *self { #trace_body }
+ match *self { #trace_other_body }
}
#[inline]
fn run_finalizer(&self) {
diff --git a/core/macros/src/lib.rs b/core/macros/src/lib.rs
--- a/core/macros/src/lib.rs
+++ b/core/macros/src/lib.rs
@@ -266,7 +278,7 @@ fn derive_trace(mut s: Structure<'_>) -> proc_macro2::TokenStream {
::boa_gc::Trace::run_finalizer(it);
}
}
- match *self { #trace_body }
+ match *self { #trace_other_body }
}
},
);
diff --git a/core/macros/src/lib.rs b/core/macros/src/lib.rs
--- a/core/macros/src/lib.rs
+++ b/core/macros/src/lib.rs
@@ -274,18 +286,22 @@ fn derive_trace(mut s: Structure<'_>) -> proc_macro2::TokenStream {
// We also implement drop to prevent unsafe drop implementations on this
// type and encourage people to use Finalize. This implementation will
// call `Finalize::finalize` if it is safe to do so.
- let drop_impl = s.unbound_impl(
- quote!(::core::ops::Drop),
- quote! {
- #[allow(clippy::inline_always)]
- #[inline(always)]
- fn drop(&mut self) {
- if ::boa_gc::finalizer_safe() {
- ::boa_gc::Finalize::finalize(self);
+ let drop_impl = if drop {
+ s.unbound_impl(
+ quote!(::core::ops::Drop),
+ quote! {
+ #[allow(clippy::inline_always)]
+ #[inline(always)]
+ fn drop(&mut self) {
+ if ::boa_gc::finalizer_safe() {
+ ::boa_gc::Finalize::finalize(self);
+ }
}
- }
- },
- );
+ },
+ )
+ } else {
+ quote!()
+ };
quote! {
#trace_impl
| diff --git a/core/engine/src/vm/tests.rs b/core/engine/src/vm/tests.rs
--- a/core/engine/src/vm/tests.rs
+++ b/core/engine/src/vm/tests.rs
@@ -417,3 +417,17 @@ fn cross_context_funtion_call() {
assert_eq!(result, Ok(JsValue::new(100)));
}
+
+// See: https://github.com/boa-dev/boa/issues/1848
+#[test]
+fn long_object_chain_gc_trace_stack_overflow() {
+ run_test_actions([
+ TestAction::run(indoc! {r#"
+ let old = {};
+ for (let i = 0; i < 100000; i++) {
+ old = { old };
+ }
+ "#}),
+ TestAction::inspect_context(|_| boa_gc::force_collect()),
+ ]);
+}
diff --git a/core/gc/src/test/allocation.rs b/core/gc/src/test/allocation.rs
--- a/core/gc/src/test/allocation.rs
+++ b/core/gc/src/test/allocation.rs
@@ -1,5 +1,7 @@
+use boa_macros::{Finalize, Trace};
+
use super::{run_test, Harness};
-use crate::{force_collect, Gc, GcRefCell};
+use crate::{force_collect, Gc, GcBox, GcRefCell};
#[test]
fn gc_basic_cell_allocation() {
diff --git a/core/gc/src/test/allocation.rs b/core/gc/src/test/allocation.rs
--- a/core/gc/src/test/allocation.rs
+++ b/core/gc/src/test/allocation.rs
@@ -29,3 +31,34 @@ fn gc_basic_pointer_alloc() {
Harness::assert_empty_gc();
});
}
+
+#[test]
+// Takes too long to finish in miri
+#[cfg_attr(miri, ignore)]
+fn gc_recursion() {
+ run_test(|| {
+ #[derive(Debug, Finalize, Trace)]
+ struct S {
+ i: usize,
+ next: Option<Gc<S>>,
+ }
+
+ const SIZE: usize = std::mem::size_of::<GcBox<S>>();
+ const COUNT: usize = 1_000_000;
+
+ let mut root = Gc::new(S { i: 0, next: None });
+ for i in 1..COUNT {
+ root = Gc::new(S {
+ i,
+ next: Some(root),
+ });
+ }
+
+ Harness::assert_bytes_allocated();
+ Harness::assert_exact_bytes_allocated(SIZE * COUNT);
+
+ drop(root);
+ force_collect();
+ Harness::assert_empty_gc();
+ });
+}
diff --git a/core/gc/src/test/weak.rs b/core/gc/src/test/weak.rs
--- a/core/gc/src/test/weak.rs
+++ b/core/gc/src/test/weak.rs
@@ -164,7 +164,7 @@ fn eph_self_referential() {
*root.inner.inner.borrow_mut() = Some(eph.clone());
assert!(eph.value().is_some());
- Harness::assert_exact_bytes_allocated(48);
+ Harness::assert_exact_bytes_allocated(56);
}
*root.inner.inner.borrow_mut() = None;
diff --git a/core/gc/src/test/weak.rs b/core/gc/src/test/weak.rs
--- a/core/gc/src/test/weak.rs
+++ b/core/gc/src/test/weak.rs
@@ -210,7 +210,7 @@ fn eph_self_referential_chain() {
assert!(eph_start.value().is_some());
assert!(eph_chain2.value().is_some());
- Harness::assert_exact_bytes_allocated(132);
+ Harness::assert_exact_bytes_allocated(168);
}
*root.borrow_mut() = None;
| Long reference "chain" can overflow the stack during GC trace
Very long "reference chains" (i.e. very deeply nested objects) can overflow the stack because, afaict, each object traced during a GC cycle is its own function call.
It seems like there are basically no guards against stack overflows (for example, even simple cases like #809 can abort the process), so this is probably lower priority, but this one sounds a little trickier to guard against.
**To Reproduce**
```js
let old = {};
for (let i = 0; i < 100000000; i++) {
old = { old };
}
```
After 10 iterations, old would be `{ old: { old: { old: { old: { old: { old: { old: { old: { old: { old: {} } } } } } } } } } }`.
After 1e8 iterations, this would be nested very deeply, so anything that makes the engine recursively visit each property will end in a stack overflow (which is what happens during a GC trace as it needs to mark everything as visited)
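The fix in the patch above replaces the recursive trace calls with an explicit work queue (`Tracer` backed by a `VecDeque`). As a rough illustration of that idea — simplified hypothetical types, not the actual `boa_gc` API — deep recursion over a linked object graph can be flattened like this:

```rust
use std::collections::VecDeque;

// Hypothetical arena-based graph standing in for GC'd JS objects.
struct Node {
    children: Vec<usize>, // indices of referenced nodes in the arena
}

// Iterative trace: a work queue replaces the recursive calls, so an
// arbitrarily deep `{ old: { old: ... } }` chain uses constant stack.
fn trace(arena: &[Node], root: usize) -> usize {
    let mut queue = VecDeque::new();
    let mut marked = vec![false; arena.len()];
    queue.push_back(root);
    let mut visited = 0;
    while let Some(i) = queue.pop_front() {
        if marked[i] {
            continue;
        }
        marked[i] = true;
        visited += 1;
        for &c in &arena[i].children {
            queue.push_back(c);
        }
    }
    visited
}

fn main() {
    // A 100_000-deep chain: node i references node i + 1.
    let n = 100_000;
    let arena: Vec<Node> = (0..n)
        .map(|i| Node {
            children: if i + 1 < n { vec![i + 1] } else { vec![] },
        })
        .collect();
    // Recursing this deep could overflow; the queue version does not.
    assert_eq!(trace(&arena, 0), n);
    println!("traced {} nodes", n);
}
```

The `marked` check also makes cycles (which real JS object graphs contain) terminate, which plain recursion needs as well.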
**Expected behavior**
Not a stack overflow. The process shouldn't abort.
**Build environment:**
- OS: Ubuntu
- Version: 20.04
- Target triple: x86_64-unknown-linux-gnu
- Rustc version: rustc 1.60.0-nightly (75d9a0ae2 2022-02-16)
| I think I'll try to tackle this, if no one is working on this :) | 2023-12-06T22:53:53 | 0.17 | 279caabf796ae22322d91dd0f79f63c53e99007f | [
"test::weak::eph_self_referential_chain",
"test::weak::eph_self_referential"
] | [
"core/engine/src/bigint.rs - bigint::JsBigInt::mod_floor (line 213)",
"core/engine/src/error.rs - error::JsError (line 32)",
"core/engine/src/context/mod.rs - context::Context::register_global_property (line 212)",
"core/engine/src/native_function.rs - native_function::NativeFunction::from_async_fn (line 163)... | [
"core/engine/src/context/mod.rs - context::Context (line 54)",
"core/engine/src/context/hooks.rs - context::hooks::HostHooks (line 22)",
"core/engine/src/class.rs - class (line 7)",
"core/engine/src/context/mod.rs - context::Context::eval (line 172)",
"core/engine/src/lib.rs - (line 11)",
"core/engine/src... | [] | 45 |
amethyst/bracket-lib | 240 | amethyst__bracket-lib-240 | [
"230"
] | f6d1e87e44d57e3b6f2f558bfadad9000cc10f28 | diff --git a/bracket-geometry/src/rect.rs b/bracket-geometry/src/rect.rs
--- a/bracket-geometry/src/rect.rs
+++ b/bracket-geometry/src/rect.rs
@@ -82,8 +82,8 @@ impl Rect {
where
F: FnMut(Point),
{
- for y in self.y1..=self.y2 {
- for x in self.x1..=self.x2 {
+ for y in self.y1..self.y2 {
+ for x in self.x1..self.x2 {
f(Point::new(x, y));
}
}
| diff --git a/bracket-geometry/src/rect.rs b/bracket-geometry/src/rect.rs
--- a/bracket-geometry/src/rect.rs
+++ b/bracket-geometry/src/rect.rs
@@ -185,8 +185,8 @@ mod tests {
points.insert(p);
});
assert!(points.contains(&Point::new(0, 0)));
- assert!(points.contains(&Point::new(1, 0)));
- assert!(points.contains(&Point::new(0, 1)));
- assert!(points.contains(&Point::new(1, 1)));
+ assert!(!points.contains(&Point::new(1, 0)));
+ assert!(!points.contains(&Point::new(0, 1)));
+ assert!(!points.contains(&Point::new(1, 1)));
}
}
| Rect::for_each is inclusive, vs. other functions exclusive
`Rect::for_each` iterates inclusively to (x2, y2), which disagrees with the convention used by all other Rect functions. It looks like the exclusive convention was decided on in 223148ea11ada7af45204cd35b8d0f6f8463dee1, but the `for_each` function was left as it was.
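For context, the exclusive convention means a 1×1 rect yields exactly one point. A minimal sketch of half-open iteration — a free function for illustration, not the actual bracket-lib API:

```rust
// Half-open iteration: (x1, y1) inclusive, (x2, y2) exclusive — the
// convention the other Rect methods settled on.
fn for_each_exclusive(x1: i32, y1: i32, x2: i32, y2: i32, mut f: impl FnMut(i32, i32)) {
    for y in y1..y2 {
        for x in x1..x2 {
            f(x, y);
        }
    }
}

fn main() {
    // A rect at (0, 0) with size 1x1 contains exactly the point (0, 0);
    // the inclusive version would visit four points instead.
    let mut points = Vec::new();
    for_each_exclusive(0, 0, 1, 1, |x, y| points.push((x, y)));
    assert_eq!(points, vec![(0, 0)]);
    println!("{} point(s)", points.len());
}
```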
| I just came here to report the same thing. Note that `test_rect_set()` and `test_rect_callback()` actually explicitly test that `Rect::with_size(0, 0, 1, 1)` has one point when accessed with `Rect::point_set` but four points when accessed with `Rect::for_each`; I assume the test is wrong. | 2021-11-03T22:13:50 | 0.8 | cf8eec60ae17f2534a14f3ae643c871202aed192 | [
"rect::tests::test_rect_callback"
] | [
"circle_bresenham::tests::circle_test_radius1",
"angles::tests::test_project_angle",
"circle_bresenham::tests::circle_nodiag_test_radius3",
"distance::tests::test_algorithm_from_shared_reference",
"distance::tests::test_manhattan_distance",
"circle_bresenham::tests::circle_test_radius3",
"distance::test... | [] | [] | 46 |
tokio-rs/bytes | 560 | tokio-rs__bytes-560 | [
"559"
] | 38fd42acbaced11ff19f0a4ca2af44a308af5063 | diff --git a/src/bytes_mut.rs b/src/bytes_mut.rs
--- a/src/bytes_mut.rs
+++ b/src/bytes_mut.rs
@@ -670,7 +670,10 @@ impl BytesMut {
// Compare the condition in the `kind == KIND_VEC` case above
// for more details.
- if v_capacity >= new_cap && offset >= len {
+ if v_capacity >= new_cap + offset {
+ self.cap = new_cap;
+ // no copy is necessary
+ } else if v_capacity >= new_cap && offset >= len {
// The capacity is sufficient, and copying is not too much
// overhead: reclaim the buffer!
| diff --git a/tests/test_bytes.rs b/tests/test_bytes.rs
--- a/tests/test_bytes.rs
+++ b/tests/test_bytes.rs
@@ -515,6 +515,34 @@ fn reserve_in_arc_unique_doubles() {
assert_eq!(2000, bytes.capacity());
}
+#[test]
+fn reserve_in_arc_unique_does_not_overallocate_after_split() {
+ let mut bytes = BytesMut::from(LONG);
+ let orig_capacity = bytes.capacity();
+ drop(bytes.split_off(LONG.len() / 2));
+
+ // now bytes is Arc and refcount == 1
+
+ let new_capacity = bytes.capacity();
+ bytes.reserve(orig_capacity - new_capacity);
+ assert_eq!(bytes.capacity(), orig_capacity);
+}
+
+#[test]
+fn reserve_in_arc_unique_does_not_overallocate_after_multiple_splits() {
+ let mut bytes = BytesMut::from(LONG);
+ let orig_capacity = bytes.capacity();
+ for _ in 0..10 {
+ drop(bytes.split_off(LONG.len() / 2));
+
+ // now bytes is Arc and refcount == 1
+
+ let new_capacity = bytes.capacity();
+ bytes.reserve(orig_capacity - new_capacity);
+ }
+ assert_eq!(bytes.capacity(), orig_capacity);
+}
+
#[test]
fn reserve_in_arc_nonunique_does_not_overallocate() {
let mut bytes = BytesMut::with_capacity(1000);
reserve_inner over-allocates, which can lead to an OOM condition
# Summary
`reserve_inner` will double the size of the underlying shared vector instead of just extending the BytesMut buffer in place, even when a call to `BytesMut::reserve` would fit inside the currently allocated shared buffer. If this happens repeatedly, you will eventually attempt to allocate a buffer that is far too large.
# Repro
```rust
fn reserve_in_arc_unique_does_not_overallocate_after_split() {
let mut bytes = BytesMut::from(LONG);
let orig_capacity = bytes.capacity();
drop(bytes.split_off(LONG.len() / 2));
let new_capacity = bytes.capacity();
bytes.reserve(orig_capacity - new_capacity);
assert_eq!(bytes.capacity(), orig_capacity);
}
```
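The fix (first hunk above) adds a fast path: when the shared vector still has room past the view's offset, extend in place instead of reallocating. A toy model of the three-way decision, with parameter names mirroring the variables in `reserve_inner` (this is an illustration, not the real `BytesMut` internals):

```rust
// v_capacity: capacity of the backing vector; offset: start of this
// view within it; len: bytes in the view; new_cap: requested capacity.
fn reserve_action(v_capacity: usize, offset: usize, len: usize, new_cap: usize) -> &'static str {
    if v_capacity >= new_cap + offset {
        "extend in place" // the new branch: no copy, no reallocation
    } else if v_capacity >= new_cap && offset >= len {
        "reclaim buffer (copy to front)"
    } else {
        "allocate a new vector"
    }
}

fn main() {
    // Before the fix the first case fell through to reallocation,
    // doubling the backing storage on every reserve after a split.
    assert_eq!(reserve_action(1024, 0, 512, 1024), "extend in place");
    assert_eq!(reserve_action(1024, 600, 100, 1000), "reclaim buffer (copy to front)");
    assert_eq!(reserve_action(1024, 512, 256, 2000), "allocate a new vector");
    println!("ok");
}
```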
| 2022-07-30T07:42:13 | 1.2 | 38fd42acbaced11ff19f0a4ca2af44a308af5063 | [
"reserve_in_arc_unique_does_not_overallocate_after_split",
"reserve_in_arc_unique_does_not_overallocate_after_multiple_splits"
] | [
"test_deref_buf_forwards",
"test_fresh_cursor_vec",
"copy_to_bytes_overflow - should panic",
"test_get_u16_buffer_underflow - should panic",
"test_bufs_vec",
"test_get_u16",
"test_get_u8",
"test_vec_deque",
"copy_to_bytes_less",
"copy_from_slice_panics_if_different_length_1 - should panic",
"tes... | [] | [] | 47 | |
tokio-rs/bytes | 732 | tokio-rs__bytes-732 | [
"730"
] | 291df5acc94b82a48765e67eeb1c1a2074539e68 | diff --git a/src/buf/buf_impl.rs b/src/buf/buf_impl.rs
--- a/src/buf/buf_impl.rs
+++ b/src/buf/buf_impl.rs
@@ -66,6 +66,12 @@ macro_rules! buf_get_impl {
}};
}
+// https://en.wikipedia.org/wiki/Sign_extension
+fn sign_extend(val: u64, nbytes: usize) -> i64 {
+ let shift = (8 - nbytes) * 8;
+ (val << shift) as i64 >> shift
+}
+
/// Read bytes from a buffer.
///
/// A buffer stores bytes in memory such that read operations are infallible.
diff --git a/src/buf/buf_impl.rs b/src/buf/buf_impl.rs
--- a/src/buf/buf_impl.rs
+++ b/src/buf/buf_impl.rs
@@ -923,7 +929,7 @@ pub trait Buf {
/// This function panics if there is not enough remaining data in `self`, or
/// if `nbytes` is greater than 8.
fn get_int(&mut self, nbytes: usize) -> i64 {
- buf_get_impl!(be => self, i64, nbytes);
+ sign_extend(self.get_uint(nbytes), nbytes)
}
/// Gets a signed n-byte integer from `self` in little-endian byte order.
diff --git a/src/buf/buf_impl.rs b/src/buf/buf_impl.rs
--- a/src/buf/buf_impl.rs
+++ b/src/buf/buf_impl.rs
@@ -944,7 +950,7 @@ pub trait Buf {
/// This function panics if there is not enough remaining data in `self`, or
/// if `nbytes` is greater than 8.
fn get_int_le(&mut self, nbytes: usize) -> i64 {
- buf_get_impl!(le => self, i64, nbytes);
+ sign_extend(self.get_uint_le(nbytes), nbytes)
}
/// Gets a signed n-byte integer from `self` in native-endian byte order.
| diff --git a/tests/test_buf.rs b/tests/test_buf.rs
--- a/tests/test_buf.rs
+++ b/tests/test_buf.rs
@@ -36,6 +36,19 @@ fn test_get_u16() {
assert_eq!(0x5421, buf.get_u16_le());
}
+#[test]
+fn test_get_int() {
+ let mut buf = &b"\xd6zomg"[..];
+ assert_eq!(-42, buf.get_int(1));
+ let mut buf = &b"\xd6zomg"[..];
+ assert_eq!(-42, buf.get_int_le(1));
+
+ let mut buf = &b"\xfe\x1d\xc0zomg"[..];
+ assert_eq!(0xffffffffffc01dfeu64 as i64, buf.get_int_le(3));
+ let mut buf = &b"\xfe\x1d\xc0zomg"[..];
+ assert_eq!(0xfffffffffffe1dc0u64 as i64, buf.get_int(3));
+}
+
#[test]
#[should_panic]
fn test_get_u16_buffer_underflow() {
| `Buf::get_int()` implementation for `Bytes` returns positive number instead of negative when `nbytes` < 8.
**Steps to reproduce:**
Run the following program, using `bytes` version 1.7.1
```
use bytes::{BytesMut, Buf, BufMut};
fn main() {
const SOME_NEG_NUMBER: i64 = -42;
let mut buffer = BytesMut::with_capacity(8);
buffer.put_int(SOME_NEG_NUMBER, 1);
println!("buffer = {:?}", &buffer);
assert_eq!(buffer.freeze().get_int(1), SOME_NEG_NUMBER);
}
```
**Expected outcome:**
Assertion passes and program terminates successfully, due to the symmetry of the `put_int` and `get_int` calls.
**Actual outcome:**
Assertion fails:
```
buffer = b"\xd6"
thread 'main' panicked at src/main.rs:9:5:
assertion `left == right` failed
left: 214
right: -42
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
```
**Additional information:**
1. Only happens with negative numbers; positive numbers are always OK.
2. Only happens when `nbytes` parameter is < 8.
3. Other combos like `put_i8()`/`get_i8()` or `put_i16()`/`get_i16()` work as intended.
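For reference, the fix in the patch above adds a `sign_extend` helper that widens the n-byte value by shifting it to the top of the i64 and arithmetic-shifting (sign-propagating) back down. A standalone copy of that idea:

```rust
// Sign-extend the low `nbytes` bytes of `val` to a full i64: shift the
// value to the top of the word, then arithmetic-shift back down.
// Mirrors the helper added in src/buf/buf_impl.rs in the patch above.
fn sign_extend(val: u64, nbytes: usize) -> i64 {
    let shift = (8 - nbytes) * 8;
    (val << shift) as i64 >> shift
}

fn main() {
    assert_eq!(sign_extend(0xd6, 1), -42);   // 0xd6 read as i8 is -42
    assert_eq!(sign_extend(0x7f, 1), 127);   // positive values pass through
    assert_eq!(sign_extend(0x00d6, 2), 214); // high bit clear: stays positive
    println!("ok");
}
```

`get_int(nbytes)` then becomes `sign_extend(self.get_uint(nbytes), nbytes)`, restoring the symmetry with `put_int`.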
| It looks like this has been caused by 234d814122d6445bdfb15f635290bfc4dd36c2eb / #280, as demonstrated by the revert:
<details>
<summary>See the revert</summary>
```patch
From 4a9b9a4ea0538dff7d9ae57070c98f6ad4afd708 Mon Sep 17 00:00:00 2001
From: Paolo Barbolini <paolo.barbolini@m4ss.net>
Date: Mon, 19 Aug 2024 06:08:47 +0200
Subject: [PATCH] Revert "Remove byteorder dependency (#280)"
This reverts commit 234d814122d6445bdfb15f635290bfc4dd36c2eb.
---
Cargo.toml | 1 +
src/buf/buf_impl.rs | 126 ++++++++++++++++++--------------------------
src/buf/buf_mut.rs | 119 +++++++++++++++++++++++++----------------
3 files changed, 127 insertions(+), 119 deletions(-)
diff --git a/Cargo.toml b/Cargo.toml
index e072539..a49d681 100644
--- a/Cargo.toml
+++ b/Cargo.toml
@@ -24,6 +24,7 @@ std = []
[dependencies]
serde = { version = "1.0.60", optional = true, default-features = false, features = ["alloc"] }
+byteorder = "1.3"
[dev-dependencies]
serde_test = "1.0"
diff --git a/src/buf/buf_impl.rs b/src/buf/buf_impl.rs
index c44d4fb..5817f41 100644
--- a/src/buf/buf_impl.rs
+++ b/src/buf/buf_impl.rs
@@ -1,68 +1,46 @@
#[cfg(feature = "std")]
use crate::buf::{reader, Reader};
use crate::buf::{take, Chain, Take};
+use crate::panic_advance;
#[cfg(feature = "std")]
use crate::{min_u64_usize, saturating_sub_usize_u64};
-use crate::{panic_advance, panic_does_not_fit};
#[cfg(feature = "std")]
use std::io::IoSlice;
use alloc::boxed::Box;
-macro_rules! buf_get_impl {
- ($this:ident, $typ:tt::$conv:tt) => {{
- const SIZE: usize = core::mem::size_of::<$typ>();
-
- if $this.remaining() < SIZE {
- panic_advance(SIZE, $this.remaining());
- }
+use byteorder::{BigEndian, ByteOrder, LittleEndian, NativeEndian};
+macro_rules! buf_get_impl {
+ ($this:ident, $size:expr, $conv:path) => {{
// try to convert directly from the bytes
- // this Option<ret> trick is to avoid keeping a borrow on self
- // when advance() is called (mut borrow) and to call bytes() only once
- let ret = $this
- .chunk()
- .get(..SIZE)
- .map(|src| unsafe { $typ::$conv(*(src as *const _ as *const [_; SIZE])) });
-
+ let ret = {
+ // this Option<ret> trick is to avoid keeping a borrow on self
+ // when advance() is called (mut borrow) and to call bytes() only once
+ if let Some(src) = $this.chunk().get(..($size)) {
+ Some($conv(src))
+ } else {
+ None
+ }
+ };
if let Some(ret) = ret {
// if the direct conversion was possible, advance and return
- $this.advance(SIZE);
+ $this.advance($size);
return ret;
} else {
// if not we copy the bytes in a temp buffer then convert
- let mut buf = [0; SIZE];
+ let mut buf = [0; ($size)];
$this.copy_to_slice(&mut buf); // (do the advance)
- return $typ::$conv(buf);
+ return $conv(&buf);
}
}};
- (le => $this:ident, $typ:tt, $len_to_read:expr) => {{
- const SIZE: usize = core::mem::size_of::<$typ>();
-
+ ($this:ident, $buf_size:expr, $conv:path, $len_to_read:expr) => {{
// The same trick as above does not improve the best case speed.
// It seems to be linked to the way the method is optimised by the compiler
- let mut buf = [0; SIZE];
-
- let subslice = match buf.get_mut(..$len_to_read) {
- Some(subslice) => subslice,
- None => panic_does_not_fit(SIZE, $len_to_read),
- };
-
- $this.copy_to_slice(subslice);
- return $typ::from_le_bytes(buf);
- }};
- (be => $this:ident, $typ:tt, $len_to_read:expr) => {{
- const SIZE: usize = core::mem::size_of::<$typ>();
-
- let slice_at = match SIZE.checked_sub($len_to_read) {
- Some(slice_at) => slice_at,
- None => panic_does_not_fit(SIZE, $len_to_read),
- };
-
- let mut buf = [0; SIZE];
- $this.copy_to_slice(&mut buf[slice_at..]);
- return $typ::from_be_bytes(buf);
+ let mut buf = [0; ($buf_size)];
+ $this.copy_to_slice(&mut buf[..($len_to_read)]);
+ return $conv(&buf[..($len_to_read)], $len_to_read);
}};
}
@@ -350,7 +328,7 @@ pub trait Buf {
///
/// This function panics if there is not enough remaining data in `self`.
fn get_u16(&mut self) -> u16 {
- buf_get_impl!(self, u16::from_be_bytes);
+ buf_get_impl!(self, 2, BigEndian::read_u16);
}
/// Gets an unsigned 16 bit integer from `self` in little-endian byte order.
@@ -370,7 +348,7 @@ pub trait Buf {
///
/// This function panics if there is not enough remaining data in `self`.
fn get_u16_le(&mut self) -> u16 {
- buf_get_impl!(self, u16::from_le_bytes);
+ buf_get_impl!(self, 2, LittleEndian::read_u16);
}
/// Gets an unsigned 16 bit integer from `self` in native-endian byte order.
@@ -393,7 +371,7 @@ pub trait Buf {
///
/// This function panics if there is not enough remaining data in `self`.
fn get_u16_ne(&mut self) -> u16 {
- buf_get_impl!(self, u16::from_ne_bytes);
+ buf_get_impl!(self, 2, NativeEndian::read_u16);
}
/// Gets a signed 16 bit integer from `self` in big-endian byte order.
@@ -413,7 +391,7 @@ pub trait Buf {
///
/// This function panics if there is not enough remaining data in `self`.
fn get_i16(&mut self) -> i16 {
- buf_get_impl!(self, i16::from_be_bytes);
+ buf_get_impl!(self, 2, BigEndian::read_i16);
}
/// Gets a signed 16 bit integer from `self` in little-endian byte order.
@@ -433,7 +411,7 @@ pub trait Buf {
///
/// This function panics if there is not enough remaining data in `self`.
fn get_i16_le(&mut self) -> i16 {
- buf_get_impl!(self, i16::from_le_bytes);
+ buf_get_impl!(self, 2, LittleEndian::read_i16);
}
/// Gets a signed 16 bit integer from `self` in native-endian byte order.
@@ -456,7 +434,7 @@ pub trait Buf {
///
/// This function panics if there is not enough remaining data in `self`.
fn get_i16_ne(&mut self) -> i16 {
- buf_get_impl!(self, i16::from_ne_bytes);
+ buf_get_impl!(self, 2, NativeEndian::read_i16);
}
/// Gets an unsigned 32 bit integer from `self` in the big-endian byte order.
@@ -476,7 +454,7 @@ pub trait Buf {
///
/// This function panics if there is not enough remaining data in `self`.
fn get_u32(&mut self) -> u32 {
- buf_get_impl!(self, u32::from_be_bytes);
+ buf_get_impl!(self, 4, BigEndian::read_u32);
}
/// Gets an unsigned 32 bit integer from `self` in the little-endian byte order.
@@ -496,7 +474,7 @@ pub trait Buf {
///
/// This function panics if there is not enough remaining data in `self`.
fn get_u32_le(&mut self) -> u32 {
- buf_get_impl!(self, u32::from_le_bytes);
+ buf_get_impl!(self, 4, LittleEndian::read_u32);
}
/// Gets an unsigned 32 bit integer from `self` in native-endian byte order.
@@ -519,7 +497,7 @@ pub trait Buf {
///
/// This function panics if there is not enough remaining data in `self`.
fn get_u32_ne(&mut self) -> u32 {
- buf_get_impl!(self, u32::from_ne_bytes);
+ buf_get_impl!(self, 4, NativeEndian::read_u32);
}
/// Gets a signed 32 bit integer from `self` in big-endian byte order.
@@ -539,7 +517,7 @@ pub trait Buf {
///
/// This function panics if there is not enough remaining data in `self`.
fn get_i32(&mut self) -> i32 {
- buf_get_impl!(self, i32::from_be_bytes);
+ buf_get_impl!(self, 4, BigEndian::read_i32);
}
/// Gets a signed 32 bit integer from `self` in little-endian byte order.
@@ -559,7 +537,7 @@ pub trait Buf {
///
/// This function panics if there is not enough remaining data in `self`.
fn get_i32_le(&mut self) -> i32 {
- buf_get_impl!(self, i32::from_le_bytes);
+ buf_get_impl!(self, 4, LittleEndian::read_i32);
}
/// Gets a signed 32 bit integer from `self` in native-endian byte order.
@@ -582,7 +560,7 @@ pub trait Buf {
///
/// This function panics if there is not enough remaining data in `self`.
fn get_i32_ne(&mut self) -> i32 {
- buf_get_impl!(self, i32::from_ne_bytes);
+ buf_get_impl!(self, 4, NativeEndian::read_i32);
}
/// Gets an unsigned 64 bit integer from `self` in big-endian byte order.
@@ -602,7 +580,7 @@ pub trait Buf {
///
/// This function panics if there is not enough remaining data in `self`.
fn get_u64(&mut self) -> u64 {
- buf_get_impl!(self, u64::from_be_bytes);
+ buf_get_impl!(self, 8, BigEndian::read_u64);
}
/// Gets an unsigned 64 bit integer from `self` in little-endian byte order.
@@ -622,7 +600,7 @@ pub trait Buf {
///
/// This function panics if there is not enough remaining data in `self`.
fn get_u64_le(&mut self) -> u64 {
- buf_get_impl!(self, u64::from_le_bytes);
+ buf_get_impl!(self, 8, LittleEndian::read_u64);
}
/// Gets an unsigned 64 bit integer from `self` in native-endian byte order.
@@ -645,7 +623,7 @@ pub trait Buf {
///
/// This function panics if there is not enough remaining data in `self`.
fn get_u64_ne(&mut self) -> u64 {
- buf_get_impl!(self, u64::from_ne_bytes);
+ buf_get_impl!(self, 8, NativeEndian::read_u64);
}
/// Gets a signed 64 bit integer from `self` in big-endian byte order.
@@ -665,7 +643,7 @@ pub trait Buf {
///
/// This function panics if there is not enough remaining data in `self`.
fn get_i64(&mut self) -> i64 {
- buf_get_impl!(self, i64::from_be_bytes);
+ buf_get_impl!(self, 8, BigEndian::read_i64);
}
/// Gets a signed 64 bit integer from `self` in little-endian byte order.
@@ -685,7 +663,7 @@ pub trait Buf {
///
/// This function panics if there is not enough remaining data in `self`.
fn get_i64_le(&mut self) -> i64 {
- buf_get_impl!(self, i64::from_le_bytes);
+ buf_get_impl!(self, 8, LittleEndian::read_i64);
}
/// Gets a signed 64 bit integer from `self` in native-endian byte order.
@@ -708,7 +686,7 @@ pub trait Buf {
///
/// This function panics if there is not enough remaining data in `self`.
fn get_i64_ne(&mut self) -> i64 {
- buf_get_impl!(self, i64::from_ne_bytes);
+ buf_get_impl!(self, 8, NativeEndian::read_i64);
}
/// Gets an unsigned 128 bit integer from `self` in big-endian byte order.
@@ -728,7 +706,7 @@ pub trait Buf {
///
/// This function panics if there is not enough remaining data in `self`.
fn get_u128(&mut self) -> u128 {
- buf_get_impl!(self, u128::from_be_bytes);
+ buf_get_impl!(self, 16, BigEndian::read_u128);
}
/// Gets an unsigned 128 bit integer from `self` in little-endian byte order.
@@ -748,7 +726,7 @@ pub trait Buf {
///
/// This function panics if there is not enough remaining data in `self`.
fn get_u128_le(&mut self) -> u128 {
- buf_get_impl!(self, u128::from_le_bytes);
+ buf_get_impl!(self, 16, LittleEndian::read_u128);
}
/// Gets an unsigned 128 bit integer from `self` in native-endian byte order.
@@ -771,7 +749,7 @@ pub trait Buf {
///
/// This function panics if there is not enough remaining data in `self`.
fn get_u128_ne(&mut self) -> u128 {
- buf_get_impl!(self, u128::from_ne_bytes);
+ buf_get_impl!(self, 16, NativeEndian::read_u128);
}
/// Gets a signed 128 bit integer from `self` in big-endian byte order.
@@ -791,7 +769,7 @@ pub trait Buf {
///
/// This function panics if there is not enough remaining data in `self`.
fn get_i128(&mut self) -> i128 {
- buf_get_impl!(self, i128::from_be_bytes);
+ buf_get_impl!(self, 16, BigEndian::read_i128);
}
/// Gets a signed 128 bit integer from `self` in little-endian byte order.
@@ -811,7 +789,7 @@ pub trait Buf {
///
/// This function panics if there is not enough remaining data in `self`.
fn get_i128_le(&mut self) -> i128 {
- buf_get_impl!(self, i128::from_le_bytes);
+ buf_get_impl!(self, 16, LittleEndian::read_i128);
}
/// Gets a signed 128 bit integer from `self` in native-endian byte order.
@@ -834,7 +812,7 @@ pub trait Buf {
///
/// This function panics if there is not enough remaining data in `self`.
fn get_i128_ne(&mut self) -> i128 {
- buf_get_impl!(self, i128::from_ne_bytes);
+ buf_get_impl!(self, 16, NativeEndian::read_i128);
}
/// Gets an unsigned n-byte integer from `self` in big-endian byte order.
@@ -854,7 +832,7 @@ pub trait Buf {
///
/// This function panics if there is not enough remaining data in `self`.
fn get_uint(&mut self, nbytes: usize) -> u64 {
- buf_get_impl!(be => self, u64, nbytes);
+ buf_get_impl!(self, 8, BigEndian::read_uint, nbytes);
}
/// Gets an unsigned n-byte integer from `self` in little-endian byte order.
@@ -874,7 +852,7 @@ pub trait Buf {
///
/// This function panics if there is not enough remaining data in `self`.
fn get_uint_le(&mut self, nbytes: usize) -> u64 {
- buf_get_impl!(le => self, u64, nbytes);
+ buf_get_impl!(self, 8, LittleEndian::read_uint, nbytes);
}
/// Gets an unsigned n-byte integer from `self` in native-endian byte order.
@@ -923,7 +901,7 @@ pub trait Buf {
/// This function panics if there is not enough remaining data in `self`, or
/// if `nbytes` is greater than 8.
fn get_int(&mut self, nbytes: usize) -> i64 {
- buf_get_impl!(be => self, i64, nbytes);
+ buf_get_impl!(self, 8, BigEndian::read_int, nbytes);
}
/// Gets a signed n-byte integer from `self` in little-endian byte order.
@@ -944,7 +922,7 @@ pub trait Buf {
/// This function panics if there is not enough remaining data in `self`, or
/// if `nbytes` is greater than 8.
fn get_int_le(&mut self, nbytes: usize) -> i64 {
- buf_get_impl!(le => self, i64, nbytes);
+ buf_get_impl!(self, 8, LittleEndian::read_int, nbytes);
}
/// Gets a signed n-byte integer from `self` in native-endian byte order.
@@ -993,7 +971,7 @@ pub trait Buf {
///
/// This function panics if there is not enough remaining data in `self`.
fn get_f32(&mut self) -> f32 {
- f32::from_bits(self.get_u32())
+ buf_get_impl!(self, 4, BigEndian::read_f32);
}
/// Gets an IEEE754 single-precision (4 bytes) floating point number from
@@ -1038,7 +1016,7 @@ pub trait Buf {
///
/// This function panics if there is not enough remaining data in `self`.
fn get_f32_ne(&mut self) -> f32 {
- f32::from_bits(self.get_u32_ne())
+ buf_get_impl!(self, 4, LittleEndian::read_f32);
}
/// Gets an IEEE754 double-precision (8 bytes) floating point number from
@@ -1059,7 +1037,7 @@ pub trait Buf {
///
/// This function panics if there is not enough remaining data in `self`.
fn get_f64(&mut self) -> f64 {
- f64::from_bits(self.get_u64())
+ buf_get_impl!(self, 8, BigEndian::read_f64);
}
/// Gets an IEEE754 double-precision (8 bytes) floating point number from
@@ -1080,7 +1058,7 @@ pub trait Buf {
///
/// This function panics if there is not enough remaining data in `self`.
fn get_f64_le(&mut self) -> f64 {
- f64::from_bits(self.get_u64_le())
+ buf_get_impl!(self, 8, LittleEndian::read_f64);
}
/// Gets an IEEE754 double-precision (8 bytes) floating point number from
diff --git a/src/buf/buf_mut.rs b/src/buf/buf_mut.rs
index e13278d..73baa1e 100644
--- a/src/buf/buf_mut.rs
+++ b/src/buf/buf_mut.rs
@@ -3,9 +3,10 @@ use crate::buf::{limit, Chain, Limit, UninitSlice};
use crate::buf::{writer, Writer};
use crate::{panic_advance, panic_does_not_fit};
-use core::{mem, ptr, usize};
+use core::{ptr, usize};
use alloc::{boxed::Box, vec::Vec};
+use byteorder::{BigEndian, ByteOrder, LittleEndian};
/// A trait for values that provide sequential write access to bytes.
///
@@ -367,7 +368,9 @@ pub unsafe trait BufMut {
/// `self`.
#[inline]
fn put_u16(&mut self, n: u16) {
- self.put_slice(&n.to_be_bytes())
+ let mut buf = [0; 2];
+ BigEndian::write_u16(&mut buf, n);
+ self.put_slice(&buf)
}
/// Writes an unsigned 16 bit integer to `self` in little-endian byte order.
@@ -390,7 +393,9 @@ pub unsafe trait BufMut {
/// `self`.
#[inline]
fn put_u16_le(&mut self, n: u16) {
- self.put_slice(&n.to_le_bytes())
+ let mut buf = [0; 2];
+ LittleEndian::write_u16(&mut buf, n);
+ self.put_slice(&buf)
}
/// Writes an unsigned 16 bit integer to `self` in native-endian byte order.
@@ -440,7 +445,9 @@ pub unsafe trait BufMut {
/// `self`.
#[inline]
fn put_i16(&mut self, n: i16) {
- self.put_slice(&n.to_be_bytes())
+ let mut buf = [0; 2];
+ BigEndian::write_i16(&mut buf, n);
+ self.put_slice(&buf)
}
/// Writes a signed 16 bit integer to `self` in little-endian byte order.
@@ -463,7 +470,9 @@ pub unsafe trait BufMut {
/// `self`.
#[inline]
fn put_i16_le(&mut self, n: i16) {
- self.put_slice(&n.to_le_bytes())
+ let mut buf = [0; 2];
+ LittleEndian::write_i16(&mut buf, n);
+ self.put_slice(&buf)
}
/// Writes a signed 16 bit integer to `self` in native-endian byte order.
@@ -513,7 +522,9 @@ pub unsafe trait BufMut {
/// `self`.
#[inline]
fn put_u32(&mut self, n: u32) {
- self.put_slice(&n.to_be_bytes())
+ let mut buf = [0; 4];
+ BigEndian::write_u32(&mut buf, n);
+ self.put_slice(&buf)
}
/// Writes an unsigned 32 bit integer to `self` in little-endian byte order.
@@ -536,7 +547,9 @@ pub unsafe trait BufMut {
/// `self`.
#[inline]
fn put_u32_le(&mut self, n: u32) {
- self.put_slice(&n.to_le_bytes())
+ let mut buf = [0; 4];
+ LittleEndian::write_u32(&mut buf, n);
+ self.put_slice(&buf)
}
/// Writes an unsigned 32 bit integer to `self` in native-endian byte order.
@@ -586,7 +599,9 @@ pub unsafe trait BufMut {
/// `self`.
#[inline]
fn put_i32(&mut self, n: i32) {
- self.put_slice(&n.to_be_bytes())
+ let mut buf = [0; 4];
+ BigEndian::write_i32(&mut buf, n);
+ self.put_slice(&buf)
}
/// Writes a signed 32 bit integer to `self` in little-endian byte order.
@@ -609,7 +624,9 @@ pub unsafe trait BufMut {
/// `self`.
#[inline]
fn put_i32_le(&mut self, n: i32) {
- self.put_slice(&n.to_le_bytes())
+ let mut buf = [0; 4];
+ LittleEndian::write_i32(&mut buf, n);
+ self.put_slice(&buf)
}
/// Writes a signed 32 bit integer to `self` in native-endian byte order.
@@ -659,7 +676,9 @@ pub unsafe trait BufMut {
/// `self`.
#[inline]
fn put_u64(&mut self, n: u64) {
- self.put_slice(&n.to_be_bytes())
+ let mut buf = [0; 8];
+ BigEndian::write_u64(&mut buf, n);
+ self.put_slice(&buf)
}
/// Writes an unsigned 64 bit integer to `self` in little-endian byte order.
@@ -682,7 +701,9 @@ pub unsafe trait BufMut {
/// `self`.
#[inline]
fn put_u64_le(&mut self, n: u64) {
- self.put_slice(&n.to_le_bytes())
+ let mut buf = [0; 8];
+ LittleEndian::write_u64(&mut buf, n);
+ self.put_slice(&buf)
}
/// Writes an unsigned 64 bit integer to `self` in native-endian byte order.
@@ -732,7 +753,9 @@ pub unsafe trait BufMut {
/// `self`.
#[inline]
fn put_i64(&mut self, n: i64) {
- self.put_slice(&n.to_be_bytes())
+ let mut buf = [0; 8];
+ BigEndian::write_i64(&mut buf, n);
+ self.put_slice(&buf)
}
/// Writes a signed 64 bit integer to `self` in little-endian byte order.
@@ -755,7 +778,9 @@ pub unsafe trait BufMut {
/// `self`.
#[inline]
fn put_i64_le(&mut self, n: i64) {
- self.put_slice(&n.to_le_bytes())
+ let mut buf = [0; 8];
+ LittleEndian::write_i64(&mut buf, n);
+ self.put_slice(&buf)
}
/// Writes a signed 64 bit integer to `self` in native-endian byte order.
@@ -805,7 +830,9 @@ pub unsafe trait BufMut {
/// `self`.
#[inline]
fn put_u128(&mut self, n: u128) {
- self.put_slice(&n.to_be_bytes())
+ let mut buf = [0; 16];
+ BigEndian::write_u128(&mut buf, n);
+ self.put_slice(&buf)
}
/// Writes an unsigned 128 bit integer to `self` in little-endian byte order.
@@ -828,7 +855,9 @@ pub unsafe trait BufMut {
/// `self`.
#[inline]
fn put_u128_le(&mut self, n: u128) {
- self.put_slice(&n.to_le_bytes())
+ let mut buf = [0; 16];
+ LittleEndian::write_u128(&mut buf, n);
+ self.put_slice(&buf)
}
/// Writes an unsigned 128 bit integer to `self` in native-endian byte order.
@@ -878,7 +907,9 @@ pub unsafe trait BufMut {
/// `self`.
#[inline]
fn put_i128(&mut self, n: i128) {
- self.put_slice(&n.to_be_bytes())
+ let mut buf = [0; 16];
+ BigEndian::write_i128(&mut buf, n);
+ self.put_slice(&buf)
}
/// Writes a signed 128 bit integer to `self` in little-endian byte order.
@@ -901,7 +932,9 @@ pub unsafe trait BufMut {
/// `self`.
#[inline]
fn put_i128_le(&mut self, n: i128) {
- self.put_slice(&n.to_le_bytes())
+ let mut buf = [0; 16];
+ LittleEndian::write_i128(&mut buf, n);
+ self.put_slice(&buf)
}
/// Writes a signed 128 bit integer to `self` in native-endian byte order.
@@ -951,12 +984,9 @@ pub unsafe trait BufMut {
/// `self` or if `nbytes` is greater than 8.
#[inline]
fn put_uint(&mut self, n: u64, nbytes: usize) {
- let start = match mem::size_of_val(&n).checked_sub(nbytes) {
- Some(start) => start,
- None => panic_does_not_fit(nbytes, mem::size_of_val(&n)),
- };
-
- self.put_slice(&n.to_be_bytes()[start..]);
+ let mut buf = [0; 8];
+ BigEndian::write_uint(&mut buf, n, nbytes);
+ self.put_slice(&buf[0..nbytes])
}
/// Writes an unsigned n-byte integer to `self` in the little-endian byte order.
@@ -979,13 +1009,9 @@ pub unsafe trait BufMut {
/// `self` or if `nbytes` is greater than 8.
#[inline]
fn put_uint_le(&mut self, n: u64, nbytes: usize) {
- let slice = n.to_le_bytes();
- let slice = match slice.get(..nbytes) {
- Some(slice) => slice,
- None => panic_does_not_fit(nbytes, slice.len()),
- };
-
- self.put_slice(slice);
+ let mut buf = [0; 8];
+ LittleEndian::write_uint(&mut buf, n, nbytes);
+ self.put_slice(&buf[0..nbytes])
}
/// Writes an unsigned n-byte integer to `self` in the native-endian byte order.
@@ -1039,12 +1065,9 @@ pub unsafe trait BufMut {
/// `self` or if `nbytes` is greater than 8.
#[inline]
fn put_int(&mut self, n: i64, nbytes: usize) {
- let start = match mem::size_of_val(&n).checked_sub(nbytes) {
- Some(start) => start,
- None => panic_does_not_fit(nbytes, mem::size_of_val(&n)),
- };
-
- self.put_slice(&n.to_be_bytes()[start..]);
+ let mut buf = [0; 8];
+ BigEndian::write_int(&mut buf, n, nbytes);
+ self.put_slice(&buf[0..nbytes])
}
/// Writes low `nbytes` of a signed integer to `self` in little-endian byte order.
@@ -1100,11 +1123,9 @@ pub unsafe trait BufMut {
/// `self` or if `nbytes` is greater than 8.
#[inline]
fn put_int_ne(&mut self, n: i64, nbytes: usize) {
- if cfg!(target_endian = "big") {
- self.put_int(n, nbytes)
- } else {
- self.put_int_le(n, nbytes)
- }
+ let mut buf = [0; 8];
+ LittleEndian::write_int(&mut buf, n, nbytes);
+ self.put_slice(&buf[0..nbytes])
}
/// Writes an IEEE754 single-precision (4 bytes) floating point number to
@@ -1128,7 +1149,9 @@ pub unsafe trait BufMut {
/// `self`.
#[inline]
fn put_f32(&mut self, n: f32) {
- self.put_u32(n.to_bits());
+ let mut buf = [0; 4];
+ BigEndian::write_f32(&mut buf, n);
+ self.put_slice(&buf)
}
/// Writes an IEEE754 single-precision (4 bytes) floating point number to
@@ -1152,7 +1175,9 @@ pub unsafe trait BufMut {
/// `self`.
#[inline]
fn put_f32_le(&mut self, n: f32) {
- self.put_u32_le(n.to_bits());
+ let mut buf = [0; 4];
+ LittleEndian::write_f32(&mut buf, n);
+ self.put_slice(&buf)
}
/// Writes an IEEE754 single-precision (4 bytes) floating point number to
@@ -1204,7 +1229,9 @@ pub unsafe trait BufMut {
/// `self`.
#[inline]
fn put_f64(&mut self, n: f64) {
- self.put_u64(n.to_bits());
+ let mut buf = [0; 8];
+ BigEndian::write_f64(&mut buf, n);
+ self.put_slice(&buf)
}
/// Writes an IEEE754 double-precision (8 bytes) floating point number to
@@ -1228,7 +1255,9 @@ pub unsafe trait BufMut {
/// `self`.
#[inline]
fn put_f64_le(&mut self, n: f64) {
- self.put_u64_le(n.to_bits());
+ let mut buf = [0; 8];
+ LittleEndian::write_f64(&mut buf, n);
+ self.put_slice(&buf)
}
/// Writes an IEEE754 double-precision (8 bytes) floating point number to
--
2.46.0
```
</details>
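Every accessor touched by the revert above boils down to a fixed-width endian conversion (plus the n-byte `get_uint`/`put_uint` variants). A rough Python model of those conversions — function names here are illustrative, not the crate's API:

```python
def read_u16_be(buf: bytes) -> int:
    # What u16::from_be_bytes / BigEndian::read_u16 compute on the first 2 bytes.
    return int.from_bytes(buf[:2], "big")


def read_uint_be(buf: bytes, nbytes: int) -> int:
    # The n-byte variant behind get_uint: a big-endian integer from `nbytes` bytes.
    return int.from_bytes(buf[:nbytes], "big")


def write_u16_le(n: int) -> bytes:
    # What u16::to_le_bytes / LittleEndian::write_u16 produce.
    return n.to_bytes(2, "little")
```

The native-endian (`_ne`) variants simply select the big- or little-endian form based on the target platform.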
Thank you, we should fix this. Anyone willing to submit a PR?
I'm looking into it | 2024-08-19T21:14:07 | 1.7 | 291df5acc94b82a48765e67eeb1c1a2074539e68 | [
"test_get_int"
] | [
"bytes_mut::tests::test_original_capacity_to_repr",
"bytes_mut::tests::test_original_capacity_from_repr",
"copy_to_bytes_less",
"test_bufs_vec",
"copy_to_bytes_overflow - should panic",
"test_deref_buf_forwards",
"test_fresh_cursor_vec",
"test_get_u16",
"test_get_u16_buffer_underflow - should panic"... | [] | [] | 48 |
tokio-rs/bytes | 721 | tokio-rs__bytes-721 | [
"709"
] | 9965a04b5684079bb614addd750340ffc165a9f5 | diff --git a/CHANGELOG.md b/CHANGELOG.md
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,3 +1,8 @@
+# 1.6.1 (July 13, 2024)
+
+This release fixes a bug where `Bytes::is_unique` returns incorrect values when
+the `Bytes` originates from a shared `BytesMut`. (#718)
+
# 1.6.0 (March 22, 2024)
### Added
diff --git a/Cargo.toml b/Cargo.toml
--- a/Cargo.toml
+++ b/Cargo.toml
@@ -4,7 +4,7 @@ name = "bytes"
# When releasing to crates.io:
# - Update CHANGELOG.md.
# - Create "v1.x.y" git tag.
-version = "1.6.0"
+version = "1.6.1"
edition = "2018"
rust-version = "1.39"
license = "MIT"
diff --git a/src/bytes_mut.rs b/src/bytes_mut.rs
--- a/src/bytes_mut.rs
+++ b/src/bytes_mut.rs
@@ -1780,7 +1780,7 @@ static SHARED_VTABLE: Vtable = Vtable {
clone: shared_v_clone,
to_vec: shared_v_to_vec,
to_mut: shared_v_to_mut,
- is_unique: crate::bytes::shared_is_unique,
+ is_unique: shared_v_is_unique,
drop: shared_v_drop,
};
diff --git a/src/bytes_mut.rs b/src/bytes_mut.rs
--- a/src/bytes_mut.rs
+++ b/src/bytes_mut.rs
@@ -1843,6 +1843,12 @@ unsafe fn shared_v_to_mut(data: &AtomicPtr<()>, ptr: *const u8, len: usize) -> B
}
}
+unsafe fn shared_v_is_unique(data: &AtomicPtr<()>) -> bool {
+ let shared = data.load(Ordering::Acquire);
+ let ref_count = (*shared.cast::<Shared>()).ref_count.load(Ordering::Relaxed);
+ ref_count == 1
+}
+
unsafe fn shared_v_drop(data: &mut AtomicPtr<()>, _ptr: *const u8, _len: usize) {
data.with_mut(|shared| {
release_shared(*shared as *mut Shared);
| diff --git a/tests/test_bytes.rs b/tests/test_bytes.rs
--- a/tests/test_bytes.rs
+++ b/tests/test_bytes.rs
@@ -1173,6 +1173,15 @@ fn shared_is_unique() {
assert!(c.is_unique());
}
+#[test]
+fn mut_shared_is_unique() {
+ let mut b = BytesMut::from(LONG);
+ let c = b.split().freeze();
+ assert!(!c.is_unique());
+ drop(b);
+ assert!(c.is_unique());
+}
+
#[test]
fn test_bytesmut_from_bytes_static() {
let bs = b"1b23exfcz3r";
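The `mut_shared_is_unique` test above pins down the invariant the patch restores: a handle is unique exactly when the shared state's reference count is 1, which is what the new `shared_v_is_unique` checks. A toy Python model of that bookkeeping (illustrative only; the real code reads an atomic counter in Rust):

```python
class Shared:
    """Stand-in for the shared allocation behind BytesMut/Bytes handles."""

    def __init__(self) -> None:
        self.ref_count = 1


class Handle:
    """Stand-in for one handle over a Shared allocation."""

    def __init__(self, shared: Shared) -> None:
        self.shared = shared

    def clone(self) -> "Handle":
        self.shared.ref_count += 1
        return Handle(self.shared)

    def drop(self) -> None:
        self.shared.ref_count -= 1

    def is_unique(self) -> bool:
        # Mirrors shared_v_is_unique: unique iff ref_count == 1.
        return self.shared.ref_count == 1
```

As in the test: while both the original and the frozen handle are alive the count is 2 and `is_unique` is false; dropping one brings it back to 1.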
| Consider replacing Bytes::make_mut with impl From<Bytes> for BytesMut
`Bytes::make_mut` is a very good addition to the API but I think it would be better if it was instead exposed as `<BytesMut as From<Bytes>>::from`. Could this be done before the next bytes version is released? `Bytes::make_mut` isn't released yet.
| The reason for the current API is to support adding a fallible `Bytes::try_mut` method in the future (as I proposed in #611). See also #368
None of this mentions `From<_>` though. For some HTTP stuff I need to mutate bytes yielded by Hyper and I can do that in-place, but Hyper yields `Bytes` so I want to be able to do `O: Into<BytesMut>`.
Note also that this `try_mut` you suggest *is* what should be called `make_mut` to mimic `Arc<_>`, so that's one more argument in favour of making the current `Bytes::make_mut` become an impl `From<Bytes>` for `BytesMut`.
Oh yes, I had forgotten that I wanted `make_mut` to be the fallible conversion method. With that change I'm no longer against this (I was opposed to this from a naming standpoint, I felt that the `try_mut` and `from` names would conflict and lead to confusion).
> Oh yes, I had forgotten that I wanted `make_mut` to be the fallible conversion method. With that change I'm no longer against this (I was opposed to this from a naming standpoint, I felt that the `try_mut` and `from` names would conflict and lead to confusion).
For the API you envision, following what `Arc<T>` does, you should add `Bytes::make_mut(&mut self) -> &mut T` and `Bytes::get_mut(&mut self) -> Option<&mut T>`.
This proposal seems reasonable to me. | 2024-07-13T15:50:44 | 1.6 | 9965a04b5684079bb614addd750340ffc165a9f5 | [
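The `Arc`-style split proposed here — a fallible `get_mut` plus a clone-on-shared `make_mut` — can be sketched in Python (shapes only; the method names come from the discussion, not from a released `bytes` API):

```python
class Rc:
    """Toy refcounted box showing the get_mut / make_mut split."""

    def __init__(self, value) -> None:
        self._shared = {"value": value, "count": 1}

    def clone(self) -> "Rc":
        other = Rc.__new__(Rc)
        other._shared = self._shared
        self._shared["count"] += 1
        return other

    def get_mut(self):
        # Fallible, like Arc::get_mut: no mutable access while shared.
        return self._shared if self._shared["count"] == 1 else None

    def make_mut(self):
        # Infallible, like Arc::make_mut: detach a private copy when shared.
        if self._shared["count"] > 1:
            self._shared["count"] -= 1
            self._shared = {"value": self._shared["value"], "count": 1}
        return self._shared
```

After `make_mut` detaches a copy, both handles are sole owners of their respective values, so `get_mut` succeeds on each.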
"mut_shared_is_unique"
] | [
"bytes_mut::tests::test_original_capacity_from_repr",
"bytes_mut::tests::test_original_capacity_to_repr",
"copy_to_bytes_less",
"copy_to_bytes_overflow - should panic",
"test_get_u8",
"test_bufs_vec",
"test_deref_buf_forwards",
"test_get_u16_buffer_underflow - should panic",
"test_fresh_cursor_vec",... | [] | [] | 49 |
starkware-libs/cairo | 4,777 | starkware-libs__cairo-4777 | [
"4744"
] | 75f779e8553d28e4bfff3b23f140ccc781c9dab0 | diff --git a/crates/cairo-lang-sierra-generator/src/store_variables/known_stack.rs b/crates/cairo-lang-sierra-generator/src/store_variables/known_stack.rs
--- a/crates/cairo-lang-sierra-generator/src/store_variables/known_stack.rs
+++ b/crates/cairo-lang-sierra-generator/src/store_variables/known_stack.rs
@@ -70,9 +70,10 @@ impl KnownStack {
/// Updates offset according to the maximal index in `variables_on_stack`.
/// This is the expected behavior after invoking a libfunc.
pub fn update_offset_by_max(&mut self) {
- // `offset` is one more than the maximum of the indices in `variables_on_stack`
- // (or 0 if empty).
- self.offset = self.variables_on_stack.values().max().map(|idx| idx + 1).unwrap_or(0);
+ // `offset` is one more than the maximum of the indices in `variables_on_stack`, or the
+ // previous value (this would handle cases of variables being removed from the top of
+ // stack).
+ self.offset = self.variables_on_stack.values().fold(self.offset, |acc, f| acc.max(*f + 1));
}
/// Removes the information known about the given variable.
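The one-line fix above matters when a variable has been removed from the top of the known stack (e.g. turned into a local, as in the new test): seeding the fold with the previous `offset` keeps the offset from shrinking past the hole. Both versions, modeled in Python with illustrative variable→index maps:

```python
def update_offset_old(offset: int, variables_on_stack: dict) -> int:
    # Old behavior: one past the max known index, or 0 when the map is
    # empty -- the previous offset is forgotten.
    return max((idx + 1 for idx in variables_on_stack.values()), default=0)


def update_offset_new(offset: int, variables_on_stack: dict) -> int:
    # New behavior: a fold seeded with the previous offset, so removing
    # variables from the top of the stack cannot pull the offset back.
    for idx in variables_on_stack.values():
        offset = max(offset, idx + 1)
    return offset
```

With three values pushed (offset 3) and the top one removed from tracking, the old version shrinks the offset to 2 while the new one keeps it at 3.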
| diff --git a/crates/cairo-lang-sierra-generator/src/store_variables/test.rs b/crates/cairo-lang-sierra-generator/src/store_variables/test.rs
--- a/crates/cairo-lang-sierra-generator/src/store_variables/test.rs
+++ b/crates/cairo-lang-sierra-generator/src/store_variables/test.rs
@@ -207,6 +207,11 @@ fn get_lib_func_signature(db: &dyn SierraGenGroup, libfunc: ConcreteLibfuncId) -
],
SierraApChange::Known { new_vars_only: true },
),
+ "make_local" => LibfuncSignature::new_non_branch(
+ vec![felt252_ty.clone()],
+ vec![OutputVarInfo { ty: felt252_ty, ref_info: OutputVarReferenceInfo::NewLocalVar }],
+ SierraApChange::Known { new_vars_only: true },
+ ),
_ => panic!("get_branch_signatures() is not implemented for '{name}'."),
}
}
diff --git a/crates/cairo-lang-sierra-generator/src/store_variables/test.rs b/crates/cairo-lang-sierra-generator/src/store_variables/test.rs
--- a/crates/cairo-lang-sierra-generator/src/store_variables/test.rs
+++ b/crates/cairo-lang-sierra-generator/src/store_variables/test.rs
@@ -841,3 +846,27 @@ fn consecutive_appends_with_branch() {
]
);
}
+
+#[test]
+fn push_values_with_hole() {
+ let db = SierraGenDatabaseForTesting::default();
+ let statements: Vec<pre_sierra::Statement> = vec![
+ dummy_push_values(&db, &[("0", "100"), ("1", "101"), ("2", "102")]),
+ dummy_simple_statement(&db, "make_local", &["102"], &["102"]),
+ dummy_push_values(&db, &[("100", "200"), ("101", "201")]),
+ dummy_return_statement(&["201"]),
+ ];
+
+ assert_eq!(
+ test_add_store_statements(&db, statements, LocalVariables::default(), &["0", "1", "2"]),
+ vec![
+ "store_temp<felt252>(0) -> (100)",
+ "store_temp<felt252>(1) -> (101)",
+ "store_temp<felt252>(2) -> (102)",
+ "make_local(102) -> (102)",
+ "store_temp<felt252>(100) -> (200)",
+ "store_temp<felt252>(101) -> (201)",
+ "return(201)",
+ ]
+ );
+}
| bug: One of the arguments does not satisfy the requirements of the libfunc.
# Bug Report
**Cairo version: 2.3.0-rc0**
**Current behavior:**
```cairo
fn g(ref v: Felt252Vec<u64>, a: usize, b: usize, c: usize, d: usize, x: u64, y: u64) {
let mut v_a = v[a];
let mut v_b = v[b];
let mut v_c = v[c];
let mut v_d = v[d];
v_a = u64_wrapping_add(u64_wrapping_add(v_a, v_b), x);
v_a = u64_wrapping_add(u64_wrapping_add(v_a, v_b), y);
}
```
The above code fails with the following error:
```shell
testing bugreport ...
Error: Failed setting up runner.
Caused by:
#42: One of the arguments does not satisfy the requirements of the libfunc.
error: process did not exit successfully: exit status: 1
```
**Expected behavior:**
The above code should compile and work.
**Other information:**
I have seen a similar issue which has been closed at #3863, not sure if my bug is arriving because of the same reason.
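For reference on the two helpers the repro leans on: `u64_wrapping_add` is addition modulo 2^64, and the `rotate_right` used in the linked blake2 code is a 64-bit bitwise rotation. Python models, assuming those standard semantics:

```python
MASK64 = (1 << 64) - 1


def u64_wrapping_add(a: int, b: int) -> int:
    # Addition modulo 2**64: overflow wraps around instead of erroring.
    return (a + b) & MASK64


def rotate_right(x: int, n: int) -> int:
    # 64-bit rotation: bits shifted off the low end reappear at the top.
    n %= 64
    return ((x >> n) | (x << (64 - n))) & MASK64
```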
| I applied a temporary workaround, as you suggested to me last time @orizi, which is:
```rust
#[inline(never)]
fn no_op() {}
fn g(ref v: Felt252Vec<u64>, a: usize, b: usize, c: usize, d: usize, x: u64, y: u64) {
let mut v_a = v[a];
let mut v_b = v[b];
let mut v_c = v[c];
let mut v_d = v[d];
let tmp = u64_wrapping_add(v_a, v_b);
no_op();
v_a = u64_wrapping_add(tmp, x);
no_op();
v_d = rotate_right(v_d ^ v_a, 32);
v_c = u64_wrapping_add(v_c, v_d);
no_op();
v_b = rotate_right(v_b ^ v_c, 24);
```
which allowed us to compile
please check with 2.4.0 - I can't try to reproduce it myself as I have no self-contained code for reproduction.
@bajpai244 this was 2.4 right? not 2.3.0-rc0
Managed to run it on main and on 2.4.0 and it seemed to fully work.
Feel free to reopen if it doesn't actually work.
Just re-checked, we're running a recent nightly build and the error still occurs.
```shell
❯ scarb test -p evm
Running cairo-test evm
Compiling test(evm_unittest) evm v0.1.0 (/Users/msaug/kkrt-labs/kakarot-ssj/crates/evm/Scarb.toml)
Finished release target(s) in 10 seconds
testing evm ...
Error: Failed setting up runner.
Caused by:
#725711: One of the arguments does not satisfy the requirements of the libfunc.
❯ scarb --version
scarb 2.4.0+nightly-2023-12-23 (9447d26cc 2023-12-23)
cairo: 2.4.0 (362c5f748)
sierra: 1.4.0
```
Full code: https://github.com/kkrt-labs/kakarot-ssj/blob/8bc2e966880e129c393f97723108f515cd88acf8/crates/utils/src/crypto/blake2_compress.cairo#L147-L175
Removing the `no_op` calls and running this doesn't work:
```cairo
fn g(ref v: Felt252Vec<u64>, a: usize, b: usize, c: usize, d: usize, x: u64, y: u64) {
let mut v_a = v[a];
let mut v_b = v[b];
let mut v_c = v[c];
let mut v_d = v[d];
let tmp = u64_wrapping_add(v_a, v_b);
v_a = u64_wrapping_add(tmp, x);
v_d = rotate_right(v_d ^ v_a, 32);
v_c = u64_wrapping_add(v_c, v_d);
v_b = rotate_right(v_b ^ v_c, 24);
let tmp = u64_wrapping_add(v_a, v_b);
v_a = u64_wrapping_add(tmp, y);
v_d = rotate_right(v_d ^ v_a, 16);
v_c = u64_wrapping_add(v_c, v_d);
v_b = rotate_right(v_b ^ v_c, 63);
v.set(a, v_a);
v.set(b, v_b);
v.set(c, v_c);
v.set(d, v_d);
}
```
Managed to more or less minimally recreate it now - thanks!
"store_variables::test::push_values_with_hole"
] | [
"canonical_id_replacer::test::test_replacer",
"function_generator::test::function_generator::simple",
"function_generator::test::function_generator::literals",
"function_generator::test::function_generator::match_",
"function_generator::test::function_generator::inline",
"block_generator::test::block_gene... | [] | [] | 50 |
starkware-libs/cairo | 6,720 | starkware-libs__cairo-6720 | [
"6696"
] | 9e0fea71cd1d90cf47d25689f5433250ae2f504f | diff --git a/crates/cairo-lang-semantic/src/diagnostic.rs b/crates/cairo-lang-semantic/src/diagnostic.rs
--- a/crates/cairo-lang-semantic/src/diagnostic.rs
+++ b/crates/cairo-lang-semantic/src/diagnostic.rs
@@ -1344,6 +1344,7 @@ impl SemanticDiagnosticKind {
}
Self::MissingMember(_) => error_code!(E0003),
Self::MissingItemsInImpl(_) => error_code!(E0004),
+ Self::ModuleFileNotFound(_) => error_code!(E0005),
_ => return None,
})
}
| diff --git a/crates/cairo-lang-semantic/src/diagnostic_test.rs b/crates/cairo-lang-semantic/src/diagnostic_test.rs
--- a/crates/cairo-lang-semantic/src/diagnostic_test.rs
+++ b/crates/cairo-lang-semantic/src/diagnostic_test.rs
@@ -54,7 +54,7 @@ fn test_missing_module_file() {
assert_eq!(
db.module_semantic_diagnostics(ModuleId::Submodule(submodule_id)).unwrap().format(db),
indoc! {"
- error: Module file not found. Expected path: abc.cairo
+ error[E0005]: Module file not found. Expected path: abc.cairo
--> lib.cairo:3:9
mod abc;
^******^
diff --git a/crates/cairo-lang-semantic/src/diagnostic_test_data/not_found b/crates/cairo-lang-semantic/src/diagnostic_test_data/not_found
--- a/crates/cairo-lang-semantic/src/diagnostic_test_data/not_found
+++ b/crates/cairo-lang-semantic/src/diagnostic_test_data/not_found
@@ -41,7 +41,7 @@ mod module_does_not_exist;
//! > function_body
//! > expected_diagnostics
-error: Module file not found. Expected path: module_does_not_exist.cairo
+error[E0005]: Module file not found. Expected path: module_does_not_exist.cairo
--> lib.cairo:1:1
mod module_does_not_exist;
^************************^
diff --git a/crates/cairo-lang-starknet/src/plugin/plugin_test_data/components/no_body b/crates/cairo-lang-starknet/src/plugin/plugin_test_data/components/no_body
--- a/crates/cairo-lang-starknet/src/plugin/plugin_test_data/components/no_body
+++ b/crates/cairo-lang-starknet/src/plugin/plugin_test_data/components/no_body
@@ -19,7 +19,7 @@ error: Plugin diagnostic: Components without body are not supported.
#[starknet::component]
^********************^
-error: Module file not found. Expected path: test_component.cairo
+error[E0005]: Module file not found. Expected path: test_component.cairo
--> lib.cairo:1:1
#[starknet::component]
^********************^
diff --git a/crates/cairo-lang-starknet/src/plugin/plugin_test_data/contracts/no_body b/crates/cairo-lang-starknet/src/plugin/plugin_test_data/contracts/no_body
--- a/crates/cairo-lang-starknet/src/plugin/plugin_test_data/contracts/no_body
+++ b/crates/cairo-lang-starknet/src/plugin/plugin_test_data/contracts/no_body
@@ -19,7 +19,7 @@ error: Plugin diagnostic: Contracts without body are not supported.
#[starknet::contract]
^*******************^
-error: Module file not found. Expected path: test_contract.cairo
+error[E0005]: Module file not found. Expected path: test_contract.cairo
--> lib.cairo:1:1
#[starknet::contract]
^*******************^
| Implement code action for creating module files
Basically copy this from IntelliJ Rust:
https://github.com/user-attachments/assets/c5fef694-3a5a-4d2c-b864-2fbaa84eba3a
| 2024-11-22T01:23:04 | 2.9 | 9e0fea71cd1d90cf47d25689f5433250ae2f504f | [
"diagnostic::test::test_missing_module_file",
"diagnostic::test::diagnostics::not_found"
] | [
"diagnostic::test::test_inline_module_diagnostics",
"diagnostic::test::test_analyzer_diagnostics",
"diagnostic::test::test_inline_inline_module_diagnostics",
"diagnostic::test::diagnostics::inline",
"expr::test::expr_diagnostics::attributes",
"expr::test::expr_diagnostics::constructor",
"expr::test::exp... | [] | [] | 51 | |
tafia/calamine | 402 | tafia__calamine-402 | [
"401"
] | 7aa2086f7f1e9ddd06dd83acf24c2e9b29218b75 | diff --git a/src/xlsx/mod.rs b/src/xlsx/mod.rs
--- a/src/xlsx/mod.rs
+++ b/src/xlsx/mod.rs
@@ -459,6 +459,7 @@ impl<RS: Read + Seek> Xlsx<RS> {
// sheets must be added before this is called!!
fn read_table_metadata(&mut self) -> Result<(), XlsxError> {
+ let mut new_tables = Vec::new();
for (sheet_name, sheet_path) in &self.sheets {
let last_folder_index = sheet_path.rfind('/').expect("should be in a folder");
let (base_folder, file_name) = sheet_path.split_at(last_folder_index);
diff --git a/src/xlsx/mod.rs b/src/xlsx/mod.rs
--- a/src/xlsx/mod.rs
+++ b/src/xlsx/mod.rs
@@ -522,7 +523,6 @@ impl<RS: Read + Seek> Xlsx<RS> {
}
}
}
- let mut new_tables = Vec::new();
for table_file in table_locations {
let mut xml = match xml_reader(&mut self.zip, &table_file) {
None => continue,
diff --git a/src/xlsx/mod.rs b/src/xlsx/mod.rs
--- a/src/xlsx/mod.rs
+++ b/src/xlsx/mod.rs
@@ -606,12 +606,8 @@ impl<RS: Read + Seek> Xlsx<RS> {
dims,
));
}
- if let Some(tables) = &mut self.tables {
- tables.append(&mut new_tables);
- } else {
- self.tables = Some(new_tables);
- }
}
+ self.tables = Some(new_tables);
Ok(())
}
| diff --git a/tests/test.rs b/tests/test.rs
--- a/tests/test.rs
+++ b/tests/test.rs
@@ -1674,3 +1674,14 @@ fn issue_384_multiple_formula() {
];
assert_eq!(formula, expected)
}
+
+#[test]
+fn issue_401_empty_tables() {
+ setup();
+
+ let path = format!("{}/tests/date.xlsx", env!("CARGO_MANIFEST_DIR"));
+ let mut excel: Xlsx<_> = open_workbook(&path).unwrap();
+ excel.load_tables().unwrap();
+ let tables = excel.table_names();
+ assert!(tables.is_empty());
+}
Issue with load_tables and table_names functions
Please, help me!
When I try to parse an xlsx file with a smart table inside, the load_tables and table_names functions work fine.
But when I try to parse an xlsx file without a smart table inside, the same code fails with the error "Tables must be loaded before they are referenced"
```rust
let mut workbook: Xlsx<_> = open_workbook(cli.source_file).expect("Cannot open file");
// Get worksheets of a book
let worksheets: Vec<String> = workbook.sheet_names();
workbook.load_tables().unwrap();
let tables = workbook.table_names();
println!("Tables loaded");
println!("Defined Tables: {:?}", tables);
```
I can't figure out how to check whether there are any tables in the file.
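The pattern behind the panic can be sketched in plain Rust (the `Workbook` type and method bodies here are illustrative, not the real calamine internals): the `tables` field was only ever assigned inside the per-sheet loop, so a workbook with zero defined tables left it as `None`, and a later `table_names()` call hit the `expect`. The fix is to hoist the accumulator out of the loop and always store `Some(..)`, even when the loop body never runs.

```rust
// Minimal sketch of the bug pattern and its fix (illustrative names,
// not the calamine API): always initialize `tables` after loading.
struct Workbook {
    tables: Option<Vec<String>>,
}

impl Workbook {
    fn load_tables(&mut self, table_files: &[&str]) {
        // The fix moves this accumulator outside the per-sheet loop...
        let mut new_tables = Vec::new();
        for file in table_files {
            new_tables.push(file.to_string());
        }
        // ...so `tables` becomes Some(..) even for table-free workbooks.
        self.tables = Some(new_tables);
    }

    fn table_names(&self) -> &[String] {
        self.tables
            .as_deref()
            .expect("Tables must be loaded before they are referenced")
    }
}

fn main() {
    let mut wb = Workbook { tables: None };
    wb.load_tables(&[]); // workbook with no defined tables
    assert!(wb.table_names().is_empty()); // no longer panics
    println!("ok");
}
```

With this shape, "are there any tables?" is simply `table_names().is_empty()` after a successful `load_tables()`.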
| This is a legit bug.
I have a fix but I need to get a file to test on. | 2024-02-08T20:33:23 | 0.23 | 7aa2086f7f1e9ddd06dd83acf24c2e9b29218b75 | [
"issue_401_empty_tables"
] | [
"datatype::date_tests::test_dates",
"datatype::date_tests::test_int_dates",
"datatype::tests::test_as_f64_with_bools",
"datatype::tests::test_as_i64_with_bools",
"datatype::tests::test_partial_eq",
"de::tests::test_deserialize_enum",
"formats::test_is_date_format",
"utils::tests::sound_to_u32",
"xls... | [] | [] | 52 |
RustScan/RustScan | 660 | RustScan__RustScan-660 | [
"651"
] | c3d24feebef4c605535a6661ddfeaa18b9bde60c | diff --git a/src/address.rs b/src/address.rs
--- a/src/address.rs
+++ b/src/address.rs
@@ -1,4 +1,5 @@
//! Provides functions to parse input IP addresses, CIDRs or files.
+use std::collections::BTreeSet;
use std::fs::{self, File};
use std::io::{prelude::*, BufReader};
use std::net::{IpAddr, SocketAddr, ToSocketAddrs};
diff --git a/src/address.rs b/src/address.rs
--- a/src/address.rs
+++ b/src/address.rs
@@ -27,6 +28,8 @@ use crate::warning;
///
/// let ips = parse_addresses(&opts);
/// ```
+///
+/// Finally, any duplicates are removed to avoid excessive scans.
pub fn parse_addresses(input: &Opts) -> Vec<IpAddr> {
let mut ips: Vec<IpAddr> = Vec::new();
let mut unresolved_addresses: Vec<&str> = Vec::new();
diff --git a/src/address.rs b/src/address.rs
--- a/src/address.rs
+++ b/src/address.rs
@@ -66,7 +69,10 @@ pub fn parse_addresses(input: &Opts) -> Vec<IpAddr> {
}
}
- ips
+ ips.into_iter()
+ .collect::<BTreeSet<_>>()
+ .into_iter()
+ .collect()
}
/// Given a string, parse it as a host, IP address, or CIDR.
| diff --git a/src/address.rs b/src/address.rs
--- a/src/address.rs
+++ b/src/address.rs
@@ -256,6 +262,16 @@ mod tests {
assert_eq!(ips.len(), 0);
}
+ #[test]
+ fn parse_duplicate_cidrs() {
+ let mut opts = Opts::default();
+ opts.addresses = vec!["79.98.104.0/21".to_owned(), "79.98.104.0/24".to_owned()];
+
+ let ips = parse_addresses(&opts);
+
+ assert_eq!(ips.len(), 2_048);
+ }
+
#[test]
fn resolver_default_cloudflare() {
let opts = Opts::default();
Duplication of ports in output
rustscan.exe -a tmp/asns_prefixes_specific.txt -p 80,443,8443,8080,8006 --scripts none
<img width="365" alt="Screenshot_2" src="https://github.com/user-attachments/assets/ef874b42-60b5-4260-bc67-dd7abe6dbfce">
tmp/asns_prefixes_specific.txt
`185.52.205.0/24
79.98.109.0/24
185.239.124.0/24
185.52.204.0/22
185.228.27.0/24
185.239.127.0/24
79.98.111.0/24
185.52.207.0/24
79.98.110.0/24
79.98.108.0/24
195.189.81.0/24
185.52.206.0/24
194.145.63.0/24
185.228.26.0/24
185.52.204.0/24
185.228.25.0/24
79.98.105.0/24
185.55.228.0/24
79.98.104.0/21
5.182.21.0/24
79.98.107.0/24
185.228.24.0/24
185.55.231.0/24
185.55.228.0/22
195.189.83.0/24
185.55.229.0/24
185.199.38.0/24
185.55.230.0/24
185.199.37.0/24
45.67.19.0/24
185.166.238.0/24
89.187.83.0/24
5.182.20.0/24
79.98.106.0/24
185.228.24.0/22
195.189.80.0/24
185.239.126.0/24
195.189.80.0/22
185.133.72.0/24
195.189.82.0/24
79.98.104.0/24
`
I suspect that when we get both
79.98.104.0/21
79.98.104.0/24
duplication comes from this | 2024-09-19T00:46:28 | 2.3 | c3d24feebef4c605535a6661ddfeaa18b9bde60c | [
"address::tests::parse_duplicate_cidrs"
] | [
"address::tests::parse_correct_addresses",
"address::tests::parse_empty_hosts_file",
"address::tests::parse_naughty_host_file",
"input::tests::opts_merge_required_arguments",
"input::tests::opts_no_merge_when_config_is_ignored",
"input::tests::opts_merge_optional_arguments",
"input::tests::parse_trailin... | [] | [] | 53 |
RustScan/RustScan | 518 | RustScan__RustScan-518 | [
"515"
] | 7dd9352baee3b467b4c55c95edf4de8494ca8a40 | diff --git a/src/port_strategy/mod.rs b/src/port_strategy/mod.rs
--- a/src/port_strategy/mod.rs
+++ b/src/port_strategy/mod.rs
@@ -67,7 +67,7 @@ pub struct SerialRange {
impl RangeOrder for SerialRange {
fn generate(&self) -> Vec<u16> {
- (self.start..self.end).collect()
+ (self.start..=self.end).collect()
}
}
diff --git a/src/port_strategy/range_iterator.rs b/src/port_strategy/range_iterator.rs
--- a/src/port_strategy/range_iterator.rs
+++ b/src/port_strategy/range_iterator.rs
@@ -22,7 +22,7 @@ impl RangeIterator {
/// For example, the range `1000-2500` will be normalized to `0-1500`
/// before going through the algorithm.
pub fn new(start: u32, end: u32) -> Self {
- let normalized_end = end - start;
+ let normalized_end = end - start + 1;
let step = pick_random_coprime(normalized_end);
// Randomly choose a number within the range to be the first
| diff --git a/src/port_strategy/mod.rs b/src/port_strategy/mod.rs
--- a/src/port_strategy/mod.rs
+++ b/src/port_strategy/mod.rs
@@ -104,7 +104,7 @@ mod tests {
let range = PortRange { start: 1, end: 100 };
let strategy = PortStrategy::pick(&Some(range), None, ScanOrder::Serial);
let result = strategy.order();
- let expected_range = (1..100).into_iter().collect::<Vec<u16>>();
+ let expected_range = (1..=100).into_iter().collect::<Vec<u16>>();
assert_eq!(expected_range, result);
}
#[test]
diff --git a/src/port_strategy/mod.rs b/src/port_strategy/mod.rs
--- a/src/port_strategy/mod.rs
+++ b/src/port_strategy/mod.rs
@@ -112,7 +112,7 @@ mod tests {
let range = PortRange { start: 1, end: 100 };
let strategy = PortStrategy::pick(&Some(range), None, ScanOrder::Random);
let mut result = strategy.order();
- let expected_range = (1..100).into_iter().collect::<Vec<u16>>();
+ let expected_range = (1..=100).into_iter().collect::<Vec<u16>>();
assert_ne!(expected_range, result);
result.sort_unstable();
diff --git a/src/port_strategy/range_iterator.rs b/src/port_strategy/range_iterator.rs
--- a/src/port_strategy/range_iterator.rs
+++ b/src/port_strategy/range_iterator.rs
@@ -103,23 +103,23 @@ mod tests {
#[test]
fn range_iterator_iterates_through_the_entire_range() {
let result = generate_sorted_range(1, 10);
- let expected_range = (1..10).into_iter().collect::<Vec<u16>>();
+ let expected_range = (1..=10).into_iter().collect::<Vec<u16>>();
assert_eq!(expected_range, result);
let result = generate_sorted_range(1, 100);
- let expected_range = (1..100).into_iter().collect::<Vec<u16>>();
+ let expected_range = (1..=100).into_iter().collect::<Vec<u16>>();
assert_eq!(expected_range, result);
let result = generate_sorted_range(1, 1000);
- let expected_range = (1..1000).into_iter().collect::<Vec<u16>>();
+ let expected_range = (1..=1000).into_iter().collect::<Vec<u16>>();
assert_eq!(expected_range, result);
let result = generate_sorted_range(1, 65_535);
- let expected_range = (1..65_535).into_iter().collect::<Vec<u16>>();
+ let expected_range = (1..=65_535).into_iter().collect::<Vec<u16>>();
assert_eq!(expected_range, result);
let result = generate_sorted_range(1000, 2000);
- let expected_range = (1000..2000).into_iter().collect::<Vec<u16>>();
+ let expected_range = (1000..=2000).into_iter().collect::<Vec<u16>>();
assert_eq!(expected_range, result);
}
--range <start-end> only contains the start port, but not the end port
Running **nmap 192.168.1.2 -p 1-65535** makes nmap scan the port range 1-65535.
But running **rustscan 192.168.1.2 --range 1-65535** (or simply **rustscan 192.168.1.2**), as verified with Wireshark, scans only 1-65534 and does not include 65535.
Likewise, **nmap 192.168.1.2 -p 1-5** scans the range 1-5,
but **rustscan 192.168.1.2 --range 1-5** scans only 1-4.
Please fix this bug: services open on port 65535 may be missed when executing a full port scan.
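The off-by-one comes from Rust's exclusive range syntax, which the patch replaces with the inclusive form (`start..end` vs. `start..=end`). A standalone sketch of the difference:

```rust
// `start..end` excludes `end`; `start..=end` includes it,
// matching nmap's -p semantics for port ranges.
fn main() {
    let exclusive: Vec<u16> = (1..5).collect();
    let inclusive: Vec<u16> = (1..=5).collect();
    assert_eq!(exclusive, vec![1, 2, 3, 4]);    // port 5 is missed
    assert_eq!(inclusive, vec![1, 2, 3, 4, 5]); // all ports covered
    println!("ok");
}
```

The same one-element shortfall scaled up is exactly why `--range 1-65535` previously stopped at 65534.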
| 2023-05-26T04:18:08 | 2.1 | 2ab7191a659e4e431098d530b6376c753b8616dd | [
"port_strategy::tests::serial_strategy_with_range",
"port_strategy::tests::random_strategy_with_range",
"port_strategy::range_iterator::tests::range_iterator_iterates_through_the_entire_range"
] | [
"input::tests::opts_merge_optional_arguments",
"input::tests::opts_no_merge_when_config_is_ignored",
"input::tests::opts_merge_required_arguments",
"port_strategy::tests::serial_strategy_with_ports",
"port_strategy::tests::random_strategy_with_ports",
"scanner::socket_iterator::tests::goes_through_every_i... | [] | [] | 54 | |
clockworklabs/SpacetimeDB | 1,894 | clockworklabs__SpacetimeDB-1894 | [
"1818"
] | 9c64d1fbd17e99bbb8db1479492d88ba7578acc9 | diff --git a/crates/bindings-macro/src/lib.rs b/crates/bindings-macro/src/lib.rs
--- a/crates/bindings-macro/src/lib.rs
+++ b/crates/bindings-macro/src/lib.rs
@@ -45,6 +45,7 @@ mod sym {
symbol!(public);
symbol!(sats);
symbol!(scheduled);
+ symbol!(scheduled_at);
symbol!(unique);
symbol!(update);
diff --git a/crates/bindings-macro/src/lib.rs b/crates/bindings-macro/src/lib.rs
--- a/crates/bindings-macro/src/lib.rs
+++ b/crates/bindings-macro/src/lib.rs
@@ -154,7 +155,7 @@ mod sym {
pub fn reducer(args: StdTokenStream, item: StdTokenStream) -> StdTokenStream {
cvt_attr::<ItemFn>(args, item, quote!(), |args, original_function| {
let args = reducer::ReducerArgs::parse(args)?;
- reducer::reducer_impl(args, &original_function)
+ reducer::reducer_impl(args, original_function)
})
}
diff --git a/crates/bindings-macro/src/lib.rs b/crates/bindings-macro/src/lib.rs
--- a/crates/bindings-macro/src/lib.rs
+++ b/crates/bindings-macro/src/lib.rs
@@ -241,7 +242,7 @@ pub fn table(args: StdTokenStream, item: StdTokenStream) -> StdTokenStream {
/// Provides helper attributes for `#[spacetimedb::table]`, so that we don't get unknown attribute errors.
#[doc(hidden)]
-#[proc_macro_derive(__TableHelper, attributes(sats, unique, auto_inc, primary_key, index))]
+#[proc_macro_derive(__TableHelper, attributes(sats, unique, auto_inc, primary_key, index, scheduled_at))]
pub fn table_helper(_input: StdTokenStream) -> StdTokenStream {
Default::default()
}
diff --git a/crates/bindings-macro/src/table.rs b/crates/bindings-macro/src/table.rs
--- a/crates/bindings-macro/src/table.rs
+++ b/crates/bindings-macro/src/table.rs
@@ -1,6 +1,6 @@
use crate::sats::{self, derive_deserialize, derive_satstype, derive_serialize};
use crate::sym;
-use crate::util::{check_duplicate, check_duplicate_msg, ident_to_litstr, match_meta, MutItem};
+use crate::util::{check_duplicate, check_duplicate_msg, ident_to_litstr, match_meta};
use heck::ToSnakeCase;
use proc_macro2::{Span, TokenStream};
use quote::{format_ident, quote, quote_spanned, ToTokens};
diff --git a/crates/bindings-macro/src/table.rs b/crates/bindings-macro/src/table.rs
--- a/crates/bindings-macro/src/table.rs
+++ b/crates/bindings-macro/src/table.rs
@@ -36,21 +36,6 @@ impl TableAccess {
}
}
-// add scheduled_id and scheduled_at fields to the struct
-fn add_scheduled_fields(item: &mut syn::DeriveInput) {
- if let syn::Data::Struct(struct_data) = &mut item.data {
- if let syn::Fields::Named(fields) = &mut struct_data.fields {
- let extra_fields: syn::FieldsNamed = parse_quote!({
- #[primary_key]
- #[auto_inc]
- pub scheduled_id: u64,
- pub scheduled_at: spacetimedb::spacetimedb_lib::ScheduleAt,
- });
- fields.named.extend(extra_fields.named);
- }
- }
-}
-
struct IndexArg {
name: Ident,
kind: IndexType,
diff --git a/crates/bindings-macro/src/table.rs b/crates/bindings-macro/src/table.rs
--- a/crates/bindings-macro/src/table.rs
+++ b/crates/bindings-macro/src/table.rs
@@ -365,6 +350,7 @@ enum ColumnAttr {
AutoInc(Span),
PrimaryKey(Span),
Index(IndexArg),
+ ScheduledAt(Span),
}
impl ColumnAttr {
diff --git a/crates/bindings-macro/src/table.rs b/crates/bindings-macro/src/table.rs
--- a/crates/bindings-macro/src/table.rs
+++ b/crates/bindings-macro/src/table.rs
@@ -384,23 +370,18 @@ impl ColumnAttr {
} else if ident == sym::primary_key {
attr.meta.require_path_only()?;
Some(ColumnAttr::PrimaryKey(ident.span()))
+ } else if ident == sym::scheduled_at {
+ attr.meta.require_path_only()?;
+ Some(ColumnAttr::ScheduledAt(ident.span()))
} else {
None
})
}
}
-pub(crate) fn table_impl(mut args: TableArgs, mut item: MutItem<syn::DeriveInput>) -> syn::Result<TokenStream> {
- let scheduled_reducer_type_check = args.scheduled.as_ref().map(|reducer| {
- add_scheduled_fields(&mut item);
- let struct_name = &item.ident;
- quote! {
- const _: () = spacetimedb::rt::scheduled_reducer_typecheck::<#struct_name>(#reducer);
- }
- });
-
+pub(crate) fn table_impl(mut args: TableArgs, item: &syn::DeriveInput) -> syn::Result<TokenStream> {
let vis = &item.vis;
- let sats_ty = sats::sats_type_from_derive(&item, quote!(spacetimedb::spacetimedb_lib))?;
+ let sats_ty = sats::sats_type_from_derive(item, quote!(spacetimedb::spacetimedb_lib))?;
let original_struct_ident = sats_ty.ident;
let table_ident = &args.name;
diff --git a/crates/bindings-macro/src/table.rs b/crates/bindings-macro/src/table.rs
--- a/crates/bindings-macro/src/table.rs
+++ b/crates/bindings-macro/src/table.rs
@@ -429,7 +410,7 @@ pub(crate) fn table_impl(mut args: TableArgs, mut item: MutItem<syn::DeriveInput
if fields.len() > u16::MAX.into() {
return Err(syn::Error::new_spanned(
- &*item,
+ item,
"too many columns; the most a table can have is 2^16",
));
}
diff --git a/crates/bindings-macro/src/table.rs b/crates/bindings-macro/src/table.rs
--- a/crates/bindings-macro/src/table.rs
+++ b/crates/bindings-macro/src/table.rs
@@ -438,6 +419,7 @@ pub(crate) fn table_impl(mut args: TableArgs, mut item: MutItem<syn::DeriveInput
let mut unique_columns = vec![];
let mut sequenced_columns = vec![];
let mut primary_key_column = None;
+ let mut scheduled_at_column = None;
for (i, field) in fields.iter().enumerate() {
let col_num = i as u16;
diff --git a/crates/bindings-macro/src/table.rs b/crates/bindings-macro/src/table.rs
--- a/crates/bindings-macro/src/table.rs
+++ b/crates/bindings-macro/src/table.rs
@@ -446,6 +428,7 @@ pub(crate) fn table_impl(mut args: TableArgs, mut item: MutItem<syn::DeriveInput
let mut unique = None;
let mut auto_inc = None;
let mut primary_key = None;
+ let mut scheduled_at = None;
for attr in field.original_attrs {
let Some(attr) = ColumnAttr::parse(attr, field_ident)? else {
continue;
diff --git a/crates/bindings-macro/src/table.rs b/crates/bindings-macro/src/table.rs
--- a/crates/bindings-macro/src/table.rs
+++ b/crates/bindings-macro/src/table.rs
@@ -464,6 +447,10 @@ pub(crate) fn table_impl(mut args: TableArgs, mut item: MutItem<syn::DeriveInput
primary_key = Some(span);
}
ColumnAttr::Index(index_arg) => args.indices.push(index_arg),
+ ColumnAttr::ScheduledAt(span) => {
+ check_duplicate(&scheduled_at, span)?;
+ scheduled_at = Some(span);
+ }
}
}
diff --git a/crates/bindings-macro/src/table.rs b/crates/bindings-macro/src/table.rs
--- a/crates/bindings-macro/src/table.rs
+++ b/crates/bindings-macro/src/table.rs
@@ -489,10 +476,27 @@ pub(crate) fn table_impl(mut args: TableArgs, mut item: MutItem<syn::DeriveInput
check_duplicate_msg(&primary_key_column, span, "can only have one primary key per table")?;
primary_key_column = Some(column);
}
+ if let Some(span) = scheduled_at {
+ check_duplicate_msg(
+ &scheduled_at_column,
+ span,
+ "can only have one scheduled_at column per table",
+ )?;
+ scheduled_at_column = Some(column);
+ }
columns.push(column);
}
+ let scheduled_at_typecheck = scheduled_at_column.map(|col| {
+ let ty = col.ty;
+ quote!(let _ = |x: #ty| { let _: spacetimedb::ScheduleAt = x; };)
+ });
+ let scheduled_id_typecheck = primary_key_column.filter(|_| args.scheduled.is_some()).map(|col| {
+ let ty = col.ty;
+ quote!(spacetimedb::rt::assert_scheduled_table_primary_key::<#ty>();)
+ });
+
let row_type = quote!(#original_struct_ident);
let mut indices = args
diff --git a/crates/bindings-macro/src/table.rs b/crates/bindings-macro/src/table.rs
--- a/crates/bindings-macro/src/table.rs
+++ b/crates/bindings-macro/src/table.rs
@@ -540,17 +544,46 @@ pub(crate) fn table_impl(mut args: TableArgs, mut item: MutItem<syn::DeriveInput
let primary_col_id = primary_key_column.iter().map(|col| col.index);
let sequence_col_ids = sequenced_columns.iter().map(|col| col.index);
+ let scheduled_reducer_type_check = args.scheduled.as_ref().map(|reducer| {
+ quote! {
+ spacetimedb::rt::scheduled_reducer_typecheck::<#original_struct_ident>(#reducer);
+ }
+ });
let schedule = args
.scheduled
.as_ref()
.map(|reducer| {
- // scheduled_at was inserted as the last field
- let scheduled_at_id = (fields.len() - 1) as u16;
- quote!(spacetimedb::table::ScheduleDesc {
+ // better error message when both are missing
+ if scheduled_at_column.is_none() && primary_key_column.is_none() {
+ return Err(syn::Error::new(
+ original_struct_ident.span(),
+ "scheduled table missing required columns; add these to your struct:\n\
+ #[primary_key]\n\
+ #[auto_inc]\n\
+ scheduled_id: u64,\n\
+ #[scheduled_at]\n\
+ scheduled_at: spacetimedb::ScheduleAt,",
+ ));
+ }
+ let scheduled_at_column = scheduled_at_column.ok_or_else(|| {
+ syn::Error::new(
+ original_struct_ident.span(),
+ "scheduled tables must have a `#[scheduled_at] scheduled_at: spacetimedb::ScheduleAt` column.",
+ )
+ })?;
+ let scheduled_at_id = scheduled_at_column.index;
+ if primary_key_column.is_none() {
+ return Err(syn::Error::new(
+ original_struct_ident.span(),
+ "scheduled tables should have a `#[primary_key] #[auto_inc] scheduled_id: u64` column",
+ ));
+ }
+ Ok(quote!(spacetimedb::table::ScheduleDesc {
reducer_name: <#reducer as spacetimedb::rt::ReducerInfo>::NAME,
scheduled_at_column: #scheduled_at_id,
- })
+ }))
})
+ .transpose()?
.into_iter();
let unique_err = if !unique_columns.is_empty() {
diff --git a/crates/bindings-macro/src/table.rs b/crates/bindings-macro/src/table.rs
--- a/crates/bindings-macro/src/table.rs
+++ b/crates/bindings-macro/src/table.rs
@@ -564,7 +597,6 @@ pub(crate) fn table_impl(mut args: TableArgs, mut item: MutItem<syn::DeriveInput
quote!(::core::convert::Infallible)
};
- let field_names = fields.iter().map(|f| f.ident.unwrap()).collect::<Vec<_>>();
let field_types = fields.iter().map(|f| f.ty).collect::<Vec<_>>();
let tabletype_impl = quote! {
diff --git a/crates/bindings-macro/src/table.rs b/crates/bindings-macro/src/table.rs
--- a/crates/bindings-macro/src/table.rs
+++ b/crates/bindings-macro/src/table.rs
@@ -599,16 +631,6 @@ pub(crate) fn table_impl(mut args: TableArgs, mut item: MutItem<syn::DeriveInput
}
};
- let col_num = 0u16..;
- let field_access_impls = quote! {
- #(impl spacetimedb::table::FieldAccess<#col_num> for #original_struct_ident {
- type Field = #field_types;
- fn get_field(&self) -> &Self::Field {
- &self.#field_names
- }
- })*
- };
-
let row_type_to_table = quote!(<#row_type as spacetimedb::table::__MapRowTypeToTable>::Table);
// Output all macro data
diff --git a/crates/bindings-macro/src/table.rs b/crates/bindings-macro/src/table.rs
--- a/crates/bindings-macro/src/table.rs
+++ b/crates/bindings-macro/src/table.rs
@@ -635,6 +657,9 @@ pub(crate) fn table_impl(mut args: TableArgs, mut item: MutItem<syn::DeriveInput
let emission = quote! {
const _: () = {
#(let _ = <#field_types as spacetimedb::rt::TableColumn>::_ITEM;)*
+ #scheduled_reducer_type_check
+ #scheduled_at_typecheck
+ #scheduled_id_typecheck
};
#trait_def
diff --git a/crates/bindings-macro/src/table.rs b/crates/bindings-macro/src/table.rs
--- a/crates/bindings-macro/src/table.rs
+++ b/crates/bindings-macro/src/table.rs
@@ -669,10 +694,6 @@ pub(crate) fn table_impl(mut args: TableArgs, mut item: MutItem<syn::DeriveInput
#schema_impl
#deserialize_impl
#serialize_impl
-
- #field_access_impls
-
- #scheduled_reducer_type_check
};
if std::env::var("PROC_MACRO_DEBUG").is_ok() {
diff --git a/crates/bindings-macro/src/util.rs b/crates/bindings-macro/src/util.rs
--- a/crates/bindings-macro/src/util.rs
+++ b/crates/bindings-macro/src/util.rs
@@ -10,24 +10,14 @@ pub(crate) fn cvt_attr<Item: Parse + quote::ToTokens>(
args: StdTokenStream,
item: StdTokenStream,
extra_attr: TokenStream,
- f: impl FnOnce(TokenStream, MutItem<'_, Item>) -> syn::Result<TokenStream>,
+ f: impl FnOnce(TokenStream, &Item) -> syn::Result<TokenStream>,
) -> StdTokenStream {
let item: TokenStream = item.into();
- let mut parsed_item = match syn::parse2::<Item>(item.clone()) {
+ let parsed_item = match syn::parse2::<Item>(item.clone()) {
Ok(i) => i,
Err(e) => return TokenStream::from_iter([item, e.into_compile_error()]).into(),
};
- let mut modified = false;
- let mut_item = MutItem {
- val: &mut parsed_item,
- modified: &mut modified,
- };
- let generated = f(args.into(), mut_item).unwrap_or_else(syn::Error::into_compile_error);
- let item = if modified {
- parsed_item.into_token_stream()
- } else {
- item
- };
+ let generated = f(args.into(), &parsed_item).unwrap_or_else(syn::Error::into_compile_error);
TokenStream::from_iter([extra_attr, item, generated]).into()
}
diff --git a/crates/bindings-macro/src/util.rs b/crates/bindings-macro/src/util.rs
--- a/crates/bindings-macro/src/util.rs
+++ b/crates/bindings-macro/src/util.rs
@@ -35,23 +25,6 @@ pub(crate) fn ident_to_litstr(ident: &Ident) -> syn::LitStr {
syn::LitStr::new(&ident.to_string(), ident.span())
}
-pub(crate) struct MutItem<'a, T> {
- val: &'a mut T,
- modified: &'a mut bool,
-}
-impl<T> std::ops::Deref for MutItem<'_, T> {
- type Target = T;
- fn deref(&self) -> &Self::Target {
- self.val
- }
-}
-impl<T> std::ops::DerefMut for MutItem<'_, T> {
- fn deref_mut(&mut self) -> &mut Self::Target {
- *self.modified = true;
- self.val
- }
-}
-
pub(crate) trait ErrorSource {
fn error(self, msg: impl std::fmt::Display) -> syn::Error;
}
diff --git a/crates/bindings/src/table.rs b/crates/bindings/src/table.rs
--- a/crates/bindings/src/table.rs
+++ b/crates/bindings/src/table.rs
@@ -262,19 +262,6 @@ impl MaybeError for AutoIncOverflow {
}
}
-/// A trait for types exposing an operation to access their `N`th field.
-///
-/// In other words, a type implementing `FieldAccess<N>` allows
-/// shared projection from `self` to its `N`th field.
-#[doc(hidden)]
-pub trait FieldAccess<const N: u16> {
- /// The type of the field at the `N`th position.
- type Field;
-
- /// Project to the value of the field at position `N`.
- fn get_field(&self) -> &Self::Field;
-}
-
pub trait Column {
type Row;
type ColType;
| diff --git a/crates/bindings/tests/ui/reducers.rs b/crates/bindings/tests/ui/reducers.rs
--- a/crates/bindings/tests/ui/reducers.rs
+++ b/crates/bindings/tests/ui/reducers.rs
@@ -25,8 +25,22 @@ fn missing_ctx(_a: u8) {}
#[spacetimedb::reducer]
fn ctx_by_val(_ctx: ReducerContext, _a: u8) {}
+#[spacetimedb::table(name = scheduled_table_missing_rows, scheduled(scheduled_table_missing_rows_reducer))]
+struct ScheduledTableMissingRows {
+ x: u8,
+ y: u8,
+}
+
+// #[spacetimedb::reducer]
+// fn scheduled_table_missing_rows_reducer(_ctx: &ReducerContext, _: &ScheduledTableMissingRows) {}
+
#[spacetimedb::table(name = scheduled_table, scheduled(scheduled_table_reducer))]
struct ScheduledTable {
+ #[primary_key]
+ #[auto_inc]
+ scheduled_id: u64,
+ #[scheduled_at]
+ scheduled_at: spacetimedb::ScheduleAt,
x: u8,
y: u8,
}
diff --git a/crates/bindings/tests/ui/reducers.stderr b/crates/bindings/tests/ui/reducers.stderr
--- a/crates/bindings/tests/ui/reducers.stderr
+++ b/crates/bindings/tests/ui/reducers.stderr
@@ -10,16 +10,27 @@ error: const parameters are not allowed on reducers
20 | fn const_param<const X: u8>() {}
| ^^^^^^^^^^^
+error: scheduled table missing required columns; add these to your struct:
+ #[primary_key]
+ #[auto_inc]
+ scheduled_id: u64,
+ #[scheduled_at]
+ scheduled_at: spacetimedb::ScheduleAt,
+ --> tests/ui/reducers.rs:29:8
+ |
+29 | struct ScheduledTableMissingRows {
+ | ^^^^^^^^^^^^^^^^^^^^^^^^^
+
error[E0593]: function is expected to take 2 arguments, but it takes 3 arguments
- --> tests/ui/reducers.rs:28:56
+ --> tests/ui/reducers.rs:37:56
|
-28 | #[spacetimedb::table(name = scheduled_table, scheduled(scheduled_table_reducer))]
+37 | #[spacetimedb::table(name = scheduled_table, scheduled(scheduled_table_reducer))]
| -------------------------------------------------------^^^^^^^^^^^^^^^^^^^^^^^---
| | |
| | expected function that takes 2 arguments
| required by a bound introduced by this call
...
-35 | fn scheduled_table_reducer(_ctx: &ReducerContext, _x: u8, _y: u8) {}
+49 | fn scheduled_table_reducer(_ctx: &ReducerContext, _x: u8, _y: u8) {}
| ----------------------------------------------------------------- takes 3 arguments
|
= note: required for `for<'a> fn(&'a ReducerContext, u8, u8) {scheduled_table_reducer}` to implement `Reducer<'_, (ScheduledTable,)>`
diff --git a/crates/cli/tests/snapshots/codegen__codegen_csharp.snap b/crates/cli/tests/snapshots/codegen__codegen_csharp.snap
--- a/crates/cli/tests/snapshots/codegen__codegen_csharp.snap
+++ b/crates/cli/tests/snapshots/codegen__codegen_csharp.snap
@@ -372,22 +372,22 @@ namespace SpacetimeDB
[DataContract]
public partial class RepeatingTestArg : IDatabaseRow
{
- [DataMember(Name = "prev_time")]
- public ulong PrevTime;
[DataMember(Name = "scheduled_id")]
public ulong ScheduledId;
[DataMember(Name = "scheduled_at")]
public SpacetimeDB.ScheduleAt ScheduledAt;
+ [DataMember(Name = "prev_time")]
+ public ulong PrevTime;
public RepeatingTestArg(
- ulong PrevTime,
ulong ScheduledId,
- SpacetimeDB.ScheduleAt ScheduledAt
+ SpacetimeDB.ScheduleAt ScheduledAt,
+ ulong PrevTime
)
{
- this.PrevTime = PrevTime;
this.ScheduledId = ScheduledId;
this.ScheduledAt = ScheduledAt;
+ this.PrevTime = PrevTime;
}
public RepeatingTestArg()
diff --git a/crates/cli/tests/snapshots/codegen__codegen_rust.snap b/crates/cli/tests/snapshots/codegen__codegen_rust.snap
--- a/crates/cli/tests/snapshots/codegen__codegen_rust.snap
+++ b/crates/cli/tests/snapshots/codegen__codegen_rust.snap
@@ -2003,9 +2003,9 @@ pub(super) fn parse_table_update(
#[derive(__lib::ser::Serialize, __lib::de::Deserialize, Clone, PartialEq, Debug)]
#[sats(crate = __lib)]
pub struct RepeatingTestArg {
- pub prev_time: u64,
pub scheduled_id: u64,
pub scheduled_at: __sdk::ScheduleAt,
+ pub prev_time: u64,
}
diff --git a/crates/cli/tests/snapshots/codegen__codegen_typescript.snap b/crates/cli/tests/snapshots/codegen__codegen_typescript.snap
--- a/crates/cli/tests/snapshots/codegen__codegen_typescript.snap
+++ b/crates/cli/tests/snapshots/codegen__codegen_typescript.snap
@@ -2165,9 +2165,9 @@ import {
deepEqual,
} from "@clockworklabs/spacetimedb-sdk";
export type RepeatingTestArg = {
- prevTime: bigint,
scheduledId: bigint,
scheduledAt: { tag: "Interval", value: bigint } | { tag: "Time", value: bigint },
+ prevTime: bigint,
};
/**
diff --git a/crates/cli/tests/snapshots/codegen__codegen_typescript.snap b/crates/cli/tests/snapshots/codegen__codegen_typescript.snap
--- a/crates/cli/tests/snapshots/codegen__codegen_typescript.snap
+++ b/crates/cli/tests/snapshots/codegen__codegen_typescript.snap
@@ -2180,9 +2180,9 @@ export namespace RepeatingTestArg {
*/
export function getTypeScriptAlgebraicType(): AlgebraicType {
return AlgebraicType.createProductType([
- new ProductTypeElement("prevTime", AlgebraicType.createU64Type()),
new ProductTypeElement("scheduledId", AlgebraicType.createU64Type()),
new ProductTypeElement("scheduledAt", AlgebraicType.createScheduleAtType()),
+ new ProductTypeElement("prevTime", AlgebraicType.createU64Type()),
]);
}
diff --git a/modules/rust-wasm-test/src/lib.rs b/modules/rust-wasm-test/src/lib.rs
--- a/modules/rust-wasm-test/src/lib.rs
+++ b/modules/rust-wasm-test/src/lib.rs
@@ -104,6 +104,11 @@ pub type TestAlias = TestA;
#[spacetimedb::table(name = repeating_test_arg, scheduled(repeating_test))]
pub struct RepeatingTestArg {
+ #[primary_key]
+ #[auto_inc]
+ scheduled_id: u64,
+ #[scheduled_at]
+ scheduled_at: spacetimedb::ScheduleAt,
prev_time: Timestamp,
}
diff --git a/smoketests/tests/modules.py b/smoketests/tests/modules.py
--- a/smoketests/tests/modules.py
+++ b/smoketests/tests/modules.py
@@ -144,6 +144,11 @@ class UploadModule2(Smoketest):
#[spacetimedb::table(name = scheduled_message, public, scheduled(my_repeating_reducer))]
pub struct ScheduledMessage {
+ #[primary_key]
+ #[auto_inc]
+ scheduled_id: u64,
+ #[scheduled_at]
+ scheduled_at: spacetimedb::ScheduleAt,
prev: Timestamp,
}
diff --git a/smoketests/tests/schedule_reducer.py b/smoketests/tests/schedule_reducer.py
--- a/smoketests/tests/schedule_reducer.py
+++ b/smoketests/tests/schedule_reducer.py
@@ -25,6 +25,11 @@ class CancelReducer(Smoketest):
#[spacetimedb::table(name = scheduled_reducer_args, public, scheduled(reducer))]
pub struct ScheduledReducerArgs {
+ #[primary_key]
+ #[auto_inc]
+ scheduled_id: u64,
+ #[scheduled_at]
+ scheduled_at: spacetimedb::ScheduleAt,
num: i32,
}
diff --git a/smoketests/tests/schedule_reducer.py b/smoketests/tests/schedule_reducer.py
--- a/smoketests/tests/schedule_reducer.py
+++ b/smoketests/tests/schedule_reducer.py
@@ -53,6 +58,11 @@ class SubscribeScheduledTable(Smoketest):
#[spacetimedb::table(name = scheduled_table, public, scheduled(my_reducer))]
pub struct ScheduledTable {
+ #[primary_key]
+ #[auto_inc]
+ scheduled_id: u64,
+ #[scheduled_at]
+ scheduled_at: spacetimedb::ScheduleAt,
prev: Timestamp,
}
| `#[spacetimedb::table(scheduled())]` should probably not auto-generate the `scheduled_id`/`scheduled_at` fields for you - it should just require that they already be present
UX story:
> Someone new comes to the codebase and wants to insert a row into the table. The compiler complains "missing fields scheduled_id and scheduled_at", so they go to the struct definition, but the fields aren't there. What the heck are they?? What do they do?? How is the compiler complaining about fields that don't exist??
| 2024-10-24T02:23:38 | 1.78 | 8075567da42408ecf53146fb3f72832e41d8a956 | [
"ui"
] | [
"deptree_snapshot",
"tests/ui/tables.rs",
"crates/bindings/src/rng.rs - rng::ReducerContext::rng (line 30)"
] | [] | [] | 55 | |
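The UX complaint in the record above can be reproduced in plain Rust with no SpacetimeDB dependency. The macro and names below are hypothetical, purely to illustrate the problem: when a macro silently injects fields into a struct, every hand-written struct literal fails with "missing field" errors that point at fields the reader cannot see in the source.

```rust
// Hypothetical stand-in for a table macro that injects scheduling fields.
// The injected fields never appear at the definition site the user wrote.
macro_rules! scheduled_table {
    (struct $name:ident { $($field:ident : $ty:ty),* $(,)? }) => {
        struct $name {
            scheduled_id: u64, // injected - invisible in the user's source
            scheduled_at: u64, // injected - invisible in the user's source
            $($field: $ty),*
        }
    };
}

scheduled_table!(struct RepeatingTestArg { prev_time: u64 });

fn main() {
    // `RepeatingTestArg { prev_time: 0 }` would NOT compile here
    // ("missing fields `scheduled_id` and `scheduled_at`"), even though
    // the visible definition only mentions `prev_time` - exactly the
    // confusion the issue describes. Requiring explicit fields avoids it.
    let row = RepeatingTestArg {
        scheduled_id: 1,
        scheduled_at: 0,
        prev_time: 0,
    };
    assert_eq!(row.scheduled_id, 1);
    println!("row constructed with all injected fields spelled out");
}
```

This is why the fix above makes the module authors declare `scheduled_id` and `scheduled_at` explicitly in each scheduled table.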
JohnnyMorganz/StyLua | 422 | JohnnyMorganz__StyLua-422 | [
"421"
] | 30c713bc68d31b45fcfba1227bc6fbd4df97cba2 | diff --git a/CHANGELOG.md b/CHANGELOG.md
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -17,6 +17,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
### Fixed
- Fixed issue through static linking where Windows binary would not execute due to missing `VCRUNTIME140.dll`. ([#413](https://github.com/JohnnyMorganz/StyLua/issues/413))
- Fixed assignment with comment sometimes not hanging leading to malformed syntax. ([#416](https://github.com/JohnnyMorganz/StyLua/issues/416))
+- Fixed block ignores not applied when multiple leading block ignore comments are present at once. ([#421](https://github.com/JohnnyMorganz/StyLua/issues/421))
## [0.12.5] - 2022-03-08
### Fixed
diff --git a/src/context.rs b/src/context.rs
--- a/src/context.rs
+++ b/src/context.rs
@@ -44,33 +44,36 @@ impl Context {
// To preserve immutability of Context, we return a new Context with the `formatting_disabled` field toggled or left the same
// where necessary. Context is cheap so this is reasonable to do.
pub fn check_toggle_formatting(&self, node: &impl Node) -> Self {
- // Check comments
+ // Load all the leading comments from the token
let leading_trivia = node.surrounding_trivia().0;
- for trivia in leading_trivia {
- let comment_lines = match trivia.token_type() {
- TokenType::SingleLineComment { comment } => comment,
- TokenType::MultiLineComment { comment, .. } => comment,
- _ => continue,
- }
- .lines()
- .map(|line| line.trim());
-
- for line in comment_lines {
- if line == "stylua: ignore start" && !self.formatting_disabled {
- return Self {
- formatting_disabled: true,
- ..*self
- };
- } else if line == "stylua: ignore end" && self.formatting_disabled {
- return Self {
- formatting_disabled: false,
- ..*self
- };
+ let comment_lines = leading_trivia
+ .iter()
+ .filter_map(|trivia| {
+ match trivia.token_type() {
+ TokenType::SingleLineComment { comment } => Some(comment),
+ TokenType::MultiLineComment { comment, .. } => Some(comment),
+ _ => None,
}
+ .map(|comment| comment.lines().map(|line| line.trim()))
+ })
+ .flatten();
+
+ // Load the current formatting disabled state
+ let mut formatting_disabled = self.formatting_disabled;
+
+ // Work through all the lines and update the state as necessary
+ for line in comment_lines {
+ if line == "stylua: ignore start" {
+ formatting_disabled = true;
+ } else if line == "stylua: ignore end" {
+ formatting_disabled = false;
}
}
- *self
+ Self {
+ formatting_disabled,
+ ..*self
+ }
}
/// Checks whether we should format the given node.
| diff --git a/tests/test_ignore.rs b/tests/test_ignore.rs
--- a/tests/test_ignore.rs
+++ b/tests/test_ignore.rs
@@ -211,3 +211,48 @@ local bar = baz
local bar = baz
"###);
}
+
+#[test]
+fn test_multiline_block_ignore_multiple_comments_in_leading_trivia() {
+ insta::assert_snapshot!(
+ format(
+ r###"--stylua: ignore start
+local a = 1
+--stylua: ignore end
+
+--stylua: ignore start
+local b = 2
+--stylua: ignore end
+
+--stylua: ignore start
+local c = 3
+--stylua: ignore end
+
+-- Some very large comment
+
+--stylua: ignore start
+local d = 4
+--stylua: ignore end
+"###
+ ),
+ @r###"
+ --stylua: ignore start
+ local a = 1
+ --stylua: ignore end
+
+ --stylua: ignore start
+ local b = 2
+ --stylua: ignore end
+
+ --stylua: ignore start
+ local c = 3
+ --stylua: ignore end
+
+ -- Some very large comment
+
+ --stylua: ignore start
+ local d = 4
+ --stylua: ignore end
+ "###
+ )
+}
| Block ignore is not applied to all back-to-back blocks
When `--stylua: ignore start/end` comments are applied back-to-back, only every second ignored block actually takes effect. This might be considered a non-issue, because such blocks can be merged into a single ignored block, but that is inconvenient when they are separated by a large comment block.
My understanding is that "this comment must be preceding a statement (same as `-- stylua: ignore`), and cannot cross block scope boundaries" holds here, although I am not sure about it.
Steps to reproduce:
- Create file `tmp.lua` with the following contents:
```lua
--stylua: ignore start
local a = 1
--stylua: ignore end
--stylua: ignore start
local b = 2
--stylua: ignore end
--stylua: ignore start
local c = 3
--stylua: ignore end
-- Some very large comment
--stylua: ignore start
local d = 4
--stylua: ignore end
```
- Run `stylua tmp.lua`.
Currently observed output:
```lua
--stylua: ignore start
local a = 1
--stylua: ignore end
--stylua: ignore start
local b = 2
--stylua: ignore end
--stylua: ignore start
local c = 3
--stylua: ignore end
-- Some very large comment
--stylua: ignore start
local d = 4
--stylua: ignore end
```
Details:
- Version of StyLua: 0.12.5
- OS: Xubuntu 20.04.
| Yeah, this is a bug - it seems as though we are not correctly re-opening an ignore section when we see the `--stylua: ignore start` comment.
This code is actually parsed as these separate statements:
```lua
--stylua: ignore start
local a = 1
---------------------------
--stylua: ignore end
--stylua: ignore start
local b = 2
```
where `--stylua: ignore end` and `--stylua: ignore start` are both leading trivia of the `local b` statement.
We are probably exiting early when we see the `--stylua: ignore end` comment in the leading trivia, but we should continue scanning in case we find another ignore comment afterwards | 2022-03-27T21:08:32 | 0.12 | a369d02342f3b1432ff89c619c97129a57678141 | [
"test_multiline_block_ignore_multiple_comments_in_leading_trivia"
] | [
"formatters::trivia_util::tests::test_token_contains_no_singleline_comments_2",
"formatters::trivia_util::tests::test_token_contains_no_singleline_comments",
"formatters::trivia_util::tests::test_token_contains_singleline_comments",
"tests::test_config_call_parentheses",
"tests::test_config_column_width",
... | [] | [] | 56 |
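The fix in the StyLua patch above replaces an early `return` with a full scan of every comment line in the leading trivia, toggling the disabled state as it goes. A minimal, self-contained sketch of that scan (a simplification of `Context::check_toggle_formatting`, with a hypothetical function name):

```rust
// Walk ALL leading-trivia comment lines and toggle state, instead of
// returning on the first match. This is what lets "ignore end" followed
// by "ignore start" in the same trivia block re-open the ignored section.
fn resolve_formatting_disabled(initial: bool, comment_lines: &[&str]) -> bool {
    let mut formatting_disabled = initial;
    for line in comment_lines.iter().map(|line| line.trim()) {
        if line == "stylua: ignore start" {
            formatting_disabled = true;
        } else if line == "stylua: ignore end" {
            formatting_disabled = false;
        }
    }
    formatting_disabled
}

fn main() {
    // Leading trivia of the `local b` statement from the hint above
    // (comment text with the `--` prefix already stripped): the section
    // must end up disabled again, not stuck at the first "ignore end".
    let trivia = ["stylua: ignore end", "stylua: ignore start"];
    assert!(resolve_formatting_disabled(true, &trivia));
    println!("section re-opened: formatting stays disabled");
}
```

Because the final state is computed from all comment lines, back-to-back `end`/`start` pairs in one trivia block behave the same as if they were attached to separate statements.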
JohnnyMorganz/StyLua | 931 | JohnnyMorganz__StyLua-931 | [
"928"
] | d67c77b6f73ccab63d544adffe9cf3d859e71fa4 | diff --git a/CHANGELOG.md b/CHANGELOG.md
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -7,6 +7,10 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## [Unreleased]
+### Fixed
+
+- Fixed regression where configuration present in current working directory not used when formatting from stdin and no `--stdin-filepath` is provided ([#928](https://github.com/JohnnyMorganz/StyLua/issues/928))
+
## [2.0.1] - 2024-11-18
### Added
diff --git a/src/cli/config.rs b/src/cli/config.rs
--- a/src/cli/config.rs
+++ b/src/cli/config.rs
@@ -55,15 +55,22 @@ impl ConfigResolver<'_> {
})
}
+ /// Returns the root used when searching for configuration
+ /// If `--search-parent-directories`, then there is no root, and we keep searching
+ /// Else, the root is the current working directory, and we do not search higher than the cwd
+ fn get_configuration_search_root(&self) -> Option<PathBuf> {
+ match self.opt.search_parent_directories {
+ true => None,
+ false => Some(self.current_directory.to_path_buf()),
+ }
+ }
+
pub fn load_configuration(&mut self, path: &Path) -> Result<Config> {
if let Some(configuration) = self.forced_configuration {
return Ok(configuration);
}
- let root = match self.opt.search_parent_directories {
- true => None,
- false => Some(self.current_directory.to_path_buf()),
- };
+ let root = self.get_configuration_search_root();
let absolute_path = self.current_directory.join(path);
let parent_path = &absolute_path
diff --git a/src/cli/config.rs b/src/cli/config.rs
--- a/src/cli/config.rs
+++ b/src/cli/config.rs
@@ -91,19 +98,25 @@ impl ConfigResolver<'_> {
return Ok(configuration);
}
+ let root = self.get_configuration_search_root();
+ let my_current_directory = self.current_directory.to_owned();
+
match &self.opt.stdin_filepath {
Some(filepath) => self.load_configuration(filepath),
- None => {
- #[cfg(feature = "editorconfig")]
- if self.opt.no_editorconfig {
+ None => match self.find_config_file(&my_current_directory, root)? {
+ Some(config) => Ok(config),
+ None => {
+ #[cfg(feature = "editorconfig")]
+ if self.opt.no_editorconfig {
+ Ok(self.default_configuration)
+ } else {
+ editorconfig::parse(self.default_configuration, &PathBuf::from("*.lua"))
+ .context("could not parse editorconfig")
+ }
+ #[cfg(not(feature = "editorconfig"))]
Ok(self.default_configuration)
- } else {
- editorconfig::parse(self.default_configuration, &PathBuf::from("*.lua"))
- .context("could not parse editorconfig")
}
- #[cfg(not(feature = "editorconfig"))]
- Ok(self.default_configuration)
- }
+ },
}
}
| diff --git a/src/cli/main.rs b/src/cli/main.rs
--- a/src/cli/main.rs
+++ b/src/cli/main.rs
@@ -810,6 +810,24 @@ mod tests {
cwd.close().unwrap();
}
+ #[test]
+ fn test_cwd_configuration_respected_when_formatting_from_stdin() {
+ let cwd = construct_tree!({
+ "stylua.toml": "quote_style = 'AutoPreferSingle'",
+ "foo.lua": "local x = \"hello\"",
+ });
+
+ let mut cmd = create_stylua();
+ cmd.current_dir(cwd.path())
+ .arg("-")
+ .write_stdin("local x = \"hello\"")
+ .assert()
+ .success()
+ .stdout("local x = 'hello'\n");
+
+ cwd.close().unwrap();
+ }
+
#[test]
fn test_cwd_configuration_respected_for_file_in_cwd() {
let cwd = construct_tree!({
| After the new release, stdin does not respect `.stylua.toml` by default
unless `--config-path` is set explicitly; before, this was not needed.

| Looks like a mistake in https://github.com/JohnnyMorganz/StyLua/blob/1daf4c18fb6baf687cc41082f1cc6905ced226a4/src/cli/config.rs#L96, we forgot to check the current directory if a stdin file path was not provided. Thanks for reporting!
As a work around, you can use `--stdin-filepath` to specify the file path for the file you're formatting. It can even be a dummy file: `--stdin-filepath ./dummy` should in theory lookup config in the current working directory
Sure, but I hope it can be fixed in the next release. | 2024-11-30T20:21:18 | 2.0 | f581279895a7040a36abaea4a6c8a6f50f112cd7 | [
"tests::test_cwd_configuration_respected_when_formatting_from_stdin"
] | [
"editorconfig::tests::test_call_parentheses_always",
"editorconfig::tests::test_call_parentheses_no_single_string",
"editorconfig::tests::test_call_parentheses_no_single_table",
"editorconfig::tests::test_call_parentheses_none",
"editorconfig::tests::test_collapse_simple_statement_always",
"editorconfig::... | [] | [] | 57 |
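The regression fixed in the record above came down to choosing where the config search starts when formatting stdin: from the parent of `--stdin-filepath` when one is given, and otherwise from the current working directory instead of falling straight through to defaults. A simplified sketch of that decision (the function name is hypothetical):

```rust
use std::path::{Path, PathBuf};

// Pick the directory where config lookup should begin for stdin input.
// The bug was effectively ignoring the cwd in the `None` branch.
fn stdin_lookup_start(stdin_filepath: Option<&Path>, cwd: &Path) -> PathBuf {
    match stdin_filepath.and_then(Path::parent) {
        Some(parent) => parent.to_path_buf(),
        None => cwd.to_path_buf(), // no --stdin-filepath: search from cwd
    }
}

fn main() {
    let cwd = Path::new("/project");
    // With --stdin-filepath, search starts from the file's directory.
    assert_eq!(
        stdin_lookup_start(Some(Path::new("/project/src/foo.lua")), cwd),
        PathBuf::from("/project/src")
    );
    // Without it, search starts from the current working directory.
    assert_eq!(stdin_lookup_start(None, cwd), PathBuf::from("/project"));
    println!("stdin config lookup roots resolved");
}
```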
JohnnyMorganz/StyLua | 926 | JohnnyMorganz__StyLua-926 | [
"925"
] | 08e536f3a02d9a07b0991726897e0013ff78dd21 | diff --git a/CHANGELOG.md b/CHANGELOG.md
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -7,6 +7,10 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## [Unreleased]
+### Fixed
+
+- Fixed CLI overrides not applying on top of a resolved `stylua.toml` file ([#925](https://github.com/JohnnyMorganz/StyLua/issues/925))
+
## [2.0.0] - 2024-11-17
### Breaking Changes
diff --git a/src/cli/config.rs b/src/cli/config.rs
--- a/src/cli/config.rs
+++ b/src/cli/config.rs
@@ -113,7 +113,9 @@ impl ConfigResolver<'_> {
match config_file {
Some(file_path) => {
debug!("config: found config at {}", file_path.display());
- read_config_file(&file_path).map(Some)
+ let config = read_and_apply_overrides(&file_path, self.opt)?;
+ debug!("config: {:#?}", config);
+ Ok(Some(config))
}
None => Ok(None),
}
diff --git a/src/cli/main.rs b/src/cli/main.rs
--- a/src/cli/main.rs
+++ b/src/cli/main.rs
@@ -271,9 +271,6 @@ fn format(opt: opt::Opt) -> Result<i32> {
let opt_for_config_resolver = opt.clone();
let mut config_resolver = config::ConfigResolver::new(&opt_for_config_resolver)?;
- // TODO:
- // debug!("config: {:#?}", config);
-
// Create range if provided
let range = if opt.range_start.is_some() || opt.range_end.is_some() {
Some(Range::from_values(opt.range_start, opt.range_end))
| diff --git a/src/cli/main.rs b/src/cli/main.rs
--- a/src/cli/main.rs
+++ b/src/cli/main.rs
@@ -953,4 +950,74 @@ mod tests {
cwd.close().unwrap();
}
+
+ #[test]
+ fn test_uses_cli_overrides_instead_of_default_configuration() {
+ let cwd = construct_tree!({
+ "foo.lua": "local x = \"hello\"",
+ });
+
+ let mut cmd = create_stylua();
+ cmd.current_dir(cwd.path())
+ .args(["--quote-style", "AutoPreferSingle", "."])
+ .assert()
+ .success();
+
+ cwd.child("foo.lua").assert("local x = 'hello'\n");
+
+ cwd.close().unwrap();
+ }
+
+ #[test]
+ fn test_uses_cli_overrides_instead_of_default_configuration_stdin_filepath() {
+ let cwd = construct_tree!({
+ "foo.lua": "local x = \"hello\"",
+ });
+
+ let mut cmd = create_stylua();
+ cmd.current_dir(cwd.path())
+ .args(["--quote-style", "AutoPreferSingle", "-"])
+ .write_stdin("local x = \"hello\"")
+ .assert()
+ .success()
+ .stdout("local x = 'hello'\n");
+
+ cwd.close().unwrap();
+ }
+
+ #[test]
+ fn test_uses_cli_overrides_instead_of_found_configuration() {
+ let cwd = construct_tree!({
+ "stylua.toml": "quote_style = 'AutoPreferDouble'",
+ "foo.lua": "local x = \"hello\"",
+ });
+
+ let mut cmd = create_stylua();
+ cmd.current_dir(cwd.path())
+ .args(["--quote-style", "AutoPreferSingle", "."])
+ .assert()
+ .success();
+
+ cwd.child("foo.lua").assert("local x = 'hello'\n");
+
+ cwd.close().unwrap();
+ }
+
+ #[test]
+ fn test_uses_cli_overrides_instead_of_found_configuration_stdin_filepath() {
+ let cwd = construct_tree!({
+ "stylua.toml": "quote_style = 'AutoPreferDouble'",
+ "foo.lua": "local x = \"hello\"",
+ });
+
+ let mut cmd = create_stylua();
+ cmd.current_dir(cwd.path())
+ .args(["--quote-style", "AutoPreferSingle", "-"])
+ .write_stdin("local x = \"hello\"")
+ .assert()
+ .success()
+ .stdout("local x = 'hello'\n");
+
+ cwd.close().unwrap();
+ }
}
| CLI overrides not applied when formatting files in v2.0.0
I tried to run it both from Windows and from Docker for Windows
`stylua.exe --line-endings Windows --check .`
`stylua.exe --line-endings Unix --check .`
The result is the same, it gives errors in all files. Example of the first two files
```
Diff in ./cloud_test_tasks/runTests.luau:
1 |-require(game.ReplicatedStorage.Shared.LuTestRoblox.LuTestRunner)
1 |+require(game.ReplicatedStorage.Shared.LuTestRoblox.LuTestRunner)
Diff in ./place_battle/replicated_storage/tests/test_workspace.luau:
1 |-local tests = {}
2 |-local Workspace = game:GetService("Workspace")
3 |-
4 |-function tests.test_server_workspace_gravity()
5 |- assert(math.floor(Workspace.Gravity) == 196, "Gravity value is: " .. Workspace.Gravity)
6 |-end
7 |-
8 |-return tests
1 |+local tests = {}
2 |+local Workspace = game:GetService("Workspace")
3 |+
4 |+function tests.test_server_workspace_gravity()
5 |+ assert(math.floor(Workspace.Gravity) == 196, "Gravity value is: " .. Workspace.Gravity)
6 |+end
7 |+
8 |+return tests
Diff in ./place_battle/replicated_storage/tests/test_starterplayer.luau:
```
After reverting to 0.20.0 the results became normal again
| Hm, this does look like line endings changing for some reason. Which is weird, since I don't think we touched that at all.
Do you by any chance have an example file I can test this with? Either in a public repo, or attaching a code file here (not in a code block since we need to preserve the line endings)
Yes, sure
[runTests.zip](https://github.com/user-attachments/files/17792304/runTests.zip)
| 2024-11-18T05:01:53 | 2.0 | f581279895a7040a36abaea4a6c8a6f50f112cd7 | [
"tests::test_uses_cli_overrides_instead_of_found_configuration"
] | [
"editorconfig::tests::test_call_parentheses_no_single_table",
"editorconfig::tests::test_call_parentheses_always",
"editorconfig::tests::test_call_parentheses_no_single_string",
"editorconfig::tests::test_call_parentheses_none",
"editorconfig::tests::test_collapse_simple_statement_function_only",
"editorc... | [] | [] | 58 |
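The fix in the record above routes every resolved configuration — whether read from a `stylua.toml` or taken from defaults — through the same override step, so CLI flags always win over file values. A minimal sketch with stand-in types (the `Config`/`Opt` shapes here are simplified, not StyLua's real ones):

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
enum QuoteStyle {
    AutoPreferDouble,
    AutoPreferSingle,
}

#[derive(Clone, Copy, Debug, PartialEq)]
struct Config {
    quote_style: QuoteStyle,
}

struct Opt {
    quote_style: Option<QuoteStyle>,
}

// Apply CLI overrides on top of whatever config was resolved.
// Every code path producing a Config must pass through this step.
fn load_overrides(config: Config, opt: &Opt) -> Config {
    let mut new_config = config;
    if let Some(quote_style) = opt.quote_style {
        new_config.quote_style = quote_style;
    }
    new_config
}

fn main() {
    // stylua.toml said AutoPreferDouble, but --quote-style on the CLI
    // said AutoPreferSingle: the CLI flag must take precedence.
    let from_file = Config { quote_style: QuoteStyle::AutoPreferDouble };
    let cli = Opt { quote_style: Some(QuoteStyle::AutoPreferSingle) };
    let merged = load_overrides(from_file, &cli);
    assert_eq!(merged.quote_style, QuoteStyle::AutoPreferSingle);
    println!("CLI override applied over file config");
}
```

The regression happened because only the default-config path went through the override step; resolved config files skipped it.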
JohnnyMorganz/StyLua | 916 | JohnnyMorganz__StyLua-916 | [
"831",
"915"
] | d7d532b4baf2bcf2adf1925d7248f659788e5b58 | diff --git a/README.md b/README.md
--- a/README.md
+++ b/README.md
@@ -229,7 +229,8 @@ StyLua has opinionated defaults, but also provides a few options that can be set
### Finding the configuration
-The CLI looks for `stylua.toml` or `.stylua.toml` in the directory where the tool was executed.
+The CLI looks for a `stylua.toml` or `.stylua.toml` starting from the directory of the file being formatted.
+It will keep searching upwards until it reaches the current directory where the tool was executed.
If not found, we search for an `.editorconfig` file, otherwise fall back to the default configuration.
This feature can be disabled using `--no-editorconfig`.
See [EditorConfig](https://editorconfig.org/) for more details.
diff --git a/src/cli/config.rs b/src/cli/config.rs
--- a/src/cli/config.rs
+++ b/src/cli/config.rs
@@ -1,12 +1,16 @@
use crate::opt::Opt;
use anyhow::{Context, Result};
use log::*;
+use std::collections::HashMap;
use std::env;
use std::fs;
use std::path::{Path, PathBuf};
use stylua_lib::Config;
use stylua_lib::SortRequiresConfig;
+#[cfg(feature = "editorconfig")]
+use stylua_lib::editorconfig;
+
static CONFIG_FILE_NAME: [&str; 2] = ["stylua.toml", ".stylua.toml"];
fn read_config_file(path: &Path) -> Result<Config> {
diff --git a/src/cli/config.rs b/src/cli/config.rs
--- a/src/cli/config.rs
+++ b/src/cli/config.rs
@@ -16,144 +20,217 @@ fn read_config_file(path: &Path) -> Result<Config> {
Ok(config)
}
-/// Searches the directory for the configuration toml file (i.e. `stylua.toml` or `.stylua.toml`)
-fn find_toml_file(directory: &Path) -> Option<PathBuf> {
- for name in &CONFIG_FILE_NAME {
- let file_path = directory.join(name);
- if file_path.exists() {
- return Some(file_path);
- }
- }
+fn read_and_apply_overrides(path: &Path, opt: &Opt) -> Result<Config> {
+ read_config_file(path).map(|config| load_overrides(config, opt))
+}
- None
+pub struct ConfigResolver<'a> {
+ config_cache: HashMap<PathBuf, Option<Config>>,
+ forced_configuration: Option<Config>,
+ current_directory: PathBuf,
+ default_configuration: Config,
+ opt: &'a Opt,
}
-/// Looks for a configuration file in the directory provided (and its parent's recursively, if specified)
-fn find_config_file(mut directory: PathBuf, recursive: bool) -> Result<Option<Config>> {
- debug!("config: looking for config in {}", directory.display());
- let config_file = find_toml_file(&directory);
- match config_file {
- Some(file_path) => {
- debug!("config: found config at {}", file_path.display());
- read_config_file(&file_path).map(Some)
+impl ConfigResolver<'_> {
+ pub fn new(opt: &Opt) -> Result<ConfigResolver> {
+ let forced_configuration = opt
+ .config_path
+ .as_ref()
+ .map(|config_path| {
+ debug!(
+ "config: explicit config path provided at {}",
+ config_path.display()
+ );
+ read_and_apply_overrides(config_path, opt)
+ })
+ .transpose()?;
+
+ Ok(ConfigResolver {
+ config_cache: HashMap::new(),
+ forced_configuration,
+ current_directory: env::current_dir().context("Could not find current directory")?,
+ default_configuration: load_overrides(Config::default(), opt),
+ opt,
+ })
+ }
+
+ pub fn load_configuration(&mut self, path: &Path) -> Result<Config> {
+ if let Some(configuration) = self.forced_configuration {
+ return Ok(configuration);
}
- None => {
- // Both don't exist, search up the tree if necessary
- // directory.pop() mutates the path to get its parent, and returns false if no more parent
- if recursive && directory.pop() {
- find_config_file(directory, recursive)
- } else {
- Ok(None)
+
+ let root = match self.opt.search_parent_directories {
+ true => None,
+ false => Some(self.current_directory.to_path_buf()),
+ };
+
+ let absolute_path = self.current_directory.join(path);
+ let parent_path = &absolute_path
+ .parent()
+ .with_context(|| format!("no parent directory found for {}", path.display()))?;
+
+ match self.find_config_file(parent_path, root)? {
+ Some(config) => Ok(config),
+ None => {
+ #[cfg(feature = "editorconfig")]
+ if self.opt.no_editorconfig {
+ Ok(self.default_configuration)
+ } else {
+ editorconfig::parse(self.default_configuration, path)
+ .context("could not parse editorconfig")
+ }
+ #[cfg(not(feature = "editorconfig"))]
+ Ok(self.default_configuration)
}
}
}
-}
-pub fn find_ignore_file_path(mut directory: PathBuf, recursive: bool) -> Option<PathBuf> {
- debug!("config: looking for ignore file in {}", directory.display());
- let file_path = directory.join(".styluaignore");
- if file_path.is_file() {
- Some(file_path)
- } else if recursive && directory.pop() {
- find_ignore_file_path(directory, recursive)
- } else {
- None
- }
-}
+ pub fn load_configuration_for_stdin(&mut self) -> Result<Config> {
+ if let Some(configuration) = self.forced_configuration {
+ return Ok(configuration);
+ }
-/// Looks for a configuration file at either `$XDG_CONFIG_HOME`, `$XDG_CONFIG_HOME/stylua`, `$HOME/.config` or `$HOME/.config/stylua`
-fn search_config_locations() -> Result<Option<Config>> {
- // Look in `$XDG_CONFIG_HOME`
- if let Ok(xdg_config) = std::env::var("XDG_CONFIG_HOME") {
- let xdg_config_path = Path::new(&xdg_config);
- if xdg_config_path.exists() {
- debug!("config: looking in $XDG_CONFIG_HOME");
+ match &self.opt.stdin_filepath {
+ Some(filepath) => self.load_configuration(filepath),
+ None => {
+ #[cfg(feature = "editorconfig")]
+ if self.opt.no_editorconfig {
+ Ok(self.default_configuration)
+ } else {
+ editorconfig::parse(self.default_configuration, &PathBuf::from("*.lua"))
+ .context("could not parse editorconfig")
+ }
+ #[cfg(not(feature = "editorconfig"))]
+ Ok(self.default_configuration)
+ }
+ }
+ }
- if let Some(config) = find_config_file(xdg_config_path.to_path_buf(), false)? {
- return Ok(Some(config));
+ fn lookup_config_file_in_directory(&self, directory: &Path) -> Result<Option<Config>> {
+ debug!("config: looking for config in {}", directory.display());
+ let config_file = find_toml_file(directory);
+ match config_file {
+ Some(file_path) => {
+ debug!("config: found config at {}", file_path.display());
+ read_config_file(&file_path).map(Some)
}
+ None => Ok(None),
+ }
+ }
- debug!("config: looking in $XDG_CONFIG_HOME/stylua");
- let xdg_config_path = xdg_config_path.join("stylua");
- if xdg_config_path.exists() {
- if let Some(config) = find_config_file(xdg_config_path, false)? {
- return Ok(Some(config));
+ /// Looks for a configuration file in the directory provided
+ /// Keep searching recursively upwards until we hit the root (if provided), then stop
+ /// When `--search-parent-directories` is enabled, root = None, else root = Some(cwd)
+ fn find_config_file(
+ &mut self,
+ directory: &Path,
+ root: Option<PathBuf>,
+ ) -> Result<Option<Config>> {
+ if let Some(config) = self.config_cache.get(directory) {
+ return Ok(*config);
+ }
+
+ let resolved_configuration = match self.lookup_config_file_in_directory(directory)? {
+ Some(config) => Some(config),
+ None => {
+ let parent_directory = directory.parent();
+ let should_stop = Some(directory) == root.as_deref() || parent_directory.is_none();
+
+ if should_stop {
+ debug!("config: no configuration file found");
+ if self.opt.search_parent_directories {
+ if let Some(config) = self.search_config_locations()? {
+ return Ok(Some(config));
+ }
+ }
+
+ debug!("config: falling back to default config");
+ None
+ } else {
+ self.find_config_file(parent_directory.unwrap(), root)?
}
}
- }
+ };
+
+ self.config_cache
+ .insert(directory.to_path_buf(), resolved_configuration);
+ Ok(resolved_configuration)
}
- // Look in `$HOME/.config`
- if let Ok(home) = std::env::var("HOME") {
- let home_config_path = Path::new(&home).join(".config");
+ /// Looks for a configuration file at either `$XDG_CONFIG_HOME`, `$XDG_CONFIG_HOME/stylua`, `$HOME/.config` or `$HOME/.config/stylua`
+ fn search_config_locations(&self) -> Result<Option<Config>> {
+ // Look in `$XDG_CONFIG_HOME`
+ if let Ok(xdg_config) = std::env::var("XDG_CONFIG_HOME") {
+ let xdg_config_path = Path::new(&xdg_config);
+ if xdg_config_path.exists() {
+ debug!("config: looking in $XDG_CONFIG_HOME");
- if home_config_path.exists() {
- debug!("config: looking in $HOME/.config");
+ if let Some(config) = self.lookup_config_file_in_directory(xdg_config_path)? {
+ return Ok(Some(config));
+ }
- if let Some(config) = find_config_file(home_config_path.to_owned(), false)? {
- return Ok(Some(config));
+ debug!("config: looking in $XDG_CONFIG_HOME/stylua");
+ let xdg_config_path = xdg_config_path.join("stylua");
+ if xdg_config_path.exists() {
+ if let Some(config) = self.lookup_config_file_in_directory(&xdg_config_path)? {
+ return Ok(Some(config));
+ }
+ }
}
+ }
+
+ // Look in `$HOME/.config`
+ if let Ok(home) = std::env::var("HOME") {
+ let home_config_path = Path::new(&home).join(".config");
- debug!("config: looking in $HOME/.config/stylua");
- let home_config_path = home_config_path.join("stylua");
if home_config_path.exists() {
- if let Some(config) = find_config_file(home_config_path, false)? {
+ debug!("config: looking in $HOME/.config");
+
+ if let Some(config) = self.lookup_config_file_in_directory(&home_config_path)? {
return Ok(Some(config));
}
+
+ debug!("config: looking in $HOME/.config/stylua");
+ let home_config_path = home_config_path.join("stylua");
+ if home_config_path.exists() {
+ if let Some(config) = self.lookup_config_file_in_directory(&home_config_path)? {
+ return Ok(Some(config));
+ }
+ }
}
}
- }
- Ok(None)
+ Ok(None)
+ }
}
-pub fn load_config(opt: &Opt) -> Result<Option<Config>> {
- match &opt.config_path {
- Some(config_path) => {
- debug!(
- "config: explicit config path provided at {}",
- config_path.display()
- );
- read_config_file(config_path).map(Some)
+/// Searches the directory for the configuration toml file (i.e. `stylua.toml` or `.stylua.toml`)
+fn find_toml_file(directory: &Path) -> Option<PathBuf> {
+ for name in &CONFIG_FILE_NAME {
+ let file_path = directory.join(name);
+ if file_path.exists() {
+ return Some(file_path);
}
- None => {
- let current_dir = match &opt.stdin_filepath {
- Some(file_path) => file_path
- .parent()
- .context("Could not find current directory from provided stdin filepath")?
- .to_path_buf(),
- None => env::current_dir().context("Could not find current directory")?,
- };
-
- debug!(
- "config: starting config search from {} - recursively searching parents: {}",
- current_dir.display(),
- opt.search_parent_directories
- );
- let config = find_config_file(current_dir, opt.search_parent_directories)?;
- match config {
- Some(config) => Ok(Some(config)),
- None => {
- debug!("config: no configuration file found");
+ }
- // Search the configuration directory for a file, if necessary
- if opt.search_parent_directories {
- if let Some(config) = search_config_locations()? {
- return Ok(Some(config));
- }
- }
+ None
+}
- // Fallback to a default configuration
- debug!("config: falling back to default config");
- Ok(None)
- }
- }
- }
+pub fn find_ignore_file_path(mut directory: PathBuf, recursive: bool) -> Option<PathBuf> {
+ debug!("config: looking for ignore file in {}", directory.display());
+ let file_path = directory.join(".styluaignore");
+ if file_path.is_file() {
+ Some(file_path)
+ } else if recursive && directory.pop() {
+ find_ignore_file_path(directory, recursive)
+ } else {
+ None
}
}
/// Handles any overrides provided by command line options
-pub fn load_overrides(config: Config, opt: &Opt) -> Config {
+fn load_overrides(config: Config, opt: &Opt) -> Config {
let mut new_config = config;
if let Some(syntax) = opt.format_opts.syntax {
diff --git a/src/cli/main.rs b/src/cli/main.rs
--- a/src/cli/main.rs
+++ b/src/cli/main.rs
@@ -8,16 +8,12 @@ use std::collections::HashSet;
use std::fs;
use std::io::{stderr, stdin, stdout, Read, Write};
use std::path::Path;
-#[cfg(feature = "editorconfig")]
-use std::path::PathBuf;
use std::sync::atomic::{AtomicI32, AtomicU32, Ordering};
use std::sync::Arc;
use std::time::Instant;
use thiserror::Error;
use threadpool::ThreadPool;
-#[cfg(feature = "editorconfig")]
-use stylua_lib::editorconfig;
use stylua_lib::{format_code, Config, OutputVerification, Range};
use crate::config::find_ignore_file_path;
diff --git a/src/cli/main.rs b/src/cli/main.rs
--- a/src/cli/main.rs
+++ b/src/cli/main.rs
@@ -272,14 +268,11 @@ fn format(opt: opt::Opt) -> Result<i32> {
}
// Load the configuration
- let loaded = config::load_config(&opt)?;
- #[cfg(feature = "editorconfig")]
- let is_default_config = loaded.is_none();
- let config = loaded.unwrap_or_default();
+ let opt_for_config_resolver = opt.clone();
+ let mut config_resolver = config::ConfigResolver::new(&opt_for_config_resolver)?;
- // Handle any configuration overrides provided by options
- let config = config::load_overrides(config, &opt);
- debug!("config: {:#?}", config);
+ // TODO:
+ // debug!("config: {:#?}", config);
// Create range if provided
let range = if opt.range_start.is_some() || opt.range_end.is_some() {
diff --git a/src/cli/main.rs b/src/cli/main.rs
--- a/src/cli/main.rs
+++ b/src/cli/main.rs
@@ -425,40 +418,24 @@ fn format(opt: opt::Opt) -> Result<i32> {
None => false,
};
- pool.execute(move || {
- #[cfg(not(feature = "editorconfig"))]
- let used_config = Ok(config);
- #[cfg(feature = "editorconfig")]
- let used_config = match is_default_config && !&opt.no_editorconfig {
- true => {
- let path = match &opt.stdin_filepath {
- Some(filepath) => filepath.to_path_buf(),
- None => PathBuf::from("*.lua"),
- };
- editorconfig::parse(config, &path)
- .context("could not parse editorconfig")
- }
- false => Ok(config),
- };
+ let config = config_resolver.load_configuration_for_stdin()?;
+ pool.execute(move || {
let mut buf = String::new();
tx.send(
- used_config
- .and_then(|used_config| {
- stdin()
- .read_to_string(&mut buf)
- .map_err(|err| err.into())
- .and_then(|_| {
- format_string(
- buf,
- used_config,
- range,
- &opt,
- verify_output,
- should_skip_format,
- )
- .context("could not format from stdin")
- })
+ stdin()
+ .read_to_string(&mut buf)
+ .map_err(|err| err.into())
+ .and_then(|_| {
+ format_string(
+ buf,
+ config,
+ range,
+ &opt,
+ verify_output,
+ should_skip_format,
+ )
+ .context("could not format from stdin")
})
.map_err(|error| {
ErrorFileWrapper {
diff --git a/src/cli/main.rs b/src/cli/main.rs
--- a/src/cli/main.rs
+++ b/src/cli/main.rs
@@ -506,29 +483,20 @@ fn format(opt: opt::Opt) -> Result<i32> {
continue;
}
+ let config = config_resolver.load_configuration(&path)?;
+
let tx = tx.clone();
pool.execute(move || {
- #[cfg(not(feature = "editorconfig"))]
- let used_config = Ok(config);
- #[cfg(feature = "editorconfig")]
- let used_config = match is_default_config && !&opt.no_editorconfig {
- true => editorconfig::parse(config, &path)
- .context("could not parse editorconfig"),
- false => Ok(config),
- };
-
tx.send(
- used_config
- .and_then(|used_config| {
- format_file(&path, used_config, range, &opt, verify_output)
- })
- .map_err(|error| {
+ format_file(&path, config, range, &opt, verify_output).map_err(
+ |error| {
ErrorFileWrapper {
file: path.display().to_string(),
error,
}
.into()
- }),
+ },
+ ),
)
.unwrap()
});
diff --git a/src/cli/opt.rs b/src/cli/opt.rs
--- a/src/cli/opt.rs
+++ b/src/cli/opt.rs
@@ -9,7 +9,7 @@ lazy_static::lazy_static! {
static ref NUM_CPUS: String = num_cpus::get().to_string();
}
-#[derive(StructOpt, Debug)]
+#[derive(StructOpt, Clone, Debug)]
#[structopt(name = "stylua", about = "A utility to format Lua code", version)]
pub struct Opt {
/// Specify path to stylua.toml configuration file.
diff --git a/src/cli/opt.rs b/src/cli/opt.rs
--- a/src/cli/opt.rs
+++ b/src/cli/opt.rs
@@ -160,7 +160,7 @@ pub enum OutputFormat {
Summary,
}
-#[derive(StructOpt, Debug)]
+#[derive(StructOpt, Clone, Copy, Debug)]
pub struct FormatOpts {
/// The type of Lua syntax to parse
#[structopt(long, arg_enum, ignore_case = true)]
| diff --git a/CHANGELOG.md b/CHANGELOG.md
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -21,6 +21,9 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Update internal Lua parser version (full-moon) to v1.1.0. This includes parser performance improvements. ([#854](https://github.com/JohnnyMorganz/StyLua/issues/854))
- LuaJIT is now separated from Lua52, and is available in its own feature and syntax flag
+- `.stylua.toml` config resolution now supports looking up config files next to files being formatted, recursively going
+ upwards until reaching the current working directory, then stopping (unless `--search-parent-directories` was specified).
+ For example, for a file `./src/test.lua`, executing `stylua src/` will look for `./src/stylua.toml` and then `./stylua.toml`.
### Fixed
diff --git a/src/cli/main.rs b/src/cli/main.rs
--- a/src/cli/main.rs
+++ b/src/cli/main.rs
@@ -808,4 +776,181 @@ mod tests {
cwd.close().unwrap();
}
+
+ #[test]
+ fn test_stdin_filepath_respects_cwd_configuration_next_to_file() {
+ let cwd = construct_tree!({
+ "stylua.toml": "quote_style = 'AutoPreferSingle'",
+ });
+
+ let mut cmd = create_stylua();
+ cmd.current_dir(cwd.path())
+ .args(["--stdin-filepath", "foo.lua", "-"])
+ .write_stdin("local x = \"hello\"")
+ .assert()
+ .success()
+ .stdout("local x = 'hello'\n");
+
+ cwd.close().unwrap();
+ }
+
+ #[test]
+ fn test_stdin_filepath_respects_cwd_configuration_for_nested_file() {
+ let cwd = construct_tree!({
+ "stylua.toml": "quote_style = 'AutoPreferSingle'",
+ });
+
+ let mut cmd = create_stylua();
+ cmd.current_dir(cwd.path())
+ .args(["--stdin-filepath", "build/foo.lua", "-"])
+ .write_stdin("local x = \"hello\"")
+ .assert()
+ .success()
+ .stdout("local x = 'hello'\n");
+
+ cwd.close().unwrap();
+ }
+
+ #[test]
+ fn test_cwd_configuration_respected_for_file_in_cwd() {
+ let cwd = construct_tree!({
+ "stylua.toml": "quote_style = 'AutoPreferSingle'",
+ "foo.lua": "local x = \"hello\"",
+ });
+
+ let mut cmd = create_stylua();
+ cmd.current_dir(cwd.path())
+ .arg("foo.lua")
+ .assert()
+ .success();
+
+ cwd.child("foo.lua").assert("local x = 'hello'\n");
+
+ cwd.close().unwrap();
+ }
+
+ #[test]
+ fn test_cwd_configuration_respected_for_nested_file() {
+ let cwd = construct_tree!({
+ "stylua.toml": "quote_style = 'AutoPreferSingle'",
+ "build/foo.lua": "local x = \"hello\"",
+ });
+
+ let mut cmd = create_stylua();
+ cmd.current_dir(cwd.path())
+ .arg("build/foo.lua")
+ .assert()
+ .success();
+
+ cwd.child("build/foo.lua").assert("local x = 'hello'\n");
+
+ cwd.close().unwrap();
+ }
+
+ #[test]
+ fn test_configuration_is_not_used_outside_of_cwd() {
+ let cwd = construct_tree!({
+ "stylua.toml": "quote_style = 'AutoPreferSingle'",
+ "build/foo.lua": "local x = \"hello\"",
+ });
+
+ let mut cmd = create_stylua();
+ cmd.current_dir(cwd.child("build").path())
+ .arg("foo.lua")
+ .assert()
+ .success();
+
+ cwd.child("build/foo.lua").assert("local x = \"hello\"\n");
+
+ cwd.close().unwrap();
+ }
+
+ #[test]
+ fn test_configuration_used_outside_of_cwd_when_search_parent_directories_is_enabled() {
+ let cwd = construct_tree!({
+ "stylua.toml": "quote_style = 'AutoPreferSingle'",
+ "build/foo.lua": "local x = \"hello\"",
+ });
+
+ let mut cmd = create_stylua();
+ cmd.current_dir(cwd.child("build").path())
+ .args(["--search-parent-directories", "foo.lua"])
+ .assert()
+ .success();
+
+ cwd.child("build/foo.lua").assert("local x = 'hello'\n");
+
+ cwd.close().unwrap();
+ }
+
+ #[test]
+ fn test_configuration_is_searched_next_to_file() {
+ let cwd = construct_tree!({
+ "build/stylua.toml": "quote_style = 'AutoPreferSingle'",
+ "build/foo.lua": "local x = \"hello\"",
+ });
+
+ let mut cmd = create_stylua();
+ cmd.current_dir(cwd.path())
+ .arg("build/foo.lua")
+ .assert()
+ .success();
+
+ cwd.child("build/foo.lua").assert("local x = 'hello'\n");
+
+ cwd.close().unwrap();
+ }
+
+ #[test]
+ fn test_configuration_is_used_closest_to_the_file() {
+ let cwd = construct_tree!({
+ "stylua.toml": "quote_style = 'AutoPreferDouble'",
+ "build/stylua.toml": "quote_style = 'AutoPreferSingle'",
+ "build/foo.lua": "local x = \"hello\"",
+ });
+
+ let mut cmd = create_stylua();
+ cmd.current_dir(cwd.path())
+ .arg("build/foo.lua")
+ .assert()
+ .success();
+
+ cwd.child("build/foo.lua").assert("local x = 'hello'\n");
+
+ cwd.close().unwrap();
+ }
+
+ #[test]
+ fn test_respect_config_path_override() {
+ let cwd = construct_tree!({
+ "stylua.toml": "quote_style = 'AutoPreferDouble'",
+ "build/stylua.toml": "quote_style = 'AutoPreferSingle'",
+ "foo.lua": "local x = \"hello\"",
+ });
+
+ let mut cmd = create_stylua();
+ cmd.current_dir(cwd.path())
+ .args(["--config-path", "build/stylua.toml", "foo.lua"])
+ .assert()
+ .success();
+ }
+
+ #[test]
+ fn test_respect_config_path_override_for_stdin_filepath() {
+ let cwd = construct_tree!({
+ "stylua.toml": "quote_style = 'AutoPreferDouble'",
+ "build/stylua.toml": "quote_style = 'AutoPreferSingle'",
+ "foo.lua": "local x = \"hello\"",
+ });
+
+ let mut cmd = create_stylua();
+ cmd.current_dir(cwd.path())
+ .args(["--config-path", "build/stylua.toml", "-"])
+ .write_stdin("local x = \"hello\"")
+ .assert()
+ .success()
+ .stdout("local x = 'hello'\n");
+
+ cwd.close().unwrap();
+ }
}
| feat: Use closest `stylua.toml` config to file being formatted
It would be nice if the formatter could be configured to stop looking for config files once it reaches a Git root (e.g. `git rev-parse --show-toplevel`). This would be useful when working inside a submodule of a Lua repo: we should stop at the first `.stylelua` file, but somehow it does not stop and merges(?) with the config file in the main repo, which contains the submodule...
VSCode Extension ignoring stylua.toml
Began coding this morning and upon formatting my files, had all my spaces turn into tabs. Saw the VSCode extension got updated to 1.7.0 last night.
Reverting back to 1.6.3 resolves the issue, and formatting goes back to following the settings in my `stylua.toml`
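For context on the resolution behavior being requested, here is a minimal sketch of closest-config lookup, walking from the file's directory upwards until the current working directory. This is not StyLua's actual `ConfigResolver`; the function name and the in-memory "filesystem" slice are illustrative assumptions.

```rust
use std::path::{Path, PathBuf};

/// Pick the config a closest-config resolver would use for `file`: starting
/// at the file's directory, walk upwards until (and including) `cwd`, and
/// return the first `stylua.toml` found. `configs` stands in for the
/// filesystem so the sketch stays self-contained.
fn resolve_config(file: &str, cwd: &str, configs: &[&str]) -> Option<String> {
    let mut dir: PathBuf = Path::new(file).parent()?.to_path_buf();
    loop {
        let candidate = dir.join("stylua.toml");
        if configs.iter().any(|c| Path::new(c) == candidate) {
            return candidate.to_str().map(str::to_owned);
        }
        if dir == Path::new(cwd) {
            // Stop at the cwd (unless something like
            // --search-parent-directories is in effect).
            return None;
        }
        dir = dir.parent()?.to_path_buf();
    }
}

fn main() {
    let configs = ["proj/stylua.toml", "proj/build/stylua.toml"];
    // The config closest to the file wins over the one in the cwd.
    assert_eq!(
        resolve_config("proj/build/foo.lua", "proj", &configs).as_deref(),
        Some("proj/build/stylua.toml")
    );
    // With no config next to the file, fall back towards the cwd config.
    assert_eq!(
        resolve_config("proj/src/foo.lua", "proj", &configs).as_deref(),
        Some("proj/stylua.toml")
    );
    println!("closest-config resolution ok");
}
```

The tests added in the patch (`test_configuration_is_used_closest_to_the_file`, `test_configuration_is_not_used_outside_of_cwd`) exercise exactly these two decisions against the real resolver.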
| > we stop at first .stylelua file, somehow it does not stop and merges
This sounds incorrect to me; we don't merge two stylua files together. That sounds like the underlying issue here, rather than adding something else to stop at git roots.
Ah maybe you start the discovery of the style file from the current working directory, which IMO is wrong. Could that be the issue here?
Ah yes, that seems more like it. Indeed, we just use the config found at the cwd across all the code.
I have been thinking about this though - maybe we should be using the closest configuration to the file.
I do question why you are running stylua on a sub module though. I typically think sub modules are vendored code, hence should be ignored by style tools, but maybe that is not the case for you.
Ehm, hehe 😉 yes that's true, but the astronvim distribution works differently :). But you are right; I meant that I always executed the format with the cwd in the submodule, and since it has a config file, that one should be found. Strange, I'll check again in a few days.
Anyway, all format tools actually do this as far as I know (e.g. clang-format): they start from the file. Please check again; I'm now a bit unsure.
Yes, I think it is valid still. I will restructure your feature request to "Use closest configuration to file"
@JohnnyMorganz: This would improve tooling a lot, since it will pick up the closest `.stylelua.toml` file and make an astronvim user install much nicer! Looking forward to this. BR
What is the structure of your workspace? (i.e. the location of the file relative to your stylua.toml and the folder you have open in VSCode)
I have `folder` open
```
folder
- stylua.toml
- test.lua
- src
- client
- Tiles
- TileBoard.lua
```
I get space indentation in `test.lua`, but tab indentation in `src/client/Tiles/TileBoard.lua`
`stylua.toml`
```
column_width = 140
line_endings = "Windows"
indent_type = "Spaces"
indent_width = 4
quote_style = "AutoPreferDouble"
call_parentheses = "Always"
collapse_simple_statement = "Never"
```
Agh, thanks.
This is unfortunately me misunderstanding the way I originally implemented `stdin-filepath` and how it relates to config resolution.
I'm going to fix this properly in #831 which will be in the next release. But for now, you can either use the older version or configure "search parent directories" in the vscode settings | 2024-11-17T19:57:54 | 0.20 | 26047670e05ba310afe9c1c1f91be6749e1f3ac9 | [
"tests::test_stdin_filepath_respects_cwd_configuration_for_nested_file",
"tests::test_configuration_is_searched_next_to_file",
"tests::test_configuration_is_used_closest_to_the_file"
] | [
"editorconfig::tests::test_call_parentheses_no_single_string",
"editorconfig::tests::test_call_parentheses_always",
"editorconfig::tests::test_call_parentheses_no_single_table",
"editorconfig::tests::test_call_parentheses_none",
"editorconfig::tests::test_collapse_simple_statement_always",
"editorconfig::... | [] | [] | 59 |
JohnnyMorganz/StyLua | 852 | JohnnyMorganz__StyLua-852 | [
"845"
] | bc3ce881eaaee46e8eb851366d33cba808d2a1f7 | diff --git a/CHANGELOG.md b/CHANGELOG.md
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -23,6 +23,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- The CLI tool will now only write files if the contents differ, and not modify if no change ([#827](https://github.com/JohnnyMorganz/StyLua/issues/827))
- Fixed parentheses around a Luau compound type inside of a type table indexer being removed causing a syntax error ([#828](https://github.com/JohnnyMorganz/StyLua/issues/828))
- Fixed function body parentheses being placed on multiple lines unnecessarily when there are no parameters ([#830](https://github.com/JohnnyMorganz/StyLua/issues/830))
+- Fixed directory paths w/o globs in `.styluaignore` not matching when using `--respect-ignores` ([#845](https://github.com/JohnnyMorganz/StyLua/issues/845))
## [0.19.1] - 2023-11-15
diff --git a/src/cli/main.rs b/src/cli/main.rs
--- a/src/cli/main.rs
+++ b/src/cli/main.rs
@@ -262,7 +262,7 @@ fn path_is_stylua_ignored(path: &Path, search_parent_directories: bool) -> Resul
.context("failed to parse ignore file")?;
Ok(matches!(
- ignore.matched(path, false),
+ ignore.matched_path_or_any_parents(path, false),
ignore::Match::Ignore(_)
))
}
| diff --git a/src/cli/main.rs b/src/cli/main.rs
--- a/src/cli/main.rs
+++ b/src/cli/main.rs
@@ -769,4 +769,21 @@ mod tests {
cwd.close().unwrap();
}
+
+ #[test]
+ fn test_respect_ignores_directory_no_glob() {
+ // https://github.com/JohnnyMorganz/StyLua/issues/845
+ let cwd = construct_tree!({
+ ".styluaignore": "build/",
+ "build/foo.lua": "local x = 1",
+ });
+
+ let mut cmd = create_stylua();
+ cmd.current_dir(cwd.path())
+ .args(["--check", "--respect-ignores", "build/foo.lua"])
+ .assert()
+ .success();
+
+ cwd.close().unwrap();
+ }
}
| `--respect-ignores`: does not correctly ignore folders without the globbing pattern
Problem: `.styluaignore` does not ignore **folders** or **directories** like `.gitignore`, `.rgignore`, etc. do. Instead, full globbing patterns (`*.lua` or `**/*.lua`) would be needed to ignore all files under a directory.
EDIT: This seems to be the case only when used with `--respect-ignores <pattern>`.
## Example
(Tested with stylua 0.19.1, which already includes #765)
Let's say we have `src/foo.lua` and `build/foo.lua` in the project root (with `.styluaignore`).
### Current behavior
With the following `.styluaignore`:
```
build/
```
`$ stylua --check --respect-ignores build/foo.lua` would not ignore `build/foo.lua`.
### Expected behavior
`build/foo.lua` should be ignored.
### Current behavior (2)
With the following `.styluaignore`:
```
build/**/*.lua
```
`$ stylua --check --respect-ignores build/foo.lua` does ignore `build/foo.lua`. This behavior is consistent with `.gitignore`, etc.
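To illustrate why a plain `build/` line needs the behavior described above, here is a simplified sketch of matching a directory pattern against a path and every one of its parents. This is not the `ignore` crate's real implementation; the helper name is made up for illustration.

```rust
use std::path::Path;

/// A directory pattern like `build/` names the directory itself, so to
/// decide whether `build/foo.lua` is ignored we must test the path *and*
/// each of its ancestors against the pattern.
fn dir_pattern_matches(pattern: &str, path: &str) -> bool {
    let dir = pattern.trim_end_matches('/');
    Path::new(path)
        .ancestors()
        .any(|ancestor| ancestor == Path::new(dir))
}

fn main() {
    // Comparing only the full path misses files under the ignored directory.
    assert!(Path::new("build/foo.lua") != Path::new("build"));
    // Checking the parents catches them, matching .gitignore semantics.
    assert!(dir_pattern_matches("build/", "build/foo.lua"));
    assert!(dir_pattern_matches("build/", "build/nested/bar.lua"));
    assert!(!dir_pattern_matches("build/", "src/foo.lua"));
    println!("parent matching ok");
}
```

This is the conceptual difference between the crate's `matched` (path only) and `matched_path_or_any_parents` (path plus ancestors) that the one-line fix in the patch switches between.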
| We use the ignore crate to provide gitignore-style ignores: https://docs.rs/ignore/latest/ignore/. This may actually be an issue upstream, but I will verify it myself first
(Ignore my previous response - I did not read your issue properly!)
Yes, I find that ignoring a (sub)directory works when scanning, i.e., "stylua ." (unit tests: https://github.com/JohnnyMorganz/StyLua/blob/v0.19.1/src/cli/main.rs#L723), but it fails only for `--respect-ignores <individual filename>`. | 2024-01-20T20:13:59 | 0.19 | bc3ce881eaaee46e8eb851366d33cba808d2a1f7 | [
"tests::test_respect_ignores_directory_no_glob"
] | [
"editorconfig::tests::test_call_parentheses_none",
"editorconfig::tests::test_call_parentheses_always",
"editorconfig::tests::test_call_parentheses_no_single_table",
"editorconfig::tests::test_call_parentheses_no_single_string",
"editorconfig::tests::test_collapse_simple_statement_conditional_only",
"edit... | [] | [] | 60 |
gfx-rs/naga | 1,611 | gfx-rs__naga-1611 | [
"1545"
] | 33c1daeceef9268a9502df0720d69c9aa9b464da | diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -3470,9 +3470,20 @@ impl Parser {
lexer.expect(Token::Paren(')'))?;
let accept = self.parse_block(lexer, context.reborrow(), false)?;
+
let mut elsif_stack = Vec::new();
let mut elseif_span_start = lexer.current_byte_offset();
- while lexer.skip(Token::Word("elseif")) {
+ let mut reject = loop {
+ if !lexer.skip(Token::Word("else")) {
+ break crate::Block::new();
+ }
+
+ if !lexer.skip(Token::Word("if")) {
+ // ... else { ... }
+ break self.parse_block(lexer, context.reborrow(), false)?;
+ }
+
+ // ... else if (...) { ... }
let mut sub_emitter = super::Emitter::default();
lexer.expect(Token::Paren('('))?;
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -3491,12 +3502,8 @@ impl Parser {
other_block,
));
elseif_span_start = lexer.current_byte_offset();
- }
- let mut reject = if lexer.skip(Token::Word("else")) {
- self.parse_block(lexer, context.reborrow(), false)?
- } else {
- crate::Block::new()
};
+
let span_end = lexer.current_byte_offset();
// reverse-fold the else-if blocks
//Note: we may consider uplifting this to the IR
| diff --git a/src/front/wgsl/tests.rs b/src/front/wgsl/tests.rs
--- a/src/front/wgsl/tests.rs
+++ b/src/front/wgsl/tests.rs
@@ -215,7 +215,7 @@ fn parse_if() {
if (0 != 1) {}
if (false) {
return;
- } elseif (true) {
+ } else if (true) {
return;
} else {}
}
| The else if statement is not supported
Naga does not allow an if_statement inside an else_statement, but the [spec](https://www.w3.org/TR/WGSL/#if-statement) allows for it.
```
if (true) {} else if (false) {}
```
```
Parse Error: expected '{', found 'if'
```
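The fix in the patch replaces the single `elseif` keyword with a loop that accepts `else` and then peeks for `if`, so chains of any length parse. A toy token-level sketch of that loop (illustrative only; the real parser builds IR blocks and reverse-folds them rather than counting arms):

```rust
/// Advance past `want` or report what was expected.
fn eat(tokens: &[&str], pos: &mut usize, want: &str) -> Result<(), String> {
    if tokens.get(*pos).copied() == Some(want) {
        *pos += 1;
        Ok(())
    } else {
        Err(format!("expected '{}'", want))
    }
}

/// Count the arms of an `if` / `else if` / `else` chain over a simplified
/// token stream: `(cond)` and `{block}` stand in for parsed sub-trees.
fn count_if_arms(tokens: &[&str]) -> Result<usize, String> {
    let mut pos = 0;
    let mut arms = 0;
    eat(tokens, &mut pos, "if")?;
    eat(tokens, &mut pos, "(cond)")?;
    eat(tokens, &mut pos, "{block}")?;
    arms += 1;
    loop {
        if tokens.get(pos) != Some(&"else") {
            break; // no else: chain ends
        }
        pos += 1;
        if tokens.get(pos) == Some(&"if") {
            // `else if`: another conditional arm, keep looping.
            pos += 1;
            eat(tokens, &mut pos, "(cond)")?;
            eat(tokens, &mut pos, "{block}")?;
            arms += 1;
        } else {
            // plain `else`: final block, chain ends here.
            eat(tokens, &mut pos, "{block}")?;
            arms += 1;
            break;
        }
    }
    Ok(arms)
}

fn main() {
    let chain = [
        "if", "(cond)", "{block}", "else", "if", "(cond)", "{block}", "else", "{block}",
    ];
    assert_eq!(count_if_arms(&chain), Ok(3));
    assert_eq!(count_if_arms(&["if", "(cond)", "{block}"]), Ok(1));
    println!("else-if chains ok");
}
```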
| 2021-12-17T23:11:03 | 0.7 | 33c1daeceef9268a9502df0720d69c9aa9b464da | [
"front::wgsl::tests::parse_if"
] | [
"arena::tests::fetch_or_append_non_unique",
"arena::tests::append_unique",
"arena::tests::append_non_unique",
"back::spv::layout::test_physical_layout_in_words",
"back::msl::test_error_size",
"arena::tests::fetch_or_append_unique",
"back::spv::writer::test_write_physical_layout",
"back::spv::layout::t... | [
"back::msl::writer::test_stack_size"
] | [] | 61 | |
gfx-rs/naga | 1,139 | gfx-rs__naga-1139 | [
"1138"
] | e97c8f944121a58bbc908ecb64bdf97d4ada039d | diff --git a/src/proc/mod.rs b/src/proc/mod.rs
--- a/src/proc/mod.rs
+++ b/src/proc/mod.rs
@@ -101,11 +101,15 @@ impl super::TypeInner {
kind: _,
width,
} => (size as u8 * width) as u32,
+ // matrices are treated as arrays of aligned columns
Self::Matrix {
columns,
rows,
width,
- } => (columns as u8 * rows as u8 * width) as u32,
+ } => {
+ let aligned_rows = if rows > crate::VectorSize::Bi { 4 } else { 2 };
+ columns as u32 * aligned_rows * width as u32
+ }
Self::Pointer { .. } | Self::ValuePointer { .. } => POINTER_SPAN,
Self::Array {
base: _,
| diff --git a/src/proc/mod.rs b/src/proc/mod.rs
--- a/src/proc/mod.rs
+++ b/src/proc/mod.rs
@@ -318,3 +322,17 @@ impl super::SwizzleComponent {
}
}
}
+
+#[test]
+fn test_matrix_size() {
+ let constants = crate::Arena::new();
+ assert_eq!(
+ crate::TypeInner::Matrix {
+ columns: crate::VectorSize::Tri,
+ rows: crate::VectorSize::Tri,
+ width: 4
+ }
+ .span(&constants),
+ 48
+ );
+}
| [wgsl-in] Size of nx3 matrices is not to spec, resulting in overlapping fields
This struct declaration:
```
struct Foo {
m: mat3x3<f32>;
f: f32;
};
```
Produces this spir-v:
```
; SPIR-V
; Version: 1.0
; Generator: Khronos; 28
; Bound: 7
; Schema: 0
OpCapability Shader
OpCapability Linkage
%1 = OpExtInstImport "GLSL.std.450"
OpMemoryModel Logical GLSL450
OpSource GLSL 450
OpName %Foo "Foo"
OpMemberName %Foo 0 "m"
OpMemberName %Foo 1 "f"
OpMemberDecorate %Foo 0 Offset 0
OpMemberDecorate %Foo 0 ColMajor
OpMemberDecorate %Foo 0 MatrixStride 16
OpMemberDecorate %Foo 1 Offset 36
%void = OpTypeVoid
%float = OpTypeFloat 32
%v3float = OpTypeVector %float 3
%mat3v3float = OpTypeMatrix %v3float 3
%Foo = OpTypeStruct %mat3v3float %float
```
`f` has been given offset 36 in the struct, but [per the spec](https://gpuweb.github.io/gpuweb/wgsl/#alignment-and-size) the size of `m` should be 48. Since the stride of `m` has correctly been set to 16, it does actually have a size of 48 and so `f` overlaps with `m`.
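The corrected span computation follows the WGSL rule that a matrix is laid out as an array of aligned column vectors: a 3-component column is padded to 16 bytes. A small sketch of that calculation, mirroring the patched formula (the function name is illustrative):

```rust
/// Byte size of a matCxR<T> under WGSL layout rules: C columns, each padded
/// to a vec4 when it has 3 or 4 rows, or to a vec2 when it has 2 rows.
/// `width` is the scalar width in bytes (4 for f32).
fn matrix_span(columns: u32, rows: u32, width: u32) -> u32 {
    let aligned_rows = if rows > 2 { 4 } else { 2 };
    columns * aligned_rows * width
}

fn main() {
    // mat3x3<f32>: the following struct member must start at offset 48,
    // not 36 as in the reported SPIR-V output.
    assert_eq!(matrix_span(3, 3, 4), 48);
    // mat4x4<f32> and mat2x2<f32> were already correct under the old
    // columns * rows * width formula.
    assert_eq!(matrix_span(4, 4, 4), 64);
    assert_eq!(matrix_span(2, 2, 4), 16);
    // mat2x3<f32>: two vec3 columns, each strided to 16 bytes.
    assert_eq!(matrix_span(2, 3, 4), 32);
    println!("matrix spans ok");
}
```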
| 2021-07-26T22:37:50 | 0.5 | c39810233274b0973fe0fcbfedb3ced8c9f685b6 | [
"proc::test_matrix_size"
] | [
"arena::tests::append_non_unique",
"arena::tests::append_unique",
"arena::tests::fetch_or_append_unique",
"back::msl::test_error_size",
"arena::tests::fetch_or_append_non_unique",
"back::spv::layout::test_logical_layout_in_words",
"back::spv::layout::test_physical_layout_in_words",
"front::glsl::const... | [
"back::msl::writer::test_stack_size"
] | [] | 62 | |
gfx-rs/naga | 1,006 | gfx-rs__naga-1006 | [
"1003"
] | 3a4d6fa29584228e206ecb1d0eeac6dd565cdb1c | diff --git a/src/front/glsl/ast.rs b/src/front/glsl/ast.rs
--- a/src/front/glsl/ast.rs
+++ b/src/front/glsl/ast.rs
@@ -955,6 +955,7 @@ pub enum TypeQualifier {
WorkGroupSize(usize, u32),
Sampling(Sampling),
Layout(StructLayout),
+ Precision(Precision),
EarlyFragmentTests,
}
diff --git a/src/front/glsl/ast.rs b/src/front/glsl/ast.rs
--- a/src/front/glsl/ast.rs
+++ b/src/front/glsl/ast.rs
@@ -984,6 +985,14 @@ pub enum StructLayout {
Std430,
}
+// TODO: Encode precision hints in the IR
+#[derive(Debug, Clone, PartialEq, Copy)]
+pub enum Precision {
+ Low,
+ Medium,
+ High,
+}
+
#[derive(Debug, Clone, PartialEq, Copy)]
pub enum ParameterQualifier {
In,
diff --git a/src/front/glsl/lex.rs b/src/front/glsl/lex.rs
--- a/src/front/glsl/lex.rs
+++ b/src/front/glsl/lex.rs
@@ -1,4 +1,5 @@
use super::{
+ ast::Precision,
token::{SourceMetadata, Token, TokenValue},
types::parse_type,
};
diff --git a/src/front/glsl/lex.rs b/src/front/glsl/lex.rs
--- a/src/front/glsl/lex.rs
+++ b/src/front/glsl/lex.rs
@@ -58,6 +59,7 @@ impl<'a> Iterator for Lexer<'a> {
PPTokenValue::Float(float) => TokenValue::FloatConstant(float),
PPTokenValue::Ident(ident) => {
match ident.as_str() {
+ // Qualifiers
"layout" => TokenValue::Layout,
"in" => TokenValue::In,
"out" => TokenValue::Out,
diff --git a/src/front/glsl/lex.rs b/src/front/glsl/lex.rs
--- a/src/front/glsl/lex.rs
+++ b/src/front/glsl/lex.rs
@@ -70,6 +72,10 @@ impl<'a> Iterator for Lexer<'a> {
"sample" => TokenValue::Sampling(crate::Sampling::Sample),
"const" => TokenValue::Const,
"inout" => TokenValue::InOut,
+ "precision" => TokenValue::Precision,
+ "highp" => TokenValue::PrecisionQualifier(Precision::High),
+ "mediump" => TokenValue::PrecisionQualifier(Precision::Medium),
+ "lowp" => TokenValue::PrecisionQualifier(Precision::Low),
// values
"true" => TokenValue::BoolConstant(true),
"false" => TokenValue::BoolConstant(false),
diff --git a/src/front/glsl/parser.rs b/src/front/glsl/parser.rs
--- a/src/front/glsl/parser.rs
+++ b/src/front/glsl/parser.rs
@@ -10,9 +10,11 @@ use super::{
Program,
};
use crate::{
- arena::Handle, front::glsl::error::ExpectedToken, Arena, ArraySize, BinaryOperator, Block,
- Constant, ConstantInner, Expression, Function, FunctionResult, ResourceBinding, ScalarValue,
- Statement, StorageClass, StructMember, SwitchCase, Type, TypeInner, UnaryOperator,
+ arena::Handle,
+ front::glsl::{ast::Precision, error::ExpectedToken},
+ Arena, ArraySize, BinaryOperator, Block, Constant, ConstantInner, Expression, Function,
+ FunctionResult, ResourceBinding, ScalarKind, ScalarValue, Statement, StorageClass,
+ StructMember, SwitchCase, Type, TypeInner, UnaryOperator,
};
use core::convert::TryFrom;
use std::{iter::Peekable, mem};
diff --git a/src/front/glsl/parser.rs b/src/front/glsl/parser.rs
--- a/src/front/glsl/parser.rs
+++ b/src/front/glsl/parser.rs
@@ -206,6 +208,7 @@ impl<'source, 'program, 'options> Parser<'source, 'program, 'options> {
self.lexer.peek().map_or(false, |t| match t.value {
TokenValue::Interpolation(_)
| TokenValue::Sampling(_)
+ | TokenValue::PrecisionQualifier(_)
| TokenValue::Const
| TokenValue::In
| TokenValue::Out
diff --git a/src/front/glsl/parser.rs b/src/front/glsl/parser.rs
--- a/src/front/glsl/parser.rs
+++ b/src/front/glsl/parser.rs
@@ -241,7 +244,7 @@ impl<'source, 'program, 'options> Parser<'source, 'program, 'options> {
StorageQualifier::StorageClass(StorageClass::Storage),
),
TokenValue::Sampling(s) => TypeQualifier::Sampling(s),
-
+ TokenValue::PrecisionQualifier(p) => TypeQualifier::Precision(p),
_ => unreachable!(),
},
token.meta,
diff --git a/src/front/glsl/parser.rs b/src/front/glsl/parser.rs
--- a/src/front/glsl/parser.rs
+++ b/src/front/glsl/parser.rs
@@ -829,7 +832,51 @@ impl<'source, 'program, 'options> Parser<'source, 'program, 'options> {
}
}
} else {
- Ok(false)
+ match self.lexer.peek().map(|t| &t.value) {
+ Some(&TokenValue::Precision) => {
+ // PRECISION precision_qualifier type_specifier SEMICOLON
+ self.bump()?;
+
+ let token = self.bump()?;
+ let _ = match token.value {
+ TokenValue::PrecisionQualifier(p) => p,
+ _ => {
+ return Err(ErrorKind::InvalidToken(
+ token,
+ vec![
+ TokenValue::PrecisionQualifier(Precision::High).into(),
+ TokenValue::PrecisionQualifier(Precision::Medium).into(),
+ TokenValue::PrecisionQualifier(Precision::Low).into(),
+ ],
+ ))
+ }
+ };
+
+ let (ty, meta) = self.parse_type_non_void()?;
+
+ match self.program.module.types[ty].inner {
+ TypeInner::Scalar {
+ kind: ScalarKind::Float,
+ ..
+ }
+ | TypeInner::Scalar {
+ kind: ScalarKind::Sint,
+ ..
+ } => {}
+ _ => {
+ return Err(ErrorKind::SemanticError(
+ meta,
+ "Precision statement can only work on floats and ints".into(),
+ ))
+ }
+ }
+
+ self.expect(TokenValue::Semicolon)?;
+
+ Ok(true)
+ }
+ _ => Ok(false),
+ }
}
}
diff --git a/src/front/glsl/token.rs b/src/front/glsl/token.rs
--- a/src/front/glsl/token.rs
+++ b/src/front/glsl/token.rs
@@ -1,5 +1,6 @@
pub use pp_rs::token::{Float, Integer, PreprocessorError};
+use super::ast::Precision;
use crate::{Interpolation, Sampling, Type};
use std::{fmt, ops::Range};
diff --git a/src/front/glsl/token.rs b/src/front/glsl/token.rs
--- a/src/front/glsl/token.rs
+++ b/src/front/glsl/token.rs
@@ -57,6 +58,8 @@ pub enum TokenValue {
Const,
Interpolation(Interpolation),
Sampling(Sampling),
+ Precision,
+ PrecisionQualifier(Precision),
Continue,
Break,
diff --git a/src/front/glsl/variables.rs b/src/front/glsl/variables.rs
--- a/src/front/glsl/variables.rs
+++ b/src/front/glsl/variables.rs
@@ -8,6 +8,16 @@ use super::ast::*;
use super::error::ErrorKind;
use super::token::SourceMetadata;
+macro_rules! qualifier_arm {
+ ($src:expr, $tgt:expr, $meta:expr, $msg:literal $(,)?) => {{
+ if $tgt.is_some() {
+ return Err(ErrorKind::SemanticError($meta, $msg.into()));
+ }
+
+ $tgt = Some($src);
+ }};
+}
+
pub struct VarDeclaration<'a> {
pub qualifiers: &'a [(TypeQualifier, SourceMetadata)],
pub ty: Handle<Type>,
diff --git a/src/front/glsl/variables.rs b/src/front/glsl/variables.rs
--- a/src/front/glsl/variables.rs
+++ b/src/front/glsl/variables.rs
@@ -94,7 +104,7 @@ impl Program<'_> {
),
"gl_VertexIndex" => add_builtin(
TypeInner::Scalar {
- kind: ScalarKind::Sint,
+ kind: ScalarKind::Uint,
width: 4,
},
BuiltIn::VertexIndex,
diff --git a/src/front/glsl/variables.rs b/src/front/glsl/variables.rs
--- a/src/front/glsl/variables.rs
+++ b/src/front/glsl/variables.rs
@@ -103,7 +113,7 @@ impl Program<'_> {
),
"gl_InstanceIndex" => add_builtin(
TypeInner::Scalar {
- kind: ScalarKind::Sint,
+ kind: ScalarKind::Uint,
width: 4,
},
BuiltIn::InstanceIndex,
diff --git a/src/front/glsl/variables.rs b/src/front/glsl/variables.rs
--- a/src/front/glsl/variables.rs
+++ b/src/front/glsl/variables.rs
@@ -123,7 +133,7 @@ impl Program<'_> {
"gl_FrontFacing" => add_builtin(
TypeInner::Scalar {
kind: ScalarKind::Bool,
- width: 1,
+ width: crate::BOOL_WIDTH,
},
BuiltIn::FrontFacing,
false,
diff --git a/src/front/glsl/variables.rs b/src/front/glsl/variables.rs
--- a/src/front/glsl/variables.rs
+++ b/src/front/glsl/variables.rs
@@ -292,6 +302,7 @@ impl Program<'_> {
let mut location = None;
let mut sampling = None;
let mut layout = None;
+ let mut precision = None;
for &(ref qualifier, meta) in qualifiers {
match *qualifier {
diff --git a/src/front/glsl/variables.rs b/src/front/glsl/variables.rs
--- a/src/front/glsl/variables.rs
+++ b/src/front/glsl/variables.rs
@@ -310,57 +321,42 @@ impl Program<'_> {
storage = s;
}
- TypeQualifier::Interpolation(i) => {
- if interpolation.is_some() {
- return Err(ErrorKind::SemanticError(
- meta,
- "Cannot use more than one interpolation qualifier per declaration"
- .into(),
- ));
- }
-
- interpolation = Some(i);
- }
- TypeQualifier::ResourceBinding(ref r) => {
- if binding.is_some() {
- return Err(ErrorKind::SemanticError(
- meta,
- "Cannot use more than one binding per declaration".into(),
- ));
- }
-
- binding = Some(r.clone());
- }
- TypeQualifier::Location(l) => {
- if location.is_some() {
- return Err(ErrorKind::SemanticError(
- meta,
- "Cannot use more than one binding per declaration".into(),
- ));
- }
-
- location = Some(l);
- }
- TypeQualifier::Sampling(s) => {
- if sampling.is_some() {
- return Err(ErrorKind::SemanticError(
- meta,
- "Cannot use more than one sampling qualifier per declaration".into(),
- ));
- }
-
- sampling = Some(s);
- }
- TypeQualifier::Layout(ref l) => {
- if layout.is_some() {
- return Err(ErrorKind::SemanticError(
- meta,
- "Cannot use more than one layout qualifier per declaration".into(),
- ));
- }
-
- layout = Some(l);
- }
+ TypeQualifier::Interpolation(i) => qualifier_arm!(
+ i,
+ interpolation,
+ meta,
+ "Cannot use more than one interpolation qualifier per declaration"
+ ),
+ TypeQualifier::ResourceBinding(ref r) => qualifier_arm!(
+ r.clone(),
+ binding,
+ meta,
+ "Cannot use more than one binding per declaration"
+ ),
+ TypeQualifier::Location(l) => qualifier_arm!(
+ l,
+ location,
+ meta,
+ "Cannot use more than one binding per declaration"
+ ),
+ TypeQualifier::Sampling(s) => qualifier_arm!(
+ s,
+ sampling,
+ meta,
+ "Cannot use more than one sampling qualifier per declaration"
+ ),
+ TypeQualifier::Layout(ref l) => qualifier_arm!(
+ l,
+ layout,
+ meta,
+ "Cannot use more than one layout qualifier per declaration"
+ ),
+ TypeQualifier::Precision(ref p) => qualifier_arm!(
+ p,
+ precision,
+ meta,
+ "Cannot use more than one precision qualifier per declaration"
+ ),
_ => {
return Err(ErrorKind::SemanticError(
meta,
| diff --git a/src/front/glsl/parser_tests.rs b/src/front/glsl/parser_tests.rs
--- a/src/front/glsl/parser_tests.rs
+++ b/src/front/glsl/parser_tests.rs
@@ -246,6 +246,15 @@ fn declarations() {
&entry_points,
)
.unwrap();
+
+ let _program = parse_program(
+ r#"
+ #version 450
+ precision highp float;
+ "#,
+ &entry_points,
+ )
+ .unwrap();
}
#[test]
| [glsl-in] precision statement is not parsed
As a statement, `precision highp float;` shouldn't cause an error.
| 2021-06-22T04:33:55 | 0.5 | c39810233274b0973fe0fcbfedb3ced8c9f685b6 | [
"front::glsl::parser_tests::declarations"
] | [
"arena::tests::append_non_unique",
"arena::tests::append_unique",
"arena::tests::fetch_or_append_non_unique",
"arena::tests::fetch_or_append_unique",
"back::msl::test_error_size",
"back::spv::layout::test_physical_layout_in_words",
"back::spv::writer::test_write_physical_layout",
"back::spv::layout::t... | [
"back::msl::writer::test_stack_size"
] | [] | 63 | |
gfx-rs/naga | 992 | gfx-rs__naga-992 | [
"976"
] | c16f2097add7705f2439d45fa4334efbf495ff14 | diff --git a/Makefile b/Makefile
--- a/Makefile
+++ b/Makefile
@@ -11,7 +11,7 @@ all:
clean:
rm *.metal *.air *.metallib *.vert *.frag *.comp *.spv
-%.metal: $(SNAPSHOTS_IN)/%.wgsl $(wildcard src/*.rs src/**/*.rs examples/*.rs)
+%.metal: $(SNAPSHOTS_BASE_IN)/%.wgsl $(wildcard src/*.rs src/**/*.rs examples/*.rs)
cargo run --features wgsl-in,msl-out -- $< $@
%.air: %.metal
diff --git a/Makefile b/Makefile
--- a/Makefile
+++ b/Makefile
@@ -20,53 +20,53 @@ clean:
%.metallib: %.air
xcrun -sdk macosx metallib $< -o $@
-%.dot: $(SNAPSHOTS_IN)/%.wgsl $(wildcard src/*.rs src/front/wgsl/*.rs src/back/dot/*.rs bin/naga.rs)
+%.dot: $(SNAPSHOTS_BASE_IN)/%.wgsl $(wildcard src/*.rs src/front/wgsl/*.rs src/back/dot/*.rs bin/naga.rs)
cargo run --features wgsl-in,dot-out -- $< $@
%.png: %.dot
dot -Tpng $< -o $@
-validate-spv: $(SNAPSHOTS_OUT)/*.spvasm
+validate-spv: $(SNAPSHOTS_BASE_OUT)/spv/*.spvasm
@set -e && for file in $^ ; do \
- echo "Validating" $${file#"$(SNAPSHOTS_OUT)/"}; \
+ echo "Validating" $${file#"$(SNAPSHOTS_BASE_OUT)/"}; \
cat $${file} | spirv-as --target-env vulkan1.0 -o - | spirv-val; \
done
-validate-msl: $(SNAPSHOTS_OUT)/*.msl
+validate-msl: $(SNAPSHOTS_BASE_OUT)/msl/*.msl
@set -e && for file in $^ ; do \
- echo "Validating" $${file#"$(SNAPSHOTS_OUT)/"}; \
+ echo "Validating" $${file#"$(SNAPSHOTS_BASE_OUT)/"}; \
cat $${file} | xcrun -sdk macosx metal -mmacosx-version-min=10.11 -x metal - -o /dev/null; \
done
-validate-glsl: $(SNAPSHOTS_OUT)/*.glsl
- @set -e && for file in $(SNAPSHOTS_OUT)/*.Vertex.glsl ; do \
- echo "Validating" $${file#"$(SNAPSHOTS_OUT)/"};\
+validate-glsl: $(SNAPSHOTS_BASE_OUT)/glsl/*.glsl
+ @set -e && for file in $(SNAPSHOTS_BASE_OUT)/glsl/*.Vertex.glsl ; do \
+ echo "Validating" $${file#"$(SNAPSHOTS_BASE_OUT)/"};\
cat $${file} | glslangValidator --stdin -S vert; \
done
- @set -e && for file in $(SNAPSHOTS_OUT)/*.Fragment.glsl ; do \
- echo "Validating" $${file#"$(SNAPSHOTS_OUT)/"};\
+ @set -e && for file in $(SNAPSHOTS_BASE_OUT)/glsl/*.Fragment.glsl ; do \
+ echo "Validating" $${file#"$(SNAPSHOTS_BASE_OUT)/"};\
cat $${file} | glslangValidator --stdin -S frag; \
done
- @set -e && for file in $(SNAPSHOTS_OUT)/*.Compute.glsl ; do \
- echo "Validating" $${file#"$(SNAPSHOTS_OUT)/"};\
+ @set -e && for file in $(SNAPSHOTS_BASE_OUT)/glsl/*.Compute.glsl ; do \
+ echo "Validating" $${file#"$(SNAPSHOTS_BASE_OUT)/"};\
cat $${file} | glslangValidator --stdin -S comp; \
done
-validate-dot: $(SNAPSHOTS_OUT)/*.dot
+validate-dot: $(SNAPSHOTS_BASE_OUT)/dot/*.dot
@set -e && for file in $^ ; do \
- echo "Validating" $${file#"$(SNAPSHOTS_OUT)/"}; \
+ echo "Validating" $${file#"$(SNAPSHOTS_BASE_OUT)/"}; \
cat $${file} | dot -o /dev/null; \
done
-validate-wgsl: $(SNAPSHOTS_OUT)/*.wgsl
+validate-wgsl: $(SNAPSHOTS_BASE_OUT)/wgsl/*.wgsl
@set -e && for file in $^ ; do \
- echo "Validating" $${file#"$(SNAPSHOTS_OUT)/"}; \
+ echo "Validating" $${file#"$(SNAPSHOTS_BASE_OUT)/"}; \
cargo run $${file}; \
done
-validate-hlsl: $(SNAPSHOTS_OUT)/*.hlsl
+validate-hlsl: $(SNAPSHOTS_BASE_OUT)/hlsl/*.hlsl
@set -e && for file in $^ ; do \
- echo "Validating" $${file#"$(SNAPSHOTS_OUT)/"}; \
+ echo "Validating" $${file#"$(SNAPSHOTS_BASE_OUT)/"}; \
config="$$(dirname $${file})/$$(basename $${file}).config"; \
vertex=""\
fragment="" \
| diff --git a/Makefile b/Makefile
--- a/Makefile
+++ b/Makefile
@@ -1,7 +1,7 @@
.PHONY: all clean validate-spv validate-msl validate-glsl validate-dot validate-wgsl validate-hlsl
.SECONDARY: boids.metal quad.metal
-SNAPSHOTS_IN=tests/in
-SNAPSHOTS_OUT=tests/out
+SNAPSHOTS_BASE_IN=tests/in
+SNAPSHOTS_BASE_OUT=tests/out
all:
cargo fmt
diff --git a/tests/cases/glsl_constant_expression.vert /dev/null
--- a/tests/cases/glsl_constant_expression.vert
+++ /dev/null
@@ -1,9 +0,0 @@
-#version 450 core
-
-const int N = 5;
-
-float foo[2+N];
-
-void main() {
- gl_Position = vec4(1);
-}
\ No newline at end of file
diff --git a/tests/cases/glsl_if_preprocessor.vert /dev/null
--- a/tests/cases/glsl_if_preprocessor.vert
+++ /dev/null
@@ -1,14 +0,0 @@
-#version 460 core
-
-#define TEST 3
-#define TEST_EXPR 2 && 2
-
-#if TEST_EXPR - 2 == 0
-#error 0
-#elif TEST_EXPR - 2 == 1
-#error 1
-#elif TEST_EXPR - 2 == 2
-#error 2
-#else
-#error You shouldn't do that
-#endif
diff --git a/tests/cases/glsl_phong_lighting.frag /dev/null
--- a/tests/cases/glsl_phong_lighting.frag
+++ /dev/null
@@ -1,56 +0,0 @@
-#version 450 core
-
-layout(location=0) in vec3 v_position;
-layout(location=1) in vec3 v_color;
-layout(location=2) in vec3 v_normal;
-
-layout(location=0) out vec4 f_color;
-
-layout(set=0, binding=0)
-uniform Globals {
- mat4 u_view_proj;
- vec3 u_view_position;
-};
-
-layout(set = 1, binding = 0) uniform Light {
- vec3 u_position;
- vec3 u_color;
-};
-
-layout(set=2, binding=0)
-uniform Locals {
- mat4 u_transform;
- vec2 u_min_max;
-};
-
-layout (set = 2, binding = 1) uniform texture2D t_color;
-layout (set = 2, binding = 2) uniform sampler s_color;
-
-float invLerp(float from, float to, float value){
- return (value - from) / (to - from);
-}
-
-void main() {
- vec3 object_color =
- texture(sampler2D(t_color,s_color), vec2(invLerp(u_min_max.x,u_min_max.y,length(v_position)),0.0)).xyz;
-
- float ambient_strength = 0.1;
- vec3 ambient_color = u_color * ambient_strength;
-
- vec3 normal = normalize(v_normal);
- vec3 light_dir = normalize(u_position - v_position);
-
- float diffuse_strength = max(dot(normal, light_dir), 0.0);
- vec3 diffuse_color = u_color * diffuse_strength;
-
- vec3 view_dir = normalize(u_view_position - v_position);
- vec3 half_dir = normalize(view_dir + light_dir);
-
- float specular_strength = pow(max(dot(normal, half_dir), 0.0), 32);
-
- vec3 specular_color = specular_strength * u_color;
-
- vec3 result = (ambient_color + diffuse_color + specular_color) * object_color;
-
- f_color = vec4(result, 1.0);
-}
diff --git a/tests/cases/glsl_preprocessor_abuse.vert /dev/null
--- a/tests/cases/glsl_preprocessor_abuse.vert
+++ /dev/null
@@ -1,11 +0,0 @@
-#version 450 core
-
-#define MAIN void main() {
-#define V_POSITION layout(location=0) in vec4 a_position;
-#define ASSIGN_POSITION gl_Position = a_position;
-
-V_POSITION
-
-MAIN
- ASSIGN_POSITION
-}
\ No newline at end of file
diff --git a/tests/cases/glsl_vertex_test_shader.vert /dev/null
--- a/tests/cases/glsl_vertex_test_shader.vert
+++ /dev/null
@@ -1,29 +0,0 @@
-#version 450 core
-
-layout(location=0) in vec3 a_position;
-layout(location=1) in vec3 a_color;
-layout(location=2) in vec3 a_normal;
-
-layout(location=0) out vec3 v_position;
-layout(location=1) out vec3 v_color;
-layout(location=2) out vec3 v_normal;
-
-layout(set=0, binding=0)
-uniform Globals {
- mat4 u_view_proj;
- vec3 u_view_position;
-};
-
-layout(set=2, binding=0)
-uniform Locals {
- mat4 u_transform;
- vec2 U_min_max;
-};
-
-void main() {
- v_color = a_color;
- v_normal = a_normal;
-
- v_position = (u_transform * vec4(a_position, 1.0)).xyz;
- gl_Position = u_view_proj * u_transform * vec4(a_position, 1.0);
-}
diff --git /dev/null b/tests/in/glsl/multiple_entry_points.glsl
new file mode 100644
--- /dev/null
+++ b/tests/in/glsl/multiple_entry_points.glsl
@@ -0,0 +1,19 @@
+#version 450
+// vertex
+void vert_main() {
+ gl_Position = vec4(1.0, 1.0, 1.0, 1.0);
+}
+
+// fragment
+layout(location = 0) out vec4 o_color;
+void frag_main() {
+ o_color = vec4(1.0, 1.0, 1.0, 1.0);
+}
+
+//compute
+// layout(local_size_x = 64, local_size_y = 1, local_size_z = 1) in;
+void comp_main() {
+ if (gl_GlobalInvocationID.x > 1) {
+ return;
+ }
+}
diff --git /dev/null b/tests/in/glsl/multiple_entry_points.params.ron
new file mode 100644
--- /dev/null
+++ b/tests/in/glsl/multiple_entry_points.params.ron
@@ -0,0 +1,6 @@
+(
+ spv_version: (1, 0),
+ glsl_vert_ep_name: Some("vert_main"),
+ glsl_frag_ep_name: Some("frag_main"),
+ glsl_comp_ep_name: Some("comp_main"),
+)
diff --git a/tests/in/quad-glsl.glsl b/tests/in/glsl/quad_glsl.frag
--- a/tests/in/quad-glsl.glsl
+++ b/tests/in/glsl/quad_glsl.frag
@@ -1,17 +1,4 @@
#version 450
-// vertex
-const float c_scale = 1.2;
-
-layout(location = 0) in vec2 a_pos;
-layout(location = 1) in vec2 a_uv;
-layout(location = 0) out vec2 v_uv;
-
-void vert_main() {
- v_uv = a_uv;
- gl_Position = vec4(c_scale * a_pos, 0.0, 1.0);
-}
-
-// fragment
layout(location = 0) in vec2 v_uv;
#ifdef TEXTURE
layout(set = 0, binding = 0) uniform texture2D u_texture;
diff --git a/tests/in/quad-glsl.glsl b/tests/in/glsl/quad_glsl.frag
--- a/tests/in/quad-glsl.glsl
+++ b/tests/in/glsl/quad_glsl.frag
@@ -19,7 +6,7 @@ layout(set = 0, binding = 1) uniform sampler u_sampler;
#endif
layout(location = 0) out vec4 o_color;
-void frag_main() {
+void main() {
#ifdef TEXTURE
o_color = texture(sampler2D(u_texture, u_sampler), v_uv);
#else
diff --git /dev/null b/tests/in/glsl/quad_glsl.vert
new file mode 100644
--- /dev/null
+++ b/tests/in/glsl/quad_glsl.vert
@@ -0,0 +1,11 @@
+#version 450
+const float c_scale = 1.2;
+
+layout(location = 0) in vec2 a_pos;
+layout(location = 1) in vec2 a_uv;
+layout(location = 0) out vec2 v_uv;
+
+void main() {
+ v_uv = a_uv;
+ gl_Position = vec4(c_scale * a_pos, 0.0, 1.0);
+}
diff --git a/tests/in/pointer-access.spvasm /dev/null
--- a/tests/in/pointer-access.spvasm
+++ /dev/null
@@ -1,57 +0,0 @@
-;;; Indexing into composite values passed by pointer.
-
-OpCapability Shader
-OpCapability Linkage
-OpExtension "SPV_KHR_storage_buffer_storage_class"
-OpMemoryModel Logical GLSL450
-OpName %dpa_arg_array "dpa_arg_array"
-OpName %dpa_arg_index "dpa_arg_index"
-OpName %dpra_arg_struct "dpra_arg_struct"
-OpName %dpra_arg_index "dpra_arg_index"
-OpDecorate %run_array ArrayStride 4
-OpDecorate %run_struct Block
-OpDecorate %dummy_var DescriptorSet 0
-OpDecorate %dummy_var Binding 0
-OpMemberDecorate %run_struct 0 Offset 0
-
-%uint = OpTypeInt 32 0
-%uint_ptr = OpTypePointer StorageBuffer %uint
-%const_0 = OpConstant %uint 0
-%const_7 = OpConstant %uint 7
-
-%array = OpTypeArray %uint %const_7
-%array_ptr = OpTypePointer StorageBuffer %array
-%dpa_type = OpTypeFunction %uint %array_ptr %uint
-
-%run_array = OpTypeRuntimeArray %uint
-%run_struct = OpTypeStruct %run_array
-%run_struct_ptr = OpTypePointer StorageBuffer %run_struct
-%dpra_type = OpTypeFunction %uint %run_struct_ptr %uint
-
-;;; This makes Naga request SPV_KHR_storage_buffer_storage_class in the output,
-;;; too.
-%dummy_var = OpVariable %run_struct_ptr StorageBuffer
-
-;;; Take a pointer to an array of unsigned integers, and an index,
-;;; and return the given element of the array.
-%dpa = OpFunction %uint None %dpa_type
-%dpa_arg_array = OpFunctionParameter %array_ptr
-%dpa_arg_index = OpFunctionParameter %uint
-
- %dpa_label = OpLabel
- %elt_ptr = OpAccessChain %uint_ptr %dpa_arg_array %dpa_arg_index
- %elt_value = OpLoad %uint %elt_ptr
- OpReturnValue %elt_value
- OpFunctionEnd
-
-;;; Take a pointer to a struct containing a run-time array, and an index, and
-;;; return the given element of the array.
-%dpra = OpFunction %uint None %dpra_type
-%dpra_arg_struct = OpFunctionParameter %run_struct_ptr
-%dpra_arg_index = OpFunctionParameter %uint
-
- %dpra_label = OpLabel
- %elt_ptr2 = OpAccessChain %uint_ptr %dpra_arg_struct %const_0 %dpra_arg_index
- %elt_value2 = OpLoad %uint %elt_ptr2
- OpReturnValue %elt_value2
- OpFunctionEnd
diff --git a/tests/in/quad-glsl.param.ron /dev/null
--- a/tests/in/quad-glsl.param.ron
+++ /dev/null
@@ -1,5 +0,0 @@
-(
- spv_version: (1, 0),
- spv_debug: true,
- spv_adjust_coordinate_space: true,
-)
diff --git a/tests/out/operators.Vertex.glsl /dev/null
--- a/tests/out/operators.Vertex.glsl
+++ /dev/null
@@ -1,10 +0,0 @@
-#version 310 es
-
-precision highp float;
-
-
-void main() {
- gl_Position = ((((vec2(1.0) + vec2(2.0)) - vec2(3.0)) / vec2(4.0)).xyxy + vec4((ivec4(5) % ivec4(2))));
- return;
-}
-
diff --git /dev/null b/tests/out/wgsl/multiple_entry_points-glsl.wgsl
new file mode 100644
--- /dev/null
+++ b/tests/out/wgsl/multiple_entry_points-glsl.wgsl
@@ -0,0 +1,53 @@
+struct VertexOutput {
+ [[builtin(position)]] member: vec4<f32>;
+};
+
+struct FragmentOutput {
+ [[location(0), interpolate(perspective)]] o_color: vec4<f32>;
+};
+
+var<private> gl_Position: vec4<f32>;
+var<private> o_color: vec4<f32>;
+var<private> gl_GlobalInvocationID: vec3<u32>;
+
+fn vert_main1() {
+ gl_Position = vec4<f32>(1.0, 1.0, 1.0, 1.0);
+ return;
+}
+
+fn frag_main1() {
+ o_color = vec4<f32>(1.0, 1.0, 1.0, 1.0);
+ return;
+}
+
+fn comp_main1() {
+ let _e3: vec3<u32> = gl_GlobalInvocationID;
+ if ((_e3.x > u32(1))) {
+ {
+ return;
+ }
+ } else {
+ return;
+ }
+}
+
+[[stage(vertex)]]
+fn vert_main() -> VertexOutput {
+ vert_main1();
+ let _e1: vec4<f32> = gl_Position;
+ return VertexOutput(_e1);
+}
+
+[[stage(fragment)]]
+fn frag_main() -> FragmentOutput {
+ frag_main1();
+ let _e1: vec4<f32> = o_color;
+ return FragmentOutput(_e1);
+}
+
+[[stage(compute), workgroup_size(1, 1, 1)]]
+fn comp_main([[builtin(global_invocation_id)]] param: vec3<u32>) {
+ gl_GlobalInvocationID = param;
+ comp_main1();
+ return;
+}
diff --git /dev/null b/tests/out/wgsl/quad_glsl-frag.wgsl
new file mode 100644
--- /dev/null
+++ b/tests/out/wgsl/quad_glsl-frag.wgsl
@@ -0,0 +1,18 @@
+struct FragmentOutput {
+ [[location(0), interpolate(perspective)]] o_color: vec4<f32>;
+};
+
+var<private> v_uv: vec2<f32>;
+var<private> o_color: vec4<f32>;
+
+fn main1() {
+ o_color = vec4<f32>(1.0, 1.0, 1.0, 1.0);
+ return;
+}
+
+[[stage(fragment)]]
+fn main() -> FragmentOutput {
+ main1();
+ let _e1: vec4<f32> = o_color;
+ return FragmentOutput(_e1);
+}
diff --git a/tests/out/quad-glsl.wgsl b/tests/out/wgsl/quad_glsl-vert.wgsl
--- a/tests/out/quad-glsl.wgsl
+++ b/tests/out/wgsl/quad_glsl-vert.wgsl
@@ -3,18 +3,12 @@ struct VertexOutput {
[[builtin(position)]] member: vec4<f32>;
};
-struct FragmentOutput {
- [[location(0), interpolate(perspective)]] o_color: vec4<f32>;
-};
-
var<private> a_pos1: vec2<f32>;
var<private> a_uv1: vec2<f32>;
var<private> v_uv: vec2<f32>;
var<private> gl_Position: vec4<f32>;
-var<private> v_uv1: vec2<f32>;
-var<private> o_color: vec4<f32>;
-fn vert_main1() {
+fn main1() {
let _e4: vec2<f32> = a_uv1;
v_uv = _e4;
let _e6: vec2<f32> = a_pos1;
diff --git a/tests/out/quad-glsl.wgsl b/tests/out/wgsl/quad_glsl-vert.wgsl
--- a/tests/out/quad-glsl.wgsl
+++ b/tests/out/wgsl/quad_glsl-vert.wgsl
@@ -22,24 +16,12 @@ fn vert_main1() {
return;
}
-fn frag_main1() {
- o_color = vec4<f32>(1.0, 1.0, 1.0, 1.0);
- return;
-}
-
[[stage(vertex)]]
-fn vert_main([[location(0), interpolate(perspective)]] a_pos: vec2<f32>, [[location(1), interpolate(perspective)]] a_uv: vec2<f32>) -> VertexOutput {
+fn main([[location(0), interpolate(perspective)]] a_pos: vec2<f32>, [[location(1), interpolate(perspective)]] a_uv: vec2<f32>) -> VertexOutput {
a_pos1 = a_pos;
a_uv1 = a_uv;
- vert_main1();
+ main1();
let _e5: vec2<f32> = v_uv;
let _e7: vec4<f32> = gl_Position;
return VertexOutput(_e5, _e7);
}
-
-[[stage(fragment)]]
-fn frag_main() -> FragmentOutput {
- frag_main1();
- let _e1: vec4<f32> = o_color;
- return FragmentOutput(_e1);
-}
diff --git a/tests/snapshots.rs b/tests/snapshots.rs
--- a/tests/snapshots.rs
+++ b/tests/snapshots.rs
@@ -3,8 +3,8 @@
use std::{fs, path::PathBuf};
-const DIR_IN: &str = "tests/in";
-const DIR_OUT: &str = "tests/out";
+const BASE_DIR_IN: &str = "tests/in";
+const BASE_DIR_OUT: &str = "tests/out";
bitflags::bitflags! {
struct Targets: u32 {
diff --git a/tests/snapshots.rs b/tests/snapshots.rs
--- a/tests/snapshots.rs
+++ b/tests/snapshots.rs
@@ -52,12 +52,21 @@ struct Parameters {
#[cfg_attr(not(feature = "glsl-out"), allow(dead_code))]
#[serde(default)]
glsl_desktop_version: Option<u16>,
+ #[cfg_attr(not(feature = "glsl-out"), allow(dead_code))]
+ #[serde(default)]
+ glsl_vert_ep_name: Option<String>,
+ #[cfg_attr(not(feature = "glsl-out"), allow(dead_code))]
+ #[serde(default)]
+ glsl_frag_ep_name: Option<String>,
+ #[cfg_attr(not(feature = "glsl-out"), allow(dead_code))]
+ #[serde(default)]
+ glsl_comp_ep_name: Option<String>,
}
#[allow(dead_code, unused_variables)]
fn check_targets(module: &naga::Module, name: &str, targets: Targets) {
let root = env!("CARGO_MANIFEST_DIR");
- let params = match fs::read_to_string(format!("{}/{}/{}.param.ron", root, DIR_IN, name)) {
+ let params = match fs::read_to_string(format!("{}/{}/{}.param.ron", root, BASE_DIR_IN, name)) {
Ok(string) => ron::de::from_str(&string).expect("Couldn't find param file"),
Err(_) => Parameters::default(),
};
diff --git a/tests/snapshots.rs b/tests/snapshots.rs
--- a/tests/snapshots.rs
+++ b/tests/snapshots.rs
@@ -73,39 +82,39 @@ fn check_targets(module: &naga::Module, name: &str, targets: Targets) {
.validate(module)
.unwrap();
- let dest = PathBuf::from(root).join(DIR_OUT).join(name);
+ let dest = PathBuf::from(root).join(BASE_DIR_OUT);
#[cfg(feature = "serialize")]
{
if targets.contains(Targets::IR) {
let config = ron::ser::PrettyConfig::default().with_new_line("\n".to_string());
let string = ron::ser::to_string_pretty(module, config).unwrap();
- fs::write(dest.with_extension("ron"), string).unwrap();
+ fs::write(dest.join(format!("ir/{}.ron", name)), string).unwrap();
}
if targets.contains(Targets::ANALYSIS) {
let config = ron::ser::PrettyConfig::default().with_new_line("\n".to_string());
let string = ron::ser::to_string_pretty(&info, config).unwrap();
- fs::write(dest.with_extension("info.ron"), string).unwrap();
+ fs::write(dest.join(format!("analysis/{}.info.ron", name)), string).unwrap();
}
}
#[cfg(feature = "spv-out")]
{
if targets.contains(Targets::SPIRV) {
- check_output_spv(module, &info, &dest, ¶ms);
+ write_output_spv(module, &info, &dest, name, ¶ms);
}
}
#[cfg(feature = "msl-out")]
{
if targets.contains(Targets::METAL) {
- check_output_msl(module, &info, &dest, ¶ms);
+ write_output_msl(module, &info, &dest, name, ¶ms);
}
}
#[cfg(feature = "glsl-out")]
{
if targets.contains(Targets::GLSL) {
for ep in module.entry_points.iter() {
- check_output_glsl(module, &info, &dest, ep.stage, &ep.name, ¶ms);
+ write_output_glsl(module, &info, &dest, name, ep.stage, &ep.name, ¶ms);
}
}
}
diff --git a/tests/snapshots.rs b/tests/snapshots.rs
--- a/tests/snapshots.rs
+++ b/tests/snapshots.rs
@@ -113,28 +122,29 @@ fn check_targets(module: &naga::Module, name: &str, targets: Targets) {
{
if targets.contains(Targets::DOT) {
let string = naga::back::dot::write(module, Some(&info)).unwrap();
- fs::write(dest.with_extension("dot"), string).unwrap();
+ fs::write(dest.join(format!("dot/{}.dot", name)), string).unwrap();
}
}
#[cfg(feature = "hlsl-out")]
{
if targets.contains(Targets::HLSL) {
- check_output_hlsl(module, &info, &dest);
+ write_output_hlsl(module, &info, &dest, name);
}
}
#[cfg(feature = "wgsl-out")]
{
if targets.contains(Targets::WGSL) {
- check_output_wgsl(module, &info, &dest);
+ write_output_wgsl(module, &info, &dest, name);
}
}
}
#[cfg(feature = "spv-out")]
-fn check_output_spv(
+fn write_output_spv(
module: &naga::Module,
info: &naga::valid::ModuleInfo,
destination: &PathBuf,
+ file_name: &str,
params: &Parameters,
) {
use naga::back::spv;
diff --git a/tests/snapshots.rs b/tests/snapshots.rs
--- a/tests/snapshots.rs
+++ b/tests/snapshots.rs
@@ -171,14 +181,15 @@ fn check_output_spv(
.expect("Produced invalid SPIR-V")
.disassemble();
- fs::write(destination.with_extension("spvasm"), dis).unwrap();
+ fs::write(destination.join(format!("spv/{}.spvasm", file_name)), dis).unwrap();
}
#[cfg(feature = "msl-out")]
-fn check_output_msl(
+fn write_output_msl(
module: &naga::Module,
info: &naga::valid::ModuleInfo,
destination: &PathBuf,
+ file_name: &str,
params: &Parameters,
) {
use naga::back::msl;
diff --git a/tests/snapshots.rs b/tests/snapshots.rs
--- a/tests/snapshots.rs
+++ b/tests/snapshots.rs
@@ -207,14 +218,15 @@ fn check_output_msl(
}
}
- fs::write(destination.with_extension("msl"), string).unwrap();
+ fs::write(destination.join(format!("msl/{}.msl", file_name)), string).unwrap();
}
#[cfg(feature = "glsl-out")]
-fn check_output_glsl(
+fn write_output_glsl(
module: &naga::Module,
info: &naga::valid::ModuleInfo,
destination: &PathBuf,
+ file_name: &str,
stage: naga::ShaderStage,
ep_name: &str,
params: &Parameters,
diff --git a/tests/snapshots.rs b/tests/snapshots.rs
--- a/tests/snapshots.rs
+++ b/tests/snapshots.rs
@@ -234,17 +246,25 @@ fn check_output_glsl(
let mut writer = glsl::Writer::new(&mut buffer, module, info, &options).unwrap();
writer.write().unwrap();
- let ext = format!("{:?}.glsl", stage);
- fs::write(destination.with_extension(&ext), buffer).unwrap();
+ fs::write(
+ destination.join(format!("glsl/{}.{:?}.glsl", file_name, stage)),
+ buffer,
+ )
+ .unwrap();
}
#[cfg(feature = "hlsl-out")]
-fn check_output_hlsl(module: &naga::Module, info: &naga::valid::ModuleInfo, destination: &PathBuf) {
+fn write_output_hlsl(
+ module: &naga::Module,
+ info: &naga::valid::ModuleInfo,
+ destination: &PathBuf,
+ file_name: &str,
+) {
use naga::back::hlsl;
let options = hlsl::Options::default();
let string = hlsl::write_string(module, info, &options).unwrap();
- fs::write(destination.with_extension("hlsl"), string).unwrap();
+ fs::write(destination.join(format!("hlsl/{}.hlsl", file_name)), string).unwrap();
// We need a config file for validation script
// This file contains an info about profiles (shader stages) contains inside generated shader
diff --git a/tests/snapshots.rs b/tests/snapshots.rs
--- a/tests/snapshots.rs
+++ b/tests/snapshots.rs
@@ -263,16 +283,25 @@ fn check_output_hlsl(module: &naga::Module, info: &naga::valid::ModuleInfo, dest
config_str, stage_str, profile, stage_str, ep_name
);
}
- fs::write(destination.with_extension("hlsl.config"), config_str).unwrap();
+ fs::write(
+ destination.join(format!("hlsl/{}.hlsl.config", file_name)),
+ config_str,
+ )
+ .unwrap();
}
#[cfg(feature = "wgsl-out")]
-fn check_output_wgsl(module: &naga::Module, info: &naga::valid::ModuleInfo, destination: &PathBuf) {
+fn write_output_wgsl(
+ module: &naga::Module,
+ info: &naga::valid::ModuleInfo,
+ destination: &PathBuf,
+ file_name: &str,
+) {
use naga::back::wgsl;
let string = wgsl::write_string(module, info).unwrap();
- fs::write(destination.with_extension("wgsl"), string).unwrap();
+ fs::write(destination.join(format!("wgsl/{}.wgsl", file_name)), string).unwrap();
}
#[cfg(feature = "wgsl-in")]
diff --git a/tests/snapshots.rs b/tests/snapshots.rs
--- a/tests/snapshots.rs
+++ b/tests/snapshots.rs
@@ -343,7 +372,8 @@ fn convert_wgsl() {
for &(name, targets) in inputs.iter() {
println!("Processing '{}'", name);
- let file = fs::read_to_string(format!("{}/{}/{}.wgsl", root, DIR_IN, name))
+ // WGSL shaders lives in root dir as a privileged.
+ let file = fs::read_to_string(format!("{}/{}/{}.wgsl", root, BASE_DIR_IN, name))
.expect("Couldn't find wgsl file");
match naga::front::wgsl::parse_str(&file) {
Ok(module) => check_targets(&module, name, targets),
diff --git a/tests/snapshots.rs b/tests/snapshots.rs
--- a/tests/snapshots.rs
+++ b/tests/snapshots.rs
@@ -356,7 +386,8 @@ fn convert_wgsl() {
fn convert_spv(name: &str, adjust_coordinate_space: bool, targets: Targets) {
let root = env!("CARGO_MANIFEST_DIR");
let module = naga::front::spv::parse_u8_slice(
- &fs::read(format!("{}/{}/{}.spv", root, DIR_IN, name)).expect("Couldn't find spv file"),
+ &fs::read(format!("{}/{}/spv/{}.spv", root, BASE_DIR_IN, name))
+ .expect("Couldn't find spv file"),
&naga::front::spv::Options {
adjust_coordinate_space,
strict_capabilities: false,
diff --git a/tests/snapshots.rs b/tests/snapshots.rs
--- a/tests/snapshots.rs
+++ b/tests/snapshots.rs
@@ -395,47 +426,55 @@ fn convert_spv_pointer_access() {
convert_spv("pointer-access", true, Targets::SPIRV);
}
-#[cfg(feature = "glsl-in")]
-fn convert_glsl(
- name: &str,
- entry_points: naga::FastHashMap<String, naga::ShaderStage>,
- targets: Targets,
-) {
- let root = env!("CARGO_MANIFEST_DIR");
- let module = naga::front::glsl::parse_str(
- &fs::read_to_string(format!("{}/{}/{}.glsl", root, DIR_IN, name))
- .expect("Couldn't find glsl file"),
- &naga::front::glsl::Options {
- entry_points,
- defines: Default::default(),
- },
- )
- .unwrap();
- println!("{:#?}", module);
-
- check_targets(&module, name, targets);
-}
-
#[cfg(feature = "glsl-in")]
#[allow(unused_variables)]
#[test]
fn convert_glsl_folder() {
let root = env!("CARGO_MANIFEST_DIR");
- for entry in std::fs::read_dir(format!("{}/{}/glsl", root, DIR_IN)).unwrap() {
+ for entry in std::fs::read_dir(format!("{}/{}/glsl", root, BASE_DIR_IN)).unwrap() {
let entry = entry.unwrap();
let file_name = entry.file_name().into_string().unwrap();
-
+ if file_name.ends_with(".ron") {
+ // No needed to validate ron files
+ continue;
+ }
+ let params_path = format!(
+ "{}/{}/glsl/{}.params.ron",
+ root,
+ BASE_DIR_IN,
+ PathBuf::from(&file_name).with_extension("").display()
+ );
+ let is_params_used = PathBuf::from(¶ms_path).exists();
println!("Processing {}", file_name);
let mut entry_points = naga::FastHashMap::default();
- let stage = match entry.path().extension().and_then(|s| s.to_str()).unwrap() {
- "vert" => naga::ShaderStage::Vertex,
- "frag" => naga::ShaderStage::Fragment,
- "comp" => naga::ShaderStage::Compute,
- ext => panic!("Unknown extension for glsl file {}", ext),
- };
- entry_points.insert("main".to_string(), stage);
+ if is_params_used {
+ let params: Parameters = match fs::read_to_string(¶ms_path) {
+ Ok(string) => ron::de::from_str(&string).expect("Couldn't find param file"),
+ Err(_) => panic!("Can't parse glsl params ron file: {:?}", ¶ms_path),
+ };
+
+ if let Some(vert) = params.glsl_vert_ep_name {
+ entry_points.insert(vert, naga::ShaderStage::Vertex);
+ };
+
+ if let Some(frag) = params.glsl_frag_ep_name {
+ entry_points.insert(frag, naga::ShaderStage::Fragment);
+ };
+
+ if let Some(comp) = params.glsl_comp_ep_name {
+ entry_points.insert(comp, naga::ShaderStage::Compute);
+ };
+ } else {
+ let stage = match entry.path().extension().and_then(|s| s.to_str()).unwrap() {
+ "vert" => naga::ShaderStage::Vertex,
+ "frag" => naga::ShaderStage::Fragment,
+ "comp" => naga::ShaderStage::Compute,
+ ext => panic!("Unknown extension for glsl file {}", ext),
+ };
+ entry_points.insert("main".to_string(), stage);
+ }
let module = naga::front::glsl::parse_str(
&fs::read_to_string(entry.path()).expect("Couldn't find glsl file"),
diff --git a/tests/snapshots.rs b/tests/snapshots.rs
--- a/tests/snapshots.rs
+++ b/tests/snapshots.rs
@@ -453,20 +492,10 @@ fn convert_glsl_folder() {
.validate(&module)
.unwrap();
- let dest = PathBuf::from(root)
- .join(DIR_OUT)
- .join(&file_name.replace(".", "-"));
-
#[cfg(feature = "wgsl-out")]
- check_output_wgsl(&module, &info, &dest);
+ {
+ let dest = PathBuf::from(root).join(BASE_DIR_OUT);
+ write_output_wgsl(&module, &info, &dest, &file_name.replace(".", "-"));
+ }
}
}
-
-#[cfg(feature = "glsl-in")]
-#[test]
-fn convert_glsl_quad() {
- let mut entry_points = naga::FastHashMap::default();
- entry_points.insert("vert_main".to_string(), naga::ShaderStage::Vertex);
- entry_points.insert("frag_main".to_string(), naga::ShaderStage::Fragment);
- convert_glsl("quad-glsl", entry_points, Targets::WGSL);
-}
| Reorganize test snapshot folder
Since the number of tests is increasing, I suggest moving test snapshots into subfolders for each front/back.
As is:
```
./test/in
./test/out
```
To be:
```
./test/in/wgsl
./test/in/spv
...
./test/out/wgsl
./test/out/glsl
...
```
| Thank you! I think some reorg will be useful.
I would prefer to make one slight tweak to your suggestion though. Following gfx-rs/wgpu#4327, it's natural to treat WGSL as a privileged input/output. So can we have it in the root?
```
test/in/xxx.wgsl
test/in/spv/yyy.spv
test/out/yyy.wgsl
test/out/glsl/xxx.glsl
```
@kvark What should we do with `*.param.ron` files? Leave them at the root directory with the `wgsl` shaders?
Let's keep them in the same place as the files they describe. So the SPV inputs will have their param ron in tests/in/spv/ | 2021-06-19T06:51:26 | 0.5 | c39810233274b0973fe0fcbfedb3ced8c9f685b6 | [
"convert_spv_pointer_access",
"convert_spv_quad_vert",
"convert_spv_shadow",
"convert_wgsl"
] | [
"arena::tests::fetch_or_append_non_unique",
"arena::tests::append_unique",
"arena::tests::append_non_unique",
"arena::tests::fetch_or_append_unique",
"back::msl::test_error_size",
"back::spv::layout::test_physical_layout_in_words",
"back::spv::layout::test_logical_layout_in_words",
"back::spv::writer:... | [
"back::msl::writer::test_stack_size"
] | [
"convert_glsl_folder"
] | 64 |
gfx-rs/naga | 933 | gfx-rs__naga-933 | [
"931"
] | 61bfb29963fba9a7e8807bdd7ca1c5e5a8b76fec | diff --git a/src/front/glsl/ast.rs b/src/front/glsl/ast.rs
--- a/src/front/glsl/ast.rs
+++ b/src/front/glsl/ast.rs
@@ -12,11 +12,17 @@ use crate::{
};
#[derive(Debug, Clone, Copy)]
-pub enum GlobalLookup {
+pub enum GlobalLookupKind {
Variable(Handle<GlobalVariable>),
BlockSelect(Handle<GlobalVariable>, u32),
}
+#[derive(Debug, Clone, Copy)]
+pub struct GlobalLookup {
+ pub kind: GlobalLookupKind,
+ pub entry_arg: Option<usize>,
+}
+
#[derive(Debug, PartialEq, Eq, Hash)]
pub struct FunctionSignature {
pub name: String,
diff --git a/src/front/glsl/ast.rs b/src/front/glsl/ast.rs
--- a/src/front/glsl/ast.rs
+++ b/src/front/glsl/ast.rs
@@ -33,6 +39,13 @@ pub struct FunctionDeclaration {
pub void: bool,
}
+bitflags::bitflags! {
+ pub struct EntryArgUse: u32 {
+ const READ = 0x1;
+ const WRITE = 0x2;
+ }
+}
+
#[derive(Debug)]
pub struct Program<'a> {
pub version: u16,
diff --git a/src/front/glsl/ast.rs b/src/front/glsl/ast.rs
--- a/src/front/glsl/ast.rs
+++ b/src/front/glsl/ast.rs
@@ -45,8 +58,10 @@ pub struct Program<'a> {
pub global_variables: Vec<(String, GlobalLookup)>,
pub constants: Vec<(String, Handle<Constant>)>,
- pub entry_args: Vec<(Binding, bool, Handle<GlobalVariable>)>,
+ pub entry_args: Vec<(Binding, Handle<GlobalVariable>)>,
pub entries: Vec<(String, ShaderStage, Handle<Function>)>,
+ // TODO: More efficient representation
+ pub function_arg_use: Vec<Vec<EntryArgUse>>,
pub module: Module,
}
diff --git a/src/front/glsl/ast.rs b/src/front/glsl/ast.rs
--- a/src/front/glsl/ast.rs
+++ b/src/front/glsl/ast.rs
@@ -65,6 +80,7 @@ impl<'a> Program<'a> {
entry_args: Vec::new(),
entries: Vec::new(),
+ function_arg_use: Vec::new(),
module: Module::default(),
}
diff --git a/src/front/glsl/ast.rs b/src/front/glsl/ast.rs
--- a/src/front/glsl/ast.rs
+++ b/src/front/glsl/ast.rs
@@ -150,6 +166,7 @@ pub struct Context<'function> {
expressions: &'function mut Arena<Expression>,
pub locals: &'function mut Arena<LocalVariable>,
pub arguments: &'function mut Vec<FunctionArgument>,
+ pub arg_use: Vec<EntryArgUse>,
//TODO: Find less allocation heavy representation
pub scopes: Vec<FastHashMap<String, VariableReference>>,
diff --git a/src/front/glsl/ast.rs b/src/front/glsl/ast.rs
--- a/src/front/glsl/ast.rs
+++ b/src/front/glsl/ast.rs
@@ -173,6 +190,7 @@ impl<'function> Context<'function> {
expressions,
locals,
arguments,
+ arg_use: vec![EntryArgUse::empty(); program.entry_args.len()],
scopes: vec![FastHashMap::default()],
lookup_global_var_exps: FastHashMap::with_capacity_and_hasher(
diff --git a/src/front/glsl/ast.rs b/src/front/glsl/ast.rs
--- a/src/front/glsl/ast.rs
+++ b/src/front/glsl/ast.rs
@@ -186,36 +204,44 @@ impl<'function> Context<'function> {
emitter: Emitter::default(),
};
- this.emit_start();
-
for &(ref name, handle) in program.constants.iter() {
let expr = this.expressions.append(Expression::Constant(handle));
let var = VariableReference {
expr,
load: None,
mutable: false,
+ entry_arg: None,
};
this.lookup_global_var_exps.insert(name.into(), var);
}
+ this.emit_start();
+
for &(ref name, lookup) in program.global_variables.iter() {
this.emit_flush(body);
- let (expr, load) = match lookup {
- GlobalLookup::Variable(v) => (
- Expression::GlobalVariable(v),
- program.module.global_variables[v].class != StorageClass::Handle,
- ),
- GlobalLookup::BlockSelect(handle, index) => {
+ let GlobalLookup { kind, entry_arg } = lookup;
+ let (expr, load) = match kind {
+ GlobalLookupKind::Variable(v) => {
+ let res = (
+ this.expressions.append(Expression::GlobalVariable(v)),
+ program.module.global_variables[v].class != StorageClass::Handle,
+ );
+ this.emit_start();
+
+ res
+ }
+ GlobalLookupKind::BlockSelect(handle, index) => {
let base = this.expressions.append(Expression::GlobalVariable(handle));
+ this.emit_start();
+ let expr = this
+ .expressions
+ .append(Expression::AccessIndex { base, index });
- (Expression::AccessIndex { base, index }, true)
+ (expr, true)
}
};
- let expr = this.expressions.append(expr);
- this.emit_start();
-
let var = VariableReference {
expr,
load: if load {
diff --git a/src/front/glsl/ast.rs b/src/front/glsl/ast.rs
--- a/src/front/glsl/ast.rs
+++ b/src/front/glsl/ast.rs
@@ -225,6 +251,7 @@ impl<'function> Context<'function> {
},
// TODO: respect constant qualifier
mutable: true,
+ entry_arg,
};
this.lookup_global_var_exps.insert(name.into(), var);
diff --git a/src/front/glsl/ast.rs b/src/front/glsl/ast.rs
--- a/src/front/glsl/ast.rs
+++ b/src/front/glsl/ast.rs
@@ -285,6 +312,7 @@ impl<'function> Context<'function> {
expr,
load: Some(load),
mutable,
+ entry_arg: None,
},
);
}
diff --git a/src/front/glsl/ast.rs b/src/front/glsl/ast.rs
--- a/src/front/glsl/ast.rs
+++ b/src/front/glsl/ast.rs
@@ -336,6 +364,7 @@ impl<'function> Context<'function> {
expr,
load,
mutable,
+ entry_arg: None,
},
);
}
diff --git a/src/front/glsl/ast.rs b/src/front/glsl/ast.rs
--- a/src/front/glsl/ast.rs
+++ b/src/front/glsl/ast.rs
@@ -455,8 +484,16 @@ impl<'function> Context<'function> {
));
}
+ if let Some(idx) = var.entry_arg {
+ self.arg_use[idx] |= EntryArgUse::WRITE
+ }
+
var.expr
} else {
+ if let Some(idx) = var.entry_arg {
+ self.arg_use[idx] |= EntryArgUse::READ
+ }
+
var.load.unwrap_or(var.expr)
}
}
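The hunk above is where each entry-argument access gets classified: when the variable is used as an assignment target its slot ORs in a `WRITE` bit, and when it is read its slot ORs in a `READ` bit, indexed by the `entry_arg` stored on the variable reference. A minimal standalone sketch of that bitflag bookkeeping (plain-Rust stand-ins; naga's real `EntryArgUse` is generated by the `bitflags` crate):

```rust
// Illustrative stand-in for naga's `EntryArgUse` bitflags (READ/WRITE).
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub struct EntryArgUse(u8);

impl EntryArgUse {
    pub const READ: Self = EntryArgUse(0b01);
    pub const WRITE: Self = EntryArgUse(0b10);
    pub fn empty() -> Self {
        EntryArgUse(0)
    }
    pub fn contains(self, other: Self) -> bool {
        self.0 & other.0 == other.0
    }
}

impl std::ops::BitOrAssign for EntryArgUse {
    fn bitor_assign(&mut self, rhs: Self) {
        self.0 |= rhs.0;
    }
}

fn main() {
    // One use slot per entry argument, mirroring `Context::arg_use`.
    let mut arg_use = vec![EntryArgUse::empty(); 2];

    // The variable backing entry argument 0 appears as an lvalue.
    arg_use[0] |= EntryArgUse::WRITE;
    // The variable backing entry argument 1 is loaded in an expression.
    arg_use[1] |= EntryArgUse::READ;

    assert!(arg_use[0].contains(EntryArgUse::WRITE));
    assert!(!arg_use[0].contains(EntryArgUse::READ));
    assert!(arg_use[1].contains(EntryArgUse::READ));
}
```

Later, `add_entry_points` only materializes an input or output for arguments whose accumulated flags actually contain `READ` or `WRITE` respectively.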
diff --git a/src/front/glsl/ast.rs b/src/front/glsl/ast.rs
--- a/src/front/glsl/ast.rs
+++ b/src/front/glsl/ast.rs
@@ -617,6 +654,7 @@ pub struct VariableReference {
pub expr: Handle<Expression>,
pub load: Option<Handle<Expression>>,
pub mutable: bool,
+ pub entry_arg: Option<usize>,
}
#[derive(Debug, Clone)]
diff --git a/src/front/glsl/functions.rs b/src/front/glsl/functions.rs
--- a/src/front/glsl/functions.rs
+++ b/src/front/glsl/functions.rs
@@ -1,8 +1,7 @@
use crate::{
- proc::ensure_block_returns, Arena, BinaryOperator, Binding, Block, BuiltIn, EntryPoint,
- Expression, Function, FunctionArgument, FunctionResult, Handle, MathFunction,
- RelationalFunction, SampleLevel, ScalarKind, ShaderStage, Statement, StructMember,
- SwizzleComponent, Type, TypeInner,
+ proc::ensure_block_returns, Arena, BinaryOperator, Block, EntryPoint, Expression, Function,
+ FunctionArgument, FunctionResult, Handle, MathFunction, RelationalFunction, SampleLevel,
+ ScalarKind, Statement, StructMember, SwizzleComponent, Type, TypeInner,
};
use super::{ast::*, error::ErrorKind, SourceMetadata};
diff --git a/src/front/glsl/functions.rs b/src/front/glsl/functions.rs
--- a/src/front/glsl/functions.rs
+++ b/src/front/glsl/functions.rs
@@ -392,13 +391,15 @@ impl Program<'_> {
sig: FunctionSignature,
qualifiers: Vec<ParameterQualifier>,
meta: SourceMetadata,
- ) -> Result<(), ErrorKind> {
+ ) -> Result<Handle<Function>, ErrorKind> {
ensure_block_returns(&mut function.body);
let stage = self.entry_points.get(&sig.name);
- if let Some(&stage) = stage {
+ Ok(if let Some(&stage) = stage {
let handle = self.module.functions.append(function);
self.entries.push((sig.name, stage, handle));
+ self.function_arg_use.push(Vec::new());
+ handle
} else {
let void = function.result.is_none();
diff --git a/src/front/glsl/functions.rs b/src/front/glsl/functions.rs
--- a/src/front/glsl/functions.rs
+++ b/src/front/glsl/functions.rs
@@ -412,7 +413,9 @@ impl Program<'_> {
decl.defined = true;
*self.module.functions.get_mut(decl.handle) = function;
+ decl.handle
} else {
+ self.function_arg_use.push(Vec::new());
let handle = self.module.functions.append(function);
self.lookup_function.insert(
sig,
diff --git a/src/front/glsl/functions.rs b/src/front/glsl/functions.rs
--- a/src/front/glsl/functions.rs
+++ b/src/front/glsl/functions.rs
@@ -423,10 +426,9 @@ impl Program<'_> {
void,
},
);
+ handle
}
- }
-
- Ok(())
+ })
}
pub fn add_prototype(
diff --git a/src/front/glsl/functions.rs b/src/front/glsl/functions.rs
--- a/src/front/glsl/functions.rs
+++ b/src/front/glsl/functions.rs
@@ -438,6 +440,7 @@ impl Program<'_> {
) -> Result<(), ErrorKind> {
let void = function.result.is_none();
+ self.function_arg_use.push(Vec::new());
let handle = self.module.functions.append(function);
if self
diff --git a/src/front/glsl/functions.rs b/src/front/glsl/functions.rs
--- a/src/front/glsl/functions.rs
+++ b/src/front/glsl/functions.rs
@@ -462,17 +465,86 @@ impl Program<'_> {
Ok(())
}
+ fn check_call_global(
+ &self,
+ caller: Handle<Function>,
+ function_arg_use: &mut [Vec<EntryArgUse>],
+ stmt: &Statement,
+ ) {
+ match *stmt {
+ Statement::Block(ref block) => {
+ for stmt in block {
+ self.check_call_global(caller, function_arg_use, stmt)
+ }
+ }
+ Statement::If {
+ ref accept,
+ ref reject,
+ ..
+ } => {
+ for stmt in accept.iter().chain(reject.iter()) {
+ self.check_call_global(caller, function_arg_use, stmt)
+ }
+ }
+ Statement::Switch {
+ ref cases,
+ ref default,
+ ..
+ } => {
+ for stmt in cases
+ .iter()
+ .flat_map(|c| c.body.iter())
+ .chain(default.iter())
+ {
+ self.check_call_global(caller, function_arg_use, stmt)
+ }
+ }
+ Statement::Loop {
+ ref body,
+ ref continuing,
+ } => {
+ for stmt in body.iter().chain(continuing.iter()) {
+ self.check_call_global(caller, function_arg_use, stmt)
+ }
+ }
+ Statement::Call { function, .. } => {
+ let callee_len = function_arg_use[function.index()].len();
+ let caller_len = function_arg_use[caller.index()].len();
+ function_arg_use[caller.index()].extend(
+ std::iter::repeat(EntryArgUse::empty())
+ .take(callee_len.saturating_sub(caller_len)),
+ );
+
+ for i in 0..callee_len.max(caller_len) {
+ let callee_use = function_arg_use[function.index()][i];
+ function_arg_use[caller.index()][i] |= callee_use
+ }
+ }
+ _ => {}
+ }
+ }
+
pub fn add_entry_points(&mut self) {
+ let mut function_arg_use = Vec::new();
+ std::mem::swap(&mut self.function_arg_use, &mut function_arg_use);
+
+ for (handle, function) in self.module.functions.iter() {
+ for stmt in function.body.iter() {
+ self.check_call_global(handle, &mut function_arg_use, stmt)
+ }
+ }
+
for (name, stage, function) in self.entries.iter().cloned() {
let mut arguments = Vec::new();
let mut expressions = Arena::new();
let mut body = Vec::new();
- for (binding, input, handle) in self.entry_args.iter().cloned() {
- match binding {
- Binding::Location { .. } if !input => continue,
- Binding::BuiltIn(builtin) if !should_read(builtin, stage) => continue,
- _ => {}
+ for (i, (binding, handle)) in self.entry_args.iter().cloned().enumerate() {
+ if function_arg_use[function.index()]
+ .get(i)
+ .map_or(true, |u| !u.contains(EntryArgUse::READ))
+ {
+ continue;
}
let ty = self.module.global_variables[handle].ty;
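`check_call_global` above walks every statement of every function and, at each `Statement::Call`, ORs the callee's per-entry-argument use flags into the caller's list, first growing the caller's list so the indices line up. A self-contained sketch of that merge step, using plain `u8` masks in place of `EntryArgUse` (names are illustrative):

```rust
// Bit 0 = READ, bit 1 = WRITE; one mask per entry argument.
const READ: u8 = 0b01;
const WRITE: u8 = 0b10;

/// Fold the callee's per-argument use flags into the caller's list.
fn propagate_call(caller: usize, callee: usize, function_arg_use: &mut [Vec<u8>]) {
    let callee_len = function_arg_use[callee].len();
    let caller_len = function_arg_use[caller].len();
    // Grow the caller's list so every callee slot has a counterpart.
    let grow = callee_len.saturating_sub(caller_len);
    function_arg_use[caller].extend(std::iter::repeat(0u8).take(grow));

    for i in 0..callee_len {
        let callee_use = function_arg_use[callee][i];
        function_arg_use[caller][i] |= callee_use;
    }
}

fn main() {
    // Function 0 reads entry arg 0; function 1 writes entry arg 1
    // and contains a call to function 0.
    let mut function_arg_use = vec![vec![READ], vec![0, WRITE]];
    propagate_call(1, 0, &mut function_arg_use);
    // The caller now transitively reads entry arg 0 as well.
    assert_eq!(function_arg_use[1], vec![READ, WRITE]);
}
```

A single pass over `module.functions` presumably suffices here because GLSL's declaration-before-use rule means a callee's handle is appended to the arena before any of its callers.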
diff --git a/src/front/glsl/functions.rs b/src/front/glsl/functions.rs
--- a/src/front/glsl/functions.rs
+++ b/src/front/glsl/functions.rs
@@ -500,11 +572,12 @@ impl Program<'_> {
let mut members = Vec::new();
let mut components = Vec::new();
- for (binding, input, handle) in self.entry_args.iter().cloned() {
- match binding {
- Binding::Location { .. } if input => continue,
- Binding::BuiltIn(builtin) if !should_write(builtin, stage) => continue,
- _ => {}
+ for (i, (binding, handle)) in self.entry_args.iter().cloned().enumerate() {
+ if function_arg_use[function.index()]
+ .get(i)
+ .map_or(true, |u| !u.contains(EntryArgUse::WRITE))
+ {
+ continue;
}
let ty = self.module.global_variables[handle].ty;
diff --git a/src/front/glsl/functions.rs b/src/front/glsl/functions.rs
--- a/src/front/glsl/functions.rs
+++ b/src/front/glsl/functions.rs
@@ -556,40 +629,3 @@ impl Program<'_> {
}
}
}
-
-// FIXME: Both of the functions below should be removed they are a temporary solution
-//
-// The fix should analyze the entry point and children function calls
-// (recursively) and store something like `GlobalUse` and then later only read
-// or store the globals that need to be read or written in that stage
-
-fn should_read(built_in: BuiltIn, stage: ShaderStage) -> bool {
- match (built_in, stage) {
- (BuiltIn::Position, ShaderStage::Fragment)
- | (BuiltIn::BaseInstance, ShaderStage::Vertex)
- | (BuiltIn::BaseVertex, ShaderStage::Vertex)
- | (BuiltIn::ClipDistance, ShaderStage::Fragment)
- | (BuiltIn::InstanceIndex, ShaderStage::Vertex)
- | (BuiltIn::VertexIndex, ShaderStage::Vertex)
- | (BuiltIn::FrontFacing, ShaderStage::Fragment)
- | (BuiltIn::SampleIndex, ShaderStage::Fragment)
- | (BuiltIn::SampleMask, ShaderStage::Fragment)
- | (BuiltIn::GlobalInvocationId, ShaderStage::Compute)
- | (BuiltIn::LocalInvocationId, ShaderStage::Compute)
- | (BuiltIn::LocalInvocationIndex, ShaderStage::Compute)
- | (BuiltIn::WorkGroupId, ShaderStage::Compute)
- | (BuiltIn::WorkGroupSize, ShaderStage::Compute) => true,
- _ => false,
- }
-}
-
-fn should_write(built_in: BuiltIn, stage: ShaderStage) -> bool {
- match (built_in, stage) {
- (BuiltIn::Position, ShaderStage::Vertex)
- | (BuiltIn::ClipDistance, ShaderStage::Vertex)
- | (BuiltIn::PointSize, ShaderStage::Vertex)
- | (BuiltIn::FragDepth, ShaderStage::Fragment)
- | (BuiltIn::SampleMask, ShaderStage::Fragment) => true,
- _ => false,
- }
-}
diff --git a/src/front/glsl/parser.rs b/src/front/glsl/parser.rs
--- a/src/front/glsl/parser.rs
+++ b/src/front/glsl/parser.rs
@@ -1,7 +1,8 @@
use super::{
ast::{
- Context, FunctionCall, FunctionCallKind, FunctionSignature, GlobalLookup, HirExpr,
- HirExprKind, ParameterQualifier, Profile, StorageQualifier, StructLayout, TypeQualifier,
+ Context, FunctionCall, FunctionCallKind, FunctionSignature, GlobalLookup, GlobalLookupKind,
+ HirExpr, HirExprKind, ParameterQualifier, Profile, StorageQualifier, StructLayout,
+ TypeQualifier,
},
error::ErrorKind,
lex::Lexer,
diff --git a/src/front/glsl/parser.rs b/src/front/glsl/parser.rs
--- a/src/front/glsl/parser.rs
+++ b/src/front/glsl/parser.rs
@@ -623,7 +624,8 @@ impl<'source, 'program, 'options> Parser<'source, 'program, 'options> {
// parse the body
self.parse_compound_statement(&mut context, &mut body)?;
- self.program.add_function(
+ let Context { arg_use, .. } = context;
+ let handle = self.program.add_function(
Function {
name: Some(name),
result,
diff --git a/src/front/glsl/parser.rs b/src/front/glsl/parser.rs
--- a/src/front/glsl/parser.rs
+++ b/src/front/glsl/parser.rs
@@ -637,6 +639,8 @@ impl<'source, 'program, 'options> Parser<'source, 'program, 'options> {
meta,
)?;
+ self.program.function_arg_use[handle.index()] = arg_use;
+
Ok(true)
}
_ => Err(ErrorKind::InvalidToken(token)),
diff --git a/src/front/glsl/parser.rs b/src/front/glsl/parser.rs
--- a/src/front/glsl/parser.rs
+++ b/src/front/glsl/parser.rs
@@ -800,9 +804,13 @@ impl<'source, 'program, 'options> Parser<'source, 'program, 'options> {
});
if let Some(k) = name {
- self.program
- .global_variables
- .push((k, GlobalLookup::Variable(handle)));
+ self.program.global_variables.push((
+ k,
+ GlobalLookup {
+ kind: GlobalLookupKind::Variable(handle),
+ entry_arg: None,
+ },
+ ));
}
for (i, k) in members
diff --git a/src/front/glsl/parser.rs b/src/front/glsl/parser.rs
--- a/src/front/glsl/parser.rs
+++ b/src/front/glsl/parser.rs
@@ -810,9 +818,13 @@ impl<'source, 'program, 'options> Parser<'source, 'program, 'options> {
.enumerate()
.filter_map(|(i, m)| m.name.map(|s| (i as u32, s)))
{
- self.program
- .global_variables
- .push((k, GlobalLookup::BlockSelect(handle, i)));
+ self.program.global_variables.push((
+ k,
+ GlobalLookup {
+ kind: GlobalLookupKind::BlockSelect(handle, i),
+ entry_arg: None,
+ },
+ ));
}
Ok(true)
diff --git a/src/front/glsl/variables.rs b/src/front/glsl/variables.rs
--- a/src/front/glsl/variables.rs
+++ b/src/front/glsl/variables.rs
@@ -45,11 +45,17 @@ impl Program<'_> {
storage_access: StorageAccess::empty(),
});
- self.entry_args
- .push((Binding::BuiltIn(builtin), true, handle));
+ let idx = self.entry_args.len();
+ self.entry_args.push((Binding::BuiltIn(builtin), handle));
- self.global_variables
- .push((name.into(), GlobalLookup::Variable(handle)));
+ self.global_variables.push((
+ name.into(),
+ GlobalLookup {
+ kind: GlobalLookupKind::Variable(handle),
+ entry_arg: Some(idx),
+ },
+ ));
+ ctx.arg_use.push(EntryArgUse::empty());
let expr = ctx.add_expression(Expression::GlobalVariable(handle), body);
let load = ctx.add_expression(Expression::Load { pointer: expr }, body);
diff --git a/src/front/glsl/variables.rs b/src/front/glsl/variables.rs
--- a/src/front/glsl/variables.rs
+++ b/src/front/glsl/variables.rs
@@ -59,6 +65,7 @@ impl Program<'_> {
expr,
load: Some(load),
mutable,
+ entry_arg: Some(idx),
},
);
diff --git a/src/front/glsl/variables.rs b/src/front/glsl/variables.rs
--- a/src/front/glsl/variables.rs
+++ b/src/front/glsl/variables.rs
@@ -297,7 +304,6 @@ impl Program<'_> {
}
if let Some(location) = location {
- let input = StorageQualifier::Input == storage;
let interpolation = self.module.types[ty].inner.scalar_kind().map(|kind| {
if let ScalarKind::Float = kind {
Interpolation::Perspective
diff --git a/src/front/glsl/variables.rs b/src/front/glsl/variables.rs
--- a/src/front/glsl/variables.rs
+++ b/src/front/glsl/variables.rs
@@ -315,18 +321,23 @@ impl Program<'_> {
storage_access: StorageAccess::empty(),
});
+ let idx = self.entry_args.len();
self.entry_args.push((
Binding::Location {
location,
interpolation,
sampling,
},
- input,
handle,
));
- self.global_variables
- .push((name, GlobalLookup::Variable(handle)));
+ self.global_variables.push((
+ name,
+ GlobalLookup {
+ kind: GlobalLookupKind::Variable(handle),
+ entry_arg: Some(idx),
+ },
+ ));
return Ok(ctx.add_expression(Expression::GlobalVariable(handle), body));
} else if let StorageQualifier::Const = storage {
diff --git a/src/front/glsl/variables.rs b/src/front/glsl/variables.rs
--- a/src/front/glsl/variables.rs
+++ b/src/front/glsl/variables.rs
@@ -369,8 +380,13 @@ impl Program<'_> {
storage_access,
});
- self.global_variables
- .push((name, GlobalLookup::Variable(handle)));
+ self.global_variables.push((
+ name,
+ GlobalLookup {
+ kind: GlobalLookupKind::Variable(handle),
+ entry_arg: None,
+ },
+ ));
Ok(ctx.add_expression(Expression::GlobalVariable(handle), body))
}
| diff --git a/tests/snapshots.rs b/tests/snapshots.rs
--- a/tests/snapshots.rs
+++ b/tests/snapshots.rs
@@ -348,15 +348,13 @@ fn convert_spv_shadow() {
}
#[cfg(feature = "glsl-in")]
-// TODO: Reenable tests later
-#[allow(dead_code)]
fn convert_glsl(
name: &str,
entry_points: naga::FastHashMap<String, naga::ShaderStage>,
- _targets: Targets,
+ targets: Targets,
) {
let root = env!("CARGO_MANIFEST_DIR");
- let _module = naga::front::glsl::parse_str(
+ let module = naga::front::glsl::parse_str(
&fs::read_to_string(format!("{}/{}/{}.glsl", root, DIR_IN, name))
.expect("Couldn't find glsl file"),
&naga::front::glsl::Options {
diff --git a/tests/snapshots.rs b/tests/snapshots.rs
--- a/tests/snapshots.rs
+++ b/tests/snapshots.rs
@@ -365,16 +363,16 @@ fn convert_glsl(
},
)
.unwrap();
- //TODO
- //check_targets(&module, name, targets);
+ println!("{:#?}", module);
+
+ check_targets(&module, name, targets);
}
#[cfg(feature = "glsl-in")]
#[test]
fn convert_glsl_quad() {
- // TODO: Reenable tests later
- // let mut entry_points = naga::FastHashMap::default();
- // entry_points.insert("vert_main".to_string(), naga::ShaderStage::Vertex);
- // entry_points.insert("frag_main".to_string(), naga::ShaderStage::Fragment);
- // convert_glsl("quad-glsl", entry_points, Targets::SPIRV | Targets::IR);
+ let mut entry_points = naga::FastHashMap::default();
+ entry_points.insert("vert_main".to_string(), naga::ShaderStage::Vertex);
+ entry_points.insert("frag_main".to_string(), naga::ShaderStage::Fragment);
+ convert_glsl("quad-glsl", entry_points, Targets::SPIRV | Targets::IR);
}
| [glsl-in] expression already in scope with constant above function
```glsl
#version 450
const int constant = 10;
float function() {
return 0.0;
}
```
fails to validate with the following error:
```
Function [1] 'function' is invalid:
Expression [1] can't be introduced - it's already in scope
```
Changing the `function` to return `void` or moving the constant below the function "fixes" the error.
This is the module it generates:
```rust
Module {
types: {
[1]: Type {
name: None,
inner: Scalar {
kind: Sint,
width: 4,
},
},
[2]: Type {
name: None,
inner: Scalar {
kind: Float,
width: 4,
},
},
},
constants: {
[1]: Constant {
name: None,
specialization: None,
inner: Scalar {
width: 4,
value: Sint(
10,
),
},
},
[2]: Constant {
name: None,
specialization: None,
inner: Scalar {
width: 4,
value: Float(
0.0,
),
},
},
},
global_variables: {},
functions: {
[1]: Function {
name: Some(
"function",
),
arguments: [],
result: Some(
FunctionResult {
ty: [2],
binding: None,
},
),
local_variables: {},
expressions: {
[1]: Constant(
[1],
),
[2]: Constant(
[2],
),
},
body: [
Emit(
[1..1],
),
Return {
value: Some(
[2],
),
},
],
},
},
entry_points: [],
}
```
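Note that the `Emit([1..1])` range in the body above covers the constant expression, which is what validation rejects: constant expressions are in scope from the start and must not be re-emitted. The patch fixes this by appending the constant expressions before calling `emit_start`, so emitter ranges begin after them. A toy model of that range bookkeeping (illustrative only, not naga's actual `Emitter`):

```rust
use std::ops::Range;

// Toy emitter: remembers how many expressions existed at `start`;
// `finish` yields the range of expressions appended since then.
struct Emitter {
    start_len: usize,
}

impl Emitter {
    fn start(&mut self, exprs: &[&'static str]) {
        self.start_len = exprs.len();
    }
    fn finish(&self, exprs: &[&'static str]) -> Range<usize> {
        self.start_len..exprs.len()
    }
}

fn main() {
    let mut exprs: Vec<&'static str> = Vec::new();
    let mut emitter = Emitter { start_len: 0 };

    // Fixed ordering: append the pre-scoped constant first...
    exprs.push("Constant([1])");
    // ...then start the emitter, so the next Emit range excludes it.
    emitter.start(&exprs);
    exprs.push("Load { .. }");

    // The emitted range covers only the load, not the constant.
    assert_eq!(emitter.finish(&exprs), 1..2);
}
```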
| Thank you for your report; this is an issue with emitting that I have already fixed in my local fork and will make a PR soon. | 2021-06-02T02:31:43 | 0.4 | 575304a50c102f2e2a3a622b4be567c0ccf6e6c0 | [
"convert_glsl_quad"
] | [
"arena::tests::append_non_unique",
"arena::tests::append_unique",
"back::msl::test_error_size",
"arena::tests::fetch_or_append_non_unique",
"arena::tests::fetch_or_append_unique",
"back::spv::layout::test_physical_layout_in_words",
"back::spv::layout::test_logical_layout_in_words",
"back::spv::writer:... | [
"back::msl::writer::test_stack_size"
] | [] | 65 |
gfx-rs/naga | 898 | gfx-rs__naga-898 | [
"887"
] | 99fbb34bd6b07ca0a0cc2cf01c151d8264789965 | diff --git a/src/front/glsl/ast.rs b/src/front/glsl/ast.rs
--- a/src/front/glsl/ast.rs
+++ b/src/front/glsl/ast.rs
@@ -25,7 +25,7 @@ pub struct FunctionSignature {
#[derive(Debug, Clone)]
pub struct FunctionDeclaration {
- pub parameters: Vec<ParameterQualifier>,
+ pub qualifiers: Vec<ParameterQualifier>,
pub handle: Handle<Function>,
/// Whether this function was already defined or is just a prototype
pub defined: bool,
diff --git a/src/front/glsl/ast.rs b/src/front/glsl/ast.rs
--- a/src/front/glsl/ast.rs
+++ b/src/front/glsl/ast.rs
@@ -203,12 +203,15 @@ impl<'function> Context<'function> {
for &(ref name, lookup) in program.global_variables.iter() {
this.emit_flush(body);
- let expr = match lookup {
- GlobalLookup::Variable(v) => Expression::GlobalVariable(v),
+ let (expr, load) = match lookup {
+ GlobalLookup::Variable(v) => (
+ Expression::GlobalVariable(v),
+ program.module.global_variables[v].class != StorageClass::Handle,
+ ),
GlobalLookup::BlockSelect(handle, index) => {
let base = this.expressions.append(Expression::GlobalVariable(handle));
- Expression::AccessIndex { base, index }
+ (Expression::AccessIndex { base, index }, true)
}
};
diff --git a/src/front/glsl/ast.rs b/src/front/glsl/ast.rs
--- a/src/front/glsl/ast.rs
+++ b/src/front/glsl/ast.rs
@@ -217,7 +220,11 @@ impl<'function> Context<'function> {
let var = VariableReference {
expr,
- load: Some(this.add_expression(Expression::Load { pointer: expr }, body)),
+ load: if load {
+ Some(this.add_expression(Expression::Load { pointer: expr }, body))
+ } else {
+ None
+ },
// TODO: respect constant qualifier
mutable: true,
};
diff --git a/src/front/glsl/ast.rs b/src/front/glsl/ast.rs
--- a/src/front/glsl/ast.rs
+++ b/src/front/glsl/ast.rs
@@ -288,22 +295,37 @@ impl<'function> Context<'function> {
/// Add function argument to current scope
pub fn add_function_arg(
&mut self,
+ program: &mut Program,
+ sig: &mut FunctionSignature,
body: &mut Block,
name: Option<String>,
ty: Handle<Type>,
- parameter: ParameterQualifier,
+ qualifier: ParameterQualifier,
) {
let index = self.arguments.len();
- self.arguments.push(FunctionArgument {
+ let mut arg = FunctionArgument {
name: name.clone(),
ty,
binding: None,
- });
+ };
+ sig.parameters.push(ty);
+
+ if qualifier.is_lhs() {
+ arg.ty = program.module.types.fetch_or_append(Type {
+ name: None,
+ inner: TypeInner::Pointer {
+ base: arg.ty,
+ class: StorageClass::Function,
+ },
+ })
+ }
+
+ self.arguments.push(arg);
if let Some(name) = name {
let expr = self.add_expression(Expression::FunctionArgument(index as u32), body);
- let mutable = parameter != ParameterQualifier::Const;
- let load = if parameter.is_lhs() {
+ let mutable = qualifier != ParameterQualifier::Const;
+ let load = if qualifier.is_lhs() {
Some(self.add_expression(Expression::Load { pointer: expr }, body))
} else {
None
diff --git a/src/front/glsl/functions.rs b/src/front/glsl/functions.rs
--- a/src/front/glsl/functions.rs
+++ b/src/front/glsl/functions.rs
@@ -1,8 +1,8 @@
use crate::{
proc::ensure_block_returns, Arena, BinaryOperator, Binding, Block, BuiltIn, EntryPoint,
Expression, Function, FunctionArgument, FunctionResult, Handle, MathFunction,
- RelationalFunction, SampleLevel, ScalarKind, ShaderStage, Statement, StorageClass,
- StructMember, Type, TypeInner,
+ RelationalFunction, SampleLevel, ShaderStage, Statement, StructMember, SwizzleComponent, Type,
+ TypeInner,
};
use super::{ast::*, error::ErrorKind, SourceMetadata};
diff --git a/src/front/glsl/functions.rs b/src/front/glsl/functions.rs
--- a/src/front/glsl/functions.rs
+++ b/src/front/glsl/functions.rs
@@ -37,8 +37,7 @@ impl Program<'_> {
},
body,
),
- TypeInner::Scalar { kind, width }
- | TypeInner::Vector { kind, width, .. } => ctx.add_expression(
+ TypeInner::Scalar { kind, width } => ctx.add_expression(
Expression::As {
kind,
expr: args[0].0,
diff --git a/src/front/glsl/functions.rs b/src/front/glsl/functions.rs
--- a/src/front/glsl/functions.rs
+++ b/src/front/glsl/functions.rs
@@ -46,24 +45,80 @@ impl Program<'_> {
},
body,
),
- TypeInner::Matrix {
- columns,
- rows,
- width,
- } => {
- let value = ctx.add_expression(
- Expression::As {
- kind: ScalarKind::Float,
- expr: args[0].0,
- convert: Some(width),
+ TypeInner::Vector { size, kind, width } => {
+ let expr = ctx.add_expression(
+ Expression::Swizzle {
+ size,
+ vector: args[0].0,
+ pattern: [
+ SwizzleComponent::X,
+ SwizzleComponent::Y,
+ SwizzleComponent::Z,
+ SwizzleComponent::W,
+ ],
},
body,
);
- let column = if is_vec {
- value
- } else {
- ctx.add_expression(Expression::Splat { size: rows, value }, body)
+ ctx.add_expression(
+ Expression::As {
+ kind,
+ expr,
+ convert: Some(width),
+ },
+ body,
+ )
+ }
+ TypeInner::Matrix { columns, rows, .. } => {
+ // TODO: casts
+ // `Expression::As` doesn't support matrix width
+ // casts so we need to do some extra work for casts
+
+ let column = match *self.resolve_type(ctx, args[0].0, args[0].1)? {
+ TypeInner::Scalar { .. } => ctx.add_expression(
+ Expression::Splat {
+ size: rows,
+ value: args[0].0,
+ },
+ body,
+ ),
+ TypeInner::Matrix { .. } => {
+ let mut components = Vec::new();
+
+ for n in 0..columns as u32 {
+ let vector = ctx.add_expression(
+ Expression::AccessIndex {
+ base: args[0].0,
+ index: n,
+ },
+ body,
+ );
+
+ let c = ctx.add_expression(
+ Expression::Swizzle {
+ size: rows,
+ vector,
+ pattern: [
+ SwizzleComponent::X,
+ SwizzleComponent::Y,
+ SwizzleComponent::Z,
+ SwizzleComponent::W,
+ ],
+ },
+ body,
+ );
+
+ components.push(c)
+ }
+
+ let h = ctx.add_expression(
+ Expression::Compose { ty, components },
+ body,
+ );
+
+ return Ok(Some(h));
+ }
+ _ => args[0].0,
};
let columns =
diff --git a/src/front/glsl/functions.rs b/src/front/glsl/functions.rs
--- a/src/front/glsl/functions.rs
+++ b/src/front/glsl/functions.rs
@@ -125,6 +180,14 @@ impl Program<'_> {
if args.len() != 3 {
return Err(ErrorKind::wrong_function_args(name, 3, args.len(), meta));
}
+ let exact = ctx.add_expression(
+ Expression::As {
+ kind: crate::ScalarKind::Float,
+ expr: args[2].0,
+ convert: Some(4),
+ },
+ body,
+ );
if let Some(sampler) = ctx.samplers.get(&args[0].0).copied() {
Ok(Some(ctx.add_expression(
Expression::ImageSample {
diff --git a/src/front/glsl/functions.rs b/src/front/glsl/functions.rs
--- a/src/front/glsl/functions.rs
+++ b/src/front/glsl/functions.rs
@@ -133,7 +196,7 @@ impl Program<'_> {
coordinate: args[1].0,
array_index: None, //TODO
offset: None, //TODO
- level: SampleLevel::Exact(args[2].0),
+ level: SampleLevel::Exact(exact),
depth_ref: None,
},
body,
diff --git a/src/front/glsl/functions.rs b/src/front/glsl/functions.rs
--- a/src/front/glsl/functions.rs
+++ b/src/front/glsl/functions.rs
@@ -274,7 +337,7 @@ impl Program<'_> {
.clone();
let mut arguments = Vec::with_capacity(raw_args.len());
- for (qualifier, expr) in fun.parameters.iter().zip(raw_args.iter()) {
+ for (qualifier, expr) in fun.qualifiers.iter().zip(raw_args.iter()) {
let handle = ctx.lower_expect(self, *expr, qualifier.is_lhs(), body)?.0;
arguments.push(handle)
}
diff --git a/src/front/glsl/functions.rs b/src/front/glsl/functions.rs
--- a/src/front/glsl/functions.rs
+++ b/src/front/glsl/functions.rs
@@ -327,37 +390,17 @@ impl Program<'_> {
pub fn add_function(
&mut self,
mut function: Function,
- parameters: Vec<ParameterQualifier>,
+ sig: FunctionSignature,
+ qualifiers: Vec<ParameterQualifier>,
meta: SourceMetadata,
) -> Result<(), ErrorKind> {
ensure_block_returns(&mut function.body);
- let name = function
- .name
- .clone()
- .ok_or_else(|| ErrorKind::SemanticError(meta, "Unnamed function".into()))?;
- let stage = self.entry_points.get(&name);
+ let stage = self.entry_points.get(&sig.name);
if let Some(&stage) = stage {
let handle = self.module.functions.append(function);
- self.entries.push((name, stage, handle));
+ self.entries.push((sig.name, stage, handle));
} else {
- let sig = FunctionSignature {
- name,
- parameters: function.arguments.iter().map(|p| p.ty).collect(),
- };
-
- for (arg, qualifier) in function.arguments.iter_mut().zip(parameters.iter()) {
- if qualifier.is_lhs() {
- arg.ty = self.module.types.fetch_or_append(Type {
- name: None,
- inner: TypeInner::Pointer {
- base: arg.ty,
- class: StorageClass::Function,
- },
- })
- }
- }
-
let void = function.result.is_none();
if let Some(decl) = self.lookup_function.get_mut(&sig) {
diff --git a/src/front/glsl/functions.rs b/src/front/glsl/functions.rs
--- a/src/front/glsl/functions.rs
+++ b/src/front/glsl/functions.rs
@@ -375,7 +418,7 @@ impl Program<'_> {
self.lookup_function.insert(
sig,
FunctionDeclaration {
- parameters,
+ qualifiers,
handle,
defined: true,
void,
diff --git a/src/front/glsl/functions.rs b/src/front/glsl/functions.rs
--- a/src/front/glsl/functions.rs
+++ b/src/front/glsl/functions.rs
@@ -389,43 +432,33 @@ impl Program<'_> {
pub fn add_prototype(
&mut self,
- mut function: Function,
- parameters: Vec<ParameterQualifier>,
+ function: Function,
+ sig: FunctionSignature,
+ qualifiers: Vec<ParameterQualifier>,
meta: SourceMetadata,
) -> Result<(), ErrorKind> {
- let name = function
- .name
- .clone()
- .ok_or_else(|| ErrorKind::SemanticError(meta, "Unnamed function".into()))?;
- let sig = FunctionSignature {
- name,
- parameters: function.arguments.iter().map(|p| p.ty).collect(),
- };
let void = function.result.is_none();
- for (arg, qualifier) in function.arguments.iter_mut().zip(parameters.iter()) {
- if qualifier.is_lhs() {
- arg.ty = self.module.types.fetch_or_append(Type {
- name: None,
- inner: TypeInner::Pointer {
- base: arg.ty,
- class: StorageClass::Function,
- },
- })
- }
- }
-
let handle = self.module.functions.append(function);
- self.lookup_function.insert(
- sig,
- FunctionDeclaration {
- parameters,
- handle,
- defined: false,
- void,
- },
- );
+ if self
+ .lookup_function
+ .insert(
+ sig,
+ FunctionDeclaration {
+ qualifiers,
+ handle,
+ defined: false,
+ void,
+ },
+ )
+ .is_some()
+ {
+ return Err(ErrorKind::SemanticError(
+ meta,
+ "Prototype already defined".into(),
+ ));
+ }
Ok(())
}
diff --git a/src/front/glsl/functions.rs b/src/front/glsl/functions.rs
--- a/src/front/glsl/functions.rs
+++ b/src/front/glsl/functions.rs
@@ -439,6 +472,7 @@ impl Program<'_> {
for (binding, input, handle) in self.entry_args.iter().cloned() {
match binding {
Binding::Location { .. } if !input => continue,
+ Binding::BuiltIn(builtin) if !should_read(builtin, stage) => continue,
_ => {}
}
diff --git a/src/front/glsl/functions.rs b/src/front/glsl/functions.rs
--- a/src/front/glsl/functions.rs
+++ b/src/front/glsl/functions.rs
@@ -495,7 +529,7 @@ impl Program<'_> {
let ty = self.module.types.append(Type {
name: None,
inner: TypeInner::Struct {
- top_level: true,
+ top_level: false,
members,
span,
},
diff --git a/src/front/glsl/functions.rs b/src/front/glsl/functions.rs
--- a/src/front/glsl/functions.rs
+++ b/src/front/glsl/functions.rs
@@ -524,6 +558,32 @@ impl Program<'_> {
}
}
+// FIXME: Both of the functions below should be removed they are a temporary solution
+//
+// The fix should analyze the entry point and children function calls
+// (recursively) and store something like `GlobalUse` and then later only read
+// or store the globals that need to be read or written in that stage
+
+fn should_read(built_in: BuiltIn, stage: ShaderStage) -> bool {
+ match (built_in, stage) {
+ (BuiltIn::Position, ShaderStage::Fragment)
+ | (BuiltIn::BaseInstance, ShaderStage::Vertex)
+ | (BuiltIn::BaseVertex, ShaderStage::Vertex)
+ | (BuiltIn::ClipDistance, ShaderStage::Fragment)
+ | (BuiltIn::InstanceIndex, ShaderStage::Vertex)
+ | (BuiltIn::VertexIndex, ShaderStage::Vertex)
+ | (BuiltIn::FrontFacing, ShaderStage::Fragment)
+ | (BuiltIn::SampleIndex, ShaderStage::Fragment)
+ | (BuiltIn::SampleMask, ShaderStage::Fragment)
+ | (BuiltIn::GlobalInvocationId, ShaderStage::Compute)
+ | (BuiltIn::LocalInvocationId, ShaderStage::Compute)
+ | (BuiltIn::LocalInvocationIndex, ShaderStage::Compute)
+ | (BuiltIn::WorkGroupId, ShaderStage::Compute)
+ | (BuiltIn::WorkGroupSize, ShaderStage::Compute) => true,
+ _ => false,
+ }
+}
+
fn should_write(built_in: BuiltIn, stage: ShaderStage) -> bool {
match (built_in, stage) {
(BuiltIn::Position, ShaderStage::Vertex)
diff --git a/src/front/glsl/parser.rs b/src/front/glsl/parser.rs
--- a/src/front/glsl/parser.rs
+++ b/src/front/glsl/parser.rs
@@ -1,7 +1,7 @@
use super::{
ast::{
- Context, FunctionCall, FunctionCallKind, GlobalLookup, HirExpr, HirExprKind,
- ParameterQualifier, Profile, StorageQualifier, StructLayout, TypeQualifier,
+ Context, FunctionCall, FunctionCallKind, FunctionSignature, GlobalLookup, HirExpr,
+ HirExprKind, ParameterQualifier, Profile, StorageQualifier, StructLayout, TypeQualifier,
},
error::ErrorKind,
lex::Lexer,
diff --git a/src/front/glsl/parser.rs b/src/front/glsl/parser.rs
--- a/src/front/glsl/parser.rs
+++ b/src/front/glsl/parser.rs
@@ -567,6 +567,10 @@ impl<'source, 'program, 'options> Parser<'source, 'program, 'options> {
let mut arguments = Vec::new();
let mut parameters = Vec::new();
let mut body = Block::new();
+ let mut sig = FunctionSignature {
+ name: name.clone(),
+ parameters: Vec::new(),
+ };
let mut context = Context::new(
self.program,
diff --git a/src/front/glsl/parser.rs b/src/front/glsl/parser.rs
--- a/src/front/glsl/parser.rs
+++ b/src/front/glsl/parser.rs
@@ -576,7 +580,12 @@ impl<'source, 'program, 'options> Parser<'source, 'program, 'options> {
&mut arguments,
);
- self.parse_function_args(&mut context, &mut body, &mut parameters)?;
+ self.parse_function_args(
+ &mut context,
+ &mut body,
+ &mut parameters,
+ &mut sig,
+ )?;
let end_meta = self.expect(TokenValue::RightParen)?.meta;
meta = meta.union(&end_meta);
diff --git a/src/front/glsl/parser.rs b/src/front/glsl/parser.rs
--- a/src/front/glsl/parser.rs
+++ b/src/front/glsl/parser.rs
@@ -592,6 +601,7 @@ impl<'source, 'program, 'options> Parser<'source, 'program, 'options> {
arguments,
..Default::default()
},
+ sig,
parameters,
meta,
)?;
diff --git a/src/front/glsl/parser.rs b/src/front/glsl/parser.rs
--- a/src/front/glsl/parser.rs
+++ b/src/front/glsl/parser.rs
@@ -615,6 +625,7 @@ impl<'source, 'program, 'options> Parser<'source, 'program, 'options> {
arguments,
body,
},
+ sig,
parameters,
meta,
)?;
diff --git a/src/front/glsl/parser.rs b/src/front/glsl/parser.rs
--- a/src/front/glsl/parser.rs
+++ b/src/front/glsl/parser.rs
@@ -775,8 +786,7 @@ impl<'source, 'program, 'options> Parser<'source, 'program, 'options> {
binding,
ty,
init: None,
- // TODO
- storage_access: StorageAccess::all(),
+ storage_access: StorageAccess::empty(),
});
if let Some(k) = name {
diff --git a/src/front/glsl/parser.rs b/src/front/glsl/parser.rs
--- a/src/front/glsl/parser.rs
+++ b/src/front/glsl/parser.rs
@@ -1546,6 +1556,7 @@ impl<'source, 'program, 'options> Parser<'source, 'program, 'options> {
context: &mut Context,
body: &mut Block,
parameters: &mut Vec<ParameterQualifier>,
+ sig: &mut FunctionSignature,
) -> Result<()> {
loop {
if self.peek_type_name() || self.peek_parameter_qualifier() {
diff --git a/src/front/glsl/parser.rs b/src/front/glsl/parser.rs
--- a/src/front/glsl/parser.rs
+++ b/src/front/glsl/parser.rs
@@ -1556,7 +1567,7 @@ impl<'source, 'program, 'options> Parser<'source, 'program, 'options> {
match self.expect_peek()?.value {
TokenValue::Comma => {
self.bump()?;
- context.add_function_arg(body, None, ty, qualifier);
+ context.add_function_arg(&mut self.program, sig, body, None, ty, qualifier);
continue;
}
TokenValue::Identifier(_) => {
diff --git a/src/front/glsl/parser.rs b/src/front/glsl/parser.rs
--- a/src/front/glsl/parser.rs
+++ b/src/front/glsl/parser.rs
@@ -1565,7 +1576,14 @@ impl<'source, 'program, 'options> Parser<'source, 'program, 'options> {
let size = self.parse_array_specifier()?;
let ty = self.maybe_array(ty, size);
- context.add_function_arg(body, Some(name), ty, qualifier);
+ context.add_function_arg(
+ &mut self.program,
+ sig,
+ body,
+ Some(name),
+ ty,
+ qualifier,
+ );
if self.bump_if(TokenValue::Comma).is_some() {
continue;
diff --git a/src/front/glsl/variables.rs b/src/front/glsl/variables.rs
--- a/src/front/glsl/variables.rs
+++ b/src/front/glsl/variables.rs
@@ -1,6 +1,7 @@
use crate::{
- Binding, Block, BuiltIn, Constant, Expression, GlobalVariable, Handle, LocalVariable,
- ScalarKind, StorageAccess, StorageClass, Type, TypeInner, VectorSize,
+ Binding, Block, BuiltIn, Constant, Expression, GlobalVariable, Handle, ImageClass,
+ Interpolation, LocalVariable, ScalarKind, StorageAccess, StorageClass, Type, TypeInner,
+ VectorSize,
};
use super::ast::*;
diff --git a/src/front/glsl/variables.rs b/src/front/glsl/variables.rs
--- a/src/front/glsl/variables.rs
+++ b/src/front/glsl/variables.rs
@@ -41,7 +42,7 @@ impl Program<'_> {
binding: None,
ty,
init: None,
- storage_access: StorageAccess::all(),
+ storage_access: StorageAccess::empty(),
});
self.entry_args
diff --git a/src/front/glsl/variables.rs b/src/front/glsl/variables.rs
--- a/src/front/glsl/variables.rs
+++ b/src/front/glsl/variables.rs
@@ -297,6 +298,13 @@ impl Program<'_> {
if let Some(location) = location {
let input = StorageQualifier::Input == storage;
+ let interpolation = self.module.types[ty].inner.scalar_kind().map(|kind| {
+ if let ScalarKind::Float = kind {
+ Interpolation::Perspective
+ } else {
+ Interpolation::Flat
+ }
+ });
let handle = self.module.global_variables.append(GlobalVariable {
name: Some(name.clone()),
diff --git a/src/front/glsl/variables.rs b/src/front/glsl/variables.rs
--- a/src/front/glsl/variables.rs
+++ b/src/front/glsl/variables.rs
@@ -304,7 +312,7 @@ impl Program<'_> {
binding: None,
ty,
init,
- storage_access: StorageAccess::all(),
+ storage_access: StorageAccess::empty(),
});
self.entry_args.push((
diff --git a/src/front/glsl/variables.rs b/src/front/glsl/variables.rs
--- a/src/front/glsl/variables.rs
+++ b/src/front/glsl/variables.rs
@@ -331,9 +339,25 @@ impl Program<'_> {
return Ok(ctx.add_expression(Expression::Constant(handle), body));
}
- let class = match storage {
- StorageQualifier::StorageClass(class) => class,
- _ => StorageClass::Private,
+ let (class, storage_access) = match self.module.types[ty].inner {
+ TypeInner::Image { class, .. } => (
+ StorageClass::Handle,
+ if let ImageClass::Storage(_) = class {
+ // TODO: Add support for qualifiers such as readonly,
+ // writeonly and readwrite
+ StorageAccess::all()
+ } else {
+ StorageAccess::empty()
+ },
+ ),
+ TypeInner::Sampler { .. } => (StorageClass::Handle, StorageAccess::empty()),
+ _ => (
+ match storage {
+ StorageQualifier::StorageClass(class) => class,
+ _ => StorageClass::Private,
+ },
+ StorageAccess::empty(),
+ ),
};
let handle = self.module.global_variables.append(GlobalVariable {
diff --git a/src/front/glsl/variables.rs b/src/front/glsl/variables.rs
--- a/src/front/glsl/variables.rs
+++ b/src/front/glsl/variables.rs
@@ -342,8 +366,7 @@ impl Program<'_> {
binding,
ty,
init,
- // TODO
- storage_access: StorageAccess::all(),
+ storage_access,
});
self.global_variables
| diff --git a/src/front/glsl/parser_tests.rs b/src/front/glsl/parser_tests.rs
--- a/src/front/glsl/parser_tests.rs
+++ b/src/front/glsl/parser_tests.rs
@@ -369,6 +369,22 @@ fn functions() {
&entry_points,
)
.unwrap();
+
+ parse_program(
+ r#"
+ # version 450
+ void fun(vec2 in_parameter, out float out_parameter) {
+ ivec2 _ = ivec2(in_parameter);
+ }
+
+ void main() {
+ float a;
+ fun(vec2(1.0), a);
+ }
+ "#,
+ &entry_points,
+ )
+ .unwrap();
}
#[test]
| [glsl-in] Builtin globals have empty storage access
```glsl
#version 450
void main() {
    gl_Position = vec4(1.0, 1.0, 1.0, 1.0);
}
```
```bash
Global variable [1] 'gl_Position' is invalid:
Storage access LOAD | STORE exceeds the allowed (empty)
thread 'main' panicked at 'called `Option::unwrap()` on a `None` value', bin/naga.rs:286:70
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
```
| 2021-05-23T03:41:52 | 0.4 | 575304a50c102f2e2a3a622b4be567c0ccf6e6c0 | [
"front::glsl::parser_tests::functions"
] | [
"arena::tests::append_unique",
"arena::tests::fetch_or_append_unique",
"arena::tests::append_non_unique",
"back::msl::test_error_size",
"arena::tests::fetch_or_append_non_unique",
"back::spv::layout::test_physical_layout_in_words",
"back::spv::layout::test_logical_layout_in_words",
"back::spv::writer:... | [
"back::msl::writer::test_stack_size"
] | [] | 66 | |
gfx-rs/naga | 2,510 | gfx-rs__naga-2510 | [
"2412",
"2234"
] | 3bcb114adbf144544cb46497b000adf3e7e71181 | diff --git a/src/back/glsl/mod.rs b/src/back/glsl/mod.rs
--- a/src/back/glsl/mod.rs
+++ b/src/back/glsl/mod.rs
@@ -1129,7 +1129,8 @@ impl<'a, W: Write> Writer<'a, W> {
let ty_name = &self.names[&NameKey::Type(global.ty)];
let block_name = format!(
"{}_block_{}{:?}",
- ty_name,
+ // avoid double underscores as they are reserved in GLSL
+ ty_name.trim_end_matches('_'),
self.block_id.generate(),
self.entry_point.stage,
);
diff --git a/src/proc/namer.rs b/src/proc/namer.rs
--- a/src/proc/namer.rs
+++ b/src/proc/namer.rs
@@ -36,6 +36,7 @@ impl Namer {
/// - Drop leading digits.
/// - Retain only alphanumeric and `_` characters.
/// - Avoid prefixes in [`Namer::reserved_prefixes`].
+ /// - Replace consecutive `_` characters with a single `_` character.
///
/// The return value is a valid identifier prefix in all of Naga's output languages,
/// and it never ends with a `SEPARATOR` character.
diff --git a/src/proc/namer.rs b/src/proc/namer.rs
--- a/src/proc/namer.rs
+++ b/src/proc/namer.rs
@@ -46,6 +47,7 @@ impl Namer {
.trim_end_matches(SEPARATOR);
let base = if !string.is_empty()
+ && !string.contains("__")
&& string
.chars()
.all(|c: char| c.is_ascii_alphanumeric() || c == '_')
diff --git a/src/proc/namer.rs b/src/proc/namer.rs
--- a/src/proc/namer.rs
+++ b/src/proc/namer.rs
@@ -55,7 +57,13 @@ impl Namer {
let mut filtered = string
.chars()
.filter(|&c| c.is_ascii_alphanumeric() || c == '_')
- .collect::<String>();
+ .fold(String::new(), |mut s, c| {
+ if s.ends_with('_') && c == '_' {
+ return s;
+ }
+ s.push(c);
+ s
+ });
let stripped_len = filtered.trim_end_matches(SEPARATOR).len();
filtered.truncate(stripped_len);
if filtered.is_empty() {
| diff --git a/src/proc/namer.rs b/src/proc/namer.rs
--- a/src/proc/namer.rs
+++ b/src/proc/namer.rs
@@ -268,4 +276,6 @@ fn test() {
assert_eq!(namer.call("x"), "x");
assert_eq!(namer.call("x"), "x_1");
assert_eq!(namer.call("x1"), "x1_");
+ assert_eq!(namer.call("__x"), "_x");
+ assert_eq!(namer.call("1___x"), "_x_1");
}
diff --git a/tests/out/glsl/math-functions.main.Fragment.glsl b/tests/out/glsl/math-functions.main.Fragment.glsl
--- a/tests/out/glsl/math-functions.main.Fragment.glsl
+++ b/tests/out/glsl/math-functions.main.Fragment.glsl
@@ -3,55 +3,55 @@
precision highp float;
precision highp int;
-struct __modf_result_f32_ {
+struct _modf_result_f32_ {
float fract_;
float whole;
};
-struct __modf_result_vec2_f32_ {
+struct _modf_result_vec2_f32_ {
vec2 fract_;
vec2 whole;
};
-struct __modf_result_vec4_f32_ {
+struct _modf_result_vec4_f32_ {
vec4 fract_;
vec4 whole;
};
-struct __frexp_result_f32_ {
+struct _frexp_result_f32_ {
float fract_;
int exp_;
};
-struct __frexp_result_vec4_f32_ {
+struct _frexp_result_vec4_f32_ {
vec4 fract_;
ivec4 exp_;
};
-__modf_result_f32_ naga_modf(float arg) {
+_modf_result_f32_ naga_modf(float arg) {
float other;
float fract = modf(arg, other);
- return __modf_result_f32_(fract, other);
+ return _modf_result_f32_(fract, other);
}
-__modf_result_vec2_f32_ naga_modf(vec2 arg) {
+_modf_result_vec2_f32_ naga_modf(vec2 arg) {
vec2 other;
vec2 fract = modf(arg, other);
- return __modf_result_vec2_f32_(fract, other);
+ return _modf_result_vec2_f32_(fract, other);
}
-__modf_result_vec4_f32_ naga_modf(vec4 arg) {
+_modf_result_vec4_f32_ naga_modf(vec4 arg) {
vec4 other;
vec4 fract = modf(arg, other);
- return __modf_result_vec4_f32_(fract, other);
+ return _modf_result_vec4_f32_(fract, other);
}
-__frexp_result_f32_ naga_frexp(float arg) {
+_frexp_result_f32_ naga_frexp(float arg) {
int other;
float fract = frexp(arg, other);
- return __frexp_result_f32_(fract, other);
+ return _frexp_result_f32_(fract, other);
}
-__frexp_result_vec4_f32_ naga_frexp(vec4 arg) {
+_frexp_result_vec4_f32_ naga_frexp(vec4 arg) {
ivec4 other;
vec4 fract = frexp(arg, other);
- return __frexp_result_vec4_f32_(fract, other);
+ return _frexp_result_vec4_f32_(fract, other);
}
void main() {
diff --git a/tests/out/glsl/math-functions.main.Fragment.glsl b/tests/out/glsl/math-functions.main.Fragment.glsl
--- a/tests/out/glsl/math-functions.main.Fragment.glsl
+++ b/tests/out/glsl/math-functions.main.Fragment.glsl
@@ -90,13 +90,13 @@ void main() {
uvec2 clz_d = uvec2(ivec2(31) - findMSB(uvec2(1u)));
float lde_a = ldexp(1.0, 2);
vec2 lde_b = ldexp(vec2(1.0, 2.0), ivec2(3, 4));
- __modf_result_f32_ modf_a = naga_modf(1.5);
+ _modf_result_f32_ modf_a = naga_modf(1.5);
float modf_b = naga_modf(1.5).fract_;
float modf_c = naga_modf(1.5).whole;
- __modf_result_vec2_f32_ modf_d = naga_modf(vec2(1.5, 1.5));
+ _modf_result_vec2_f32_ modf_d = naga_modf(vec2(1.5, 1.5));
float modf_e = naga_modf(vec4(1.5, 1.5, 1.5, 1.5)).whole.x;
float modf_f = naga_modf(vec2(1.5, 1.5)).fract_.y;
- __frexp_result_f32_ frexp_a = naga_frexp(1.5);
+ _frexp_result_f32_ frexp_a = naga_frexp(1.5);
float frexp_b = naga_frexp(1.5).fract_;
int frexp_c = naga_frexp(1.5).exp_;
int frexp_d = naga_frexp(vec4(1.5, 1.5, 1.5, 1.5)).exp_.x;
diff --git a/tests/out/glsl/padding.vertex.Vertex.glsl b/tests/out/glsl/padding.vertex.Vertex.glsl
--- a/tests/out/glsl/padding.vertex.Vertex.glsl
+++ b/tests/out/glsl/padding.vertex.Vertex.glsl
@@ -20,9 +20,9 @@ struct Test3_ {
};
uniform Test_block_0Vertex { Test _group_0_binding_0_vs; };
-uniform Test2__block_1Vertex { Test2_ _group_0_binding_1_vs; };
+uniform Test2_block_1Vertex { Test2_ _group_0_binding_1_vs; };
-uniform Test3__block_2Vertex { Test3_ _group_0_binding_2_vs; };
+uniform Test3_block_2Vertex { Test3_ _group_0_binding_2_vs; };
void main() {
diff --git a/tests/out/hlsl/math-functions.hlsl b/tests/out/hlsl/math-functions.hlsl
--- a/tests/out/hlsl/math-functions.hlsl
+++ b/tests/out/hlsl/math-functions.hlsl
@@ -1,63 +1,63 @@
-struct __modf_result_f32_ {
+struct _modf_result_f32_ {
float fract;
float whole;
};
-struct __modf_result_vec2_f32_ {
+struct _modf_result_vec2_f32_ {
float2 fract;
float2 whole;
};
-struct __modf_result_vec4_f32_ {
+struct _modf_result_vec4_f32_ {
float4 fract;
float4 whole;
};
-struct __frexp_result_f32_ {
+struct _frexp_result_f32_ {
float fract;
int exp_;
};
-struct __frexp_result_vec4_f32_ {
+struct _frexp_result_vec4_f32_ {
float4 fract;
int4 exp_;
};
-__modf_result_f32_ naga_modf(float arg) {
+_modf_result_f32_ naga_modf(float arg) {
float other;
- __modf_result_f32_ result;
+ _modf_result_f32_ result;
result.fract = modf(arg, other);
result.whole = other;
return result;
}
-__modf_result_vec2_f32_ naga_modf(float2 arg) {
+_modf_result_vec2_f32_ naga_modf(float2 arg) {
float2 other;
- __modf_result_vec2_f32_ result;
+ _modf_result_vec2_f32_ result;
result.fract = modf(arg, other);
result.whole = other;
return result;
}
-__modf_result_vec4_f32_ naga_modf(float4 arg) {
+_modf_result_vec4_f32_ naga_modf(float4 arg) {
float4 other;
- __modf_result_vec4_f32_ result;
+ _modf_result_vec4_f32_ result;
result.fract = modf(arg, other);
result.whole = other;
return result;
}
-__frexp_result_f32_ naga_frexp(float arg) {
+_frexp_result_f32_ naga_frexp(float arg) {
float other;
- __frexp_result_f32_ result;
+ _frexp_result_f32_ result;
result.fract = sign(arg) * frexp(arg, other);
result.exp_ = other;
return result;
}
-__frexp_result_vec4_f32_ naga_frexp(float4 arg) {
+_frexp_result_vec4_f32_ naga_frexp(float4 arg) {
float4 other;
- __frexp_result_vec4_f32_ result;
+ _frexp_result_vec4_f32_ result;
result.fract = sign(arg) * frexp(arg, other);
result.exp_ = other;
return result;
diff --git a/tests/out/hlsl/math-functions.hlsl b/tests/out/hlsl/math-functions.hlsl
--- a/tests/out/hlsl/math-functions.hlsl
+++ b/tests/out/hlsl/math-functions.hlsl
@@ -100,13 +100,13 @@ void main()
uint2 clz_d = ((31u).xx - firstbithigh((1u).xx));
float lde_a = ldexp(1.0, 2);
float2 lde_b = ldexp(float2(1.0, 2.0), int2(3, 4));
- __modf_result_f32_ modf_a = naga_modf(1.5);
+ _modf_result_f32_ modf_a = naga_modf(1.5);
float modf_b = naga_modf(1.5).fract;
float modf_c = naga_modf(1.5).whole;
- __modf_result_vec2_f32_ modf_d = naga_modf(float2(1.5, 1.5));
+ _modf_result_vec2_f32_ modf_d = naga_modf(float2(1.5, 1.5));
float modf_e = naga_modf(float4(1.5, 1.5, 1.5, 1.5)).whole.x;
float modf_f = naga_modf(float2(1.5, 1.5)).fract.y;
- __frexp_result_f32_ frexp_a = naga_frexp(1.5);
+ _frexp_result_f32_ frexp_a = naga_frexp(1.5);
float frexp_b = naga_frexp(1.5).fract;
int frexp_c = naga_frexp(1.5).exp_;
int frexp_d = naga_frexp(float4(1.5, 1.5, 1.5, 1.5)).exp_.x;
diff --git a/tests/out/msl/math-functions.msl b/tests/out/msl/math-functions.msl
--- a/tests/out/msl/math-functions.msl
+++ b/tests/out/msl/math-functions.msl
@@ -4,55 +4,55 @@
using metal::uint;
-struct __modf_result_f32_ {
+struct _modf_result_f32_ {
float fract;
float whole;
};
-struct __modf_result_vec2_f32_ {
+struct _modf_result_vec2_f32_ {
metal::float2 fract;
metal::float2 whole;
};
-struct __modf_result_vec4_f32_ {
+struct _modf_result_vec4_f32_ {
metal::float4 fract;
metal::float4 whole;
};
-struct __frexp_result_f32_ {
+struct _frexp_result_f32_ {
float fract;
int exp;
};
-struct __frexp_result_vec4_f32_ {
+struct _frexp_result_vec4_f32_ {
metal::float4 fract;
metal::int4 exp;
};
-__modf_result_f32_ naga_modf(float arg) {
+_modf_result_f32_ naga_modf(float arg) {
float other;
float fract = metal::modf(arg, other);
- return __modf_result_f32_{ fract, other };
+ return _modf_result_f32_{ fract, other };
}
-__modf_result_vec2_f32_ naga_modf(metal::float2 arg) {
+_modf_result_vec2_f32_ naga_modf(metal::float2 arg) {
metal::float2 other;
metal::float2 fract = metal::modf(arg, other);
- return __modf_result_vec2_f32_{ fract, other };
+ return _modf_result_vec2_f32_{ fract, other };
}
-__modf_result_vec4_f32_ naga_modf(metal::float4 arg) {
+_modf_result_vec4_f32_ naga_modf(metal::float4 arg) {
metal::float4 other;
metal::float4 fract = metal::modf(arg, other);
- return __modf_result_vec4_f32_{ fract, other };
+ return _modf_result_vec4_f32_{ fract, other };
}
-__frexp_result_f32_ naga_frexp(float arg) {
+_frexp_result_f32_ naga_frexp(float arg) {
int other;
float fract = metal::frexp(arg, other);
- return __frexp_result_f32_{ fract, other };
+ return _frexp_result_f32_{ fract, other };
}
-__frexp_result_vec4_f32_ naga_frexp(metal::float4 arg) {
+_frexp_result_vec4_f32_ naga_frexp(metal::float4 arg) {
int4 other;
metal::float4 fract = metal::frexp(arg, other);
- return __frexp_result_vec4_f32_{ fract, other };
+ return _frexp_result_vec4_f32_{ fract, other };
}
fragment void main_(
diff --git a/tests/out/msl/math-functions.msl b/tests/out/msl/math-functions.msl
--- a/tests/out/msl/math-functions.msl
+++ b/tests/out/msl/math-functions.msl
@@ -95,13 +95,13 @@ fragment void main_(
metal::uint2 clz_d = metal::clz(metal::uint2(1u));
float lde_a = metal::ldexp(1.0, 2);
metal::float2 lde_b = metal::ldexp(metal::float2(1.0, 2.0), metal::int2(3, 4));
- __modf_result_f32_ modf_a = naga_modf(1.5);
+ _modf_result_f32_ modf_a = naga_modf(1.5);
float modf_b = naga_modf(1.5).fract;
float modf_c = naga_modf(1.5).whole;
- __modf_result_vec2_f32_ modf_d = naga_modf(metal::float2(1.5, 1.5));
+ _modf_result_vec2_f32_ modf_d = naga_modf(metal::float2(1.5, 1.5));
float modf_e = naga_modf(metal::float4(1.5, 1.5, 1.5, 1.5)).whole.x;
float modf_f = naga_modf(metal::float2(1.5, 1.5)).fract.y;
- __frexp_result_f32_ frexp_a = naga_frexp(1.5);
+ _frexp_result_f32_ frexp_a = naga_frexp(1.5);
float frexp_b = naga_frexp(1.5).fract;
int frexp_c = naga_frexp(1.5).exp;
int frexp_d = naga_frexp(metal::float4(1.5, 1.5, 1.5, 1.5)).exp.x;
| [glsl-out] Rename identifiers containing two consecutive underscores since they are reserved
> In addition, all identifiers containing two consecutive underscores (__) are reserved for use by underlying software layers. Defining such a name in a shader does not itself result in an error, but may result in unintended behaviors that stem from having multiple definitions of the same name.
_from https://registry.khronos.org/OpenGL/specs/gl/GLSLangSpec.4.60.html#keywords_
With naga-10, compiling from GLSL to the GLSL target produces a compile error: names containing consecutive underscores are reserved.
With naga-10, when a type name ends with a number, compiling GLSL code to the GLSL target also produces a compile error.
## compile error with glsl source code in vertex shader
``` txt
Error: 0:38: 'M01__block_1Vetrtex' : identifiers containing two consecutive underscores (__) are reserved as possible future keywords
Error: 0:40: 'M02__block_2Vetrtex' : identifiers containing two consecutive underscores (__) are reserved as possible future keywords
```
## source code
``` glsl
#version 450
precision highp float;
layout(location=0) in vec2 position;
layout(set=0,binding=0) uniform M01 {
mat4 project;
mat4 view;
};
layout(set=2,binding=0) uniform M02 {
mat4 world;
};
void main() {
gl_Position = project * view * world * vec4(position.x, position.y, 1.0, 1.0);
}
```
## specs of gles 3.0
By convention, all macro names containing two consecutive underscores ( __ ) are reserved for use by underlying software layers. Defining such a name in a shader does not itself result in an error, but may result in unintended behaviors that stem from having multiple definitions of the same name
|
## first underscore
When the type name ends with a number, the namer appends a trailing underscore:

## last underscore in code:
In naga\src\back\glsl\mod.rs, the write_interface_block function appends the block suffix as follows:
 
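The underscore-collapsing behavior that the namer patch introduces can be sketched as a standalone helper (a hypothetical simplification, not the actual naga API; the real `Namer::sanitize` also drops leading digits and trims trailing separators):

```rust
/// Retain only ASCII alphanumerics and `_`, then collapse runs of
/// consecutive underscores into a single one, mirroring the fold
/// used in the `src/proc/namer.rs` patch above.
fn collapse_underscores(s: &str) -> String {
    s.chars()
        .filter(|&c| c.is_ascii_alphanumeric() || c == '_')
        .fold(String::new(), |mut out, c| {
            // Skip an underscore if the previously emitted char was one.
            if !(out.ends_with('_') && c == '_') {
                out.push(c);
            }
            out
        })
}
```

For example, `collapse_underscores("Test2__block")` yields `"Test2_block"`, which is why the `padding.vertex` expected output changes from `Test2__block_1Vertex` to `Test2_block_1Vertex`.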
 | 2023-09-26T17:57:14 | 0.13 | a17a93ef8f6a06ed8dfc3145276663786828df03 | [
"proc::namer::test"
] | [
"arena::tests::append_non_unique",
"arena::tests::append_unique",
"arena::tests::fetch_or_append_non_unique",
"arena::tests::fetch_or_append_unique",
"back::msl::test_error_size",
"back::spv::layout::test_physical_layout_in_words",
"back::spv::layout::test_logical_layout_in_words",
"back::spv::writer:... | [] | [] | 67 |
gfx-rs/naga | 2,454 | gfx-rs__naga-2454 | [
"1161",
"1441"
] | f49314dbbdfb3beb8c91c4e48402ca7a0591edeb | diff --git a/src/back/glsl/keywords.rs b/src/back/glsl/keywords.rs
--- a/src/back/glsl/keywords.rs
+++ b/src/back/glsl/keywords.rs
@@ -477,4 +477,7 @@ pub const RESERVED_KEYWORDS: &[&str] = &[
// entry point name (should not be shadowed)
//
"main",
+ // Naga utilities:
+ super::MODF_FUNCTION,
+ super::FREXP_FUNCTION,
];
diff --git a/src/back/glsl/mod.rs b/src/back/glsl/mod.rs
--- a/src/back/glsl/mod.rs
+++ b/src/back/glsl/mod.rs
@@ -72,6 +72,9 @@ pub const SUPPORTED_ES_VERSIONS: &[u16] = &[300, 310, 320];
/// of detail for bounds checking in `ImageLoad`
const CLAMPED_LOD_SUFFIX: &str = "_clamped_lod";
+pub(crate) const MODF_FUNCTION: &str = "naga_modf";
+pub(crate) const FREXP_FUNCTION: &str = "naga_frexp";
+
/// Mapping between resources and bindings.
pub type BindingMap = std::collections::BTreeMap<crate::ResourceBinding, u8>;
diff --git a/src/back/glsl/mod.rs b/src/back/glsl/mod.rs
--- a/src/back/glsl/mod.rs
+++ b/src/back/glsl/mod.rs
@@ -631,6 +634,53 @@ impl<'a, W: Write> Writer<'a, W> {
}
}
+ // Write functions to create special types.
+ for (type_key, struct_ty) in self.module.special_types.predeclared_types.iter() {
+ match type_key {
+ &crate::PredeclaredType::ModfResult { size, width }
+ | &crate::PredeclaredType::FrexpResult { size, width } => {
+ let arg_type_name_owner;
+ let arg_type_name = if let Some(size) = size {
+ arg_type_name_owner =
+ format!("{}vec{}", if width == 8 { "d" } else { "" }, size as u8);
+ &arg_type_name_owner
+ } else if width == 8 {
+ "double"
+ } else {
+ "float"
+ };
+
+ let other_type_name_owner;
+ let (defined_func_name, called_func_name, other_type_name) =
+ if matches!(type_key, &crate::PredeclaredType::ModfResult { .. }) {
+ (MODF_FUNCTION, "modf", arg_type_name)
+ } else {
+ let other_type_name = if let Some(size) = size {
+ other_type_name_owner = format!("ivec{}", size as u8);
+ &other_type_name_owner
+ } else {
+ "int"
+ };
+ (FREXP_FUNCTION, "frexp", other_type_name)
+ };
+
+ let struct_name = &self.names[&NameKey::Type(*struct_ty)];
+
+ writeln!(self.out)?;
+ writeln!(
+ self.out,
+ "{} {defined_func_name}({arg_type_name} arg) {{
+ {other_type_name} other;
+ {arg_type_name} fract = {called_func_name}(arg, other);
+ return {}(fract, other);
+}}",
+ struct_name, struct_name
+ )?;
+ }
+ &crate::PredeclaredType::AtomicCompareExchangeWeakResult { .. } => {}
+ }
+ }
+
// Write all named constants
let mut constants = self
.module
diff --git a/src/back/glsl/mod.rs b/src/back/glsl/mod.rs
--- a/src/back/glsl/mod.rs
+++ b/src/back/glsl/mod.rs
@@ -2997,8 +3047,8 @@ impl<'a, W: Write> Writer<'a, W> {
Mf::Round => "roundEven",
Mf::Fract => "fract",
Mf::Trunc => "trunc",
- Mf::Modf => "modf",
- Mf::Frexp => "frexp",
+ Mf::Modf => MODF_FUNCTION,
+ Mf::Frexp => FREXP_FUNCTION,
Mf::Ldexp => "ldexp",
// exponent
Mf::Exp => "exp",
diff --git a/src/back/hlsl/help.rs b/src/back/hlsl/help.rs
--- a/src/back/hlsl/help.rs
+++ b/src/back/hlsl/help.rs
@@ -781,6 +781,59 @@ impl<'a, W: Write> super::Writer<'a, W> {
Ok(())
}
+ /// Write functions to create special types.
+ pub(super) fn write_special_functions(&mut self, module: &crate::Module) -> BackendResult {
+ for (type_key, struct_ty) in module.special_types.predeclared_types.iter() {
+ match type_key {
+ &crate::PredeclaredType::ModfResult { size, width }
+ | &crate::PredeclaredType::FrexpResult { size, width } => {
+ let arg_type_name_owner;
+ let arg_type_name = if let Some(size) = size {
+ arg_type_name_owner = format!(
+ "{}{}",
+ if width == 8 { "double" } else { "float" },
+ size as u8
+ );
+ &arg_type_name_owner
+ } else if width == 8 {
+ "double"
+ } else {
+ "float"
+ };
+
+ let (defined_func_name, called_func_name, second_field_name, sign_multiplier) =
+ if matches!(type_key, &crate::PredeclaredType::ModfResult { .. }) {
+ (super::writer::MODF_FUNCTION, "modf", "whole", "")
+ } else {
+ (
+ super::writer::FREXP_FUNCTION,
+ "frexp",
+ "exp_",
+ "sign(arg) * ",
+ )
+ };
+
+ let struct_name = &self.names[&NameKey::Type(*struct_ty)];
+
+ writeln!(
+ self.out,
+ "{struct_name} {defined_func_name}({arg_type_name} arg) {{
+ {arg_type_name} other;
+ {struct_name} result;
+ result.fract = {sign_multiplier}{called_func_name}(arg, other);
+ result.{second_field_name} = other;
+ return result;
+}}"
+ )?;
+ writeln!(self.out)?;
+ }
+ &crate::PredeclaredType::AtomicCompareExchangeWeakResult { .. } => {}
+ }
+ }
+
+ Ok(())
+ }
+
/// Helper function that writes compose wrapped functions
pub(super) fn write_wrapped_compose_functions(
&mut self,
diff --git a/src/back/hlsl/keywords.rs b/src/back/hlsl/keywords.rs
--- a/src/back/hlsl/keywords.rs
+++ b/src/back/hlsl/keywords.rs
@@ -814,6 +814,9 @@ pub const RESERVED: &[&str] = &[
"TextureBuffer",
"ConstantBuffer",
"RayQuery",
+ // Naga utilities
+ super::writer::MODF_FUNCTION,
+ super::writer::FREXP_FUNCTION,
];
// DXC scalar types, from https://github.com/microsoft/DirectXShaderCompiler/blob/18c9e114f9c314f93e68fbc72ce207d4ed2e65ae/tools/clang/lib/AST/ASTContextHLSL.cpp#L48-L254
diff --git a/src/back/hlsl/writer.rs b/src/back/hlsl/writer.rs
--- a/src/back/hlsl/writer.rs
+++ b/src/back/hlsl/writer.rs
@@ -17,6 +17,9 @@ const SPECIAL_BASE_VERTEX: &str = "base_vertex";
const SPECIAL_BASE_INSTANCE: &str = "base_instance";
const SPECIAL_OTHER: &str = "other";
+pub(crate) const MODF_FUNCTION: &str = "naga_modf";
+pub(crate) const FREXP_FUNCTION: &str = "naga_frexp";
+
struct EpStructMember {
name: String,
ty: Handle<crate::Type>,
diff --git a/src/back/hlsl/writer.rs b/src/back/hlsl/writer.rs
--- a/src/back/hlsl/writer.rs
+++ b/src/back/hlsl/writer.rs
@@ -244,6 +247,8 @@ impl<'a, W: fmt::Write> super::Writer<'a, W> {
}
}
+ self.write_special_functions(module)?;
+
self.write_wrapped_compose_functions(module, &module.const_expressions)?;
// Write all named constants
diff --git a/src/back/hlsl/writer.rs b/src/back/hlsl/writer.rs
--- a/src/back/hlsl/writer.rs
+++ b/src/back/hlsl/writer.rs
@@ -2675,8 +2680,8 @@ impl<'a, W: fmt::Write> super::Writer<'a, W> {
Mf::Round => Function::Regular("round"),
Mf::Fract => Function::Regular("frac"),
Mf::Trunc => Function::Regular("trunc"),
- Mf::Modf => Function::Regular("modf"),
- Mf::Frexp => Function::Regular("frexp"),
+ Mf::Modf => Function::Regular(MODF_FUNCTION),
+ Mf::Frexp => Function::Regular(FREXP_FUNCTION),
Mf::Ldexp => Function::Regular("ldexp"),
// exponent
Mf::Exp => Function::Regular("exp"),
diff --git a/src/back/msl/keywords.rs b/src/back/msl/keywords.rs
--- a/src/back/msl/keywords.rs
+++ b/src/back/msl/keywords.rs
@@ -214,4 +214,6 @@ pub const RESERVED: &[&str] = &[
// Naga utilities
"DefaultConstructible",
"clamped_lod_e",
+ super::writer::FREXP_FUNCTION,
+ super::writer::MODF_FUNCTION,
];
diff --git a/src/back/msl/writer.rs b/src/back/msl/writer.rs
--- a/src/back/msl/writer.rs
+++ b/src/back/msl/writer.rs
@@ -32,6 +32,9 @@ const RAY_QUERY_FIELD_INTERSECTION: &str = "intersection";
const RAY_QUERY_FIELD_READY: &str = "ready";
const RAY_QUERY_FUN_MAP_INTERSECTION: &str = "_map_intersection_type";
+pub(crate) const MODF_FUNCTION: &str = "naga_modf";
+pub(crate) const FREXP_FUNCTION: &str = "naga_frexp";
+
/// Write the Metal name for a Naga numeric type: scalar, vector, or matrix.
///
/// The `sizes` slice determines whether this function writes a
diff --git a/src/back/msl/writer.rs b/src/back/msl/writer.rs
--- a/src/back/msl/writer.rs
+++ b/src/back/msl/writer.rs
@@ -1678,8 +1681,8 @@ impl<W: Write> Writer<W> {
Mf::Round => "rint",
Mf::Fract => "fract",
Mf::Trunc => "trunc",
- Mf::Modf => "modf",
- Mf::Frexp => "frexp",
+ Mf::Modf => MODF_FUNCTION,
+ Mf::Frexp => FREXP_FUNCTION,
Mf::Ldexp => "ldexp",
// exponent
Mf::Exp => "exp",
diff --git a/src/back/msl/writer.rs b/src/back/msl/writer.rs
--- a/src/back/msl/writer.rs
+++ b/src/back/msl/writer.rs
@@ -1813,6 +1816,9 @@ impl<W: Write> Writer<W> {
write!(self.out, "((")?;
self.put_expression(arg, context, false)?;
write!(self.out, ") * 57.295779513082322865)")?;
+ } else if fun == Mf::Modf || fun == Mf::Frexp {
+ write!(self.out, "{fun_name}")?;
+ self.put_call_parameters(iter::once(arg), context)?;
} else {
write!(self.out, "{NAMESPACE}::{fun_name}")?;
self.put_call_parameters(
diff --git a/src/back/msl/writer.rs b/src/back/msl/writer.rs
--- a/src/back/msl/writer.rs
+++ b/src/back/msl/writer.rs
@@ -3236,6 +3242,57 @@ impl<W: Write> Writer<W> {
}
}
}
+
+ // Write functions to create special types.
+ for (type_key, struct_ty) in module.special_types.predeclared_types.iter() {
+ match type_key {
+ &crate::PredeclaredType::ModfResult { size, width }
+ | &crate::PredeclaredType::FrexpResult { size, width } => {
+ let arg_type_name_owner;
+ let arg_type_name = if let Some(size) = size {
+ arg_type_name_owner = format!(
+ "{NAMESPACE}::{}{}",
+ if width == 8 { "double" } else { "float" },
+ size as u8
+ );
+ &arg_type_name_owner
+ } else if width == 8 {
+ "double"
+ } else {
+ "float"
+ };
+
+ let other_type_name_owner;
+ let (defined_func_name, called_func_name, other_type_name) =
+ if matches!(type_key, &crate::PredeclaredType::ModfResult { .. }) {
+ (MODF_FUNCTION, "modf", arg_type_name)
+ } else {
+ let other_type_name = if let Some(size) = size {
+ other_type_name_owner = format!("int{}", size as u8);
+ &other_type_name_owner
+ } else {
+ "int"
+ };
+ (FREXP_FUNCTION, "frexp", other_type_name)
+ };
+
+ let struct_name = &self.names[&NameKey::Type(*struct_ty)];
+
+ writeln!(self.out)?;
+ writeln!(
+ self.out,
+ "{} {defined_func_name}({arg_type_name} arg) {{
+ {other_type_name} other;
+ {arg_type_name} fract = {NAMESPACE}::{called_func_name}(arg, other);
+ return {}{{ fract, other }};
+}}",
+ struct_name, struct_name
+ )?;
+ }
+ &crate::PredeclaredType::AtomicCompareExchangeWeakResult { .. } => {}
+ }
+ }
+
Ok(())
}
diff --git a/src/back/spv/block.rs b/src/back/spv/block.rs
--- a/src/back/spv/block.rs
+++ b/src/back/spv/block.rs
@@ -787,8 +787,8 @@ impl<'w> BlockContext<'w> {
Mf::Floor => MathOp::Ext(spirv::GLOp::Floor),
Mf::Fract => MathOp::Ext(spirv::GLOp::Fract),
Mf::Trunc => MathOp::Ext(spirv::GLOp::Trunc),
- Mf::Modf => MathOp::Ext(spirv::GLOp::Modf),
- Mf::Frexp => MathOp::Ext(spirv::GLOp::Frexp),
+ Mf::Modf => MathOp::Ext(spirv::GLOp::ModfStruct),
+ Mf::Frexp => MathOp::Ext(spirv::GLOp::FrexpStruct),
Mf::Ldexp => MathOp::Ext(spirv::GLOp::Ldexp),
// geometry
Mf::Dot => match *self.fun_info[arg].ty.inner_with(&self.ir_module.types) {
diff --git a/src/back/wgsl/writer.rs b/src/back/wgsl/writer.rs
--- a/src/back/wgsl/writer.rs
+++ b/src/back/wgsl/writer.rs
@@ -97,6 +97,14 @@ impl<W: Write> Writer<W> {
self.ep_results.clear();
}
+ fn is_builtin_wgsl_struct(&self, module: &Module, handle: Handle<crate::Type>) -> bool {
+ module
+ .special_types
+ .predeclared_types
+ .values()
+ .any(|t| *t == handle)
+ }
+
pub fn write(&mut self, module: &Module, info: &valid::ModuleInfo) -> BackendResult {
self.reset(module);
diff --git a/src/back/wgsl/writer.rs b/src/back/wgsl/writer.rs
--- a/src/back/wgsl/writer.rs
+++ b/src/back/wgsl/writer.rs
@@ -109,13 +117,13 @@ impl<W: Write> Writer<W> {
// Write all structs
for (handle, ty) in module.types.iter() {
- if let TypeInner::Struct {
- ref members,
- span: _,
- } = ty.inner
- {
- self.write_struct(module, handle, members)?;
- writeln!(self.out)?;
+ if let TypeInner::Struct { ref members, .. } = ty.inner {
+ {
+ if !self.is_builtin_wgsl_struct(module, handle) {
+ self.write_struct(module, handle, members)?;
+ writeln!(self.out)?;
+ }
+ }
}
}
diff --git a/src/front/type_gen.rs b/src/front/type_gen.rs
--- a/src/front/type_gen.rs
+++ b/src/front/type_gen.rs
@@ -5,55 +5,6 @@ Type generators.
use crate::{arena::Handle, span::Span};
impl crate::Module {
- pub fn generate_atomic_compare_exchange_result(
- &mut self,
- kind: crate::ScalarKind,
- width: crate::Bytes,
- ) -> Handle<crate::Type> {
- let bool_ty = self.types.insert(
- crate::Type {
- name: None,
- inner: crate::TypeInner::Scalar {
- kind: crate::ScalarKind::Bool,
- width: crate::BOOL_WIDTH,
- },
- },
- Span::UNDEFINED,
- );
- let scalar_ty = self.types.insert(
- crate::Type {
- name: None,
- inner: crate::TypeInner::Scalar { kind, width },
- },
- Span::UNDEFINED,
- );
-
- self.types.insert(
- crate::Type {
- name: Some(format!(
- "__atomic_compare_exchange_result<{kind:?},{width}>"
- )),
- inner: crate::TypeInner::Struct {
- members: vec![
- crate::StructMember {
- name: Some("old_value".to_string()),
- ty: scalar_ty,
- binding: None,
- offset: 0,
- },
- crate::StructMember {
- name: Some("exchanged".to_string()),
- ty: bool_ty,
- binding: None,
- offset: 4,
- },
- ],
- span: 8,
- },
- },
- Span::UNDEFINED,
- )
- }
/// Populate this module's [`SpecialTypes::ray_desc`] type.
///
/// [`SpecialTypes::ray_desc`] is the type of the [`descriptor`] operand of
diff --git a/src/front/type_gen.rs b/src/front/type_gen.rs
--- a/src/front/type_gen.rs
+++ b/src/front/type_gen.rs
@@ -311,4 +262,203 @@ impl crate::Module {
self.special_types.ray_intersection = Some(handle);
handle
}
+
+ /// Populate this module's [`SpecialTypes::predeclared_types`] type and return the handle.
+ ///
+ /// [`SpecialTypes::predeclared_types`]: crate::SpecialTypes::predeclared_types
+ pub fn generate_predeclared_type(
+ &mut self,
+ special_type: crate::PredeclaredType,
+ ) -> Handle<crate::Type> {
+ use std::fmt::Write;
+
+ if let Some(value) = self.special_types.predeclared_types.get(&special_type) {
+ return *value;
+ }
+
+ let ty = match special_type {
+ crate::PredeclaredType::AtomicCompareExchangeWeakResult { kind, width } => {
+ let bool_ty = self.types.insert(
+ crate::Type {
+ name: None,
+ inner: crate::TypeInner::Scalar {
+ kind: crate::ScalarKind::Bool,
+ width: crate::BOOL_WIDTH,
+ },
+ },
+ Span::UNDEFINED,
+ );
+ let scalar_ty = self.types.insert(
+ crate::Type {
+ name: None,
+ inner: crate::TypeInner::Scalar { kind, width },
+ },
+ Span::UNDEFINED,
+ );
+
+ crate::Type {
+ name: Some(format!(
+ "__atomic_compare_exchange_result<{kind:?},{width}>"
+ )),
+ inner: crate::TypeInner::Struct {
+ members: vec![
+ crate::StructMember {
+ name: Some("old_value".to_string()),
+ ty: scalar_ty,
+ binding: None,
+ offset: 0,
+ },
+ crate::StructMember {
+ name: Some("exchanged".to_string()),
+ ty: bool_ty,
+ binding: None,
+ offset: 4,
+ },
+ ],
+ span: 8,
+ },
+ }
+ }
+ crate::PredeclaredType::ModfResult { size, width } => {
+ let float_ty = self.types.insert(
+ crate::Type {
+ name: None,
+ inner: crate::TypeInner::Scalar {
+ kind: crate::ScalarKind::Float,
+ width,
+ },
+ },
+ Span::UNDEFINED,
+ );
+
+ let (member_ty, second_offset) = if let Some(size) = size {
+ let vec_ty = self.types.insert(
+ crate::Type {
+ name: None,
+ inner: crate::TypeInner::Vector {
+ size,
+ kind: crate::ScalarKind::Float,
+ width,
+ },
+ },
+ Span::UNDEFINED,
+ );
+ (vec_ty, size as u32 * width as u32)
+ } else {
+ (float_ty, width as u32)
+ };
+
+ let mut type_name = "__modf_result_".to_string();
+ if let Some(size) = size {
+ let _ = write!(type_name, "vec{}_", size as u8);
+ }
+ let _ = write!(type_name, "f{}", width * 8);
+
+ crate::Type {
+ name: Some(type_name),
+ inner: crate::TypeInner::Struct {
+ members: vec![
+ crate::StructMember {
+ name: Some("fract".to_string()),
+ ty: member_ty,
+ binding: None,
+ offset: 0,
+ },
+ crate::StructMember {
+ name: Some("whole".to_string()),
+ ty: member_ty,
+ binding: None,
+ offset: second_offset,
+ },
+ ],
+ span: second_offset * 2,
+ },
+ }
+ }
+ crate::PredeclaredType::FrexpResult { size, width } => {
+ let float_ty = self.types.insert(
+ crate::Type {
+ name: None,
+ inner: crate::TypeInner::Scalar {
+ kind: crate::ScalarKind::Float,
+ width,
+ },
+ },
+ Span::UNDEFINED,
+ );
+
+ let int_ty = self.types.insert(
+ crate::Type {
+ name: None,
+ inner: crate::TypeInner::Scalar {
+ kind: crate::ScalarKind::Sint,
+ width,
+ },
+ },
+ Span::UNDEFINED,
+ );
+
+ let (fract_member_ty, exp_member_ty, second_offset) = if let Some(size) = size {
+ let vec_float_ty = self.types.insert(
+ crate::Type {
+ name: None,
+ inner: crate::TypeInner::Vector {
+ size,
+ kind: crate::ScalarKind::Float,
+ width,
+ },
+ },
+ Span::UNDEFINED,
+ );
+ let vec_int_ty = self.types.insert(
+ crate::Type {
+ name: None,
+ inner: crate::TypeInner::Vector {
+ size,
+ kind: crate::ScalarKind::Sint,
+ width,
+ },
+ },
+ Span::UNDEFINED,
+ );
+ (vec_float_ty, vec_int_ty, size as u32 * width as u32)
+ } else {
+ (float_ty, int_ty, width as u32)
+ };
+
+ let mut type_name = "__frexp_result_".to_string();
+ if let Some(size) = size {
+ let _ = write!(type_name, "vec{}_", size as u8);
+ }
+ let _ = write!(type_name, "f{}", width * 8);
+
+ crate::Type {
+ name: Some(type_name),
+ inner: crate::TypeInner::Struct {
+ members: vec![
+ crate::StructMember {
+ name: Some("fract".to_string()),
+ ty: fract_member_ty,
+ binding: None,
+ offset: 0,
+ },
+ crate::StructMember {
+ name: Some("exp".to_string()),
+ ty: exp_member_ty,
+ binding: None,
+ offset: second_offset,
+ },
+ ],
+ span: second_offset * 2,
+ },
+ }
+ }
+ };
+
+ let handle = self.types.insert(ty, Span::UNDEFINED);
+ self.special_types
+ .predeclared_types
+ .insert(special_type, handle);
+ handle
+ }
}
diff --git a/src/front/wgsl/lower/mod.rs b/src/front/wgsl/lower/mod.rs
--- a/src/front/wgsl/lower/mod.rs
+++ b/src/front/wgsl/lower/mod.rs
@@ -1745,6 +1745,25 @@ impl<'source, 'temp> Lowerer<'source, 'temp> {
args.finish()?;
+ if fun == crate::MathFunction::Modf || fun == crate::MathFunction::Frexp {
+ ctx.grow_types(arg)?;
+ if let Some((size, width)) = match *ctx.resolved_inner(arg) {
+ crate::TypeInner::Scalar { width, .. } => Some((None, width)),
+ crate::TypeInner::Vector { size, width, .. } => {
+ Some((Some(size), width))
+ }
+ _ => None,
+ } {
+ ctx.module.generate_predeclared_type(
+ if fun == crate::MathFunction::Modf {
+ crate::PredeclaredType::ModfResult { size, width }
+ } else {
+ crate::PredeclaredType::FrexpResult { size, width }
+ },
+ );
+ }
+ }
+
crate::Expression::Math {
fun,
arg,
diff --git a/src/front/wgsl/lower/mod.rs b/src/front/wgsl/lower/mod.rs
--- a/src/front/wgsl/lower/mod.rs
+++ b/src/front/wgsl/lower/mod.rs
@@ -1880,10 +1899,12 @@ impl<'source, 'temp> Lowerer<'source, 'temp> {
let expression = match *ctx.resolved_inner(value) {
crate::TypeInner::Scalar { kind, width } => {
crate::Expression::AtomicResult {
- //TODO: cache this to avoid generating duplicate types
- ty: ctx
- .module
- .generate_atomic_compare_exchange_result(kind, width),
+ ty: ctx.module.generate_predeclared_type(
+ crate::PredeclaredType::AtomicCompareExchangeWeakResult {
+ kind,
+ width,
+ },
+ ),
comparison: true,
}
}
diff --git a/src/lib.rs b/src/lib.rs
--- a/src/lib.rs
+++ b/src/lib.rs
@@ -1945,6 +1945,31 @@ pub struct EntryPoint {
pub function: Function,
}
+/// Return types predeclared for the frexp, modf, and atomicCompareExchangeWeak built-in functions.
+///
+/// These cannot be spelled in WGSL source.
+///
+/// Stored in [`SpecialTypes::predeclared_types`] and created by [`Module::generate_predeclared_type`].
+#[derive(Debug, PartialEq, Eq, Hash)]
+#[cfg_attr(feature = "clone", derive(Clone))]
+#[cfg_attr(feature = "serialize", derive(Serialize))]
+#[cfg_attr(feature = "deserialize", derive(Deserialize))]
+#[cfg_attr(feature = "arbitrary", derive(Arbitrary))]
+pub enum PredeclaredType {
+ AtomicCompareExchangeWeakResult {
+ kind: ScalarKind,
+ width: Bytes,
+ },
+ ModfResult {
+ size: Option<VectorSize>,
+ width: Bytes,
+ },
+ FrexpResult {
+ size: Option<VectorSize>,
+ width: Bytes,
+ },
+}
+
/// Set of special types that can be optionally generated by the frontends.
#[derive(Debug, Default)]
#[cfg_attr(feature = "clone", derive(Clone))]
diff --git a/src/lib.rs b/src/lib.rs
--- a/src/lib.rs
+++ b/src/lib.rs
@@ -1963,6 +1988,12 @@ pub struct SpecialTypes {
/// Call [`Module::generate_ray_intersection_type`] to populate
/// this if needed and return the handle.
pub ray_intersection: Option<Handle<Type>>,
+
+ /// Types for predeclared wgsl types instantiated on demand.
+ ///
+ /// Call [`Module::generate_predeclared_type`] to populate this if
+ /// needed and return the handle.
+ pub predeclared_types: indexmap::IndexMap<PredeclaredType, Handle<Type>>,
}
/// Shader module.
diff --git a/src/proc/mod.rs b/src/proc/mod.rs
--- a/src/proc/mod.rs
+++ b/src/proc/mod.rs
@@ -375,8 +375,8 @@ impl super::MathFunction {
Self::Round => 1,
Self::Fract => 1,
Self::Trunc => 1,
- Self::Modf => 2,
- Self::Frexp => 2,
+ Self::Modf => 1,
+ Self::Frexp => 1,
Self::Ldexp => 2,
// exponent
Self::Exp => 1,
diff --git a/src/proc/typifier.rs b/src/proc/typifier.rs
--- a/src/proc/typifier.rs
+++ b/src/proc/typifier.rs
@@ -706,8 +706,6 @@ impl<'a> ResolveContext<'a> {
Mf::Round |
Mf::Fract |
Mf::Trunc |
- Mf::Modf |
- Mf::Frexp |
Mf::Ldexp |
// exponent
Mf::Exp |
diff --git a/src/proc/typifier.rs b/src/proc/typifier.rs
--- a/src/proc/typifier.rs
+++ b/src/proc/typifier.rs
@@ -715,6 +713,31 @@ impl<'a> ResolveContext<'a> {
Mf::Log |
Mf::Log2 |
Mf::Pow => res_arg.clone(),
+ Mf::Modf | Mf::Frexp => {
+ let (size, width) = match res_arg.inner_with(types) {
+ &Ti::Scalar {
+ kind: crate::ScalarKind::Float,
+ width,
+ } => (None, width),
+ &Ti::Vector {
+ kind: crate::ScalarKind::Float,
+ size,
+ width,
+ } => (Some(size), width),
+ ref other =>
+ return Err(ResolveError::IncompatibleOperands(format!("{fun:?}({other:?}, _)")))
+ };
+ let result = self
+ .special_types
+ .predeclared_types
+ .get(&if fun == Mf::Modf {
+ crate::PredeclaredType::ModfResult { size, width }
+ } else {
+ crate::PredeclaredType::FrexpResult { size, width }
+ })
+ .ok_or(ResolveError::MissingSpecialType)?;
+ TypeResolution::Handle(*result)
+ },
// geometry
Mf::Dot => match *res_arg.inner_with(types) {
Ti::Vector {
diff --git a/src/valid/expression.rs b/src/valid/expression.rs
--- a/src/valid/expression.rs
+++ b/src/valid/expression.rs
@@ -1015,38 +1015,20 @@ impl super::Validator {
}
}
Mf::Modf | Mf::Frexp => {
- let arg1_ty = match (arg1_ty, arg2_ty, arg3_ty) {
- (Some(ty1), None, None) => ty1,
- _ => return Err(ExpressionError::WrongArgumentCount(fun)),
- };
- let (size0, width0) = match *arg_ty {
+ if arg1_ty.is_some() | arg2_ty.is_some() | arg3_ty.is_some() {
+ return Err(ExpressionError::WrongArgumentCount(fun));
+ }
+ if !matches!(
+ *arg_ty,
Ti::Scalar {
kind: Sk::Float,
- width,
- } => (None, width),
- Ti::Vector {
- kind: Sk::Float,
- size,
- width,
- } => (Some(size), width),
- _ => return Err(ExpressionError::InvalidArgumentType(fun, 0, arg)),
- };
- let good = match *arg1_ty {
- Ti::Pointer { base, space: _ } => module.types[base].inner == *arg_ty,
- Ti::ValuePointer {
- size,
+ ..
+ } | Ti::Vector {
kind: Sk::Float,
- width,
- space: _,
- } => size == size0 && width == width0,
- _ => false,
- };
- if !good {
- return Err(ExpressionError::InvalidArgumentType(
- fun,
- 1,
- arg1.unwrap(),
- ));
+ ..
+ },
+ ) {
+ return Err(ExpressionError::InvalidArgumentType(fun, 1, arg));
}
}
Mf::Ldexp => {
| diff --git a/src/front/wgsl/tests.rs b/src/front/wgsl/tests.rs
--- a/src/front/wgsl/tests.rs
+++ b/src/front/wgsl/tests.rs
@@ -456,10 +456,11 @@ fn binary_expression_mixed_scalar_and_vector_operands() {
#[test]
fn parse_pointers() {
parse_str(
- "fn foo() {
+ "fn foo(a: ptr<private, f32>) -> f32 { return *a; }
+ fn bar() {
var x: f32 = 1.0;
let px = &x;
- let py = frexp(0.5, px);
+ let py = foo(px);
}",
)
.unwrap();
diff --git a/tests/in/math-functions.wgsl b/tests/in/math-functions.wgsl
--- a/tests/in/math-functions.wgsl
+++ b/tests/in/math-functions.wgsl
@@ -31,4 +31,14 @@ fn main() {
let clz_d = countLeadingZeros(vec2(1u));
let lde_a = ldexp(1.0, 2);
let lde_b = ldexp(vec2(1.0, 2.0), vec2(3, 4));
+ let modf_a = modf(1.5);
+ let modf_b = modf(1.5).fract;
+ let modf_c = modf(1.5).whole;
+ let modf_d = modf(vec2(1.5, 1.5));
+ let modf_e = modf(vec4(1.5, 1.5, 1.5, 1.5)).whole.x;
+ let modf_f: f32 = modf(vec2(1.5, 1.5)).fract.y;
+ let frexp_a = frexp(1.5);
+ let frexp_b = frexp(1.5).fract;
+ let frexp_c: i32 = frexp(1.5).exp;
+ let frexp_d: i32 = frexp(vec4(1.5, 1.5, 1.5, 1.5)).exp.x;
}
diff --git a/tests/out/glsl/math-functions.main.Fragment.glsl b/tests/out/glsl/math-functions.main.Fragment.glsl
--- a/tests/out/glsl/math-functions.main.Fragment.glsl
+++ b/tests/out/glsl/math-functions.main.Fragment.glsl
@@ -3,6 +3,56 @@
precision highp float;
precision highp int;
+struct __modf_result_f32_ {
+ float fract_;
+ float whole;
+};
+struct __modf_result_vec2_f32_ {
+ vec2 fract_;
+ vec2 whole;
+};
+struct __modf_result_vec4_f32_ {
+ vec4 fract_;
+ vec4 whole;
+};
+struct __frexp_result_f32_ {
+ float fract_;
+ int exp_;
+};
+struct __frexp_result_vec4_f32_ {
+ vec4 fract_;
+ ivec4 exp_;
+};
+
+__modf_result_f32_ naga_modf(float arg) {
+ float other;
+ float fract = modf(arg, other);
+ return __modf_result_f32_(fract, other);
+}
+
+__modf_result_vec2_f32_ naga_modf(vec2 arg) {
+ vec2 other;
+ vec2 fract = modf(arg, other);
+ return __modf_result_vec2_f32_(fract, other);
+}
+
+__modf_result_vec4_f32_ naga_modf(vec4 arg) {
+ vec4 other;
+ vec4 fract = modf(arg, other);
+ return __modf_result_vec4_f32_(fract, other);
+}
+
+__frexp_result_f32_ naga_frexp(float arg) {
+ int other;
+ float fract = frexp(arg, other);
+ return __frexp_result_f32_(fract, other);
+}
+
+__frexp_result_vec4_f32_ naga_frexp(vec4 arg) {
+ ivec4 other;
+ vec4 fract = frexp(arg, other);
+ return __frexp_result_vec4_f32_(fract, other);
+}
void main() {
vec4 v = vec4(0.0);
diff --git a/tests/out/glsl/math-functions.main.Fragment.glsl b/tests/out/glsl/math-functions.main.Fragment.glsl
--- a/tests/out/glsl/math-functions.main.Fragment.glsl
+++ b/tests/out/glsl/math-functions.main.Fragment.glsl
@@ -36,5 +86,15 @@ void main() {
uvec2 clz_d = uvec2(ivec2(31) - findMSB(uvec2(1u)));
float lde_a = ldexp(1.0, 2);
vec2 lde_b = ldexp(vec2(1.0, 2.0), ivec2(3, 4));
+ __modf_result_f32_ modf_a = naga_modf(1.5);
+ float modf_b = naga_modf(1.5).fract_;
+ float modf_c = naga_modf(1.5).whole;
+ __modf_result_vec2_f32_ modf_d = naga_modf(vec2(1.5, 1.5));
+ float modf_e = naga_modf(vec4(1.5, 1.5, 1.5, 1.5)).whole.x;
+ float modf_f = naga_modf(vec2(1.5, 1.5)).fract_.y;
+ __frexp_result_f32_ frexp_a = naga_frexp(1.5);
+ float frexp_b = naga_frexp(1.5).fract_;
+ int frexp_c = naga_frexp(1.5).exp_;
+ int frexp_d = naga_frexp(vec4(1.5, 1.5, 1.5, 1.5)).exp_.x;
}
diff --git a/tests/out/hlsl/math-functions.hlsl b/tests/out/hlsl/math-functions.hlsl
--- a/tests/out/hlsl/math-functions.hlsl
+++ b/tests/out/hlsl/math-functions.hlsl
@@ -1,3 +1,68 @@
+struct __modf_result_f32_ {
+ float fract;
+ float whole;
+};
+
+struct __modf_result_vec2_f32_ {
+ float2 fract;
+ float2 whole;
+};
+
+struct __modf_result_vec4_f32_ {
+ float4 fract;
+ float4 whole;
+};
+
+struct __frexp_result_f32_ {
+ float fract;
+ int exp_;
+};
+
+struct __frexp_result_vec4_f32_ {
+ float4 fract;
+ int4 exp_;
+};
+
+__modf_result_f32_ naga_modf(float arg) {
+ float other;
+ __modf_result_f32_ result;
+ result.fract = modf(arg, other);
+ result.whole = other;
+ return result;
+}
+
+__modf_result_vec2_f32_ naga_modf(float2 arg) {
+ float2 other;
+ __modf_result_vec2_f32_ result;
+ result.fract = modf(arg, other);
+ result.whole = other;
+ return result;
+}
+
+__modf_result_vec4_f32_ naga_modf(float4 arg) {
+ float4 other;
+ __modf_result_vec4_f32_ result;
+ result.fract = modf(arg, other);
+ result.whole = other;
+ return result;
+}
+
+__frexp_result_f32_ naga_frexp(float arg) {
+ float other;
+ __frexp_result_f32_ result;
+ result.fract = sign(arg) * frexp(arg, other);
+ result.exp_ = other;
+ return result;
+}
+
+__frexp_result_vec4_f32_ naga_frexp(float4 arg) {
+ float4 other;
+ __frexp_result_vec4_f32_ result;
+ result.fract = sign(arg) * frexp(arg, other);
+ result.exp_ = other;
+ return result;
+}
+
void main()
{
float4 v = (0.0).xxxx;
diff --git a/tests/out/hlsl/math-functions.hlsl b/tests/out/hlsl/math-functions.hlsl
--- a/tests/out/hlsl/math-functions.hlsl
+++ b/tests/out/hlsl/math-functions.hlsl
@@ -31,4 +96,14 @@ void main()
uint2 clz_d = ((31u).xx - firstbithigh((1u).xx));
float lde_a = ldexp(1.0, 2);
float2 lde_b = ldexp(float2(1.0, 2.0), int2(3, 4));
+ __modf_result_f32_ modf_a = naga_modf(1.5);
+ float modf_b = naga_modf(1.5).fract;
+ float modf_c = naga_modf(1.5).whole;
+ __modf_result_vec2_f32_ modf_d = naga_modf(float2(1.5, 1.5));
+ float modf_e = naga_modf(float4(1.5, 1.5, 1.5, 1.5)).whole.x;
+ float modf_f = naga_modf(float2(1.5, 1.5)).fract.y;
+ __frexp_result_f32_ frexp_a = naga_frexp(1.5);
+ float frexp_b = naga_frexp(1.5).fract;
+ int frexp_c = naga_frexp(1.5).exp_;
+ int frexp_d = naga_frexp(float4(1.5, 1.5, 1.5, 1.5)).exp_.x;
}
diff --git a/tests/out/ir/access.ron b/tests/out/ir/access.ron
--- a/tests/out/ir/access.ron
+++ b/tests/out/ir/access.ron
@@ -334,6 +334,7 @@
special_types: (
ray_desc: None,
ray_intersection: None,
+ predeclared_types: {},
),
constants: [],
global_variables: [
diff --git a/tests/out/ir/collatz.ron b/tests/out/ir/collatz.ron
--- a/tests/out/ir/collatz.ron
+++ b/tests/out/ir/collatz.ron
@@ -41,6 +41,7 @@
special_types: (
ray_desc: None,
ray_intersection: None,
+ predeclared_types: {},
),
constants: [],
global_variables: [
diff --git a/tests/out/ir/shadow.ron b/tests/out/ir/shadow.ron
--- a/tests/out/ir/shadow.ron
+++ b/tests/out/ir/shadow.ron
@@ -266,6 +266,7 @@
special_types: (
ray_desc: None,
ray_intersection: None,
+ predeclared_types: {},
),
constants: [
(
diff --git a/tests/out/msl/math-functions.msl b/tests/out/msl/math-functions.msl
--- a/tests/out/msl/math-functions.msl
+++ b/tests/out/msl/math-functions.msl
@@ -4,6 +4,56 @@
using metal::uint;
+struct __modf_result_f32_ {
+ float fract;
+ float whole;
+};
+struct __modf_result_vec2_f32_ {
+ metal::float2 fract;
+ metal::float2 whole;
+};
+struct __modf_result_vec4_f32_ {
+ metal::float4 fract;
+ metal::float4 whole;
+};
+struct __frexp_result_f32_ {
+ float fract;
+ int exp;
+};
+struct __frexp_result_vec4_f32_ {
+ metal::float4 fract;
+ metal::int4 exp;
+};
+
+__modf_result_f32_ naga_modf(float arg) {
+ float other;
+ float fract = metal::modf(arg, other);
+ return __modf_result_f32_{ fract, other };
+}
+
+__modf_result_vec2_f32_ naga_modf(metal::float2 arg) {
+ metal::float2 other;
+ metal::float2 fract = metal::modf(arg, other);
+ return __modf_result_vec2_f32_{ fract, other };
+}
+
+__modf_result_vec4_f32_ naga_modf(metal::float4 arg) {
+ metal::float4 other;
+ metal::float4 fract = metal::modf(arg, other);
+ return __modf_result_vec4_f32_{ fract, other };
+}
+
+__frexp_result_f32_ naga_frexp(float arg) {
+ int other;
+ float fract = metal::frexp(arg, other);
+ return __frexp_result_f32_{ fract, other };
+}
+
+__frexp_result_vec4_f32_ naga_frexp(metal::float4 arg) {
+ int4 other;
+ metal::float4 fract = metal::frexp(arg, other);
+ return __frexp_result_vec4_f32_{ fract, other };
+}
fragment void main_(
) {
diff --git a/tests/out/msl/math-functions.msl b/tests/out/msl/math-functions.msl
--- a/tests/out/msl/math-functions.msl
+++ b/tests/out/msl/math-functions.msl
@@ -40,4 +90,14 @@ fragment void main_(
metal::uint2 clz_d = metal::clz(metal::uint2(1u));
float lde_a = metal::ldexp(1.0, 2);
metal::float2 lde_b = metal::ldexp(metal::float2(1.0, 2.0), metal::int2(3, 4));
+ __modf_result_f32_ modf_a = naga_modf(1.5);
+ float modf_b = naga_modf(1.5).fract;
+ float modf_c = naga_modf(1.5).whole;
+ __modf_result_vec2_f32_ modf_d = naga_modf(metal::float2(1.5, 1.5));
+ float modf_e = naga_modf(metal::float4(1.5, 1.5, 1.5, 1.5)).whole.x;
+ float modf_f = naga_modf(metal::float2(1.5, 1.5)).fract.y;
+ __frexp_result_f32_ frexp_a = naga_frexp(1.5);
+ float frexp_b = naga_frexp(1.5).fract;
+ int frexp_c = naga_frexp(1.5).exp;
+ int frexp_d = naga_frexp(metal::float4(1.5, 1.5, 1.5, 1.5)).exp.x;
}
diff --git a/tests/out/spv/math-functions.spvasm b/tests/out/spv/math-functions.spvasm
--- a/tests/out/spv/math-functions.spvasm
+++ b/tests/out/spv/math-functions.spvasm
@@ -1,106 +1,147 @@
; SPIR-V
; Version: 1.1
; Generator: rspirv
-; Bound: 96
+; Bound: 127
OpCapability Shader
%1 = OpExtInstImport "GLSL.std.450"
OpMemoryModel Logical GLSL450
-OpEntryPoint Fragment %9 "main"
-OpExecutionMode %9 OriginUpperLeft
+OpEntryPoint Fragment %15 "main"
+OpExecutionMode %15 OriginUpperLeft
+OpMemberDecorate %8 0 Offset 0
+OpMemberDecorate %8 1 Offset 4
+OpMemberDecorate %9 0 Offset 0
+OpMemberDecorate %9 1 Offset 8
+OpMemberDecorate %10 0 Offset 0
+OpMemberDecorate %10 1 Offset 16
+OpMemberDecorate %11 0 Offset 0
+OpMemberDecorate %11 1 Offset 4
+OpMemberDecorate %13 0 Offset 0
+OpMemberDecorate %13 1 Offset 16
%2 = OpTypeVoid
%4 = OpTypeFloat 32
%3 = OpTypeVector %4 4
%6 = OpTypeInt 32 1
%5 = OpTypeVector %6 2
%7 = OpTypeVector %4 2
-%10 = OpTypeFunction %2
-%11 = OpConstant %4 1.0
-%12 = OpConstant %4 0.0
-%13 = OpConstantNull %5
-%14 = OpTypeInt 32 0
-%15 = OpConstant %14 0
-%16 = OpConstant %6 -1
-%17 = OpConstant %14 1
-%18 = OpConstant %6 0
-%19 = OpConstant %14 4294967295
-%20 = OpConstant %6 1
-%21 = OpConstant %6 2
-%22 = OpConstant %4 2.0
-%23 = OpConstant %6 3
-%24 = OpConstant %6 4
-%32 = OpConstantComposite %3 %12 %12 %12 %12
-%33 = OpConstantComposite %3 %11 %11 %11 %11
-%36 = OpConstantNull %6
-%49 = OpTypeVector %14 2
-%59 = OpConstant %14 32
-%69 = OpConstantComposite %49 %59 %59
-%81 = OpConstant %6 31
-%87 = OpConstantComposite %5 %81 %81
-%9 = OpFunction %2 None %10
-%8 = OpLabel
-OpBranch %25
-%25 = OpLabel
-%26 = OpCompositeConstruct %3 %12 %12 %12 %12
-%27 = OpExtInst %4 %1 Degrees %11
-%28 = OpExtInst %4 %1 Radians %11
-%29 = OpExtInst %3 %1 Degrees %26
-%30 = OpExtInst %3 %1 Radians %26
-%31 = OpExtInst %3 %1 FClamp %26 %32 %33
-%34 = OpExtInst %3 %1 Refract %26 %26 %11
-%37 = OpCompositeExtract %6 %13 0
-%38 = OpCompositeExtract %6 %13 0
-%39 = OpIMul %6 %37 %38
-%40 = OpIAdd %6 %36 %39
-%41 = OpCompositeExtract %6 %13 1
-%42 = OpCompositeExtract %6 %13 1
-%43 = OpIMul %6 %41 %42
-%35 = OpIAdd %6 %40 %43
-%44 = OpCopyObject %14 %15
-%45 = OpExtInst %14 %1 FindUMsb %44
-%46 = OpExtInst %6 %1 FindSMsb %16
-%47 = OpCompositeConstruct %5 %16 %16
-%48 = OpExtInst %5 %1 FindSMsb %47
-%50 = OpCompositeConstruct %49 %17 %17
-%51 = OpExtInst %49 %1 FindUMsb %50
-%52 = OpExtInst %6 %1 FindILsb %16
-%53 = OpExtInst %14 %1 FindILsb %17
-%54 = OpCompositeConstruct %5 %16 %16
-%55 = OpExtInst %5 %1 FindILsb %54
-%56 = OpCompositeConstruct %49 %17 %17
-%57 = OpExtInst %49 %1 FindILsb %56
-%60 = OpExtInst %14 %1 FindILsb %15
-%58 = OpExtInst %14 %1 UMin %59 %60
-%62 = OpExtInst %6 %1 FindILsb %18
-%61 = OpExtInst %6 %1 UMin %59 %62
-%64 = OpExtInst %14 %1 FindILsb %19
-%63 = OpExtInst %14 %1 UMin %59 %64
-%66 = OpExtInst %6 %1 FindILsb %16
-%65 = OpExtInst %6 %1 UMin %59 %66
-%67 = OpCompositeConstruct %49 %15 %15
-%70 = OpExtInst %49 %1 FindILsb %67
-%68 = OpExtInst %49 %1 UMin %69 %70
-%71 = OpCompositeConstruct %5 %18 %18
-%73 = OpExtInst %5 %1 FindILsb %71
-%72 = OpExtInst %5 %1 UMin %69 %73
-%74 = OpCompositeConstruct %49 %17 %17
-%76 = OpExtInst %49 %1 FindILsb %74
-%75 = OpExtInst %49 %1 UMin %69 %76
-%77 = OpCompositeConstruct %5 %20 %20
-%79 = OpExtInst %5 %1 FindILsb %77
-%78 = OpExtInst %5 %1 UMin %69 %79
-%82 = OpExtInst %6 %1 FindUMsb %16
-%80 = OpISub %6 %81 %82
-%84 = OpExtInst %6 %1 FindUMsb %17
-%83 = OpISub %14 %81 %84
-%85 = OpCompositeConstruct %5 %16 %16
-%88 = OpExtInst %5 %1 FindUMsb %85
-%86 = OpISub %5 %87 %88
-%89 = OpCompositeConstruct %49 %17 %17
-%91 = OpExtInst %5 %1 FindUMsb %89
-%90 = OpISub %49 %87 %91
-%92 = OpExtInst %4 %1 Ldexp %11 %21
-%93 = OpCompositeConstruct %7 %11 %22
-%94 = OpCompositeConstruct %5 %23 %24
-%95 = OpExtInst %7 %1 Ldexp %93 %94
+%8 = OpTypeStruct %4 %4
+%9 = OpTypeStruct %7 %7
+%10 = OpTypeStruct %3 %3
+%11 = OpTypeStruct %4 %6
+%12 = OpTypeVector %6 4
+%13 = OpTypeStruct %3 %12
+%16 = OpTypeFunction %2
+%17 = OpConstant %4 1.0
+%18 = OpConstant %4 0.0
+%19 = OpConstantNull %5
+%20 = OpTypeInt 32 0
+%21 = OpConstant %20 0
+%22 = OpConstant %6 -1
+%23 = OpConstant %20 1
+%24 = OpConstant %6 0
+%25 = OpConstant %20 4294967295
+%26 = OpConstant %6 1
+%27 = OpConstant %6 2
+%28 = OpConstant %4 2.0
+%29 = OpConstant %6 3
+%30 = OpConstant %6 4
+%31 = OpConstant %4 1.5
+%39 = OpConstantComposite %3 %18 %18 %18 %18
+%40 = OpConstantComposite %3 %17 %17 %17 %17
+%43 = OpConstantNull %6
+%56 = OpTypeVector %20 2
+%66 = OpConstant %20 32
+%76 = OpConstantComposite %56 %66 %66
+%88 = OpConstant %6 31
+%94 = OpConstantComposite %5 %88 %88
+%15 = OpFunction %2 None %16
+%14 = OpLabel
+OpBranch %32
+%32 = OpLabel
+%33 = OpCompositeConstruct %3 %18 %18 %18 %18
+%34 = OpExtInst %4 %1 Degrees %17
+%35 = OpExtInst %4 %1 Radians %17
+%36 = OpExtInst %3 %1 Degrees %33
+%37 = OpExtInst %3 %1 Radians %33
+%38 = OpExtInst %3 %1 FClamp %33 %39 %40
+%41 = OpExtInst %3 %1 Refract %33 %33 %17
+%44 = OpCompositeExtract %6 %19 0
+%45 = OpCompositeExtract %6 %19 0
+%46 = OpIMul %6 %44 %45
+%47 = OpIAdd %6 %43 %46
+%48 = OpCompositeExtract %6 %19 1
+%49 = OpCompositeExtract %6 %19 1
+%50 = OpIMul %6 %48 %49
+%42 = OpIAdd %6 %47 %50
+%51 = OpCopyObject %20 %21
+%52 = OpExtInst %20 %1 FindUMsb %51
+%53 = OpExtInst %6 %1 FindSMsb %22
+%54 = OpCompositeConstruct %5 %22 %22
+%55 = OpExtInst %5 %1 FindSMsb %54
+%57 = OpCompositeConstruct %56 %23 %23
+%58 = OpExtInst %56 %1 FindUMsb %57
+%59 = OpExtInst %6 %1 FindILsb %22
+%60 = OpExtInst %20 %1 FindILsb %23
+%61 = OpCompositeConstruct %5 %22 %22
+%62 = OpExtInst %5 %1 FindILsb %61
+%63 = OpCompositeConstruct %56 %23 %23
+%64 = OpExtInst %56 %1 FindILsb %63
+%67 = OpExtInst %20 %1 FindILsb %21
+%65 = OpExtInst %20 %1 UMin %66 %67
+%69 = OpExtInst %6 %1 FindILsb %24
+%68 = OpExtInst %6 %1 UMin %66 %69
+%71 = OpExtInst %20 %1 FindILsb %25
+%70 = OpExtInst %20 %1 UMin %66 %71
+%73 = OpExtInst %6 %1 FindILsb %22
+%72 = OpExtInst %6 %1 UMin %66 %73
+%74 = OpCompositeConstruct %56 %21 %21
+%77 = OpExtInst %56 %1 FindILsb %74
+%75 = OpExtInst %56 %1 UMin %76 %77
+%78 = OpCompositeConstruct %5 %24 %24
+%80 = OpExtInst %5 %1 FindILsb %78
+%79 = OpExtInst %5 %1 UMin %76 %80
+%81 = OpCompositeConstruct %56 %23 %23
+%83 = OpExtInst %56 %1 FindILsb %81
+%82 = OpExtInst %56 %1 UMin %76 %83
+%84 = OpCompositeConstruct %5 %26 %26
+%86 = OpExtInst %5 %1 FindILsb %84
+%85 = OpExtInst %5 %1 UMin %76 %86
+%89 = OpExtInst %6 %1 FindUMsb %22
+%87 = OpISub %6 %88 %89
+%91 = OpExtInst %6 %1 FindUMsb %23
+%90 = OpISub %20 %88 %91
+%92 = OpCompositeConstruct %5 %22 %22
+%95 = OpExtInst %5 %1 FindUMsb %92
+%93 = OpISub %5 %94 %95
+%96 = OpCompositeConstruct %56 %23 %23
+%98 = OpExtInst %5 %1 FindUMsb %96
+%97 = OpISub %56 %94 %98
+%99 = OpExtInst %4 %1 Ldexp %17 %27
+%100 = OpCompositeConstruct %7 %17 %28
+%101 = OpCompositeConstruct %5 %29 %30
+%102 = OpExtInst %7 %1 Ldexp %100 %101
+%103 = OpExtInst %8 %1 ModfStruct %31
+%104 = OpExtInst %8 %1 ModfStruct %31
+%105 = OpCompositeExtract %4 %104 0
+%106 = OpExtInst %8 %1 ModfStruct %31
+%107 = OpCompositeExtract %4 %106 1
+%108 = OpCompositeConstruct %7 %31 %31
+%109 = OpExtInst %9 %1 ModfStruct %108
+%110 = OpCompositeConstruct %3 %31 %31 %31 %31
+%111 = OpExtInst %10 %1 ModfStruct %110
+%112 = OpCompositeExtract %3 %111 1
+%113 = OpCompositeExtract %4 %112 0
+%114 = OpCompositeConstruct %7 %31 %31
+%115 = OpExtInst %9 %1 ModfStruct %114
+%116 = OpCompositeExtract %7 %115 0
+%117 = OpCompositeExtract %4 %116 1
+%118 = OpExtInst %11 %1 FrexpStruct %31
+%119 = OpExtInst %11 %1 FrexpStruct %31
+%120 = OpCompositeExtract %4 %119 0
+%121 = OpExtInst %11 %1 FrexpStruct %31
+%122 = OpCompositeExtract %6 %121 1
+%123 = OpCompositeConstruct %3 %31 %31 %31 %31
+%124 = OpExtInst %13 %1 FrexpStruct %123
+%125 = OpCompositeExtract %12 %124 1
+%126 = OpCompositeExtract %6 %125 0
OpReturn
OpFunctionEnd
\ No newline at end of file
diff --git a/tests/out/wgsl/atomicCompareExchange.wgsl b/tests/out/wgsl/atomicCompareExchange.wgsl
--- a/tests/out/wgsl/atomicCompareExchange.wgsl
+++ b/tests/out/wgsl/atomicCompareExchange.wgsl
@@ -1,13 +1,3 @@
-struct gen___atomic_compare_exchange_resultSint4_ {
- old_value: i32,
- exchanged: bool,
-}
-
-struct gen___atomic_compare_exchange_resultUint4_ {
- old_value: u32,
- exchanged: bool,
-}
-
const SIZE: u32 = 128u;
@group(0) @binding(0)
diff --git a/tests/out/wgsl/math-functions.wgsl b/tests/out/wgsl/math-functions.wgsl
--- a/tests/out/wgsl/math-functions.wgsl
+++ b/tests/out/wgsl/math-functions.wgsl
@@ -30,4 +30,14 @@ fn main() {
let clz_d = countLeadingZeros(vec2<u32>(1u));
let lde_a = ldexp(1.0, 2);
let lde_b = ldexp(vec2<f32>(1.0, 2.0), vec2<i32>(3, 4));
+ let modf_a = modf(1.5);
+ let modf_b = modf(1.5).fract;
+ let modf_c = modf(1.5).whole;
+ let modf_d = modf(vec2<f32>(1.5, 1.5));
+ let modf_e = modf(vec4<f32>(1.5, 1.5, 1.5, 1.5)).whole.x;
+ let modf_f = modf(vec2<f32>(1.5, 1.5)).fract.y;
+ let frexp_a = frexp(1.5);
+ let frexp_b = frexp(1.5).fract;
+ let frexp_c = frexp(1.5).exp;
+ let frexp_d = frexp(vec4<f32>(1.5, 1.5, 1.5, 1.5)).exp.x;
}
| Change frexp and modf to work with structures
See https://github.com/gpuweb/gpuweb/pull/1973
I think this change needs to be done at IR level.
Change frexp and modf to return structures
Closes #1161
TODO:
- [x] IR
- [x] validation
- [x] typifier
- [ ] frontends (except WGSL)
- [ ] backends (except WGSL)
| Similar work needs to be done for atomic compare+exchange - https://github.com/gpuweb/gpuweb/pull/2113
I think all of this should wait for #1419 first.
This isn't urgent for the release.
Currently bumping up against this when targeting WebGPU, as wgpu's water example uses modf.
This should be simpler now that #2256 is in, which contains infra to create IR structure types.
This branch has conflicts that must be resolved | 2023-08-24T00:48:00 | 0.13 | a17a93ef8f6a06ed8dfc3145276663786828df03 | [
"convert_wgsl"
] | [
"arena::tests::fetch_or_append_unique",
"arena::tests::append_non_unique",
"arena::tests::append_unique",
"back::msl::test_error_size",
"arena::tests::fetch_or_append_non_unique",
"back::spv::layout::test_physical_layout_in_words",
"back::spv::layout::test_logical_layout_in_words",
"front::glsl::const... | [] | [] | 68 |
gfx-rs/naga | 2,440 | gfx-rs__naga-2440 | [
"2439"
] | 7a19f3af909202c7eafd36633b5584bfbb353ecb | diff --git a/src/front/wgsl/lower/mod.rs b/src/front/wgsl/lower/mod.rs
--- a/src/front/wgsl/lower/mod.rs
+++ b/src/front/wgsl/lower/mod.rs
@@ -504,13 +504,25 @@ impl<'source, 'temp, 'out> ExpressionContext<'source, 'temp, 'out> {
}
/// Insert splats, if needed by the non-'*' operations.
+ ///
+ /// See the "Binary arithmetic expressions with mixed scalar and vector operands"
+ /// table in the WebGPU Shading Language specification for relevant operators.
+ ///
+ /// Multiply is not handled here as backends are expected to handle vec*scalar
+ /// operations, so inserting splats into the IR increases size needlessly.
fn binary_op_splat(
&mut self,
op: crate::BinaryOperator,
left: &mut Handle<crate::Expression>,
right: &mut Handle<crate::Expression>,
) -> Result<(), Error<'source>> {
- if op != crate::BinaryOperator::Multiply {
+ if matches!(
+ op,
+ crate::BinaryOperator::Add
+ | crate::BinaryOperator::Subtract
+ | crate::BinaryOperator::Divide
+ | crate::BinaryOperator::Modulo
+ ) {
self.grow_types(*left)?.grow_types(*right)?;
let left_size = match *self.resolved_inner(*left) {
| diff --git a/src/front/wgsl/tests.rs b/src/front/wgsl/tests.rs
--- a/src/front/wgsl/tests.rs
+++ b/src/front/wgsl/tests.rs
@@ -387,6 +387,54 @@ fn parse_expressions() {
}").unwrap();
}
+#[test]
+fn binary_expression_mixed_scalar_and_vector_operands() {
+ for (operand, expect_splat) in [
+ ('<', false),
+ ('>', false),
+ ('&', false),
+ ('|', false),
+ ('+', true),
+ ('-', true),
+ ('*', false),
+ ('/', true),
+ ('%', true),
+ ] {
+ let module = parse_str(&format!(
+ "
+ const some_vec = vec3<f32>(1.0, 1.0, 1.0);
+ @fragment
+ fn main() -> @location(0) vec4<f32> {{
+ if (all(1.0 {operand} some_vec)) {{
+ return vec4(0.0);
+ }}
+ return vec4(1.0);
+ }}
+ "
+ ))
+ .unwrap();
+
+ let expressions = &&module.entry_points[0].function.expressions;
+
+ let found_expressions = expressions
+ .iter()
+ .filter(|&(_, e)| {
+ if let crate::Expression::Binary { left, .. } = *e {
+ matches!(
+ (expect_splat, &expressions[left]),
+ (false, &crate::Expression::Literal(crate::Literal::F32(..)))
+ | (true, &crate::Expression::Splat { .. })
+ )
+ } else {
+ false
+ }
+ })
+ .count();
+
+ assert_eq!(found_expressions, 1);
+ }
+}
+
#[test]
fn parse_pointers() {
parse_str(
| [wgsl-in] Comparing scalar with vector type incorrectly passes validation
Spec says both sides of the comparision need to have the same type.
https://www.w3.org/TR/WGSL/#comparison-expr
The following snippet should therefore not pass validation.
```
const some_vec = vec3<f32>(1.0, 1.0, 1.0);
@fragment
fn main() -> @location(0) vec4<f32> {
if (all(1.0 < some_vec)) {
return vec4(0.0);
}
return vec4(1.0);
}
```
Note that Chrome already correctly rejects this, whereas Naga doesn't.
| 2023-08-16T18:09:02 | 0.13 | a17a93ef8f6a06ed8dfc3145276663786828df03 | [
"front::wgsl::tests::binary_expression_mixed_scalar_and_vector_operands"
] | [
"arena::tests::fetch_or_append_non_unique",
"arena::tests::append_non_unique",
"arena::tests::append_unique",
"arena::tests::fetch_or_append_unique",
"back::msl::test_error_size",
"back::spv::layout::test_physical_layout_in_words",
"back::spv::writer::test_write_physical_layout",
"back::spv::layout::t... | [] | [] | 69 | |
gfx-rs/naga | 2,353 | gfx-rs__naga-2353 | [
"1929"
] | 273ff5d829f8207d8d93b70e32e2889362246d33 | diff --git a/src/back/hlsl/writer.rs b/src/back/hlsl/writer.rs
--- a/src/back/hlsl/writer.rs
+++ b/src/back/hlsl/writer.rs
@@ -121,13 +121,40 @@ impl<'a, W: fmt::Write> super::Writer<'a, W> {
self.need_bake_expressions.insert(fun_handle);
}
- if let Expression::Math { fun, arg, .. } = *expr {
+ if let Expression::Math {
+ fun,
+ arg,
+ arg1,
+ arg2,
+ arg3,
+ } = *expr
+ {
match fun {
crate::MathFunction::Asinh
| crate::MathFunction::Acosh
| crate::MathFunction::Atanh
- | crate::MathFunction::Unpack2x16float => {
+ | crate::MathFunction::Unpack2x16float
+ | crate::MathFunction::Unpack2x16snorm
+ | crate::MathFunction::Unpack2x16unorm
+ | crate::MathFunction::Unpack4x8snorm
+ | crate::MathFunction::Unpack4x8unorm
+ | crate::MathFunction::Pack2x16float
+ | crate::MathFunction::Pack2x16snorm
+ | crate::MathFunction::Pack2x16unorm
+ | crate::MathFunction::Pack4x8snorm
+ | crate::MathFunction::Pack4x8unorm => {
+ self.need_bake_expressions.insert(arg);
+ }
+ crate::MathFunction::ExtractBits => {
+ self.need_bake_expressions.insert(arg);
+ self.need_bake_expressions.insert(arg1.unwrap());
+ self.need_bake_expressions.insert(arg2.unwrap());
+ }
+ crate::MathFunction::InsertBits => {
self.need_bake_expressions.insert(arg);
+ self.need_bake_expressions.insert(arg1.unwrap());
+ self.need_bake_expressions.insert(arg2.unwrap());
+ self.need_bake_expressions.insert(arg3.unwrap());
}
crate::MathFunction::CountLeadingZeros => {
let inner = info[fun_handle].ty.inner_with(&module.types);
diff --git a/src/back/hlsl/writer.rs b/src/back/hlsl/writer.rs
--- a/src/back/hlsl/writer.rs
+++ b/src/back/hlsl/writer.rs
@@ -2590,7 +2617,18 @@ impl<'a, W: fmt::Write> super::Writer<'a, W> {
enum Function {
Asincosh { is_sin: bool },
Atanh,
+ ExtractBits,
+ InsertBits,
+ Pack2x16float,
+ Pack2x16snorm,
+ Pack2x16unorm,
+ Pack4x8snorm,
+ Pack4x8unorm,
Unpack2x16float,
+ Unpack2x16snorm,
+ Unpack2x16unorm,
+ Unpack4x8snorm,
+ Unpack4x8unorm,
Regular(&'static str),
MissingIntOverload(&'static str),
MissingIntReturnType(&'static str),
diff --git a/src/back/hlsl/writer.rs b/src/back/hlsl/writer.rs
--- a/src/back/hlsl/writer.rs
+++ b/src/back/hlsl/writer.rs
@@ -2664,7 +2702,20 @@ impl<'a, W: fmt::Write> super::Writer<'a, W> {
Mf::ReverseBits => Function::MissingIntOverload("reversebits"),
Mf::FindLsb => Function::MissingIntReturnType("firstbitlow"),
Mf::FindMsb => Function::MissingIntReturnType("firstbithigh"),
+ Mf::ExtractBits => Function::ExtractBits,
+ Mf::InsertBits => Function::InsertBits,
+ // Data Packing
+ Mf::Pack2x16float => Function::Pack2x16float,
+ Mf::Pack2x16snorm => Function::Pack2x16snorm,
+ Mf::Pack2x16unorm => Function::Pack2x16unorm,
+ Mf::Pack4x8snorm => Function::Pack4x8snorm,
+ Mf::Pack4x8unorm => Function::Pack4x8unorm,
+ // Data Unpacking
Mf::Unpack2x16float => Function::Unpack2x16float,
+ Mf::Unpack2x16snorm => Function::Unpack2x16snorm,
+ Mf::Unpack2x16unorm => Function::Unpack2x16unorm,
+ Mf::Unpack4x8snorm => Function::Unpack4x8snorm,
+ Mf::Unpack4x8unorm => Function::Unpack4x8unorm,
_ => return Err(Error::Unimplemented(format!("write_expr_math {fun:?}"))),
};
diff --git a/src/back/hlsl/writer.rs b/src/back/hlsl/writer.rs
--- a/src/back/hlsl/writer.rs
+++ b/src/back/hlsl/writer.rs
@@ -2688,6 +2739,140 @@ impl<'a, W: fmt::Write> super::Writer<'a, W> {
self.write_expr(module, arg, func_ctx)?;
write!(self.out, "))")?;
}
+ Function::ExtractBits => {
+ // e: T,
+ // offset: u32,
+ // count: u32
+ // T is u32 or i32 or vecN<u32> or vecN<i32>
+ if let (Some(offset), Some(count)) = (arg1, arg2) {
+ let scalar_width: u8 = 32;
+ // Works for signed and unsigned
+ // (count == 0 ? 0 : (e << (32 - count - offset)) >> (32 - count))
+ write!(self.out, "(")?;
+ self.write_expr(module, count, func_ctx)?;
+ write!(self.out, " == 0 ? 0 : (")?;
+ self.write_expr(module, arg, func_ctx)?;
+ write!(self.out, " << ({scalar_width} - ")?;
+ self.write_expr(module, count, func_ctx)?;
+ write!(self.out, " - ")?;
+ self.write_expr(module, offset, func_ctx)?;
+ write!(self.out, ")) >> ({scalar_width} - ")?;
+ self.write_expr(module, count, func_ctx)?;
+ write!(self.out, "))")?;
+ }
+ }
+ Function::InsertBits => {
+ // e: T,
+ // newbits: T,
+ // offset: u32,
+ // count: u32
+ // returns T
+ // T is i32, u32, vecN<i32>, or vecN<u32>
+ if let (Some(newbits), Some(offset), Some(count)) = (arg1, arg2, arg3) {
+ let scalar_width: u8 = 32;
+ let scalar_max: u32 = 0xFFFFFFFF;
+ // mask = ((0xFFFFFFFFu >> (32 - count)) << offset)
+ // (count == 0 ? e : ((e & ~mask) | ((newbits << offset) & mask)))
+ write!(self.out, "(")?;
+ self.write_expr(module, count, func_ctx)?;
+ write!(self.out, " == 0 ? ")?;
+ self.write_expr(module, arg, func_ctx)?;
+ write!(self.out, " : ")?;
+ write!(self.out, "(")?;
+ self.write_expr(module, arg, func_ctx)?;
+ write!(self.out, " & ~")?;
+ // mask
+ write!(self.out, "(({scalar_max}u >> ({scalar_width}u - ")?;
+ self.write_expr(module, count, func_ctx)?;
+ write!(self.out, ")) << ")?;
+ self.write_expr(module, offset, func_ctx)?;
+ write!(self.out, ")")?;
+ // end mask
+ write!(self.out, ") | ((")?;
+ self.write_expr(module, newbits, func_ctx)?;
+ write!(self.out, " << ")?;
+ self.write_expr(module, offset, func_ctx)?;
+ write!(self.out, ") & ")?;
+ // // mask
+ write!(self.out, "(({scalar_max}u >> ({scalar_width}u - ")?;
+ self.write_expr(module, count, func_ctx)?;
+ write!(self.out, ")) << ")?;
+ self.write_expr(module, offset, func_ctx)?;
+ write!(self.out, ")")?;
+ // // end mask
+ write!(self.out, "))")?;
+ }
+ }
+ Function::Pack2x16float => {
+ write!(self.out, "(f32tof16(")?;
+ self.write_expr(module, arg, func_ctx)?;
+ write!(self.out, "[0]) | f32tof16(")?;
+ self.write_expr(module, arg, func_ctx)?;
+ write!(self.out, "[1]) << 16)")?;
+ }
+ Function::Pack2x16snorm => {
+ let scale = 32767;
+
+ write!(self.out, "uint((int(round(clamp(")?;
+ self.write_expr(module, arg, func_ctx)?;
+ write!(
+ self.out,
+ "[0], -1.0, 1.0) * {scale}.0)) & 0xFFFF) | ((int(round(clamp("
+ )?;
+ self.write_expr(module, arg, func_ctx)?;
+ write!(self.out, "[1], -1.0, 1.0) * {scale}.0)) & 0xFFFF) << 16))",)?;
+ }
+ Function::Pack2x16unorm => {
+ let scale = 65535;
+
+ write!(self.out, "(uint(round(clamp(")?;
+ self.write_expr(module, arg, func_ctx)?;
+ write!(self.out, "[0], 0.0, 1.0) * {scale}.0)) | uint(round(clamp(")?;
+ self.write_expr(module, arg, func_ctx)?;
+ write!(self.out, "[1], 0.0, 1.0) * {scale}.0)) << 16)")?;
+ }
+ Function::Pack4x8snorm => {
+ let scale = 127;
+
+ write!(self.out, "uint((int(round(clamp(")?;
+ self.write_expr(module, arg, func_ctx)?;
+ write!(
+ self.out,
+ "[0], -1.0, 1.0) * {scale}.0)) & 0xFF) | ((int(round(clamp("
+ )?;
+ self.write_expr(module, arg, func_ctx)?;
+ write!(
+ self.out,
+ "[1], -1.0, 1.0) * {scale}.0)) & 0xFF) << 8) | ((int(round(clamp("
+ )?;
+ self.write_expr(module, arg, func_ctx)?;
+ write!(
+ self.out,
+ "[2], -1.0, 1.0) * {scale}.0)) & 0xFF) << 16) | ((int(round(clamp("
+ )?;
+ self.write_expr(module, arg, func_ctx)?;
+ write!(self.out, "[3], -1.0, 1.0) * {scale}.0)) & 0xFF) << 24))",)?;
+ }
+ Function::Pack4x8unorm => {
+ let scale = 255;
+
+ write!(self.out, "(uint(round(clamp(")?;
+ self.write_expr(module, arg, func_ctx)?;
+ write!(self.out, "[0], 0.0, 1.0) * {scale}.0)) | uint(round(clamp(")?;
+ self.write_expr(module, arg, func_ctx)?;
+ write!(
+ self.out,
+ "[1], 0.0, 1.0) * {scale}.0)) << 8 | uint(round(clamp("
+ )?;
+ self.write_expr(module, arg, func_ctx)?;
+ write!(
+ self.out,
+ "[2], 0.0, 1.0) * {scale}.0)) << 16 | uint(round(clamp("
+ )?;
+ self.write_expr(module, arg, func_ctx)?;
+ write!(self.out, "[3], 0.0, 1.0) * {scale}.0)) << 24)")?;
+ }
+
Function::Unpack2x16float => {
write!(self.out, "float2(f16tof32(")?;
self.write_expr(module, arg, func_ctx)?;
diff --git a/src/back/hlsl/writer.rs b/src/back/hlsl/writer.rs
--- a/src/back/hlsl/writer.rs
+++ b/src/back/hlsl/writer.rs
@@ -2695,6 +2880,50 @@ impl<'a, W: fmt::Write> super::Writer<'a, W> {
self.write_expr(module, arg, func_ctx)?;
write!(self.out, ") >> 16))")?;
}
+ Function::Unpack2x16snorm => {
+ let scale = 32767;
+
+ write!(self.out, "(float2(int2(")?;
+ self.write_expr(module, arg, func_ctx)?;
+ write!(self.out, " << 16, ")?;
+ self.write_expr(module, arg, func_ctx)?;
+ write!(self.out, ") >> 16) / {scale}.0)")?;
+ }
+ Function::Unpack2x16unorm => {
+ let scale = 65535;
+
+ write!(self.out, "(float2(")?;
+ self.write_expr(module, arg, func_ctx)?;
+ write!(self.out, " & 0xFFFF, ")?;
+ self.write_expr(module, arg, func_ctx)?;
+ write!(self.out, " >> 16) / {scale}.0)")?;
+ }
+ Function::Unpack4x8snorm => {
+ let scale = 127;
+
+ write!(self.out, "(float4(int4(")?;
+ self.write_expr(module, arg, func_ctx)?;
+ write!(self.out, " << 24, ")?;
+ self.write_expr(module, arg, func_ctx)?;
+ write!(self.out, " << 16, ")?;
+ self.write_expr(module, arg, func_ctx)?;
+ write!(self.out, " << 8, ")?;
+ self.write_expr(module, arg, func_ctx)?;
+ write!(self.out, ") >> 24) / {scale}.0)")?;
+ }
+ Function::Unpack4x8unorm => {
+ let scale = 255;
+
+ write!(self.out, "(float4(")?;
+ self.write_expr(module, arg, func_ctx)?;
+ write!(self.out, " & 0xFF, ")?;
+ self.write_expr(module, arg, func_ctx)?;
+ write!(self.out, " >> 8 & 0xFF, ")?;
+ self.write_expr(module, arg, func_ctx)?;
+ write!(self.out, " >> 16 & 0xFF, ")?;
+ self.write_expr(module, arg, func_ctx)?;
+ write!(self.out, " >> 24) / {scale}.0)")?;
+ }
Function::Regular(fun_name) => {
write!(self.out, "{fun_name}(")?;
self.write_expr(module, arg, func_ctx)?;
diff --git a/src/proc/mod.rs b/src/proc/mod.rs
--- a/src/proc/mod.rs
+++ b/src/proc/mod.rs
@@ -190,6 +190,17 @@ impl super::TypeInner {
}
}
+ pub const fn scalar_width(&self) -> Option<u8> {
+ // Multiply by 8 to get the bit width
+ match *self {
+ super::TypeInner::Scalar { width, .. } | super::TypeInner::Vector { width, .. } => {
+ Some(width * 8)
+ }
+ super::TypeInner::Matrix { width, .. } => Some(width * 8),
+ _ => None,
+ }
+ }
+
pub const fn pointer_space(&self) -> Option<crate::AddressSpace> {
match *self {
Self::Pointer { space, .. } => Some(space),
| diff --git /dev/null b/tests/out/hlsl/bits.hlsl
new file mode 100644
--- /dev/null
+++ b/tests/out/hlsl/bits.hlsl
@@ -0,0 +1,131 @@
+
+[numthreads(1, 1, 1)]
+void main()
+{
+ int i = (int)0;
+ int2 i2_ = (int2)0;
+ int3 i3_ = (int3)0;
+ int4 i4_ = (int4)0;
+ uint u = (uint)0;
+ uint2 u2_ = (uint2)0;
+ uint3 u3_ = (uint3)0;
+ uint4 u4_ = (uint4)0;
+ float2 f2_ = (float2)0;
+ float4 f4_ = (float4)0;
+
+ i = 0;
+ i2_ = (0).xx;
+ i3_ = (0).xxx;
+ i4_ = (0).xxxx;
+ u = 0u;
+ u2_ = (0u).xx;
+ u3_ = (0u).xxx;
+ u4_ = (0u).xxxx;
+ f2_ = (0.0).xx;
+ f4_ = (0.0).xxxx;
+ float4 _expr28 = f4_;
+ u = uint((int(round(clamp(_expr28[0], -1.0, 1.0) * 127.0)) & 0xFF) | ((int(round(clamp(_expr28[1], -1.0, 1.0) * 127.0)) & 0xFF) << 8) | ((int(round(clamp(_expr28[2], -1.0, 1.0) * 127.0)) & 0xFF) << 16) | ((int(round(clamp(_expr28[3], -1.0, 1.0) * 127.0)) & 0xFF) << 24));
+ float4 _expr30 = f4_;
+ u = (uint(round(clamp(_expr30[0], 0.0, 1.0) * 255.0)) | uint(round(clamp(_expr30[1], 0.0, 1.0) * 255.0)) << 8 | uint(round(clamp(_expr30[2], 0.0, 1.0) * 255.0)) << 16 | uint(round(clamp(_expr30[3], 0.0, 1.0) * 255.0)) << 24);
+ float2 _expr32 = f2_;
+ u = uint((int(round(clamp(_expr32[0], -1.0, 1.0) * 32767.0)) & 0xFFFF) | ((int(round(clamp(_expr32[1], -1.0, 1.0) * 32767.0)) & 0xFFFF) << 16));
+ float2 _expr34 = f2_;
+ u = (uint(round(clamp(_expr34[0], 0.0, 1.0) * 65535.0)) | uint(round(clamp(_expr34[1], 0.0, 1.0) * 65535.0)) << 16);
+ float2 _expr36 = f2_;
+ u = (f32tof16(_expr36[0]) | f32tof16(_expr36[1]) << 16);
+ uint _expr38 = u;
+ f4_ = (float4(int4(_expr38 << 24, _expr38 << 16, _expr38 << 8, _expr38) >> 24) / 127.0);
+ uint _expr40 = u;
+ f4_ = (float4(_expr40 & 0xFF, _expr40 >> 8 & 0xFF, _expr40 >> 16 & 0xFF, _expr40 >> 24) / 255.0);
+ uint _expr42 = u;
+ f2_ = (float2(int2(_expr42 << 16, _expr42) >> 16) / 32767.0);
+ uint _expr44 = u;
+ f2_ = (float2(_expr44 & 0xFFFF, _expr44 >> 16) / 65535.0);
+ uint _expr46 = u;
+ f2_ = float2(f16tof32(_expr46), f16tof32((_expr46) >> 16));
+ int _expr48 = i;
+ int _expr49 = i;
+ i = (10u == 0 ? _expr48 : (_expr48 & ~((4294967295u >> (32u - 10u)) << 5u)) | ((_expr49 << 5u) & ((4294967295u >> (32u - 10u)) << 5u)));
+ int2 _expr53 = i2_;
+ int2 _expr54 = i2_;
+ i2_ = (10u == 0 ? _expr53 : (_expr53 & ~((4294967295u >> (32u - 10u)) << 5u)) | ((_expr54 << 5u) & ((4294967295u >> (32u - 10u)) << 5u)));
+ int3 _expr58 = i3_;
+ int3 _expr59 = i3_;
+ i3_ = (10u == 0 ? _expr58 : (_expr58 & ~((4294967295u >> (32u - 10u)) << 5u)) | ((_expr59 << 5u) & ((4294967295u >> (32u - 10u)) << 5u)));
+ int4 _expr63 = i4_;
+ int4 _expr64 = i4_;
+ i4_ = (10u == 0 ? _expr63 : (_expr63 & ~((4294967295u >> (32u - 10u)) << 5u)) | ((_expr64 << 5u) & ((4294967295u >> (32u - 10u)) << 5u)));
+ uint _expr68 = u;
+ uint _expr69 = u;
+ u = (10u == 0 ? _expr68 : (_expr68 & ~((4294967295u >> (32u - 10u)) << 5u)) | ((_expr69 << 5u) & ((4294967295u >> (32u - 10u)) << 5u)));
+ uint2 _expr73 = u2_;
+ uint2 _expr74 = u2_;
+ u2_ = (10u == 0 ? _expr73 : (_expr73 & ~((4294967295u >> (32u - 10u)) << 5u)) | ((_expr74 << 5u) & ((4294967295u >> (32u - 10u)) << 5u)));
+ uint3 _expr78 = u3_;
+ uint3 _expr79 = u3_;
+ u3_ = (10u == 0 ? _expr78 : (_expr78 & ~((4294967295u >> (32u - 10u)) << 5u)) | ((_expr79 << 5u) & ((4294967295u >> (32u - 10u)) << 5u)));
+ uint4 _expr83 = u4_;
+ uint4 _expr84 = u4_;
+ u4_ = (10u == 0 ? _expr83 : (_expr83 & ~((4294967295u >> (32u - 10u)) << 5u)) | ((_expr84 << 5u) & ((4294967295u >> (32u - 10u)) << 5u)));
+ int _expr88 = i;
+ i = (10u == 0 ? 0 : (_expr88 << (32 - 10u - 5u)) >> (32 - 10u));
+ int2 _expr92 = i2_;
+ i2_ = (10u == 0 ? 0 : (_expr92 << (32 - 10u - 5u)) >> (32 - 10u));
+ int3 _expr96 = i3_;
+ i3_ = (10u == 0 ? 0 : (_expr96 << (32 - 10u - 5u)) >> (32 - 10u));
+ int4 _expr100 = i4_;
+ i4_ = (10u == 0 ? 0 : (_expr100 << (32 - 10u - 5u)) >> (32 - 10u));
+ uint _expr104 = u;
+ u = (10u == 0 ? 0 : (_expr104 << (32 - 10u - 5u)) >> (32 - 10u));
+ uint2 _expr108 = u2_;
+ u2_ = (10u == 0 ? 0 : (_expr108 << (32 - 10u - 5u)) >> (32 - 10u));
+ uint3 _expr112 = u3_;
+ u3_ = (10u == 0 ? 0 : (_expr112 << (32 - 10u - 5u)) >> (32 - 10u));
+ uint4 _expr116 = u4_;
+ u4_ = (10u == 0 ? 0 : (_expr116 << (32 - 10u - 5u)) >> (32 - 10u));
+ int _expr120 = i;
+ i = asint(firstbitlow(_expr120));
+ uint2 _expr122 = u2_;
+ u2_ = firstbitlow(_expr122);
+ int3 _expr124 = i3_;
+ i3_ = asint(firstbithigh(_expr124));
+ uint3 _expr126 = u3_;
+ u3_ = firstbithigh(_expr126);
+ int _expr128 = i;
+ i = asint(firstbithigh(_expr128));
+ uint _expr130 = u;
+ u = firstbithigh(_expr130);
+ int _expr132 = i;
+ i = asint(countbits(asuint(_expr132)));
+ int2 _expr134 = i2_;
+ i2_ = asint(countbits(asuint(_expr134)));
+ int3 _expr136 = i3_;
+ i3_ = asint(countbits(asuint(_expr136)));
+ int4 _expr138 = i4_;
+ i4_ = asint(countbits(asuint(_expr138)));
+ uint _expr140 = u;
+ u = countbits(_expr140);
+ uint2 _expr142 = u2_;
+ u2_ = countbits(_expr142);
+ uint3 _expr144 = u3_;
+ u3_ = countbits(_expr144);
+ uint4 _expr146 = u4_;
+ u4_ = countbits(_expr146);
+ int _expr148 = i;
+ i = asint(reversebits(asuint(_expr148)));
+ int2 _expr150 = i2_;
+ i2_ = asint(reversebits(asuint(_expr150)));
+ int3 _expr152 = i3_;
+ i3_ = asint(reversebits(asuint(_expr152)));
+ int4 _expr154 = i4_;
+ i4_ = asint(reversebits(asuint(_expr154)));
+ uint _expr156 = u;
+ u = reversebits(_expr156);
+ uint2 _expr158 = u2_;
+ u2_ = reversebits(_expr158);
+ uint3 _expr160 = u3_;
+ u3_ = reversebits(_expr160);
+ uint4 _expr162 = u4_;
+ u4_ = reversebits(_expr162);
+ return;
+}
diff --git /dev/null b/tests/out/hlsl/bits.hlsl.config
new file mode 100644
--- /dev/null
+++ b/tests/out/hlsl/bits.hlsl.config
@@ -0,0 +1,3 @@
+vertex=()
+fragment=()
+compute=(main:cs_5_1 )
diff --git /dev/null b/tests/out/hlsl/bits.ron
new file mode 100644
--- /dev/null
+++ b/tests/out/hlsl/bits.ron
@@ -0,0 +1,12 @@
+(
+ vertex:[
+ ],
+ fragment:[
+ ],
+ compute:[
+ (
+ entry_point:"main",
+ target_profile:"cs_5_1",
+ ),
+ ],
+)
diff --git a/tests/snapshots.rs b/tests/snapshots.rs
--- a/tests/snapshots.rs
+++ b/tests/snapshots.rs
@@ -420,7 +420,7 @@ fn convert_wgsl() {
),
(
"bits",
- Targets::SPIRV | Targets::METAL | Targets::GLSL | Targets::WGSL,
+ Targets::SPIRV | Targets::METAL | Targets::GLSL | Targets::HLSL | Targets::WGSL,
),
(
"bitcast",
| Missing extractBits and insertBits implementations for HLSL
Seems they were implemented for other backends in #1449 but not for HLSL.
| I think all the other functions from that PR are missing too:
- pack4x8snorm
- pack4x8unorm
- pack2x16snorm
- pack2x16unorm
- pack2x16float
- unpack4x8snorm
- unpack4x8unorm
- unpack2x16snorm
- unpack2x16unorm
- unpack2x16float | 2023-05-25T12:07:20 | 0.12 | 04ef22f6dc8d9c4b958dc6575bbb3be9d7719ee0 | [
"convert_wgsl"
] | [
"arena::tests::append_non_unique",
"arena::tests::fetch_or_append_non_unique",
"arena::tests::append_unique",
"arena::tests::fetch_or_append_unique",
"back::msl::test_error_size",
"back::spv::layout::test_physical_layout_in_words",
"back::spv::writer::test_write_physical_layout",
"back::spv::layout::t... | [] | [] | 70 |
gfx-rs/naga | 2,342 | gfx-rs__naga-2342 | [
"2312"
] | 423a069dcddc790fdf27d8c389fa4487f07fcc0a | diff --git a/src/front/wgsl/parse/mod.rs b/src/front/wgsl/parse/mod.rs
--- a/src/front/wgsl/parse/mod.rs
+++ b/src/front/wgsl/parse/mod.rs
@@ -84,6 +84,18 @@ impl<'a> ExpressionContext<'a, '_, '_> {
}
Ok(accumulator)
}
+
+ fn declare_local(&mut self, name: ast::Ident<'a>) -> Result<Handle<ast::Local>, Error<'a>> {
+ let handle = self.locals.append(ast::Local, name.span);
+ if let Some(old) = self.local_table.add(name.name, handle) {
+ Err(Error::Redefinition {
+ previous: self.locals.get_span(old),
+ current: name.span,
+ })
+ } else {
+ Ok(handle)
+ }
+ }
}
/// Which grammar rule we are in the midst of parsing.
diff --git a/src/front/wgsl/parse/mod.rs b/src/front/wgsl/parse/mod.rs
--- a/src/front/wgsl/parse/mod.rs
+++ b/src/front/wgsl/parse/mod.rs
@@ -1612,14 +1624,7 @@ impl Parser {
let expr_id = self.general_expression(lexer, ctx.reborrow())?;
lexer.expect(Token::Separator(';'))?;
- let handle = ctx.locals.append(ast::Local, name.span);
- if let Some(old) = ctx.local_table.add(name.name, handle) {
- return Err(Error::Redefinition {
- previous: ctx.locals.get_span(old),
- current: name.span,
- });
- }
-
+ let handle = ctx.declare_local(name)?;
ast::StatementKind::LocalDecl(ast::LocalDecl::Let(ast::Let {
name,
ty: given_ty,
diff --git a/src/front/wgsl/parse/mod.rs b/src/front/wgsl/parse/mod.rs
--- a/src/front/wgsl/parse/mod.rs
+++ b/src/front/wgsl/parse/mod.rs
@@ -1647,14 +1652,7 @@ impl Parser {
lexer.expect(Token::Separator(';'))?;
- let handle = ctx.locals.append(ast::Local, name.span);
- if let Some(old) = ctx.local_table.add(name.name, handle) {
- return Err(Error::Redefinition {
- previous: ctx.locals.get_span(old),
- current: name.span,
- });
- }
-
+ let handle = ctx.declare_local(name)?;
ast::StatementKind::LocalDecl(ast::LocalDecl::Var(ast::LocalVariable {
name,
ty,
diff --git a/src/front/wgsl/parse/mod.rs b/src/front/wgsl/parse/mod.rs
--- a/src/front/wgsl/parse/mod.rs
+++ b/src/front/wgsl/parse/mod.rs
@@ -2013,15 +2011,15 @@ impl Parser {
ctx.local_table.push_scope();
lexer.expect(Token::Paren('{'))?;
- let mut statements = ast::Block::default();
+ let mut block = ast::Block::default();
while !lexer.skip(Token::Paren('}')) {
- self.statement(lexer, ctx.reborrow(), &mut statements)?;
+ self.statement(lexer, ctx.reborrow(), &mut block)?;
}
ctx.local_table.pop_scope();
let span = self.pop_rule_span(lexer);
- Ok((statements, span))
+ Ok((block, span))
}
fn varying_binding<'a>(
diff --git a/src/front/wgsl/parse/mod.rs b/src/front/wgsl/parse/mod.rs
--- a/src/front/wgsl/parse/mod.rs
+++ b/src/front/wgsl/parse/mod.rs
@@ -2060,6 +2058,9 @@ impl Parser {
unresolved: dependencies,
};
+ // start a scope that contains arguments as well as the function body
+ ctx.local_table.push_scope();
+
// read parameter list
let mut arguments = Vec::new();
lexer.expect(Token::Paren('('))?;
diff --git a/src/front/wgsl/parse/mod.rs b/src/front/wgsl/parse/mod.rs
--- a/src/front/wgsl/parse/mod.rs
+++ b/src/front/wgsl/parse/mod.rs
@@ -2078,8 +2079,7 @@ impl Parser {
lexer.expect(Token::Separator(':'))?;
let param_type = self.type_decl(lexer, ctx.reborrow())?;
- let handle = ctx.locals.append(ast::Local, param_name.span);
- ctx.local_table.add(param_name.name, handle);
+ let handle = ctx.declare_local(param_name)?;
arguments.push(ast::FunctionArgument {
name: param_name,
ty: param_type,
diff --git a/src/front/wgsl/parse/mod.rs b/src/front/wgsl/parse/mod.rs
--- a/src/front/wgsl/parse/mod.rs
+++ b/src/front/wgsl/parse/mod.rs
@@ -2097,8 +2097,14 @@ impl Parser {
None
};
- // read body
- let body = self.block(lexer, ctx)?.0;
+ // do not use `self.block` here, since we must not push a new scope
+ lexer.expect(Token::Paren('{'))?;
+ let mut body = ast::Block::default();
+ while !lexer.skip(Token::Paren('}')) {
+ self.statement(lexer, ctx.reborrow(), &mut body)?;
+ }
+
+ ctx.local_table.pop_scope();
let fun = ast::Function {
entry_point: None,
| diff --git a/tests/in/lexical-scopes.wgsl b/tests/in/lexical-scopes.wgsl
--- a/tests/in/lexical-scopes.wgsl
+++ b/tests/in/lexical-scopes.wgsl
@@ -1,45 +1,41 @@
fn blockLexicalScope(a: bool) {
- let a = 1.0;
{
let a = 2;
{
- let a = true;
+ let a = 2.0;
}
- let test = a == 3;
+ let test: i32 = a;
}
- let test = a == 2.0;
+ let test: bool = a;
}
fn ifLexicalScope(a: bool) {
- let a = 1.0;
- if (a == 1.0) {
- let a = true;
+ if (a) {
+ let a = 2.0;
}
- let test = a == 2.0;
+ let test: bool = a;
}
fn loopLexicalScope(a: bool) {
- let a = 1.0;
loop {
- let a = true;
+ let a = 2.0;
}
- let test = a == 2.0;
+ let test: bool = a;
}
fn forLexicalScope(a: f32) {
- let a = false;
for (var a = 0; a < 1; a++) {
- let a = 3.0;
+ let a = true;
}
- let test = a == true;
+ let test: f32 = a;
}
fn whileLexicalScope(a: i32) {
while (a > 2) {
let a = false;
}
- let test = a == 1;
+ let test: i32 = a;
}
fn switchLexicalScope(a: i32) {
diff --git a/tests/out/wgsl/lexical-scopes.wgsl b/tests/out/wgsl/lexical-scopes.wgsl
--- a/tests/out/wgsl/lexical-scopes.wgsl
+++ b/tests/out/wgsl/lexical-scopes.wgsl
@@ -1,22 +1,23 @@
fn blockLexicalScope(a: bool) {
{
{
+ return;
}
- let test = (2 == 3);
}
- let test_1 = (1.0 == 2.0);
}
fn ifLexicalScope(a_1: bool) {
- if (1.0 == 1.0) {
+ if a_1 {
+ return;
+ } else {
+ return;
}
- let test_2 = (1.0 == 2.0);
}
fn loopLexicalScope(a_2: bool) {
loop {
}
- let test_3 = (1.0 == 2.0);
+ return;
}
fn forLexicalScope(a_3: f32) {
diff --git a/tests/out/wgsl/lexical-scopes.wgsl b/tests/out/wgsl/lexical-scopes.wgsl
--- a/tests/out/wgsl/lexical-scopes.wgsl
+++ b/tests/out/wgsl/lexical-scopes.wgsl
@@ -24,19 +25,19 @@ fn forLexicalScope(a_3: f32) {
a_4 = 0;
loop {
- let _e4 = a_4;
- if (_e4 < 1) {
+ let _e3 = a_4;
+ if (_e3 < 1) {
} else {
break;
}
{
}
continuing {
- let _e8 = a_4;
- a_4 = (_e8 + 1);
+ let _e7 = a_4;
+ a_4 = (_e7 + 1);
}
}
- let test_4 = (false == true);
+ return;
}
fn whileLexicalScope(a_5: i32) {
diff --git a/tests/out/wgsl/lexical-scopes.wgsl b/tests/out/wgsl/lexical-scopes.wgsl
--- a/tests/out/wgsl/lexical-scopes.wgsl
+++ b/tests/out/wgsl/lexical-scopes.wgsl
@@ -48,7 +49,7 @@ fn whileLexicalScope(a_5: i32) {
{
}
}
- let test_5 = (a_5 == 1);
+ return;
}
fn switchLexicalScope(a_6: i32) {
diff --git a/tests/out/wgsl/lexical-scopes.wgsl b/tests/out/wgsl/lexical-scopes.wgsl
--- a/tests/out/wgsl/lexical-scopes.wgsl
+++ b/tests/out/wgsl/lexical-scopes.wgsl
@@ -60,6 +61,6 @@ fn switchLexicalScope(a_6: i32) {
default: {
}
}
- let test_6 = (a_6 == 2);
+ let test = (a_6 == 2);
}
diff --git a/tests/wgsl-errors.rs b/tests/wgsl-errors.rs
--- a/tests/wgsl-errors.rs
+++ b/tests/wgsl-errors.rs
@@ -1883,6 +1883,44 @@ fn function_returns_void() {
)
}
+#[test]
+fn function_param_redefinition_as_param() {
+ check(
+ "
+ fn x(a: f32, a: vec2<f32>) {}
+ ",
+ r###"error: redefinition of `a`
+ ┌─ wgsl:2:14
+ │
+2 │ fn x(a: f32, a: vec2<f32>) {}
+ │ ^ ^ redefinition of `a`
+ │ │
+ │ previous definition of `a`
+
+"###,
+ )
+}
+
+#[test]
+fn function_param_redefinition_as_local() {
+ check(
+ "
+ fn x(a: f32) {
+ let a = 0.0;
+ }
+ ",
+ r###"error: redefinition of `a`
+ ┌─ wgsl:2:14
+ │
+2 │ fn x(a: f32) {
+ │ ^ previous definition of `a`
+3 │ let a = 0.0;
+ │ ^ redefinition of `a`
+
+"###,
+ )
+}
+
#[test]
fn binding_array_local() {
check_validation! {
| [wgsl-in] Redefining a parameter is incorrectly allowed
The following snippet redefines an incoming parameter, which should not compile (and Tint will complain about it!):
```
fn fun(t: f32) -> f32 {
let t = t + 1.0;
return t;
}
```
| 2023-05-15T23:41:22 | 0.12 | 04ef22f6dc8d9c4b958dc6575bbb3be9d7719ee0 | [
"function_param_redefinition_as_param",
"function_param_redefinition_as_local"
] | [
"arena::tests::append_non_unique",
"front::glsl::constants::tests::nan_handling",
"arena::tests::fetch_or_append_unique",
"back::msl::writer::test_stack_size",
"front::glsl::constants::tests::unary_op",
"front::glsl::lex::tests::lex_tokens",
"arena::tests::fetch_or_append_non_unique",
"back::spv::layo... | [] | [] | 71 | |
gfx-rs/naga | 2,290 | gfx-rs__naga-2290 | [
"1977"
] | dd54aaf26062384db0ecc7553c0fbe9534bb491e | diff --git a/src/front/spv/function.rs b/src/front/spv/function.rs
--- a/src/front/spv/function.rs
+++ b/src/front/spv/function.rs
@@ -597,7 +597,11 @@ impl<'function> BlockContext<'function> {
crate::Span::default(),
)
}
- super::BodyFragment::Loop { body, continuing } => {
+ super::BodyFragment::Loop {
+ body,
+ continuing,
+ break_if,
+ } => {
let body = lower_impl(blocks, bodies, body);
let continuing = lower_impl(blocks, bodies, continuing);
diff --git a/src/front/spv/function.rs b/src/front/spv/function.rs
--- a/src/front/spv/function.rs
+++ b/src/front/spv/function.rs
@@ -605,7 +609,7 @@ impl<'function> BlockContext<'function> {
crate::Statement::Loop {
body,
continuing,
- break_if: None,
+ break_if,
},
crate::Span::default(),
)
diff --git a/src/front/spv/mod.rs b/src/front/spv/mod.rs
--- a/src/front/spv/mod.rs
+++ b/src/front/spv/mod.rs
@@ -388,6 +388,11 @@ enum BodyFragment {
Loop {
body: BodyIndex,
continuing: BodyIndex,
+
+ /// If the SPIR-V loop's back-edge branch is conditional, this is the
+ /// expression that must be `false` for the back-edge to be taken, with
+ /// `true` being for the "loop merge" (which breaks out of the loop).
+ break_if: Option<Handle<crate::Expression>>,
},
Switch {
selector: Handle<crate::Expression>,
diff --git a/src/front/spv/mod.rs b/src/front/spv/mod.rs
--- a/src/front/spv/mod.rs
+++ b/src/front/spv/mod.rs
@@ -429,7 +434,7 @@ struct PhiExpression {
expressions: Vec<(spirv::Word, spirv::Word)>,
}
-#[derive(Debug)]
+#[derive(Copy, Clone, Debug, PartialEq, Eq)]
enum MergeBlockInformation {
LoopMerge,
LoopContinue,
diff --git a/src/front/spv/mod.rs b/src/front/spv/mod.rs
--- a/src/front/spv/mod.rs
+++ b/src/front/spv/mod.rs
@@ -3114,35 +3119,121 @@ impl<I: Iterator<Item = u32>> Frontend<I> {
get_expr_handle!(condition_id, lexp)
};
+ // HACK(eddyb) Naga doesn't seem to have this helper,
+ // so it's declared on the fly here for convenience.
+ #[derive(Copy, Clone)]
+ struct BranchTarget {
+ label_id: spirv::Word,
+ merge_info: Option<MergeBlockInformation>,
+ }
+ let branch_target = |label_id| BranchTarget {
+ label_id,
+ merge_info: ctx.mergers.get(&label_id).copied(),
+ };
+
+ let true_target = branch_target(self.next()?);
+ let false_target = branch_target(self.next()?);
+
+ // Consume branch weights
+ for _ in 4..inst.wc {
+ let _ = self.next()?;
+ }
+
+ // Handle `OpBranchConditional`s used at the end of a loop
+ // body's "continuing" section as a "conditional backedge",
+ // i.e. a `do`-`while` condition, or `break if` in WGSL.
+
+ // HACK(eddyb) this has to go to the parent *twice*, because
+ // `OpLoopMerge` left the "continuing" section nested in the
+ // loop body in terms of `parent`, but not `BodyFragment`.
+ let parent_body_idx = ctx.bodies[body_idx].parent;
+ let parent_parent_body_idx = ctx.bodies[parent_body_idx].parent;
+ match ctx.bodies[parent_parent_body_idx].data[..] {
+ // The `OpLoopMerge`'s `continuing` block and the loop's
+ // backedge block may not be the same, but they'll both
+ // belong to the same body.
+ [.., BodyFragment::Loop {
+ body: loop_body_idx,
+ continuing: loop_continuing_idx,
+ break_if: ref mut break_if_slot @ None,
+ }] if body_idx == loop_continuing_idx => {
+ // Try both orderings of break-vs-backedge, because
+ // SPIR-V is symmetrical here, unlike WGSL `break if`.
+ let break_if_cond = [true, false].into_iter().find_map(|true_breaks| {
+ let (break_candidate, backedge_candidate) = if true_breaks {
+ (true_target, false_target)
+ } else {
+ (false_target, true_target)
+ };
+
+ if break_candidate.merge_info
+ != Some(MergeBlockInformation::LoopMerge)
+ {
+ return None;
+ }
+
+ // HACK(eddyb) since Naga doesn't explicitly track
+ // backedges, this is checking for the outcome of
+ // `OpLoopMerge` below (even if it looks weird).
+ let backedge_candidate_is_backedge =
+ backedge_candidate.merge_info.is_none()
+ && ctx.body_for_label.get(&backedge_candidate.label_id)
+ == Some(&loop_body_idx);
+ if !backedge_candidate_is_backedge {
+ return None;
+ }
+
+ Some(if true_breaks {
+ condition
+ } else {
+ ctx.expressions.append(
+ crate::Expression::Unary {
+ op: crate::UnaryOperator::Not,
+ expr: condition,
+ },
+ span,
+ )
+ })
+ });
+
+ if let Some(break_if_cond) = break_if_cond {
+ *break_if_slot = Some(break_if_cond);
+
+ // This `OpBranchConditional` ends the "continuing"
+ // section of the loop body as normal, with the
+ // `break if` condition having been stashed above.
+ break None;
+ }
+ }
+ _ => {}
+ }
+
block.extend(emitter.finish(ctx.expressions));
ctx.blocks.insert(block_id, block);
let body = &mut ctx.bodies[body_idx];
body.data.push(BodyFragment::BlockId(block_id));
- let true_id = self.next()?;
- let false_id = self.next()?;
-
- let same_target = true_id == false_id;
+ let same_target = true_target.label_id == false_target.label_id;
// Start a body block for the `accept` branch.
let accept = ctx.bodies.len();
let mut accept_block = Body::with_parent(body_idx);
- // If the `OpBranchConditional`target is somebody else's
+ // If the `OpBranchConditional` target is somebody else's
// merge or continue block, then put a `Break` or `Continue`
// statement in this new body block.
- if let Some(info) = ctx.mergers.get(&true_id) {
+ if let Some(info) = true_target.merge_info {
merger(
match same_target {
true => &mut ctx.bodies[body_idx],
false => &mut accept_block,
},
- info,
+ &info,
)
} else {
// Note the body index for the block we're branching to.
let prev = ctx.body_for_label.insert(
- true_id,
+ true_target.label_id,
match same_target {
true => body_idx,
false => accept,
diff --git a/src/front/spv/mod.rs b/src/front/spv/mod.rs
--- a/src/front/spv/mod.rs
+++ b/src/front/spv/mod.rs
@@ -3161,10 +3252,10 @@ impl<I: Iterator<Item = u32>> Frontend<I> {
let reject = ctx.bodies.len();
let mut reject_block = Body::with_parent(body_idx);
- if let Some(info) = ctx.mergers.get(&false_id) {
- merger(&mut reject_block, info)
+ if let Some(info) = false_target.merge_info {
+ merger(&mut reject_block, &info)
} else {
- let prev = ctx.body_for_label.insert(false_id, reject);
+ let prev = ctx.body_for_label.insert(false_target.label_id, reject);
debug_assert!(prev.is_none());
}
diff --git a/src/front/spv/mod.rs b/src/front/spv/mod.rs
--- a/src/front/spv/mod.rs
+++ b/src/front/spv/mod.rs
@@ -3177,11 +3268,6 @@ impl<I: Iterator<Item = u32>> Frontend<I> {
reject,
});
- // Consume branch weights
- for _ in 4..inst.wc {
- let _ = self.next()?;
- }
-
return Ok(());
}
Op::Switch => {
diff --git a/src/front/spv/mod.rs b/src/front/spv/mod.rs
--- a/src/front/spv/mod.rs
+++ b/src/front/spv/mod.rs
@@ -3351,6 +3437,7 @@ impl<I: Iterator<Item = u32>> Frontend<I> {
parent_body.data.push(BodyFragment::Loop {
body: loop_body_idx,
continuing: continue_idx,
+ break_if: None,
});
body_idx = loop_body_idx;
}
| diff --git /dev/null b/tests/in/spv/do-while.spvasm
new file mode 100644
--- /dev/null
+++ b/tests/in/spv/do-while.spvasm
@@ -0,0 +1,64 @@
+;; Ensure that `do`-`while`-style loops, with conditional backedges, are properly
+;; supported, via `break if` (as `continuing { ... if c { break; } }` is illegal).
+;;
+;; The SPIR-V below was compiled from this GLSL fragment shader:
+;; ```glsl
+;; #version 450
+;;
+;; void f(bool cond) {
+;; do {} while(cond);
+;; }
+;;
+;; void main() {
+;; f(false);
+;; }
+;; ```
+
+ OpCapability Shader
+ %1 = OpExtInstImport "GLSL.std.450"
+ OpMemoryModel Logical GLSL450
+ OpEntryPoint Fragment %main "main"
+ OpExecutionMode %main OriginUpperLeft
+ OpSource GLSL 450
+ OpName %main "main"
+ OpName %f_b1_ "f(b1;"
+ OpName %cond "cond"
+ OpName %param "param"
+ %void = OpTypeVoid
+ %3 = OpTypeFunction %void
+ %bool = OpTypeBool
+%_ptr_Function_bool = OpTypePointer Function %bool
+ %8 = OpTypeFunction %void %_ptr_Function_bool
+ %false = OpConstantFalse %bool
+
+ %main = OpFunction %void None %3
+ %5 = OpLabel
+ %param = OpVariable %_ptr_Function_bool Function
+ OpStore %param %false
+ %19 = OpFunctionCall %void %f_b1_ %param
+ OpReturn
+ OpFunctionEnd
+
+ %f_b1_ = OpFunction %void None %8
+ %cond = OpFunctionParameter %_ptr_Function_bool
+
+ %11 = OpLabel
+ OpBranch %12
+
+ %12 = OpLabel
+ OpLoopMerge %14 %15 None
+ OpBranch %13
+
+ %13 = OpLabel
+ OpBranch %15
+
+;; This is the "continuing" block, and it contains a conditional branch between
+;; the backedge (back to the loop header) and the loop merge ("break") target.
+ %15 = OpLabel
+ %16 = OpLoad %bool %cond
+ OpBranchConditional %16 %12 %14
+
+ %14 = OpLabel
+ OpReturn
+
+ OpFunctionEnd
diff --git /dev/null b/tests/out/glsl/do-while.main.Fragment.glsl
new file mode 100644
--- /dev/null
+++ b/tests/out/glsl/do-while.main.Fragment.glsl
@@ -0,0 +1,33 @@
+#version 310 es
+
+precision highp float;
+precision highp int;
+
+
+void fb1_(inout bool cond) {
+ bool loop_init = true;
+ while(true) {
+ if (!loop_init) {
+ bool _e6 = cond;
+ bool unnamed = !(_e6);
+ if (unnamed) {
+ break;
+ }
+ }
+ loop_init = false;
+ continue;
+ }
+ return;
+}
+
+void main_1() {
+ bool param = false;
+ param = false;
+ fb1_(param);
+ return;
+}
+
+void main() {
+ main_1();
+}
+
diff --git /dev/null b/tests/out/hlsl/do-while.hlsl
new file mode 100644
--- /dev/null
+++ b/tests/out/hlsl/do-while.hlsl
@@ -0,0 +1,31 @@
+
+void fb1_(inout bool cond)
+{
+ bool loop_init = true;
+ while(true) {
+ if (!loop_init) {
+ bool _expr6 = cond;
+ bool unnamed = !(_expr6);
+ if (unnamed) {
+ break;
+ }
+ }
+ loop_init = false;
+ continue;
+ }
+ return;
+}
+
+void main_1()
+{
+ bool param = (bool)0;
+
+ param = false;
+ fb1_(param);
+ return;
+}
+
+void main()
+{
+ main_1();
+}
diff --git /dev/null b/tests/out/hlsl/do-while.hlsl.config
new file mode 100644
--- /dev/null
+++ b/tests/out/hlsl/do-while.hlsl.config
@@ -0,0 +1,3 @@
+vertex=()
+fragment=(main:ps_5_1 )
+compute=()
diff --git /dev/null b/tests/out/msl/do-while.msl
new file mode 100644
--- /dev/null
+++ b/tests/out/msl/do-while.msl
@@ -0,0 +1,37 @@
+// language: metal2.0
+#include <metal_stdlib>
+#include <simd/simd.h>
+
+using metal::uint;
+
+
+void fb1_(
+ thread bool& cond
+) {
+ bool loop_init = true;
+ while(true) {
+ if (!loop_init) {
+ bool _e6 = cond;
+ bool unnamed = !(_e6);
+ if (!(cond)) {
+ break;
+ }
+ }
+ loop_init = false;
+ continue;
+ }
+ return;
+}
+
+void main_1(
+) {
+ bool param = {};
+ param = false;
+ fb1_(param);
+ return;
+}
+
+fragment void main_(
+) {
+ main_1();
+}
diff --git /dev/null b/tests/out/wgsl/do-while.wgsl
new file mode 100644
--- /dev/null
+++ b/tests/out/wgsl/do-while.wgsl
@@ -0,0 +1,24 @@
+fn fb1_(cond: ptr<function, bool>) {
+ loop {
+ continue;
+ continuing {
+ let _e6 = (*cond);
+ _ = !(_e6);
+ break if !(_e6);
+ }
+ }
+ return;
+}
+
+fn main_1() {
+ var param: bool;
+
+ param = false;
+ fb1_((¶m));
+ return;
+}
+
+@fragment
+fn main() {
+ main_1();
+}
diff --git a/tests/snapshots.rs b/tests/snapshots.rs
--- a/tests/snapshots.rs
+++ b/tests/snapshots.rs
@@ -640,6 +640,11 @@ fn convert_spv_all() {
convert_spv("degrees", false, Targets::empty());
convert_spv("binding-arrays.dynamic", true, Targets::WGSL);
convert_spv("binding-arrays.static", true, Targets::WGSL);
+ convert_spv(
+ "do-while",
+ true,
+ Targets::METAL | Targets::GLSL | Targets::HLSL | Targets::WGSL,
+ );
}
#[cfg(feature = "glsl-in")]
| [spv-in] Conditional loop backedge not lowered to `Statement::Loop` with `break_if` expression
Naga (https://github.com/gfx-rs/naga/commit/ab2806e05fbd69a502b4b25a85f99ae4d3e82278) rejects this SPIR-V:
```
OpCapability Shader
OpMemoryModel Logical GLSL450
OpEntryPoint GLCompute %7 "main"
OpExecutionMode %7 LocalSize 1 1 1
; Below, we declare various types and variables for storage buffers.
; These decorations tell SPIR-V that the types and variables relate to storage buffers
OpDecorate %size_1_struct_type BufferBlock
OpMemberDecorate %size_1_struct_type 0 Offset 0
OpDecorate %size_1_array_type ArrayStride 4
OpDecorate %output_struct_type BufferBlock
OpMemberDecorate %output_struct_type 0 Offset 0
OpDecorate %output_array_type ArrayStride 4
OpDecorate %directions_8_variable DescriptorSet 0
OpDecorate %directions_8_variable Binding 0
OpDecorate %directions_10_variable DescriptorSet 0
OpDecorate %directions_10_variable Binding 1
OpDecorate %directions_14_variable DescriptorSet 0
OpDecorate %directions_14_variable Binding 2
OpDecorate %directions_15_variable DescriptorSet 0
OpDecorate %directions_15_variable Binding 3
OpDecorate %output_variable DescriptorSet 0
OpDecorate %output_variable Binding 4
%1 = OpTypeVoid
%2 = OpTypeFunction %1
%3 = OpTypeBool
%4 = OpTypeInt 32 0
%5 = OpConstantTrue %3
%6 = OpConstant %4 0
%constant_0 = OpConstant %4 0
%constant_1 = OpConstant %4 1
%constant_7 = OpConstant %4 7
%constant_8 = OpConstant %4 8
%constant_9 = OpConstant %4 9
%constant_10 = OpConstant %4 10
%constant_11 = OpConstant %4 11
%constant_12 = OpConstant %4 12
%constant_13 = OpConstant %4 13
%constant_14 = OpConstant %4 14
%constant_15 = OpConstant %4 15
; Declaration of storage buffers for the 4 directions and the output
%size_1_array_type = OpTypeArray %4 %constant_1
%size_1_struct_type = OpTypeStruct %size_1_array_type
%size_1_pointer_type = OpTypePointer Uniform %size_1_struct_type
%directions_8_variable = OpVariable %size_1_pointer_type Uniform
%directions_10_variable = OpVariable %size_1_pointer_type Uniform
%directions_14_variable = OpVariable %size_1_pointer_type Uniform
%directions_15_variable = OpVariable %size_1_pointer_type Uniform
%output_array_type = OpTypeArray %4 %constant_7
%output_struct_type = OpTypeStruct %output_array_type
%output_pointer_type = OpTypePointer Uniform %output_struct_type
%output_variable = OpVariable %output_pointer_type Uniform
; Pointer type for declaring local variables of int type
%local_int_ptr = OpTypePointer Function %4
; Pointer type for integer data in a storage buffer
%storage_buffer_int_ptr = OpTypePointer Uniform %4
%7 = OpFunction %1 None %2
%8 = OpLabel ; validCFG/SelectionHeader$2
%output_index = OpVariable %local_int_ptr Function %constant_0
%directions_8_index = OpVariable %local_int_ptr Function %constant_0
%directions_10_index = OpVariable %local_int_ptr Function %constant_0
%directions_14_index = OpVariable %local_int_ptr Function %constant_0
%directions_15_index = OpVariable %local_int_ptr Function %constant_0
%temp_8_0 = OpLoad %4 %output_index
%temp_8_1 = OpAccessChain %storage_buffer_int_ptr %output_variable %constant_0 %temp_8_0
OpStore %temp_8_1 %constant_8
%temp_8_2 = OpIAdd %4 %temp_8_0 %constant_1
OpStore %output_index %temp_8_2
%temp_8_3 = OpLoad %4 %directions_8_index
%temp_8_4 = OpAccessChain %storage_buffer_int_ptr %directions_8_variable %constant_0 %temp_8_3
%temp_8_5 = OpLoad %4 %temp_8_4
%temp_8_7 = OpIAdd %4 %temp_8_3 %constant_1
OpStore %directions_8_index %temp_8_7
OpSelectionMerge %9 None
OpSwitch %temp_8_5 %10 1 %9
%10 = OpLabel ; validCFG/SelectionHeader$1
%temp_10_0 = OpLoad %4 %output_index
%temp_10_1 = OpAccessChain %storage_buffer_int_ptr %output_variable %constant_0 %temp_10_0
OpStore %temp_10_1 %constant_10
%temp_10_2 = OpIAdd %4 %temp_10_0 %constant_1
OpStore %output_index %temp_10_2
%temp_10_3 = OpLoad %4 %directions_10_index
%temp_10_4 = OpAccessChain %storage_buffer_int_ptr %directions_10_variable %constant_0 %temp_10_3
%temp_10_5 = OpLoad %4 %temp_10_4
%temp_10_7 = OpIAdd %4 %temp_10_3 %constant_1
OpStore %directions_10_index %temp_10_7
OpSelectionMerge %11 None
OpSwitch %temp_10_5 %12 1 %11
%12 = OpLabel ; validCFG/Block$0
OpReturn
%11 = OpLabel ; validCFG/LoopHeader$0
%temp_11_0 = OpLoad %4 %output_index
%temp_11_1 = OpAccessChain %storage_buffer_int_ptr %output_variable %constant_0 %temp_11_0
OpStore %temp_11_1 %constant_11
%temp_11_2 = OpIAdd %4 %temp_11_0 %constant_1
OpStore %output_index %temp_11_2
OpLoopMerge %13 %14 None
OpBranch %14
%14 = OpLabel ; validCFG/SelectionHeader$0
%temp_14_0 = OpLoad %4 %output_index
%temp_14_1 = OpAccessChain %storage_buffer_int_ptr %output_variable %constant_0 %temp_14_0
OpStore %temp_14_1 %constant_14
%temp_14_2 = OpIAdd %4 %temp_14_0 %constant_1
OpStore %output_index %temp_14_2
%temp_14_3 = OpLoad %4 %directions_14_index
%temp_14_4 = OpAccessChain %storage_buffer_int_ptr %directions_14_variable %constant_0 %temp_14_3
%temp_14_5 = OpLoad %4 %temp_14_4
%temp_14_7 = OpIAdd %4 %temp_14_3 %constant_1
OpStore %directions_14_index %temp_14_7
OpSelectionMerge %15 None
OpSwitch %temp_14_5 %15
%15 = OpLabel ; validCFG/StructurallyReachableBlock$2
%temp_15_0 = OpLoad %4 %output_index
%temp_15_1 = OpAccessChain %storage_buffer_int_ptr %output_variable %constant_0 %temp_15_0
OpStore %temp_15_1 %constant_15
%temp_15_2 = OpIAdd %4 %temp_15_0 %constant_1
OpStore %output_index %temp_15_2
%temp_15_3 = OpLoad %4 %directions_15_index
%temp_15_4 = OpAccessChain %storage_buffer_int_ptr %directions_15_variable %constant_0 %temp_15_3
%temp_15_5 = OpLoad %4 %temp_15_4
%temp_15_6 = OpIEqual %3 %temp_15_5 %constant_1
%temp_15_7 = OpIAdd %4 %temp_15_3 %constant_1
OpStore %directions_15_index %temp_15_7
OpBranchConditional %temp_15_6 %13 %11
%13 = OpLabel ; validCFG/StructurallyReachableBlock$1
%temp_13_0 = OpLoad %4 %output_index
%temp_13_1 = OpAccessChain %storage_buffer_int_ptr %output_variable %constant_0 %temp_13_0
OpStore %temp_13_1 %constant_13
%temp_13_2 = OpIAdd %4 %temp_13_0 %constant_1
OpStore %output_index %temp_13_2
OpBranch %9
%9 = OpLabel ; validCFG/StructurallyReachableBlock$0
%temp_9_0 = OpLoad %4 %output_index
%temp_9_1 = OpAccessChain %storage_buffer_int_ptr %output_variable %constant_0 %temp_9_0
OpStore %temp_9_1 %constant_9
%temp_9_2 = OpIAdd %4 %temp_9_0 %constant_1
OpStore %output_index %temp_9_2
OpReturn
OpFunctionEnd
```
We get:
`The 'break' is used outside of a 'loop' or 'switch' context`
The SPIR-V is valid, and spirv-val accepts it. There is no "break" outside any switch.

This is exactly the same as this [issue](https://github.com/gfx-rs/naga/issues/1028).
| @JCapucho any idea? (#1028 seems to have been fixed; maybe a regression somewhere?)
I finally had some time to analyze this issue, and it seems to be a combination of smaller issues. The part that is causing all of this is
```llvm
%15 = OpLabel ; validCFG/StructurallyReachableBlock$2
// ...
OpBranchConditional %temp_15_6 %13 %11
```
Note the conditional branch here: it goes either to `%13`, the loop merge target, or to `%11`, the loop header, essentially causing the loop to continue.
The second target is tricky and the problem is further discussed in this issue https://github.com/gfx-rs/wgpu/issues/4388
The first target is a break and should be handled as such, but naga's validator seems to disallow `break`s in the `continuing` block, which the WGSL spec seems to allow (https://gpuweb.github.io/gpuweb/wgsl/#behaviors), but I'm not sure; maybe @jimblandy can confirm it.
[The grammar](https://gpuweb.github.io/gpuweb/wgsl/#continuing-statement) isn't especially clear, but `break` statements in `continuing` blocks are restricted to be the very last statement in the block, like this:
```
loop {
...
continuing {
...
break if (condition);
}
}
```
Further statements after the `break if` or after the `continuing` block are forbidden.
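That placement rule can be sketched as a trivial structural check. This is purely illustrative: the function and its string-based "statements" are made up here, not real WGSL validation.

```rust
/// Illustrative check: `break`-style statements may appear in a
/// `continuing` block only as its final statement.
fn break_only_last(continuing: &[&str]) -> bool {
    continuing
        .iter()
        .enumerate()
        .all(|(i, s)| !s.trim_start().starts_with("break") || i + 1 == continuing.len())
}

fn main() {
    // Legal: `break if` is the last statement of `continuing`.
    assert!(break_only_last(&["i = i + 1;", "break if cond;"]));
    // Illegal: further statements follow the `break`.
    assert!(!break_only_last(&["break;", "i = i + 1;"]));
}
```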
This is supposed to exactly match [SPIR-V's requirements for continuing blocks](https://www.khronos.org/registry/SPIR-V/specs/unified1/SPIRV.html#_structured_control_flow):
> - the Continue Target must dominate the back-edge block
> - the back-edge block must post dominate the Continue Target
Those rules forbid breaks in continue constructs, except that the final backward branch may be conditional.
It doesn't look like Naga has implemented that final `break if` yet. I think the Naga IR for loops should end up like this:
```
Loop { body: Block, continuing: Block, bottom_break_if: Option<Handle<Expression>> },
```
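Assuming a variant along those lines, a backend could place the optional condition as the mandatory final `break if` of `continuing`. A minimal sketch with made-up types (`LoopStmt` and `emit_wgsl` are illustrative, not Naga's real IR or writer):

```rust
// Hypothetical, simplified mirror of the proposed IR variant.
struct LoopStmt {
    body: Vec<String>,
    continuing: Vec<String>,
    break_if: Option<String>, // condition expression text, if any
}

fn emit_wgsl(l: &LoopStmt) -> String {
    let mut out = String::from("loop {\n");
    for s in &l.body {
        out.push_str("    ");
        out.push_str(s);
        out.push('\n');
    }
    out.push_str("    continuing {\n");
    for s in &l.continuing {
        out.push_str("        ");
        out.push_str(s);
        out.push('\n');
    }
    // `break if` is only legal as the final statement of `continuing`.
    if let Some(cond) = &l.break_if {
        out.push_str(&format!("        break if {};\n", cond));
    }
    out.push_str("    }\n}\n");
    out
}

fn main() {
    let l = LoopStmt {
        body: vec![],
        continuing: vec!["let c = load_cond();".into()],
        break_if: Some("!(c)".into()),
    };
    let text = emit_wgsl(&l);
    assert!(text.contains("break if !(c);"));
    println!("{}", text);
}
```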
By using the Naga CLI, I was able to reduce one of my own errors to:
<details>
<summary>(click to open - it's not that relevant, see later below)</summary>
```rs
fn f(cond: bool) {
loop {
// ... loop body ...
continue;
continuing {
if cond {
} else {
break;
}
}
}
}
```
```console
$ naga minimal.wgsl
error:
┌─ minimal.wgsl:1:1
│
1 │ ╭ fn f(cond: bool) {
2 │ │ loop {
3 │ │ // ... loop body ...
4 │ │ continue;
· │
8 │ │ break;
│ │ ^^^^^ invalid break
9 │ │ }
10 │ │ }
11 │ │ }
│ ╰─────^ naga::Function [1]
Function [1] 'f' is invalid:
The `break` is used outside of a `loop` or `switch` context
```
</details>
Is this the same issue, @jimblandy? I wonder if the title of this issue should be changed to remove "switch", in that case.
---
Though, reducing the WGSL is a bit misleading, since what I'd really want to do is reduce the `spirv-val`-**passing** input that it fails on. And I can't trivially get that from WGSL, either:
```console
$ naga --validate 0 minimal.wgsl minimal.spv
$ spirv-val minimal.spv
error: line 24: The continue construct with the continue target '12[%12]' is not structurally post dominated by the back-edge block '13[%13]'
%13 = OpLabel
```
But by starting from `spirv-dis minimal.spv > minimal.spvasm`, I was able to come up with this:
```llvm
OpCapability Shader
OpCapability Linkage
OpMemoryModel Logical GLSL450
%void = OpTypeVoid
%bool = OpTypeBool
%typeof_f = OpTypeFunction %void %bool
%f = OpFunction %void None %typeof_f
%cond = OpFunctionParameter %bool
%entry = OpLabel
OpBranch %loop
%loop = OpLabel
OpLoopMerge %loop_exit_merge %loop_continue None
OpBranch %loop_body
%loop_body = OpLabel
OpBranch %loop_continue
%loop_continue = OpLabel
; OpSelectionMerge %loop_back_edge None
; OpBranchConditional %cond %loop_break %loop_back_edge
OpBranchConditional %cond %loop_exit_merge %loop
; %loop_break = OpLabel
; OpBranch %loop_exit_merge
; %loop_back_edge = OpLabel
; OpBranch %loop
%loop_exit_merge = OpLabel
OpReturn
OpFunctionEnd
```
(the commented out instructions are the original ones, that I had to look at the original larger shader to know what to replace them with)
<sub>(in retrospect it makes so much sense that `break if` is the same feature where the backedge can be an `OpBranchConditional` instead of just an `OpBranch`, without an `OpSelectionMerge` since it's the backedge, but "conditional `break`" is not how I think of such backedges)</sub>
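In that light, the frontend's job on a backedge block is just to recognize which successor of the conditional branch is the loop header and which is the merge block. A hypothetical sketch (the `Terminator` and `Backedge` types and the numeric IDs are illustrative, not Naga's data structures):

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum Terminator {
    Branch { target: u32 },
    BranchConditional { true_id: u32, false_id: u32 },
}

#[derive(Debug, PartialEq)]
enum Backedge {
    Unconditional,
    /// Conditional backedge: `break_target` is the merge-block successor,
    /// i.e. the `break if` direction.
    Conditional { break_target: u32 },
}

fn classify_backedge(t: Terminator, header: u32, merge: u32) -> Option<Backedge> {
    match t {
        Terminator::Branch { target } if target == header => Some(Backedge::Unconditional),
        Terminator::BranchConditional { true_id, false_id }
            if true_id == header && false_id == merge =>
        {
            Some(Backedge::Conditional { break_target: false_id })
        }
        Terminator::BranchConditional { true_id, false_id }
            if false_id == header && true_id == merge =>
        {
            Some(Backedge::Conditional { break_target: true_id })
        }
        _ => None,
    }
}

fn main() {
    // Mirrors `OpBranchConditional %cond %loop_exit_merge %loop` above,
    // with header = %loop (12) and merge = %loop_exit_merge (14).
    let t = Terminator::BranchConditional { true_id: 14, false_id: 12 };
    assert_eq!(
        classify_backedge(t, 12, 14),
        Some(Backedge::Conditional { break_target: 14 })
    );
    // A plain `OpBranch` back to the header is an unconditional backedge.
    assert_eq!(
        classify_backedge(Terminator::Branch { target: 12 }, 12, 14),
        Some(Backedge::Unconditional)
    );
}
```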
```console
$ spirv-as minimal.spvasm -o minimal.spv
$ spirv-val minimal.spv
$ naga minimal.spv
Function [1] '' is invalid:
The `break` is used outside of a `loop` or `switch` context
```
@jimblandy so it feels like "SPIR-V frontend: conditional backedge not lowered to `break if`" or similar should be the title (not super familiar with Naga terminology, don't mind me if this isn't helpful).
As suspected, this breaks idiomatic (i.e. using conditional backedges) SPIR-V `do`-`while` loops:
```console
$ cat do_while.vert
#version 450
void f(bool cond) {
do {} while(cond);
}
void main() {
f(true);
}
$ glslangValidator -V do_while.vert -o do_while.spv
do_while.vert
$ spirv-val do_while.spv
$ naga do_while.spv
Function [1] 'f(b1;' is invalid:
The `break` is used outside of a `loop` or `switch` context
```
By comparison, Naga's own GLSL frontend produces effectively this (w/o any `continuing`/`break if`):
```rs
fn f(cond: bool) {
loop {
if !cond {
break;
}
}
return;
}
``` | 2023-03-23T17:25:39 | 0.12 | 04ef22f6dc8d9c4b958dc6575bbb3be9d7719ee0 | [
"convert_spv_all"
] | [
"arena::tests::fetch_or_append_unique",
"arena::tests::append_unique",
"arena::tests::fetch_or_append_non_unique",
"arena::tests::append_non_unique",
"back::msl::test_error_size",
"back::spv::layout::test_physical_layout_in_words",
"back::spv::layout::test_logical_layout_in_words",
"back::spv::writer:... | [] | [] | 72 |
gfx-rs/naga | 2,233 | gfx-rs__naga-2233 | [
"2169"
] | 6be394dac31bc9796d4ae4bb450a40c6b6ee0b08 | diff --git a/src/front/wgsl/error.rs b/src/front/wgsl/error.rs
--- a/src/front/wgsl/error.rs
+++ b/src/front/wgsl/error.rs
@@ -134,7 +134,7 @@ pub enum NumberError {
pub enum InvalidAssignmentType {
Other,
Swizzle,
- ImmutableBinding,
+ ImmutableBinding(Span),
}
#[derive(Clone, Debug)]
diff --git a/src/front/wgsl/error.rs b/src/front/wgsl/error.rs
--- a/src/front/wgsl/error.rs
+++ b/src/front/wgsl/error.rs
@@ -536,21 +536,33 @@ impl<'a> Error<'a> {
labels: vec![(span, "expression is not a reference".into())],
notes: vec![],
},
- Error::InvalidAssignment { span, ty } => ParseError {
- message: "invalid left-hand side of assignment".into(),
- labels: vec![(span, "cannot assign to this expression".into())],
- notes: match ty {
- InvalidAssignmentType::Swizzle => vec![
- "WGSL does not support assignments to swizzles".into(),
- "consider assigning each component individually".into(),
- ],
- InvalidAssignmentType::ImmutableBinding => vec![
- format!("'{}' is an immutable binding", &source[span]),
- "consider declaring it with `var` instead of `let`".into(),
- ],
- InvalidAssignmentType::Other => vec![],
- },
- },
+ Error::InvalidAssignment { span, ty } => {
+ let (extra_label, notes) = match ty {
+ InvalidAssignmentType::Swizzle => (
+ None,
+ vec![
+ "WGSL does not support assignments to swizzles".into(),
+ "consider assigning each component individually".into(),
+ ],
+ ),
+ InvalidAssignmentType::ImmutableBinding(binding_span) => (
+ Some((binding_span, "this is an immutable binding".into())),
+ vec![format!(
+ "consider declaring '{}' with `var` instead of `let`",
+ &source[binding_span]
+ )],
+ ),
+ InvalidAssignmentType::Other => (None, vec![]),
+ };
+
+ ParseError {
+ message: "invalid left-hand side of assignment".into(),
+ labels: std::iter::once((span, "cannot assign to this expression".into()))
+ .chain(extra_label)
+ .collect(),
+ notes,
+ }
+ }
Error::Pointer(what, span) => ParseError {
message: format!("{what} must not be a pointer"),
labels: vec![(span, "expression is a pointer".into())],
diff --git a/src/front/wgsl/lower/mod.rs b/src/front/wgsl/lower/mod.rs
--- a/src/front/wgsl/lower/mod.rs
+++ b/src/front/wgsl/lower/mod.rs
@@ -5,6 +5,7 @@ use crate::front::wgsl::parse::{ast, conv};
use crate::front::{Emitter, Typifier};
use crate::proc::{ensure_block_returns, Alignment, Layouter, ResolveContext, TypeResolution};
use crate::{Arena, FastHashMap, Handle, Span};
+use indexmap::IndexMap;
mod construction;
diff --git a/src/front/wgsl/lower/mod.rs b/src/front/wgsl/lower/mod.rs
--- a/src/front/wgsl/lower/mod.rs
+++ b/src/front/wgsl/lower/mod.rs
@@ -74,7 +75,9 @@ pub struct StatementContext<'source, 'temp, 'out> {
typifier: &'temp mut Typifier,
variables: &'out mut Arena<crate::LocalVariable>,
naga_expressions: &'out mut Arena<crate::Expression>,
- named_expressions: &'out mut crate::NamedExpressions,
+ /// Stores the names of expressions that are assigned in `let` statement
+ /// Also stores the spans of the names, for use in errors.
+ named_expressions: &'out mut IndexMap<Handle<crate::Expression>, (String, Span)>,
arguments: &'out [crate::FunctionArgument],
module: &'out mut crate::Module,
}
diff --git a/src/front/wgsl/lower/mod.rs b/src/front/wgsl/lower/mod.rs
--- a/src/front/wgsl/lower/mod.rs
+++ b/src/front/wgsl/lower/mod.rs
@@ -126,6 +129,19 @@ impl<'a, 'temp> StatementContext<'a, 'temp, '_> {
module: self.module,
}
}
+
+ fn invalid_assignment_type(&self, expr: Handle<crate::Expression>) -> InvalidAssignmentType {
+ if let Some(&(_, span)) = self.named_expressions.get(&expr) {
+ InvalidAssignmentType::ImmutableBinding(span)
+ } else {
+ match self.naga_expressions[expr] {
+ crate::Expression::Swizzle { .. } => InvalidAssignmentType::Swizzle,
+ crate::Expression::Access { base, .. } => self.invalid_assignment_type(base),
+ crate::Expression::AccessIndex { base, .. } => self.invalid_assignment_type(base),
+ _ => InvalidAssignmentType::Other,
+ }
+ }
+ }
}
/// State for lowering an `ast::Expression` to Naga IR.
diff --git a/src/front/wgsl/lower/mod.rs b/src/front/wgsl/lower/mod.rs
--- a/src/front/wgsl/lower/mod.rs
+++ b/src/front/wgsl/lower/mod.rs
@@ -352,10 +368,10 @@ impl<'a> ExpressionContext<'a, '_, '_> {
}
crate::TypeInner::Vector { size, kind, width } => {
let scalar_ty = self.ensure_type_exists(crate::TypeInner::Scalar { width, kind });
- let component = self.create_zero_value_constant(scalar_ty);
+ let component = self.create_zero_value_constant(scalar_ty)?;
crate::ConstantInner::Composite {
ty,
- components: (0..size as u8).map(|_| component).collect::<Option<_>>()?,
+ components: (0..size as u8).map(|_| component).collect(),
}
}
crate::TypeInner::Matrix {
diff --git a/src/front/wgsl/lower/mod.rs b/src/front/wgsl/lower/mod.rs
--- a/src/front/wgsl/lower/mod.rs
+++ b/src/front/wgsl/lower/mod.rs
@@ -368,12 +384,10 @@ impl<'a> ExpressionContext<'a, '_, '_> {
kind: crate::ScalarKind::Float,
size: rows,
});
- let component = self.create_zero_value_constant(vec_ty);
+ let component = self.create_zero_value_constant(vec_ty)?;
crate::ConstantInner::Composite {
ty,
- components: (0..columns as u8)
- .map(|_| component)
- .collect::<Option<_>>()?,
+ components: (0..columns as u8).map(|_| component).collect(),
}
}
crate::TypeInner::Array {
diff --git a/src/front/wgsl/lower/mod.rs b/src/front/wgsl/lower/mod.rs
--- a/src/front/wgsl/lower/mod.rs
+++ b/src/front/wgsl/lower/mod.rs
@@ -381,12 +395,11 @@ impl<'a> ExpressionContext<'a, '_, '_> {
size: crate::ArraySize::Constant(size),
..
} => {
- let component = self.create_zero_value_constant(base);
+ let size = self.module.constants[size].to_array_length()?;
+ let component = self.create_zero_value_constant(base)?;
crate::ConstantInner::Composite {
ty,
- components: (0..self.module.constants[size].to_array_length().unwrap())
- .map(|_| component)
- .collect::<Option<_>>()?,
+ components: (0..size).map(|_| component).collect(),
}
}
crate::TypeInner::Struct { ref members, .. } => {
diff --git a/src/front/wgsl/lower/mod.rs b/src/front/wgsl/lower/mod.rs
--- a/src/front/wgsl/lower/mod.rs
+++ b/src/front/wgsl/lower/mod.rs
@@ -740,7 +753,7 @@ impl<'source, 'temp> Lowerer<'source, 'temp> {
let mut local_table = FastHashMap::default();
let mut local_variables = Arena::new();
let mut expressions = Arena::new();
- let mut named_expressions = crate::NamedExpressions::default();
+ let mut named_expressions = IndexMap::default();
let arguments = f
.arguments
diff --git a/src/front/wgsl/lower/mod.rs b/src/front/wgsl/lower/mod.rs
--- a/src/front/wgsl/lower/mod.rs
+++ b/src/front/wgsl/lower/mod.rs
@@ -751,7 +764,7 @@ impl<'source, 'temp> Lowerer<'source, 'temp> {
let expr = expressions
.append(crate::Expression::FunctionArgument(i as u32), arg.name.span);
local_table.insert(arg.handle, TypedExpression::non_reference(expr));
- named_expressions.insert(expr, arg.name.name.to_string());
+ named_expressions.insert(expr, (arg.name.name.to_string(), arg.name.span));
Ok(crate::FunctionArgument {
name: Some(arg.name.name.to_string()),
diff --git a/src/front/wgsl/lower/mod.rs b/src/front/wgsl/lower/mod.rs
--- a/src/front/wgsl/lower/mod.rs
+++ b/src/front/wgsl/lower/mod.rs
@@ -797,7 +810,10 @@ impl<'source, 'temp> Lowerer<'source, 'temp> {
result,
local_variables,
expressions,
- named_expressions,
+ named_expressions: named_expressions
+ .into_iter()
+ .map(|(key, (name, _))| (key, name))
+ .collect(),
body,
};
diff --git a/src/front/wgsl/lower/mod.rs b/src/front/wgsl/lower/mod.rs
--- a/src/front/wgsl/lower/mod.rs
+++ b/src/front/wgsl/lower/mod.rs
@@ -870,7 +886,8 @@ impl<'source, 'temp> Lowerer<'source, 'temp> {
block.extend(emitter.finish(ctx.naga_expressions));
ctx.local_table
.insert(l.handle, TypedExpression::non_reference(value));
- ctx.named_expressions.insert(value, l.name.name.to_string());
+ ctx.named_expressions
+ .insert(value, (l.name.name.to_string(), l.name.span));
return Ok(());
}
diff --git a/src/front/wgsl/lower/mod.rs b/src/front/wgsl/lower/mod.rs
--- a/src/front/wgsl/lower/mod.rs
+++ b/src/front/wgsl/lower/mod.rs
@@ -1071,14 +1088,7 @@ impl<'source, 'temp> Lowerer<'source, 'temp> {
let mut value = self.expression(value, ctx.as_expression(block, &mut emitter))?;
if !expr.is_reference {
- let ty = if ctx.named_expressions.contains_key(&expr.handle) {
- InvalidAssignmentType::ImmutableBinding
- } else {
- match ctx.naga_expressions[expr.handle] {
- crate::Expression::Swizzle { .. } => InvalidAssignmentType::Swizzle,
- _ => InvalidAssignmentType::Other,
- }
- };
+ let ty = ctx.invalid_assignment_type(expr.handle);
return Err(Error::InvalidAssignment {
span: ctx.ast_expressions.get_span(target),
diff --git a/src/front/wgsl/parse/mod.rs b/src/front/wgsl/parse/mod.rs
--- a/src/front/wgsl/parse/mod.rs
+++ b/src/front/wgsl/parse/mod.rs
@@ -1661,14 +1661,17 @@ impl Parser {
let span = span.until(&peeked_span);
return Err(Error::InvalidBreakIf(span));
}
+ lexer.expect(Token::Separator(';'))?;
ast::StatementKind::Break
}
"continue" => {
let _ = lexer.next();
+ lexer.expect(Token::Separator(';'))?;
ast::StatementKind::Continue
}
"discard" => {
let _ = lexer.next();
+ lexer.expect(Token::Separator(';'))?;
ast::StatementKind::Kill
}
// assignment or a function call
diff --git a/src/front/wgsl/parse/mod.rs b/src/front/wgsl/parse/mod.rs
--- a/src/front/wgsl/parse/mod.rs
+++ b/src/front/wgsl/parse/mod.rs
@@ -1685,6 +1688,7 @@ impl Parser {
}
_ => {
self.assignment_statement(lexer, ctx.reborrow(), block)?;
+ lexer.expect(Token::Separator(';'))?;
self.pop_rule_span(lexer);
}
}
diff --git a/src/front/wgsl/parse/mod.rs b/src/front/wgsl/parse/mod.rs
--- a/src/front/wgsl/parse/mod.rs
+++ b/src/front/wgsl/parse/mod.rs
@@ -1776,7 +1780,7 @@ impl Parser {
ctx.local_table.push_scope();
- let _ = lexer.next();
+ lexer.expect(Token::Paren('{'))?;
let mut statements = ast::Block::default();
while !lexer.skip(Token::Paren('}')) {
self.statement(lexer, ctx.reborrow(), &mut statements)?;
| diff --git a/tests/wgsl-errors.rs b/tests/wgsl-errors.rs
--- a/tests/wgsl-errors.rs
+++ b/tests/wgsl-errors.rs
@@ -1624,13 +1624,56 @@ fn assign_to_let() {
}
",
r###"error: invalid left-hand side of assignment
- ┌─ wgsl:4:10
+ ┌─ wgsl:3:17
│
+3 │ let a = 10;
+ │ ^ this is an immutable binding
4 │ a = 20;
│ ^ cannot assign to this expression
│
- = note: 'a' is an immutable binding
- = note: consider declaring it with `var` instead of `let`
+ = note: consider declaring 'a' with `var` instead of `let`
+
+"###,
+ );
+
+ check(
+ "
+ fn f() {
+ let a = array(1, 2);
+ a[0] = 1;
+ }
+ ",
+ r###"error: invalid left-hand side of assignment
+ ┌─ wgsl:3:17
+ │
+3 │ let a = array(1, 2);
+ │ ^ this is an immutable binding
+4 │ a[0] = 1;
+ │ ^^^^ cannot assign to this expression
+ │
+ = note: consider declaring 'a' with `var` instead of `let`
+
+"###,
+ );
+
+ check(
+ "
+ struct S { a: i32 }
+
+ fn f() {
+ let a = S(10);
+ a.a = 20;
+ }
+ ",
+ r###"error: invalid left-hand side of assignment
+ ┌─ wgsl:5:17
+ │
+5 │ let a = S(10);
+ │ ^ this is an immutable binding
+6 │ a.a = 20;
+ │ ^^^ cannot assign to this expression
+ │
+ = note: consider declaring 'a' with `var` instead of `let`
"###,
);
| [wgsl-in] Invalid assignment diagnostic doesn't extend to derived expressions
Given this code:
```rs
fn main() {
let a = 1;
a = 2;
}
```
Naga errors with:
```rs
error: invalid left-hand side of assignment
┌─ wgsl:3:2
│
3 │ a = 2;
│ ^ cannot assign to this expression
│
= note: 'a' is an immutable binding
= note: consider declaring it with `var` instead of `let`
```
This lets us know exactly the issue and how to fix it. However, when given:
```rs
fn main() {
let a = array(1, 2);
a[0] = 2;
}
```
We error with:
```rs
error: invalid left-hand side of assignment
┌─ wgsl:3:2
│
3 │ a[0] = 2;
│ ^^^^ cannot assign to this expression
```
The error is the same as the above; however, we do not catch the fact that `a[0]` is derived from a `let`, and thus don't tell the user how to fix it.
When validating assignments, we should recursively check whether any of the arguments to a specific instruction are present in `ctx.named_expressions`.
| 2023-02-01T00:50:40 | 0.11 | da8e911d9d207a7571241e875f7ac066856e8b3f | [
"assign_to_let"
] | [
"arena::tests::append_unique",
"arena::tests::append_non_unique",
"arena::tests::fetch_or_append_non_unique",
"arena::tests::fetch_or_append_unique",
"back::msl::test_error_size",
"back::spv::layout::test_logical_layout_in_words",
"back::spv::layout::test_physical_layout_in_words",
"back::spv::writer:... | [] | [] | 73 | |
gfx-rs/naga | 2,056 | gfx-rs__naga-2056 | [
"2052"
] | 4f8db997b0b1e73fc1e2cac070485333a5b61925 | diff --git a/src/front/wgsl/lexer.rs b/src/front/wgsl/lexer.rs
--- a/src/front/wgsl/lexer.rs
+++ b/src/front/wgsl/lexer.rs
@@ -273,7 +273,7 @@ impl<'a> Lexer<'a> {
if next.0 == expected {
Ok(next.1)
} else {
- Err(Error::Unexpected(next, ExpectedToken::Token(expected)))
+ Err(Error::Unexpected(next.1, ExpectedToken::Token(expected)))
}
}
diff --git a/src/front/wgsl/lexer.rs b/src/front/wgsl/lexer.rs
--- a/src/front/wgsl/lexer.rs
+++ b/src/front/wgsl/lexer.rs
@@ -288,7 +288,7 @@ impl<'a> Lexer<'a> {
Ok(())
} else {
Err(Error::Unexpected(
- next,
+ next.1,
ExpectedToken::Token(Token::Paren(expected)),
))
}
diff --git a/src/front/wgsl/lexer.rs b/src/front/wgsl/lexer.rs
--- a/src/front/wgsl/lexer.rs
+++ b/src/front/wgsl/lexer.rs
@@ -314,7 +314,7 @@ impl<'a> Lexer<'a> {
Err(Error::ReservedIdentifierPrefix(span))
}
(Token::Word(word), span) => Ok((word, span)),
- other => Err(Error::Unexpected(other, ExpectedToken::Identifier)),
+ other => Err(Error::Unexpected(other.1, ExpectedToken::Identifier)),
}
}
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -71,6 +71,8 @@ pub enum ExpectedToken<'a> {
Constant,
/// Expected: constant, parenthesized expression, identifier
PrimaryExpression,
+ /// Expected: assignment, increment/decrement expression
+ Assignment,
/// Expected: '}', identifier
FieldName,
/// Expected: attribute for a type
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -95,9 +97,16 @@ pub enum NumberError {
UnimplementedF16,
}
+#[derive(Copy, Clone, Debug, PartialEq)]
+pub enum InvalidAssignmentType {
+ Other,
+ Swizzle,
+ ImmutableBinding,
+}
+
#[derive(Clone, Debug)]
pub enum Error<'a> {
- Unexpected(TokenSpan<'a>, ExpectedToken<'a>),
+ Unexpected(Span, ExpectedToken<'a>),
UnexpectedComponents(Span),
BadNumber(Span, NumberError),
/// A negative signed integer literal where both signed and unsigned,
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -151,6 +160,10 @@ pub enum Error<'a> {
Pointer(&'static str, Span),
NotPointer(Span),
NotReference(&'static str, Span),
+ InvalidAssignment {
+ span: Span,
+ ty: InvalidAssignmentType,
+ },
ReservedKeyword(Span),
Redefinition {
previous: Span,
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -162,7 +175,7 @@ pub enum Error<'a> {
impl<'a> Error<'a> {
fn as_parse_error(&self, source: &'a str) -> ParseError {
match *self {
- Error::Unexpected((_, ref unexpected_span), expected) => {
+ Error::Unexpected(ref unexpected_span, expected) => {
let expected_str = match expected {
ExpectedToken::Token(token) => {
match token {
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -195,6 +208,7 @@ impl<'a> Error<'a> {
ExpectedToken::Integer => "unsigned/signed integer literal".to_string(),
ExpectedToken::Constant => "constant".to_string(),
ExpectedToken::PrimaryExpression => "expression".to_string(),
+ ExpectedToken::Assignment => "assignment or increment/decrement".to_string(),
ExpectedToken::FieldName => "field name or a closing curly bracket to signify the end of the struct".to_string(),
ExpectedToken::TypeAttribute => "type attribute".to_string(),
ExpectedToken::Statement => "statement".to_string(),
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -439,6 +453,20 @@ impl<'a> Error<'a> {
labels: vec![(span.clone(), "expression is not a reference".into())],
notes: vec![],
},
+ Error::InvalidAssignment{ ref span, ty} => ParseError {
+ message: "invalid left-hand side of assignment".into(),
+ labels: vec![(span.clone(), "cannot assign to this expression".into())],
+ notes: match ty {
+ InvalidAssignmentType::Swizzle => vec![
+ "WGSL does not support assignments to swizzles".into(),
+ "consider assigning each component individually".into()
+ ],
+ InvalidAssignmentType::ImmutableBinding => vec![
+ format!("'{}' is an immutable binding", &source[span.clone()]), "consider declaring it with `var` instead of `let`".into()
+ ],
+ InvalidAssignmentType::Other => vec![],
+ },
+ },
Error::Pointer(what, ref span) => ParseError {
message: format!("{} must not be a pointer", what),
labels: vec![(span.clone(), "expression is a pointer".into())],
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -1409,7 +1437,7 @@ impl Parser {
Token::Number(Ok(Number::U32(num))) if uint => Ok(num as i32),
Token::Number(Ok(Number::I32(num))) if !uint => Ok(num),
Token::Number(Err(e)) => Err(Error::BadNumber(token_span.1, e)),
- _ => Err(Error::Unexpected(token_span, ExpectedToken::Integer)),
+ _ => Err(Error::Unexpected(token_span.1, ExpectedToken::Integer)),
}
}
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -1422,7 +1450,7 @@ impl Parser {
}
(Token::Number(Err(e)), span) => Err(Error::BadNumber(span, e)),
other => Err(Error::Unexpected(
- other,
+ other.1,
ExpectedToken::Number(NumberType::I32),
)),
}
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -1439,7 +1467,7 @@ impl Parser {
(Token::Number(Ok(Number::U32(num))), _) => Ok(num),
(Token::Number(Err(e)), span) => Err(Error::BadNumber(span, e)),
other => Err(Error::Unexpected(
- other,
+ other.1,
ExpectedToken::Number(NumberType::I32),
)),
}
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -2239,7 +2267,7 @@ impl Parser {
components,
}
}
- other => return Err(Error::Unexpected(other, ExpectedToken::Constant)),
+ other => return Err(Error::Unexpected(other.1, ExpectedToken::Constant)),
};
// Only set span if it's a named constant. Otherwise, the enclosing Expression should have
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -2330,7 +2358,7 @@ impl Parser {
}
}
}
- other => return Err(Error::Unexpected(other, ExpectedToken::PrimaryExpression)),
+ other => return Err(Error::Unexpected(other.1, ExpectedToken::PrimaryExpression)),
};
Ok(expr)
}
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -2831,7 +2859,7 @@ impl Parser {
while !lexer.skip(Token::Paren('}')) {
if !ready {
return Err(Error::Unexpected(
- lexer.next(),
+ lexer.next().1,
ExpectedToken::Token(Token::Separator(',')),
));
}
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -2863,7 +2891,7 @@ impl Parser {
let (name, span) = match lexer.next() {
(Token::Word(word), span) => (word, span),
- other => return Err(Error::Unexpected(other, ExpectedToken::FieldName)),
+ other => return Err(Error::Unexpected(other.1, ExpectedToken::FieldName)),
};
if crate::keywords::wgsl::RESERVED.contains(&name) {
return Err(Error::ReservedKeyword(span));
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -3276,7 +3304,7 @@ impl Parser {
if lexer.skip(Token::Attribute) {
let other = lexer.next();
- return Err(Error::Unexpected(other, ExpectedToken::TypeAttribute));
+ return Err(Error::Unexpected(other.1, ExpectedToken::TypeAttribute));
}
let (name, name_span) = lexer.next_ident_with_span()?;
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -3300,23 +3328,42 @@ impl Parser {
fn parse_assignment_statement<'a, 'out>(
&mut self,
lexer: &mut Lexer<'a>,
- mut context: ExpressionContext<'a, '_, 'out>,
+ mut context: StatementContext<'a, '_, 'out>,
+ block: &mut crate::Block,
+ emitter: &mut super::Emitter,
) -> Result<(), Error<'a>> {
use crate::BinaryOperator as Bo;
let span_start = lexer.start_byte_offset();
- context.emitter.start(context.expressions);
- let reference = self.parse_unary_expression(lexer, context.reborrow())?;
+ emitter.start(context.expressions);
+ let (reference, lhs_span) = self
+ .parse_general_expression_for_reference(lexer, context.as_expression(block, emitter))?;
+ let op = lexer.next();
// The left hand side of an assignment must be a reference.
- let lhs_span = span_start..lexer.end_byte_offset();
- if !reference.is_reference {
- return Err(Error::NotReference(
- "the left-hand side of an assignment",
- lhs_span,
- ));
+ if !matches!(
+ op.0,
+ Token::Operation('=')
+ | Token::AssignmentOperation(_)
+ | Token::IncrementOperation
+ | Token::DecrementOperation
+ ) {
+ return Err(Error::Unexpected(lhs_span, ExpectedToken::Assignment));
+ } else if !reference.is_reference {
+ let ty = if context.named_expressions.contains_key(&reference.handle) {
+ InvalidAssignmentType::ImmutableBinding
+ } else {
+ match *context.expressions.get_mut(reference.handle) {
+ crate::Expression::Swizzle { .. } => InvalidAssignmentType::Swizzle,
+ _ => InvalidAssignmentType::Other,
+ }
+ };
+
+ return Err(Error::InvalidAssignment { span: lhs_span, ty });
}
- let value = match lexer.next() {
+ let mut context = context.as_expression(block, emitter);
+
+ let value = match op {
(Token::Operation('='), _) => {
self.parse_general_expression(lexer, context.reborrow())?
}
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -3406,7 +3453,7 @@ impl Parser {
op_span.into(),
)
}
- other => return Err(Error::Unexpected(other, ExpectedToken::SwitchItem)),
+ other => return Err(Error::Unexpected(other.1, ExpectedToken::SwitchItem)),
};
let span_end = lexer.end_byte_offset();
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -3843,7 +3890,10 @@ impl Parser {
}
(Token::Paren('}'), _) => break,
other => {
- return Err(Error::Unexpected(other, ExpectedToken::SwitchItem))
+ return Err(Error::Unexpected(
+ other.1,
+ ExpectedToken::SwitchItem,
+ ))
}
}
}
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -3954,7 +4004,9 @@ impl Parser {
}
_ => self.parse_assignment_statement(
lexer,
- context.as_expression(&mut continuing, &mut emitter),
+ context.reborrow(),
+ &mut continuing,
+ &mut emitter,
)?,
}
lexer.expect(Token::Paren(')'))?;
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -4059,7 +4111,9 @@ impl Parser {
match context.symbol_table.lookup(ident) {
Some(_) => self.parse_assignment_statement(
lexer,
- context.as_expression(block, &mut emitter),
+ context,
+ block,
+ &mut emitter,
)?,
None => self.parse_function_statement(
lexer,
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -4078,7 +4132,7 @@ impl Parser {
}
_ => {
let mut emitter = super::Emitter::default();
- self.parse_assignment_statement(lexer, context.as_expression(block, &mut emitter))?;
+ self.parse_assignment_statement(lexer, context, block, &mut emitter)?;
self.pop_rule_span(lexer);
}
}
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -4262,7 +4316,7 @@ impl Parser {
while !lexer.skip(Token::Paren(')')) {
if !ready {
return Err(Error::Unexpected(
- lexer.next(),
+ lexer.next().1,
ExpectedToken::Token(Token::Separator(',')),
));
}
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -4391,7 +4445,7 @@ impl Parser {
(Token::Separator(','), _) if i != 2 => (),
other => {
return Err(Error::Unexpected(
- other,
+ other.1,
ExpectedToken::WorkgroupSizeSeparator,
))
}
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -4577,7 +4631,7 @@ impl Parser {
}
}
(Token::End, _) => return Ok(false),
- other => return Err(Error::Unexpected(other, ExpectedToken::GlobalItem)),
+ other => return Err(Error::Unexpected(other.1, ExpectedToken::GlobalItem)),
}
match binding {
| diff --git a/tests/wgsl-errors.rs b/tests/wgsl-errors.rs
--- a/tests/wgsl-errors.rs
+++ b/tests/wgsl-errors.rs
@@ -1597,3 +1597,83 @@ fn break_if_bad_condition() {
)
}
}
+
+#[test]
+fn swizzle_assignment() {
+ check(
+ "
+ fn f() {
+ var v = vec2(0);
+ v.xy = vec2(1);
+ }
+ ",
+ r###"error: invalid left-hand side of assignment
+ ┌─ wgsl:4:13
+ │
+4 │ v.xy = vec2(1);
+ │ ^^^^ cannot assign to this expression
+ │
+ = note: WGSL does not support assignments to swizzles
+ = note: consider assigning each component individually
+
+"###,
+ );
+}
+
+#[test]
+fn binary_statement() {
+ check(
+ "
+ fn f() {
+ 3 + 5;
+ }
+ ",
+ r###"error: expected assignment or increment/decrement, found '3 + 5'
+ ┌─ wgsl:3:13
+ │
+3 │ 3 + 5;
+ │ ^^^^^ expected assignment or increment/decrement
+
+"###,
+ );
+}
+
+#[test]
+fn assign_to_expr() {
+ check(
+ "
+ fn f() {
+ 3 + 5 = 10;
+ }
+ ",
+ r###"error: invalid left-hand side of assignment
+ ┌─ wgsl:3:13
+ │
+3 │ 3 + 5 = 10;
+ │ ^^^^^ cannot assign to this expression
+
+"###,
+ );
+}
+
+#[test]
+fn assign_to_let() {
+ check(
+ "
+ fn f() {
+ let a = 10;
+ a = 20;
+ }
+ ",
+ r###"error: invalid left-hand side of assignment
+ ┌─ wgsl:4:10
+ │
+4 │ a = 20;
+ │ ^ cannot assign to this expression
+ │
+ = note: 'a' is an immutable binding
+ = note: consider declaring it with `var` instead of `let`
+
+"###,
+ );
+}
| Assignment to swizzle produces unhelpful error message
Given the program:
```
fn f() {
var v = vec2(0);
v.xy = vec2(1);
}
```
Naga complains:
```
error: the left-hand side of an assignment must be a reference
┌─ assign-to-swizzle.wgsl:2:20
│
2 │ var v = vec2(0);
│ ╭───────────────────^
3 │ │ v.xy = vec2(1);
│ ╰───────^ expression is not a reference
Could not parse WGSL
```
This leads to questions in chat like:
> Hi. What's the equivalent WGSL code to this?
>
> p.xy = f(p.xy)
>
> I get this error and have no idea how to write this nicely...
The error message did not convey to the user that swizzle assignment is simply unsupported.
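The fix that grew out of this report classifies *why* a left-hand side is invalid (see `InvalidAssignmentType` in the patch above) and attaches targeted notes. A standalone sketch of that classification-to-notes mapping, with simplified stand-in types rather than Naga's real IR:

```rust
// Sketch of the diagnostic classification introduced by the fix.
// The enum mirrors the patch; everything else is a simplified stand-in.

#[derive(Copy, Clone, Debug, PartialEq)]
enum InvalidAssignmentType {
    Other,
    Swizzle,
    ImmutableBinding,
}

/// Diagnostic notes attached for each kind of invalid left-hand side,
/// mirroring the wording added in `Error::as_parse_error`.
fn notes(ty: InvalidAssignmentType) -> Vec<String> {
    match ty {
        InvalidAssignmentType::Swizzle => vec![
            "WGSL does not support assignments to swizzles".into(),
            "consider assigning each component individually".into(),
        ],
        InvalidAssignmentType::ImmutableBinding => vec![
            "consider declaring it with `var` instead of `let`".into(),
        ],
        InvalidAssignmentType::Other => vec![],
    }
}

fn main() {
    // `v.xy = vec2(1);` from the report is classified as a swizzle assignment,
    // so the user now sees both explanatory notes.
    let swizzle_notes = notes(InvalidAssignmentType::Swizzle);
    assert_eq!(swizzle_notes.len(), 2);
    println!("{}", swizzle_notes.join("\n"));
}
```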
We could bake in some detection for this case and add it as a note to the diagnostic; it would look something like this:
```
error: the left-hand side of an assignment must be a reference
┌─ test.wgsl:3:4
│
3 │ v.xy = vec2(1);
│ ^^^^ expression is not a reference
│
= You seem to be trying to assign to a swizzle
wgsl doesn't support assignments to swizzles
```
If we wanted something fancier, we could try adding suggested fixes like rustc has, but that would be more complicated
```
error: the left-hand side of an assignment must be a reference
┌─ test.wgsl:3:4
│
3 │ v.xy = vec2(1);
│ ^^^^ expression is not a reference
│
= You seem to be trying to assign to a swizzle
wgsl doesn't support assignments to swizzles
help: try assigning each component individually
┌─ test.wgsl:3:4
│
3 │ let a = vec2(1);
4 │ v.x = a.x;
5 │ v.y = a.y;
│ ^^^
│
``` | 2022-09-14T16:05:36 | 0.9 | 4f8db997b0b1e73fc1e2cac070485333a5b61925 | [
"assign_to_expr",
"assign_to_let",
"binary_statement",
"swizzle_assignment"
] | [
"arena::tests::append_non_unique",
"back::msl::test_error_size",
"arena::tests::fetch_or_append_unique",
"arena::tests::fetch_or_append_non_unique",
"arena::tests::append_unique",
"back::spv::layout::test_physical_layout_in_words",
"back::spv::layout::test_logical_layout_in_words",
"back::spv::writer:... | [] | [] | 74 |
gfx-rs/naga | 2,055 | gfx-rs__naga-2055 | [
"2053"
] | 1a99c1cf066de8d660904f1fb681f5647bc45205 | diff --git a/src/front/wgsl/lexer.rs b/src/front/wgsl/lexer.rs
--- a/src/front/wgsl/lexer.rs
+++ b/src/front/wgsl/lexer.rs
@@ -163,6 +163,8 @@ fn is_word_part(c: char) -> bool {
pub(super) struct Lexer<'a> {
input: &'a str,
pub(super) source: &'a str,
+ // The byte offset of the end of the last non-trivia token.
+ last_end_offset: usize,
}
impl<'a> Lexer<'a> {
diff --git a/src/front/wgsl/lexer.rs b/src/front/wgsl/lexer.rs
--- a/src/front/wgsl/lexer.rs
+++ b/src/front/wgsl/lexer.rs
@@ -170,6 +172,7 @@ impl<'a> Lexer<'a> {
Lexer {
input,
source: input,
+ last_end_offset: 0,
}
}
diff --git a/src/front/wgsl/lexer.rs b/src/front/wgsl/lexer.rs
--- a/src/front/wgsl/lexer.rs
+++ b/src/front/wgsl/lexer.rs
@@ -196,6 +199,22 @@ impl<'a> Lexer<'a> {
Ok((res, start..end))
}
+ pub(super) fn start_byte_offset(&mut self) -> usize {
+ loop {
+ // Eat all trivia because `next` doesn't eat trailing trivia.
+ let (token, rest) = consume_token(self.input, false);
+ if let Token::Trivia = token {
+ self.input = rest;
+ } else {
+ return self.current_byte_offset();
+ }
+ }
+ }
+
+ pub(super) const fn end_byte_offset(&self) -> usize {
+ self.last_end_offset
+ }
+
fn peek_token_and_rest(&mut self) -> (TokenSpan<'a>, &'a str) {
let mut cloned = self.clone();
let token = cloned.next();
diff --git a/src/front/wgsl/lexer.rs b/src/front/wgsl/lexer.rs
--- a/src/front/wgsl/lexer.rs
+++ b/src/front/wgsl/lexer.rs
@@ -203,12 +222,12 @@ impl<'a> Lexer<'a> {
(token, rest)
}
- pub(super) const fn current_byte_offset(&self) -> usize {
+ const fn current_byte_offset(&self) -> usize {
self.source.len() - self.input.len()
}
pub(super) const fn span_from(&self, offset: usize) -> Span {
- offset..self.current_byte_offset()
+ offset..self.end_byte_offset()
}
#[must_use]
diff --git a/src/front/wgsl/lexer.rs b/src/front/wgsl/lexer.rs
--- a/src/front/wgsl/lexer.rs
+++ b/src/front/wgsl/lexer.rs
@@ -219,7 +238,10 @@ impl<'a> Lexer<'a> {
self.input = rest;
match token {
Token::Trivia => start_byte_offset = self.current_byte_offset(),
- _ => return (token, start_byte_offset..self.current_byte_offset()),
+ _ => {
+ self.last_end_offset = self.current_byte_offset();
+ return (token, start_byte_offset..self.last_end_offset);
+ }
}
}
}
diff --git a/src/front/wgsl/lexer.rs b/src/front/wgsl/lexer.rs
--- a/src/front/wgsl/lexer.rs
+++ b/src/front/wgsl/lexer.rs
@@ -237,22 +259,6 @@ impl<'a> Lexer<'a> {
}
}
- /// Consumes [`Trivia`] tokens until another token is encountered, returns
- /// the byte offset after consuming the tokens.
- ///
- /// [`Trivia`]: Token::Trivia
- #[must_use]
- pub(super) fn consume_blankspace(&mut self) -> usize {
- loop {
- let (token, rest) = consume_token(self.input, false);
- if let Token::Trivia = token {
- self.input = rest;
- } else {
- return self.current_byte_offset();
- }
- }
- }
-
#[must_use]
pub(super) fn peek(&mut self) -> TokenSpan<'a> {
let (token, _) = self.peek_token_and_rest();
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -223,7 +223,7 @@ impl<'a> Error<'a> {
Error::BadNumber(ref bad_span, ref err) => ParseError {
message: format!(
"{}: `{}`",
- err,&source[bad_span.clone()],
+ err, &source[bad_span.clone()],
),
labels: vec![(bad_span.clone(), err.to_string().into())],
notes: vec![],
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -878,7 +878,7 @@ impl<'a> ExpressionContext<'a, '_, '_> {
ExpressionContext<'a, '_, '_>,
) -> Result<TypedExpression, Error<'a>>,
) -> Result<TypedExpression, Error<'a>> {
- let start = lexer.current_byte_offset() as u32;
+ let start = lexer.start_byte_offset() as u32;
let mut accumulator = parser(lexer, self.reborrow())?;
while let Some(op) = classifier(lexer.peek().0) {
let _ = lexer.next();
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -886,7 +886,7 @@ impl<'a> ExpressionContext<'a, '_, '_> {
let mut left = self.apply_load_rule(accumulator);
let unloaded_right = parser(lexer, self.reborrow())?;
let right = self.apply_load_rule(unloaded_right);
- let end = lexer.current_byte_offset() as u32;
+ let end = lexer.end_byte_offset() as u32;
left = self.expressions.append(
crate::Expression::Binary { op, left, right },
NagaSpan::new(start, end),
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -906,7 +906,7 @@ impl<'a> ExpressionContext<'a, '_, '_> {
ExpressionContext<'a, '_, '_>,
) -> Result<TypedExpression, Error<'a>>,
) -> Result<TypedExpression, Error<'a>> {
- let start = lexer.current_byte_offset() as u32;
+ let start = lexer.start_byte_offset() as u32;
let mut accumulator = parser(lexer, self.reborrow())?;
while let Some(op) = classifier(lexer.peek().0) {
let _ = lexer.next();
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -914,7 +914,7 @@ impl<'a> ExpressionContext<'a, '_, '_> {
let mut left = self.apply_load_rule(accumulator);
let unloaded_right = parser(lexer, self.reborrow())?;
let mut right = self.apply_load_rule(unloaded_right);
- let end = lexer.current_byte_offset() as u32;
+ let end = lexer.end_byte_offset() as u32;
self.binary_op_splat(op, &mut left, &mut right)?;
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -1389,8 +1389,8 @@ impl Parser {
self.layouter.clear();
}
- fn push_rule_span(&mut self, rule: Rule, lexer: &Lexer<'_>) {
- self.rules.push((rule, lexer.current_byte_offset()));
+ fn push_rule_span(&mut self, rule: Rule, lexer: &mut Lexer<'_>) {
+ self.rules.push((rule, lexer.start_byte_offset()));
}
fn pop_rule_span(&mut self, lexer: &Lexer<'_>) -> Span {
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -2589,7 +2589,7 @@ impl Parser {
lexer: &mut Lexer<'a>,
mut ctx: ExpressionContext<'a, '_, '_>,
) -> Result<TypedExpression, Error<'a>> {
- let start = lexer.current_byte_offset();
+ let start = lexer.start_byte_offset();
self.push_rule_span(Rule::SingularExpr, lexer);
let primary_expr = self.parse_primary_expression(lexer, ctx.reborrow())?;
let singular_expr = self.parse_postfix(start, lexer, ctx.reborrow(), primary_expr)?;
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -3249,7 +3249,7 @@ impl Parser {
None => {
match self.parse_type_decl_impl(lexer, attribute, name, type_arena, const_arena)? {
Some(inner) => {
- let span = name_span.start..lexer.current_byte_offset();
+ let span = name_span.start..lexer.end_byte_offset();
type_arena.insert(
crate::Type {
name: debug_name.map(|s| s.to_string()),
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -3304,11 +3304,11 @@ impl Parser {
) -> Result<(), Error<'a>> {
use crate::BinaryOperator as Bo;
- let span_start = lexer.consume_blankspace();
+ let span_start = lexer.start_byte_offset();
context.emitter.start(context.expressions);
let reference = self.parse_unary_expression(lexer, context.reborrow())?;
// The left hand side of an assignment must be a reference.
- let lhs_span = span_start..lexer.current_byte_offset();
+ let lhs_span = span_start..lexer.end_byte_offset();
if !reference.is_reference {
return Err(Error::NotReference(
"the left-hand side of an assignment",
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -3409,7 +3409,7 @@ impl Parser {
other => return Err(Error::Unexpected(other, ExpectedToken::SwitchItem)),
};
- let span_end = lexer.current_byte_offset();
+ let span_end = lexer.end_byte_offset();
context
.block
.extend(context.emitter.finish(context.expressions));
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -3732,7 +3732,7 @@ impl Parser {
let accept = self.parse_block(lexer, context.reborrow(), false)?;
let mut elsif_stack = Vec::new();
- let mut elseif_span_start = lexer.current_byte_offset();
+ let mut elseif_span_start = lexer.start_byte_offset();
let mut reject = loop {
if !lexer.skip(Token::Word("else")) {
break crate::Block::new();
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -3759,10 +3759,10 @@ impl Parser {
other_emit,
other_block,
));
- elseif_span_start = lexer.current_byte_offset();
+ elseif_span_start = lexer.start_byte_offset();
};
- let span_end = lexer.current_byte_offset();
+ let span_end = lexer.end_byte_offset();
// reverse-fold the else-if blocks
//Note: we may consider uplifting this to the IR
for (other_span_start, other_cond, other_emit, other_block) in
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -4427,7 +4427,7 @@ impl Parser {
}
// read items
- let start = lexer.current_byte_offset();
+ let start = lexer.start_byte_offset();
match lexer.next() {
(Token::Separator(';'), _) => {}
(Token::Word("struct"), _) => {
| diff --git a/tests/wgsl-errors.rs b/tests/wgsl-errors.rs
--- a/tests/wgsl-errors.rs
+++ b/tests/wgsl-errors.rs
@@ -640,11 +640,11 @@ fn reserved_keyword() {
r#"
var bool: bool = true;
"#,
- r###"error: name ` bool: bool = true;` is a reserved keyword
- ┌─ wgsl:2:16
+ r###"error: name `bool: bool = true;` is a reserved keyword
+ ┌─ wgsl:2:17
│
2 │ var bool: bool = true;
- │ ^^^^^^^^^^^^^^^^^^^ definition of ` bool: bool = true;`
+ │ ^^^^^^^^^^^^^^^^^^ definition of `bool: bool = true;`
"###,
);
diff --git a/tests/wgsl-errors.rs b/tests/wgsl-errors.rs
--- a/tests/wgsl-errors.rs
+++ b/tests/wgsl-errors.rs
@@ -765,13 +765,13 @@ fn module_scope_identifier_redefinition() {
var foo: bool = true;
var foo: bool = true;
"#,
- r###"error: redefinition of ` foo: bool = true;`
- ┌─ wgsl:2:16
+ r###"error: redefinition of `foo: bool = true;`
+ ┌─ wgsl:2:17
│
2 │ var foo: bool = true;
- │ ^^^^^^^^^^^^^^^^^^ previous definition of ` foo: bool = true;`
+ │ ^^^^^^^^^^^^^^^^^ previous definition of `foo: bool = true;`
3 │ var foo: bool = true;
- │ ^^^^^^^^^^^^^^^^^^ redefinition of ` foo: bool = true;`
+ │ ^^^^^^^^^^^^^^^^^ redefinition of `foo: bool = true;`
"###,
);
diff --git a/tests/wgsl-errors.rs b/tests/wgsl-errors.rs
--- a/tests/wgsl-errors.rs
+++ b/tests/wgsl-errors.rs
@@ -783,10 +783,10 @@ fn module_scope_identifier_redefinition() {
let foo: bool = true;
"#,
r###"error: redefinition of `foo`
- ┌─ wgsl:2:16
+ ┌─ wgsl:2:17
│
2 │ var foo: bool = true;
- │ ^^^^^^^^^^^^^^^^^^ previous definition of ` foo: bool = true;`
+ │ ^^^^^^^^^^^^^^^^^ previous definition of `foo: bool = true;`
3 │ let foo: bool = true;
│ ^^^ redefinition of `foo`
| Spans on expressions are bad
Given the input:
```
fn f() {
var v = vec2(0);
v.xy = vec2(1);
}
```
Naga produces the error message:
```
error: the left-hand side of an assignment must be a reference
┌─ assign-to-swizzle.wgsl:2:20
│
2 │ var v = vec2(0);
│ ╭───────────────────^
3 │ │ v.xy = vec2(1);
│ ╰───────^ expression is not a reference
Could not parse WGSL
```
That span is terrible. It should simply refer to the `v.xy`, or ideally just the `.xy`.
The quality of the error message itself is addressed in #2052.
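The patch in this row fixes the span by recording the start offset only after skipping leading trivia (`start_byte_offset`) and by ending spans at the last real token (`end_byte_offset`). A self-contained sketch of that idea on a plain string, not Naga's actual lexer:

```rust
// Sketch of the span fix: a span should begin after any leading trivia
// (whitespace/comments) and end at the end of the token itself, so the
// diagnostic underlines `v.xy` rather than the blank space before it.

/// Returns the (start, end) byte range of the first whitespace-delimited
/// token in `source`, excluding the leading trivia that precedes it.
fn token_span(source: &str) -> (usize, usize) {
    // Skip leading trivia: the start offset is where the trimmed text begins.
    let start = source.len() - source.trim_start().len();
    let rest = &source[start..];
    // The span ends at the end of the token, not at the next token's start.
    let len = rest.find(char::is_whitespace).unwrap_or(rest.len());
    (start, start + len)
}

fn main() {
    let line = "    v.xy = vec2(1);";
    let (start, end) = token_span(line);
    // The span now covers exactly `v.xy`.
    assert_eq!(&line[start..end], "v.xy");
    println!("span: {}..{}", start, end);
}
```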
| 2022-09-14T02:11:09 | 0.9 | 4f8db997b0b1e73fc1e2cac070485333a5b61925 | [
"module_scope_identifier_redefinition",
"reserved_keyword"
] | [
"arena::tests::append_non_unique",
"arena::tests::append_unique",
"arena::tests::fetch_or_append_non_unique",
"arena::tests::fetch_or_append_unique",
"back::msl::test_error_size",
"back::spv::layout::test_physical_layout_in_words",
"back::spv::writer::test_write_physical_layout",
"back::spv::layout::t... | [] | [] | 75 | |
gfx-rs/naga | 2,024 | gfx-rs__naga-2024 | [
"2021"
] | b209d911681c4ef563f7d9048623667743e6248f | diff --git a/src/front/mod.rs b/src/front/mod.rs
--- a/src/front/mod.rs
+++ b/src/front/mod.rs
@@ -14,6 +14,7 @@ pub mod wgsl;
use crate::{
arena::{Arena, Handle, UniqueArena},
proc::{ResolveContext, ResolveError, TypeResolution},
+ FastHashMap,
};
use std::ops;
diff --git a/src/front/mod.rs b/src/front/mod.rs
--- a/src/front/mod.rs
+++ b/src/front/mod.rs
@@ -133,3 +134,157 @@ impl ops::Index<Handle<crate::Expression>> for Typifier {
&self.resolutions[handle.index()]
}
}
+
+/// Type representing a lexical scope, associating a name to a single variable
+///
+/// The scope is generic over the variable representation and name representation
+/// in order to give the frontends greater flexibility in how they might
+/// represent them.
+type Scope<Name, Var> = FastHashMap<Name, Var>;
+
+/// Structure responsible for managing variable lookups and keeping track of
+/// lexical scopes
+///
+/// The symbol table is generic over the variable representation and its name
+/// to give the frontends greater flexibility in how they might represent them.
+///
+/// ```
+/// use naga::front::SymbolTable;
+///
+/// // Create a new symbol table with `u32`s representing the variable
+/// let mut symbol_table: SymbolTable<&str, u32> = SymbolTable::default();
+///
+/// // Add two variables named `var1` and `var2` with 0 and 2 respectively
+/// symbol_table.add("var1", 0);
+/// symbol_table.add("var2", 2);
+///
+/// // Check that `var1` exists and is `0`
+/// assert_eq!(symbol_table.lookup("var1"), Some(&0));
+///
+/// // Push a new scope and add a variable to it named `var1` shadowing the
+/// // variable of our previous scope
+/// symbol_table.push_scope();
+/// symbol_table.add("var1", 1);
+///
+/// // Check that `var1` now points to the new value of `1` and `var2` still
+/// // exists with its value of `2`
+/// assert_eq!(symbol_table.lookup("var1"), Some(&1));
+/// assert_eq!(symbol_table.lookup("var2"), Some(&2));
+///
+/// // Pop the scope
+/// symbol_table.pop_scope();
+///
+/// // Check that `var1` now refers to our initial variable with value `0`
+/// assert_eq!(symbol_table.lookup("var1"), Some(&0));
+/// ```
+///
+/// Scopes are ordered as a LIFO stack so a variable defined in a later scope
+/// with the same name as another variable defined in an earlier scope will take
+/// precedence in the lookup. Scopes can be added with [`push_scope`] and
+/// removed with [`pop_scope`].
+///
+/// A root scope is added when the symbol table is created and must always be
+/// present. Trying to pop it will result in a panic.
+///
+/// Variables can be added with [`add`] and looked up with [`lookup`]. Adding a
+/// variable will do so in the currently active scope and as mentioned
+/// previously a lookup will search from the current scope to the root scope.
+///
+/// [`push_scope`]: Self::push_scope
+/// [`pop_scope`]: Self::pop_scope
+/// [`add`]: Self::add
+/// [`lookup`]: Self::lookup
+pub struct SymbolTable<Name, Var> {
+ /// Stack of lexical scopes. Not all scopes are active; see [`cursor`].
+ ///
+ /// [`cursor`]: Self::cursor
+ scopes: Vec<Scope<Name, Var>>,
+ /// Limit of the [`scopes`] stack (exclusive). By using a separate value for
+ /// the stack length instead of `Vec`'s own internal length, the scopes can
+ /// be reused to cache memory allocations.
+ ///
+ /// [`scopes`]: Self::scopes
+ cursor: usize,
+}
+
+impl<Name, Var> SymbolTable<Name, Var> {
+ /// Adds a new lexical scope.
+ ///
+ /// All variables declared after this point will be added to this scope
+ /// until another scope is pushed or [`pop_scope`] is called, causing this
+ /// scope to be removed along with all variables added to it.
+ ///
+ /// [`pop_scope`]: Self::pop_scope
+ pub fn push_scope(&mut self) {
+ // If the cursor is equal to the scope's stack length then we need to
+ // push another empty scope. Otherwise we can reuse the already existing
+ // scope.
+ if self.scopes.len() == self.cursor {
+ self.scopes.push(FastHashMap::default())
+ } else {
+ self.scopes[self.cursor].clear();
+ }
+
+ self.cursor += 1;
+ }
+
+ /// Removes the current lexical scope and all its variables
+ ///
+ /// # Panics
+ /// - If the current lexical scope is the root scope
+ pub fn pop_scope(&mut self) {
+ // Despite the method title, the variables are only deleted when the
+ // scope is reused. This is because while a clear is inevitable if the
+ // scope needs to be reused, there are cases where the scope might be
+ // popped and not reused, i.e. if another scope with the same nesting
+ // level is never pushed again.
+ assert!(self.cursor != 1, "Tried to pop the root scope");
+
+ self.cursor -= 1;
+ }
+}
+
+impl<Name, Var> SymbolTable<Name, Var>
+where
+ Name: std::hash::Hash + Eq,
+{
+ /// Perform a lookup for a variable named `name`.
+ ///
+ /// As stated in the struct level documentation the lookup will proceed from
+ /// the current scope to the root scope, returning `Some` when a variable is
+ /// found or `None` if there doesn't exist a variable with `name` in any
+ /// scope.
+ pub fn lookup<Q: ?Sized>(&mut self, name: &Q) -> Option<&Var>
+ where
+ Name: std::borrow::Borrow<Q>,
+ Q: std::hash::Hash + Eq,
+ {
+ // Iterate backwards through the scopes and try to find the variable
+ for scope in self.scopes[..self.cursor].iter().rev() {
+ if let Some(var) = scope.get(name) {
+ return Some(var);
+ }
+ }
+
+ None
+ }
+
+ /// Adds a new variable to the current scope.
+ ///
+ /// Returns the previous variable with the same name in this scope if it
+ /// exists, so that the frontend might handle it in case variable shadowing
+ /// is disallowed.
+ pub fn add(&mut self, name: Name, var: Var) -> Option<Var> {
+ self.scopes[self.cursor - 1].insert(name, var)
+ }
+}
+
+impl<Name, Var> Default for SymbolTable<Name, Var> {
+ /// Constructs a new symbol table with a root scope
+ fn default() -> Self {
+ Self {
+ scopes: vec![FastHashMap::default()],
+ cursor: 1,
+ }
+ }
+}
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -749,7 +749,7 @@ impl<'a> StringValueLookup<'a> for FastHashMap<&'a str, TypedExpression> {
}
struct StatementContext<'input, 'temp, 'out> {
- lookup_ident: &'temp mut FastHashMap<&'input str, TypedExpression>,
+ symbol_table: &'temp mut super::SymbolTable<&'input str, TypedExpression>,
typifier: &'temp mut super::Typifier,
variables: &'out mut Arena<crate::LocalVariable>,
expressions: &'out mut Arena<crate::Expression>,
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -764,7 +764,7 @@ struct StatementContext<'input, 'temp, 'out> {
impl<'a, 'temp> StatementContext<'a, 'temp, '_> {
fn reborrow(&mut self) -> StatementContext<'a, '_, '_> {
StatementContext {
- lookup_ident: self.lookup_ident,
+ symbol_table: self.symbol_table,
typifier: self.typifier,
variables: self.variables,
expressions: self.expressions,
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -786,7 +786,7 @@ impl<'a, 'temp> StatementContext<'a, 'temp, '_> {
'temp: 't,
{
ExpressionContext {
- lookup_ident: self.lookup_ident,
+ symbol_table: self.symbol_table,
typifier: self.typifier,
expressions: self.expressions,
types: self.types,
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -807,7 +807,7 @@ struct SamplingContext {
}
struct ExpressionContext<'input, 'temp, 'out> {
- lookup_ident: &'temp FastHashMap<&'input str, TypedExpression>,
+ symbol_table: &'temp mut super::SymbolTable<&'input str, TypedExpression>,
typifier: &'temp mut super::Typifier,
expressions: &'out mut Arena<crate::Expression>,
types: &'out mut UniqueArena<crate::Type>,
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -823,7 +823,7 @@ struct ExpressionContext<'input, 'temp, 'out> {
impl<'a> ExpressionContext<'a, '_, '_> {
fn reborrow(&mut self) -> ExpressionContext<'a, '_, '_> {
ExpressionContext {
- lookup_ident: self.lookup_ident,
+ symbol_table: self.symbol_table,
typifier: self.typifier,
expressions: self.expressions,
types: self.types,
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -2286,7 +2286,7 @@ impl Parser {
)
}
(Token::Word(word), span) => {
- if let Some(definition) = ctx.lookup_ident.get(word) {
+ if let Some(definition) = ctx.symbol_table.lookup(word) {
let _ = lexer.next();
self.pop_scope(lexer);
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -3432,6 +3432,9 @@ impl Parser {
mut context: StatementContext<'a, '_, 'out>,
) -> Result<(bool, crate::Block), Error<'a>> {
let mut body = crate::Block::new();
+ // Push a new lexical scope for the switch case body
+ context.symbol_table.push_scope();
+
lexer.expect(Token::Paren('{'))?;
let fall_through = loop {
// default statements
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -3445,6 +3448,8 @@ impl Parser {
}
self.parse_statement(lexer, context.reborrow(), &mut body, false)?;
};
+ // Pop the switch case body lexical scope
+ context.symbol_table.pop_scope();
Ok((fall_through, body))
}
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -3465,6 +3470,9 @@ impl Parser {
}
(Token::Paren('{'), _) => {
self.push_scope(Scope::Block, lexer);
+ // Push a new lexical scope for the block statement
+ context.symbol_table.push_scope();
+
let _ = lexer.next();
let mut statements = crate::Block::new();
while !lexer.skip(Token::Paren('}')) {
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -3475,6 +3483,9 @@ impl Parser {
is_uniform_control_flow,
)?;
}
+ // Pop the block statement lexical scope
+ context.symbol_table.pop_scope();
+
self.pop_scope(lexer);
let span = NagaSpan::from(self.pop_scope(lexer));
block.push(crate::Statement::Block(statements), span);
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -3539,7 +3550,7 @@ impl Parser {
}
}
block.extend(emitter.finish(context.expressions));
- context.lookup_ident.insert(
+ context.symbol_table.add(
name,
TypedExpression {
handle: expr_id,
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -3655,7 +3666,7 @@ impl Parser {
let expr_id = context
.expressions
.append(crate::Expression::LocalVariable(var_id), Default::default());
- context.lookup_ident.insert(
+ context.symbol_table.add(
name,
TypedExpression {
handle: expr_id,
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -3843,10 +3854,14 @@ impl Parser {
},
NagaSpan::from(span),
);
+ // Push a lexical scope for the while loop body
+ context.symbol_table.push_scope();
while !lexer.skip(Token::Paren('}')) {
self.parse_statement(lexer, context.reborrow(), &mut body, false)?;
}
+ // Pop the while loop body lexical scope
+ context.symbol_table.pop_scope();
Some(crate::Statement::Loop {
body,
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -3857,6 +3872,9 @@ impl Parser {
"for" => {
let _ = lexer.next();
lexer.expect(Token::Paren('('))?;
+ // Push a lexical scope for the for loop
+ context.symbol_table.push_scope();
+
if !lexer.skip(Token::Separator(';')) {
let num_statements = block.len();
let (_, span) = lexer.capture_span(|lexer| {
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -3904,7 +3922,9 @@ impl Parser {
let mut continuing = crate::Block::new();
if !lexer.skip(Token::Paren(')')) {
match lexer.peek().0 {
- Token::Word(ident) if context.lookup_ident.get(ident).is_none() => {
+ Token::Word(ident)
+ if context.symbol_table.lookup(ident).is_none() =>
+ {
self.parse_function_statement(
lexer,
ident,
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -3923,6 +3943,8 @@ impl Parser {
while !lexer.skip(Token::Paren('}')) {
self.parse_statement(lexer, context.reborrow(), &mut body, false)?;
}
+ // Pop the for loop lexical scope
+ context.symbol_table.pop_scope();
Some(crate::Statement::Loop {
body,
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -4013,7 +4035,7 @@ impl Parser {
}
// assignment or a function call
ident => {
- match context.lookup_ident.get(ident) {
+ match context.symbol_table.lookup(ident) {
Some(_) => self.parse_assignment_statement(
lexer,
context.as_expression(block, &mut emitter),
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -4052,6 +4074,10 @@ impl Parser {
let mut body = crate::Block::new();
let mut continuing = crate::Block::new();
let mut break_if = None;
+
+ // Push a lexical scope for the loop body
+ context.symbol_table.push_scope();
+
lexer.expect(Token::Paren('{'))?;
loop {
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -4113,6 +4139,9 @@ impl Parser {
self.parse_statement(lexer, context.reborrow(), &mut body, false)?;
}
+ // Pop the loop body lexical scope
+ context.symbol_table.pop_scope();
+
Ok(crate::Statement::Loop {
body,
continuing,
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -4127,6 +4156,9 @@ impl Parser {
is_uniform_control_flow: bool,
) -> Result<crate::Block, Error<'a>> {
self.push_scope(Scope::Block, lexer);
+ // Push a lexical scope for the block
+ context.symbol_table.push_scope();
+
lexer.expect(Token::Paren('{'))?;
let mut block = crate::Block::new();
while !lexer.skip(Token::Paren('}')) {
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -4137,6 +4169,9 @@ impl Parser {
is_uniform_control_flow,
)?;
}
+ // Pop the block lexical scope
+ context.symbol_table.pop_scope();
+
self.pop_scope(lexer);
Ok(block)
}
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -4165,7 +4200,7 @@ impl Parser {
) -> Result<(crate::Function, &'a str), Error<'a>> {
self.push_scope(Scope::FunctionDecl, lexer);
// read function name
- let mut lookup_ident = FastHashMap::default();
+ let mut symbol_table = super::SymbolTable::default();
let (fun_name, span) = lexer.next_ident_with_span()?;
if crate::keywords::wgsl::RESERVED.contains(&fun_name) {
return Err(Error::ReservedKeyword(span));
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -4191,7 +4226,7 @@ impl Parser {
_ => unreachable!(),
};
let expression = expressions.append(expression.clone(), span);
- lookup_ident.insert(
+ symbol_table.add(
name,
TypedExpression {
handle: expression,
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -4221,7 +4256,7 @@ impl Parser {
crate::Expression::FunctionArgument(param_index),
NagaSpan::from(param_name_span),
);
- lookup_ident.insert(
+ symbol_table.add(
param_name,
TypedExpression {
handle: expression,
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -4266,7 +4301,7 @@ impl Parser {
fun.body = self.parse_block(
lexer,
StatementContext {
- lookup_ident: &mut lookup_ident,
+ symbol_table: &mut symbol_table,
typifier: &mut typifier,
variables: &mut fun.local_variables,
expressions: &mut fun.expressions,
| diff --git /dev/null b/tests/in/lexical-scopes.wgsl
new file mode 100644
--- /dev/null
+++ b/tests/in/lexical-scopes.wgsl
@@ -0,0 +1,58 @@
+fn blockLexicalScope(a: bool) {
+ let a = 1.0;
+ {
+ let a = 2;
+ {
+ let a = true;
+ }
+ let test = a == 3;
+ }
+ let test = a == 2.0;
+}
+
+fn ifLexicalScope(a: bool) {
+ let a = 1.0;
+ if (a == 1.0) {
+ let a = true;
+ }
+ let test = a == 2.0;
+}
+
+
+fn loopLexicalScope(a: bool) {
+ let a = 1.0;
+ loop {
+ let a = true;
+ }
+ let test = a == 2.0;
+}
+
+fn forLexicalScope(a: f32) {
+ let a = false;
+ for (var a = 0; a < 1; a++) {
+ let a = 3.0;
+ }
+ let test = a == true;
+}
+
+fn whileLexicalScope(a: i32) {
+ while (a > 2) {
+ let a = false;
+ }
+ let test = a == 1;
+}
+
+fn switchLexicalScope(a: i32) {
+ switch (a) {
+ case 0 {
+ let a = false;
+ }
+ case 1 {
+ let a = 2.0;
+ }
+ default {
+ let a = true;
+ }
+ }
+ let test = a == 2;
+}
diff --git /dev/null b/tests/out/wgsl/lexical-scopes.wgsl
new file mode 100644
--- /dev/null
+++ b/tests/out/wgsl/lexical-scopes.wgsl
@@ -0,0 +1,60 @@
+fn blockLexicalScope(a: bool) {
+ {
+ {
+ }
+ let test = (2 == 3);
+ }
+ let test_1 = (1.0 == 2.0);
+}
+
+fn ifLexicalScope(a_1: bool) {
+ if (1.0 == 1.0) {
+ }
+ let test_2 = (1.0 == 2.0);
+}
+
+fn loopLexicalScope(a_2: bool) {
+ loop {
+ }
+ let test_3 = (1.0 == 2.0);
+}
+
+fn forLexicalScope(a_3: f32) {
+ var a_4: i32 = 0;
+
+ loop {
+ let _e4 = a_4;
+ if (_e4 < 1) {
+ } else {
+ break;
+ }
+ continuing {
+ let _e7 = a_4;
+ a_4 = (_e7 + 1);
+ }
+ }
+ let test_4 = (false == true);
+}
+
+fn whileLexicalScope(a_5: i32) {
+ loop {
+ if (a_5 > 2) {
+ } else {
+ break;
+ }
+ }
+ let test_5 = (a_5 == 1);
+}
+
+fn switchLexicalScope(a_6: i32) {
+ switch a_6 {
+ case 0: {
+ }
+ case 1: {
+ }
+ default: {
+ }
+ }
+ let test_6 = (a_6 == 2);
+}
+
diff --git a/tests/snapshots.rs b/tests/snapshots.rs
--- a/tests/snapshots.rs
+++ b/tests/snapshots.rs
@@ -543,6 +543,7 @@ fn convert_wgsl() {
"break-if",
Targets::WGSL | Targets::GLSL | Targets::SPIRV | Targets::HLSL | Targets::METAL,
),
+ ("lexical-scopes", Targets::WGSL),
];
for &(name, targets) in inputs.iter() {
| [wgsl-in] let declarations from different scopes merged?
Shadowed let declarations can get incorrectly hoisted out of conditional scopes.
See example WGSL:
```
@vertex
fn main(@location(0) cond: f32) -> @location(0) f32 {
let value = 1.;
if cond > 0. {
let value = 2.;
return 3.;
}
return value;
}
```
When the if condition evaluates to false, this shader returns 2 instead of the expected 1.
Same shader translated to GLSL by Naga:
```
#version 310 es
precision highp float;
precision highp int;
layout(location = 0) in float _p2vs_location0;
layout(location = 0) smooth out float _vs2fs_location0;
void main() {
float cond = _p2vs_location0;
if ((cond > 0.0)) {
_vs2fs_location0 = 3.0;
gl_Position.yz = vec2(-gl_Position.y, gl_Position.z * 2.0 - gl_Position.w);
return;
}
_vs2fs_location0 = 2.0;
gl_Position.yz = vec2(-gl_Position.y, gl_Position.z * 2.0 - gl_Position.w);
return;
}
```
Tested this happens on 0.9 and HEAD.
| 2022-08-07T03:59:14 | 0.9 | 4f8db997b0b1e73fc1e2cac070485333a5b61925 | [
"convert_wgsl"
] | [
"arena::tests::append_non_unique",
"arena::tests::append_unique",
"arena::tests::fetch_or_append_unique",
"back::msl::test_error_size",
"arena::tests::fetch_or_append_non_unique",
"back::spv::layout::test_physical_layout_in_words",
"back::spv::layout::test_logical_layout_in_words",
"back::spv::writer:... | [] | [] | 76 | |
gfx-rs/naga | 2,005 | gfx-rs__naga-2005 | [
"1782"
] | e7ddd3564c214d8acf554a03915d9bb179e0b772 | diff --git a/Makefile b/Makefile
--- a/Makefile
+++ b/Makefile
@@ -69,7 +69,7 @@ validate-wgsl: $(SNAPSHOTS_BASE_OUT)/wgsl/*.wgsl
cargo run $${file}; \
done
-validate-hlsl-dxc: SHELL:=/bin/bash # required because config files uses arrays
+validate-hlsl-dxc: SHELL:=/usr/bin/env bash # required because config files use arrays
validate-hlsl-dxc: $(SNAPSHOTS_BASE_OUT)/hlsl/*.hlsl
@set -e && for file in $^ ; do \
DXC_PARAMS="-Wno-parentheses-equality -Zi -Qembed_debug -Od"; \
diff --git a/Makefile b/Makefile
--- a/Makefile
+++ b/Makefile
@@ -94,7 +94,7 @@ validate-hlsl-dxc: $(SNAPSHOTS_BASE_OUT)/hlsl/*.hlsl
echo "======================"; \
done
-validate-hlsl-fxc: SHELL:=/bin/bash # required because config files uses arrays
+validate-hlsl-fxc: SHELL:=/usr/bin/env bash # required because config files use arrays
validate-hlsl-fxc: $(SNAPSHOTS_BASE_OUT)/hlsl/*.hlsl
@set -e && for file in $^ ; do \
FXC_PARAMS="-Zi -Od"; \
diff --git a/src/back/hlsl/mod.rs b/src/back/hlsl/mod.rs
--- a/src/back/hlsl/mod.rs
+++ b/src/back/hlsl/mod.rs
@@ -189,6 +189,8 @@ pub struct Options {
/// Add special constants to `SV_VertexIndex` and `SV_InstanceIndex`,
/// to make them work like in Vulkan/Metal, with help of the host.
pub special_constants_binding: Option<BindTarget>,
+ /// Bind target of the push constant buffer
+ pub push_constants_target: Option<BindTarget>,
}
impl Default for Options {
diff --git a/src/back/hlsl/mod.rs b/src/back/hlsl/mod.rs
--- a/src/back/hlsl/mod.rs
+++ b/src/back/hlsl/mod.rs
@@ -198,6 +200,7 @@ impl Default for Options {
binding_map: BindingMap::default(),
fake_missing_bindings: true,
special_constants_binding: None,
+ push_constants_target: None,
}
}
}
diff --git a/src/back/hlsl/writer.rs b/src/back/hlsl/writer.rs
--- a/src/back/hlsl/writer.rs
+++ b/src/back/hlsl/writer.rs
@@ -623,12 +623,45 @@ impl<'a, W: fmt::Write> super::Writer<'a, W> {
self.write_type(module, global.ty)?;
register
}
- crate::AddressSpace::PushConstant => unimplemented!("Push constants"),
+ crate::AddressSpace::PushConstant => {
+ // The type of the push constants will be wrapped in `ConstantBuffer`
+ write!(self.out, "ConstantBuffer<")?;
+ "b"
+ }
};
+ // If the global is a push constant write the type now because it will be a
+ // generic argument to `ConstantBuffer`
+ if global.space == crate::AddressSpace::PushConstant {
+ self.write_global_type(module, global.ty)?;
+
+ // need to write the array size if the type was emitted with `write_type`
+ if let TypeInner::Array { base, size, .. } = module.types[global.ty].inner {
+ self.write_array_size(module, base, size)?;
+ }
+
+ // Close the angled brackets for the generic argument
+ write!(self.out, ">")?;
+ }
+
let name = &self.names[&NameKey::GlobalVariable(handle)];
write!(self.out, " {}", name)?;
+ // Push constants need to be assigned a binding explicitly by the consumer
+ // since naga has no way to know the binding from the shader alone
+ if global.space == crate::AddressSpace::PushConstant {
+ let target = self
+ .options
+ .push_constants_target
+ .as_ref()
+ .expect("No bind target was defined for the push constants block");
+ write!(self.out, ": register(b{}", target.register)?;
+ if target.space != 0 {
+ write!(self.out, ", space{}", target.space)?;
+ }
+ write!(self.out, ")")?;
+ }
+
if let Some(ref binding) = global.binding {
// this was already resolved earlier when we started evaluating an entry point.
let bt = self.options.resolve_resource_binding(binding).unwrap();
diff --git a/src/back/hlsl/writer.rs b/src/back/hlsl/writer.rs
--- a/src/back/hlsl/writer.rs
+++ b/src/back/hlsl/writer.rs
@@ -665,34 +698,13 @@ impl<'a, W: fmt::Write> super::Writer<'a, W> {
if global.space == crate::AddressSpace::Uniform {
write!(self.out, " {{ ")?;
- let matrix_data = get_inner_matrix_data(module, global.ty);
-
- // We treat matrices of the form `matCx2` as a sequence of C `vec2`s.
- // See the module-level block comment in mod.rs for details.
- if let Some(MatrixType {
- columns,
- rows: crate::VectorSize::Bi,
- width: 4,
- }) = matrix_data
- {
- write!(
- self.out,
- "__mat{}x2 {}",
- columns as u8,
- &self.names[&NameKey::GlobalVariable(handle)]
- )?;
- } else {
- // Even though Naga IR matrices are column-major, we must describe
- // matrices passed from the CPU as being in row-major order.
- // See the module-level block comment in mod.rs for details.
- if matrix_data.is_some() {
- write!(self.out, "row_major ")?;
- }
+ self.write_global_type(module, global.ty)?;
- self.write_type(module, global.ty)?;
- let sub_name = &self.names[&NameKey::GlobalVariable(handle)];
- write!(self.out, " {}", sub_name)?;
- }
+ write!(
+ self.out,
+ " {}",
+ &self.names[&NameKey::GlobalVariable(handle)]
+ )?;
// need to write the array size if the type was emitted with `write_type`
if let TypeInner::Array { base, size, .. } = module.types[global.ty].inner {
diff --git a/src/back/hlsl/writer.rs b/src/back/hlsl/writer.rs
--- a/src/back/hlsl/writer.rs
+++ b/src/back/hlsl/writer.rs
@@ -829,27 +841,7 @@ impl<'a, W: fmt::Write> super::Writer<'a, W> {
TypeInner::Array { base, size, .. } => {
// HLSL arrays are written as `type name[size]`
- let matrix_data = get_inner_matrix_data(module, member.ty);
-
- // We treat matrices of the form `matCx2` as a sequence of C `vec2`s.
- // See the module-level block comment in mod.rs for details.
- if let Some(MatrixType {
- columns,
- rows: crate::VectorSize::Bi,
- width: 4,
- }) = matrix_data
- {
- write!(self.out, "__mat{}x2", columns as u8)?;
- } else {
- // Even though Naga IR matrices are column-major, we must describe
- // matrices passed from the CPU as being in row-major order.
- // See the module-level block comment in mod.rs for details.
- if matrix_data.is_some() {
- write!(self.out, "row_major ")?;
- }
-
- self.write_type(module, base)?;
- }
+ self.write_global_type(module, member.ty)?;
// Write `name`
write!(
diff --git a/src/back/hlsl/writer.rs b/src/back/hlsl/writer.rs
--- a/src/back/hlsl/writer.rs
+++ b/src/back/hlsl/writer.rs
@@ -923,6 +915,40 @@ impl<'a, W: fmt::Write> super::Writer<'a, W> {
Ok(())
}
+ /// Helper method used to write global/structs non image/sampler types
+ ///
+ /// # Notes
+ /// Adds no trailing or leading whitespace
+ pub(super) fn write_global_type(
+ &mut self,
+ module: &Module,
+ ty: Handle<crate::Type>,
+ ) -> BackendResult {
+ let matrix_data = get_inner_matrix_data(module, ty);
+
+ // We treat matrices of the form `matCx2` as a sequence of C `vec2`s.
+ // See the module-level block comment in mod.rs for details.
+ if let Some(MatrixType {
+ columns,
+ rows: crate::VectorSize::Bi,
+ width: 4,
+ }) = matrix_data
+ {
+ write!(self.out, "__mat{}x2", columns as u8)?;
+ } else {
+ // Even though Naga IR matrices are column-major, we must describe
+ // matrices passed from the CPU as being in row-major order.
+ // See the module-level block comment in mod.rs for details.
+ if matrix_data.is_some() {
+ write!(self.out, "row_major ")?;
+ }
+
+ self.write_type(module, ty)?;
+ }
+
+ Ok(())
+ }
+
/// Helper method used to write non image/sampler types
///
/// # Notes
| diff --git a/tests/in/push-constants.param.ron b/tests/in/push-constants.param.ron
--- a/tests/in/push-constants.param.ron
+++ b/tests/in/push-constants.param.ron
@@ -8,4 +8,11 @@
writer_flags: (bits: 0),
binding_map: {},
),
+ hlsl: (
+ shader_model: V5_1,
+ binding_map: {},
+ fake_missing_bindings: true,
+ special_constants_binding: Some((space: 1, register: 0)),
+ push_constants_target: Some((space: 0, register: 0)),
+ ),
)
diff --git a/tests/in/push-constants.wgsl b/tests/in/push-constants.wgsl
--- a/tests/in/push-constants.wgsl
+++ b/tests/in/push-constants.wgsl
@@ -7,6 +7,14 @@ struct FragmentIn {
@location(0) color: vec4<f32>
}
+@vertex
+fn vert_main(
+ @location(0) pos : vec2<f32>,
+ @builtin(vertex_index) vi: u32,
+) -> @builtin(position) vec4<f32> {
+ return vec4<f32>(f32(vi) * pc.multiplier * pos, 0.0, 1.0);
+}
+
@fragment
fn main(in: FragmentIn) -> @location(0) vec4<f32> {
return in.color * pc.multiplier;
diff --git /dev/null b/tests/out/glsl/push-constants.vert_main.Vertex.glsl
new file mode 100644
--- /dev/null
+++ b/tests/out/glsl/push-constants.vert_main.Vertex.glsl
@@ -0,0 +1,23 @@
+#version 320 es
+
+precision highp float;
+precision highp int;
+
+struct PushConstants {
+ float multiplier;
+};
+struct FragmentIn {
+ vec4 color;
+};
+uniform PushConstants pc;
+
+layout(location = 0) in vec2 _p2vs_location0;
+
+void main() {
+ vec2 pos = _p2vs_location0;
+ uint vi = uint(gl_VertexID);
+ float _e5 = pc.multiplier;
+ gl_Position = vec4(((float(vi) * _e5) * pos), 0.0, 1.0);
+ return;
+}
+
diff --git /dev/null b/tests/out/hlsl/push-constants.hlsl
new file mode 100644
--- /dev/null
+++ b/tests/out/hlsl/push-constants.hlsl
@@ -0,0 +1,33 @@
+struct NagaConstants {
+ int base_vertex;
+ int base_instance;
+ uint other;
+};
+ConstantBuffer<NagaConstants> _NagaConstants: register(b0, space1);
+
+struct PushConstants {
+ float multiplier;
+};
+
+struct FragmentIn {
+ float4 color : LOC0;
+};
+
+ConstantBuffer<PushConstants> pc: register(b0);
+
+struct FragmentInput_main {
+ float4 color : LOC0;
+};
+
+float4 vert_main(float2 pos : LOC0, uint vi : SV_VertexID) : SV_Position
+{
+ float _expr5 = pc.multiplier;
+ return float4(((float((_NagaConstants.base_vertex + vi)) * _expr5) * pos), 0.0, 1.0);
+}
+
+float4 main(FragmentInput_main fragmentinput_main) : SV_Target0
+{
+ FragmentIn in_ = { fragmentinput_main.color };
+ float _expr4 = pc.multiplier;
+ return (in_.color * _expr4);
+}
diff --git /dev/null b/tests/out/hlsl/push-constants.hlsl.config
new file mode 100644
--- /dev/null
+++ b/tests/out/hlsl/push-constants.hlsl.config
@@ -0,0 +1,3 @@
+vertex=(vert_main:vs_5_1 )
+fragment=(main:ps_5_1 )
+compute=()
diff --git a/tests/snapshots.rs b/tests/snapshots.rs
--- a/tests/snapshots.rs
+++ b/tests/snapshots.rs
@@ -471,7 +471,7 @@ fn convert_wgsl() {
Targets::SPIRV | Targets::METAL | Targets::HLSL | Targets::WGSL | Targets::GLSL,
),
("extra", Targets::SPIRV | Targets::METAL | Targets::WGSL),
- ("push-constants", Targets::GLSL),
+ ("push-constants", Targets::GLSL | Targets::HLSL),
(
"operators",
Targets::SPIRV | Targets::METAL | Targets::GLSL | Targets::HLSL | Targets::WGSL,
| [hlsl-out] Implement push constants
Tracking issue for implementing push constants for the HLSL backend
https://github.com/gfx-rs/naga/blob/f90e563c281cfc71c794e0426ebcced9e3999202/src/back/hlsl/writer.rs#L583
| 2022-07-11T01:36:08 | 0.9 | 4f8db997b0b1e73fc1e2cac070485333a5b61925 | [
"convert_wgsl"
] | [
"arena::tests::append_non_unique",
"arena::tests::append_unique",
"arena::tests::fetch_or_append_non_unique",
"back::msl::test_error_size",
"arena::tests::fetch_or_append_unique",
"back::spv::layout::test_physical_layout_in_words",
"back::spv::layout::test_logical_layout_in_words",
"back::spv::writer:... | [] | [] | 77 | |
gfx-rs/naga | 1,993 | gfx-rs__naga-1993 | [
"1831"
] | ea832a9eec13560560c017072d4318d5d942e5e5 | diff --git a/src/back/dot/mod.rs b/src/back/dot/mod.rs
--- a/src/back/dot/mod.rs
+++ b/src/back/dot/mod.rs
@@ -81,11 +81,15 @@ impl StatementGraph {
S::Loop {
ref body,
ref continuing,
+ break_if,
} => {
let body_id = self.add(body);
self.flow.push((id, body_id, "body"));
let continuing_id = self.add(continuing);
self.flow.push((body_id, continuing_id, "continuing"));
+ if let Some(expr) = break_if {
+ self.dependencies.push((id, expr, "break if"));
+ }
"Loop"
}
S::Return { value } => {
diff --git a/src/back/glsl/mod.rs b/src/back/glsl/mod.rs
--- a/src/back/glsl/mod.rs
+++ b/src/back/glsl/mod.rs
@@ -1800,15 +1800,24 @@ impl<'a, W: Write> Writer<'a, W> {
Statement::Loop {
ref body,
ref continuing,
+ break_if,
} => {
- if !continuing.is_empty() {
+ if !continuing.is_empty() || break_if.is_some() {
let gate_name = self.namer.call("loop_init");
writeln!(self.out, "{}bool {} = true;", level, gate_name)?;
writeln!(self.out, "{}while(true) {{", level)?;
let l2 = level.next();
+ let l3 = l2.next();
writeln!(self.out, "{}if (!{}) {{", l2, gate_name)?;
for sta in continuing {
- self.write_stmt(sta, ctx, l2.next())?;
+ self.write_stmt(sta, ctx, l3)?;
+ }
+ if let Some(condition) = break_if {
+ write!(self.out, "{}if (", l3)?;
+ self.write_expr(condition, ctx)?;
+ writeln!(self.out, ") {{")?;
+ writeln!(self.out, "{}break;", l3.next())?;
+ writeln!(self.out, "{}}}", l3)?;
}
writeln!(self.out, "{}}}", l2)?;
writeln!(self.out, "{}{} = false;", level.next(), gate_name)?;
diff --git a/src/back/hlsl/writer.rs b/src/back/hlsl/writer.rs
--- a/src/back/hlsl/writer.rs
+++ b/src/back/hlsl/writer.rs
@@ -1497,18 +1497,27 @@ impl<'a, W: fmt::Write> super::Writer<'a, W> {
Statement::Loop {
ref body,
ref continuing,
+ break_if,
} => {
let l2 = level.next();
- if !continuing.is_empty() {
+ if !continuing.is_empty() || break_if.is_some() {
let gate_name = self.namer.call("loop_init");
writeln!(self.out, "{}bool {} = true;", level, gate_name)?;
writeln!(self.out, "{}while(true) {{", level)?;
writeln!(self.out, "{}if (!{}) {{", l2, gate_name)?;
+ let l3 = l2.next();
for sta in continuing.iter() {
- self.write_stmt(module, sta, func_ctx, l2.next())?;
+ self.write_stmt(module, sta, func_ctx, l3)?;
}
- writeln!(self.out, "{}}}", level.next())?;
- writeln!(self.out, "{}{} = false;", level.next(), gate_name)?;
+ if let Some(condition) = break_if {
+ write!(self.out, "{}if (", l3)?;
+ self.write_expr(module, condition, func_ctx)?;
+ writeln!(self.out, ") {{")?;
+ writeln!(self.out, "{}break;", l3.next())?;
+ writeln!(self.out, "{}}}", l3)?;
+ }
+ writeln!(self.out, "{}}}", l2)?;
+ writeln!(self.out, "{}{} = false;", l2, gate_name)?;
} else {
writeln!(self.out, "{}while(true) {{", level)?;
}
diff --git a/src/back/msl/writer.rs b/src/back/msl/writer.rs
--- a/src/back/msl/writer.rs
+++ b/src/back/msl/writer.rs
@@ -2552,14 +2552,23 @@ impl<W: Write> Writer<W> {
crate::Statement::Loop {
ref body,
ref continuing,
+ break_if,
} => {
- if !continuing.is_empty() {
+ if !continuing.is_empty() || break_if.is_some() {
let gate_name = self.namer.call("loop_init");
writeln!(self.out, "{}bool {} = true;", level, gate_name)?;
writeln!(self.out, "{}while(true) {{", level)?;
let lif = level.next();
+ let lcontinuing = lif.next();
writeln!(self.out, "{}if (!{}) {{", lif, gate_name)?;
- self.put_block(lif.next(), continuing, context)?;
+ self.put_block(lcontinuing, continuing, context)?;
+ if let Some(condition) = break_if {
+ write!(self.out, "{}if (", lcontinuing)?;
+ self.put_expression(condition, &context.expression, true)?;
+ writeln!(self.out, ") {{")?;
+ writeln!(self.out, "{}break;", lcontinuing.next())?;
+ writeln!(self.out, "{}}}", lcontinuing)?;
+ }
writeln!(self.out, "{}}}", lif)?;
writeln!(self.out, "{}{} = false;", lif, gate_name)?;
} else {
diff --git a/src/back/spv/block.rs b/src/back/spv/block.rs
--- a/src/back/spv/block.rs
+++ b/src/back/spv/block.rs
@@ -37,6 +37,28 @@ enum ExpressionPointer {
},
}
+/// The termination statement to be added to the end of the block
+pub enum BlockExit {
+ /// Generates an OpReturn (void return)
+ Return,
+ /// Generates an OpBranch to the specified block
+ Branch {
+ /// The branch target block
+ target: Word,
+ },
+ /// Translates a loop `break if` into an `OpBranchConditional` to the
+ /// merge block if true (the merge block is passed through [`LoopContext::break_id`])
+ /// or else to the loop header (passed through [`preamble_id`])
+ ///
+ /// [`preamble_id`]: Self::BreakIf::preamble_id
+ BreakIf {
+ /// The condition of the `break if`
+ condition: Handle<crate::Expression>,
+ /// The loop header block id
+ preamble_id: Word,
+ },
+}
+
impl Writer {
// Flip Y coordinate to adjust for coordinate space difference
// between SPIR-V and our IR.
diff --git a/src/back/spv/block.rs b/src/back/spv/block.rs
--- a/src/back/spv/block.rs
+++ b/src/back/spv/block.rs
@@ -1491,7 +1513,7 @@ impl<'w> BlockContext<'w> {
&mut self,
label_id: Word,
statements: &[crate::Statement],
- exit_id: Option<Word>,
+ exit: BlockExit,
loop_context: LoopContext,
) -> Result<(), Error> {
let mut block = Block::new(label_id);
diff --git a/src/back/spv/block.rs b/src/back/spv/block.rs
--- a/src/back/spv/block.rs
+++ b/src/back/spv/block.rs
@@ -1508,7 +1530,12 @@ impl<'w> BlockContext<'w> {
self.function.consume(block, Instruction::branch(scope_id));
let merge_id = self.gen_id();
- self.write_block(scope_id, block_statements, Some(merge_id), loop_context)?;
+ self.write_block(
+ scope_id,
+ block_statements,
+ BlockExit::Branch { target: merge_id },
+ loop_context,
+ )?;
block = Block::new(merge_id);
}
diff --git a/src/back/spv/block.rs b/src/back/spv/block.rs
--- a/src/back/spv/block.rs
+++ b/src/back/spv/block.rs
@@ -1546,10 +1573,20 @@ impl<'w> BlockContext<'w> {
);
if let Some(block_id) = accept_id {
- self.write_block(block_id, accept, Some(merge_id), loop_context)?;
+ self.write_block(
+ block_id,
+ accept,
+ BlockExit::Branch { target: merge_id },
+ loop_context,
+ )?;
}
if let Some(block_id) = reject_id {
- self.write_block(block_id, reject, Some(merge_id), loop_context)?;
+ self.write_block(
+ block_id,
+ reject,
+ BlockExit::Branch { target: merge_id },
+ loop_context,
+ )?;
}
block = Block::new(merge_id);
diff --git a/src/back/spv/block.rs b/src/back/spv/block.rs
--- a/src/back/spv/block.rs
+++ b/src/back/spv/block.rs
@@ -1611,7 +1648,9 @@ impl<'w> BlockContext<'w> {
self.write_block(
*label_id,
&case.body,
- Some(case_finish_id),
+ BlockExit::Branch {
+ target: case_finish_id,
+ },
inner_context,
)?;
}
diff --git a/src/back/spv/block.rs b/src/back/spv/block.rs
--- a/src/back/spv/block.rs
+++ b/src/back/spv/block.rs
@@ -1619,7 +1658,12 @@ impl<'w> BlockContext<'w> {
// If no default was encountered write a empty block to satisfy the presence of
// a block the default label
if !reached_default {
- self.write_block(default_id, &[], Some(merge_id), inner_context)?;
+ self.write_block(
+ default_id,
+ &[],
+ BlockExit::Branch { target: merge_id },
+ inner_context,
+ )?;
}
block = Block::new(merge_id);
diff --git a/src/back/spv/block.rs b/src/back/spv/block.rs
--- a/src/back/spv/block.rs
+++ b/src/back/spv/block.rs
@@ -1627,6 +1671,7 @@ impl<'w> BlockContext<'w> {
crate::Statement::Loop {
ref body,
ref continuing,
+ break_if,
} => {
let preamble_id = self.gen_id();
self.function
diff --git a/src/back/spv/block.rs b/src/back/spv/block.rs
--- a/src/back/spv/block.rs
+++ b/src/back/spv/block.rs
@@ -1649,17 +1694,29 @@ impl<'w> BlockContext<'w> {
self.write_block(
body_id,
body,
- Some(continuing_id),
+ BlockExit::Branch {
+ target: continuing_id,
+ },
LoopContext {
continuing_id: Some(continuing_id),
break_id: Some(merge_id),
},
)?;
+ let exit = match break_if {
+ Some(condition) => BlockExit::BreakIf {
+ condition,
+ preamble_id,
+ },
+ None => BlockExit::Branch {
+ target: preamble_id,
+ },
+ };
+
self.write_block(
continuing_id,
continuing,
- Some(preamble_id),
+ exit,
LoopContext {
continuing_id: None,
break_id: Some(merge_id),
diff --git a/src/back/spv/block.rs b/src/back/spv/block.rs
--- a/src/back/spv/block.rs
+++ b/src/back/spv/block.rs
@@ -1955,12 +2012,10 @@ impl<'w> BlockContext<'w> {
}
}
- let termination = match exit_id {
- Some(id) => Instruction::branch(id),
- // This can happen if the last branch had all the paths
- // leading out of the graph (i.e. returning).
- // Or it may be the end of the self.function.
- None => match self.ir_function.result {
+ let termination = match exit {
+ // We're generating code for the top-level Block of the function, so we
+ // need to end it with some kind of return instruction.
+ BlockExit::Return => match self.ir_function.result {
Some(ref result) if self.function.entry_point_context.is_none() => {
let type_id = self.get_type_id(LookupType::Handle(result.ty));
let null_id = self.writer.write_constant_null(type_id);
diff --git a/src/back/spv/block.rs b/src/back/spv/block.rs
--- a/src/back/spv/block.rs
+++ b/src/back/spv/block.rs
@@ -1968,6 +2023,19 @@ impl<'w> BlockContext<'w> {
}
_ => Instruction::return_void(),
},
+ BlockExit::Branch { target } => Instruction::branch(target),
+ BlockExit::BreakIf {
+ condition,
+ preamble_id,
+ } => {
+ let condition_id = self.cached[condition];
+
+ Instruction::branch_conditional(
+ condition_id,
+ loop_context.break_id.unwrap(),
+ preamble_id,
+ )
+ }
};
self.function.consume(block, termination);
diff --git a/src/back/spv/writer.rs b/src/back/spv/writer.rs
--- a/src/back/spv/writer.rs
+++ b/src/back/spv/writer.rs
@@ -574,7 +574,12 @@ impl Writer {
context
.function
.consume(prelude, Instruction::branch(main_id));
- context.write_block(main_id, &ir_function.body, None, LoopContext::default())?;
+ context.write_block(
+ main_id,
+ &ir_function.body,
+ super::block::BlockExit::Return,
+ LoopContext::default(),
+ )?;
// Consume the `BlockContext`, ending its borrows and letting the
// `Writer` steal back its cached expression table and temp_list.
diff --git a/src/back/wgsl/writer.rs b/src/back/wgsl/writer.rs
--- a/src/back/wgsl/writer.rs
+++ b/src/back/wgsl/writer.rs
@@ -908,6 +908,7 @@ impl<W: Write> Writer<W> {
Statement::Loop {
ref body,
ref continuing,
+ break_if,
} => {
write!(self.out, "{}", level)?;
writeln!(self.out, "loop {{")?;
diff --git a/src/back/wgsl/writer.rs b/src/back/wgsl/writer.rs
--- a/src/back/wgsl/writer.rs
+++ b/src/back/wgsl/writer.rs
@@ -917,11 +918,26 @@ impl<W: Write> Writer<W> {
self.write_stmt(module, sta, func_ctx, l2)?;
}
- if !continuing.is_empty() {
+ // The continuing block is optional, so we don't need to write it when
+ // it is empty; however, `break if` counts as a continuing statement,
+ // so even if `continuing` is empty we must generate the block when a
+ // `break if` exists
+ if !continuing.is_empty() || break_if.is_some() {
writeln!(self.out, "{}continuing {{", l2)?;
for sta in continuing.iter() {
self.write_stmt(module, sta, func_ctx, l2.next())?;
}
+
+ // The `break if` is always the last
+ // statement of the `continuing` block
+ if let Some(condition) = break_if {
+ // The trailing space is important
+ write!(self.out, "{}break if ", l2.next())?;
+ self.write_expr(module, condition, func_ctx)?;
+ // Close the `break if` statement
+ writeln!(self.out, ";")?;
+ }
+
writeln!(self.out, "{}}}", l2)?;
}
diff --git a/src/front/glsl/parser/functions.rs b/src/front/glsl/parser/functions.rs
--- a/src/front/glsl/parser/functions.rs
+++ b/src/front/glsl/parser/functions.rs
@@ -358,6 +358,7 @@ impl<'source> ParsingContext<'source> {
Statement::Loop {
body: loop_body,
continuing: Block::new(),
+ break_if: None,
},
meta,
);
diff --git a/src/front/glsl/parser/functions.rs b/src/front/glsl/parser/functions.rs
--- a/src/front/glsl/parser/functions.rs
+++ b/src/front/glsl/parser/functions.rs
@@ -411,6 +412,7 @@ impl<'source> ParsingContext<'source> {
Statement::Loop {
body: loop_body,
continuing: Block::new(),
+ break_if: None,
},
meta,
);
diff --git a/src/front/glsl/parser/functions.rs b/src/front/glsl/parser/functions.rs
--- a/src/front/glsl/parser/functions.rs
+++ b/src/front/glsl/parser/functions.rs
@@ -513,6 +515,7 @@ impl<'source> ParsingContext<'source> {
Statement::Loop {
body: block,
continuing,
+ break_if: None,
},
meta,
);
diff --git a/src/front/spv/function.rs b/src/front/spv/function.rs
--- a/src/front/spv/function.rs
+++ b/src/front/spv/function.rs
@@ -556,7 +556,11 @@ impl<'function> BlockContext<'function> {
let continuing = lower_impl(blocks, bodies, continuing);
block.push(
- crate::Statement::Loop { body, continuing },
+ crate::Statement::Loop {
+ body,
+ continuing,
+ break_if: None,
+ },
crate::Span::default(),
)
}
diff --git a/src/front/spv/mod.rs b/src/front/spv/mod.rs
--- a/src/front/spv/mod.rs
+++ b/src/front/spv/mod.rs
@@ -3565,6 +3565,7 @@ impl<I: Iterator<Item = u32>> Parser<I> {
S::Loop {
ref mut body,
ref mut continuing,
+ break_if: _,
} => {
self.patch_statements(body, expressions, fun_parameter_sampling)?;
self.patch_statements(continuing, expressions, fun_parameter_sampling)?;
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -125,6 +125,8 @@ pub enum Error<'a> {
BadIncrDecrReferenceType(Span),
InvalidResolve(ResolveError),
InvalidForInitializer(Span),
+ /// A break if appeared outside of a continuing block
+ InvalidBreakIf(Span),
InvalidGatherComponent(Span, u32),
InvalidConstructorComponentType(Span, i32),
InvalidIdentifierUnderscore(Span),
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -307,6 +309,11 @@ impl<'a> Error<'a> {
labels: vec![(bad_span.clone(), "not an assignment or function call".into())],
notes: vec![],
},
+ Error::InvalidBreakIf(ref bad_span) => ParseError {
+ message: "A break if is only allowed in a continuing block".to_string(),
+ labels: vec![(bad_span.clone(), "not in a continuing block".into())],
+ notes: vec![],
+ },
Error::InvalidGatherComponent(ref bad_span, component) => ParseError {
message: format!("textureGather component {} doesn't exist, must be 0, 1, 2, or 3", component),
labels: vec![(bad_span.clone(), "invalid component".into())],
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -3811,26 +3818,7 @@ impl Parser {
Some(crate::Statement::Switch { selector, cases })
}
- "loop" => {
- let _ = lexer.next();
- let mut body = crate::Block::new();
- let mut continuing = crate::Block::new();
- lexer.expect(Token::Paren('{'))?;
-
- loop {
- if lexer.skip(Token::Word("continuing")) {
- continuing = self.parse_block(lexer, context.reborrow(), false)?;
- lexer.expect(Token::Paren('}'))?;
- break;
- }
- if lexer.skip(Token::Paren('}')) {
- break;
- }
- self.parse_statement(lexer, context.reborrow(), &mut body, false)?;
- }
-
- Some(crate::Statement::Loop { body, continuing })
- }
+ "loop" => Some(self.parse_loop(lexer, context.reborrow(), &mut emitter)?),
"while" => {
let _ = lexer.next();
let mut body = crate::Block::new();
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -3863,6 +3851,7 @@ impl Parser {
Some(crate::Statement::Loop {
body,
continuing: crate::Block::new(),
+ break_if: None,
})
}
"for" => {
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -3935,10 +3924,22 @@ impl Parser {
self.parse_statement(lexer, context.reborrow(), &mut body, false)?;
}
- Some(crate::Statement::Loop { body, continuing })
+ Some(crate::Statement::Loop {
+ body,
+ continuing,
+ break_if: None,
+ })
}
"break" => {
- let _ = lexer.next();
+ let (_, mut span) = lexer.next();
+ // Check if the next token is an `if`; this indicates
+ // that the user tried to type out a `break if` which
+ // is illegal in this position.
+ let (peeked_token, peeked_span) = lexer.peek();
+ if let Token::Word("if") = peeked_token {
+ span.end = peeked_span.end;
+ return Err(Error::InvalidBreakIf(span));
+ }
Some(crate::Statement::Break)
}
"continue" => {
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -4041,6 +4042,84 @@ impl Parser {
Ok(())
}
+ fn parse_loop<'a>(
+ &mut self,
+ lexer: &mut Lexer<'a>,
+ mut context: StatementContext<'a, '_, '_>,
+ emitter: &mut super::Emitter,
+ ) -> Result<crate::Statement, Error<'a>> {
+ let _ = lexer.next();
+ let mut body = crate::Block::new();
+ let mut continuing = crate::Block::new();
+ let mut break_if = None;
+ lexer.expect(Token::Paren('{'))?;
+
+ loop {
+ if lexer.skip(Token::Word("continuing")) {
+ // Branch for the `continuing` block, this must be
+ // the last thing in the loop body
+
+ // Expect an opening brace to start the continuing block
+ lexer.expect(Token::Paren('{'))?;
+ loop {
+ if lexer.skip(Token::Word("break")) {
+ // Branch for the `break if` statement, this statement
+ // has the form `break if <expr>;` and must be the last
+ // statement in a continuing block
+
+ // The break must be followed by an `if` to form
+ // the break if
+ lexer.expect(Token::Word("if"))?;
+
+ // Start the emitter to begin parsing an expression
+ emitter.start(context.expressions);
+ let condition = self.parse_general_expression(
+ lexer,
+ context.as_expression(&mut body, emitter),
+ )?;
+ // Add all emits to the continuing body
+ continuing.extend(emitter.finish(context.expressions));
+ // Set the condition of the break if to the newly parsed
+ // expression
+ break_if = Some(condition);
+
+ // Expect a semicolon to close the statement
+ lexer.expect(Token::Separator(';'))?;
+ // Expect a closing brace to close the continuing block,
+ // since the break if must be the last statement
+ lexer.expect(Token::Paren('}'))?;
+ // Stop parsing the continuing block
+ break;
+ } else if lexer.skip(Token::Paren('}')) {
+ // If we encounter a closing brace it means we have reached
+ // the end of the continuing block and should stop processing
+ break;
+ } else {
+ // Otherwise try to parse a statement
+ self.parse_statement(lexer, context.reborrow(), &mut continuing, false)?;
+ }
+ }
+ // Since the continuing block must be the last part of the loop body,
+ // we expect to see a closing brace to end the loop body
+ lexer.expect(Token::Paren('}'))?;
+ break;
+ }
+ if lexer.skip(Token::Paren('}')) {
+ // If we encounter a closing brace it means we have reached
+ // the end of the loop body and should stop processing
+ break;
+ }
+ // Otherwise try to parse a statement
+ self.parse_statement(lexer, context.reborrow(), &mut body, false)?;
+ }
+
+ Ok(crate::Statement::Loop {
+ body,
+ continuing,
+ break_if,
+ })
+ }
+
fn parse_block<'a>(
&mut self,
lexer: &mut Lexer<'a>,
diff --git a/src/lib.rs b/src/lib.rs
--- a/src/lib.rs
+++ b/src/lib.rs
@@ -1439,11 +1439,23 @@ pub enum Statement {
/// this loop. (It may have `Break` and `Continue` statements targeting
/// loops or switches nested within the `continuing` block.)
///
+ /// If present, `break_if` is an expression which is evaluated after the
+ /// continuing block. If its value is true, control continues after the
+ /// `Loop` statement, rather than branching back to the top of body as
+ /// usual. The `break_if` expression corresponds to a "break if" statement
+ /// in WGSL, or a loop whose back edge is an `OpBranchConditional`
+ /// instruction in SPIR-V.
+ ///
/// [`Break`]: Statement::Break
/// [`Continue`]: Statement::Continue
/// [`Kill`]: Statement::Kill
/// [`Return`]: Statement::Return
- Loop { body: Block, continuing: Block },
+ /// [`break if`]: Self::Loop::break_if
+ Loop {
+ body: Block,
+ continuing: Block,
+ break_if: Option<Handle<Expression>>,
+ },
/// Exits the innermost enclosing [`Loop`] or [`Switch`].
///
diff --git a/src/valid/analyzer.rs b/src/valid/analyzer.rs
--- a/src/valid/analyzer.rs
+++ b/src/valid/analyzer.rs
@@ -841,6 +841,7 @@ impl FunctionInfo {
S::Loop {
ref body,
ref continuing,
+ break_if: _,
} => {
let body_uniformity =
self.process_block(body, other_functions, disruptor, expression_arena)?;
diff --git a/src/valid/function.rs b/src/valid/function.rs
--- a/src/valid/function.rs
+++ b/src/valid/function.rs
@@ -499,6 +499,7 @@ impl super::Validator {
S::Loop {
ref body,
ref continuing,
+ break_if,
} => {
// special handling for block scoping is needed here,
// because the continuing{} block inherits the scope
diff --git a/src/valid/function.rs b/src/valid/function.rs
--- a/src/valid/function.rs
+++ b/src/valid/function.rs
@@ -520,6 +521,20 @@ impl super::Validator {
&context.with_abilities(ControlFlowAbility::empty()),
)?
.stages;
+
+ if let Some(condition) = break_if {
+ match *context.resolve_type(condition, &self.valid_expression_set)? {
+ Ti::Scalar {
+ kind: crate::ScalarKind::Bool,
+ width: _,
+ } => {}
+ _ => {
+ return Err(FunctionError::InvalidIfType(condition)
+ .with_span_handle(condition, context.expressions))
+ }
+ }
+ }
+
for handle in self.valid_expression_list.drain(base_expression_count..) {
self.valid_expression_set.remove(handle.index());
}
| diff --git /dev/null b/tests/in/break-if.wgsl
new file mode 100644
--- /dev/null
+++ b/tests/in/break-if.wgsl
@@ -0,0 +1,32 @@
+@compute @workgroup_size(1)
+fn main() {}
+
+fn breakIfEmpty() {
+ loop {
+ continuing {
+ break if true;
+ }
+ }
+}
+
+fn breakIfEmptyBody(a: bool) {
+ loop {
+ continuing {
+ var b = a;
+ var c = a != b;
+
+ break if a == c;
+ }
+ }
+}
+
+fn breakIf(a: bool) {
+ loop {
+ var d = a;
+ var e = a != d;
+
+ continuing {
+ break if a == e;
+ }
+ }
+}
diff --git /dev/null b/tests/out/glsl/break-if.main.Compute.glsl
new file mode 100644
--- /dev/null
+++ b/tests/out/glsl/break-if.main.Compute.glsl
@@ -0,0 +1,65 @@
+#version 310 es
+
+precision highp float;
+precision highp int;
+
+layout(local_size_x = 1, local_size_y = 1, local_size_z = 1) in;
+
+
+void breakIfEmpty() {
+ bool loop_init = true;
+ while(true) {
+ if (!loop_init) {
+ if (true) {
+ break;
+ }
+ }
+ loop_init = false;
+ }
+ return;
+}
+
+void breakIfEmptyBody(bool a) {
+ bool b = false;
+ bool c = false;
+ bool loop_init_1 = true;
+ while(true) {
+ if (!loop_init_1) {
+ b = a;
+ bool _e2 = b;
+ c = (a != _e2);
+ bool _e5 = c;
+ bool unnamed = (a == _e5);
+ if (unnamed) {
+ break;
+ }
+ }
+ loop_init_1 = false;
+ }
+ return;
+}
+
+void breakIf(bool a_1) {
+ bool d = false;
+ bool e = false;
+ bool loop_init_2 = true;
+ while(true) {
+ if (!loop_init_2) {
+ bool _e5 = e;
+ bool unnamed_1 = (a_1 == _e5);
+ if (unnamed_1) {
+ break;
+ }
+ }
+ loop_init_2 = false;
+ d = a_1;
+ bool _e2 = d;
+ e = (a_1 != _e2);
+ }
+ return;
+}
+
+void main() {
+ return;
+}
+
diff --git /dev/null b/tests/out/hlsl/break-if.hlsl
new file mode 100644
--- /dev/null
+++ b/tests/out/hlsl/break-if.hlsl
@@ -0,0 +1,64 @@
+
+void breakIfEmpty()
+{
+ bool loop_init = true;
+ while(true) {
+ if (!loop_init) {
+ if (true) {
+ break;
+ }
+ }
+ loop_init = false;
+ }
+ return;
+}
+
+void breakIfEmptyBody(bool a)
+{
+ bool b = (bool)0;
+ bool c = (bool)0;
+
+ bool loop_init_1 = true;
+ while(true) {
+ if (!loop_init_1) {
+ b = a;
+ bool _expr2 = b;
+ c = (a != _expr2);
+ bool _expr5 = c;
+ bool unnamed = (a == _expr5);
+ if (unnamed) {
+ break;
+ }
+ }
+ loop_init_1 = false;
+ }
+ return;
+}
+
+void breakIf(bool a_1)
+{
+ bool d = (bool)0;
+ bool e = (bool)0;
+
+ bool loop_init_2 = true;
+ while(true) {
+ if (!loop_init_2) {
+ bool _expr5 = e;
+ bool unnamed_1 = (a_1 == _expr5);
+ if (unnamed_1) {
+ break;
+ }
+ }
+ loop_init_2 = false;
+ d = a_1;
+ bool _expr2 = d;
+ e = (a_1 != _expr2);
+ }
+ return;
+}
+
+[numthreads(1, 1, 1)]
+void main()
+{
+ return;
+}
diff --git /dev/null b/tests/out/hlsl/break-if.hlsl.config
new file mode 100644
--- /dev/null
+++ b/tests/out/hlsl/break-if.hlsl.config
@@ -0,0 +1,3 @@
+vertex=()
+fragment=()
+compute=(main:cs_5_1 )
diff --git a/tests/out/ir/collatz.ron b/tests/out/ir/collatz.ron
--- a/tests/out/ir/collatz.ron
+++ b/tests/out/ir/collatz.ron
@@ -261,6 +261,7 @@
),
],
continuing: [],
+ break_if: None,
),
Emit((
start: 24,
diff --git a/tests/out/ir/shadow.ron b/tests/out/ir/shadow.ron
--- a/tests/out/ir/shadow.ron
+++ b/tests/out/ir/shadow.ron
@@ -1320,6 +1320,7 @@
value: 120,
),
],
+ break_if: None,
),
Emit((
start: 120,
diff --git /dev/null b/tests/out/msl/break-if.msl
new file mode 100644
--- /dev/null
+++ b/tests/out/msl/break-if.msl
@@ -0,0 +1,69 @@
+// language: metal2.0
+#include <metal_stdlib>
+#include <simd/simd.h>
+
+using metal::uint;
+
+
+void breakIfEmpty(
+) {
+ bool loop_init = true;
+ while(true) {
+ if (!loop_init) {
+ if (true) {
+ break;
+ }
+ }
+ loop_init = false;
+ }
+ return;
+}
+
+void breakIfEmptyBody(
+ bool a
+) {
+ bool b = {};
+ bool c = {};
+ bool loop_init_1 = true;
+ while(true) {
+ if (!loop_init_1) {
+ b = a;
+ bool _e2 = b;
+ c = a != _e2;
+ bool _e5 = c;
+ bool unnamed = a == _e5;
+ if (a == c) {
+ break;
+ }
+ }
+ loop_init_1 = false;
+ }
+ return;
+}
+
+void breakIf(
+ bool a_1
+) {
+ bool d = {};
+ bool e = {};
+ bool loop_init_2 = true;
+ while(true) {
+ if (!loop_init_2) {
+ bool _e5 = e;
+ bool unnamed_1 = a_1 == _e5;
+ if (a_1 == e) {
+ break;
+ }
+ }
+ loop_init_2 = false;
+ d = a_1;
+ bool _e2 = d;
+ e = a_1 != _e2;
+ }
+ return;
+}
+
+kernel void main_(
+) {
+ return;
+}
diff --git /dev/null b/tests/out/spv/break-if.spvasm
new file mode 100644
--- /dev/null
+++ b/tests/out/spv/break-if.spvasm
@@ -0,0 +1,88 @@
+; SPIR-V
+; Version: 1.1
+; Generator: rspirv
+; Bound: 50
+OpCapability Shader
+%1 = OpExtInstImport "GLSL.std.450"
+OpMemoryModel Logical GLSL450
+OpEntryPoint GLCompute %48 "main"
+OpExecutionMode %48 LocalSize 1 1 1
+%2 = OpTypeVoid
+%4 = OpTypeBool
+%3 = OpConstantTrue %4
+%7 = OpTypeFunction %2
+%14 = OpTypePointer Function %4
+%15 = OpConstantNull %4
+%17 = OpConstantNull %4
+%21 = OpTypeFunction %2 %4
+%32 = OpConstantNull %4
+%34 = OpConstantNull %4
+%6 = OpFunction %2 None %7
+%5 = OpLabel
+OpBranch %8
+%8 = OpLabel
+OpBranch %9
+%9 = OpLabel
+OpLoopMerge %10 %12 None
+OpBranch %11
+%11 = OpLabel
+OpBranch %12
+%12 = OpLabel
+OpBranchConditional %3 %10 %9
+%10 = OpLabel
+OpReturn
+OpFunctionEnd
+%20 = OpFunction %2 None %21
+%19 = OpFunctionParameter %4
+%18 = OpLabel
+%13 = OpVariable %14 Function %15
+%16 = OpVariable %14 Function %17
+OpBranch %22
+%22 = OpLabel
+OpBranch %23
+%23 = OpLabel
+OpLoopMerge %24 %26 None
+OpBranch %25
+%25 = OpLabel
+OpBranch %26
+%26 = OpLabel
+OpStore %13 %19
+%27 = OpLoad %4 %13
+%28 = OpLogicalNotEqual %4 %19 %27
+OpStore %16 %28
+%29 = OpLoad %4 %16
+%30 = OpLogicalEqual %4 %19 %29
+OpBranchConditional %30 %24 %23
+%24 = OpLabel
+OpReturn
+OpFunctionEnd
+%37 = OpFunction %2 None %21
+%36 = OpFunctionParameter %4
+%35 = OpLabel
+%31 = OpVariable %14 Function %32
+%33 = OpVariable %14 Function %34
+OpBranch %38
+%38 = OpLabel
+OpBranch %39
+%39 = OpLabel
+OpLoopMerge %40 %42 None
+OpBranch %41
+%41 = OpLabel
+OpStore %31 %36
+%43 = OpLoad %4 %31
+%44 = OpLogicalNotEqual %4 %36 %43
+OpStore %33 %44
+OpBranch %42
+%42 = OpLabel
+%45 = OpLoad %4 %33
+%46 = OpLogicalEqual %4 %36 %45
+OpBranchConditional %46 %40 %39
+%40 = OpLabel
+OpReturn
+OpFunctionEnd
+%48 = OpFunction %2 None %7
+%47 = OpLabel
+OpBranch %49
+%49 = OpLabel
+OpReturn
+OpFunctionEnd
\ No newline at end of file
diff --git /dev/null b/tests/out/wgsl/break-if.wgsl
new file mode 100644
--- /dev/null
+++ b/tests/out/wgsl/break-if.wgsl
@@ -0,0 +1,47 @@
+fn breakIfEmpty() {
+ loop {
+ continuing {
+ break if true;
+ }
+ }
+ return;
+}
+
+fn breakIfEmptyBody(a: bool) {
+ var b: bool;
+ var c: bool;
+
+ loop {
+ continuing {
+ b = a;
+ let _e2 = b;
+ c = (a != _e2);
+ let _e5 = c;
+ _ = (a == _e5);
+ break if (a == _e5);
+ }
+ }
+ return;
+}
+
+fn breakIf(a_1: bool) {
+ var d: bool;
+ var e: bool;
+
+ loop {
+ d = a_1;
+ let _e2 = d;
+ e = (a_1 != _e2);
+ continuing {
+ let _e5 = e;
+ _ = (a_1 == _e5);
+ break if (a_1 == _e5);
+ }
+ }
+ return;
+}
+
+@compute @workgroup_size(1, 1, 1)
+fn main() {
+ return;
+}
diff --git a/tests/snapshots.rs b/tests/snapshots.rs
--- a/tests/snapshots.rs
+++ b/tests/snapshots.rs
@@ -527,6 +527,10 @@ fn convert_wgsl() {
"binding-arrays",
Targets::WGSL | Targets::HLSL | Targets::METAL | Targets::SPIRV,
),
+ (
+ "break-if",
+ Targets::WGSL | Targets::GLSL | Targets::SPIRV | Targets::HLSL | Targets::METAL,
+ ),
];
for &(name, targets) in inputs.iter() {
diff --git a/tests/wgsl-errors.rs b/tests/wgsl-errors.rs
--- a/tests/wgsl-errors.rs
+++ b/tests/wgsl-errors.rs
@@ -1556,3 +1556,44 @@ fn host_shareable_types() {
}
}
}
+
+#[test]
+fn misplaced_break_if() {
+ check(
+ "
+ fn test_misplaced_break_if() {
+ loop {
+ break if true;
+ }
+ }
+ ",
+ r###"error: A break if is only allowed in a continuing block
+ ┌─ wgsl:4:17
+ │
+4 │ break if true;
+ │ ^^^^^^^^ not in a continuing block
+
+"###,
+ );
+}
+
+#[test]
+fn break_if_bad_condition() {
+ check_validation! {
+ "
+ fn test_break_if_bad_condition() {
+ loop {
+ continuing {
+ break if 1;
+ }
+ }
+ }
+ ":
+ Err(
+ naga::valid::ValidationError::Function {
+ error: naga::valid::FunctionError::InvalidIfType(_),
+ ..
+ },
+ )
+ }
+}
| [wgsl-in] Add break-if as optional at end of continuing
Spec PR: https://github.com/gpuweb/gpuweb/pull/2618
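For reference, the WGSL syntax this adds places `break if` as the last statement of a `continuing` block, as exercised by `tests/in/break-if.wgsl` in the test patch. A minimal illustrative loop (the function name and condition are made up for this sketch):

```wgsl
fn countdown() {
    var i: i32 = 3;
    loop {
        i = i - 1;
        continuing {
            // `break if` must be the final statement of the continuing block
            break if i == 0;
        }
    }
}
```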
| 2022-06-17T23:05:15 | 0.8 | ea832a9eec13560560c017072d4318d5d942e5e5 | [
"convert_wgsl"
] | [
"back::msl::test_error_size",
"arena::tests::append_unique",
"arena::tests::append_non_unique",
"arena::tests::fetch_or_append_non_unique",
"arena::tests::fetch_or_append_unique",
"back::spv::layout::test_logical_layout_in_words",
"back::spv::layout::test_physical_layout_in_words",
"back::spv::writer:... | [] | [] | 78 | |
gfx-rs/naga | 1,917 | gfx-rs__naga-1917 | [
"1896"
] | ab2806e05fbd69a502b4b25a85f99ae4d3e82278 | diff --git a/src/front/wgsl/construction.rs b/src/front/wgsl/construction.rs
--- a/src/front/wgsl/construction.rs
+++ b/src/front/wgsl/construction.rs
@@ -184,12 +184,15 @@ fn parse_constructor_type<'a>(
Ok(Some(ConstructorType::Vector { size, kind, width }))
}
(Token::Paren('<'), ConstructorType::PartialMatrix { columns, rows }) => {
- let (_, width) = lexer.next_scalar_generic()?;
- Ok(Some(ConstructorType::Matrix {
- columns,
- rows,
- width,
- }))
+ let (kind, width, span) = lexer.next_scalar_generic_with_span()?;
+ match kind {
+ ScalarKind::Float => Ok(Some(ConstructorType::Matrix {
+ columns,
+ rows,
+ width,
+ })),
+ _ => Err(Error::BadMatrixScalarKind(span, kind, width)),
+ }
}
(Token::Paren('<'), ConstructorType::PartialArray) => {
lexer.expect_generic_paren('<')?;
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -130,6 +130,7 @@ pub enum Error<'a> {
BadFloat(Span, BadFloatError),
BadU32Constant(Span),
BadScalarWidth(Span, Bytes),
+ BadMatrixScalarKind(Span, crate::ScalarKind, u8),
BadAccessor(Span),
BadTexture(Span),
BadTypeCast {
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -296,12 +297,20 @@ impl<'a> Error<'a> {
labels: vec![(bad_span.clone(), "expected unsigned integer".into())],
notes: vec![],
},
-
Error::BadScalarWidth(ref bad_span, width) => ParseError {
message: format!("invalid width of `{}` bits for literal", width as u32 * 8,),
labels: vec![(bad_span.clone(), "invalid width".into())],
notes: vec!["the only valid width is 32 for now".to_string()],
},
+ Error::BadMatrixScalarKind(
+ ref span,
+ kind,
+ width,
+ ) => ParseError {
+ message: format!("matrix scalar type must be floating-point, but found `{}`", kind.to_wgsl(width)),
+ labels: vec![(span.clone(), "must be floating-point (e.g. `f32`)".into())],
+ notes: vec![],
+ },
Error::BadAccessor(ref accessor_span) => ParseError {
message: format!(
"invalid field accessor `{}`",
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -2901,6 +2910,23 @@ impl Parser {
Ok((members, span))
}
+ fn parse_matrix_scalar_type<'a>(
+ &mut self,
+ lexer: &mut Lexer<'a>,
+ columns: crate::VectorSize,
+ rows: crate::VectorSize,
+ ) -> Result<crate::TypeInner, Error<'a>> {
+ let (kind, width, span) = lexer.next_scalar_generic_with_span()?;
+ match kind {
+ crate::ScalarKind::Float => Ok(crate::TypeInner::Matrix {
+ columns,
+ rows,
+ width,
+ }),
+ _ => Err(Error::BadMatrixScalarKind(span, kind, width)),
+ }
+ }
+
fn parse_type_decl_impl<'a>(
&mut self,
lexer: &mut Lexer<'a>,
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -2912,6 +2938,7 @@ impl Parser {
if let Some((kind, width)) = conv::get_scalar_type(word) {
return Ok(Some(crate::TypeInner::Scalar { kind, width }));
}
+
Ok(Some(match word {
"vec2" => {
let (kind, width) = lexer.next_scalar_generic()?;
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -2938,77 +2965,44 @@ impl Parser {
}
}
"mat2x2" => {
- let (_, width) = lexer.next_scalar_generic()?;
- crate::TypeInner::Matrix {
- columns: crate::VectorSize::Bi,
- rows: crate::VectorSize::Bi,
- width,
- }
+ self.parse_matrix_scalar_type(lexer, crate::VectorSize::Bi, crate::VectorSize::Bi)?
}
"mat2x3" => {
- let (_, width) = lexer.next_scalar_generic()?;
- crate::TypeInner::Matrix {
- columns: crate::VectorSize::Bi,
- rows: crate::VectorSize::Tri,
- width,
- }
- }
- "mat2x4" => {
- let (_, width) = lexer.next_scalar_generic()?;
- crate::TypeInner::Matrix {
- columns: crate::VectorSize::Bi,
- rows: crate::VectorSize::Quad,
- width,
- }
+ self.parse_matrix_scalar_type(lexer, crate::VectorSize::Bi, crate::VectorSize::Tri)?
}
+ "mat2x4" => self.parse_matrix_scalar_type(
+ lexer,
+ crate::VectorSize::Bi,
+ crate::VectorSize::Quad,
+ )?,
"mat3x2" => {
- let (_, width) = lexer.next_scalar_generic()?;
- crate::TypeInner::Matrix {
- columns: crate::VectorSize::Tri,
- rows: crate::VectorSize::Bi,
- width,
- }
- }
- "mat3x3" => {
- let (_, width) = lexer.next_scalar_generic()?;
- crate::TypeInner::Matrix {
- columns: crate::VectorSize::Tri,
- rows: crate::VectorSize::Tri,
- width,
- }
- }
- "mat3x4" => {
- let (_, width) = lexer.next_scalar_generic()?;
- crate::TypeInner::Matrix {
- columns: crate::VectorSize::Tri,
- rows: crate::VectorSize::Quad,
- width,
- }
- }
- "mat4x2" => {
- let (_, width) = lexer.next_scalar_generic()?;
- crate::TypeInner::Matrix {
- columns: crate::VectorSize::Quad,
- rows: crate::VectorSize::Bi,
- width,
- }
- }
- "mat4x3" => {
- let (_, width) = lexer.next_scalar_generic()?;
- crate::TypeInner::Matrix {
- columns: crate::VectorSize::Quad,
- rows: crate::VectorSize::Tri,
- width,
- }
- }
- "mat4x4" => {
- let (_, width) = lexer.next_scalar_generic()?;
- crate::TypeInner::Matrix {
- columns: crate::VectorSize::Quad,
- rows: crate::VectorSize::Quad,
- width,
- }
+ self.parse_matrix_scalar_type(lexer, crate::VectorSize::Tri, crate::VectorSize::Bi)?
}
+ "mat3x3" => self.parse_matrix_scalar_type(
+ lexer,
+ crate::VectorSize::Tri,
+ crate::VectorSize::Tri,
+ )?,
+ "mat3x4" => self.parse_matrix_scalar_type(
+ lexer,
+ crate::VectorSize::Tri,
+ crate::VectorSize::Quad,
+ )?,
+ "mat4x2" => self.parse_matrix_scalar_type(
+ lexer,
+ crate::VectorSize::Quad,
+ crate::VectorSize::Bi,
+ )?,
+ "mat4x3" => self.parse_matrix_scalar_type(
+ lexer,
+ crate::VectorSize::Quad,
+ crate::VectorSize::Tri,
+ )?,
+ "mat4x4" => self.parse_matrix_scalar_type(
+ lexer,
+ crate::VectorSize::Quad,
+ crate::VectorSize::Quad,
+ )?,
"atomic" => {
let (kind, width) = lexer.next_scalar_generic()?;
crate::TypeInner::Atomic { kind, width }
| diff --git a/tests/wgsl-errors.rs b/tests/wgsl-errors.rs
--- a/tests/wgsl-errors.rs
+++ b/tests/wgsl-errors.rs
@@ -810,6 +810,39 @@ fn module_scope_identifier_redefinition() {
);
}
+#[test]
+fn matrix_with_bad_type() {
+ check(
+ r#"
+ fn main() {
+ let m = mat2x2<i32>();
+ }
+ "#,
+ r#"error: matrix scalar type must be floating-point, but found `i32`
+ ┌─ wgsl:3:32
+ │
+3 │ let m = mat2x2<i32>();
+ │ ^^^ must be floating-point (e.g. `f32`)
+
+"#,
+ );
+
+ check(
+ r#"
+ fn main() {
+ let m: mat3x3<i32>;
+ }
+ "#,
+ r#"error: matrix scalar type must be floating-point, but found `i32`
+ ┌─ wgsl:3:31
+ │
+3 │ let m: mat3x3<i32>;
+ │ ^^^ must be floating-point (e.g. `f32`)
+
+"#,
+ );
+}
+
/// Check the result of validating a WGSL program against a pattern.
///
/// Unless you are generating code programmatically, the
| [wgsl-in] Error on matrices with non float scalar kinds
The following works (on naga master cf32c2b):
````wgsl
@group(0) @binding(0)
var<storage, write> output_0: vec3<f32>;
@compute @workgroup_size(1)
fn main(@builtin(global_invocation_id) global_id: vec3<u32>) {
let zero: f32 = 0.0;
let zero_vec = vec3<f32>(zero,zero,zero);
let m = mat3x3<f32>(zero_vec, zero_vec, zero_vec);
}
````
````bash
% cargo run -- ./test.wgsl
Finished dev [unoptimized + debuginfo] target(s) in 0.01s
Running `target/debug/naga ./test.wgsl`
Validation successful
````
The following shows an error (`f32` is replaced with `i32`):
````wgsl
@group(0) @binding(0)
var<storage, write> output_0: vec3<i32>;
@compute @workgroup_size(1)
fn main(@builtin(global_invocation_id) global_id: vec3<u32>) {
let zero: i32 = 0;
let zero_vec = vec3<i32>(zero,zero,zero);
let m = mat3x3<i32>(zero_vec, zero_vec, zero_vec);
}
````
````bash
% cargo run -- ./test.wgsl
Finished dev [unoptimized + debuginfo] target(s) in 0.02s
Running `target/debug/naga ./test.wgsl`
[2022-05-09T19:06:29Z ERROR naga::valid::compose] Matrix component[0] type Handle([1])
error:
┌─ test.wgsl:8:9
│
8 │ let m = mat3x3<i32>(zero_vec, zero_vec, zero_vec);
│ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ naga::Expression [5]
Entry point main at Compute is invalid:
Expression [5] is invalid
Composing 0's component type is not expected
````
This seems... a bug, or is this just a counterintuitive aspect of the spec?
Use case: in [wonnx](https://github.com/webonnx/wonnx) we generate shaders using templates (e.g. [this one for generalized matrix multiplication](https://github.com/webonnx/wonnx/blob/master/wonnx/templates/matrix/gemm.wgsl)) to implement machine learning operators, which may use different data types (typically f32, i32 or i64); we would like the shaders to work on all of them. We often need to construct zero values (or other statically initialized values).
According to the latest version of the WGSL spec, only floating-point matrices are allowed.
See https://gpuweb.github.io/gpuweb/wgsl/#matrix-types
OK, thanks, that makes sense (I guess there's probably some platform that doesn't support it and WGSL therefore restricts to floats).
I would then suggest improving the error messaging here (maybe it should throw an error even when just naming the type `mat3x3<i32>`, as that can never be used?).
Looks like we are currently ignoring the `ScalarKind` returned by `next_scalar_generic()`.
This could be handled by introducing a new error type `BadScalarKind` (akin to the existing `BadScalarWidth`): if we encounter any scalar kind other than `ScalarKind::Float`, return the `BadScalarKind` error.
https://github.com/gfx-rs/naga/blob/cf32c2b7f38c985e1c770eeff05a91e0cd15ee04/src/front/wgsl/construction.rs#L187
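The check suggested above can be sketched as follows. This is a self-contained illustration, not naga's real code: `check_matrix_scalar` and the simplified enums are hypothetical stand-ins (the merged fix named the error `BadMatrixScalarKind` and also carries a span for diagnostics).

```rust
// Simplified stand-ins for naga's ScalarKind and wgsl::Error types.
#[derive(Debug, PartialEq)]
enum ScalarKind {
    Sint,
    Uint,
    Float,
    Bool,
}

#[derive(Debug, PartialEq)]
enum Error {
    // Span omitted for brevity; the real error also records one.
    BadScalarKind(ScalarKind, u8),
}

// Accept only floating-point scalars as matrix element types,
// rejecting everything else with the new error.
fn check_matrix_scalar(kind: ScalarKind, width: u8) -> Result<(), Error> {
    match kind {
        ScalarKind::Float => Ok(()),
        other => Err(Error::BadScalarKind(other, width)),
    }
}

fn main() {
    assert!(check_matrix_scalar(ScalarKind::Float, 4).is_ok());
    assert!(check_matrix_scalar(ScalarKind::Sint, 4).is_err());
    assert!(check_matrix_scalar(ScalarKind::Bool, 1).is_err());
    println!("matrix scalar checks behave as expected");
}
```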
https://github.com/gfx-rs/naga/blob/cf32c2b7f38c985e1c770eeff05a91e0cd15ee04/src/front/wgsl/mod.rs#L2940-L3011 | 2022-05-14T10:47:25 | 0.8 | ea832a9eec13560560c017072d4318d5d942e5e5 | [
"matrix_with_bad_type"
] | [
"arena::tests::fetch_or_append_non_unique",
"arena::tests::fetch_or_append_unique",
"arena::tests::append_unique",
"arena::tests::append_non_unique",
"back::msl::test_error_size",
"back::spv::layout::test_physical_layout_in_words",
"back::spv::layout::test_logical_layout_in_words",
"back::spv::writer:... | [] | [] | 79 |
gfx-rs/naga | 1,790 | gfx-rs__naga-1790 | [
"1774"
] | 012f2a6b2e9328e93939429705fdf9102d9f69b7 | diff --git /dev/null b/src/front/wgsl/construction.rs
new file mode 100644
--- /dev/null
+++ b/src/front/wgsl/construction.rs
@@ -0,0 +1,649 @@
+use crate::{
+ proc::TypeResolution, Arena, ArraySize, Bytes, Constant, ConstantInner, Expression, Handle,
+ ScalarKind, ScalarValue, Span as NagaSpan, Type, TypeInner, UniqueArena, VectorSize,
+};
+
+use super::{Error, ExpressionContext, Lexer, Parser, Scope, Span, Token};
+
+/// Represents the type of the constructor
+///
+/// Vectors, Matrices and Arrays can have partial type information
+/// which later gets inferred from the constructor parameters
+enum ConstructorType {
+ Scalar {
+ kind: ScalarKind,
+ width: Bytes,
+ },
+ PartialVector {
+ size: VectorSize,
+ },
+ Vector {
+ size: VectorSize,
+ kind: ScalarKind,
+ width: Bytes,
+ },
+ PartialMatrix {
+ columns: VectorSize,
+ rows: VectorSize,
+ },
+ Matrix {
+ columns: VectorSize,
+ rows: VectorSize,
+ width: Bytes,
+ },
+ PartialArray,
+ Array {
+ base: Handle<Type>,
+ size: ArraySize,
+ stride: u32,
+ },
+ Struct(Handle<Type>),
+}
+
+impl ConstructorType {
+ fn to_type_resolution(&self) -> Option<TypeResolution> {
+ Some(match *self {
+ ConstructorType::Scalar { kind, width } => {
+ TypeResolution::Value(TypeInner::Scalar { kind, width })
+ }
+ ConstructorType::Vector { size, kind, width } => {
+ TypeResolution::Value(TypeInner::Vector { size, kind, width })
+ }
+ ConstructorType::Matrix {
+ columns,
+ rows,
+ width,
+ } => TypeResolution::Value(TypeInner::Matrix {
+ columns,
+ rows,
+ width,
+ }),
+ ConstructorType::Array { base, size, stride } => {
+ TypeResolution::Value(TypeInner::Array { base, size, stride })
+ }
+ ConstructorType::Struct(handle) => TypeResolution::Handle(handle),
+ _ => return None,
+ })
+ }
+}
+
+impl ConstructorType {
+ fn to_error_string(&self, types: &UniqueArena<Type>, constants: &Arena<Constant>) -> String {
+ match *self {
+ ConstructorType::Scalar { kind, width } => kind.to_wgsl(width),
+ ConstructorType::PartialVector { size } => {
+ format!("vec{}<?>", size as u32,)
+ }
+ ConstructorType::Vector { size, kind, width } => {
+ format!("vec{}<{}>", size as u32, kind.to_wgsl(width))
+ }
+ ConstructorType::PartialMatrix { columns, rows } => {
+ format!("mat{}x{}<?>", columns as u32, rows as u32,)
+ }
+ ConstructorType::Matrix {
+ columns,
+ rows,
+ width,
+ } => {
+ format!(
+ "mat{}x{}<{}>",
+ columns as u32,
+ rows as u32,
+ ScalarKind::Float.to_wgsl(width)
+ )
+ }
+ ConstructorType::PartialArray => "array<?, ?>".to_string(),
+ ConstructorType::Array { base, size, .. } => {
+ format!(
+ "array<{}, {}>",
+ types[base].name.as_deref().unwrap_or("?"),
+ match size {
+ ArraySize::Constant(size) => {
+ constants[size]
+ .to_array_length()
+ .map(|len| len.to_string())
+ .unwrap_or_else(|| "?".to_string())
+ }
+ _ => unreachable!(),
+ }
+ )
+ }
+ ConstructorType::Struct(handle) => types[handle]
+ .name
+ .clone()
+ .unwrap_or_else(|| "?".to_string()),
+ }
+ }
+}
+
+fn parse_constructor_type<'a>(
+ parser: &mut Parser,
+ lexer: &mut Lexer<'a>,
+ word: &'a str,
+ type_arena: &mut UniqueArena<Type>,
+ const_arena: &mut Arena<Constant>,
+) -> Result<Option<ConstructorType>, Error<'a>> {
+ if let Some((kind, width)) = super::conv::get_scalar_type(word) {
+ return Ok(Some(ConstructorType::Scalar { kind, width }));
+ }
+
+ let partial = match word {
+ "vec2" => ConstructorType::PartialVector {
+ size: VectorSize::Bi,
+ },
+ "vec3" => ConstructorType::PartialVector {
+ size: VectorSize::Tri,
+ },
+ "vec4" => ConstructorType::PartialVector {
+ size: VectorSize::Quad,
+ },
+ "mat2x2" => ConstructorType::PartialMatrix {
+ columns: VectorSize::Bi,
+ rows: VectorSize::Bi,
+ },
+ "mat2x3" => ConstructorType::PartialMatrix {
+ columns: VectorSize::Bi,
+ rows: VectorSize::Tri,
+ },
+ "mat2x4" => ConstructorType::PartialMatrix {
+ columns: VectorSize::Bi,
+ rows: VectorSize::Quad,
+ },
+ "mat3x2" => ConstructorType::PartialMatrix {
+ columns: VectorSize::Tri,
+ rows: VectorSize::Bi,
+ },
+ "mat3x3" => ConstructorType::PartialMatrix {
+ columns: VectorSize::Tri,
+ rows: VectorSize::Tri,
+ },
+ "mat3x4" => ConstructorType::PartialMatrix {
+ columns: VectorSize::Tri,
+ rows: VectorSize::Quad,
+ },
+ "mat4x2" => ConstructorType::PartialMatrix {
+ columns: VectorSize::Quad,
+ rows: VectorSize::Bi,
+ },
+ "mat4x3" => ConstructorType::PartialMatrix {
+ columns: VectorSize::Quad,
+ rows: VectorSize::Tri,
+ },
+ "mat4x4" => ConstructorType::PartialMatrix {
+ columns: VectorSize::Quad,
+ rows: VectorSize::Quad,
+ },
+ "array" => ConstructorType::PartialArray,
+ _ => return Ok(None),
+ };
+
+ // parse component type if present
+ match (lexer.peek().0, partial) {
+ (Token::Paren('<'), ConstructorType::PartialVector { size }) => {
+ let (kind, width) = lexer.next_scalar_generic()?;
+ Ok(Some(ConstructorType::Vector { size, kind, width }))
+ }
+ (Token::Paren('<'), ConstructorType::PartialMatrix { columns, rows }) => {
+ let (_, width) = lexer.next_scalar_generic()?;
+ Ok(Some(ConstructorType::Matrix {
+ columns,
+ rows,
+ width,
+ }))
+ }
+ (Token::Paren('<'), ConstructorType::PartialArray) => {
+ lexer.expect_generic_paren('<')?;
+ let base = parser
+ .parse_type_decl(lexer, None, type_arena, const_arena)?
+ .0;
+ let size = if lexer.skip(Token::Separator(',')) {
+ let const_handle = parser.parse_const_expression(lexer, type_arena, const_arena)?;
+ ArraySize::Constant(const_handle)
+ } else {
+ ArraySize::Dynamic
+ };
+ lexer.expect_generic_paren('>')?;
+
+ let stride = {
+ parser.layouter.update(type_arena, const_arena).unwrap();
+ parser.layouter[base].to_stride()
+ };
+
+ Ok(Some(ConstructorType::Array { base, size, stride }))
+ }
+ (_, partial) => Ok(Some(partial)),
+ }
+}
+
+/// Expects [`Scope::PrimaryExpr`] scope on top; if returning Some(_), pops it.
+pub(super) fn parse_construction<'a>(
+ parser: &mut Parser,
+ lexer: &mut Lexer<'a>,
+ type_name: &'a str,
+ type_span: Span,
+ mut ctx: ExpressionContext<'a, '_, '_>,
+) -> Result<Option<Handle<Expression>>, Error<'a>> {
+ assert_eq!(
+ parser.scopes.last().map(|&(ref scope, _)| scope.clone()),
+ Some(Scope::PrimaryExpr)
+ );
+ let dst_ty = match parser.lookup_type.get(type_name) {
+ Some(&handle) => ConstructorType::Struct(handle),
+ None => match parse_constructor_type(parser, lexer, type_name, ctx.types, ctx.constants)? {
+ Some(inner) => inner,
+ None => {
+ match parser.parse_type_decl_impl(
+ lexer,
+ super::TypeAttributes::default(),
+ type_name,
+ ctx.types,
+ ctx.constants,
+ )? {
+ Some(_) => {
+ return Err(Error::TypeNotConstructible(type_span));
+ }
+ None => return Ok(None),
+ }
+ }
+ },
+ };
+
+ lexer.open_arguments()?;
+
+ let mut components = Vec::new();
+ let mut spans = Vec::new();
+
+ if lexer.peek().0 == Token::Paren(')') {
+ let _ = lexer.next();
+ } else {
+ while components.is_empty() || lexer.next_argument()? {
+ let (component, span) = lexer
+ .capture_span(|lexer| parser.parse_general_expression(lexer, ctx.reborrow()))?;
+ components.push(component);
+ spans.push(span);
+ }
+ }
+
+ enum Components<'a> {
+ None,
+ One {
+ component: Handle<Expression>,
+ span: Span,
+ ty: &'a TypeInner,
+ },
+ Many {
+ components: Vec<Handle<Expression>>,
+ spans: Vec<Span>,
+ first_component_ty: &'a TypeInner,
+ },
+ }
+
+ impl<'a> Components<'a> {
+ fn into_components_vec(self) -> Vec<Handle<Expression>> {
+ match self {
+ Components::None => vec![],
+ Components::One { component, .. } => vec![component],
+ Components::Many { components, .. } => components,
+ }
+ }
+ }
+
+ let components = match *components.as_slice() {
+ [] => Components::None,
+ [component] => {
+ ctx.resolve_type(component)?;
+ Components::One {
+ component,
+ span: spans[0].clone(),
+ ty: ctx.typifier.get(component, ctx.types),
+ }
+ }
+ [component, ..] => {
+ ctx.resolve_type(component)?;
+ Components::Many {
+ components,
+ spans,
+ first_component_ty: ctx.typifier.get(component, ctx.types),
+ }
+ }
+ };
+
+ let expr = match (components, dst_ty) {
+ // Empty constructor
+ (Components::None, dst_ty) => {
+ let ty = match dst_ty.to_type_resolution() {
+ Some(TypeResolution::Handle(handle)) => handle,
+ Some(TypeResolution::Value(inner)) => ctx
+ .types
+ .insert(Type { name: None, inner }, Default::default()),
+ None => return Err(Error::TypeNotInferrable(type_span)),
+ };
+
+ return match ctx.create_zero_value_constant(ty) {
+ Some(constant) => {
+ let span = parser.pop_scope(lexer);
+ Ok(Some(ctx.interrupt_emitter(
+ Expression::Constant(constant),
+ span.into(),
+ )))
+ }
+ None => Err(Error::TypeNotConstructible(type_span)),
+ };
+ }
+
+ // Scalar constructor & conversion (scalar -> scalar)
+ (
+ Components::One {
+ component,
+ ty: &TypeInner::Scalar { .. },
+ ..
+ },
+ ConstructorType::Scalar { kind, width },
+ ) => Expression::As {
+ expr: component,
+ kind,
+ convert: Some(width),
+ },
+
+ // Vector conversion (vector -> vector)
+ (
+ Components::One {
+ component,
+ ty: &TypeInner::Vector { size: src_size, .. },
+ ..
+ },
+ ConstructorType::Vector {
+ size: dst_size,
+ kind: dst_kind,
+ width: dst_width,
+ },
+ ) if dst_size == src_size => Expression::As {
+ expr: component,
+ kind: dst_kind,
+ convert: Some(dst_width),
+ },
+
+ // Matrix conversion (matrix -> matrix)
+ (
+ Components::One {
+ component,
+ ty:
+ &TypeInner::Matrix {
+ columns: src_columns,
+ rows: src_rows,
+ ..
+ },
+ ..
+ },
+ ConstructorType::Matrix {
+ columns: dst_columns,
+ rows: dst_rows,
+ width: dst_width,
+ },
+ ) if dst_columns == src_columns && dst_rows == src_rows => Expression::As {
+ expr: component,
+ kind: ScalarKind::Float,
+ convert: Some(dst_width),
+ },
+
+ // Vector constructor (splat) - infer type
+ (
+ Components::One {
+ component,
+ ty: &TypeInner::Scalar { .. },
+ ..
+ },
+ ConstructorType::PartialVector { size },
+ ) => Expression::Splat {
+ size,
+ value: component,
+ },
+
+ // Vector constructor (splat)
+ (
+ Components::One {
+ component,
+ ty:
+ &TypeInner::Scalar {
+ kind: src_kind,
+ width: src_width,
+ ..
+ },
+ ..
+ },
+ ConstructorType::Vector {
+ size,
+ kind: dst_kind,
+ width: dst_width,
+ },
+ ) if dst_kind == src_kind || dst_width == src_width => Expression::Splat {
+ size,
+ value: component,
+ },
+
+ // Vector constructor (by elements)
+ (
+ Components::Many {
+ components,
+ first_component_ty: &TypeInner::Scalar { kind, width },
+ ..
+ },
+ ConstructorType::PartialVector { size },
+ )
+ | (
+ Components::Many {
+ components,
+ first_component_ty: &TypeInner::Vector { kind, width, .. },
+ ..
+ },
+ ConstructorType::PartialVector { size },
+ )
+ | (
+ Components::Many {
+ components,
+ first_component_ty: &TypeInner::Scalar { .. },
+ ..
+ },
+ ConstructorType::Vector { size, width, kind },
+ )
+ | (
+ Components::Many {
+ components,
+ first_component_ty: &TypeInner::Vector { .. },
+ ..
+ },
+ ConstructorType::Vector { size, width, kind },
+ ) => {
+ let ty = ctx.types.insert(
+ Type {
+ name: None,
+ inner: TypeInner::Vector { size, kind, width },
+ },
+ Default::default(),
+ );
+ Expression::Compose { ty, components }
+ }
+
+ // Matrix constructor (by elements)
+ (
+ Components::Many {
+ components,
+ first_component_ty: &TypeInner::Scalar { width, .. },
+ ..
+ },
+ ConstructorType::PartialMatrix { columns, rows },
+ )
+ | (
+ Components::Many {
+ components,
+ first_component_ty: &TypeInner::Scalar { .. },
+ ..
+ },
+ ConstructorType::Matrix {
+ columns,
+ rows,
+ width,
+ },
+ ) => {
+ let vec_ty = ctx.types.insert(
+ Type {
+ name: None,
+ inner: TypeInner::Vector {
+ width,
+ kind: ScalarKind::Float,
+ size: rows,
+ },
+ },
+ Default::default(),
+ );
+
+ let components = components
+ .chunks(rows as usize)
+ .map(|vec_components| {
+ ctx.expressions.append(
+ Expression::Compose {
+ ty: vec_ty,
+ components: Vec::from(vec_components),
+ },
+ Default::default(),
+ )
+ })
+ .collect();
+
+ let ty = ctx.types.insert(
+ Type {
+ name: None,
+ inner: TypeInner::Matrix {
+ columns,
+ rows,
+ width,
+ },
+ },
+ Default::default(),
+ );
+ Expression::Compose { ty, components }
+ }
+
+ // Matrix constructor (by columns)
+ (
+ Components::Many {
+ components,
+ first_component_ty: &TypeInner::Vector { width, .. },
+ ..
+ },
+ ConstructorType::PartialMatrix { columns, rows },
+ )
+ | (
+ Components::Many {
+ components,
+ first_component_ty: &TypeInner::Vector { .. },
+ ..
+ },
+ ConstructorType::Matrix {
+ columns,
+ rows,
+ width,
+ },
+ ) => {
+ let ty = ctx.types.insert(
+ Type {
+ name: None,
+ inner: TypeInner::Matrix {
+ columns,
+ rows,
+ width,
+ },
+ },
+ Default::default(),
+ );
+ Expression::Compose { ty, components }
+ }
+
+ // Array constructor - infer type
+ (components, ConstructorType::PartialArray) => {
+ let components = components.into_components_vec();
+
+ let base = match ctx.typifier[components[0]].clone() {
+ TypeResolution::Handle(ty) => ty,
+ TypeResolution::Value(inner) => ctx
+ .types
+ .insert(Type { name: None, inner }, Default::default()),
+ };
+
+ let size = Constant {
+ name: None,
+ specialization: None,
+ inner: ConstantInner::Scalar {
+ width: 4,
+ value: ScalarValue::Uint(components.len() as u64),
+ },
+ };
+
+ let inner = TypeInner::Array {
+ base,
+ size: ArraySize::Constant(ctx.constants.append(size, Default::default())),
+ stride: {
+ parser.layouter.update(ctx.types, ctx.constants).unwrap();
+ parser.layouter[base].to_stride()
+ },
+ };
+
+ let ty = ctx
+ .types
+ .insert(Type { name: None, inner }, Default::default());
+
+ Expression::Compose { ty, components }
+ }
+
+ // Array constructor
+ (components, ConstructorType::Array { base, size, stride }) => {
+ let components = components.into_components_vec();
+ let inner = TypeInner::Array { base, size, stride };
+ let ty = ctx
+ .types
+ .insert(Type { name: None, inner }, Default::default());
+ Expression::Compose { ty, components }
+ }
+
+ // Struct constructor
+ (components, ConstructorType::Struct(ty)) => Expression::Compose {
+ ty,
+ components: components.into_components_vec(),
+ },
+
+ // ERRORS
+
+ // Bad conversion (type cast)
+ (
+ Components::One {
+ span, ty: src_ty, ..
+ },
+ dst_ty,
+ ) => {
+ return Err(Error::BadTypeCast {
+ span,
+ from_type: src_ty.to_wgsl(ctx.types, ctx.constants),
+ to_type: dst_ty.to_error_string(ctx.types, ctx.constants),
+ });
+ }
+
+ // Too many parameters for scalar constructor
+ (Components::Many { spans, .. }, ConstructorType::Scalar { .. }) => {
+ return Err(Error::UnexpectedComponents(Span {
+ start: spans[1].start,
+ end: spans.last().unwrap().end,
+ }));
+ }
+
+ // Parameters are of the wrong type for vector or matrix constructor
+ (Components::Many { spans, .. }, ConstructorType::Vector { .. })
+ | (Components::Many { spans, .. }, ConstructorType::Matrix { .. })
+ | (Components::Many { spans, .. }, ConstructorType::PartialVector { .. })
+ | (Components::Many { spans, .. }, ConstructorType::PartialMatrix { .. }) => {
+ return Err(Error::InvalidConstructorComponentType(spans[0].clone(), 0));
+ }
+ };
+
+ let span = NagaSpan::from(parser.pop_scope(lexer));
+ Ok(Some(ctx.expressions.append(expr, span)))
+}
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -4,6 +4,7 @@ Frontend for [WGSL][wgsl] (WebGPU Shading Language).
[wgsl]: https://gpuweb.github.io/gpuweb/wgsl.html
*/
+mod construction;
mod conv;
mod lexer;
mod number_literals;
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -124,6 +125,7 @@ pub enum BadFloatError {
#[derive(Clone, Debug)]
pub enum Error<'a> {
Unexpected(TokenSpan<'a>, ExpectedToken<'a>),
+ UnexpectedComponents(Span),
BadU32(Span, BadIntError),
BadI32(Span, BadIntError),
/// A negative signed integer literal where both signed and unsigned,
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -148,6 +150,7 @@ pub enum Error<'a> {
InvalidResolve(ResolveError),
InvalidForInitializer(Span),
InvalidGatherComponent(Span, i32),
+ InvalidConstructorComponentType(Span, i32),
ReservedIdentifierPrefix(Span),
UnknownAddressSpace(Span),
UnknownAttribute(Span),
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -162,6 +165,8 @@ pub enum Error<'a> {
ZeroSizeOrAlign(Span),
InconsistentBinding(Span),
UnknownLocalFunction(Span),
+ TypeNotConstructible(Span),
+ TypeNotInferrable(Span),
InitializationTypeMismatch(Span, String),
MissingType(Span),
MissingAttribute(&'static str, Span),
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -253,6 +258,11 @@ impl<'a> Error<'a> {
notes: vec![],
}
},
+ Error::UnexpectedComponents(ref bad_span) => ParseError {
+ message: "unexpected components".to_string(),
+ labels: vec![(bad_span.clone(), "unexpected components".into())],
+ notes: vec![],
+ },
Error::BadU32(ref bad_span, ref err) => ParseError {
message: format!(
"expected unsigned integer literal, found `{}`",
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -355,6 +365,11 @@ impl<'a> Error<'a> {
labels: vec![(bad_span.clone(), "invalid component".into())],
notes: vec![],
},
+ Error::InvalidConstructorComponentType(ref bad_span, component) => ParseError {
+ message: format!("invalid type for constructor component at index [{}]", component),
+ labels: vec![(bad_span.clone(), "invalid component type".into())],
+ notes: vec![],
+ },
Error::ReservedIdentifierPrefix(ref bad_span) => ParseError {
message: format!("Identifier starts with a reserved prefix: '{}'", &source[bad_span.clone()]),
labels: vec![(bad_span.clone(), "invalid identifier".into())],
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -415,6 +430,16 @@ impl<'a> Error<'a> {
labels: vec![(span.clone(), "unknown local function".into())],
notes: vec![],
},
+ Error::TypeNotConstructible(ref span) => ParseError {
+ message: format!("type `{}` is not constructible", &source[span.clone()]),
+ labels: vec![(span.clone(), "type is not constructible".into())],
+ notes: vec![],
+ },
+ Error::TypeNotInferrable(ref span) => ParseError {
+ message: "type can't be inferred".to_string(),
+ labels: vec![(span.clone(), "type can't be inferred".into())],
+ notes: vec![],
+ },
Error::InitializationTypeMismatch(ref name_span, ref expected_ty) => ParseError {
message: format!("the type of `{}` is expected to be `{}`", &source[name_span.clone()], expected_ty),
labels: vec![(name_span.clone(), format!("definition of `{}`", &source[name_span.clone()]).into())],
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -969,6 +994,98 @@ impl<'a> ExpressionContext<'a, '_, '_> {
expr.handle
}
}
+
+ /// Creates a zero value constant of type `ty`
+ ///
+ /// Returns `None` if the given `ty` is not a constructible type
+ fn create_zero_value_constant(
+ &mut self,
+ ty: Handle<crate::Type>,
+ ) -> Option<Handle<crate::Constant>> {
+ let inner = match self.types[ty].inner {
+ crate::TypeInner::Scalar { kind, width } => {
+ let value = match kind {
+ crate::ScalarKind::Sint => crate::ScalarValue::Sint(0),
+ crate::ScalarKind::Uint => crate::ScalarValue::Uint(0),
+ crate::ScalarKind::Float => crate::ScalarValue::Float(0.),
+ crate::ScalarKind::Bool => crate::ScalarValue::Bool(false),
+ };
+ crate::ConstantInner::Scalar { width, value }
+ }
+ crate::TypeInner::Vector { size, kind, width } => {
+ let scalar_ty = self.types.insert(
+ crate::Type {
+ name: None,
+ inner: crate::TypeInner::Scalar { width, kind },
+ },
+ Default::default(),
+ );
+ let component = self.create_zero_value_constant(scalar_ty);
+ crate::ConstantInner::Composite {
+ ty,
+ components: (0..size as u8).map(|_| component).collect::<Option<_>>()?,
+ }
+ }
+ crate::TypeInner::Matrix {
+ columns,
+ rows,
+ width,
+ } => {
+ let vec_ty = self.types.insert(
+ crate::Type {
+ name: None,
+ inner: crate::TypeInner::Vector {
+ width,
+ kind: crate::ScalarKind::Float,
+ size: rows,
+ },
+ },
+ Default::default(),
+ );
+ let component = self.create_zero_value_constant(vec_ty);
+ crate::ConstantInner::Composite {
+ ty,
+ components: (0..columns as u8)
+ .map(|_| component)
+ .collect::<Option<_>>()?,
+ }
+ }
+ crate::TypeInner::Array {
+ base,
+ size: crate::ArraySize::Constant(size),
+ ..
+ } => {
+ let component = self.create_zero_value_constant(base);
+ crate::ConstantInner::Composite {
+ ty,
+ components: (0..self.constants[size].to_array_length().unwrap())
+ .map(|_| component)
+ .collect::<Option<_>>()?,
+ }
+ }
+ crate::TypeInner::Struct { ref members, .. } => {
+ let members = members.clone();
+ crate::ConstantInner::Composite {
+ ty,
+ components: members
+ .iter()
+ .map(|member| self.create_zero_value_constant(member.ty))
+ .collect::<Option<_>>()?,
+ }
+ }
+ _ => return None,
+ };
+
+ let constant = self.constants.fetch_or_append(
+ crate::Constant {
+ name: None,
+ specialization: None,
+ inner,
+ },
+ crate::Span::default(),
+ );
+ Some(constant)
+ }
}
/// A Naga [`Expression`] handle, with WGSL type information.
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -2044,158 +2161,6 @@ impl Parser {
}))
}
- /// Expects [`Scope::PrimaryExpr`] scope on top; if returning Some(_), pops it.
- fn parse_construction<'a>(
- &mut self,
- lexer: &mut Lexer<'a>,
- type_name: &'a str,
- mut ctx: ExpressionContext<'a, '_, '_>,
- ) -> Result<Option<Handle<crate::Expression>>, Error<'a>> {
- assert_eq!(
- self.scopes.last().map(|&(ref scope, _)| scope.clone()),
- Some(Scope::PrimaryExpr)
- );
- let ty_resolution = match self.lookup_type.get(type_name) {
- Some(&handle) => TypeResolution::Handle(handle),
- None => match self.parse_type_decl_impl(
- lexer,
- TypeAttributes::default(),
- type_name,
- ctx.types,
- ctx.constants,
- )? {
- Some(inner) => TypeResolution::Value(inner),
- None => return Ok(None),
- },
- };
-
- let mut components = Vec::new();
- let (last_component, arguments_span) = lexer.capture_span(|lexer| {
- lexer.open_arguments()?;
- let mut last_component = self.parse_general_expression(lexer, ctx.reborrow())?;
-
- while lexer.next_argument()? {
- components.push(last_component);
- last_component = self.parse_general_expression(lexer, ctx.reborrow())?;
- }
-
- Ok(last_component)
- })?;
-
- // We can't use the `TypeInner` returned by this because
- // `resolve_type` borrows context mutably.
- // Use it to insert into the right maps,
- // and then grab it again immutably.
- ctx.resolve_type(last_component)?;
-
- let expr = if components.is_empty()
- && ty_resolution.inner_with(ctx.types).scalar_kind().is_some()
- {
- match (
- ty_resolution.inner_with(ctx.types),
- ctx.typifier.get(last_component, ctx.types),
- ) {
- (
- &crate::TypeInner::Vector {
- size, kind, width, ..
- },
- &crate::TypeInner::Scalar {
- kind: arg_kind,
- width: arg_width,
- ..
- },
- ) if arg_kind == kind && arg_width == width => crate::Expression::Splat {
- size,
- value: last_component,
- },
- (
- &crate::TypeInner::Scalar { kind, width, .. },
- &crate::TypeInner::Scalar { .. },
- )
- | (
- &crate::TypeInner::Vector { kind, width, .. },
- &crate::TypeInner::Vector { .. },
- ) => crate::Expression::As {
- expr: last_component,
- kind,
- convert: Some(width),
- },
- (&crate::TypeInner::Matrix { width, .. }, &crate::TypeInner::Matrix { .. }) => {
- crate::Expression::As {
- expr: last_component,
- kind: crate::ScalarKind::Float,
- convert: Some(width),
- }
- }
- (to_type, from_type) => {
- return Err(Error::BadTypeCast {
- span: arguments_span,
- from_type: from_type.to_wgsl(ctx.types, ctx.constants),
- to_type: to_type.to_wgsl(ctx.types, ctx.constants),
- });
- }
- }
- } else {
- components.push(last_component);
- let mut compose_components = Vec::new();
-
- if let (
- &crate::TypeInner::Matrix {
- rows,
- width,
- columns,
- },
- &crate::TypeInner::Scalar {
- kind: crate::ScalarKind::Float,
- ..
- },
- ) = (
- ty_resolution.inner_with(ctx.types),
- ctx.typifier.get(last_component, ctx.types),
- ) {
- let vec_ty = ctx.types.insert(
- crate::Type {
- name: None,
- inner: crate::TypeInner::Vector {
- width,
- kind: crate::ScalarKind::Float,
- size: rows,
- },
- },
- Default::default(),
- );
-
- compose_components.reserve(columns as usize);
- for vec_components in components.chunks(rows as usize) {
- let handle = ctx.expressions.append(
- crate::Expression::Compose {
- ty: vec_ty,
- components: Vec::from(vec_components),
- },
- crate::Span::default(),
- );
- compose_components.push(handle);
- }
- } else {
- compose_components = components;
- }
-
- let ty = match ty_resolution {
- TypeResolution::Handle(handle) => handle,
- TypeResolution::Value(inner) => ctx
- .types
- .insert(crate::Type { name: None, inner }, Default::default()),
- };
- crate::Expression::Compose {
- ty,
- components: compose_components,
- }
- };
-
- let span = NagaSpan::from(self.pop_scope(lexer));
- Ok(Some(ctx.expressions.append(expr, span)))
- }
-
fn parse_const_expression_impl<'a>(
&mut self,
first_token_span: TokenSpan<'a>,
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -2325,7 +2290,13 @@ impl Parser {
TypedExpression::non_reference(expr)
} else {
let _ = lexer.next();
- if let Some(expr) = self.parse_construction(lexer, word, ctx.reborrow())? {
+ if let Some(expr) = construction::parse_construction(
+ self,
+ lexer,
+ word,
+ span.clone(),
+ ctx.reborrow(),
+ )? {
TypedExpression::non_reference(expr)
} else {
return Err(Error::UnknownIdent(span, word));
| diff --git a/tests/in/operators.wgsl b/tests/in/operators.wgsl
--- a/tests/in/operators.wgsl
+++ b/tests/in/operators.wgsl
@@ -58,6 +58,21 @@ fn constructors() -> f32 {
0.0, 0.0, 0.0, 1.0,
);
+ // zero value constructors
+ var _ = bool();
+ var _ = i32();
+ var _ = u32();
+ var _ = f32();
+ var _ = vec2<u32>();
+ var _ = mat2x2<f32>();
+ var _ = array<Foo, 3>();
+ var _ = Foo();
+
+ // constructors that infer their type from their parameters
+ var _ = vec2(0u);
+ var _ = mat2x2(vec2(0.), vec2(0.));
+ var _ = array(0, 1, 2, 3);
+
return foo.a.x;
}
diff --git a/tests/out/glsl/operators.main.Compute.glsl b/tests/out/glsl/operators.main.Compute.glsl
--- a/tests/out/glsl/operators.main.Compute.glsl
+++ b/tests/out/glsl/operators.main.Compute.glsl
@@ -43,11 +43,25 @@ vec3 bool_cast(vec3 x) {
float constructors() {
Foo foo = Foo(vec4(0.0), 0);
+ bool unnamed = false;
+ int unnamed_1 = 0;
+ uint unnamed_2 = 0u;
+ float unnamed_3 = 0.0;
+ uvec2 unnamed_4 = uvec2(0u, 0u);
+ mat2x2 unnamed_5 = mat2x2(vec2(0.0, 0.0), vec2(0.0, 0.0));
+ Foo unnamed_6[3] = Foo[3](Foo(vec4(0.0, 0.0, 0.0, 0.0), 0), Foo(vec4(0.0, 0.0, 0.0, 0.0), 0), Foo(vec4(0.0, 0.0, 0.0, 0.0), 0));
+ Foo unnamed_7 = Foo(vec4(0.0, 0.0, 0.0, 0.0), 0);
+ uvec2 unnamed_8 = uvec2(0u);
+ mat2x2 unnamed_9 = mat2x2(0.0);
+ int unnamed_10[4] = int[4](0, 0, 0, 0);
foo = Foo(vec4(1.0), 1);
mat2x2 mat2comp = mat2x2(vec2(1.0, 0.0), vec2(0.0, 1.0));
mat4x4 mat4comp = mat4x4(vec4(1.0, 0.0, 0.0, 0.0), vec4(0.0, 1.0, 0.0, 0.0), vec4(0.0, 0.0, 1.0, 0.0), vec4(0.0, 0.0, 0.0, 1.0));
- float _e39 = foo.a.x;
- return _e39;
+ unnamed_8 = uvec2(0u);
+ unnamed_9 = mat2x2(vec2(0.0), vec2(0.0));
+ unnamed_10 = int[4](0, 1, 2, 3);
+ float _e70 = foo.a.x;
+ return _e70;
}
void modulo() {
diff --git a/tests/out/hlsl/operators.hlsl b/tests/out/hlsl/operators.hlsl
--- a/tests/out/hlsl/operators.hlsl
+++ b/tests/out/hlsl/operators.hlsl
@@ -53,12 +53,29 @@ Foo ConstructFoo(float4 arg0, int arg1) {
float constructors()
{
Foo foo = (Foo)0;
+ bool unnamed = false;
+ int unnamed_1 = 0;
+ uint unnamed_2 = 0u;
+ float unnamed_3 = 0.0;
+ uint2 unnamed_4 = uint2(0u, 0u);
+ float2x2 unnamed_5 = float2x2(float2(0.0, 0.0), float2(0.0, 0.0));
+ Foo unnamed_6[3] = { { float4(0.0, 0.0, 0.0, 0.0), 0 }, { float4(0.0, 0.0, 0.0, 0.0), 0 }, { float4(0.0, 0.0, 0.0, 0.0), 0 } };
+ Foo unnamed_7 = { float4(0.0, 0.0, 0.0, 0.0), 0 };
+ uint2 unnamed_8 = (uint2)0;
+ float2x2 unnamed_9 = (float2x2)0;
+ int unnamed_10[4] = {(int)0,(int)0,(int)0,(int)0};
foo = ConstructFoo(float4(1.0.xxxx), 1);
float2x2 mat2comp = float2x2(float2(1.0, 0.0), float2(0.0, 1.0));
float4x4 mat4comp = float4x4(float4(1.0, 0.0, 0.0, 0.0), float4(0.0, 1.0, 0.0, 0.0), float4(0.0, 0.0, 1.0, 0.0), float4(0.0, 0.0, 0.0, 1.0));
- float _expr39 = foo.a.x;
- return _expr39;
+ unnamed_8 = uint2(0u.xx);
+ unnamed_9 = float2x2(float2(0.0.xx), float2(0.0.xx));
+ {
+ int _result[4]={ 0, 1, 2, 3 };
+ for(int _i=0; _i<4; ++_i) unnamed_10[_i] = _result[_i];
+ }
+ float _expr70 = foo.a.x;
+ return _expr70;
}
void modulo()
diff --git a/tests/out/msl/operators.msl b/tests/out/msl/operators.msl
--- a/tests/out/msl/operators.msl
+++ b/tests/out/msl/operators.msl
@@ -8,10 +8,22 @@ struct Foo {
metal::float4 a;
int b;
};
+struct type_12 {
+ Foo inner[3];
+};
+struct type_13 {
+ int inner[4u];
+};
constant metal::float4 v_f32_one = {1.0, 1.0, 1.0, 1.0};
constant metal::float4 v_f32_zero = {0.0, 0.0, 0.0, 0.0};
constant metal::float4 v_f32_half = {0.5, 0.5, 0.5, 0.5};
constant metal::int4 v_i32_one = {1, 1, 1, 1};
+constant metal::uint2 const_type_11_ = {0u, 0u};
+constant metal::float2 const_type_6_ = {0.0, 0.0};
+constant metal::float2x2 const_type_7_ = {const_type_6_, const_type_6_};
+constant metal::float4 const_type = {0.0, 0.0, 0.0, 0.0};
+constant Foo const_Foo = {const_type, 0};
+constant type_12 const_type_12_ = {const_Foo, const_Foo, const_Foo};
metal::float4 builtins(
) {
diff --git a/tests/out/msl/operators.msl b/tests/out/msl/operators.msl
--- a/tests/out/msl/operators.msl
+++ b/tests/out/msl/operators.msl
@@ -52,11 +64,25 @@ metal::float3 bool_cast(
float constructors(
) {
Foo foo;
+ bool unnamed = false;
+ int unnamed_1 = 0;
+ uint unnamed_2 = 0u;
+ float unnamed_3 = 0.0;
+ metal::uint2 unnamed_4 = const_type_11_;
+ metal::float2x2 unnamed_5 = const_type_7_;
+ type_12 unnamed_6 = const_type_12_;
+ Foo unnamed_7 = const_Foo;
+ metal::uint2 unnamed_8;
+ metal::float2x2 unnamed_9;
+ type_13 unnamed_10;
foo = Foo {metal::float4(1.0), 1};
metal::float2x2 mat2comp = metal::float2x2(metal::float2(1.0, 0.0), metal::float2(0.0, 1.0));
metal::float4x4 mat4comp = metal::float4x4(metal::float4(1.0, 0.0, 0.0, 0.0), metal::float4(0.0, 1.0, 0.0, 0.0), metal::float4(0.0, 0.0, 1.0, 0.0), metal::float4(0.0, 0.0, 0.0, 1.0));
- float _e39 = foo.a.x;
- return _e39;
+ unnamed_8 = metal::uint2(0u);
+ unnamed_9 = metal::float2x2(metal::float2(0.0), metal::float2(0.0));
+ for(int _i=0; _i<4; ++_i) unnamed_10.inner[_i] = type_13 {0, 1, 2, 3}.inner[_i];
+ float _e70 = foo.a.x;
+ return _e70;
}
void modulo(
diff --git a/tests/out/spv/operators.spvasm b/tests/out/spv/operators.spvasm
--- a/tests/out/spv/operators.spvasm
+++ b/tests/out/spv/operators.spvasm
@@ -1,14 +1,16 @@
; SPIR-V
; Version: 1.1
; Generator: rspirv
-; Bound: 180
+; Bound: 214
OpCapability Shader
%1 = OpExtInstImport "GLSL.std.450"
OpMemoryModel Logical GLSL450
-OpEntryPoint GLCompute %168 "main"
-OpExecutionMode %168 LocalSize 1 1 1
-OpMemberDecorate %23 0 Offset 0
-OpMemberDecorate %23 1 Offset 16
+OpEntryPoint GLCompute %202 "main"
+OpExecutionMode %202 LocalSize 1 1 1
+OpMemberDecorate %27 0 Offset 0
+OpMemberDecorate %27 1 Offset 16
+OpDecorate %32 ArrayStride 32
+OpDecorate %33 ArrayStride 4
%2 = OpTypeVoid
%4 = OpTypeFloat 32
%3 = OpConstant %4 1.0
diff --git a/tests/out/spv/operators.spvasm b/tests/out/spv/operators.spvasm
--- a/tests/out/spv/operators.spvasm
+++ b/tests/out/spv/operators.spvasm
@@ -26,208 +28,245 @@ OpMemberDecorate %23 1 Offset 16
%16 = OpConstant %4 4.0
%17 = OpConstant %8 5
%18 = OpConstant %8 2
-%19 = OpTypeVector %4 4
-%20 = OpTypeVector %8 4
-%21 = OpTypeVector %10 4
-%22 = OpTypeVector %4 3
-%23 = OpTypeStruct %19 %8
-%24 = OpTypeVector %4 2
-%25 = OpTypeMatrix %24 2
-%26 = OpTypeMatrix %19 4
-%27 = OpConstantComposite %19 %3 %3 %3 %3
-%28 = OpConstantComposite %19 %5 %5 %5 %5
-%29 = OpConstantComposite %19 %6 %6 %6 %6
-%30 = OpConstantComposite %20 %7 %7 %7 %7
-%33 = OpTypeFunction %19
-%74 = OpTypeFunction %8
-%81 = OpConstantNull %8
-%85 = OpTypeFunction %22 %22
-%87 = OpTypeVector %10 3
-%94 = OpTypePointer Function %23
-%97 = OpTypeFunction %4
-%109 = OpTypePointer Function %19
-%110 = OpTypePointer Function %4
-%112 = OpTypeInt 32 0
-%111 = OpConstant %112 0
-%117 = OpTypeFunction %2
-%121 = OpTypeVector %8 3
-%143 = OpTypePointer Function %8
-%32 = OpFunction %19 None %33
-%31 = OpLabel
-OpBranch %34
-%34 = OpLabel
-%35 = OpSelect %8 %9 %7 %11
-%37 = OpCompositeConstruct %21 %9 %9 %9 %9
-%36 = OpSelect %19 %37 %27 %28
-%38 = OpCompositeConstruct %21 %12 %12 %12 %12
-%39 = OpSelect %19 %38 %28 %27
-%40 = OpExtInst %19 %1 FMix %28 %27 %29
-%42 = OpCompositeConstruct %19 %13 %13 %13 %13
-%41 = OpExtInst %19 %1 FMix %28 %27 %42
-%43 = OpCompositeExtract %8 %30 0
-%44 = OpBitcast %4 %43
-%45 = OpBitcast %19 %30
-%46 = OpConvertFToS %20 %28
-%47 = OpCompositeConstruct %20 %35 %35 %35 %35
-%48 = OpIAdd %20 %47 %46
-%49 = OpConvertSToF %19 %48
-%50 = OpFAdd %19 %49 %36
-%51 = OpFAdd %19 %50 %40
-%52 = OpFAdd %19 %51 %41
-%53 = OpCompositeConstruct %19 %44 %44 %44 %44
-%54 = OpFAdd %19 %52 %53
-%55 = OpFAdd %19 %54 %45
-OpReturnValue %55
+%20 = OpTypeInt 32 0
+%19 = OpConstant %20 0
+%21 = OpConstant %8 3
+%22 = OpConstant %20 4
+%23 = OpTypeVector %4 4
+%24 = OpTypeVector %8 4
+%25 = OpTypeVector %10 4
+%26 = OpTypeVector %4 3
+%27 = OpTypeStruct %23 %8
+%28 = OpTypeVector %4 2
+%29 = OpTypeMatrix %28 2
+%30 = OpTypeMatrix %23 4
+%31 = OpTypeVector %20 2
+%32 = OpTypeArray %27 %21
+%33 = OpTypeArray %8 %22
+%34 = OpConstantComposite %23 %3 %3 %3 %3
+%35 = OpConstantComposite %23 %5 %5 %5 %5
+%36 = OpConstantComposite %23 %6 %6 %6 %6
+%37 = OpConstantComposite %24 %7 %7 %7 %7
+%38 = OpConstantComposite %31 %19 %19
+%39 = OpConstantComposite %28 %5 %5
+%40 = OpConstantComposite %29 %39 %39
+%41 = OpConstantComposite %23 %5 %5 %5 %5
+%42 = OpConstantComposite %27 %41 %11
+%43 = OpConstantComposite %32 %42 %42 %42
+%46 = OpTypeFunction %23
+%87 = OpTypeFunction %8
+%94 = OpConstantNull %8
+%98 = OpTypeFunction %26 %26
+%100 = OpTypeVector %10 3
+%107 = OpTypePointer Function %27
+%109 = OpTypePointer Function %10
+%111 = OpTypePointer Function %8
+%113 = OpTypePointer Function %20
+%115 = OpTypePointer Function %4
+%117 = OpTypePointer Function %31
+%119 = OpTypePointer Function %29
+%121 = OpTypePointer Function %32
+%126 = OpTypePointer Function %33
+%129 = OpTypeFunction %4
+%146 = OpTypePointer Function %23
+%147 = OpTypePointer Function %4
+%152 = OpTypeFunction %2
+%156 = OpTypeVector %8 3
+%45 = OpFunction %23 None %46
+%44 = OpLabel
+OpBranch %47
+%47 = OpLabel
+%48 = OpSelect %8 %9 %7 %11
+%50 = OpCompositeConstruct %25 %9 %9 %9 %9
+%49 = OpSelect %23 %50 %34 %35
+%51 = OpCompositeConstruct %25 %12 %12 %12 %12
+%52 = OpSelect %23 %51 %35 %34
+%53 = OpExtInst %23 %1 FMix %35 %34 %36
+%55 = OpCompositeConstruct %23 %13 %13 %13 %13
+%54 = OpExtInst %23 %1 FMix %35 %34 %55
+%56 = OpCompositeExtract %8 %37 0
+%57 = OpBitcast %4 %56
+%58 = OpBitcast %23 %37
+%59 = OpConvertFToS %24 %35
+%60 = OpCompositeConstruct %24 %48 %48 %48 %48
+%61 = OpIAdd %24 %60 %59
+%62 = OpConvertSToF %23 %61
+%63 = OpFAdd %23 %62 %49
+%64 = OpFAdd %23 %63 %53
+%65 = OpFAdd %23 %64 %54
+%66 = OpCompositeConstruct %23 %57 %57 %57 %57
+%67 = OpFAdd %23 %65 %66
+%68 = OpFAdd %23 %67 %58
+OpReturnValue %68
OpFunctionEnd
-%57 = OpFunction %19 None %33
-%56 = OpLabel
-OpBranch %58
-%58 = OpLabel
-%59 = OpCompositeConstruct %24 %14 %14
-%60 = OpCompositeConstruct %24 %3 %3
-%61 = OpFAdd %24 %60 %59
-%62 = OpCompositeConstruct %24 %15 %15
-%63 = OpFSub %24 %61 %62
-%64 = OpCompositeConstruct %24 %16 %16
-%65 = OpFDiv %24 %63 %64
-%66 = OpCompositeConstruct %20 %17 %17 %17 %17
-%67 = OpCompositeConstruct %20 %18 %18 %18 %18
-%68 = OpSMod %20 %66 %67
-%69 = OpVectorShuffle %19 %65 %65 0 1 0 1
-%70 = OpConvertSToF %19 %68
-%71 = OpFAdd %19 %69 %70
-OpReturnValue %71
+%70 = OpFunction %23 None %46
+%69 = OpLabel
+OpBranch %71
+%71 = OpLabel
+%72 = OpCompositeConstruct %28 %14 %14
+%73 = OpCompositeConstruct %28 %3 %3
+%74 = OpFAdd %28 %73 %72
+%75 = OpCompositeConstruct %28 %15 %15
+%76 = OpFSub %28 %74 %75
+%77 = OpCompositeConstruct %28 %16 %16
+%78 = OpFDiv %28 %76 %77
+%79 = OpCompositeConstruct %24 %17 %17 %17 %17
+%80 = OpCompositeConstruct %24 %18 %18 %18 %18
+%81 = OpSMod %24 %79 %80
+%82 = OpVectorShuffle %23 %78 %78 0 1 0 1
+%83 = OpConvertSToF %23 %81
+%84 = OpFAdd %23 %82 %83
+OpReturnValue %84
OpFunctionEnd
-%73 = OpFunction %8 None %74
-%72 = OpLabel
-OpBranch %75
-%75 = OpLabel
-%76 = OpLogicalNot %10 %9
-OpSelectionMerge %77 None
-OpBranchConditional %76 %78 %79
-%78 = OpLabel
+%86 = OpFunction %8 None %87
+%85 = OpLabel
+OpBranch %88
+%88 = OpLabel
+%89 = OpLogicalNot %10 %9
+OpSelectionMerge %90 None
+OpBranchConditional %89 %91 %92
+%91 = OpLabel
OpReturnValue %7
-%79 = OpLabel
-%80 = OpNot %8 %7
-OpReturnValue %80
-%77 = OpLabel
-OpReturnValue %81
+%92 = OpLabel
+%93 = OpNot %8 %7
+OpReturnValue %93
+%90 = OpLabel
+OpReturnValue %94
OpFunctionEnd
-%84 = OpFunction %22 None %85
-%83 = OpFunctionParameter %22
-%82 = OpLabel
-OpBranch %86
-%86 = OpLabel
-%88 = OpCompositeConstruct %22 %5 %5 %5
-%89 = OpFUnordNotEqual %87 %83 %88
-%90 = OpCompositeConstruct %22 %5 %5 %5
-%91 = OpCompositeConstruct %22 %3 %3 %3
-%92 = OpSelect %22 %89 %91 %90
-OpReturnValue %92
-OpFunctionEnd
-%96 = OpFunction %4 None %97
+%97 = OpFunction %26 None %98
+%96 = OpFunctionParameter %26
%95 = OpLabel
-%93 = OpVariable %94 Function
-OpBranch %98
-%98 = OpLabel
-%99 = OpCompositeConstruct %19 %3 %3 %3 %3
-%100 = OpCompositeConstruct %23 %99 %7
-OpStore %93 %100
-%101 = OpCompositeConstruct %24 %3 %5
-%102 = OpCompositeConstruct %24 %5 %3
-%103 = OpCompositeConstruct %25 %101 %102
-%104 = OpCompositeConstruct %19 %3 %5 %5 %5
-%105 = OpCompositeConstruct %19 %5 %3 %5 %5
-%106 = OpCompositeConstruct %19 %5 %5 %3 %5
-%107 = OpCompositeConstruct %19 %5 %5 %5 %3
-%108 = OpCompositeConstruct %26 %104 %105 %106 %107
-%113 = OpAccessChain %110 %93 %111 %111
-%114 = OpLoad %4 %113
-OpReturnValue %114
-OpFunctionEnd
-%116 = OpFunction %2 None %117
-%115 = OpLabel
-OpBranch %118
-%118 = OpLabel
-%119 = OpSMod %8 %7 %7
-%120 = OpFRem %4 %3 %3
-%122 = OpCompositeConstruct %121 %7 %7 %7
-%123 = OpCompositeConstruct %121 %7 %7 %7
-%124 = OpSMod %121 %122 %123
-%125 = OpCompositeConstruct %22 %3 %3 %3
-%126 = OpCompositeConstruct %22 %3 %3 %3
-%127 = OpFRem %22 %125 %126
-OpReturn
+OpBranch %99
+%99 = OpLabel
+%101 = OpCompositeConstruct %26 %5 %5 %5
+%102 = OpFUnordNotEqual %100 %96 %101
+%103 = OpCompositeConstruct %26 %5 %5 %5
+%104 = OpCompositeConstruct %26 %3 %3 %3
+%105 = OpSelect %26 %102 %104 %103
+OpReturnValue %105
OpFunctionEnd
-%129 = OpFunction %2 None %117
-%128 = OpLabel
+%128 = OpFunction %4 None %129
+%127 = OpLabel
+%123 = OpVariable %117 Function
+%118 = OpVariable %119 Function %40
+%112 = OpVariable %113 Function %19
+%106 = OpVariable %107 Function
+%124 = OpVariable %119 Function
+%120 = OpVariable %121 Function %43
+%114 = OpVariable %115 Function %5
+%108 = OpVariable %109 Function %12
+%125 = OpVariable %126 Function
+%122 = OpVariable %107 Function %42
+%116 = OpVariable %117 Function %38
+%110 = OpVariable %111 Function %11
OpBranch %130
%130 = OpLabel
-%131 = OpCompositeConstruct %19 %3 %5 %5 %5
-%132 = OpCompositeConstruct %19 %5 %3 %5 %5
-%133 = OpCompositeConstruct %19 %5 %5 %3 %5
-%134 = OpCompositeConstruct %19 %5 %5 %5 %3
-%135 = OpCompositeConstruct %26 %131 %132 %133 %134
-%136 = OpMatrixTimesScalar %26 %135 %14
+%131 = OpCompositeConstruct %23 %3 %3 %3 %3
+%132 = OpCompositeConstruct %27 %131 %7
+OpStore %106 %132
+%133 = OpCompositeConstruct %28 %3 %5
+%134 = OpCompositeConstruct %28 %5 %3
+%135 = OpCompositeConstruct %29 %133 %134
+%136 = OpCompositeConstruct %23 %3 %5 %5 %5
+%137 = OpCompositeConstruct %23 %5 %3 %5 %5
+%138 = OpCompositeConstruct %23 %5 %5 %3 %5
+%139 = OpCompositeConstruct %23 %5 %5 %5 %3
+%140 = OpCompositeConstruct %30 %136 %137 %138 %139
+%141 = OpCompositeConstruct %31 %19 %19
+OpStore %123 %141
+%142 = OpCompositeConstruct %28 %5 %5
+%143 = OpCompositeConstruct %28 %5 %5
+%144 = OpCompositeConstruct %29 %142 %143
+OpStore %124 %144
+%145 = OpCompositeConstruct %33 %11 %7 %18 %21
+OpStore %125 %145
+%148 = OpAccessChain %147 %106 %19 %19
+%149 = OpLoad %4 %148
+OpReturnValue %149
+OpFunctionEnd
+%151 = OpFunction %2 None %152
+%150 = OpLabel
+OpBranch %153
+%153 = OpLabel
+%154 = OpSMod %8 %7 %7
+%155 = OpFRem %4 %3 %3
+%157 = OpCompositeConstruct %156 %7 %7 %7
+%158 = OpCompositeConstruct %156 %7 %7 %7
+%159 = OpSMod %156 %157 %158
+%160 = OpCompositeConstruct %26 %3 %3 %3
+%161 = OpCompositeConstruct %26 %3 %3 %3
+%162 = OpFRem %26 %160 %161
+OpReturn
+OpFunctionEnd
+%164 = OpFunction %2 None %152
+%163 = OpLabel
+OpBranch %165
+%165 = OpLabel
+%166 = OpCompositeConstruct %23 %3 %5 %5 %5
+%167 = OpCompositeConstruct %23 %5 %3 %5 %5
+%168 = OpCompositeConstruct %23 %5 %5 %3 %5
+%169 = OpCompositeConstruct %23 %5 %5 %5 %3
+%170 = OpCompositeConstruct %30 %166 %167 %168 %169
+%171 = OpMatrixTimesScalar %30 %170 %14
OpReturn
OpFunctionEnd
-%138 = OpFunction %2 None %117
-%137 = OpLabel
-OpBranch %139
-%139 = OpLabel
-%140 = OpLogicalOr %10 %9 %12
-%141 = OpLogicalAnd %10 %9 %12
+%173 = OpFunction %2 None %152
+%172 = OpLabel
+OpBranch %174
+%174 = OpLabel
+%175 = OpLogicalOr %10 %9 %12
+%176 = OpLogicalAnd %10 %9 %12
OpReturn
OpFunctionEnd
-%145 = OpFunction %2 None %117
-%144 = OpLabel
-%142 = OpVariable %143 Function %7
-OpBranch %146
-%146 = OpLabel
-%147 = OpLoad %8 %142
-%148 = OpIAdd %8 %147 %7
-OpStore %142 %148
-%149 = OpLoad %8 %142
-%150 = OpISub %8 %149 %7
-OpStore %142 %150
-%151 = OpLoad %8 %142
-%152 = OpLoad %8 %142
-%153 = OpIMul %8 %151 %152
-OpStore %142 %153
-%154 = OpLoad %8 %142
-%155 = OpLoad %8 %142
-%156 = OpSDiv %8 %154 %155
-OpStore %142 %156
-%157 = OpLoad %8 %142
-%158 = OpSMod %8 %157 %7
-OpStore %142 %158
-%159 = OpLoad %8 %142
-%160 = OpBitwiseXor %8 %159 %11
-OpStore %142 %160
-%161 = OpLoad %8 %142
-%162 = OpBitwiseAnd %8 %161 %11
-OpStore %142 %162
-%163 = OpLoad %8 %142
-%164 = OpIAdd %8 %163 %7
-OpStore %142 %164
-%165 = OpLoad %8 %142
-%166 = OpISub %8 %165 %7
-OpStore %142 %166
+%179 = OpFunction %2 None %152
+%178 = OpLabel
+%177 = OpVariable %111 Function %7
+OpBranch %180
+%180 = OpLabel
+%181 = OpLoad %8 %177
+%182 = OpIAdd %8 %181 %7
+OpStore %177 %182
+%183 = OpLoad %8 %177
+%184 = OpISub %8 %183 %7
+OpStore %177 %184
+%185 = OpLoad %8 %177
+%186 = OpLoad %8 %177
+%187 = OpIMul %8 %185 %186
+OpStore %177 %187
+%188 = OpLoad %8 %177
+%189 = OpLoad %8 %177
+%190 = OpSDiv %8 %188 %189
+OpStore %177 %190
+%191 = OpLoad %8 %177
+%192 = OpSMod %8 %191 %7
+OpStore %177 %192
+%193 = OpLoad %8 %177
+%194 = OpBitwiseXor %8 %193 %11
+OpStore %177 %194
+%195 = OpLoad %8 %177
+%196 = OpBitwiseAnd %8 %195 %11
+OpStore %177 %196
+%197 = OpLoad %8 %177
+%198 = OpIAdd %8 %197 %7
+OpStore %177 %198
+%199 = OpLoad %8 %177
+%200 = OpISub %8 %199 %7
+OpStore %177 %200
OpReturn
OpFunctionEnd
-%168 = OpFunction %2 None %117
-%167 = OpLabel
-OpBranch %169
-%169 = OpLabel
-%170 = OpFunctionCall %19 %32
-%171 = OpFunctionCall %19 %57
-%172 = OpFunctionCall %8 %73
-%173 = OpVectorShuffle %22 %27 %27 0 1 2
-%174 = OpFunctionCall %22 %84 %173
-%175 = OpFunctionCall %4 %96
-%176 = OpFunctionCall %2 %116
-%177 = OpFunctionCall %2 %129
-%178 = OpFunctionCall %2 %138
-%179 = OpFunctionCall %2 %145
+%202 = OpFunction %2 None %152
+%201 = OpLabel
+OpBranch %203
+%203 = OpLabel
+%204 = OpFunctionCall %23 %45
+%205 = OpFunctionCall %23 %70
+%206 = OpFunctionCall %8 %86
+%207 = OpVectorShuffle %26 %34 %34 0 1 2
+%208 = OpFunctionCall %26 %97 %207
+%209 = OpFunctionCall %4 %128
+%210 = OpFunctionCall %2 %151
+%211 = OpFunctionCall %2 %164
+%212 = OpFunctionCall %2 %173
+%213 = OpFunctionCall %2 %179
OpReturn
OpFunctionEnd
\ No newline at end of file
diff --git a/tests/out/wgsl/operators.wgsl b/tests/out/wgsl/operators.wgsl
--- a/tests/out/wgsl/operators.wgsl
+++ b/tests/out/wgsl/operators.wgsl
@@ -40,12 +40,26 @@ fn bool_cast(x: vec3<f32>) -> vec3<f32> {
fn constructors() -> f32 {
var foo: Foo;
+ var unnamed: bool = false;
+ var unnamed_1: i32 = 0;
+ var unnamed_2: u32 = 0u;
+ var unnamed_3: f32 = 0.0;
+ var unnamed_4: vec2<u32> = vec2<u32>(0u, 0u);
+ var unnamed_5: mat2x2<f32> = mat2x2<f32>(vec2<f32>(0.0, 0.0), vec2<f32>(0.0, 0.0));
+ var unnamed_6: array<Foo,3> = array<Foo,3>(Foo(vec4<f32>(0.0, 0.0, 0.0, 0.0), 0), Foo(vec4<f32>(0.0, 0.0, 0.0, 0.0), 0), Foo(vec4<f32>(0.0, 0.0, 0.0, 0.0), 0));
+ var unnamed_7: Foo = Foo(vec4<f32>(0.0, 0.0, 0.0, 0.0), 0);
+ var unnamed_8: vec2<u32>;
+ var unnamed_9: mat2x2<f32>;
+ var unnamed_10: array<i32,4u>;
foo = Foo(vec4<f32>(1.0), 1);
let mat2comp = mat2x2<f32>(vec2<f32>(1.0, 0.0), vec2<f32>(0.0, 1.0));
let mat4comp = mat4x4<f32>(vec4<f32>(1.0, 0.0, 0.0, 0.0), vec4<f32>(0.0, 1.0, 0.0, 0.0), vec4<f32>(0.0, 0.0, 1.0, 0.0), vec4<f32>(0.0, 0.0, 0.0, 1.0));
- let _e39 = foo.a.x;
- return _e39;
+ unnamed_8 = vec2<u32>(0u);
+ unnamed_9 = mat2x2<f32>(vec2<f32>(0.0), vec2<f32>(0.0));
+ unnamed_10 = array<i32,4u>(0, 1, 2, 3);
+ let _e70 = foo.a.x;
+ return _e70;
}
fn modulo() {
diff --git a/tests/wgsl-errors.rs b/tests/wgsl-errors.rs
--- a/tests/wgsl-errors.rs
+++ b/tests/wgsl-errors.rs
@@ -157,10 +157,82 @@ fn bad_type_cast() {
}
"#,
r#"error: cannot cast a vec2<f32> to a i32
- ┌─ wgsl:3:27
+ ┌─ wgsl:3:28
│
3 │ return i32(vec2<f32>(0.0));
- │ ^^^^^^^^^^^^^^^^ cannot cast a vec2<f32> to a i32
+ │ ^^^^^^^^^^^^^^ cannot cast a vec2<f32> to a i32
+
+"#,
+ );
+}
+
+#[test]
+fn type_not_constructible() {
+ check(
+ r#"
+ fn x() {
+ var _ = atomic<i32>(0);
+ }
+ "#,
+ r#"error: type `atomic` is not constructible
+ ┌─ wgsl:3:25
+ │
+3 │ var _ = atomic<i32>(0);
+ │ ^^^^^^ type is not constructible
+
+"#,
+ );
+}
+
+#[test]
+fn type_not_inferrable() {
+ check(
+ r#"
+ fn x() {
+ var _ = vec2();
+ }
+ "#,
+ r#"error: type can't be inferred
+ ┌─ wgsl:3:25
+ │
+3 │ var _ = vec2();
+ │ ^^^^ type can't be inferred
+
+"#,
+ );
+}
+
+#[test]
+fn unexpected_constructor_parameters() {
+ check(
+ r#"
+ fn x() {
+ var _ = i32(0, 1);
+ }
+ "#,
+ r#"error: unexpected components
+ ┌─ wgsl:3:31
+ │
+3 │ var _ = i32(0, 1);
+ │ ^^ unexpected components
+
+"#,
+ );
+}
+
+#[test]
+fn constructor_parameter_type_mismatch() {
+ check(
+ r#"
+ fn x() {
+ var _ = mat2x2<f32>(array(0, 1), vec2(2, 3));
+ }
+ "#,
+ r#"error: invalid type for constructor component at index [0]
+ ┌─ wgsl:3:37
+ │
+3 │ var _ = mat2x2<f32>(array(0, 1), vec2(2, 3));
+ │ ^^^^^^^^^^^ invalid component type
"#,
);
| WGSL constructor doesn't check length
The following code produces an unhelpful error message:
```
@stage(vertex)
fn passthrough(@location(0) pos: vec2<f32>) -> @builtin(position) vec4<f32> {
return vec4<f32>(pos);
}
```
The error is:
```
[2022-03-13T21:53:32Z ERROR naga::valid::function] Returning Some(Vector { size: Bi, kind: Float, width: 4 }) where Some(Vector { size: Quad, kind: Float, width: 4 }) is expected
error:
┌─ triangle.wgsl:3:10
│
3 │ return vec4<f32>(pos);
│ ^^^^^^^^^^^^^^^ naga::Expression [2]
Entry point passthrough at Vertex is invalid:
The `return` value Some([2]) does not match the function return value
```
From the log output, it seems like the `4` in the type constructor expression `vec4<f32>(pos)` got ignored, and the expression is typed as a `vec2`, which doesn't match the return type.
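The tests added in this patch revolve around counting constructor components. A minimal sketch of that check, in Python for illustration only (the function and its shape are hypothetical, not naga's actual API): a vector constructor is valid if it is a single-scalar splat or if its arguments' component counts sum to the target size, so `vec4<f32>(pos)` with `pos: vec2<f32>` is rejected.

```python
# Hypothetical sketch, not naga's API: validate the component counts
# supplied to a vector constructor against the target vector size.
def check_vector_constructor(size, components):
    """size: target component count of the vector being built.
    components: per-argument component counts (1 for a scalar,
    2 for vec2, 3 for vec3, ...)."""
    # Single scalar argument is a splat, e.g. vec4<f32>(1.0).
    if len(components) == 1 and components[0] == 1:
        return True
    # Otherwise all supplied components must exactly fill the vector.
    return sum(components) == size

# vec4<f32>(pos) with pos: vec2<f32> -- the bug in the issue: must be rejected.
assert not check_vector_constructor(4, [2])
# vec4<f32>(pos, 0.0, 1.0) -- valid: 2 + 1 + 1 == 4.
assert check_vector_constructor(4, [2, 1, 1])
```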
| 2022-03-27T00:38:42 | 0.8 | ea832a9eec13560560c017072d4318d5d942e5e5 | [
"convert_wgsl",
"bad_type_cast",
"constructor_parameter_type_mismatch",
"type_not_constructible",
"type_not_inferrable",
"unexpected_constructor_parameters"
] | [
"arena::tests::fetch_or_append_non_unique",
"arena::tests::append_unique",
"back::msl::test_error_size",
"arena::tests::append_non_unique",
"arena::tests::fetch_or_append_unique",
"back::spv::layout::test_physical_layout_in_words",
"back::spv::layout::test_logical_layout_in_words",
"back::spv::writer:... | [
"back::msl::writer::test_stack_size"
] | [] | 80 | |
gfx-rs/naga | 1,787 | gfx-rs__naga-1787 | [
"1721"
] | 05f050fad485763bf7bf6e1ad4206007b52bc345 | diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -3743,6 +3743,40 @@ impl Parser {
Some(crate::Statement::Loop { body, continuing })
}
+ "while" => {
+ let _ = lexer.next();
+ let mut body = crate::Block::new();
+
+ let (condition, span) = lexer.capture_span(|lexer| {
+ emitter.start(context.expressions);
+ let condition = self.parse_general_expression(
+ lexer,
+ context.as_expression(&mut body, &mut emitter),
+ )?;
+ lexer.expect(Token::Paren('{'))?;
+ body.extend(emitter.finish(context.expressions));
+ Ok(condition)
+ })?;
+ let mut reject = crate::Block::new();
+ reject.push(crate::Statement::Break, NagaSpan::default());
+ body.push(
+ crate::Statement::If {
+ condition,
+ accept: crate::Block::new(),
+ reject,
+ },
+ NagaSpan::from(span),
+ );
+
+ while !lexer.skip(Token::Paren('}')) {
+ self.parse_statement(lexer, context.reborrow(), &mut body, false)?;
+ }
+
+ Some(crate::Statement::Loop {
+ body,
+ continuing: crate::Block::new(),
+ })
+ }
"for" => {
let _ = lexer.next();
lexer.expect(Token::Paren('('))?;
| diff --git a/src/front/wgsl/tests.rs b/src/front/wgsl/tests.rs
--- a/src/front/wgsl/tests.rs
+++ b/src/front/wgsl/tests.rs
@@ -262,6 +262,32 @@ fn parse_loop() {
",
)
.unwrap();
+ parse_str(
+ "
+ fn main() {
+ var found: bool = false;
+ var i: i32 = 0;
+ while !found {
+ if i == 10 {
+ found = true;
+ }
+
+ i = i + 1;
+ }
+ }
+ ",
+ )
+ .unwrap();
+ parse_str(
+ "
+ fn main() {
+ while true {
+ break;
+ }
+ }
+ ",
+ )
+ .unwrap();
parse_str(
"
fn main() {
diff --git a/src/front/wgsl/tests.rs b/src/front/wgsl/tests.rs
--- a/src/front/wgsl/tests.rs
+++ b/src/front/wgsl/tests.rs
@@ -418,7 +444,7 @@ fn parse_struct_instantiation() {
a: f32,
b: vec3<f32>,
};
-
+
@stage(fragment)
fn fs_main() {
var foo: Foo = Foo(0.0, vec3<f32>(0.0, 1.0, 42.0));
diff --git a/tests/in/collatz.wgsl b/tests/in/collatz.wgsl
--- a/tests/in/collatz.wgsl
+++ b/tests/in/collatz.wgsl
@@ -14,10 +14,7 @@ var<storage,read_write> v_indices: PrimeIndices;
fn collatz_iterations(n_base: u32) -> u32 {
var n = n_base;
var i: u32 = 0u;
- loop {
- if n <= 1u {
- break;
- }
+ while n > 1u {
if n % 2u == 0u {
n = n / 2u;
}
diff --git a/tests/out/hlsl/collatz.hlsl b/tests/out/hlsl/collatz.hlsl
--- a/tests/out/hlsl/collatz.hlsl
+++ b/tests/out/hlsl/collatz.hlsl
@@ -9,7 +9,8 @@ uint collatz_iterations(uint n_base)
n = n_base;
while(true) {
uint _expr5 = n;
- if ((_expr5 <= 1u)) {
+ if ((_expr5 > 1u)) {
+ } else {
break;
}
uint _expr8 = n;
diff --git a/tests/out/ir/collatz.ron b/tests/out/ir/collatz.ron
--- a/tests/out/ir/collatz.ron
+++ b/tests/out/ir/collatz.ron
@@ -125,7 +125,7 @@
),
Constant(2),
Binary(
- op: LessEqual,
+ op: Greater,
left: 6,
right: 7,
),
diff --git a/tests/out/ir/collatz.ron b/tests/out/ir/collatz.ron
--- a/tests/out/ir/collatz.ron
+++ b/tests/out/ir/collatz.ron
@@ -199,10 +199,10 @@
)),
If(
condition: 8,
- accept: [
+ accept: [],
+ reject: [
Break,
],
- reject: [],
),
Emit((
start: 8,
diff --git a/tests/out/msl/collatz.msl b/tests/out/msl/collatz.msl
--- a/tests/out/msl/collatz.msl
+++ b/tests/out/msl/collatz.msl
@@ -21,7 +21,8 @@ uint collatz_iterations(
n = n_base;
while(true) {
uint _e5 = n;
- if (_e5 <= 1u) {
+ if (_e5 > 1u) {
+ } else {
break;
}
uint _e8 = n;
diff --git a/tests/out/spv/collatz.spvasm b/tests/out/spv/collatz.spvasm
--- a/tests/out/spv/collatz.spvasm
+++ b/tests/out/spv/collatz.spvasm
@@ -57,9 +57,9 @@ OpLoopMerge %22 %24 None
OpBranch %23
%23 = OpLabel
%25 = OpLoad %4 %13
-%27 = OpULessThanEqual %26 %25 %5
+%27 = OpUGreaterThan %26 %25 %5
OpSelectionMerge %28 None
-OpBranchConditional %27 %29 %28
+OpBranchConditional %27 %28 %29
%29 = OpLabel
OpBranch %22
%28 = OpLabel
diff --git a/tests/out/wgsl/collatz.wgsl b/tests/out/wgsl/collatz.wgsl
--- a/tests/out/wgsl/collatz.wgsl
+++ b/tests/out/wgsl/collatz.wgsl
@@ -12,7 +12,8 @@ fn collatz_iterations(n_base: u32) -> u32 {
n = n_base;
loop {
let _e5 = n;
- if (_e5 <= 1u) {
+ if (_e5 > 1u) {
+ } else {
break;
}
let _e8 = n;
| While loop in WGSL
See https://github.com/gpuweb/gpuweb/pull/2590
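The patch above desugars `while cond { body }` into a `loop` whose first statement is an `If` with an empty `accept` block and a `reject` block containing `Break` — which is why the collatz outputs change from `if (_e5 <= 1u) { break; }` to `if (_e5 > 1u) { } else { break; }`. A sketch of that rewrite, using plain dicts as a stand-in for naga's IR statement types (the structure is illustrative, not the real types):

```python
# Desugar `while cond { body }` into the loop form the patch emits:
#   loop { if cond {} else { break }  ...body... }
def desugar_while(cond, body):
    # Guard: if the condition holds, fall through; otherwise break.
    guard = {"If": {"condition": cond, "accept": [], "reject": ["Break"]}}
    return {"Loop": {"body": [guard] + body, "continuing": []}}

stmt = desugar_while("n > 1u", ["n = n / 2u"])
assert stmt["Loop"]["body"][0]["If"]["accept"] == []
assert stmt["Loop"]["body"][0]["If"]["reject"] == ["Break"]
assert stmt["Loop"]["body"][1] == "n = n / 2u"
```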
| 2022-03-22T18:25:33 | 0.8 | ea832a9eec13560560c017072d4318d5d942e5e5 | [
"front::wgsl::tests::parse_loop",
"convert_wgsl"
] | [
"arena::tests::append_unique",
"arena::tests::fetch_or_append_non_unique",
"arena::tests::append_non_unique",
"arena::tests::fetch_or_append_unique",
"back::msl::test_error_size",
"back::spv::layout::test_logical_layout_in_words",
"back::spv::layout::test_physical_layout_in_words",
"front::glsl::const... | [
"back::msl::writer::test_stack_size"
] | [] | 81 | |
gfx-rs/naga | 1,402 | gfx-rs__naga-1402 | [
"1019"
] | dc8a41de04068b4766c1e74f9d77f765e815172a | diff --git a/src/valid/function.rs b/src/valid/function.rs
--- a/src/valid/function.rs
+++ b/src/valid/function.rs
@@ -135,6 +135,11 @@ bitflags::bitflags! {
}
}
+struct BlockInfo {
+ stages: ShaderStages,
+ finished: bool,
+}
+
struct BlockContext<'a> {
abilities: ControlFlowAbility,
info: &'a FunctionInfo,
diff --git a/src/valid/function.rs b/src/valid/function.rs
--- a/src/valid/function.rs
+++ b/src/valid/function.rs
@@ -326,7 +331,7 @@ impl super::Validator {
&mut self,
statements: &[crate::Statement],
context: &BlockContext,
- ) -> Result<ShaderStages, FunctionError> {
+ ) -> Result<BlockInfo, FunctionError> {
use crate::{Statement as S, TypeInner as Ti};
let mut finished = false;
let mut stages = ShaderStages::all();
diff --git a/src/valid/function.rs b/src/valid/function.rs
--- a/src/valid/function.rs
+++ b/src/valid/function.rs
@@ -345,7 +350,9 @@ impl super::Validator {
}
}
S::Block(ref block) => {
- stages &= self.validate_block(block, context)?;
+ let info = self.validate_block(block, context)?;
+ stages &= info.stages;
+ finished = info.finished;
}
S::If {
condition,
diff --git a/src/valid/function.rs b/src/valid/function.rs
--- a/src/valid/function.rs
+++ b/src/valid/function.rs
@@ -359,8 +366,8 @@ impl super::Validator {
} => {}
_ => return Err(FunctionError::InvalidIfType(condition)),
}
- stages &= self.validate_block(accept, context)?;
- stages &= self.validate_block(reject, context)?;
+ stages &= self.validate_block(accept, context)?.stages;
+ stages &= self.validate_block(reject, context)?.stages;
}
S::Switch {
selector,
diff --git a/src/valid/function.rs b/src/valid/function.rs
--- a/src/valid/function.rs
+++ b/src/valid/function.rs
@@ -385,9 +392,9 @@ impl super::Validator {
let sub_context =
context.with_abilities(pass_through_abilities | ControlFlowAbility::BREAK);
for case in cases {
- stages &= self.validate_block(&case.body, &sub_context)?;
+ stages &= self.validate_block(&case.body, &sub_context)?.stages;
}
- stages &= self.validate_block(default, &sub_context)?;
+ stages &= self.validate_block(default, &sub_context)?.stages;
}
S::Loop {
ref body,
diff --git a/src/valid/function.rs b/src/valid/function.rs
--- a/src/valid/function.rs
+++ b/src/valid/function.rs
@@ -397,18 +404,22 @@ impl super::Validator {
// because the continuing{} block inherits the scope
let base_expression_count = self.valid_expression_list.len();
let pass_through_abilities = context.abilities & ControlFlowAbility::RETURN;
- stages &= self.validate_block_impl(
- body,
- &context.with_abilities(
- pass_through_abilities
- | ControlFlowAbility::BREAK
- | ControlFlowAbility::CONTINUE,
- ),
- )?;
- stages &= self.validate_block_impl(
- continuing,
- &context.with_abilities(ControlFlowAbility::empty()),
- )?;
+ stages &= self
+ .validate_block_impl(
+ body,
+ &context.with_abilities(
+ pass_through_abilities
+ | ControlFlowAbility::BREAK
+ | ControlFlowAbility::CONTINUE,
+ ),
+ )?
+ .stages;
+ stages &= self
+ .validate_block_impl(
+ continuing,
+ &context.with_abilities(ControlFlowAbility::empty()),
+ )?
+ .stages;
for handle in self.valid_expression_list.drain(base_expression_count..) {
self.valid_expression_set.remove(handle.index());
}
diff --git a/src/valid/function.rs b/src/valid/function.rs
--- a/src/valid/function.rs
+++ b/src/valid/function.rs
@@ -593,20 +604,20 @@ impl super::Validator {
}
}
}
- Ok(stages)
+ Ok(BlockInfo { stages, finished })
}
fn validate_block(
&mut self,
statements: &[crate::Statement],
context: &BlockContext,
- ) -> Result<ShaderStages, FunctionError> {
+ ) -> Result<BlockInfo, FunctionError> {
let base_expression_count = self.valid_expression_list.len();
- let stages = self.validate_block_impl(statements, context)?;
+ let info = self.validate_block_impl(statements, context)?;
for handle in self.valid_expression_list.drain(base_expression_count..) {
self.valid_expression_set.remove(handle.index());
}
- Ok(stages)
+ Ok(info)
}
fn validate_local_var(
diff --git a/src/valid/function.rs b/src/valid/function.rs
--- a/src/valid/function.rs
+++ b/src/valid/function.rs
@@ -694,10 +705,12 @@ impl super::Validator {
}
if self.flags.contains(ValidationFlags::BLOCKS) {
- let stages = self.validate_block(
- &fun.body,
- &BlockContext::new(fun, module, &info, &mod_info.functions),
- )?;
+ let stages = self
+ .validate_block(
+ &fun.body,
+ &BlockContext::new(fun, module, &info, &mod_info.functions),
+ )?
+ .stages;
info.available_stages &= stages;
}
Ok(info)
| diff --git a/tests/wgsl-errors.rs b/tests/wgsl-errors.rs
--- a/tests/wgsl-errors.rs
+++ b/tests/wgsl-errors.rs
@@ -848,3 +848,34 @@ fn invalid_local_vars() {
if local_var_name == "not_okay"
}
}
+
+#[test]
+fn dead_code() {
+ check_validation_error! {
+ "
+ fn dead_code_after_if(condition: bool) -> i32 {
+ if (condition) {
+ return 1;
+ } else {
+ return 2;
+ }
+ return 3;
+ }
+ ":
+ Ok(_)
+ }
+ check_validation_error! {
+ "
+ fn dead_code_after_block() -> i32 {
+ {
+ return 1;
+ }
+ return 2;
+ }
+ ":
+ Err(naga::valid::ValidationError::Function {
+ error: naga::valid::FunctionError::InstructionsAfterReturn,
+ ..
+ })
+ }
+}
| Naga doesn't detect unreachable code
The following test, if added to `wgsl-errors.rs`, should fail, but it passes:
```
check_validation_error! {
"
fn dead_code_after_if(condition: bool) -> i32 {
if (condition) {
return 1;
} else {
return 2;
}
return 3;
}
",
"
fn dead_code_after_block() -> i32 {
{
return 1;
}
return 2;
}
":
Ok(_)
}
```
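The fix in the patch makes block validation return a `finished` flag alongside the shader stages, and has a nested plain block propagate that flag to its parent, so statements after a terminating `{ ... }` block trigger `InstructionsAfterReturn`. A compressed Python sketch of that propagation (names and error are illustrative, not naga's exact API):

```python
# Sketch: validate a block, tracking whether control flow already ended.
# A statement is "return" or a nested block represented as a list.
def validate_block(stmts):
    finished = False
    for s in stmts:
        if finished:
            # Anything after a terminating statement is dead code.
            raise ValueError("InstructionsAfterReturn")
        if s == "return":
            finished = True
        elif isinstance(s, list):
            # The fix: a nested block's termination propagates upward.
            finished = validate_block(s)
    return finished

# `{ return 1; } return 2;` -- must now be rejected.
try:
    validate_block([["return"], "return"])
    raised = False
except ValueError:
    raised = True
assert raised
```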
| 2021-09-20T23:46:59 | 0.6 | 2e7d629aefe6857ade4f96fe2c3dc1a09f0fa4db | [
"dead_code"
] | [
"arena::tests::append_unique",
"arena::tests::fetch_or_append_non_unique",
"back::msl::test_error_size",
"arena::tests::append_non_unique",
"arena::tests::fetch_or_append_unique",
"back::spv::layout::test_physical_layout_in_words",
"back::spv::layout::test_logical_layout_in_words",
"back::spv::writer:... | [
"back::msl::writer::test_stack_size"
] | [] | 82 | |
gfx-rs/naga | 1,390 | gfx-rs__naga-1390 | [
"1386"
] | d7ca7d43b987c17f553db2823d833646ff1d48e0 | diff --git a/src/front/wgsl/lexer.rs b/src/front/wgsl/lexer.rs
--- a/src/front/wgsl/lexer.rs
+++ b/src/front/wgsl/lexer.rs
@@ -336,7 +336,7 @@ fn consume_token(mut input: &str, generic: bool) -> (Token<'_>, &str) {
}
}
'0'..='9' => consume_number(input),
- 'a'..='z' | 'A'..='Z' | '_' => {
+ 'a'..='z' | 'A'..='Z' => {
let (word, rest) = consume_any(input, |c| c.is_ascii_alphanumeric() || c == '_');
(Token::Word(word), rest)
}
| diff --git a/src/front/wgsl/lexer.rs b/src/front/wgsl/lexer.rs
--- a/src/front/wgsl/lexer.rs
+++ b/src/front/wgsl/lexer.rs
@@ -655,6 +655,7 @@ fn test_tokens() {
);
sub_test("No¾", &[Token::Word("No"), Token::Unknown('¾')]);
sub_test("No好", &[Token::Word("No"), Token::Unknown('好')]);
+ sub_test("_No", &[Token::Unknown('_'), Token::Word("No")]);
sub_test("\"\u{2}ПЀ\u{0}\"", &[Token::String("\u{2}ПЀ\u{0}")]); // https://github.com/gfx-rs/naga/issues/90
}
| naga should reject identifiers beginning with underscore
This should fail to compile, but Naga allows it:
```
let _123 = 123;
```
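The one-character patch above tightens the lexer's identifier-start rule from `'a'..='z' | 'A'..='Z' | '_'` to letters only, so a leading `_` falls through to `Token::Unknown` (as the new `_No` lexer test asserts). The same rule as a regex sketch, in Python for illustration:

```python
import re

# Identifier must start with an ASCII letter; later characters may
# include digits and underscores. A leading `_` no longer matches.
IDENT = re.compile(r"[a-zA-Z][a-zA-Z0-9_]*")

def starts_identifier(src):
    """True if `src` begins with a valid identifier."""
    return IDENT.match(src) is not None

assert starts_identifier("No")
assert not starts_identifier("_123")  # rejected after the patch
assert not starts_identifier("_No")
```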
| 2021-09-18T04:35:00 | 0.6 | 2e7d629aefe6857ade4f96fe2c3dc1a09f0fa4db | [
"front::wgsl::lexer::test_tokens"
] | [
"arena::tests::fetch_or_append_unique",
"arena::tests::append_non_unique",
"arena::tests::fetch_or_append_non_unique",
"arena::tests::append_unique",
"back::msl::test_error_size",
"back::spv::layout::test_physical_layout_in_words",
"back::spv::writer::test_write_physical_layout",
"back::spv::layout::t... | [
"back::msl::writer::test_stack_size"
] | [] | 83 | |
gfx-rs/naga | 1,384 | gfx-rs__naga-1384 | [
"1382"
] | 73f9d072079246ccf119c4e27de2b624afa6e466 | diff --git a/src/back/wgsl/writer.rs b/src/back/wgsl/writer.rs
--- a/src/back/wgsl/writer.rs
+++ b/src/back/wgsl/writer.rs
@@ -51,56 +51,6 @@ enum Indirection {
Reference,
}
-/// Return the sort of indirection that `expr`'s plain form evaluates to.
-///
-/// An expression's 'plain form' is the most general rendition of that
-/// expression into WGSL, lacking `&` or `*` operators:
-///
-/// - The plain form of `LocalVariable(x)` is simply `x`, which is a reference
-/// to the local variable's storage.
-///
-/// - The plain form of `GlobalVariable(g)` is simply `g`, which is usually a
-/// reference to the global variable's storage. However, globals in the
-/// `Handle` storage class are immutable, and `GlobalVariable` expressions for
-/// those produce the value directly, not a pointer to it. Such
-/// `GlobalVariable` expressions are `Ordinary`.
-///
-/// - `Access` and `AccessIndex` are `Reference` when their `base` operand is a
-/// pointer. If they are applied directly to a composite value, they are
-/// `Ordinary`.
-///
-/// Note that `FunctionArgument` expressions are never `Reference`, even when
-/// the argument's type is `Pointer`. `FunctionArgument` always evaluates to the
-/// argument's value directly, so any pointer it produces is merely the value
-/// passed by the caller.
-fn plain_form_indirection(
- expr: Handle<crate::Expression>,
- module: &Module,
- func_ctx: &back::FunctionCtx<'_>,
-) -> Indirection {
- use crate::Expression as Ex;
- match func_ctx.expressions[expr] {
- Ex::LocalVariable(_) => Indirection::Reference,
- Ex::GlobalVariable(handle) => {
- let global = &module.global_variables[handle];
- match global.class {
- crate::StorageClass::Handle => Indirection::Ordinary,
- _ => Indirection::Reference,
- }
- }
- Ex::Access { base, .. } | Ex::AccessIndex { base, .. } => {
- let base_ty = func_ctx.info[base].ty.inner_with(&module.types);
- match *base_ty {
- crate::TypeInner::Pointer { .. } | crate::TypeInner::ValuePointer { .. } => {
- Indirection::Reference
- }
- _ => Indirection::Ordinary,
- }
- }
- _ => Indirection::Ordinary,
- }
-}
-
pub struct Writer<W> {
out: W,
names: crate::FastHashMap<NameKey, String>,
diff --git a/src/back/wgsl/writer.rs b/src/back/wgsl/writer.rs
--- a/src/back/wgsl/writer.rs
+++ b/src/back/wgsl/writer.rs
@@ -630,15 +580,64 @@ impl<W: Write> Writer<W> {
}
TypeInner::Pointer { base, class } => {
let (storage, maybe_access) = storage_class_str(class);
+ // Everything but `StorageClass::Handle` gives us a `storage` name, but
+ // Naga IR never produces pointers to handles, so it doesn't matter much
+ // how we write such a type. Just write it as the base type alone.
if let Some(class) = storage {
write!(self.out, "ptr<{}, ", class)?;
+ }
+ self.write_type(module, base)?;
+ if storage.is_some() {
if let Some(access) = maybe_access {
write!(self.out, ", {}", access)?;
}
+ write!(self.out, ">")?;
}
- self.write_type(module, base)?;
- if storage.is_some() {
+ }
+ TypeInner::ValuePointer {
+ size: None,
+ kind,
+ width: _,
+ class,
+ } => {
+ let (storage, maybe_access) = storage_class_str(class);
+ if let Some(class) = storage {
+ write!(self.out, "ptr<{}, {}", class, scalar_kind_str(kind))?;
+ if let Some(access) = maybe_access {
+ write!(self.out, ", {}", access)?;
+ }
+ write!(self.out, ">")?;
+ } else {
+ return Err(Error::Unimplemented(format!(
+ "ValuePointer to StorageClass::Handle {:?}",
+ inner
+ )));
+ }
+ }
+ TypeInner::ValuePointer {
+ size: Some(size),
+ kind,
+ width: _,
+ class,
+ } => {
+ let (storage, maybe_access) = storage_class_str(class);
+ if let Some(class) = storage {
+ write!(
+ self.out,
+ "ptr<{}, vec{}<{}>",
+ class,
+ back::vector_size_str(size),
+ scalar_kind_str(kind)
+ )?;
+ if let Some(access) = maybe_access {
+ write!(self.out, ", {}", access)?;
+ }
write!(self.out, ">")?;
+ } else {
+ return Err(Error::Unimplemented(format!(
+ "ValuePointer to StorageClass::Handle {:?}",
+ inner
+ )));
}
}
_ => {
diff --git a/src/back/wgsl/writer.rs b/src/back/wgsl/writer.rs
--- a/src/back/wgsl/writer.rs
+++ b/src/back/wgsl/writer.rs
@@ -976,6 +975,65 @@ impl<W: Write> Writer<W> {
Ok(())
}
+ /// Return the sort of indirection that `expr`'s plain form evaluates to.
+ ///
+ /// An expression's 'plain form' is the most general rendition of that
+ /// expression into WGSL, lacking `&` or `*` operators:
+ ///
+ /// - The plain form of `LocalVariable(x)` is simply `x`, which is a reference
+ /// to the local variable's storage.
+ ///
+ /// - The plain form of `GlobalVariable(g)` is simply `g`, which is usually a
+ /// reference to the global variable's storage. However, globals in the
+ /// `Handle` storage class are immutable, and `GlobalVariable` expressions for
+ /// those produce the value directly, not a pointer to it. Such
+ /// `GlobalVariable` expressions are `Ordinary`.
+ ///
+ /// - `Access` and `AccessIndex` are `Reference` when their `base` operand is a
+ /// pointer. If they are applied directly to a composite value, they are
+ /// `Ordinary`.
+ ///
+ /// Note that `FunctionArgument` expressions are never `Reference`, even when
+ /// the argument's type is `Pointer`. `FunctionArgument` always evaluates to the
+ /// argument's value directly, so any pointer it produces is merely the value
+ /// passed by the caller.
+ fn plain_form_indirection(
+ &self,
+ expr: Handle<crate::Expression>,
+ module: &Module,
+ func_ctx: &back::FunctionCtx<'_>,
+ ) -> Indirection {
+ use crate::Expression as Ex;
+
+ // Named expressions are `let` expressions, which apply the Load Rule,
+ // so if their type is a Naga pointer, then that must be a WGSL pointer
+ // as well.
+ if self.named_expressions.contains_key(&expr) {
+ return Indirection::Ordinary;
+ }
+
+ match func_ctx.expressions[expr] {
+ Ex::LocalVariable(_) => Indirection::Reference,
+ Ex::GlobalVariable(handle) => {
+ let global = &module.global_variables[handle];
+ match global.class {
+ crate::StorageClass::Handle => Indirection::Ordinary,
+ _ => Indirection::Reference,
+ }
+ }
+ Ex::Access { base, .. } | Ex::AccessIndex { base, .. } => {
+ let base_ty = func_ctx.info[base].ty.inner_with(&module.types);
+ match *base_ty {
+ crate::TypeInner::Pointer { .. } | crate::TypeInner::ValuePointer { .. } => {
+ Indirection::Reference
+ }
+ _ => Indirection::Ordinary,
+ }
+ }
+ _ => Indirection::Ordinary,
+ }
+ }
+
fn start_named_expr(
&mut self,
module: &Module,
diff --git a/src/back/wgsl/writer.rs b/src/back/wgsl/writer.rs
--- a/src/back/wgsl/writer.rs
+++ b/src/back/wgsl/writer.rs
@@ -1031,27 +1089,46 @@ impl<W: Write> Writer<W> {
func_ctx: &back::FunctionCtx<'_>,
requested: Indirection,
) -> BackendResult {
- use crate::Expression;
-
- if let Some(name) = self.named_expressions.get(&expr) {
- write!(self.out, "{}", name)?;
- return Ok(());
- }
-
// If the plain form of the expression is not what we need, emit the
// operator necessary to correct that.
- let plain = plain_form_indirection(expr, module, func_ctx);
- let opened_paren = match (requested, plain) {
+ let plain = self.plain_form_indirection(expr, module, func_ctx);
+ match (requested, plain) {
(Indirection::Ordinary, Indirection::Reference) => {
write!(self.out, "(&")?;
- true
+ self.write_expr_plain_form(module, expr, func_ctx, plain)?;
+ write!(self.out, ")")?;
}
(Indirection::Reference, Indirection::Ordinary) => {
write!(self.out, "(*")?;
- true
+ self.write_expr_plain_form(module, expr, func_ctx, plain)?;
+ write!(self.out, ")")?;
}
- (_, _) => false,
- };
+ (_, _) => self.write_expr_plain_form(module, expr, func_ctx, plain)?,
+ }
+
+ Ok(())
+ }
+
+ /// Write the 'plain form' of `expr`.
+ ///
+ /// An expression's 'plain form' is the most general rendition of that
+ /// expression into WGSL, lacking `&` or `*` operators. The plain forms of
+ /// `LocalVariable(x)` and `GlobalVariable(g)` are simply `x` and `g`. Such
+ /// Naga expressions represent both WGSL pointers and references; it's the
+ /// caller's responsibility to distinguish those cases appropriately.
+ fn write_expr_plain_form(
+ &mut self,
+ module: &Module,
+ expr: Handle<crate::Expression>,
+ func_ctx: &back::FunctionCtx<'_>,
+ indirection: Indirection,
+ ) -> BackendResult {
+ use crate::Expression;
+
+ if let Some(name) = self.named_expressions.get(&expr) {
+ write!(self.out, "{}", name)?;
+ return Ok(());
+ }
let expression = &func_ctx.expressions[expr];
diff --git a/src/back/wgsl/writer.rs b/src/back/wgsl/writer.rs
--- a/src/back/wgsl/writer.rs
+++ b/src/back/wgsl/writer.rs
@@ -1124,7 +1201,7 @@ impl<W: Write> Writer<W> {
}
// TODO: copy-paste from glsl-out
Expression::Access { base, index } => {
- self.write_expr_with_indirection(module, base, func_ctx, plain)?;
+ self.write_expr_with_indirection(module, base, func_ctx, indirection)?;
write!(self.out, "[")?;
self.write_expr(module, index, func_ctx)?;
write!(self.out, "]")?
diff --git a/src/back/wgsl/writer.rs b/src/back/wgsl/writer.rs
--- a/src/back/wgsl/writer.rs
+++ b/src/back/wgsl/writer.rs
@@ -1134,7 +1211,7 @@ impl<W: Write> Writer<W> {
let base_ty_res = &func_ctx.info[base].ty;
let mut resolved = base_ty_res.inner_with(&module.types);
- self.write_expr_with_indirection(module, base, func_ctx, plain)?;
+ self.write_expr_with_indirection(module, base, func_ctx, indirection)?;
let base_ty_handle = match *resolved {
TypeInner::Pointer { base, class: _ } => {
diff --git a/src/back/wgsl/writer.rs b/src/back/wgsl/writer.rs
--- a/src/back/wgsl/writer.rs
+++ b/src/back/wgsl/writer.rs
@@ -1561,10 +1638,6 @@ impl<W: Write> Writer<W> {
Expression::CallResult(_) | Expression::AtomicResult { .. } => {}
}
- if opened_paren {
- write!(self.out, ")")?
- };
-
Ok(())
}
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -3223,7 +3223,7 @@ impl Parser {
.resolve_type(expr_id)?;
let expr_inner = context.typifier.get(expr_id, context.types);
let given_inner = &context.types[ty].inner;
- if given_inner != expr_inner {
+ if !given_inner.equivalent(expr_inner, context.types) {
log::error!(
"Given type {:?} doesn't match expected {:?}",
given_inner,
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -3284,7 +3284,7 @@ impl Parser {
Some(ty) => {
let expr_inner = context.typifier.get(value, context.types);
let given_inner = &context.types[ty].inner;
- if given_inner != expr_inner {
+ if !given_inner.equivalent(expr_inner, context.types) {
log::error!(
"Given type {:?} doesn't match expected {:?}",
given_inner,
diff --git a/src/proc/mod.rs b/src/proc/mod.rs
--- a/src/proc/mod.rs
+++ b/src/proc/mod.rs
@@ -7,6 +7,8 @@ mod namer;
mod terminator;
mod typifier;
+use std::cmp::PartialEq;
+
pub use index::IndexableLength;
pub use layouter::{Alignment, InvalidBaseType, Layouter, TypeLayout};
pub use namer::{EntryPointIndex, NameKey, Namer};
diff --git a/src/proc/mod.rs b/src/proc/mod.rs
--- a/src/proc/mod.rs
+++ b/src/proc/mod.rs
@@ -130,6 +132,50 @@ impl super::TypeInner {
Self::Image { .. } | Self::Sampler { .. } => 0,
}
}
+
+    /// Return the canonical form of `self`, or `None` if it's already in
+ /// canonical form.
+ ///
+ /// Certain types have multiple representations in `TypeInner`. This
+ /// function converts all forms of equivalent types to a single
+ /// representative of their class, so that simply applying `Eq` to the
+ /// result indicates whether the types are equivalent, as far as Naga IR is
+ /// concerned.
+ pub fn canonical_form(&self, types: &crate::Arena<crate::Type>) -> Option<crate::TypeInner> {
+ use crate::TypeInner as Ti;
+ match *self {
+ Ti::Pointer { base, class } => match types[base].inner {
+ Ti::Scalar { kind, width } => Some(Ti::ValuePointer {
+ size: None,
+ kind,
+ width,
+ class,
+ }),
+ Ti::Vector { size, kind, width } => Some(Ti::ValuePointer {
+ size: Some(size),
+ kind,
+ width,
+ class,
+ }),
+ _ => None,
+ },
+ _ => None,
+ }
+ }
+
+ /// Compare `self` and `rhs` as types.
+ ///
+ /// This is mostly the same as `<TypeInner as Eq>::eq`, but it treats
+ /// `ValuePointer` and `Pointer` types as equivalent.
+ ///
+ /// When you know that one side of the comparison is never a pointer, it's
+ /// fine to not bother with canonicalization, and just compare `TypeInner`
+ /// values with `==`.
+ pub fn equivalent(&self, rhs: &crate::TypeInner, types: &crate::Arena<crate::Type>) -> bool {
+ let left = self.canonical_form(types);
+ let right = rhs.canonical_form(types);
+ left.as_ref().unwrap_or(self) == right.as_ref().unwrap_or(rhs)
+ }
}
impl super::MathFunction {
diff --git a/src/valid/compose.rs b/src/valid/compose.rs
--- a/src/valid/compose.rs
+++ b/src/valid/compose.rs
@@ -96,7 +96,11 @@ pub fn validate_compose(
});
}
for (index, comp_res) in component_resolutions.enumerate() {
- if comp_res.inner_with(type_arena) != &type_arena[base].inner {
+ let base_inner = &type_arena[base].inner;
+ let comp_res_inner = comp_res.inner_with(type_arena);
+ // We don't support arrays of pointers, but it seems best not to
+ // embed that assumption here, so use `TypeInner::equivalent`.
+ if !base_inner.equivalent(comp_res_inner, type_arena) {
log::error!("Array component[{}] type {:?}", index, comp_res);
return Err(ComposeError::ComponentType {
index: index as u32,
diff --git a/src/valid/compose.rs b/src/valid/compose.rs
--- a/src/valid/compose.rs
+++ b/src/valid/compose.rs
@@ -113,7 +117,11 @@ pub fn validate_compose(
}
for (index, (member, comp_res)) in members.iter().zip(component_resolutions).enumerate()
{
- if comp_res.inner_with(type_arena) != &type_arena[member.ty].inner {
+ let member_inner = &type_arena[member.ty].inner;
+ let comp_res_inner = comp_res.inner_with(type_arena);
+ // We don't support pointers in structs, but it seems best not to embed
+ // that assumption here, so use `TypeInner::equivalent`.
+ if !comp_res_inner.equivalent(member_inner, type_arena) {
log::error!("Struct component[{}] type {:?}", index, comp_res);
return Err(ComposeError::ComponentType {
index: index as u32,
diff --git a/src/valid/function.rs b/src/valid/function.rs
--- a/src/valid/function.rs
+++ b/src/valid/function.rs
@@ -243,7 +243,8 @@ impl super::Validator {
let ty = context
.resolve_type_impl(expr, &self.valid_expression_set)
.map_err(|error| CallError::Argument { index, error })?;
- if ty != &context.types[arg.ty].inner {
+ let arg_inner = &context.types[arg.ty].inner;
+ if !ty.equivalent(arg_inner, context.types) {
return Err(CallError::ArgumentType {
index,
required: arg.ty,
diff --git a/src/valid/function.rs b/src/valid/function.rs
--- a/src/valid/function.rs
+++ b/src/valid/function.rs
@@ -448,7 +449,17 @@ impl super::Validator {
.map(|expr| context.resolve_type(expr, &self.valid_expression_set))
.transpose()?;
let expected_ty = context.return_type.map(|ty| &context.types[ty].inner);
- if value_ty != expected_ty {
+ // We can't return pointers, but it seems best not to embed that
+ // assumption here, so use `TypeInner::equivalent` for comparison.
+ let okay = match (value_ty, expected_ty) {
+ (None, None) => true,
+ (Some(value_inner), Some(expected_inner)) => {
+ value_inner.equivalent(expected_inner, context.types)
+ }
+ (_, _) => false,
+ };
+
+ if !okay {
log::error!(
"Returning {:?} where {:?} is expected",
value_ty,
| diff --git /dev/null b/tests/in/pointers.param.ron
new file mode 100644
--- /dev/null
+++ b/tests/in/pointers.param.ron
@@ -0,0 +1,7 @@
+(
+ spv: (
+ version: (1, 2),
+ debug: true,
+ adjust_coordinate_space: false,
+ ),
+)
diff --git /dev/null b/tests/in/pointers.wgsl
new file mode 100644
--- /dev/null
+++ b/tests/in/pointers.wgsl
@@ -0,0 +1,5 @@
+fn f() {
+ var v: vec2<i32>;
+ let px = &v.x;
+ *px = 10;
+}
diff --git /dev/null b/tests/out/spv/pointers.spvasm
new file mode 100644
--- /dev/null
+++ b/tests/out/spv/pointers.spvasm
@@ -0,0 +1,29 @@
+; SPIR-V
+; Version: 1.2
+; Generator: rspirv
+; Bound: 16
+OpCapability Shader
+OpCapability Linkage
+%1 = OpExtInstImport "GLSL.std.450"
+OpMemoryModel Logical GLSL450
+OpSource GLSL 450
+OpName %6 "v"
+OpName %9 "f"
+%2 = OpTypeVoid
+%4 = OpTypeInt 32 1
+%3 = OpConstant %4 10
+%5 = OpTypeVector %4 2
+%7 = OpTypePointer Function %5
+%10 = OpTypeFunction %2
+%12 = OpTypePointer Function %4
+%14 = OpTypeInt 32 0
+%13 = OpConstant %14 0
+%9 = OpFunction %2 None %10
+%8 = OpLabel
+%6 = OpVariable %7 Function
+OpBranch %11
+%11 = OpLabel
+%15 = OpAccessChain %12 %6 %13
+OpStore %15 %3
+OpReturn
+OpFunctionEnd
\ No newline at end of file
diff --git /dev/null b/tests/out/wgsl/pointers.wgsl
new file mode 100644
--- /dev/null
+++ b/tests/out/wgsl/pointers.wgsl
@@ -0,0 +1,8 @@
+fn f() {
+ var v: vec2<i32>;
+
+ let px: ptr<function, i32> = (&v.x);
+ (*px) = 10;
+ return;
+}
+
diff --git a/tests/snapshots.rs b/tests/snapshots.rs
--- a/tests/snapshots.rs
+++ b/tests/snapshots.rs
@@ -480,6 +480,7 @@ fn convert_wgsl() {
"access",
Targets::SPIRV | Targets::METAL | Targets::GLSL | Targets::HLSL | Targets::WGSL,
),
+ ("pointers", Targets::SPIRV | Targets::WGSL),
(
"control-flow",
Targets::SPIRV | Targets::METAL | Targets::GLSL | Targets::HLSL | Targets::WGSL,
diff --git a/tests/wgsl-errors.rs b/tests/wgsl-errors.rs
--- a/tests/wgsl-errors.rs
+++ b/tests/wgsl-errors.rs
@@ -687,6 +687,24 @@ fn invalid_functions() {
}
}
+#[test]
+fn pointer_type_equivalence() {
+ check_validation_error! {
+ r#"
+ fn f(pv: ptr<function, vec2<f32>>, pf: ptr<function, f32>) { }
+
+ fn g() {
+ var m: mat2x2<f32>;
+ let pv: ptr<function, vec2<f32>> = &m.x;
+ let pf: ptr<function, f32> = &m.x.x;
+
+ f(pv, pf);
+ }
+ "#:
+ Ok(_)
+ }
+}
+
#[test]
fn missing_bindings() {
check_validation_error! {
| Naga produces incorrect WGSL for let-bound pointers
The following WGSL input generates incorrect WGSL output:
```
fn f() {
var v: vec2<i32>;
let px = &v.x;
*px = 10;
}
```
The output omits the `*`:
```
fn f() {
var v: vec2<i32>;
let px: ptr<function, i32> = (&v.x);
px = 10;
return;
}
```
This is because the fix for #1332 does not properly address named expressions.
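The patch above fixes this by classifying the "plain form" of each expression as yielding either a reference or an ordinary value, and wrapping it in `&` or `*` whenever the context requests the other kind of indirection. A minimal Rust sketch of that matching (hypothetical simplified types, not naga's actual writer):

```rust
// A minimal sketch (hypothetical simplified types, not naga's actual writer)
// of the fix: classify the "plain form" of each expression as yielding a
// reference or an ordinary value, and wrap it in `&` or `*` whenever the
// context requests the other kind of indirection.
#[derive(Clone, Copy)]
enum Indirection {
    Ordinary,  // plain form yields a value (e.g. a `let`-bound name like `px`)
    Reference, // plain form yields a reference (e.g. a `var` local like `v`)
}

fn wrap(plain: &str, requested: Indirection, actual: Indirection) -> String {
    match (requested, actual) {
        // Need a value, have a reference: take its address.
        (Indirection::Ordinary, Indirection::Reference) => format!("(&{})", plain),
        // Need a reference, have a value (a pointer): dereference it.
        (Indirection::Reference, Indirection::Ordinary) => format!("(*{})", plain),
        // Plain form already matches what was requested.
        _ => plain.to_string(),
    }
}

fn main() {
    // Storing through the `let`-bound pointer `px` needs a reference,
    // so the writer must emit `(*px)`, not the bare `px` from the bug report.
    assert_eq!(wrap("px", Indirection::Reference, Indirection::Ordinary), "(*px)");
    // Initializing `px` from the `var` local `v` needs a value, hence `(&v)`.
    assert_eq!(wrap("v", Indirection::Ordinary, Indirection::Reference), "(&v)");
}
```

The bug arose because named (`let`) expressions were looked up before this classification ran; moving the named-expression lookup into the plain-form writer means the `&`/`*` wrapping still applies to them.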
| 2021-09-17T09:50:57 | 0.6 | 2e7d629aefe6857ade4f96fe2c3dc1a09f0fa4db | [
"convert_wgsl"
] | [
"arena::tests::append_unique",
"arena::tests::append_non_unique",
"arena::tests::fetch_or_append_non_unique",
"arena::tests::fetch_or_append_unique",
"back::msl::test_error_size",
"back::spv::layout::test_physical_layout_in_words",
"back::spv::layout::test_logical_layout_in_words",
"back::spv::writer:... | [
"back::msl::writer::test_stack_size"
] | [] | 84 | |
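The `TypeInner::canonical_form`/`equivalent` change in the patch above can be sketched with hypothetical simplified stand-in types (not naga's arena-based IR): `Pointer` to a scalar or vector and the matching `ValuePointer` are two spellings of one type, so both sides are canonicalized before comparing with `==`.

```rust
// A sketch of the canonicalization idea, using hypothetical simplified types:
// pointer-to-scalar/vector and the matching ValuePointer are two spellings of
// the same type, so equivalence canonicalizes both sides before comparing.
#[derive(Clone, PartialEq, Debug)]
enum Ty {
    Scalar,
    Vector { size: u8 },
    Pointer(Box<Ty>),
    ValuePointer { size: Option<u8> },
}

fn canonical(ty: &Ty) -> Ty {
    match ty {
        // Rewrite pointer-to-scalar/vector into the ValuePointer spelling.
        Ty::Pointer(base) => match **base {
            Ty::Scalar => Ty::ValuePointer { size: None },
            Ty::Vector { size } => Ty::ValuePointer { size: Some(size) },
            _ => ty.clone(), // pointers to composites are already canonical
        },
        _ => ty.clone(),
    }
}

fn equivalent(a: &Ty, b: &Ty) -> bool {
    canonical(a) == canonical(b)
}

fn main() {
    let ptr_to_vec2 = Ty::Pointer(Box::new(Ty::Vector { size: 2 }));
    // An expression like `&m.x` resolves to a ValuePointer, while a declared
    // `ptr<function, vec2<f32>>` is Pointer-to-Vector; both must compare equal.
    assert!(equivalent(&ptr_to_vec2, &Ty::ValuePointer { size: Some(2) }));
    assert!(!equivalent(&ptr_to_vec2, &Ty::ValuePointer { size: None }));
}
```

This is why the validator call sites above (compose, call arguments, return values) switch from `!=` on `TypeInner` to `TypeInner::equivalent`.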
gfx-rs/naga | 1367 | gfx-rs__naga-1367 | [
"1356"
] | 933ecc4a443cb882bce0283fe77897bba6007080 | diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -165,9 +165,9 @@ pub enum Error<'a> {
MissingType(Span),
InvalidAtomicPointer(Span),
InvalidAtomicOperandType(Span),
+ Pointer(&'static str, Span),
NotPointer(Span),
- AddressOfNotReference(Span),
- AssignmentNotReference(Span),
+ NotReference(&'static str, Span),
Other,
}
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -419,14 +419,14 @@ impl<'a> Error<'a> {
labels: vec![(span.clone(), "expression is not a pointer".into())],
notes: vec![],
},
- Error::AddressOfNotReference(ref span) => ParseError {
- message: "the operand of the `&` operator must be a reference".to_string(),
+ Error::NotReference(what, ref span) => ParseError {
+ message: format!("{} must be a reference", what),
labels: vec![(span.clone(), "expression is not a reference".into())],
notes: vec![],
},
- Error::AssignmentNotReference(ref span) => ParseError {
- message: "the left-hand side of an assignment be a reference".to_string(),
- labels: vec![(span.clone(), "expression is not a reference".into())],
+ Error::Pointer(what, ref span) => ParseError {
+ message: format!("{} must not be a pointer", what),
+ labels: vec![(span.clone(), "expression is a pointer".into())],
notes: vec![],
},
Error::Other => ParseError {
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -692,15 +692,15 @@ trait StringValueLookup<'a> {
type Value;
fn lookup(&self, key: &'a str, span: Span) -> Result<Self::Value, Error<'a>>;
}
-impl<'a> StringValueLookup<'a> for FastHashMap<&'a str, Handle<crate::Expression>> {
- type Value = Handle<crate::Expression>;
+impl<'a> StringValueLookup<'a> for FastHashMap<&'a str, TypedExpression> {
+ type Value = TypedExpression;
fn lookup(&self, key: &'a str, span: Span) -> Result<Self::Value, Error<'a>> {
self.get(key).cloned().ok_or(Error::UnknownIdent(span, key))
}
}
struct StatementContext<'input, 'temp, 'out> {
- lookup_ident: &'temp mut FastHashMap<&'input str, Handle<crate::Expression>>,
+ lookup_ident: &'temp mut FastHashMap<&'input str, TypedExpression>,
typifier: &'temp mut super::Typifier,
variables: &'out mut Arena<crate::LocalVariable>,
expressions: &'out mut Arena<crate::Expression>,
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -758,7 +758,7 @@ struct SamplingContext {
}
struct ExpressionContext<'input, 'temp, 'out> {
- lookup_ident: &'temp FastHashMap<&'input str, Handle<crate::Expression>>,
+ lookup_ident: &'temp FastHashMap<&'input str, TypedExpression>,
typifier: &'temp mut super::Typifier,
expressions: &'out mut Arena<crate::Expression>,
types: &'out mut Arena<crate::Type>,
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -811,7 +811,7 @@ impl<'a> ExpressionContext<'a, '_, '_> {
image_name: &'a str,
span: Span,
) -> Result<SamplingContext, Error<'a>> {
- let image = self.lookup_ident.lookup(image_name, span.clone())?;
+ let image = self.lookup_ident.lookup(image_name, span.clone())?.handle;
Ok(SamplingContext {
image,
arrayed: match *self.resolve_type(image)? {
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -1567,7 +1567,7 @@ impl Parser {
lexer.close_arguments()?;
crate::Expression::ImageSample {
image: sc.image,
- sampler: ctx.lookup_ident.lookup(sampler_name, sampler_span)?,
+ sampler: ctx.lookup_ident.lookup(sampler_name, sampler_span)?.handle,
coordinate,
array_index,
offset,
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -1600,7 +1600,7 @@ impl Parser {
lexer.close_arguments()?;
crate::Expression::ImageSample {
image: sc.image,
- sampler: ctx.lookup_ident.lookup(sampler_name, sampler_span)?,
+ sampler: ctx.lookup_ident.lookup(sampler_name, sampler_span)?.handle,
coordinate,
array_index,
offset,
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -1633,7 +1633,7 @@ impl Parser {
lexer.close_arguments()?;
crate::Expression::ImageSample {
image: sc.image,
- sampler: ctx.lookup_ident.lookup(sampler_name, sampler_span)?,
+ sampler: ctx.lookup_ident.lookup(sampler_name, sampler_span)?.handle,
coordinate,
array_index,
offset,
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -1668,7 +1668,7 @@ impl Parser {
lexer.close_arguments()?;
crate::Expression::ImageSample {
image: sc.image,
- sampler: ctx.lookup_ident.lookup(sampler_name, sampler_span)?,
+ sampler: ctx.lookup_ident.lookup(sampler_name, sampler_span)?.handle,
coordinate,
array_index,
offset,
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -1701,7 +1701,7 @@ impl Parser {
lexer.close_arguments()?;
crate::Expression::ImageSample {
image: sc.image,
- sampler: ctx.lookup_ident.lookup(sampler_name, sampler_span)?,
+ sampler: ctx.lookup_ident.lookup(sampler_name, sampler_span)?.handle,
coordinate,
array_index,
offset,
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -1734,7 +1734,7 @@ impl Parser {
lexer.close_arguments()?;
crate::Expression::ImageSample {
image: sc.image,
- sampler: ctx.lookup_ident.lookup(sampler_name, sampler_span)?,
+ sampler: ctx.lookup_ident.lookup(sampler_name, sampler_span)?.handle,
coordinate,
array_index,
offset,
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -1746,7 +1746,10 @@ impl Parser {
let _ = lexer.next();
lexer.open_arguments()?;
let (image_name, image_span) = lexer.next_ident_with_span()?;
- let image = ctx.lookup_ident.lookup(image_name, image_span.clone())?;
+ let image = ctx
+ .lookup_ident
+ .lookup(image_name, image_span.clone())?
+ .handle;
lexer.expect(Token::Separator(','))?;
let coordinate = self.parse_general_expression(lexer, ctx.reborrow())?;
let (class, arrayed) = match *ctx.resolve_type(image)? {
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -1779,7 +1782,7 @@ impl Parser {
let _ = lexer.next();
lexer.open_arguments()?;
let (image_name, image_span) = lexer.next_ident_with_span()?;
- let image = ctx.lookup_ident.lookup(image_name, image_span)?;
+ let image = ctx.lookup_ident.lookup(image_name, image_span)?.handle;
let level = if lexer.skip(Token::Separator(',')) {
let expr = self.parse_general_expression(lexer, ctx.reborrow())?;
Some(expr)
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -1796,7 +1799,7 @@ impl Parser {
let _ = lexer.next();
lexer.open_arguments()?;
let (image_name, image_span) = lexer.next_ident_with_span()?;
- let image = ctx.lookup_ident.lookup(image_name, image_span)?;
+ let image = ctx.lookup_ident.lookup(image_name, image_span)?.handle;
lexer.close_arguments()?;
crate::Expression::ImageQuery {
image,
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -1807,7 +1810,7 @@ impl Parser {
let _ = lexer.next();
lexer.open_arguments()?;
let (image_name, image_span) = lexer.next_ident_with_span()?;
- let image = ctx.lookup_ident.lookup(image_name, image_span)?;
+ let image = ctx.lookup_ident.lookup(image_name, image_span)?.handle;
lexer.close_arguments()?;
crate::Expression::ImageQuery {
image,
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -1818,7 +1821,7 @@ impl Parser {
let _ = lexer.next();
lexer.open_arguments()?;
let (image_name, image_span) = lexer.next_ident_with_span()?;
- let image = ctx.lookup_ident.lookup(image_name, image_span)?;
+ let image = ctx.lookup_ident.lookup(image_name, image_span)?.handle;
lexer.close_arguments()?;
crate::Expression::ImageQuery {
image,
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -2074,18 +2077,15 @@ impl Parser {
)
}
(Token::Word(word), span) => {
- if let Some(&handle) = ctx.lookup_ident.get(word) {
+ if let Some(definition) = ctx.lookup_ident.get(word) {
let _ = lexer.next();
self.pop_scope(lexer);
// Not all identifiers constitute references in WGSL.
- let is_reference = match ctx.expressions[handle] {
- // `let`-bound identifiers don't evaluate to references. But that
- // means `let` declarations apply the Load Rule to their values,
- // so we will never see a `LocalVariable` in `lookup_ident` for a
- // `let` binding. Thus, a Naga `LocalVariable` always means a WGSL
- // `var` binding.
- crate::Expression::LocalVariable(_) => true,
+ let is_reference = match ctx.expressions[definition.handle] {
+ // `let`-bound identifiers don't evaluate to references: `let`
+ // declarations apply the Load Rule to their values.
+ crate::Expression::LocalVariable(_) => definition.is_reference,
// Global variables in the `Handle` storage class do not evaluate
// to references.
crate::Expression::GlobalVariable(global) => {
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -2095,7 +2095,7 @@ impl Parser {
};
TypedExpression {
- handle,
+ handle: definition.handle,
is_reference,
}
} else if let Some(expr) =
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -2135,43 +2135,56 @@ impl Parser {
mut handle,
mut is_reference,
} = expr;
+ let mut prefix_span = lexer.span_from(span_start);
loop {
+ // Step lightly around `resolve_type`'s mutable borrow.
+ ctx.resolve_type(handle)?;
+
+ // Find the type of the composite whose elements, components or members we're
+ // accessing, skipping through references: except for swizzles, the `Access`
+ // or `AccessIndex` expressions we'd generate are the same either way.
+ //
+ // Pointers, however, are not permitted. For error checks below, note whether
+ // the base expression is a WGSL pointer.
+ let temp_inner;
+ let (composite, wgsl_pointer) = match *ctx.typifier.get(handle, ctx.types) {
+ crate::TypeInner::Pointer { base, .. } => (&ctx.types[base].inner, !is_reference),
+ crate::TypeInner::ValuePointer {
+ size: None,
+ kind,
+ width,
+ ..
+ } => {
+ temp_inner = crate::TypeInner::Scalar { kind, width };
+ (&temp_inner, !is_reference)
+ }
+ crate::TypeInner::ValuePointer {
+ size: Some(size),
+ kind,
+ width,
+ ..
+ } => {
+ temp_inner = crate::TypeInner::Vector { size, kind, width };
+ (&temp_inner, !is_reference)
+ }
+ ref other => (other, false),
+ };
+
let expression = match lexer.peek().0 {
Token::Separator('.') => {
let _ = lexer.next();
let (name, name_span) = lexer.next_ident_with_span()?;
- // Step lightly around `resolve_type`'s mutable borrow.
- ctx.resolve_type(handle)?;
-
- // Find the type of the composite whose components or members we're
- // accessing, skipping through pointers: except for swizzles, the
- // `Access` or `AccessIndex` expressions we'd generate are the same
- // either way.
- let temp_inner;
- let composite = match *ctx.typifier.get(handle, ctx.types) {
- crate::TypeInner::Pointer { base, .. } => &ctx.types[base].inner,
- crate::TypeInner::ValuePointer {
- size: None,
- kind,
- width,
- ..
- } => {
- temp_inner = crate::TypeInner::Scalar { kind, width };
- &temp_inner
- }
- crate::TypeInner::ValuePointer {
- size: Some(size),
- kind,
- width,
- ..
- } => {
- temp_inner = crate::TypeInner::Vector { size, kind, width };
- &temp_inner
- }
- ref other => other,
- };
+ // WGSL doesn't allow accessing members on pointers, or swizzling
+ // them. But Naga IR doesn't distinguish pointers and references, so
+ // we must check here.
+ if wgsl_pointer {
+ return Err(Error::Pointer(
+ "the value accessed by a `.member` expression",
+ prefix_span,
+ ));
+ }
let access = match *composite {
crate::TypeInner::Struct { ref members, .. } => {
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -2219,6 +2232,15 @@ impl Parser {
let index = self.parse_general_expression(lexer, ctx.reborrow())?;
let close_brace_span = lexer.expect_span(Token::Paren(']'))?;
+ // WGSL doesn't allow pointers to be subscripted. But Naga IR doesn't
+ // distinguish pointers and references, so we must check here.
+ if wgsl_pointer {
+ return Err(Error::Pointer(
+ "the value indexed by a `[]` subscripting expression",
+ prefix_span,
+ ));
+ }
+
if let crate::Expression::Constant(constant) = ctx.expressions[index] {
let expr_span = open_brace_span.end..close_brace_span.start;
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -2248,9 +2270,10 @@ impl Parser {
_ => break,
};
+ prefix_span = lexer.span_from(span_start);
handle = ctx
.expressions
- .append(expression, NagaSpan::from(lexer.span_from(span_start)));
+ .append(expression, NagaSpan::from(prefix_span.clone()));
}
Ok(TypedExpression {
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -2327,7 +2350,7 @@ impl Parser {
.get_span(operand.handle)
.to_range()
.unwrap_or_else(|| self.peek_scope(lexer));
- return Err(Error::AddressOfNotReference(span));
+ return Err(Error::NotReference("the operand of the `&` operator", span));
}
// No code is generated. We just declare the pointer a reference now.
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -3065,7 +3088,10 @@ impl Parser {
// The left hand side of an assignment must be a reference.
if !reference.is_reference {
let span = span_start..lexer.current_byte_offset();
- return Err(Error::AssignmentNotReference(span));
+ return Err(Error::NotReference(
+ "the left-hand side of an assignment",
+ span,
+ ));
}
lexer.expect(Token::Operation('='))?;
let value = self.parse_general_expression(lexer, context.reborrow())?;
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -3183,7 +3209,13 @@ impl Parser {
}
}
block.extend(emitter.finish(context.expressions));
- context.lookup_ident.insert(name, expr_id);
+ context.lookup_ident.insert(
+ name,
+ TypedExpression {
+ handle: expr_id,
+ is_reference: false,
+ },
+ );
context
.named_expressions
.insert(expr_id, String::from(name));
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -3291,7 +3323,13 @@ impl Parser {
let expr_id = context
.expressions
.append(crate::Expression::LocalVariable(var_id), Default::default());
- context.lookup_ident.insert(name, expr_id);
+ context.lookup_ident.insert(
+ name,
+ TypedExpression {
+ handle: expr_id,
+ is_reference: true,
+ },
+ );
if let Init::Variable(value) = init {
Some(crate::Statement::Store {
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -3597,7 +3635,8 @@ impl Parser {
let (image_name, image_span) = lexer.next_ident_with_span()?;
let image = context
.lookup_ident
- .lookup(image_name, image_span.clone())?;
+ .lookup(image_name, image_span.clone())?
+ .handle;
lexer.expect(Token::Separator(','))?;
let mut expr_context = context.as_expression(block, &mut emitter);
let arrayed = match *expr_context.resolve_type(image)? {
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -3722,15 +3761,22 @@ impl Parser {
// populate initial expressions
let mut expressions = Arena::new();
for (&name, expression) in lookup_global_expression.iter() {
- let span = match *expression {
- crate::Expression::GlobalVariable(handle) => {
- module.global_variables.get_span(handle)
- }
- crate::Expression::Constant(handle) => module.constants.get_span(handle),
+ let (span, is_reference) = match *expression {
+ crate::Expression::GlobalVariable(handle) => (
+ module.global_variables.get_span(handle),
+ module.global_variables[handle].class != crate::StorageClass::Handle,
+ ),
+ crate::Expression::Constant(handle) => (module.constants.get_span(handle), false),
_ => unreachable!(),
};
- let expr_handle = expressions.append(expression.clone(), span);
- lookup_ident.insert(name, expr_handle);
+ let expression = expressions.append(expression.clone(), span);
+ lookup_ident.insert(
+ name,
+ TypedExpression {
+ handle: expression,
+ is_reference,
+ },
+ );
}
// read parameter list
let mut arguments = Vec::new();
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -3747,11 +3793,17 @@ impl Parser {
let (param_name, param_name_span, param_type, _access) =
self.parse_variable_ident_decl(lexer, &mut module.types, &mut module.constants)?;
let param_index = arguments.len() as u32;
- let expression_token = expressions.append(
+ let expression = expressions.append(
crate::Expression::FunctionArgument(param_index),
NagaSpan::from(param_name_span),
);
- lookup_ident.insert(param_name, expression_token);
+ lookup_ident.insert(
+ param_name,
+ TypedExpression {
+ handle: expression,
+ is_reference: false,
+ },
+ );
arguments.push(crate::FunctionArgument {
name: Some(param_name.to_string()),
ty: param_type,
| diff --git a/tests/wgsl-errors.rs b/tests/wgsl-errors.rs
--- a/tests/wgsl-errors.rs
+++ b/tests/wgsl-errors.rs
@@ -493,6 +493,44 @@ fn local_var_missing_type() {
);
}
+#[test]
+fn postfix_pointers() {
+ check(
+ r#"
+ fn main() {
+ var v: vec4<f32> = vec4<f32>(1.0, 1.0, 1.0, 1.0);
+ let pv = &v;
+ let a = *pv[3]; // Problematic line
+ }
+ "#,
+ r#"error: the value indexed by a `[]` subscripting expression must not be a pointer
+ ┌─ wgsl:5:26
+ │
+5 │ let a = *pv[3]; // Problematic line
+ │ ^^ expression is a pointer
+
+"#,
+ );
+
+ check(
+ r#"
+ struct S { m: i32; };
+ fn main() {
+ var s: S = S(42);
+ let ps = &s;
+ let a = *ps.m; // Problematic line
+ }
+ "#,
+ r#"error: the value accessed by a `.member` expression must not be a pointer
+ ┌─ wgsl:6:26
+ │
+6 │ let a = *ps.m; // Problematic line
+ │ ^^ expression is a pointer
+
+"#,
+ );
+}
+
macro_rules! check_validation_error {
// We want to support an optional guard expression after the pattern, so
// that we can check values we can't match against, like strings.
diff --git a/tests/wgsl-errors.rs b/tests/wgsl-errors.rs
--- a/tests/wgsl-errors.rs
+++ b/tests/wgsl-errors.rs
@@ -778,6 +816,13 @@ fn valid_access() {
// `Access` to a `ValuePointer`.
return temp[i][j];
}
+ ",
+ "
+ fn main() {
+ var v: vec4<f32> = vec4<f32>(1.0, 1.0, 1.0, 1.0);
+ let pv = &v;
+ let a = (*pv)[3];
+ }
":
Ok(_)
}
diff --git a/tests/wgsl-errors.rs b/tests/wgsl-errors.rs
--- a/tests/wgsl-errors.rs
+++ b/tests/wgsl-errors.rs
@@ -789,7 +834,7 @@ fn invalid_local_vars() {
"
struct Unsized { data: array<f32>; };
fn local_ptr_dynamic_array(okay: ptr<storage, Unsized>) {
- var not_okay: ptr<storage, array<f32>> = okay.data;
+ var not_okay: ptr<storage, array<f32>> = &(*okay).data;
}
":
Err(naga::valid::ValidationError::Function {
| [wgsl-in] Missing parentheses around dereference
https://github.com/gfx-rs/naga/pull/1348 fixed this for wgsl-out, but the following still incorrectly validates:
```
[[stage(vertex)]]
fn main() -> [[builtin(position)]] vec4<f32> {
var v: vec4<f32> = vec4<f32>(1.0, 1.0, 1.0, 1.0);
let pv = &v;
let a = *pv[3]; // Problematic line
return v;
}
```
```
cargo run -- test.wgsl
```
[According to the WGSL grammar](https://www.w3.org/TR/WGSL/#expression-grammar), `[]` should bind more tightly than `*`, so `*pv[3]` should not pass validation. The correct code would be `(*pv)[3]`.
naga @ https://github.com/gfx-rs/naga/commit/0e66930aff1340d5c636df616d573cc326702158
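The precedence rule quoted above can be illustrated with a toy recursive-descent parser (a sketch, not naga's actual front end): because the postfix `[]` production sits below the unary `*` production, `*pv[3]` necessarily groups as `*(pv[3])`.

```rust
// Token kinds for a tiny slice of the WGSL expression grammar.
#[derive(Debug, PartialEq)]
enum Tok { Star, Ident(String), LBracket, Num(i64), RBracket }

// unary_expression: `*` unary_expression | singular_expression
fn unary(toks: &[Tok], pos: &mut usize) -> String {
    if toks.get(*pos) == Some(&Tok::Star) {
        *pos += 1;
        let inner = unary(toks, pos);
        format!("deref({})", inner)
    } else {
        singular(toks, pos)
    }
}

// singular_expression: primary followed by postfix operators.
// Postfix `[]` is consumed here, below `*`, so it binds tighter.
fn singular(toks: &[Tok], pos: &mut usize) -> String {
    let mut expr = match toks.get(*pos) {
        Some(Tok::Ident(name)) => { *pos += 1; name.clone() }
        other => panic!("expected identifier, got {:?}", other),
    };
    while toks.get(*pos) == Some(&Tok::LBracket) {
        *pos += 1;
        let idx = match toks.get(*pos) {
            Some(Tok::Num(n)) => { *pos += 1; *n }
            other => panic!("expected index, got {:?}", other),
        };
        assert_eq!(toks.get(*pos), Some(&Tok::RBracket));
        *pos += 1;
        expr = format!("index({}, {})", expr, idx);
    }
    expr
}

fn main() {
    // Tokens for `*pv[3]`.
    let toks = [Tok::Star, Tok::Ident("pv".into()),
                Tok::LBracket, Tok::Num(3), Tok::RBracket];
    let mut pos = 0;
    println!("{}", unary(&toks, &mut pos)); // prints: deref(index(pv, 3))
}
```

The dereference applies to the already-indexed value — exactly the grouping the issue reports as ill-typed.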
| I've reproduced this. Indeed, `*pv[3]` parenthesizes as `*(pv[3])`, which is ill-typed.
Naga's WGSL front end doesn't follow the grammar in the spec very closely, so I think there are a number of problems along these lines. See also #1352.
This was not fixed by #1360. The [approach](https://github.com/gfx-rs/naga/blob/933ecc4a443cb882bce0283fe77897bba6007080/src/front/wgsl/mod.rs#L2083-L2087) I took to handling `let` bindings is incorrect. | 2021-09-15T11:04:10 | 0.6 | 2e7d629aefe6857ade4f96fe2c3dc1a09f0fa4db | [
"postfix_pointers",
"valid_access"
] | [
"arena::tests::fetch_or_append_unique",
"arena::tests::fetch_or_append_non_unique",
"back::msl::test_error_size",
"arena::tests::append_unique",
"arena::tests::append_non_unique",
"back::spv::layout::test_logical_layout_in_words",
"back::spv::layout::test_physical_layout_in_words",
"front::glsl::const... | [
"back::msl::writer::test_stack_size"
] | [] | 85 |
gfx-rs/naga | 377 | gfx-rs__naga-377 | [
"367"
] | 292304b66f27c7f07b855c929bdf671c14dd904d | diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -265,24 +265,31 @@ enum Composition {
}
impl Composition {
+ //TODO: could be `const fn` once MSRV allows
+ fn letter_pos(letter: char) -> u32 {
+ match letter {
+ 'x' | 'r' => 0,
+ 'y' | 'g' => 1,
+ 'z' | 'b' => 2,
+ 'w' | 'a' => 3,
+ _ => !0,
+ }
+ }
+
fn make<'a>(
base: Handle<crate::Expression>,
base_size: crate::VectorSize,
name: &'a str,
expressions: &mut Arena<crate::Expression>,
) -> Result<Self, Error<'a>> {
- const MEMBERS: [char; 4] = ['x', 'y', 'z', 'w'];
-
Ok(if name.len() > 1 {
let mut components = Vec::with_capacity(name.len());
for ch in name.chars() {
- let expr = crate::Expression::AccessIndex {
- base,
- index: MEMBERS[..base_size as usize]
- .iter()
- .position(|&m| m == ch)
- .ok_or(Error::BadAccessor(name))? as u32,
- };
+ let index = Self::letter_pos(ch);
+ if index >= base_size as u32 {
+ return Err(Error::BadAccessor(name));
+ }
+ let expr = crate::Expression::AccessIndex { base, index };
components.push(expressions.append(expr));
}
diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -295,10 +302,10 @@ impl Composition {
Composition::Multi(size, components)
} else {
let ch = name.chars().next().ok_or(Error::BadAccessor(name))?;
- let index = MEMBERS[..base_size as usize]
- .iter()
- .position(|&m| m == ch)
- .ok_or(Error::BadAccessor(name))? as u32;
+ let index = Self::letter_pos(ch);
+ if index >= base_size as u32 {
+ return Err(Error::BadAccessor(name));
+ }
Composition::Single(crate::Expression::AccessIndex { base, index })
})
}
| diff --git a/src/front/wgsl/tests.rs b/src/front/wgsl/tests.rs
--- a/src/front/wgsl/tests.rs
+++ b/src/front/wgsl/tests.rs
@@ -150,5 +150,7 @@ fn parse_texture_load() {
#[test]
fn parse_postfix() {
+ parse_str("fn foo() { const x: f32 = vec4<f32>(1.0, 2.0, 3.0, 4.0).xyz.rgbr.aaaa.wz.g; }")
+ .unwrap();
parse_str("fn foo() { const x: f32 = fract(vec2<f32>(0.5, 1.0)).x; }").unwrap();
}
| Accessor rgba doesn't work
`ParseError { error: BadAccessor("r"), scopes: [FunctionDecl, Block, Statement, GeneralExpr, SingularExpr], pos: (36, 32) }`
It seems it is supposed to work according to the spec.
```wgsl
fn main() {
var test: vec2<f32> = vec2<f32>(1.0, 1.0);
out_target = vec4<f32>( test.r, 1.0, 0.0 , 1.0);
}
```
| 2021-01-27T05:55:16 | 0.2 | 292304b66f27c7f07b855c929bdf671c14dd904d | [
"front::wgsl::tests::parse_postfix"
] | [
"arena::tests::append_non_unique",
"arena::tests::append_unique",
"arena::tests::fetch_or_append_non_unique",
"arena::tests::fetch_or_append_unique",
"back::spv::layout_tests::test_instruction_add_operand",
"back::spv::layout_tests::test_instruction_add_operands",
"back::spv::layout_tests::test_instruct... | [] | [] | 86 | |
gfx-rs/naga | 376 | gfx-rs__naga-376 | [
"368"
] | 986550aff8767fe541fb9e2fc3a7e01e16f1dc21 | diff --git a/src/front/wgsl/mod.rs b/src/front/wgsl/mod.rs
--- a/src/front/wgsl/mod.rs
+++ b/src/front/wgsl/mod.rs
@@ -947,12 +947,13 @@ impl Parser {
Some(expr) => ctx.expressions.append(expr),
None => {
*lexer = backup;
- let handle = self.parse_primary_expression(lexer, ctx.reborrow())?;
- self.parse_postfix(lexer, ctx, handle)?
+ self.parse_primary_expression(lexer, ctx.reborrow())?
}
};
+
+ let post_handle = self.parse_postfix(lexer, ctx, handle)?;
self.scopes.pop();
- Ok(handle)
+ Ok(post_handle)
}
fn parse_equality_expression<'a>(
| diff --git a/src/front/wgsl/tests.rs b/src/front/wgsl/tests.rs
--- a/src/front/wgsl/tests.rs
+++ b/src/front/wgsl/tests.rs
@@ -147,3 +147,8 @@ fn parse_texture_load() {
)
.unwrap();
}
+
+#[test]
+fn parse_postfix() {
+ parse_str("fn foo() { const x: f32 = fract(vec2<f32>(0.5, 1.0)).x; }").unwrap();
+}
| Cannot swizzle the return value of a function
I couldn't swizzle function return values such as `textureSample( ... ).x`
`ParseError { error: Unexpected(Separator('.')), scopes: [FunctionDecl, Block, Statement], pos: (34, 86) }`
```wgsl
[[stage(fragment)]]
fn main() {
var test: vec2<f32> = vec2<f32>(1.0, 1.0);
out_target = vec4<f32>( fract(test).x, 1.0, 0.0, 1.0);
}
```
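A toy model of the fix (not naga's real parser): the essential change in the patch is that the postfix pass runs on whatever base expression was parsed — a call result as well as a plain primary expression — so accessors chain onto function calls too.

```rust
// Minimal AST for `ident`, `f(arg)`, and trailing `.field` accessors.
#[derive(Debug, PartialEq)]
enum Expr {
    Ident(String),
    Call(String, Box<Expr>),
    Member(Box<Expr>, String),
}

// Parse a base expression, then unconditionally apply postfix accessors.
fn parse(src: &str) -> Expr {
    // Base: either `f(arg)` or a bare identifier (no nested parens here).
    let (base_src, rest) = match src.find(')') {
        Some(i) => (&src[..=i], &src[i + 1..]),
        None => match src.find('.') {
            Some(i) => (&src[..i], &src[i..]),
            None => (src, ""),
        },
    };
    let mut expr = match base_src.split_once('(') {
        Some((f, arg)) => Expr::Call(
            f.to_string(),
            Box::new(parse(arg.trim_end_matches(')'))),
        ),
        None => Expr::Ident(base_src.to_string()),
    };
    // The fix: this postfix loop runs regardless of what the base was.
    for field in rest.split('.').filter(|s| !s.is_empty()) {
        expr = Expr::Member(Box::new(expr), field.to_string());
    }
    expr
}

fn main() {
    // Before the fix, `fract(v).x` failed to parse because postfix
    // handling only ran after primary expressions, not call results.
    println!("{:?}", parse("fract(v).x"));
    // prints: Member(Call("fract", Ident("v")), "x")
}
```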
| 2021-01-27T05:40:56 | 0.2 | 292304b66f27c7f07b855c929bdf671c14dd904d | [
"front::wgsl::tests::parse_postfix"
] | [
"arena::tests::fetch_or_append_non_unique",
"arena::tests::append_unique",
"arena::tests::append_non_unique",
"arena::tests::fetch_or_append_unique",
"back::spv::layout_tests::test_instruction_add_operand",
"back::spv::layout_tests::test_instruction_add_operands",
"back::spv::layout_tests::test_instruct... | [] | [] | 87 | |
gfx-rs/naga | 749 | gfx-rs__naga-749 | [
"646"
] | 36bace2b41a776caa12d2e0ecac9991c08204aa6 | diff --git a/src/back/msl/writer.rs b/src/back/msl/writer.rs
--- a/src/back/msl/writer.rs
+++ b/src/back/msl/writer.rs
@@ -138,7 +138,12 @@ impl<'a> Display for TypeContext<'a> {
};
let (texture_str, msaa_str, kind, access) = match class {
crate::ImageClass::Sampled { kind, multi } => {
- ("texture", if multi { "_ms" } else { "" }, kind, "sample")
+ let (msaa_str, access) = if multi {
+ ("_ms", "read")
+ } else {
+ ("", "sample")
+ };
+ ("texture", msaa_str, kind, access)
}
crate::ImageClass::Depth => ("depth", "", crate::ScalarKind::Float, "sample"),
crate::ImageClass::Storage(format) => {
diff --git a/src/back/msl/writer.rs b/src/back/msl/writer.rs
--- a/src/back/msl/writer.rs
+++ b/src/back/msl/writer.rs
@@ -510,9 +515,9 @@ impl<W: Write> Writer<W> {
write!(self.out, ")")?;
}
crate::ImageDimension::Cube => {
- write!(self.out, "int(")?;
+ write!(self.out, "int3(")?;
self.put_image_query(image, "width", level, context)?;
- write!(self.out, ").xxx")?;
+ write!(self.out, ")")?;
}
}
Ok(())
diff --git a/src/back/spv/instructions.rs b/src/back/spv/instructions.rs
--- a/src/back/spv/instructions.rs
+++ b/src/back/spv/instructions.rs
@@ -594,6 +594,14 @@ impl super::Instruction {
instruction
}
+ pub(super) fn image_query(op: Op, result_type_id: Word, id: Word, image: Word) -> Self {
+ let mut instruction = Self::new(op);
+ instruction.set_type(result_type_id);
+ instruction.set_result(id);
+ instruction.add_operand(image);
+ instruction
+ }
+
//
// Conversion Instructions
//
diff --git a/src/back/spv/writer.rs b/src/back/spv/writer.rs
--- a/src/back/spv/writer.rs
+++ b/src/back/spv/writer.rs
@@ -21,6 +21,8 @@ pub enum Error {
MissingCapabilities(Vec<spirv::Capability>),
#[error("unimplemented {0}")]
FeatureNotImplemented(&'static str),
+ #[error("module is not validated properly: {0}")]
+ Validation(&'static str),
}
#[derive(Default)]
diff --git a/src/back/spv/writer.rs b/src/back/spv/writer.rs
--- a/src/back/spv/writer.rs
+++ b/src/back/spv/writer.rs
@@ -186,7 +188,7 @@ fn map_dim(dim: crate::ImageDimension) -> spirv::Dim {
match dim {
crate::ImageDimension::D1 => spirv::Dim::Dim1D,
crate::ImageDimension::D2 => spirv::Dim::Dim2D,
- crate::ImageDimension::D3 => spirv::Dim::Dim2D,
+ crate::ImageDimension::D3 => spirv::Dim::Dim3D,
crate::ImageDimension::Cube => spirv::Dim::DimCube,
}
}
diff --git a/src/back/spv/writer.rs b/src/back/spv/writer.rs
--- a/src/back/spv/writer.rs
+++ b/src/back/spv/writer.rs
@@ -1261,11 +1263,14 @@ impl Writer {
crate::VectorSize::Bi => crate::VectorSize::Tri,
crate::VectorSize::Tri => crate::VectorSize::Quad,
crate::VectorSize::Quad => {
- unimplemented!("Unable to extend the vec4 coordinate")
+ return Err(Error::Validation("extending vec4 coordinate"));
}
}
}
- ref other => unimplemented!("wrong coordinate type {:?}", other),
+ ref other => {
+ log::error!("wrong coordinate type {:?}", other);
+ return Err(Error::Validation("coordinate type"));
+ }
};
let array_index_f32_id = self.id_gen.next();
diff --git a/src/back/spv/writer.rs b/src/back/spv/writer.rs
--- a/src/back/spv/writer.rs
+++ b/src/back/spv/writer.rs
@@ -1894,12 +1899,16 @@ impl Writer {
instruction.add_operand(index_id);
}
- if instruction.type_id != Some(result_type_id) {
+ let inst_type_id = instruction.type_id;
+ block.body.push(instruction);
+ if inst_type_id != Some(result_type_id) {
let sub_id = self.id_gen.next();
- let index_id = self.get_index_constant(0, &ir_module.types)?;
- let sub_instruction =
- Instruction::vector_extract_dynamic(result_type_id, sub_id, id, index_id);
- block.body.push(sub_instruction);
+ block.body.push(Instruction::composite_extract(
+ result_type_id,
+ sub_id,
+ id,
+ &[0],
+ ));
sub_id
} else {
id
diff --git a/src/back/spv/writer.rs b/src/back/spv/writer.rs
--- a/src/back/spv/writer.rs
+++ b/src/back/spv/writer.rs
@@ -2062,10 +2071,12 @@ impl Writer {
if needs_sub_access {
let sub_id = self.id_gen.next();
- let index_id = self.get_index_constant(0, &ir_module.types)?;
- let sub_instruction =
- Instruction::vector_extract_dynamic(result_type_id, sub_id, id, index_id);
- block.body.push(sub_instruction);
+ block.body.push(Instruction::composite_extract(
+ result_type_id,
+ sub_id,
+ id,
+ &[0],
+ ));
sub_id
} else {
id
diff --git a/src/back/spv/writer.rs b/src/back/spv/writer.rs
--- a/src/back/spv/writer.rs
+++ b/src/back/spv/writer.rs
@@ -2098,9 +2109,147 @@ impl Writer {
});
id
}
- crate::Expression::ImageQuery { .. }
- | crate::Expression::Relational { .. }
- | crate::Expression::ArrayLength(_) => {
+ crate::Expression::ImageQuery { image, query } => {
+ use crate::{ImageClass as Ic, ImageDimension as Id, ImageQuery as Iq};
+
+ let image_id = self.get_expression_global(ir_function, image);
+ let image_type = fun_info[image].ty.handle().unwrap();
+ let (dim, arrayed, class) = match ir_module.types[image_type].inner {
+ crate::TypeInner::Image {
+ dim,
+ arrayed,
+ class,
+ } => (dim, arrayed, class),
+ _ => {
+ return Err(Error::Validation("image type"));
+ }
+ };
+
+ match query {
+ Iq::Size { level } => {
+ let dim_coords = match dim {
+ Id::D1 => 1,
+ Id::D2 | Id::Cube => 2,
+ Id::D3 => 3,
+ };
+ let extended_size_type_id = {
+ let array_coords = if arrayed { 1 } else { 0 };
+ let vector_size = match dim_coords + array_coords {
+ 2 => Some(crate::VectorSize::Bi),
+ 3 => Some(crate::VectorSize::Tri),
+ 4 => Some(crate::VectorSize::Quad),
+ _ => None,
+ };
+ self.get_type_id(
+ &ir_module.types,
+ LookupType::Local(LocalType::Value {
+ vector_size,
+ kind: crate::ScalarKind::Sint,
+ width: 4,
+ pointer_class: None,
+ }),
+ )?
+ };
+
+ let (query_op, level_id) = match class {
+ Ic::Storage(_) => (spirv::Op::ImageQuerySize, None),
+ _ => {
+ let level_id = match level {
+ Some(expr) => self.cached[expr],
+ None => self.get_index_constant(0, &ir_module.types)?,
+ };
+ (spirv::Op::ImageQuerySizeLod, Some(level_id))
+ }
+ };
+ // The ID of the vector returned by SPIR-V, which contains the dimensions
+ // as well as the layer count.
+ let id_extended = self.id_gen.next();
+ let mut inst = Instruction::image_query(
+ query_op,
+ extended_size_type_id,
+ id_extended,
+ image_id,
+ );
+ if let Some(expr_id) = level_id {
+ inst.add_operand(expr_id);
+ }
+ block.body.push(inst);
+
+ if result_type_id != extended_size_type_id {
+ let id = self.id_gen.next();
+ let components = match dim {
+ // always pick the first component, and duplicate it for all 3 dimensions
+ Id::Cube => &[0u32, 0, 0][..],
+ _ => &[0u32, 1, 2, 3][..dim_coords],
+ };
+ block.body.push(Instruction::vector_shuffle(
+ result_type_id,
+ id,
+ id_extended,
+ id_extended,
+ components,
+ ));
+ id
+ } else {
+ id_extended
+ }
+ }
+ Iq::NumLevels => {
+ let id = self.id_gen.next();
+ block.body.push(Instruction::image_query(
+ spirv::Op::ImageQueryLevels,
+ result_type_id,
+ id,
+ image_id,
+ ));
+ id
+ }
+ Iq::NumLayers => {
+ let vec_size = match dim {
+ Id::D1 => crate::VectorSize::Bi,
+ Id::D2 | Id::Cube => crate::VectorSize::Tri,
+ Id::D3 => crate::VectorSize::Quad,
+ };
+ let extended_size_type_id = self.get_type_id(
+ &ir_module.types,
+ LookupType::Local(LocalType::Value {
+ vector_size: Some(vec_size),
+ kind: crate::ScalarKind::Sint,
+ width: 4,
+ pointer_class: None,
+ }),
+ )?;
+ let id_extended = self.id_gen.next();
+ let mut inst = Instruction::image_query(
+ spirv::Op::ImageQuerySizeLod,
+ extended_size_type_id,
+ id_extended,
+ image_id,
+ );
+ inst.add_operand(self.get_index_constant(0, &ir_module.types)?);
+ block.body.push(inst);
+ let id = self.id_gen.next();
+ block.body.push(Instruction::composite_extract(
+ result_type_id,
+ id,
+ id_extended,
+ &[vec_size as u32 - 1],
+ ));
+ id
+ }
+ Iq::NumSamples => {
+ let id = self.id_gen.next();
+ block.body.push(Instruction::image_query(
+ spirv::Op::ImageQuerySamples,
+ result_type_id,
+ id,
+ image_id,
+ ));
+ id
+ }
+ }
+ }
+ crate::Expression::Relational { .. } | crate::Expression::ArrayLength(_) => {
log::error!("unimplemented {:?}", ir_function.expressions[expr_handle]);
return Err(Error::FeatureNotImplemented("expression"));
}
| diff --git a/tests/in/image-copy.wgsl /dev/null
--- a/tests/in/image-copy.wgsl
+++ /dev/null
@@ -1,16 +0,0 @@
-[[group(0), binding(1)]]
-var image_src: [[access(read)]] texture_storage_2d<rgba8uint>;
-[[group(0), binding(2)]]
-var image_dst: [[access(write)]] texture_storage_1d<r32uint>;
-
-[[stage(compute), workgroup_size(16)]]
-fn main(
- [[builtin(local_invocation_id)]] local_id: vec3<u32>,
- //TODO: https://github.com/gpuweb/gpuweb/issues/1590
- //[[builtin(workgroup_size)]] wg_size: vec3<u32>
-) {
- let dim = textureDimensions(image_src);
- let itc = dim * vec2<i32>(local_id.xy) % vec2<i32>(10, 20);
- let value = textureLoad(image_src, itc);
- textureStore(image_dst, itc.x, value);
-}
diff --git a/tests/in/image-copy.param.ron b/tests/in/image.param.ron
--- a/tests/in/image-copy.param.ron
+++ b/tests/in/image.param.ron
@@ -1,6 +1,6 @@
(
spv_version: (1, 1),
- spv_capabilities: [ Shader, Image1D, Sampled1D ],
+ spv_capabilities: [ Shader, ImageQuery, Image1D, Sampled1D ],
spv_debug: true,
spv_adjust_coordinate_space: false,
msl_custom: false,
diff --git /dev/null b/tests/in/image.wgsl
new file mode 100644
--- /dev/null
+++ b/tests/in/image.wgsl
@@ -0,0 +1,60 @@
+[[group(0), binding(1)]]
+var image_src: [[access(read)]] texture_storage_2d<rgba8uint>;
+[[group(0), binding(2)]]
+var image_dst: [[access(write)]] texture_storage_1d<r32uint>;
+
+[[stage(compute), workgroup_size(16)]]
+fn main(
+ [[builtin(local_invocation_id)]] local_id: vec3<u32>,
+ //TODO: https://github.com/gpuweb/gpuweb/issues/1590
+ //[[builtin(workgroup_size)]] wg_size: vec3<u32>
+) {
+ let dim = textureDimensions(image_src);
+ let itc = dim * vec2<i32>(local_id.xy) % vec2<i32>(10, 20);
+ let value = textureLoad(image_src, itc);
+ textureStore(image_dst, itc.x, value);
+}
+
+[[group(0), binding(0)]]
+var image_1d: texture_1d<f32>;
+[[group(0), binding(1)]]
+var image_2d: texture_2d<f32>;
+[[group(0), binding(2)]]
+var image_2d_array: texture_2d_array<f32>;
+[[group(0), binding(3)]]
+var image_cube: texture_cube<f32>;
+[[group(0), binding(4)]]
+var image_cube_array: texture_cube_array<f32>;
+[[group(0), binding(5)]]
+var image_3d: texture_3d<f32>;
+[[group(0), binding(6)]]
+var image_aa: texture_multisampled_2d<f32>;
+
+[[stage(vertex)]]
+fn queries() -> [[builtin(position)]] vec4<f32> {
+ let dim_1d = textureDimensions(image_1d);
+ let dim_2d = textureDimensions(image_2d);
+ let num_levels_2d = textureNumLevels(image_2d);
+ let dim_2d_lod = textureDimensions(image_2d, 1);
+ let dim_2d_array = textureDimensions(image_2d_array);
+ let num_levels_2d_array = textureNumLevels(image_2d_array);
+ let dim_2d_array_lod = textureDimensions(image_2d_array, 1);
+ let num_layers_2d = textureNumLayers(image_2d_array);
+ let dim_cube = textureDimensions(image_cube);
+ let num_levels_cube = textureNumLevels(image_cube);
+ let dim_cube_lod = textureDimensions(image_cube, 1);
+ let dim_cube_array = textureDimensions(image_cube_array);
+ let num_levels_cube_array = textureNumLevels(image_cube_array);
+ let dim_cube_array_lod = textureDimensions(image_cube_array, 1);
+ let num_layers_cube = textureNumLayers(image_cube_array);
+ let dim_3d = textureDimensions(image_3d);
+ let num_levels_3d = textureNumLevels(image_3d);
+ let dim_3d_lod = textureDimensions(image_3d, 1);
+ let num_samples_aa = textureNumSamples(image_aa);
+
+ let sum = dim_1d + dim_2d.y + dim_2d_lod.y + dim_2d_array.y + dim_2d_array_lod.y +
+ num_layers_2d + dim_cube.y + dim_cube_lod.y + dim_cube_array.y + dim_cube_array_lod.y +
+ num_layers_cube + dim_3d.z + dim_3d_lod.z + num_samples_aa +
+ num_levels_2d + num_levels_2d_array + num_levels_3d + num_levels_cube + num_levels_cube_array;
+ return vec4<f32>(f32(sum));
+}
diff --git a/tests/out/image-copy.msl /dev/null
--- a/tests/out/image-copy.msl
+++ /dev/null
@@ -1,16 +0,0 @@
-#include <metal_stdlib>
-#include <simd/simd.h>
-
-
-struct main1Input {
-};
-kernel void main1(
- metal::uint3 local_id [[thread_position_in_threadgroup]]
-, metal::texture2d<uint, metal::access::read> image_src [[user(fake0)]]
-, metal::texture1d<uint, metal::access::write> image_dst [[user(fake0)]]
-) {
- metal::int2 _e10 = (int2(image_src.get_width(), image_src.get_height()) * static_cast<int2>(local_id.xy)) % metal::int2(10, 20);
- metal::uint4 _e11 = image_src.read(metal::uint2(_e10));
- image_dst.write(_e11, metal::uint(_e10.x));
- return;
-}
diff --git /dev/null b/tests/out/image.msl
new file mode 100644
--- /dev/null
+++ b/tests/out/image.msl
@@ -0,0 +1,32 @@
+#include <metal_stdlib>
+#include <simd/simd.h>
+
+
+struct main1Input {
+};
+kernel void main1(
+ metal::uint3 local_id [[thread_position_in_threadgroup]]
+, metal::texture2d<uint, metal::access::read> image_src [[user(fake0)]]
+, metal::texture1d<uint, metal::access::write> image_dst [[user(fake0)]]
+) {
+ metal::int2 _e10 = (int2(image_src.get_width(), image_src.get_height()) * static_cast<int2>(local_id.xy)) % metal::int2(10, 20);
+ metal::uint4 _e11 = image_src.read(metal::uint2(_e10));
+ image_dst.write(_e11, metal::uint(_e10.x));
+ return;
+}
+
+
+struct queriesOutput {
+ metal::float4 member1 [[position]];
+};
+vertex queriesOutput queries(
+ metal::texture1d<float, metal::access::sample> image_1d [[user(fake0)]]
+, metal::texture2d<float, metal::access::sample> image_2d [[user(fake0)]]
+, metal::texture2d_array<float, metal::access::sample> image_2d_array [[user(fake0)]]
+, metal::texturecube<float, metal::access::sample> image_cube [[user(fake0)]]
+, metal::texturecube_array<float, metal::access::sample> image_cube_array [[user(fake0)]]
+, metal::texture3d<float, metal::access::sample> image_3d [[user(fake0)]]
+, metal::texture2d_ms<float, metal::access::read> image_aa [[user(fake0)]]
+) {
+ return queriesOutput { float4(static_cast<float>((((((((((((((((((int(image_1d.get_width()) + int2(image_2d.get_width(), image_2d.get_height()).y) + int2(image_2d.get_width(1), image_2d.get_height(1)).y) + int2(image_2d_array.get_width(), image_2d_array.get_height()).y) + int2(image_2d_array.get_width(1), image_2d_array.get_height(1)).y) + int(image_2d_array.get_array_size())) + int3(image_cube.get_width()).y) + int3(image_cube.get_width(1)).y) + int3(image_cube_array.get_width()).y) + int3(image_cube_array.get_width(1)).y) + int(image_cube_array.get_array_size())) + int3(image_3d.get_width(), image_3d.get_height(), image_3d.get_depth()).z) + int3(image_3d.get_width(1), image_3d.get_height(1), image_3d.get_depth(1)).z) + int(image_aa.get_num_samples())) + int(image_2d.get_num_mip_levels())) + int(image_2d_array.get_num_mip_levels())) + int(image_3d.get_num_mip_levels())) + int(image_cube.get_num_mip_levels())) + int(image_cube_array.get_num_mip_levels()))) };
+}
diff --git /dev/null b/tests/out/image.spvasm
new file mode 100644
--- /dev/null
+++ b/tests/out/image.spvasm
@@ -0,0 +1,181 @@
+; SPIR-V
+; Version: 1.1
+; Generator: rspirv
+; Bound: 127
+OpCapability ImageQuery
+OpCapability Image1D
+OpCapability Shader
+OpCapability Sampled1D
+%1 = OpExtInstImport "GLSL.std.450"
+OpMemoryModel Logical GLSL450
+OpEntryPoint GLCompute %43 "main" %40
+OpEntryPoint Vertex %61 "queries" %59
+OpExecutionMode %43 LocalSize 16 1 1
+OpSource GLSL 450
+OpName %21 "image_src"
+OpName %23 "image_dst"
+OpName %25 "image_1d"
+OpName %27 "image_2d"
+OpName %29 "image_2d_array"
+OpName %31 "image_cube"
+OpName %33 "image_cube_array"
+OpName %35 "image_3d"
+OpName %37 "image_aa"
+OpName %40 "local_id"
+OpName %43 "main"
+OpName %61 "queries"
+OpDecorate %21 NonWritable
+OpDecorate %21 DescriptorSet 0
+OpDecorate %21 Binding 1
+OpDecorate %23 NonReadable
+OpDecorate %23 DescriptorSet 0
+OpDecorate %23 Binding 2
+OpDecorate %25 DescriptorSet 0
+OpDecorate %25 Binding 0
+OpDecorate %27 DescriptorSet 0
+OpDecorate %27 Binding 1
+OpDecorate %29 DescriptorSet 0
+OpDecorate %29 Binding 2
+OpDecorate %31 DescriptorSet 0
+OpDecorate %31 Binding 3
+OpDecorate %33 DescriptorSet 0
+OpDecorate %33 Binding 4
+OpDecorate %35 DescriptorSet 0
+OpDecorate %35 Binding 5
+OpDecorate %37 DescriptorSet 0
+OpDecorate %37 Binding 6
+OpDecorate %40 BuiltIn LocalInvocationId
+OpDecorate %59 BuiltIn Position
+%2 = OpTypeVoid
+%4 = OpTypeInt 32 1
+%3 = OpConstant %4 10
+%5 = OpConstant %4 20
+%6 = OpConstant %4 1
+%8 = OpTypeInt 32 0
+%7 = OpTypeImage %8 2D 0 0 0 2 Rgba8ui
+%9 = OpTypeImage %8 1D 0 0 0 2 R32ui
+%10 = OpTypeVector %8 3
+%11 = OpTypeVector %4 2
+%13 = OpTypeFloat 32
+%12 = OpTypeImage %13 1D 0 0 0 1 Unknown
+%14 = OpTypeImage %13 2D 0 0 0 1 Unknown
+%15 = OpTypeImage %13 2D 0 1 0 1 Unknown
+%16 = OpTypeImage %13 Cube 0 0 0 1 Unknown
+%17 = OpTypeImage %13 Cube 0 1 0 1 Unknown
+%18 = OpTypeImage %13 3D 0 0 0 1 Unknown
+%19 = OpTypeImage %13 2D 0 0 1 1 Unknown
+%20 = OpTypeVector %13 4
+%22 = OpTypePointer UniformConstant %7
+%21 = OpVariable %22 UniformConstant
+%24 = OpTypePointer UniformConstant %9
+%23 = OpVariable %24 UniformConstant
+%26 = OpTypePointer UniformConstant %12
+%25 = OpVariable %26 UniformConstant
+%28 = OpTypePointer UniformConstant %14
+%27 = OpVariable %28 UniformConstant
+%30 = OpTypePointer UniformConstant %15
+%29 = OpVariable %30 UniformConstant
+%32 = OpTypePointer UniformConstant %16
+%31 = OpVariable %32 UniformConstant
+%34 = OpTypePointer UniformConstant %17
+%33 = OpVariable %34 UniformConstant
+%36 = OpTypePointer UniformConstant %18
+%35 = OpVariable %36 UniformConstant
+%38 = OpTypePointer UniformConstant %19
+%37 = OpVariable %38 UniformConstant
+%41 = OpTypePointer Input %10
+%40 = OpVariable %41 Input
+%44 = OpTypeFunction %2
+%49 = OpTypeVector %8 2
+%55 = OpTypeVector %8 4
+%60 = OpTypePointer Output %20
+%59 = OpVariable %60 Output
+%70 = OpConstant %4 0
+%75 = OpTypeVector %4 3
+%43 = OpFunction %2 None %44
+%39 = OpLabel
+%42 = OpLoad %10 %40
+%45 = OpLoad %7 %21
+%46 = OpLoad %9 %23
+OpBranch %47
+%47 = OpLabel
+%48 = OpImageQuerySize %11 %45
+%50 = OpVectorShuffle %49 %42 %42 0 1
+%51 = OpBitcast %11 %50
+%52 = OpIMul %11 %48 %51
+%53 = OpCompositeConstruct %11 %3 %5
+%54 = OpSMod %11 %52 %53
+%56 = OpImageRead %55 %45 %54
+%57 = OpCompositeExtract %4 %54 0
+OpImageWrite %46 %57 %56
+OpReturn
+OpFunctionEnd
+%61 = OpFunction %2 None %44
+%58 = OpLabel
+%62 = OpLoad %12 %25
+%63 = OpLoad %14 %27
+%64 = OpLoad %15 %29
+%65 = OpLoad %16 %31
+%66 = OpLoad %17 %33
+%67 = OpLoad %18 %35
+%68 = OpLoad %19 %37
+OpBranch %69
+%69 = OpLabel
+%71 = OpImageQuerySizeLod %4 %62 %70
+%72 = OpImageQuerySizeLod %11 %63 %70
+%73 = OpImageQueryLevels %4 %63
+%74 = OpImageQuerySizeLod %11 %63 %6
+%76 = OpImageQuerySizeLod %75 %64 %70
+%77 = OpVectorShuffle %11 %76 %76 0 1
+%78 = OpImageQueryLevels %4 %64
+%79 = OpImageQuerySizeLod %75 %64 %6
+%80 = OpVectorShuffle %11 %79 %79 0 1
+%81 = OpImageQuerySizeLod %75 %64 %70
+%82 = OpCompositeExtract %4 %81 2
+%83 = OpImageQuerySizeLod %11 %65 %70
+%84 = OpVectorShuffle %75 %83 %83 0 0 0
+%85 = OpImageQueryLevels %4 %65
+%86 = OpImageQuerySizeLod %11 %65 %6
+%87 = OpVectorShuffle %75 %86 %86 0 0 0
+%88 = OpImageQuerySizeLod %75 %66 %70
+%89 = OpImageQueryLevels %4 %66
+%90 = OpImageQuerySizeLod %75 %66 %6
+%91 = OpImageQuerySizeLod %75 %66 %70
+%92 = OpCompositeExtract %4 %91 2
+%93 = OpImageQuerySizeLod %75 %67 %70
+%94 = OpImageQueryLevels %4 %67
+%95 = OpImageQuerySizeLod %75 %67 %6
+%96 = OpImageQuerySamples %4 %68
+%97 = OpCompositeExtract %4 %72 1
+%98 = OpIAdd %4 %71 %97
+%99 = OpCompositeExtract %4 %74 1
+%100 = OpIAdd %4 %98 %99
+%101 = OpCompositeExtract %4 %77 1
+%102 = OpIAdd %4 %100 %101
+%103 = OpCompositeExtract %4 %80 1
+%104 = OpIAdd %4 %102 %103
+%105 = OpIAdd %4 %104 %82
+%106 = OpCompositeExtract %4 %84 1
+%107 = OpIAdd %4 %105 %106
+%108 = OpCompositeExtract %4 %87 1
+%109 = OpIAdd %4 %107 %108
+%110 = OpCompositeExtract %4 %88 1
+%111 = OpIAdd %4 %109 %110
+%112 = OpCompositeExtract %4 %90 1
+%113 = OpIAdd %4 %111 %112
+%114 = OpIAdd %4 %113 %92
+%115 = OpCompositeExtract %4 %93 2
+%116 = OpIAdd %4 %114 %115
+%117 = OpCompositeExtract %4 %95 2
+%118 = OpIAdd %4 %116 %117
+%119 = OpIAdd %4 %118 %96
+%120 = OpIAdd %4 %119 %73
+%121 = OpIAdd %4 %120 %78
+%122 = OpIAdd %4 %121 %94
+%123 = OpIAdd %4 %122 %85
+%124 = OpIAdd %4 %123 %89
+%125 = OpConvertSToF %13 %124
+%126 = OpCompositeConstruct %20 %125 %125 %125 %125
+OpStore %59 %126
+OpReturn
+OpFunctionEnd
\ No newline at end of file
diff --git a/tests/snapshots.rs b/tests/snapshots.rs
--- a/tests/snapshots.rs
+++ b/tests/snapshots.rs
@@ -235,8 +235,7 @@ fn convert_wgsl() {
Targets::SPIRV | Targets::METAL | Targets::IR | Targets::ANALYSIS,
),
("shadow", Targets::SPIRV | Targets::METAL | Targets::GLSL),
- //SPIR-V is blocked by https://github.com/gfx-rs/naga/issues/646
- ("image-copy", Targets::METAL),
+ ("image", Targets::SPIRV | Targets::METAL),
("texture-array", Targets::SPIRV | Targets::METAL),
("operators", Targets::SPIRV | Targets::METAL | Targets::GLSL),
(
| Implement image queries in SPIR-V backend
`Expression::ImageQuery { .. }` support is missing
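The patch above resolves this by lowering each IR image query to a SPIR-V query opcode. The core of that mapping can be sketched as follows — the enum shape and function signature here are simplified assumptions; only the opcode choices come from the diff:

```rust
// Simplified stand-in for naga's IR image-query variants.
enum ImageQuery {
    Size { has_level: bool },
    NumLevels,
    NumLayers,
    NumSamples,
}

// Which SPIR-V instruction each query lowers to.
fn spirv_op(query: &ImageQuery, is_storage: bool) -> &'static str {
    match *query {
        // Storage images have no mip chain, so the plain size query;
        // sampled images take an explicit LOD operand instead.
        ImageQuery::Size { .. } if is_storage => "OpImageQuerySize",
        ImageQuery::Size { .. } => "OpImageQuerySizeLod",
        // The layer count is the last component of the extended size
        // vector returned for an arrayed image at LOD 0.
        ImageQuery::NumLayers => "OpImageQuerySizeLod",
        ImageQuery::NumLevels => "OpImageQueryLevels",
        ImageQuery::NumSamples => "OpImageQuerySamples",
    }
}

fn main() {
    assert_eq!(spirv_op(&ImageQuery::NumLevels, false), "OpImageQueryLevels");
    assert_eq!(spirv_op(&ImageQuery::Size { has_level: true }, false), "OpImageQuerySizeLod");
    assert_eq!(spirv_op(&ImageQuery::Size { has_level: false }, true), "OpImageQuerySize");
    println!("ok");
}
```

All of these require the `ImageQuery` SPIR-V capability, which is why the test's `.param.ron` adds it to `spv_capabilities`.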
| 2021-04-22T13:42:10 | 0.3 | 77a64da189e6ed2c4631ca03cb01b8d8124c4dad | [
"convert_wgsl"
] | [
"arena::tests::append_non_unique",
"arena::tests::append_unique",
"arena::tests::fetch_or_append_non_unique",
"arena::tests::fetch_or_append_unique",
"back::msl::test_error_size",
"back::spv::layout::test_physical_layout_in_words",
"back::spv::writer::test_write_physical_layout",
"front::glsl::constan... | [
"back::msl::writer::test_stack_size"
] | [] | 88 | |
gfx-rs/naga | 723 | gfx-rs__naga-723 | [
"635"
] | daf4c82376e10d03488d7c71773b2acc1b9f7de4 | diff --git a/src/back/spv/instructions.rs b/src/back/spv/instructions.rs
--- a/src/back/spv/instructions.rs
+++ b/src/back/spv/instructions.rs
@@ -433,12 +433,12 @@ impl super::Instruction {
}
pub(super) fn store(
- pointer_type_id: Word,
+ pointer_id: Word,
object_id: Word,
memory_access: Option<spirv::MemoryAccess>,
) -> Self {
let mut instruction = Self::new(Op::Store);
- instruction.add_operand(pointer_type_id);
+ instruction.add_operand(pointer_id);
instruction.add_operand(object_id);
if let Some(memory_access) = memory_access {
diff --git a/src/back/spv/writer.rs b/src/back/spv/writer.rs
--- a/src/back/spv/writer.rs
+++ b/src/back/spv/writer.rs
@@ -70,6 +70,7 @@ struct Function {
signature: Option<Instruction>,
parameters: Vec<Instruction>,
variables: crate::FastHashMap<Handle<crate::LocalVariable>, LocalVariable>,
+ internal_variables: Vec<LocalVariable>,
blocks: Vec<Block>,
entry_point_context: Option<EntryPointContext>,
}
diff --git a/src/back/spv/writer.rs b/src/back/spv/writer.rs
--- a/src/back/spv/writer.rs
+++ b/src/back/spv/writer.rs
@@ -86,6 +87,9 @@ impl Function {
for local_var in self.variables.values() {
local_var.instruction.to_words(sink);
}
+ for internal_var in self.internal_variables.iter() {
+ internal_var.instruction.to_words(sink);
+ }
}
for instruction in block.body.iter() {
instruction.to_words(sink);
diff --git a/src/back/spv/writer.rs b/src/back/spv/writer.rs
--- a/src/back/spv/writer.rs
+++ b/src/back/spv/writer.rs
@@ -339,6 +343,20 @@ impl Writer {
}
}
+ fn get_expression_type_id(
+ &mut self,
+ arena: &Arena<crate::Type>,
+ tr: &TypeResolution,
+ ) -> Result<Word, Error> {
+ let lookup_ty = match *tr {
+ TypeResolution::Handle(ty_handle) => LookupType::Handle(ty_handle),
+ TypeResolution::Value(ref inner) => {
+ LookupType::Local(self.physical_layout.make_local(inner).unwrap())
+ }
+ };
+ self.get_type_id(arena, lookup_ty)
+ }
+
fn get_pointer_id(
&mut self,
arena: &Arena<crate::Type>,
diff --git a/src/back/spv/writer.rs b/src/back/spv/writer.rs
--- a/src/back/spv/writer.rs
+++ b/src/back/spv/writer.rs
@@ -649,11 +667,6 @@ impl Writer {
};
self.check(exec_model.required_capabilities())?;
- if self.flags.contains(WriterFlags::DEBUG) {
- self.debugs
- .push(Instruction::name(function_id, &entry_point.name));
- }
-
Ok(Instruction::entry_point(
exec_model,
function_id,
diff --git a/src/back/spv/writer.rs b/src/back/spv/writer.rs
--- a/src/back/spv/writer.rs
+++ b/src/back/spv/writer.rs
@@ -1288,23 +1301,74 @@ impl Writer {
})
}
+ #[allow(clippy::too_many_arguments)]
+ fn promote_access_expression_to_variable(
+ &mut self,
+ ir_types: &Arena<crate::Type>,
+ result_type_id: Word,
+ container_id: Word,
+ container_resolution: &TypeResolution,
+ index_id: Word,
+ element_ty: Handle<crate::Type>,
+ block: &mut Block,
+ ) -> Result<(Word, LocalVariable), Error> {
+ let container_type_id = self.get_expression_type_id(ir_types, container_resolution)?;
+ let pointer_type_id = self.id_gen.next();
+ Instruction::type_pointer(
+ pointer_type_id,
+ spirv::StorageClass::Function,
+ container_type_id,
+ )
+ .to_words(&mut self.logical_layout.declarations);
+
+ let variable = {
+ let id = self.id_gen.next();
+ LocalVariable {
+ id,
+ instruction: Instruction::variable(
+ pointer_type_id,
+ id,
+ spirv::StorageClass::Function,
+ None,
+ ),
+ }
+ };
+ block
+ .body
+ .push(Instruction::store(variable.id, container_id, None));
+
+ let element_pointer_id = self.id_gen.next();
+ let element_pointer_type_id =
+ self.get_pointer_id(ir_types, element_ty, spirv::StorageClass::Function)?;
+ block.body.push(Instruction::access_chain(
+ element_pointer_type_id,
+ element_pointer_id,
+ variable.id,
+ &[index_id],
+ ));
+ let id = self.id_gen.next();
+ block.body.push(Instruction::load(
+ result_type_id,
+ id,
+ element_pointer_id,
+ None,
+ ));
+
+ Ok((id, variable))
+ }
+
/// Cache an expression for a value.
- fn cache_expression_value<'a>(
+ fn cache_expression_value(
&mut self,
- ir_module: &'a crate::Module,
+ ir_module: &crate::Module,
ir_function: &crate::Function,
fun_info: &FunctionInfo,
expr_handle: Handle<crate::Expression>,
block: &mut Block,
function: &mut Function,
) -> Result<(), Error> {
- let result_lookup_ty = match fun_info[expr_handle].ty {
- TypeResolution::Handle(ty_handle) => LookupType::Handle(ty_handle),
- TypeResolution::Value(ref inner) => {
- LookupType::Local(self.physical_layout.make_local(inner).unwrap())
- }
- };
- let result_type_id = self.get_type_id(&ir_module.types, result_lookup_ty)?;
+ let result_type_id =
+ self.get_expression_type_id(&ir_module.types, &fun_info[expr_handle].ty)?;
let id = match ir_function.expressions[expr_handle] {
crate::Expression::Access { base, index } => {
diff --git a/src/back/spv/writer.rs b/src/back/spv/writer.rs
--- a/src/back/spv/writer.rs
+++ b/src/back/spv/writer.rs
@@ -1318,10 +1382,10 @@ impl Writer {
0
} else {
let index_id = self.cached[index];
+ let base_id = self.cached[base];
match *fun_info[base].ty.inner_with(&ir_module.types) {
crate::TypeInner::Vector { .. } => {
let id = self.id_gen.next();
- let base_id = self.cached[base];
block.body.push(Instruction::vector_extract_dynamic(
result_type_id,
id,
diff --git a/src/back/spv/writer.rs b/src/back/spv/writer.rs
--- a/src/back/spv/writer.rs
+++ b/src/back/spv/writer.rs
@@ -1330,7 +1394,21 @@ impl Writer {
));
id
}
- //TODO: support `crate::TypeInner::Array { .. }` ?
+ crate::TypeInner::Array {
+ base: ty_element, ..
+ } => {
+ let (id, variable) = self.promote_access_expression_to_variable(
+ &ir_module.types,
+ result_type_id,
+ base_id,
+ &fun_info[base].ty,
+ index_id,
+ ty_element,
+ block,
+ )?;
+ function.internal_variables.push(variable);
+ id
+ }
ref other => {
log::error!("Unable to access {:?}", other);
return Err(Error::FeatureNotImplemented("access for type"));
| diff --git /dev/null b/tests/in/access.param.ron
new file mode 100644
--- /dev/null
+++ b/tests/in/access.param.ron
@@ -0,0 +1,7 @@
+(
+ spv_version: (1, 1),
+ spv_capabilities: [ Shader, Image1D, Sampled1D ],
+ spv_debug: true,
+ spv_adjust_coordinate_space: false,
+ msl_custom: false,
+)
diff --git /dev/null b/tests/in/access.wgsl
new file mode 100644
--- /dev/null
+++ b/tests/in/access.wgsl
@@ -0,0 +1,8 @@
+// This snapshot tests accessing various containers, dereferencing pointers.
+
+[[stage(vertex)]]
+fn foo([[builtin(vertex_index)]] vi: u32) -> [[builtin(position)]] vec4<f32> {
+ let array = array<i32, 5>(1, 2, 3, 4, 5);
+ let value = array[vi];
+ return vec4<f32>(vec4<i32>(value));
+}
diff --git a/tests/in/interpolate.param.ron b/tests/in/interpolate.param.ron
--- a/tests/in/interpolate.param.ron
+++ b/tests/in/interpolate.param.ron
@@ -4,5 +4,5 @@
spv_debug: true,
spv_adjust_coordinate_space: true,
msl_custom: false,
- glsl_desktop_version: Some(400)
+ glsl_desktop_version: Some(400)
)
diff --git a/tests/in/operators.wgsl b/tests/in/operators.wgsl
--- a/tests/in/operators.wgsl
+++ b/tests/in/operators.wgsl
@@ -1,6 +1,6 @@
[[stage(vertex)]]
fn splat() -> [[builtin(position)]] vec4<f32> {
- let a = (1.0 + vec2<f32>(2.0) - 3.0) / 4.0;
- let b = vec4<i32>(5) % 2;
- return a.xyxy + vec4<f32>(b);
+ let a = (1.0 + vec2<f32>(2.0) - 3.0) / 4.0;
+ let b = vec4<i32>(5) % 2;
+ return a.xyxy + vec4<f32>(b);
}
diff --git a/tests/in/shadow.param.ron b/tests/in/shadow.param.ron
--- a/tests/in/shadow.param.ron
+++ b/tests/in/shadow.param.ron
@@ -1,8 +1,8 @@
(
- spv_flow_dump_prefix: "",
- spv_version: (1, 2),
- spv_capabilities: [ Shader ],
- spv_debug: true,
- spv_adjust_coordinate_space: true,
- msl_custom: false,
+ spv_flow_dump_prefix: "",
+ spv_version: (1, 2),
+ spv_capabilities: [ Shader ],
+ spv_debug: true,
+ spv_adjust_coordinate_space: true,
+ msl_custom: false,
)
diff --git /dev/null b/tests/out/access.msl
new file mode 100644
--- /dev/null
+++ b/tests/out/access.msl
@@ -0,0 +1,15 @@
+#include <metal_stdlib>
+#include <simd/simd.h>
+
+typedef int type3[5];
+
+struct fooInput {
+};
+struct fooOutput {
+ metal::float4 member [[position]];
+};
+vertex fooOutput foo(
+ metal::uint vi [[vertex_id]]
+) {
+ return fooOutput { static_cast<float4>(type3 {1, 2, 3, 4, 5}[vi]) };
+}
diff --git /dev/null b/tests/out/access.spvasm
new file mode 100644
--- /dev/null
+++ b/tests/out/access.spvasm
@@ -0,0 +1,50 @@
+; SPIR-V
+; Version: 1.1
+; Generator: rspirv
+; Bound: 31
+OpCapability Image1D
+OpCapability Shader
+OpCapability Sampled1D
+%1 = OpExtInstImport "GLSL.std.450"
+OpMemoryModel Logical GLSL450
+OpEntryPoint Vertex %19 "foo" %14 %17
+OpSource GLSL 450
+OpName %14 "vi"
+OpName %19 "foo"
+OpDecorate %12 ArrayStride 4
+OpDecorate %14 BuiltIn VertexIndex
+OpDecorate %17 BuiltIn Position
+%2 = OpTypeVoid
+%4 = OpTypeInt 32 1
+%3 = OpConstant %4 5
+%5 = OpConstant %4 1
+%6 = OpConstant %4 2
+%7 = OpConstant %4 3
+%8 = OpConstant %4 4
+%9 = OpTypeInt 32 0
+%11 = OpTypeFloat 32
+%10 = OpTypeVector %11 4
+%12 = OpTypeArray %4 %3
+%15 = OpTypePointer Input %9
+%14 = OpVariable %15 Input
+%18 = OpTypePointer Output %10
+%17 = OpVariable %18 Output
+%20 = OpTypeFunction %2
+%23 = OpTypePointer Function %12
+%26 = OpTypePointer Function %4
+%28 = OpTypeVector %4 4
+%19 = OpFunction %2 None %20
+%13 = OpLabel
+%24 = OpVariable %23 Function
+%16 = OpLoad %9 %14
+OpBranch %21
+%21 = OpLabel
+%22 = OpCompositeConstruct %12 %5 %6 %7 %8 %3
+OpStore %24 %22
+%25 = OpAccessChain %26 %24 %16
+%27 = OpLoad %4 %25
+%29 = OpCompositeConstruct %28 %27 %27 %27 %27
+%30 = OpConvertSToF %10 %29
+OpStore %17 %30
+OpReturn
+OpFunctionEnd
\ No newline at end of file
diff --git a/tests/out/boids.spvasm b/tests/out/boids.spvasm
--- a/tests/out/boids.spvasm
+++ b/tests/out/boids.spvasm
@@ -38,7 +38,6 @@ OpName %36 "vel"
OpName %37 "i"
OpName %40 "global_invocation_id"
OpName %43 "main"
-OpName %43 "main"
OpMemberDecorate %16 0 Offset 0
OpMemberDecorate %16 1 Offset 8
OpDecorate %17 Block
diff --git a/tests/out/collatz.spvasm b/tests/out/collatz.spvasm
--- a/tests/out/collatz.spvasm
+++ b/tests/out/collatz.spvasm
@@ -17,7 +17,6 @@ OpName %15 "i"
OpName %18 "collatz_iterations"
OpName %45 "global_id"
OpName %48 "main"
-OpName %48 "main"
OpDecorate %8 ArrayStride 4
OpDecorate %9 Block
OpMemberDecorate %9 0 Offset 0
diff --git a/tests/out/interpolate.spvasm b/tests/out/interpolate.spvasm
--- a/tests/out/interpolate.spvasm
+++ b/tests/out/interpolate.spvasm
@@ -25,7 +25,6 @@ OpName %33 "centroid"
OpName %35 "sample"
OpName %37 "perspective"
OpName %38 "main"
-OpName %38 "main"
OpName %76 "position"
OpName %79 "flat"
OpName %82 "linear"
diff --git a/tests/out/interpolate.spvasm b/tests/out/interpolate.spvasm
--- a/tests/out/interpolate.spvasm
+++ b/tests/out/interpolate.spvasm
@@ -33,7 +32,6 @@ OpName %85 "centroid"
OpName %88 "sample"
OpName %91 "perspective"
OpName %93 "main"
-OpName %93 "main"
OpMemberDecorate %23 0 Offset 0
OpMemberDecorate %23 1 Offset 16
OpMemberDecorate %23 2 Offset 20
diff --git a/tests/out/quad.spvasm b/tests/out/quad.spvasm
--- a/tests/out/quad.spvasm
+++ b/tests/out/quad.spvasm
@@ -21,10 +21,8 @@ OpName %22 "uv"
OpName %24 "uv"
OpName %26 "position"
OpName %28 "main"
-OpName %28 "main"
OpName %48 "uv"
OpName %51 "main"
-OpName %51 "main"
OpMemberDecorate %9 0 Offset 0
OpMemberDecorate %9 1 Offset 16
OpDecorate %12 DescriptorSet 0
diff --git a/tests/out/shadow.spvasm b/tests/out/shadow.spvasm
--- a/tests/out/shadow.spvasm
+++ b/tests/out/shadow.spvasm
@@ -29,7 +29,6 @@ OpName %71 "i"
OpName %74 "raw_normal"
OpName %77 "position"
OpName %82 "fs_main"
-OpName %82 "fs_main"
OpDecorate %14 Block
OpMemberDecorate %14 0 Offset 0
OpMemberDecorate %17 0 Offset 0
diff --git a/tests/out/texture-array.spvasm b/tests/out/texture-array.spvasm
--- a/tests/out/texture-array.spvasm
+++ b/tests/out/texture-array.spvasm
@@ -16,7 +16,6 @@ OpName %14 "sampler"
OpName %16 "pc"
OpName %19 "tex_coord"
OpName %24 "main"
-OpName %24 "main"
OpDecorate %8 Block
OpMemberDecorate %8 0 Offset 0
OpDecorate %11 DescriptorSet 0
diff --git a/tests/snapshots.rs b/tests/snapshots.rs
--- a/tests/snapshots.rs
+++ b/tests/snapshots.rs
@@ -229,6 +229,7 @@ fn convert_wgsl() {
"interpolate",
Targets::SPIRV | Targets::METAL | Targets::GLSL,
),
+ ("access", Targets::SPIRV | Targets::METAL),
];
for &(name, targets) in inputs.iter() {
| Indexing constant arrays in SPIR-V
```wgsl
struct VertexOutput {
[[builtin(position)]] position: vec4<f32>;
[[location(0)]] tex_coords: vec2<f32>;
};
[[stage(vertex)]]
fn vs_main([[builtin(vertex_index)]] index: u32) -> VertexOutput {
const triangle = array<vec2<f32>, 3u>(
vec2<f32>(-1.0, -1.0),
vec2<f32>( 3.0, -1.0),
vec2<f32>(-1.0, 3.0)
);
var out: VertexOutput;
out.position = vec4<f32>(triangle[index], 0.0, 1.0);
out.tex_coords = 0.5 * triangle[index] + vec2<f32>(0.5, 0.5);
return out;
}
[[group(0), binding(0)]]
var r_color: texture_2d<f32>;
[[group(0), binding(1)]]
var r_sampler: sampler;
[[stage(fragment)]]
fn fs_main(in: VertexOutput) -> [[location(0)]] vec4<f32> {
return textureSample(r_color, r_sampler, in.tex_coords);
}
```
See https://github.com/gfx-rs/wgpu/issues/1295
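The promotion the patch introduces is easiest to see by analogy: SPIR-V cannot apply `OpAccessChain` to an rvalue composite, so the writer first stores it into a Function-class `OpVariable`, then loads through an element pointer. A rough Rust sketch of that store/access-chain/load sequence (the names below are illustrative, not naga's API):

```rust
// Illustrative only: mirrors the OpStore / OpAccessChain / OpLoad
// sequence emitted by `promote_access_expression_to_variable`.
fn promoted_dynamic_index(index: usize) -> i32 {
    let composite = [1, 2, 3, 4, 5]; // OpCompositeConstruct
    let variable = composite; // OpVariable (Function storage) + OpStore
    let element = &variable[index % variable.len()]; // OpAccessChain
    *element // OpLoad
}
```

The `% variable.len()` guard only keeps the sketch panic-free; the shader in the issue relies on `vertex_index` being in range.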
| 2021-04-16T11:12:42 | 0.3 | 77a64da189e6ed2c4631ca03cb01b8d8124c4dad | [
"convert_wgsl"
] | [
"arena::tests::append_unique",
"arena::tests::fetch_or_append_non_unique",
"arena::tests::append_non_unique",
"arena::tests::fetch_or_append_unique",
"back::msl::test_error_size",
"back::spv::writer::test_write_physical_layout",
"back::spv::layout::test_physical_layout_in_words",
"back::spv::layout::t... | [
"back::msl::writer::test_stack_size"
] | [] | 89 | |
dimforge/nalgebra | 1,384 | dimforge__nalgebra-1384 | [
"1380"
] | a803815bd1d22a052690cfde3ff7fae26cd91152 | diff --git a/src/linalg/inverse.rs b/src/linalg/inverse.rs
--- a/src/linalg/inverse.rs
+++ b/src/linalg/inverse.rs
@@ -145,13 +145,44 @@ where
{
let m = m.as_slice();
- out[(0, 0)] = m[5].clone() * m[10].clone() * m[15].clone()
+ let cofactor00 = m[5].clone() * m[10].clone() * m[15].clone()
- m[5].clone() * m[11].clone() * m[14].clone()
- m[9].clone() * m[6].clone() * m[15].clone()
+ m[9].clone() * m[7].clone() * m[14].clone()
+ m[13].clone() * m[6].clone() * m[11].clone()
- m[13].clone() * m[7].clone() * m[10].clone();
+ let cofactor01 = -m[4].clone() * m[10].clone() * m[15].clone()
+ + m[4].clone() * m[11].clone() * m[14].clone()
+ + m[8].clone() * m[6].clone() * m[15].clone()
+ - m[8].clone() * m[7].clone() * m[14].clone()
+ - m[12].clone() * m[6].clone() * m[11].clone()
+ + m[12].clone() * m[7].clone() * m[10].clone();
+
+ let cofactor02 = m[4].clone() * m[9].clone() * m[15].clone()
+ - m[4].clone() * m[11].clone() * m[13].clone()
+ - m[8].clone() * m[5].clone() * m[15].clone()
+ + m[8].clone() * m[7].clone() * m[13].clone()
+ + m[12].clone() * m[5].clone() * m[11].clone()
+ - m[12].clone() * m[7].clone() * m[9].clone();
+
+ let cofactor03 = -m[4].clone() * m[9].clone() * m[14].clone()
+ + m[4].clone() * m[10].clone() * m[13].clone()
+ + m[8].clone() * m[5].clone() * m[14].clone()
+ - m[8].clone() * m[6].clone() * m[13].clone()
+ - m[12].clone() * m[5].clone() * m[10].clone()
+ + m[12].clone() * m[6].clone() * m[9].clone();
+
+ let det = m[0].clone() * cofactor00.clone()
+ + m[1].clone() * cofactor01.clone()
+ + m[2].clone() * cofactor02.clone()
+ + m[3].clone() * cofactor03.clone();
+
+ if det.is_zero() {
+ return false;
+ }
+ out[(0, 0)] = cofactor00;
+
out[(1, 0)] = -m[1].clone() * m[10].clone() * m[15].clone()
+ m[1].clone() * m[11].clone() * m[14].clone()
+ m[9].clone() * m[2].clone() * m[15].clone()
diff --git a/src/linalg/inverse.rs b/src/linalg/inverse.rs
--- a/src/linalg/inverse.rs
+++ b/src/linalg/inverse.rs
@@ -173,12 +204,7 @@ where
- m[9].clone() * m[2].clone() * m[7].clone()
+ m[9].clone() * m[3].clone() * m[6].clone();
- out[(0, 1)] = -m[4].clone() * m[10].clone() * m[15].clone()
- + m[4].clone() * m[11].clone() * m[14].clone()
- + m[8].clone() * m[6].clone() * m[15].clone()
- - m[8].clone() * m[7].clone() * m[14].clone()
- - m[12].clone() * m[6].clone() * m[11].clone()
- + m[12].clone() * m[7].clone() * m[10].clone();
+ out[(0, 1)] = cofactor01;
out[(1, 1)] = m[0].clone() * m[10].clone() * m[15].clone()
- m[0].clone() * m[11].clone() * m[14].clone()
diff --git a/src/linalg/inverse.rs b/src/linalg/inverse.rs
--- a/src/linalg/inverse.rs
+++ b/src/linalg/inverse.rs
@@ -201,12 +227,7 @@ where
+ m[8].clone() * m[2].clone() * m[7].clone()
- m[8].clone() * m[3].clone() * m[6].clone();
- out[(0, 2)] = m[4].clone() * m[9].clone() * m[15].clone()
- - m[4].clone() * m[11].clone() * m[13].clone()
- - m[8].clone() * m[5].clone() * m[15].clone()
- + m[8].clone() * m[7].clone() * m[13].clone()
- + m[12].clone() * m[5].clone() * m[11].clone()
- - m[12].clone() * m[7].clone() * m[9].clone();
+ out[(0, 2)] = cofactor02;
out[(1, 2)] = -m[0].clone() * m[9].clone() * m[15].clone()
+ m[0].clone() * m[11].clone() * m[13].clone()
diff --git a/src/linalg/inverse.rs b/src/linalg/inverse.rs
--- a/src/linalg/inverse.rs
+++ b/src/linalg/inverse.rs
@@ -222,12 +243,7 @@ where
+ m[12].clone() * m[1].clone() * m[7].clone()
- m[12].clone() * m[3].clone() * m[5].clone();
- out[(0, 3)] = -m[4].clone() * m[9].clone() * m[14].clone()
- + m[4].clone() * m[10].clone() * m[13].clone()
- + m[8].clone() * m[5].clone() * m[14].clone()
- - m[8].clone() * m[6].clone() * m[13].clone()
- - m[12].clone() * m[5].clone() * m[10].clone()
- + m[12].clone() * m[6].clone() * m[9].clone();
+ out[(0, 3)] = cofactor03;
out[(3, 2)] = -m[0].clone() * m[5].clone() * m[11].clone()
+ m[0].clone() * m[7].clone() * m[9].clone()
diff --git a/src/linalg/inverse.rs b/src/linalg/inverse.rs
--- a/src/linalg/inverse.rs
+++ b/src/linalg/inverse.rs
@@ -257,21 +273,12 @@ where
+ m[8].clone() * m[1].clone() * m[6].clone()
- m[8].clone() * m[2].clone() * m[5].clone();
- let det = m[0].clone() * out[(0, 0)].clone()
- + m[1].clone() * out[(0, 1)].clone()
- + m[2].clone() * out[(0, 2)].clone()
- + m[3].clone() * out[(0, 3)].clone();
+ let inv_det = T::one() / det;
- if !det.is_zero() {
- let inv_det = T::one() / det;
-
- for j in 0..4 {
- for i in 0..4 {
- out[(i, j)] *= inv_det.clone();
- }
+ for j in 0..4 {
+ for i in 0..4 {
+ out[(i, j)] *= inv_det.clone();
}
- true
- } else {
- false
}
+ true
}
| diff --git a/tests/core/matrix.rs b/tests/core/matrix.rs
--- a/tests/core/matrix.rs
+++ b/tests/core/matrix.rs
@@ -1263,6 +1263,16 @@ fn column_iterator_double_ended_mut() {
assert_eq!(col_iter_mut.next(), None);
}
+#[test]
+fn test_inversion_failure_leaves_matrix4_unchanged() {
+ let mut mat = na::Matrix4::new(
+ 1.0, 2.0, 3.0, 4.0, 2.0, 4.0, 6.0, 8.0, 3.0, 6.0, 9.0, 12.0, 4.0, 8.0, 12.0, 16.0,
+ );
+ let expected = mat.clone();
+ assert!(!mat.try_inverse_mut());
+ assert_eq!(mat, expected);
+}
+
#[test]
#[cfg(feature = "rayon")]
fn parallel_column_iteration() {
| Inverting a 4x4 matrix with `try_inverse_mut` doesn't leave `self` unchanged if the inversion fails.
Calling `try_inverse_mut` on a 4x4 matrix uses [`do_inverse4`](https://github.com/dimforge/nalgebra/blob/a803815bd1d22a052690cfde3ff7fae26cd91152/src/linalg/inverse.rs#L139) which modifies `out` before checking the determinant.
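The patch resolves this by computing the first-row cofactors and the determinant before anything is written into `out`, returning early when the determinant is zero. A minimal 2×2 sketch of that order of operations (illustrative names, not nalgebra's implementation):

```rust
// Sketch of the fix pattern: check the determinant *before* mutating,
// so a failed inversion leaves the input untouched.
fn try_invert_2x2(m: &mut [[f64; 2]; 2]) -> bool {
    let [[a, b], [c, d]] = *m;
    let det = a * d - b * c;
    if det == 0.0 {
        return false; // `m` is left exactly as it was
    }
    let inv = 1.0 / det;
    *m = [[d * inv, -b * inv], [-c * inv, a * inv]];
    true
}
```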
| 2024-04-19T23:01:31 | 0.32 | a803815bd1d22a052690cfde3ff7fae26cd91152 | [
"core::matrix::test_inversion_failure_leaves_matrix4_unchanged"
] | [
"base::indexing::dimrange_range_usize",
"base::indexing::dimrange_rangefrom_dimname",
"base::indexing::dimrange_rangefull",
"base::indexing::dimrange_rangefrom_usize",
"base::indexing::dimrange_rangeinclusive_usize",
"base::indexing::dimrange_rangeto_usize",
"base::indexing::dimrange_rangetoinclusive_us... | [] | [
"geometry::rotation::proptest_tests::slerp_takes_shortest_path_3"
] | 90 | |
dimforge/nalgebra | 1,369 | dimforge__nalgebra-1369 | [
"1368"
] | 749a9fee17028291fd47b8ea3c8d3baab53229a4 | diff --git a/src/linalg/householder.rs b/src/linalg/householder.rs
--- a/src/linalg/householder.rs
+++ b/src/linalg/householder.rs
@@ -34,6 +34,17 @@ pub fn reflection_axis_mut<T: ComplexField, D: Dim, S: StorageMut<T, D>>(
if !factor.is_zero() {
column.unscale_mut(factor.sqrt());
+
+ // Normalize again, making sure the vector is unit-sized.
+ // If `factor` had a very small value, the first normalization
+ // (dividing by `factor.sqrt()`) might end up with a slightly
+ // non-unit vector (especially when using 32-bits float).
+ // Decompositions strongly rely on that unit-vector property,
+ // so we run a second normalization (that is much more numerically
+ // stable since the norm is close to 1) to ensure it has a unit
+ // size.
+ let _ = column.normalize_mut();
+
(-signed_norm, true)
} else {
// TODO: not sure why we don't have a - sign here.
| diff --git a/tests/linalg/eigen.rs b/tests/linalg/eigen.rs
--- a/tests/linalg/eigen.rs
+++ b/tests/linalg/eigen.rs
@@ -1,4 +1,4 @@
-use na::DMatrix;
+use na::{DMatrix, Matrix3};
#[cfg(feature = "proptest-support")]
mod proptest_tests {
diff --git a/tests/linalg/eigen.rs b/tests/linalg/eigen.rs
--- a/tests/linalg/eigen.rs
+++ b/tests/linalg/eigen.rs
@@ -116,6 +116,31 @@ fn symmetric_eigen_singular_24x24() {
);
}
+// Regression test for #1368
+#[test]
+fn very_small_deviation_from_identity_issue_1368() {
+ let m = Matrix3::<f32>::new(
+ 1.0,
+ 3.1575704e-23,
+ 8.1146196e-23,
+ 3.1575704e-23,
+ 1.0,
+ 1.7471054e-22,
+ 8.1146196e-23,
+ 1.7471054e-22,
+ 1.0,
+ );
+
+ for v in m
+ .try_symmetric_eigen(f32::EPSILON, 0)
+ .unwrap()
+ .eigenvalues
+ .into_iter()
+ {
+ assert_relative_eq!(*v, 1.);
+ }
+}
+
// #[cfg(feature = "arbitrary")]
// quickcheck! {
// TODO: full eigendecomposition is not implemented yet because of its complexity when some
| SVD Computes Wrong Singular Values for Very Small-Valued Matrices
This should be reproducible with `nalgebra = { version = "0.32.4", features = ["rand"] }`
```rust
use nalgebra::Matrix3;
fn main() {
let id = Matrix3::<f32>::identity();
loop {
let r = Matrix3::<f32>::new_random();
let m = id + r * 0.000000000000000000001;
if m == id {
panic!("too small")
}
let svd = m.svd_unordered(true, true);
if svd.singular_values.iter().any(|s| (s - 1.).abs() > 0.1) {
println!("Input Matrix: {m:#?}");
println!("Singular Values: {:#?}", svd.singular_values);
panic!();
}
}
}
```
Example output:
```
Input Matrix: [
[
1.0,
6.23101e-24,
2.2194742e-23,
],
[
7.9115625e-24,
1.0,
6.47089e-22,
],
[
1.922059e-23,
9.848782e-22,
1.0,
],
]
Singular Values: [
[
2.1223645,
1.0000001,
1.0,
],
]
```
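The eventual fix renormalizes the Householder reflection axis a second time: after unscaling by `factor.sqrt()`, a column with a very small `factor` can end up slightly non-unit in `f32`, and the decompositions rely on the unit-vector property. The second pass is numerically stable because the norm is already close to 1. A minimal sketch of the idea (illustrative helper names):

```rust
// Compute the Euclidean norm of a 3-vector.
fn norm(v: &[f32; 3]) -> f32 {
    v.iter().map(|x| x * x).sum::<f32>().sqrt()
}

// Divide by the norm in place; a no-op for the zero vector.
// Running this once more after an ill-conditioned scaling nudges a
// nearly-unit vector back to unit length.
fn renormalize(v: &mut [f32; 3]) {
    let n = norm(v);
    if n != 0.0 {
        v.iter_mut().for_each(|x| *x /= n);
    }
}
```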
| There was a similar issue in https://github.com/dimforge/nalgebra/pull/1314, but I think the fix for that was included in 0.32.4. Maybe an interesting reference all the same. | 2024-03-09T00:51:58 | 0.32 | a803815bd1d22a052690cfde3ff7fae26cd91152 | [
"linalg::eigen::very_small_deviation_from_identity_issue_1368"
] | [
"base::indexing::dimrange_range_usize",
"base::indexing::dimrange_usize",
"base::indexing::dimrange_rangefrom_usize",
"base::indexing::dimrange_rangefrom_dimname",
"linalg::symmetric_eigen::test::wilkinson_shift_zero",
"linalg::symmetric_eigen::test::wilkinson_shift_zero_det",
"linalg::symmetric_eigen::... | [] | [] | 91 |
anoma/namada | 2,667 | anoma__namada-2667 | [
"2665"
] | 2ba001dd639d0fe14c4461c4e4811778d4ff50d8 | diff --git /dev/null b/.changelog/unreleased/bug-fixes/2667-fix-last-update-usage.md
new file mode 100644
--- /dev/null
+++ b/.changelog/unreleased/bug-fixes/2667-fix-last-update-usage.md
@@ -0,0 +1,2 @@
+- Fix the setting of the last update field in an Epoched data structure.
+ ([\#2667](https://github.com/anoma/namada/pull/2667))
\ No newline at end of file
diff --git a/crates/proof_of_stake/src/epoched.rs b/crates/proof_of_stake/src/epoched.rs
--- a/crates/proof_of_stake/src/epoched.rs
+++ b/crates/proof_of_stake/src/epoched.rs
@@ -444,13 +444,11 @@ where
// panic!("WARNING: no data existing in
// {new_oldest_epoch}"); }
self.set_oldest_epoch(storage, new_oldest_epoch)?;
-
- // Update the epoch of the last update to the current epoch
- let key = self.get_last_update_storage_key();
- storage.write(&key, current_epoch)?;
- return Ok(());
}
}
+ // Update the epoch of the last update to the current epoch
+ let key = self.get_last_update_storage_key();
+ storage.write(&key, current_epoch)?;
Ok(())
}
diff --git a/crates/proof_of_stake/src/lib.rs b/crates/proof_of_stake/src/lib.rs
--- a/crates/proof_of_stake/src/lib.rs
+++ b/crates/proof_of_stake/src/lib.rs
@@ -2205,7 +2205,8 @@ where
// Promote the next below-capacity validator to consensus
promote_next_below_capacity_validator_to_consensus(
storage,
- pipeline_epoch,
+ current_epoch,
+ params.pipeline_len,
)?;
}
diff --git a/crates/proof_of_stake/src/lib.rs b/crates/proof_of_stake/src/lib.rs
--- a/crates/proof_of_stake/src/lib.rs
+++ b/crates/proof_of_stake/src/lib.rs
@@ -2290,8 +2291,8 @@ where
validator_state_handle(validator).set(
storage,
ValidatorState::Jailed,
- pipeline_epoch,
- 0,
+ current_epoch,
+ params.pipeline_len,
)?;
return Ok(());
}
diff --git a/crates/proof_of_stake/src/lib.rs b/crates/proof_of_stake/src/lib.rs
--- a/crates/proof_of_stake/src/lib.rs
+++ b/crates/proof_of_stake/src/lib.rs
@@ -2698,10 +2699,14 @@ where
// Remove the validator from the set starting at the update epoch and up
// thru the pipeline epoch.
- let pipeline_epoch = current_epoch + params.pipeline_len;
- for epoch in
- Epoch::iter_bounds_inclusive(validator_set_update_epoch, pipeline_epoch)
- {
+ let start = validator_set_update_epoch
+ .0
+ .checked_sub(current_epoch.0)
+ .unwrap(); // Safe unwrap
+ let end = params.pipeline_len;
+
+ for offset in start..=end {
+ let epoch = current_epoch + offset;
let prev_state = validator_state_handle(validator)
.get(storage, epoch, params)?
.expect("Expected to find a valid validator.");
diff --git a/crates/proof_of_stake/src/lib.rs b/crates/proof_of_stake/src/lib.rs
--- a/crates/proof_of_stake/src/lib.rs
+++ b/crates/proof_of_stake/src/lib.rs
@@ -2716,9 +2721,11 @@ where
// For the pipeline epoch only:
// promote the next max inactive validator to the active
// validator set at the pipeline offset
- if epoch == pipeline_epoch {
+ if offset == params.pipeline_len {
promote_next_below_capacity_validator_to_consensus(
- storage, epoch,
+ storage,
+ current_epoch,
+ offset,
)?;
}
}
diff --git a/crates/proof_of_stake/src/validator_set_update.rs b/crates/proof_of_stake/src/validator_set_update.rs
--- a/crates/proof_of_stake/src/validator_set_update.rs
+++ b/crates/proof_of_stake/src/validator_set_update.rs
@@ -521,11 +521,13 @@ where
/// consensus set already.
pub fn promote_next_below_capacity_validator_to_consensus<S>(
storage: &mut S,
- epoch: Epoch,
+ current_epoch: Epoch,
+ offset: u64,
) -> namada_storage::Result<()>
where
S: StorageRead + StorageWrite,
{
+ let epoch = current_epoch + offset;
let below_cap_set = below_capacity_validator_set_handle().at(&epoch);
let max_below_capacity_amount =
get_max_below_capacity_validator_amount(&below_cap_set, storage)?;
diff --git a/crates/proof_of_stake/src/validator_set_update.rs b/crates/proof_of_stake/src/validator_set_update.rs
--- a/crates/proof_of_stake/src/validator_set_update.rs
+++ b/crates/proof_of_stake/src/validator_set_update.rs
@@ -550,8 +552,8 @@ where
validator_state_handle(&promoted_validator).set(
storage,
ValidatorState::Consensus,
- epoch,
- 0,
+ current_epoch,
+ offset,
)?;
}
diff --git a/crates/proof_of_stake/src/validator_set_update.rs b/crates/proof_of_stake/src/validator_set_update.rs
--- a/crates/proof_of_stake/src/validator_set_update.rs
+++ b/crates/proof_of_stake/src/validator_set_update.rs
@@ -828,6 +830,7 @@ where
.at(&val_stake)
.insert(storage, val_position, val_address)?;
}
+
// Purge consensus and below-capacity validator sets
consensus_validator_set.update_data(storage, params, current_epoch)?;
below_capacity_validator_set.update_data(storage, params, current_epoch)?;
diff --git a/crates/proof_of_stake/src/validator_set_update.rs b/crates/proof_of_stake/src/validator_set_update.rs
--- a/crates/proof_of_stake/src/validator_set_update.rs
+++ b/crates/proof_of_stake/src/validator_set_update.rs
@@ -847,7 +850,6 @@ where
let prev = new_positions_handle.insert(storage, validator, position)?;
debug_assert!(prev.is_none());
}
- validator_set_positions_handle.set_last_update(storage, current_epoch)?;
// Purge old epochs of validator positions
validator_set_positions_handle.update_data(
| diff --git a/crates/proof_of_stake/src/tests/test_validator.rs b/crates/proof_of_stake/src/tests/test_validator.rs
--- a/crates/proof_of_stake/src/tests/test_validator.rs
+++ b/crates/proof_of_stake/src/tests/test_validator.rs
@@ -1320,8 +1320,20 @@ fn test_purge_validator_information_aux(validators: Vec<GenesisValidator>) {
// Check that there is validator data for epochs 0 - pipeline_len
check_is_data(&s, current_epoch, Epoch(params.owned.pipeline_len));
+ assert_eq!(
+ consensus_val_set.get_last_update(&s).unwrap().unwrap(),
+ Epoch(0)
+ );
+ assert_eq!(
+ validator_positions.get_last_update(&s).unwrap().unwrap(),
+ Epoch(0)
+ );
+ assert_eq!(
+ validator_positions.get_last_update(&s).unwrap().unwrap(),
+ Epoch(0)
+ );
- // Advance to epoch 1
+ // Advance to epoch `default_past_epochs`
for _ in 0..default_past_epochs {
current_epoch = advance_epoch(&mut s, ¶ms);
}
diff --git a/crates/proof_of_stake/src/tests/test_validator.rs b/crates/proof_of_stake/src/tests/test_validator.rs
--- a/crates/proof_of_stake/src/tests/test_validator.rs
+++ b/crates/proof_of_stake/src/tests/test_validator.rs
@@ -1333,6 +1345,18 @@ fn test_purge_validator_information_aux(validators: Vec<GenesisValidator>) {
Epoch(0),
Epoch(params.owned.pipeline_len + default_past_epochs),
);
+ assert_eq!(
+ consensus_val_set.get_last_update(&s).unwrap().unwrap(),
+ Epoch(default_past_epochs)
+ );
+ assert_eq!(
+ validator_positions.get_last_update(&s).unwrap().unwrap(),
+ Epoch(default_past_epochs)
+ );
+ assert_eq!(
+ validator_positions.get_last_update(&s).unwrap().unwrap(),
+ Epoch(default_past_epochs)
+ );
current_epoch = advance_epoch(&mut s, ¶ms);
assert_eq!(current_epoch.0, default_past_epochs + 1);
| All uses of `Epoched` past epoch tracking and handling of `last_update` may not be consistent
A storage dump from epoch 8 at height 35201 had the following storage key structure for a validator:
```
"#tnam1qgqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqc8j2fp/validator/#tnam1q96tsxckvm4n937efh7t78w4j40m5783wsqcz3ye/state/last_update" = "0900000000000000"
"#tnam1qgqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqc8j2fp/validator/#tnam1q96tsxckvm4n937efh7t78w4j40m5783wsqcz3ye/state/lazy_map/data/000000000000E" = "00"
"#tnam1qgqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqc8j2fp/validator/#tnam1q96tsxckvm4n937efh7t78w4j40m5783wsqcz3ye/state/lazy_map/data/000000000000G" = "01"
"#tnam1qgqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqc8j2fp/validator/#tnam1q96tsxckvm4n937efh7t78w4j40m5783wsqcz3ye/state/lazy_map/data/000000000000I" = "00"
"#tnam1qgqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqc8j2fp/validator/#tnam1q96tsxckvm4n937efh7t78w4j40m5783wsqcz3ye/state/oldest_epoch" = "0700000000000000"
```
I would not have expected `last_update = 9` and would've expected at least one more epoch in the past stored, since pipeline_len epochs should be kept in the past for the `ValidatorStates`. It is possible that function calls that manipulate this data may not be performed consistently.
Check that all these operations are consistent. Furthermore, recheck that the `NestedEpoched` objects like the validator sets and positions and addresses are storing what is expected.
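The fix in the diff above moves the `last_update` write out of the `new_oldest_epoch` branch so that it happens on every `update_data` call, not only when old epochs are purged. A minimal sketch of that control-flow change (field and method names are illustrative, not Namada's API):

```rust
// `last_update` is recorded unconditionally, even when no old epochs
// needed purging, so readers always see the epoch of the latest update.
struct Epoched {
    last_update: u64,
    oldest_epoch: u64,
}

impl Epoched {
    fn update_data(&mut self, current_epoch: u64, past_epochs_to_keep: u64) {
        let new_oldest = current_epoch.saturating_sub(past_epochs_to_keep);
        if new_oldest > self.oldest_epoch {
            // purge stored data in [self.oldest_epoch, new_oldest) here
            self.oldest_epoch = new_oldest;
        }
        // Previously this write only happened inside the branch above.
        self.last_update = current_epoch;
    }
}
```

With the write inside the branch, a call that purges nothing would have left `last_update` stale, matching the surprising `last_update = 9` / `oldest_epoch = 7` combination seen in the dump.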
| 2024-02-20T12:43:01 | 0.31 | cc3edde4d572ff41f19d24b5b7da60406acdfdf4 | [
"tests::test_validator::test_purge_validator_information"
] | [
"tests::test_helper_fns::test_apply_list_slashes",
"tests::test_helper_fns::test_compute_amount_after_slashing_withdraw",
"tests::test_helper_fns::test_compute_amount_after_slashing_unbond",
"tests::test_helper_fns::test_compute_slashable_amount",
"tests::test_helper_fns::test_compute_slash_bond_at_epoch",
... | [] | [] | 92 | |
anoma/namada | 2,628 | anoma__namada-2628 | [
"2626"
] | cc3edde4d572ff41f19d24b5b7da60406acdfdf4 | diff --git /dev/null b/.changelog/unreleased/bug-fixes/2628-fix-validator-voting-period.md
new file mode 100644
--- /dev/null
+++ b/.changelog/unreleased/bug-fixes/2628-fix-validator-voting-period.md
@@ -0,0 +1,2 @@
+- Fixes the computation of the valid validator voting period.
+ ([\#2628](https://github.com/anoma/namada/pull/2628))
\ No newline at end of file
diff --git a/crates/apps/src/lib/client/rpc.rs b/crates/apps/src/lib/client/rpc.rs
--- a/crates/apps/src/lib/client/rpc.rs
+++ b/crates/apps/src/lib/client/rpc.rs
@@ -1271,7 +1271,7 @@ pub async fn query_proposal_result(
(proposal_result, proposal_query)
{
display_line!(context.io(), "Proposal Id: {} ", proposal_id);
- if current_epoch >= proposal_query.voting_end_epoch {
+ if current_epoch > proposal_query.voting_end_epoch {
display_line!(context.io(), "{:4}{}", "", proposal_result);
} else {
display_line!(
diff --git a/crates/core/src/storage.rs b/crates/core/src/storage.rs
--- a/crates/core/src/storage.rs
+++ b/crates/core/src/storage.rs
@@ -1171,6 +1171,14 @@ impl Mul<u64> for Epoch {
}
}
+impl Mul<Epoch> for u64 {
+ type Output = Epoch;
+
+ fn mul(self, rhs: Epoch) -> Self::Output {
+ Epoch(self * rhs.0)
+ }
+}
+
impl Div<u64> for Epoch {
type Output = Epoch;
diff --git a/crates/governance/src/storage/proposal.rs b/crates/governance/src/storage/proposal.rs
--- a/crates/governance/src/storage/proposal.rs
+++ b/crates/governance/src/storage/proposal.rs
@@ -535,9 +535,11 @@ impl StorageProposal {
is_validator: bool,
) -> bool {
if is_validator {
- self.voting_start_epoch <= current_epoch
- && current_epoch * 3
- <= self.voting_start_epoch + self.voting_end_epoch * 2
+ crate::utils::is_valid_validator_voting_period(
+ current_epoch,
+ self.voting_start_epoch,
+ self.voting_end_epoch,
+ )
} else {
let valid_start_epoch = current_epoch >= self.voting_start_epoch;
let valid_end_epoch = current_epoch <= self.voting_end_epoch;
diff --git a/crates/governance/src/utils.rs b/crates/governance/src/utils.rs
--- a/crates/governance/src/utils.rs
+++ b/crates/governance/src/utils.rs
@@ -432,8 +432,10 @@ pub fn compute_proposal_result(
}
}
-/// Calculate the valid voting window for validator given a proposal epoch
-/// details
+/// Calculate the valid voting window for a validator given proposal epoch
+/// details. The valid window is within 2/3 of the voting period.
+/// NOTE: technically the window can be more generous than 2/3 since the end
+/// epoch is a valid epoch for voting too.
pub fn is_valid_validator_voting_period(
current_epoch: Epoch,
voting_start_epoch: Epoch,
diff --git a/crates/governance/src/utils.rs b/crates/governance/src/utils.rs
--- a/crates/governance/src/utils.rs
+++ b/crates/governance/src/utils.rs
@@ -442,9 +444,10 @@ pub fn is_valid_validator_voting_period(
if voting_start_epoch >= voting_end_epoch {
false
} else {
- let duration = voting_end_epoch - voting_start_epoch;
- let two_third_duration = (duration / 3) * 2;
- current_epoch <= voting_start_epoch + two_third_duration
+ // From e_cur <= e_start + 2/3 * (e_end - e_start)
+ let is_within_two_thirds =
+ 3 * current_epoch <= voting_start_epoch + 2 * voting_end_epoch;
+ current_epoch >= voting_start_epoch && is_within_two_thirds
}
}
diff --git a/crates/namada/src/ledger/governance/mod.rs b/crates/namada/src/ledger/governance/mod.rs
--- a/crates/namada/src/ledger/governance/mod.rs
+++ b/crates/namada/src/ledger/governance/mod.rs
@@ -545,6 +545,12 @@ where
return Ok(false);
}
+ // TODO: HACK THAT NEEDS TO BE PROPERLY FIXED WITH PARAM
+ let latency = 30u64;
+ if start_epoch.0 - current_epoch.0 > latency {
+ return Ok(false);
+ }
+
Ok((end_epoch - start_epoch) % min_period == 0
&& (end_epoch - start_epoch).0 >= min_period)
}
diff --git a/crates/sdk/src/tx.rs b/crates/sdk/src/tx.rs
--- a/crates/sdk/src/tx.rs
+++ b/crates/sdk/src/tx.rs
@@ -1994,9 +1994,15 @@ pub async fn build_vote_proposal(
let is_validator = rpc::is_validator(context.client(), voter).await?;
if !proposal.can_be_voted(epoch, is_validator) {
- if tx.force {
- eprintln!("Invalid proposal {} vote period.", proposal_id);
- } else {
+ eprintln!("Proposal {} cannot be voted on anymore.", proposal_id);
+ if is_validator {
+ eprintln!(
+ "NB: voter address {} is a validator, and validators can only \
+ vote on proposals within the first 2/3 of the voting period.",
+ voter
+ );
+ }
+ if !tx.force {
return Err(Error::from(
TxSubmitError::InvalidProposalVotingPeriod(proposal_id),
));
| diff --git a/crates/governance/src/utils.rs b/crates/governance/src/utils.rs
--- a/crates/governance/src/utils.rs
+++ b/crates/governance/src/utils.rs
@@ -1347,7 +1350,7 @@ mod test {
}
#[test]
- fn test_proposal_fifthteen() {
+ fn test_proposal_fifteen() {
let mut proposal_votes = ProposalVotes::default();
let validator_address = address::testing::established_address_1();
diff --git a/crates/governance/src/utils.rs b/crates/governance/src/utils.rs
--- a/crates/governance/src/utils.rs
+++ b/crates/governance/src/utils.rs
@@ -1401,4 +1404,44 @@ mod test {
assert!(!proposal_result.two_thirds_nay_over_two_thirds_total())
}
+
+ #[test]
+ fn test_validator_voting_period() {
+ assert!(!is_valid_validator_voting_period(
+ 0.into(),
+ 2.into(),
+ 4.into()
+ ));
+ assert!(is_valid_validator_voting_period(
+ 2.into(),
+ 2.into(),
+ 4.into()
+ ));
+ assert!(is_valid_validator_voting_period(
+ 3.into(),
+ 2.into(),
+ 4.into()
+ ));
+ assert!(!is_valid_validator_voting_period(
+ 4.into(),
+ 2.into(),
+ 4.into()
+ ));
+
+ assert!(is_valid_validator_voting_period(
+ 3.into(),
+ 2.into(),
+ 5.into()
+ ));
+ assert!(is_valid_validator_voting_period(
+ 4.into(),
+ 2.into(),
+ 5.into()
+ ));
+ assert!(!is_valid_validator_voting_period(
+ 5.into(),
+ 2.into(),
+ 5.into()
+ ));
+ }
}
| Gov VP: buggy `is_valid_validator_voting_period`
The function `is_valid_validator_voting_period` in the governance VP incorrectly does division on u64 before multiplication, which sometimes unintentionally leads to a computation of 0. It also does not align with the same computation in `StorageProposal::can_be_voted`.
The offending lines are:
```
let duration = voting_end_epoch - voting_start_epoch;
let two_third_duration = (duration / 3) * 2;
current_epoch <= voting_start_epoch + two_third_duration
```
Consider an example of a proposal with start epoch of 4, end epoch of 6, and a validator voting in epoch 5. In the client for `vote-proposal`, the vote is deemed valid by `StorageProposal::can_be_voted`. However, in the VP, `two_third_duration = 0` due to `2 / 3`, and then the vote is deemed invalid and rejected.
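The truncation described above can be reproduced in isolation. Below is a minimal, hypothetical sketch (the real functions operate on the `Epoch` newtype, not bare `u64`) contrasting the buggy divide-first computation with the multiply-first form used in the fix:

```rust
// Buggy form: (duration / 3) * 2 truncates toward zero on u64.
fn buggy(current: u64, start: u64, end: u64) -> bool {
    let two_third_duration = ((end - start) / 3) * 2; // (2 / 3) * 2 == 0
    current <= start + two_third_duration
}

// Fixed form: multiply first, so nothing is lost to integer division.
// From e_cur <= e_start + 2/3 * (e_end - e_start).
fn fixed(current: u64, start: u64, end: u64) -> bool {
    current >= start && 3 * current <= start + 2 * end
}

fn main() {
    // The issue's example: start epoch 4, end epoch 6, validator votes at 5.
    println!("buggy={} fixed={}", buggy(5, 4, 6), fixed(5, 4, 6));
}
```

With the issue's numbers, `buggy` rejects the vote (`two_third_duration` collapses to 0) while `fixed` accepts it, matching the client-side `StorageProposal::can_be_voted` result.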
| 2024-02-16T00:53:51 | 0.31 | cc3edde4d572ff41f19d24b5b7da60406acdfdf4 | [
"utils::test::test_validator_voting_period"
] | [
"utils::test::test_proposal_eight",
"utils::test::test_proposal_eleven",
"utils::test::test_proposal_four",
"utils::test::test_proposal_six",
"utils::test::test_proposal_fourteen",
"utils::test::test_proposal_thirteen",
"utils::test::test_proposal_ten",
"utils::test::test_proposal_one",
"utils::test... | [] | [] | 93 | |
anoma/namada | 3,888 | anoma__namada-3888 | [
"3872"
] | ad1d4b3aa6c0d9052418b6897c9c13da0c9613f4 | diff --git /dev/null b/.changelog/unreleased/improvements/3882-better-batch-construction.md
new file mode 100644
--- /dev/null
+++ b/.changelog/unreleased/improvements/3882-better-batch-construction.md
@@ -0,0 +1,2 @@
+- Improved batch construction to reduce the size of the resulting tx.
+ ([\#3882](https://github.com/anoma/namada/pull/3882))
\ No newline at end of file
diff --git a/crates/sdk/src/signing.rs b/crates/sdk/src/signing.rs
--- a/crates/sdk/src/signing.rs
+++ b/crates/sdk/src/signing.rs
@@ -61,7 +61,7 @@ use crate::wallet::{Wallet, WalletIo};
use crate::{args, rpc, Namada};
/// A structure holding the signing data to craft a transaction
-#[derive(Clone, PartialEq)]
+#[derive(Clone)]
pub struct SigningTxData {
/// The address owning the transaction
pub owner: Option<Address>,
diff --git a/crates/sdk/src/signing.rs b/crates/sdk/src/signing.rs
--- a/crates/sdk/src/signing.rs
+++ b/crates/sdk/src/signing.rs
@@ -75,6 +75,27 @@ pub struct SigningTxData {
pub fee_payer: common::PublicKey,
}
+impl PartialEq for SigningTxData {
+ fn eq(&self, other: &Self) -> bool {
+ if !(self.owner == other.owner
+ && self.threshold == other.threshold
+ && self.account_public_keys_map == other.account_public_keys_map
+ && self.fee_payer == other.fee_payer)
+ {
+ return false;
+ }
+
+ // Check equivalence of the public keys ignoring the specific ordering
+ if self.public_keys.len() != other.public_keys.len() {
+ return false;
+ }
+
+ self.public_keys
+ .iter()
+ .all(|pubkey| other.public_keys.contains(pubkey))
+ }
+}
+
/// Find the public key for the given address and try to load the keypair
/// for it from the wallet.
///
diff --git a/crates/sdk/src/signing.rs b/crates/sdk/src/signing.rs
--- a/crates/sdk/src/signing.rs
+++ b/crates/sdk/src/signing.rs
@@ -290,7 +311,7 @@ where
Ok(fee_payer_keypair) => {
tx.sign_wrapper(fee_payer_keypair);
}
- // The case where tge fee payer also signs the inner transaction
+ // The case where the fee payer also signs the inner transaction
Err(_)
if signing_data.public_keys.contains(&signing_data.fee_payer) =>
{
diff --git a/crates/tx/src/types.rs b/crates/tx/src/types.rs
--- a/crates/tx/src/types.rs
+++ b/crates/tx/src/types.rs
@@ -145,16 +145,49 @@ impl Tx {
/// Add a new inner tx to the transaction. Returns `false` if the
/// commitments already existed in the collection. This function expects a
- /// transaction carrying a single inner tx as input
- pub fn add_inner_tx(&mut self, other: Tx, cmt: TxCommitments) -> bool {
- if !self.header.batch.insert(cmt) {
+ /// transaction carrying a single inner tx as input and the provided
+ /// commitment is assumed to be present in the transaction without further
+ /// validation
+ pub fn add_inner_tx(&mut self, other: Tx, mut cmt: TxCommitments) -> bool {
+ if self.header.batch.contains(&cmt) {
return false;
}
- // TODO: avoid duplicated sections to reduce the size of the message
- self.sections.extend(other.sections);
+ for section in other.sections {
+ // PartialEq implementation of Section relies on an implementation
+ // on the inner types that doesn't account for the possible salt
+ // which is the correct behavior for this logic
+ if let Some(duplicate) =
+                self.sections.iter().find(|&sec| sec == &section)
+ {
+ // Avoid pushing a duplicated section. Adjust the commitment of
+ // this inner tx for the different section's salt if needed
+ match duplicate {
+ Section::Code(_) => {
+ if cmt.code_hash == section.get_hash() {
+ cmt.code_hash = duplicate.get_hash();
+ }
+ }
+ Section::Data(_) => {
+ if cmt.data_hash == section.get_hash() {
+ cmt.data_hash = duplicate.get_hash();
+ }
+ }
+ Section::ExtraData(_) => {
+ if cmt.memo_hash == section.get_hash() {
+ cmt.memo_hash = duplicate.get_hash();
+ }
+ }
+ // Other sections don't have a direct commitment in the
+ // header
+ _ => (),
+ }
+ } else {
+ self.sections.push(section);
+ }
+ }
- true
+ self.header.batch.insert(cmt)
}
/// Get the transaction header
| diff --git a/crates/tx/src/types.rs b/crates/tx/src/types.rs
--- a/crates/tx/src/types.rs
+++ b/crates/tx/src/types.rs
@@ -1324,6 +1357,11 @@ mod test {
let data_bytes2 = "WASD".as_bytes();
let memo_bytes2 = "hjkl".as_bytes();
+ // Some duplicated sections
+ let code_bytes3 = code_bytes1;
+ let data_bytes3 = data_bytes2;
+ let memo_bytes3 = memo_bytes1;
+
let inner_tx1 = {
let mut tx = Tx::default();
diff --git a/crates/tx/src/types.rs b/crates/tx/src/types.rs
--- a/crates/tx/src/types.rs
+++ b/crates/tx/src/types.rs
@@ -1352,10 +1390,25 @@ mod test {
tx
};
+ let inner_tx3 = {
+ let mut tx = Tx::default();
+
+ let code = Code::new(code_bytes3.to_owned(), None);
+ tx.set_code(code);
+
+ let data = Data::new(data_bytes3.to_owned());
+ tx.set_data(data);
+
+ tx.add_memo(memo_bytes3);
+
+ tx
+ };
+
let cmt1 = inner_tx1.first_commitments().unwrap().to_owned();
- let cmt2 = inner_tx2.first_commitments().unwrap().to_owned();
+ let mut cmt2 = inner_tx2.first_commitments().unwrap().to_owned();
+ let mut cmt3 = inner_tx3.first_commitments().unwrap().to_owned();
- // Batch `inner_tx1` and `inner_tx1` into `tx`
+ // Batch `inner_tx1`, `inner_tx2` and `inner_tx3` into `tx`
let tx = {
let mut tx = Tx::default();
diff --git a/crates/tx/src/types.rs b/crates/tx/src/types.rs
--- a/crates/tx/src/types.rs
+++ b/crates/tx/src/types.rs
@@ -1364,10 +1417,22 @@ mod test {
assert_eq!(tx.header.batch.len(), 1);
tx.add_inner_tx(inner_tx2, cmt2.clone());
+ // Update cmt2 with the hash of cmt1 code section
+ cmt2.code_hash = cmt1.code_hash;
assert_eq!(tx.first_commitments().unwrap(), &cmt1);
assert_eq!(tx.header.batch.len(), 2);
assert_eq!(tx.header.batch.get_index(1).unwrap(), &cmt2);
+ tx.add_inner_tx(inner_tx3, cmt3.clone());
+ // Update cmt3 with the hash of cmt1 code and memo sections and the
+ // hash of cmt2 data section
+ cmt3.code_hash = cmt1.code_hash;
+ cmt3.data_hash = cmt2.data_hash;
+ cmt3.memo_hash = cmt1.memo_hash;
+ assert_eq!(tx.first_commitments().unwrap(), &cmt1);
+ assert_eq!(tx.header.batch.len(), 3);
+ assert_eq!(tx.header.batch.get_index(2).unwrap(), &cmt3);
+
tx
};
diff --git a/crates/tx/src/types.rs b/crates/tx/src/types.rs
--- a/crates/tx/src/types.rs
+++ b/crates/tx/src/types.rs
@@ -1390,5 +1455,40 @@ mod test {
assert!(tx.memo(&cmt2).is_some());
assert_eq!(tx.memo(&cmt2).unwrap(), memo_bytes2);
+
+ // Check sections of `inner_tx3`
+ assert!(tx.code(&cmt3).is_some());
+ assert_eq!(tx.code(&cmt3).unwrap(), code_bytes3);
+
+ assert!(tx.data(&cmt3).is_some());
+ assert_eq!(tx.data(&cmt3).unwrap(), data_bytes3);
+
+ assert!(tx.memo(&cmt3).is_some());
+ assert_eq!(tx.memo(&cmt3).unwrap(), memo_bytes3);
+
+ // Check that the redundant sections have been included only once in the
+ // batch
+ assert_eq!(tx.sections.len(), 5);
+ assert_eq!(
+ tx.sections
+ .iter()
+ .filter(|section| section.code_sec().is_some())
+ .count(),
+ 1
+ );
+ assert_eq!(
+ tx.sections
+ .iter()
+ .filter(|section| section.data().is_some())
+ .count(),
+ 2
+ );
+ assert_eq!(
+ tx.sections
+ .iter()
+ .filter(|section| section.extra_data_sec().is_some())
+ .count(),
+ 2
+ );
}
}
| Avoid duplicated sections when batching
In the function we use to construct a batch we push all the sections of an inner tx into the batch, even though some of these might already be present.
https://github.com/anoma/namada/blob/810c258af3ddf2483af4d1306644db09bb2987e3/crates/tx/src/types.rs#L149-L158
We should try to check if a section is already in the batch and, in this case, avoid pushing another identical one. This would lead to smaller transactions and a lower gas cost. In doing so, though, we should pay attention to the `Commitments` of that tx, which are computed on the salted sections: we might need to update the commitments to point to the equivalent section with a different salt.
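The proposed check can be illustrated with a small, hypothetical sketch (sections are modelled here as plain strings; the real code compares `Section` values whose `PartialEq` ignores the salt): only push a section when an equal one is not already stored, so shared sections appear once per batch.

```rust
// Push only sections that are not already present in the batch.
fn add_sections(batch: &mut Vec<String>, incoming: Vec<String>) {
    for section in incoming {
        if !batch.contains(&section) {
            batch.push(section);
        }
    }
}

fn main() {
    let mut batch = vec!["code:A".to_string(), "data:X".to_string()];
    // "code:A" duplicates an existing section and is skipped.
    add_sections(&mut batch, vec!["code:A".to_string(), "data:Y".to_string()]);
    println!("{batch:?}");
}
```

In the real patch the remaining subtlety is the commitments: when a duplicate is found, the inner tx's commitment hashes are rewritten to point at the already-stored section (which may carry a different salt).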
| 2024-10-04T17:18:51 | 0.44 | 03d9a589f54a43ebda2faaddb462f4991041ad26 | [
"types::test::test_batched_tx_sections"
] | [
"data::test_process_tx::test_process_tx_raw_tx_no_data",
"data::wrapper::test_gas_limits::test_gas_limit_roundtrip",
"tests::encoding_round_trip",
"types::test::test_protocol_tx_signing",
"data::test_process_tx::test_process_tx_raw_tx_some_signed_data",
"types::test::test_tx_json_serialization",
"data::... | [] | [] | 94 | |
anoma/namada | 3,882 | anoma__namada-3882 | [
"3872"
] | 418ef64b9e65bcf381faf4d37391d3a5ce6fdb2a | diff --git /dev/null b/.changelog/unreleased/improvements/3882-better-batch-construction.md
new file mode 100644
--- /dev/null
+++ b/.changelog/unreleased/improvements/3882-better-batch-construction.md
@@ -0,0 +1,2 @@
+- Improved batch construction to reduce the size of the resulting tx.
+ ([\#3882](https://github.com/anoma/namada/pull/3882))
\ No newline at end of file
diff --git a/crates/sdk/src/signing.rs b/crates/sdk/src/signing.rs
--- a/crates/sdk/src/signing.rs
+++ b/crates/sdk/src/signing.rs
@@ -61,7 +61,7 @@ use crate::wallet::{Wallet, WalletIo};
use crate::{args, rpc, Namada};
/// A structure holding the signing data to craft a transaction
-#[derive(Clone, PartialEq)]
+#[derive(Clone)]
pub struct SigningTxData {
/// The address owning the transaction
pub owner: Option<Address>,
diff --git a/crates/sdk/src/signing.rs b/crates/sdk/src/signing.rs
--- a/crates/sdk/src/signing.rs
+++ b/crates/sdk/src/signing.rs
@@ -75,6 +75,27 @@ pub struct SigningTxData {
pub fee_payer: common::PublicKey,
}
+impl PartialEq for SigningTxData {
+ fn eq(&self, other: &Self) -> bool {
+ if !(self.owner == other.owner
+ && self.threshold == other.threshold
+ && self.account_public_keys_map == other.account_public_keys_map
+ && self.fee_payer == other.fee_payer)
+ {
+ return false;
+ }
+
+ // Check equivalence of the public keys ignoring the specific ordering
+ if self.public_keys.len() != other.public_keys.len() {
+ return false;
+ }
+
+ self.public_keys
+ .iter()
+ .all(|pubkey| other.public_keys.contains(pubkey))
+ }
+}
+
/// Find the public key for the given address and try to load the keypair
/// for it from the wallet.
///
diff --git a/crates/sdk/src/signing.rs b/crates/sdk/src/signing.rs
--- a/crates/sdk/src/signing.rs
+++ b/crates/sdk/src/signing.rs
@@ -290,7 +311,7 @@ where
Ok(fee_payer_keypair) => {
tx.sign_wrapper(fee_payer_keypair);
}
- // The case where tge fee payer also signs the inner transaction
+ // The case where the fee payer also signs the inner transaction
Err(_)
if signing_data.public_keys.contains(&signing_data.fee_payer) =>
{
diff --git a/crates/tx/src/types.rs b/crates/tx/src/types.rs
--- a/crates/tx/src/types.rs
+++ b/crates/tx/src/types.rs
@@ -145,16 +145,49 @@ impl Tx {
/// Add a new inner tx to the transaction. Returns `false` if the
/// commitments already existed in the collection. This function expects a
- /// transaction carrying a single inner tx as input
- pub fn add_inner_tx(&mut self, other: Tx, cmt: TxCommitments) -> bool {
- if !self.header.batch.insert(cmt) {
+ /// transaction carrying a single inner tx as input and the provided
+ /// commitment is assumed to be present in the transaction without further
+ /// validation
+ pub fn add_inner_tx(&mut self, other: Tx, mut cmt: TxCommitments) -> bool {
+ if self.header.batch.contains(&cmt) {
return false;
}
- // TODO: avoid duplicated sections to reduce the size of the message
- self.sections.extend(other.sections);
+ for section in other.sections {
+ // PartialEq implementation of Section relies on an implementation
+ // on the inner types that doesn't account for the possible salt
+ // which is the correct behavior for this logic
+ if let Some(duplicate) =
+                self.sections.iter().find(|&sec| sec == &section)
+ {
+ // Avoid pushing a duplicated section. Adjust the commitment of
+ // this inner tx for the different section's salt if needed
+ match duplicate {
+ Section::Code(_) => {
+ if cmt.code_hash == section.get_hash() {
+ cmt.code_hash = duplicate.get_hash();
+ }
+ }
+ Section::Data(_) => {
+ if cmt.data_hash == section.get_hash() {
+ cmt.data_hash = duplicate.get_hash();
+ }
+ }
+ Section::ExtraData(_) => {
+ if cmt.memo_hash == section.get_hash() {
+ cmt.memo_hash = duplicate.get_hash();
+ }
+ }
+ // Other sections don't have a direct commitment in the
+ // header
+ _ => (),
+ }
+ } else {
+ self.sections.push(section);
+ }
+ }
- true
+ self.header.batch.insert(cmt)
}
/// Get the transaction header
| diff --git a/crates/tx/src/types.rs b/crates/tx/src/types.rs
--- a/crates/tx/src/types.rs
+++ b/crates/tx/src/types.rs
@@ -1324,6 +1357,11 @@ mod test {
let data_bytes2 = "WASD".as_bytes();
let memo_bytes2 = "hjkl".as_bytes();
+ // Some duplicated sections
+ let code_bytes3 = code_bytes1;
+ let data_bytes3 = data_bytes2;
+ let memo_bytes3 = memo_bytes1;
+
let inner_tx1 = {
let mut tx = Tx::default();
diff --git a/crates/tx/src/types.rs b/crates/tx/src/types.rs
--- a/crates/tx/src/types.rs
+++ b/crates/tx/src/types.rs
@@ -1352,10 +1390,25 @@ mod test {
tx
};
+ let inner_tx3 = {
+ let mut tx = Tx::default();
+
+ let code = Code::new(code_bytes3.to_owned(), None);
+ tx.set_code(code);
+
+ let data = Data::new(data_bytes3.to_owned());
+ tx.set_data(data);
+
+ tx.add_memo(memo_bytes3);
+
+ tx
+ };
+
let cmt1 = inner_tx1.first_commitments().unwrap().to_owned();
- let cmt2 = inner_tx2.first_commitments().unwrap().to_owned();
+ let mut cmt2 = inner_tx2.first_commitments().unwrap().to_owned();
+ let mut cmt3 = inner_tx3.first_commitments().unwrap().to_owned();
- // Batch `inner_tx1` and `inner_tx1` into `tx`
+ // Batch `inner_tx1`, `inner_tx2` and `inner_tx3` into `tx`
let tx = {
let mut tx = Tx::default();
diff --git a/crates/tx/src/types.rs b/crates/tx/src/types.rs
--- a/crates/tx/src/types.rs
+++ b/crates/tx/src/types.rs
@@ -1364,10 +1417,22 @@ mod test {
assert_eq!(tx.header.batch.len(), 1);
tx.add_inner_tx(inner_tx2, cmt2.clone());
+ // Update cmt2 with the hash of cmt1 code section
+ cmt2.code_hash = cmt1.code_hash;
assert_eq!(tx.first_commitments().unwrap(), &cmt1);
assert_eq!(tx.header.batch.len(), 2);
assert_eq!(tx.header.batch.get_index(1).unwrap(), &cmt2);
+ tx.add_inner_tx(inner_tx3, cmt3.clone());
+ // Update cmt3 with the hash of cmt1 code and memo sections and the
+ // hash of cmt2 data section
+ cmt3.code_hash = cmt1.code_hash;
+ cmt3.data_hash = cmt2.data_hash;
+ cmt3.memo_hash = cmt1.memo_hash;
+ assert_eq!(tx.first_commitments().unwrap(), &cmt1);
+ assert_eq!(tx.header.batch.len(), 3);
+ assert_eq!(tx.header.batch.get_index(2).unwrap(), &cmt3);
+
tx
};
diff --git a/crates/tx/src/types.rs b/crates/tx/src/types.rs
--- a/crates/tx/src/types.rs
+++ b/crates/tx/src/types.rs
@@ -1390,5 +1455,40 @@ mod test {
assert!(tx.memo(&cmt2).is_some());
assert_eq!(tx.memo(&cmt2).unwrap(), memo_bytes2);
+
+ // Check sections of `inner_tx3`
+ assert!(tx.code(&cmt3).is_some());
+ assert_eq!(tx.code(&cmt3).unwrap(), code_bytes3);
+
+ assert!(tx.data(&cmt3).is_some());
+ assert_eq!(tx.data(&cmt3).unwrap(), data_bytes3);
+
+ assert!(tx.memo(&cmt3).is_some());
+ assert_eq!(tx.memo(&cmt3).unwrap(), memo_bytes3);
+
+ // Check that the redundant sections have been included only once in the
+ // batch
+ assert_eq!(tx.sections.len(), 5);
+ assert_eq!(
+ tx.sections
+ .iter()
+ .filter(|section| section.code_sec().is_some())
+ .count(),
+ 1
+ );
+ assert_eq!(
+ tx.sections
+ .iter()
+ .filter(|section| section.data().is_some())
+ .count(),
+ 2
+ );
+ assert_eq!(
+ tx.sections
+ .iter()
+ .filter(|section| section.extra_data_sec().is_some())
+ .count(),
+ 2
+ );
}
}
| Avoid duplicated sections when batching
In the function we use to construct a batch we push all the sections of an inner tx in the batch, even though some of these might already be present.
https://github.com/anoma/namada/blob/810c258af3ddf2483af4d1306644db09bb2987e3/crates/tx/src/types.rs#L149-L158
We should try to check if a section is already in the batch and, in this case, avoid pushing another identical one. This would lead to smaller transactions and a lower gas cost. In doing so, though, we should pay attention to the `Commitments` of that tx, which are computed on the salted sections: we might need to update the commitments to point to the equivalent section with a different salt.
| 2024-10-03T21:52:35 | 0.44 | 03d9a589f54a43ebda2faaddb462f4991041ad26 | [
"types::test::test_batched_tx_sections"
] | [
"data::test_process_tx::test_process_tx_raw_tx_no_data",
"data::test_process_tx::test_process_tx_raw_tx_some_data",
"data::test_process_tx::test_process_tx_raw_tx_some_signed_data",
"tests::encoding_round_trip",
"data::test_process_tx::test_process_tx_wrapper_tx",
"sign::test::test_standalone_signing",
... | [] | [] | 95 | |
anoma/namada | 1,993 | anoma__namada-1993 | [
"648",
"454"
] | 7ed1058700efb9466e351f08c20f6bc9a77a3515 | diff --git /dev/null b/.changelog/unreleased/bug-fixes/1993-fst-epoch-start-height.md
new file mode 100644
--- /dev/null
+++ b/.changelog/unreleased/bug-fixes/1993-fst-epoch-start-height.md
@@ -0,0 +1,2 @@
+- Fix the start block height of the first epoch.
+ ([\#1993](https://github.com/anoma/namada/pull/1993))
\ No newline at end of file
diff --git a/apps/src/lib/node/ledger/shell/finalize_block.rs b/apps/src/lib/node/ledger/shell/finalize_block.rs
--- a/apps/src/lib/node/ledger/shell/finalize_block.rs
+++ b/apps/src/lib/node/ledger/shell/finalize_block.rs
@@ -702,11 +702,8 @@ where
.pred_epochs
.first_block_heights[last_epoch.0 as usize]
.0;
- let num_blocks_in_last_epoch = if first_block_of_last_epoch == 0 {
- self.wl_storage.storage.block.height.0 - 1
- } else {
- self.wl_storage.storage.block.height.0 - first_block_of_last_epoch
- };
+ let num_blocks_in_last_epoch =
+ self.wl_storage.storage.block.height.0 - first_block_of_last_epoch;
// Read the rewards accumulator and calculate the new rewards products
// for the previous epoch
diff --git a/core/src/ledger/storage/mod.rs b/core/src/ledger/storage/mod.rs
--- a/core/src/ledger/storage/mod.rs
+++ b/core/src/ledger/storage/mod.rs
@@ -142,8 +142,8 @@ pub struct BlockStorage<H: StorageHasher> {
pub hash: BlockHash,
/// From the start of `FinalizeBlock` until the end of `Commit`, this is
/// height of the block that is going to be committed. Otherwise, it is the
- /// height of the most recently committed block, or `BlockHeight(0)` if no
- /// block has been committed yet.
+ /// height of the most recently committed block, or `BlockHeight::sentinel`
+ /// (0) if no block has been committed yet.
pub height: BlockHeight,
/// From the start of `FinalizeBlock` until the end of `Commit`, this is
/// height of the block that is going to be committed. Otherwise it is the
diff --git a/core/src/ledger/storage/mod.rs b/core/src/ledger/storage/mod.rs
--- a/core/src/ledger/storage/mod.rs
+++ b/core/src/ledger/storage/mod.rs
@@ -963,6 +963,9 @@ where
} = parameters.epoch_duration;
self.next_epoch_min_start_height = initial_height + min_num_of_blocks;
self.next_epoch_min_start_time = genesis_time + min_duration;
+ self.block.pred_epochs = Epochs {
+ first_block_heights: vec![initial_height],
+ };
self.update_epoch_in_merkle_tree()
}
diff --git a/core/src/types/storage.rs b/core/src/types/storage.rs
--- a/core/src/types/storage.rs
+++ b/core/src/types/storage.rs
@@ -155,9 +155,9 @@ impl BlockResults {
}
}
-/// Height of a block, i.e. the level.
+/// Height of a block, i.e. the level. The `default` is the
+/// [`BlockHeight::sentinel`] value, which doesn't correspond to any block.
#[derive(
- Default,
Clone,
Copy,
BorshSerialize,
diff --git a/core/src/types/storage.rs b/core/src/types/storage.rs
--- a/core/src/types/storage.rs
+++ b/core/src/types/storage.rs
@@ -174,6 +174,12 @@ impl BlockResults {
)]
pub struct BlockHeight(pub u64);
+impl Default for BlockHeight {
+ fn default() -> Self {
+ Self::sentinel()
+ }
+}
+
impl Display for BlockHeight {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{}", self.0)
diff --git a/core/src/types/storage.rs b/core/src/types/storage.rs
--- a/core/src/types/storage.rs
+++ b/core/src/types/storage.rs
@@ -1148,6 +1165,7 @@ impl Mul for Epoch {
#[derive(
Clone,
Debug,
+ Default,
PartialEq,
Eq,
PartialOrd,
diff --git a/core/src/types/storage.rs b/core/src/types/storage.rs
--- a/core/src/types/storage.rs
+++ b/core/src/types/storage.rs
@@ -1162,16 +1180,6 @@ pub struct Epochs {
pub first_block_heights: Vec<BlockHeight>,
}
-impl Default for Epochs {
- /// Initialize predecessor epochs, assuming starting on the epoch 0 and
- /// block height 0.
- fn default() -> Self {
- Self {
- first_block_heights: vec![BlockHeight::default()],
- }
- }
-}
-
impl Epochs {
/// Record start of a new epoch at the given block height
pub fn new_epoch(&mut self, block_height: BlockHeight) {
| diff --git a/apps/src/lib/node/ledger/storage/mod.rs b/apps/src/lib/node/ledger/storage/mod.rs
--- a/apps/src/lib/node/ledger/storage/mod.rs
+++ b/apps/src/lib/node/ledger/storage/mod.rs
@@ -498,8 +498,9 @@ mod tests {
None,
Some(5),
);
+ let new_epoch_start = BlockHeight(1);
storage
- .begin_block(BlockHash::default(), BlockHeight(1))
+ .begin_block(BlockHash::default(), new_epoch_start)
.expect("begin_block failed");
let key = Key::parse("key").expect("cannot parse the key string");
diff --git a/apps/src/lib/node/ledger/storage/mod.rs b/apps/src/lib/node/ledger/storage/mod.rs
--- a/apps/src/lib/node/ledger/storage/mod.rs
+++ b/apps/src/lib/node/ledger/storage/mod.rs
@@ -509,12 +510,13 @@ mod tests {
.expect("write failed");
storage.block.epoch = storage.block.epoch.next();
- storage.block.pred_epochs.new_epoch(BlockHeight(1));
+ storage.block.pred_epochs.new_epoch(new_epoch_start);
let batch = PersistentStorage::batch();
storage.commit_block(batch).expect("commit failed");
+ let new_epoch_start = BlockHeight(6);
storage
- .begin_block(BlockHash::default(), BlockHeight(6))
+ .begin_block(BlockHash::default(), new_epoch_start)
.expect("begin_block failed");
let key = Key::parse("key2").expect("cannot parse the key string");
diff --git a/apps/src/lib/node/ledger/storage/mod.rs b/apps/src/lib/node/ledger/storage/mod.rs
--- a/apps/src/lib/node/ledger/storage/mod.rs
+++ b/apps/src/lib/node/ledger/storage/mod.rs
@@ -524,18 +526,19 @@ mod tests {
.expect("write failed");
storage.block.epoch = storage.block.epoch.next();
- storage.block.pred_epochs.new_epoch(BlockHeight(6));
+ storage.block.pred_epochs.new_epoch(new_epoch_start);
let batch = PersistentStorage::batch();
storage.commit_block(batch).expect("commit failed");
let result = storage.get_merkle_tree(1.into());
assert!(result.is_ok(), "The tree at Height 1 should be restored");
+ let new_epoch_start = BlockHeight(11);
storage
- .begin_block(BlockHash::default(), BlockHeight(11))
+ .begin_block(BlockHash::default(), new_epoch_start)
.expect("begin_block failed");
storage.block.epoch = storage.block.epoch.next();
- storage.block.pred_epochs.new_epoch(BlockHeight(11));
+ storage.block.pred_epochs.new_epoch(new_epoch_start);
let batch = PersistentStorage::batch();
storage.commit_block(batch).expect("commit failed");
diff --git a/core/src/ledger/storage/mod.rs b/core/src/ledger/storage/mod.rs
--- a/core/src/ledger/storage/mod.rs
+++ b/core/src/ledger/storage/mod.rs
@@ -1297,6 +1300,12 @@ mod tests {
minimum_gas_price: BTreeMap::default(),
};
parameters.init_storage(&mut wl_storage).unwrap();
+ // Initialize pred_epochs to the current height
+ wl_storage
+ .storage
+ .block
+ .pred_epochs
+ .new_epoch(wl_storage.storage.block.height);
let epoch_before = wl_storage.storage.last_epoch;
assert_eq!(epoch_before, wl_storage.storage.block.epoch);
diff --git a/core/src/types/storage.rs b/core/src/types/storage.rs
--- a/core/src/types/storage.rs
+++ b/core/src/types/storage.rs
@@ -267,6 +273,17 @@ impl TryFrom<i64> for BlockHeight {
}
}
impl BlockHeight {
+ /// The first block height 1.
+ pub const fn first() -> Self {
+ Self(1)
+ }
+
+ /// A sentinel value block height 0 may be used before any block is
+ /// committed or in queries to read from the latest committed block.
+ pub const fn sentinel() -> Self {
+ Self(0)
+ }
+
/// Get the height of the next block
pub fn next_height(&self) -> BlockHeight {
BlockHeight(self.0 + 1)
diff --git a/core/src/types/storage.rs b/core/src/types/storage.rs
--- a/core/src/types/storage.rs
+++ b/core/src/types/storage.rs
@@ -1614,11 +1622,13 @@ mod tests {
#[test]
fn test_predecessor_epochs_and_heights() {
- let mut epochs = Epochs::default();
+ let mut epochs = Epochs {
+ first_block_heights: vec![BlockHeight::first()],
+ };
println!("epochs {:#?}", epochs);
assert_eq!(
epochs.get_start_height_of_epoch(Epoch(0)),
- Some(BlockHeight(0))
+ Some(BlockHeight(1))
);
assert_eq!(epochs.get_epoch(BlockHeight(0)), Some(Epoch(0)));
diff --git a/core/src/types/storage.rs b/core/src/types/storage.rs
--- a/core/src/types/storage.rs
+++ b/core/src/types/storage.rs
@@ -1630,14 +1640,15 @@ mod tests {
Some(BlockHeight(10))
);
assert_eq!(epochs.get_epoch(BlockHeight(0)), Some(Epoch(0)));
+ assert_eq!(epochs.get_epoch_start_height(BlockHeight(0)), None);
assert_eq!(
- epochs.get_epoch_start_height(BlockHeight(0)),
- Some(BlockHeight(0))
+ epochs.get_epoch_start_height(BlockHeight(1)),
+ Some(BlockHeight(1))
);
assert_eq!(epochs.get_epoch(BlockHeight(9)), Some(Epoch(0)));
assert_eq!(
epochs.get_epoch_start_height(BlockHeight(9)),
- Some(BlockHeight(0))
+ Some(BlockHeight(1))
);
assert_eq!(epochs.get_epoch(BlockHeight(10)), Some(Epoch(1)));
assert_eq!(
diff --git a/ethereum_bridge/src/test_utils.rs b/ethereum_bridge/src/test_utils.rs
--- a/ethereum_bridge/src/test_utils.rs
+++ b/ethereum_bridge/src/test_utils.rs
@@ -227,6 +227,12 @@ pub fn init_storage_with_validators(
.write(&protocol_pk_key(validator), protocol_key)
.expect("Test failed");
}
+ // Initialize pred_epochs to the current height
+ wl_storage
+ .storage
+ .block
+ .pred_epochs
+ .new_epoch(wl_storage.storage.block.height);
wl_storage.commit_block().expect("Test failed");
all_keys
| Mismatch in initial block heights at genesis
The initial block height of the chain according to `request::InitChain` from tower ABCI is 1 ([see code here](https://github.com/anoma/namada/blob/546a043f023e329b815d7cb6dfa49d33425a02a9/apps/src/lib/node/ledger/shell/init_chain.rs#L52-L55)), which does not match the initial block height of the chain in the namada protocol, which is 0.
The initial block height of 1 from tower ABCI is used to determine when the next epoch begins [right here](https://github.com/anoma/namada/blob/546a043f023e329b815d7cb6dfa49d33425a02a9/apps/src/lib/node/ledger/shell/init_chain.rs#L64-L69), so possibly epoch 1 (the second epoch) starts a block later than intended.
`storage.last_height` should be an `Option<BlockHeight>`
In our `Storage` struct, the type of the `last_height` field (representing the height of the most recently committed block) is just `BlockHeight`, but when a chain is first initialized, no block has yet been committed so it would be clearer if this was `Option<BlockHeight>` and initially `None` rather than `BlockHeight(0)` before the first block has been committed (or some other way of refactoring this to be clearer).
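One possible shape of such a refactor is sketched below with a hypothetical minimal `BlockHeight` newtype: an `Option` makes the "nothing committed yet" state explicit instead of overloading `BlockHeight(0)`.

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
struct BlockHeight(u64);

// Describe the last committed height, distinguishing the pre-genesis state.
fn describe(last_height: Option<BlockHeight>) -> String {
    match last_height {
        None => "no block committed yet".to_string(),
        Some(BlockHeight(h)) => format!("last committed height {h}"),
    }
}

fn main() {
    println!("{}", describe(None));
    println!("{}", describe(Some(BlockHeight(1))));
}
```

The merged patch took a lighter route: it kept the field as `BlockHeight` but made `BlockHeight::default()` the documented `sentinel` value (0) alongside an explicit `BlockHeight::first()` (1).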
| Related: https://github.com/anoma/namada/issues/377
We should look to remove `BlockHeight(0)` in general from our code since it doesn't exist (and change `BlockHeight::default()` to be `BlockHeight(1)` instead of `BlockHeight(0)`)
Good catch! Looks like we should be setting our block height from the `initial_height` we receive on init chain.
There's one place where `BlockHeight(0)` is used - in ABCI queries, where it's used as a special meaning for "last committed height"
Additionally, we should initialize `storage.block.pre_epochs` with empty `first_block_heights` and write it from init-chain too
thanks @james-chf for clearing this up in another discussion. There is no genesis block, `InitChain` initializes state to be used for the first block, which will have the `BlockHeight(1)` - the confusing part for me was that `InitChain` request comes with `height = 1`, which really means "prepare the genesis state for the first block, which will have height 1".
The issue with `storage.block.pre_epochs` is actually wrong though - we should use the `InitChain` height to initialize its value.
@tzemanovic What still needs to be done here - just the block height type fix?
> @tzemanovic What still needs to be done here - just the block height type fix?
the default value for `pred_epochs.first_block_heights` is incorrect - we're using height 0, but it should be set from the value received in `init_chain` (typically 1)
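The refactor discussed above can be sketched in a few lines. This is a hypothetical illustration, not namada's actual `Storage` type: the field layout and the `next_height` helper are invented for the example, and the real code carries far more state.

```rust
// Hypothetical sketch of `last_height: Option<BlockHeight>`: before any
// block is committed there is no last height, so we model it as `None`
// rather than the non-existent `BlockHeight(0)`.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
struct BlockHeight(u64);

struct Storage {
    last_height: Option<BlockHeight>,
}

impl Storage {
    fn new() -> Self {
        Storage { last_height: None }
    }

    // Committing a block is what first populates `last_height`.
    fn commit_block(&mut self, height: BlockHeight) {
        self.last_height = Some(height);
    }

    // The next height is either one past the last committed block, or the
    // `initial_height` received in `InitChain` (typically 1).
    fn next_height(&self, initial_height: BlockHeight) -> BlockHeight {
        match self.last_height {
            Some(BlockHeight(h)) => BlockHeight(h + 1),
            None => initial_height,
        }
    }
}

fn main() {
    let mut storage = Storage::new();
    let init = BlockHeight(1);
    assert_eq!(storage.next_height(init), BlockHeight(1));
    storage.commit_block(init);
    assert_eq!(storage.next_height(init), BlockHeight(2));
}
```

With this shape, "no block committed yet" becomes unrepresentable as a fake height, and the `InitChain` height is used exactly once, at the boundary.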
| 2023-10-17T14:02:47 | 0.23 | 7ed1058700efb9466e351f08c20f6bc9a77a3515 | [
"ledger::storage::tests::update_epoch_after_its_duration",
"protocol::transactions::bridge_pool_roots::test_apply_bp_roots_to_storage::test_bp_roots_across_epoch_boundaries",
"protocol::transactions::votes::update::tests::test_calculate_one_vote_seen",
"protocol::transactions::votes::update::tests::test_calcu... | [
"ledger::eth_bridge::storage::bridge_pool::test_bridge_pool_tree::test_contains_key",
"ledger::eth_bridge::storage::bridge_pool::test_bridge_pool_tree::test_proof_all_leaves",
"ledger::eth_bridge::storage::bridge_pool::test_bridge_pool_tree::test_delete_all_keys",
"ledger::eth_bridge::storage::bridge_pool::te... | [] | [] | 96 |
tweag/nickel | 1,296 | tweag__nickel-1296 | [
"952"
] | 1be55fd6a31be8b635f542d6b1221bb1dea164f7 | diff --git a/src/error.rs b/src/error.rs
--- a/src/error.rs
+++ b/src/error.rs
@@ -141,17 +141,25 @@ pub enum EvalError {
#[derive(Clone, Debug, Eq, PartialEq)]
pub enum IllegalPolymorphicTailAction {
+ FieldAccess { field: String },
Map,
Merge,
+ RecordRemove { field: String },
}
impl IllegalPolymorphicTailAction {
- fn message(&self) -> &'static str {
+ fn message(&self) -> String {
use IllegalPolymorphicTailAction::*;
match self {
- Map => "cannot map over a record sealed by a polymorphic contract",
- Merge => "cannot merge a record sealed by a polymorphic contract",
+ FieldAccess { field } => {
+ format!("cannot access field `{field}` sealed by a polymorphic contract")
+ }
+ Map => "cannot map over a record sealed by a polymorphic contract".to_owned(),
+ Merge => "cannot merge a record sealed by a polymorphic contract".to_owned(),
+ RecordRemove { field } => {
+ format!("cannot remove field `{field}` sealed by a polymorphic contract")
+ }
}
}
}
diff --git a/src/eval/operation.rs b/src/eval/operation.rs
--- a/src/eval/operation.rs
+++ b/src/eval/operation.rs
@@ -449,12 +449,24 @@ impl<R: ImportResolver, C: Cache> VirtualMachine<R, C> {
self.call_stack.enter_field(id, pos, value.pos, pos_op);
Ok(Closure { body: value, env })
}
- None => Err(EvalError::FieldMissing(
- id.into_label(),
- String::from("(.)"),
- RichTerm { term: t, pos },
- pos_op,
- )), //TODO include the position of operators on the stack
+ None => match record.sealed_tail.as_ref() {
+ Some(t) if t.has_field(&id) => {
+ Err(EvalError::IllegalPolymorphicTailAccess {
+ action: IllegalPolymorphicTailAction::FieldAccess {
+ field: id.to_string(),
+ },
+ evaluated_arg: t.label.get_evaluated_arg(&self.cache),
+ label: t.label.clone(),
+ call_stack: std::mem::take(&mut self.call_stack),
+ })
+ }
+ _ => Err(EvalError::FieldMissing(
+ id.into_label(),
+ String::from("(.)"),
+ RichTerm { term: t, pos },
+ pos_op,
+ )),
+ }, //TODO include the position of operators on the stack
}
} else {
// Not using mk_type_error! because of a non-uniform message
diff --git a/src/eval/operation.rs b/src/eval/operation.rs
--- a/src/eval/operation.rs
+++ b/src/eval/operation.rs
@@ -1552,15 +1564,24 @@ impl<R: ImportResolver, C: Cache> VirtualMachine<R, C> {
env: env2,
})
}
- None => Err(EvalError::FieldMissing(
- id.into_inner(),
- String::from("(.$)"),
- RichTerm {
- term: t2,
- pos: pos2,
- },
- pos_op,
- )),
+ None => match record.sealed_tail.as_ref() {
+ Some(t) if t.has_dyn_field(&id) => Err(EvalError::IllegalPolymorphicTailAccess {
+ action: IllegalPolymorphicTailAction::FieldAccess { field: id.to_string() },
+ evaluated_arg: t.label.get_evaluated_arg(&self.cache),
+ label: t.label.clone(),
+ call_stack: std::mem::take(&mut self.call_stack)
+ }),
+ _ => Err(EvalError::FieldMissing(
+ id.into_inner(),
+ String::from("(.$)"),
+ RichTerm {
+ term: t2,
+ pos: pos2,
+ },
+ pos_op,
+ )),
+ }
+
}
} else {
// Not using mk_type_error! because of a non-uniform message
diff --git a/src/eval/operation.rs b/src/eval/operation.rs
--- a/src/eval/operation.rs
+++ b/src/eval/operation.rs
@@ -1636,9 +1657,14 @@ impl<R: ImportResolver, C: Cache> VirtualMachine<R, C> {
value: None,
metadata: FieldMetadata { opt: true, ..},
..
- }) =>
- {
- Err(EvalError::FieldMissing(
+ }) => match record.sealed_tail.as_ref() {
+ Some(t) if t.has_dyn_field(&id) => Err(EvalError::IllegalPolymorphicTailAccess {
+ action: IllegalPolymorphicTailAction::RecordRemove { field: id.to_string() },
+ evaluated_arg: t.label.get_evaluated_arg(&self.cache),
+ label: t.label.clone(),
+ call_stack: std::mem::take(&mut self.call_stack)
+ }),
+ _ => Err(EvalError::FieldMissing(
id.into_inner(),
String::from("record_remove"),
RichTerm::new(
diff --git a/src/eval/operation.rs b/src/eval/operation.rs
--- a/src/eval/operation.rs
+++ b/src/eval/operation.rs
@@ -1646,7 +1672,7 @@ impl<R: ImportResolver, C: Cache> VirtualMachine<R, C> {
pos2,
),
pos_op,
- ))
+ )),
}
_ => {
Ok(Closure {
diff --git a/src/eval/operation.rs b/src/eval/operation.rs
--- a/src/eval/operation.rs
+++ b/src/eval/operation.rs
@@ -2393,8 +2419,13 @@ impl<R: ImportResolver, C: Cache> VirtualMachine<R, C> {
&mut env,
env4,
);
- r.sealed_tail =
- Some(record::SealedTail::new(*s, label.clone(), tail_as_var));
+ let fields = tail.fields.keys().cloned().collect();
+ r.sealed_tail = Some(record::SealedTail::new(
+ *s,
+ label.clone(),
+ tail_as_var,
+ fields,
+ ));
let body = RichTerm::from(Term::Record(r));
Ok(Closure { body, env })
diff --git a/src/term/record.rs b/src/term/record.rs
--- a/src/term/record.rs
+++ b/src/term/record.rs
@@ -439,14 +439,24 @@ pub struct SealedTail {
pub label: Label,
/// The term which is sealed.
term: RichTerm,
+ /// The field names of the sealed fields.
+ // You may find yourself wondering why this is a `Vec` rather than a
+ // `HashSet` given we only ever do containment checks against it.
+ // In brief: we'd need to use a `HashSet<String>`, which would mean
+ // allocating `fields.len()` `String`s in a fairly hot codepath.
+ // Since we only ever check whether the tail contains a specific field
+ // when we already know we're going to raise an error, it's not really
+ // an issue to have a linear lookup there, so we do that instead.
+ fields: Vec<Ident>,
}
impl SealedTail {
- pub fn new(sealing_key: SealingKey, label: Label, term: RichTerm) -> Self {
+ pub fn new(sealing_key: SealingKey, label: Label, term: RichTerm, fields: Vec<Ident>) -> Self {
Self {
sealing_key,
label,
term,
+ fields,
}
}
diff --git a/src/term/record.rs b/src/term/record.rs
--- a/src/term/record.rs
+++ b/src/term/record.rs
@@ -458,4 +468,12 @@ impl SealedTail {
None
}
}
+
+ pub fn has_field(&self, field: &Ident) -> bool {
+ self.fields.contains(field)
+ }
+
+ pub fn has_dyn_field(&self, field: &str) -> bool {
+ self.fields.iter().any(|i| i.label() == field)
+ }
}
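The Vec-versus-HashSet trade-off described in the `SealedTail` comment above can be illustrated with a standalone sketch. The names are simplified stand-ins: real `Ident`s are interned identifiers, not plain strings.

```rust
use std::collections::HashSet;

// Linear lookup over a small Vec, in the spirit of `SealedTail::has_field`:
// fine when containment checks only ever happen on the error path.
fn has_field(fields: &[&str], field: &str) -> bool {
    fields.contains(&field)
}

fn main() {
    let sealed: Vec<&str> = vec!["x", "y"];
    assert!(has_field(&sealed, "x"));
    assert!(!has_field(&sealed, "z"));

    // The HashSet alternative gives O(1) lookups, but here it would cost one
    // String allocation per field up front, in a hot codepath - a poor trade
    // when the lookup only runs right before raising an error anyway.
    let set: HashSet<String> = sealed.iter().map(|s| s.to_string()).collect();
    assert!(set.contains("x"));
}
```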
| diff --git a/tests/integration/contracts_fail.rs b/tests/integration/contracts_fail.rs
--- a/tests/integration/contracts_fail.rs
+++ b/tests/integration/contracts_fail.rs
@@ -210,28 +210,28 @@ fn records_contracts_poly() {
f { a | default = 100, b = 1 }
"#,
),
+ (
+ "statically accessing a sealed field violates parametricity",
+ r#"
+ let f | forall a. { ; a } -> { ; a } = fun x => %seq% x.a x in f { a = 1 }
+ "#,
+ ),
+ (
+ "dynamically accessing a sealed field violates parametricity",
+ r#"
+ let f | forall a. { ; a } -> { ; a } = fun x => %seq% x."a" x in f { a = 1 }
+ "#,
+ ),
+ (
+ "removing a sealed field violates parametricity",
+ r#"
+ let remove_x | forall r. { ; r } -> { ; r } = fun r => %record_remove% "x" r in
+ remove_x { x = 1 }
+ "#,
+ ),
] {
assert_raise_tail_blame!(input, "failed on case: {}", name);
}
-
- // These are currently FieldMissing errors rather than BlameErrors as we
- // don't yet inspect the sealed tail to see if the requested field is inside.
- // Once we do, these tests will start failing, and if you're currently here
- // for that reason, you can just move the examples into the array above.
- let bad_acc = "let f | forall a. { ; a} -> { ; a} = fun x => %seq% x.a x in f";
- assert_matches!(
- eval(format!("{bad_acc} {{a=1}}")),
- Err(Error::EvalError(EvalError::FieldMissing(..)))
- );
-
- let remove_sealed_field = r#"
- let remove_x | forall r. { ; r } -> { ; r } = fun r => %record_remove% "x" r in
- remove_x { x = 1 }
- "#;
- assert_matches!(
- eval(remove_sealed_field),
- Err(Error::EvalError(EvalError::FieldMissing(..)))
- );
}
#[test]
| Correctly assign blame when accessing a field in the tail of a polymorphically-sealed record
Currently accessing a field which is sealed into the tail of a polymorphically-sealed record raises a missing field error. However, given that this is essentially an attempt to violate the polymorphic record contract that sealed the record in the first place, it would be more appropriate to raise a blame error.
With the previous implementation this was not feasible; however, once #949 is merged it should become a lot easier.
In practical terms, the following programs should fail with blame errors:
```
let f | forall a. { ; a} -> { ; a} = fun x => %seq% x.a x in f
```
```
let remove_x | forall r. { ; r } -> { ; r } = fun r => %record_remove% "x" r in
remove_x { x = 1 }
```
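The behaviour the patch implements can be summarized by a small, self-contained sketch of the error dispatch. The types and names here are simplified stand-ins for the real `EvalError` variants and `SealedTail` lookup:

```rust
// On a failed field lookup, first check whether the field is hidden in the
// sealed tail; if so, report a parametricity violation (blame) rather than
// a plain missing-field error.
#[derive(Debug, PartialEq)]
enum EvalError {
    FieldMissing(String),
    IllegalPolymorphicTailAccess(String),
}

fn lookup(fields: &[&str], sealed_tail: &[&str], field: &str) -> Result<(), EvalError> {
    if fields.contains(&field) {
        Ok(())
    } else if sealed_tail.contains(&field) {
        Err(EvalError::IllegalPolymorphicTailAccess(field.to_string()))
    } else {
        Err(EvalError::FieldMissing(field.to_string()))
    }
}

fn main() {
    // `x.a` where `a` is a normal field: fine.
    assert_eq!(lookup(&["a"], &["x"], "a"), Ok(()));
    // `x.x` where `x` was sealed into the polymorphic tail: blame.
    assert_eq!(
        lookup(&["a"], &["x"], "x"),
        Err(EvalError::IllegalPolymorphicTailAccess("x".to_string()))
    );
    // `x.y` where `y` exists nowhere: still a missing-field error.
    assert_eq!(lookup(&["a"], &["x"], "y"), Err(EvalError::FieldMissing("y".to_string())));
}
```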
| 2023-05-05T20:22:31 | 0.3 | 1be55fd6a31be8b635f542d6b1221bb1dea164f7 | [
"contracts_fail::records_contracts_poly"
] | [
"environment::tests::test_deepness",
"environment::tests::test_env_base",
"environment::tests::test_clone",
"environment::tests::test_iter_elem",
"environment::tests::test_iter_layer",
"eval::merge::split::tests::all_left",
"eval::merge::split::tests::all_center",
"eval::merge::split::tests::all_right... | [] | [] | 97 | |
tweag/nickel | 1,272 | tweag__nickel-1272 | [
"1228"
] | 116fdb3591cf75ded2ccc411dd18fe2d1f368369 | diff --git a/lsp/nls/src/requests/completion.rs b/lsp/nls/src/requests/completion.rs
--- a/lsp/nls/src/requests/completion.rs
+++ b/lsp/nls/src/requests/completion.rs
@@ -248,8 +248,8 @@ fn find_fields_from_type(
) -> Vec<IdentWithType> {
match &ty.types {
TypeF::Record(row) => find_fields_from_rrows(row, path, info),
- TypeF::Dict(ty) => match path.pop() {
- Some(..) => find_fields_from_type(ty, path, info),
+ TypeF::Dict { type_fields, .. } => match path.pop() {
+ Some(..) => find_fields_from_type(type_fields, path, info),
_ => Vec::new(),
},
TypeF::Flat(term) => find_fields_from_term(term, path, info),
diff --git a/src/label.rs b/src/label.rs
--- a/src/label.rs
+++ b/src/label.rs
@@ -212,8 +212,8 @@ but this field doesn't exist in {}",
..path_span
})
}
- (TypeF::Dict(ty), next @ Some(Elem::Dict)) => {
- let path_span = span(path_it, ty)?;
+ (TypeF::Dict { type_fields, .. }, next @ Some(Elem::Dict)) => {
+ let path_span = span(path_it, type_fields)?;
Some(PathSpan {
last: path_span.last.or_else(|| next.copied()),
diff --git a/src/parser/grammar.lalrpop b/src/parser/grammar.lalrpop
--- a/src/parser/grammar.lalrpop
+++ b/src/parser/grammar.lalrpop
@@ -62,10 +62,7 @@ use crate::{
array::Array,
make as mk_term,
},
- types::{
- Types, TypeF, EnumRows, EnumRowsF, RecordRows, RecordRowsF,
- VarKind
- },
+ types::*,
position::{TermPos, RawSpan},
label::Label,
};
diff --git a/src/parser/grammar.lalrpop b/src/parser/grammar.lalrpop
--- a/src/parser/grammar.lalrpop
+++ b/src/parser/grammar.lalrpop
@@ -912,7 +909,29 @@ TypeAtom: Types = {
Types::from(TypeF::Enum(ty))
},
"{" "_" ":" <t: WithPos<Types>> "}" => {
- Types::from(TypeF::Dict(Box::new(t)))
+ Types::from(TypeF::Dict {
+ type_fields: Box::new(t),
+ flavour: DictTypeFlavour::Type
+ })
+ },
+    // Although a dictionary contract isn't really a type, we treat it as a
+    // type for now - at least syntactically - as both are represented using a
+    // `TypeF::Dict` constructor in the AST. This is just simpler for many
+    // reasons (error reporting of contracts, and in particular type paths,
+    // LSP, and so on).
+ //
+ // However, note that we use a fixed type as an argument. This has the
+    // effect of preventing dictionary contracts from capturing type variables
+ // that could be in scope. For example, we want `forall a. {_ | a}` to fail
+ // (see https://github.com/tweag/nickel/issues/1228). Fixing type variables
+ // right away inside the dictionary contract (before the enclosing `forall`
+ // is fixed) will indeed turn it into a term variable, and raise an unbound
+ // type variable error.
+ "{" "_" "|" <t: WithPos<FixedType>> "}" => {
+ Types::from(TypeF::Dict {
+ type_fields: Box::new(t),
+ flavour: DictTypeFlavour::Contract
+ })
},
"_" => {
let id = *next_wildcard_id;
diff --git a/src/parser/uniterm.rs b/src/parser/uniterm.rs
--- a/src/parser/uniterm.rs
+++ b/src/parser/uniterm.rs
@@ -12,8 +12,8 @@ use crate::{
LabeledType, MergePriority, RichTerm, Term, TypeAnnotation,
},
types::{
- EnumRows, EnumRowsIteratorItem, RecordRow, RecordRows, RecordRowsF, TypeF, Types,
- UnboundTypeVariableError, VarKind,
+ DictTypeFlavour, EnumRows, EnumRowsIteratorItem, RecordRow, RecordRows, RecordRowsF, TypeF,
+ Types, UnboundTypeVariableError, VarKind,
},
};
diff --git a/src/parser/uniterm.rs b/src/parser/uniterm.rs
--- a/src/parser/uniterm.rs
+++ b/src/parser/uniterm.rs
@@ -616,7 +616,7 @@ pub(super) trait FixTypeVars {
///
/// Once again because `forall`s only bind variables locally, and don't bind inside contracts, we
/// don't have to recurse into contracts and this pass will only visit each node of the AST at most
- /// once in total.
+ /// once in total (and most probably much less so).
///
/// There is one subtlety with unirecords, though. A unirecord can still be in interpreted as a
/// record type later. Take the following example:
diff --git a/src/parser/uniterm.rs b/src/parser/uniterm.rs
--- a/src/parser/uniterm.rs
+++ b/src/parser/uniterm.rs
@@ -678,6 +678,11 @@ impl FixTypeVars for Types {
| TypeF::String
| TypeF::Symbol
| TypeF::Flat(_)
+ // We don't fix type variables inside a dictionary contract. A dictionary contract
+            // should not be considered a static type, but instead works as a contract. In
+            // particular, it mustn't be allowed to capture type variables from the enclosing type: see
+ // https://github.com/tweag/nickel/issues/1228.
+ | TypeF::Dict { flavour: DictTypeFlavour::Contract, ..}
| TypeF::Wildcard(_) => Ok(()),
TypeF::Arrow(ref mut s, ref mut t) => {
(*s).fix_type_vars_env(bound_vars.clone(), span)?;
diff --git a/src/parser/uniterm.rs b/src/parser/uniterm.rs
--- a/src/parser/uniterm.rs
+++ b/src/parser/uniterm.rs
@@ -711,7 +716,7 @@ impl FixTypeVars for Types {
Ok(())
}
- TypeF::Dict(ref mut ty) | TypeF::Array(ref mut ty) => {
+ TypeF::Dict { type_fields: ref mut ty, flavour: DictTypeFlavour::Type} | TypeF::Array(ref mut ty) => {
(*ty).fix_type_vars_env(bound_vars, span)
}
TypeF::Enum(ref mut erows) => erows.fix_type_vars_env(bound_vars, span),
diff --git a/src/pretty.rs b/src/pretty.rs
--- a/src/pretty.rs
+++ b/src/pretty.rs
@@ -836,11 +836,17 @@ where
}
Enum(erows) => erows.pretty(allocator).enclose("[|", "|]"),
Record(rrows) => rrows.pretty(allocator).braces(),
- Dict(ty) => allocator
+ Dict {
+ type_fields: ty,
+ flavour: attrs,
+ } => allocator
.line()
.append(allocator.text("_"))
.append(allocator.space())
- .append(allocator.text(":"))
+ .append(match attrs {
+ DictTypeFlavour::Type => allocator.text(":"),
+ DictTypeFlavour::Contract => allocator.text("|"),
+ })
.append(allocator.space())
.append(ty.pretty(allocator))
.append(allocator.line())
diff --git a/src/stdlib.rs b/src/stdlib.rs
--- a/src/stdlib.rs
+++ b/src/stdlib.rs
@@ -75,7 +75,8 @@ pub mod internals {
generate_accessor!(enums);
generate_accessor!(enum_fail);
generate_accessor!(record);
- generate_accessor!(dyn_record);
+ generate_accessor!(dict_type);
+ generate_accessor!(dict_contract);
generate_accessor!(record_extend);
generate_accessor!(forall_tail);
generate_accessor!(dyn_tail);
diff --git a/src/transform/free_vars.rs b/src/transform/free_vars.rs
--- a/src/transform/free_vars.rs
+++ b/src/transform/free_vars.rs
@@ -186,9 +186,11 @@ impl CollectFreeVars for Types {
| TypeF::Symbol
| TypeF::Var(_)
| TypeF::Wildcard(_) => (),
- TypeF::Forall { body: ty, .. } | TypeF::Dict(ty) | TypeF::Array(ty) => {
- ty.as_mut().collect_free_vars(set)
+ TypeF::Forall { body: ty, .. }
+ | TypeF::Dict {
+ type_fields: ty, ..
}
+ | TypeF::Array(ty) => ty.as_mut().collect_free_vars(set),
// No term can appear anywhere in a enum row type, hence we can stop here.
TypeF::Enum(_) => (),
TypeF::Record(rrows) => rrows.collect_free_vars(set),
diff --git a/src/typecheck/eq.rs b/src/typecheck/eq.rs
--- a/src/typecheck/eq.rs
+++ b/src/typecheck/eq.rs
@@ -445,7 +445,17 @@ fn type_eq_bounded<E: TermEnvironment>(
| (TypeF::Bool, TypeF::Bool)
| (TypeF::Symbol, TypeF::Symbol)
| (TypeF::String, TypeF::String) => true,
- (TypeF::Dict(uty1), TypeF::Dict(uty2)) | (TypeF::Array(uty1), TypeF::Array(uty2)) => {
+ (
+ TypeF::Dict {
+ type_fields: uty1,
+ flavour: attrs1,
+ },
+ TypeF::Dict {
+ type_fields: uty2,
+ flavour: attrs2,
+ },
+ ) if attrs1 == attrs2 => type_eq_bounded(state, uty1, env1, uty2, env2),
+ (TypeF::Array(uty1), TypeF::Array(uty2)) => {
type_eq_bounded(state, uty1, env1, uty2, env2)
}
(TypeF::Arrow(s1, t1), TypeF::Arrow(s2, t2)) => {
diff --git a/src/typecheck/mk_uniftype.rs b/src/typecheck/mk_uniftype.rs
--- a/src/typecheck/mk_uniftype.rs
+++ b/src/typecheck/mk_uniftype.rs
@@ -1,5 +1,6 @@
//! Helpers for building `TypeWrapper`s.
-use super::{TypeF, UnifType};
+use super::UnifType;
+use crate::types::{DictTypeFlavour, TypeF};
/// Multi-ary arrow constructor for types implementing `Into<TypeWrapper>`.
#[macro_export]
diff --git a/src/typecheck/mk_uniftype.rs b/src/typecheck/mk_uniftype.rs
--- a/src/typecheck/mk_uniftype.rs
+++ b/src/typecheck/mk_uniftype.rs
@@ -94,11 +95,14 @@ macro_rules! generate_builder {
};
}
-pub fn dyn_record<T>(ty: T) -> UnifType
+pub fn dict<T>(ty: T) -> UnifType
where
T: Into<UnifType>,
{
- UnifType::Concrete(TypeF::Dict(Box::new(ty.into())))
+ UnifType::Concrete(TypeF::Dict {
+ type_fields: Box::new(ty.into()),
+ flavour: DictTypeFlavour::Type,
+ })
}
pub fn array<T>(ty: T) -> UnifType
diff --git a/src/typecheck/mod.rs b/src/typecheck/mod.rs
--- a/src/typecheck/mod.rs
+++ b/src/typecheck/mod.rs
@@ -35,7 +35,7 @@
//!
//! ```nickel
//! # Rejected
-//! let id = fun x => x in seq (id "a") (id 5) : Number
+//! (let id = fun x => x in std.seq (id "a") (id 5)) : Number
//! ```
//!
//! Indeed, `id` is given the type `_a -> _a`, where `_a` is a unification variable, but is not
diff --git a/src/typecheck/mod.rs b/src/typecheck/mod.rs
--- a/src/typecheck/mod.rs
+++ b/src/typecheck/mod.rs
@@ -48,7 +48,7 @@
//!
//! ```nickel
//! # Accepted
-//! let id : forall a. a -> a = fun x => x in seq (id "a") (id 5) : Num
+//! (let id : forall a. a -> a = fun x => x in std.seq (id "a") (id 5)) : Number
//! ```
//!
//! In walk mode, the type of let-bound expressions is inferred in a shallow way (see
diff --git a/src/typecheck/mod.rs b/src/typecheck/mod.rs
--- a/src/typecheck/mod.rs
+++ b/src/typecheck/mod.rs
@@ -1015,7 +1015,7 @@ fn walk_type<L: Linearizer>(
}
TypeF::Record(rrows) => walk_rrows(state, ctxt, lin, linearizer, rrows),
TypeF::Flat(t) => walk(state, ctxt, lin, linearizer, t),
- TypeF::Dict(ty2)
+ TypeF::Dict { type_fields: ty2, .. }
| TypeF::Array(ty2)
| TypeF::Forall {body: ty2, ..} => walk_type(state, ctxt, lin, linearizer, ty2),
}
diff --git a/src/typecheck/mod.rs b/src/typecheck/mod.rs
--- a/src/typecheck/mod.rs
+++ b/src/typecheck/mod.rs
@@ -1397,7 +1397,7 @@ fn check<L: Linearizer>(
// element type.
Term::RecRecord(record, dynamic, ..) if !dynamic.is_empty() => {
let ty_dict = state.table.fresh_type_uvar();
- unify(state, &ctxt, ty, mk_uniftype::dyn_record(ty_dict.clone()))
+ unify(state, &ctxt, ty, mk_uniftype::dict(ty_dict.clone()))
.map_err(|err| err.into_typecheck_err(state, rt.pos))?;
//TODO: should we insert in the environment the checked type, or the actual type?
diff --git a/src/typecheck/mod.rs b/src/typecheck/mod.rs
--- a/src/typecheck/mod.rs
+++ b/src/typecheck/mod.rs
@@ -1439,7 +1439,11 @@ fn check<L: Linearizer>(
let root_ty = ty.clone().into_root(state.table);
- if let UnifType::Concrete(TypeF::Dict(rec_ty)) = root_ty {
+ if let UnifType::Concrete(TypeF::Dict {
+ type_fields: rec_ty,
+ ..
+ }) = root_ty
+ {
// Checking for a dictionary
record
.fields
diff --git a/src/typecheck/mod.rs b/src/typecheck/mod.rs
--- a/src/typecheck/mod.rs
+++ b/src/typecheck/mod.rs
@@ -2201,7 +2205,14 @@ pub fn unify(
err.into_unif_err(mk_uty_record!(; rrows1), mk_uty_record!(; rrows2))
})
}
- (TypeF::Dict(t1), TypeF::Dict(t2)) => unify(state, ctxt, *t1, *t2),
+ (
+ TypeF::Dict {
+ type_fields: uty1, ..
+ },
+ TypeF::Dict {
+ type_fields: uty2, ..
+ },
+ ) => unify(state, ctxt, *uty1, *uty2),
(
TypeF::Forall {
var: var1,
diff --git a/src/typecheck/mod.rs b/src/typecheck/mod.rs
--- a/src/typecheck/mod.rs
+++ b/src/typecheck/mod.rs
@@ -2715,7 +2726,7 @@ impl ConstrainFreshRRowsVar for UnifType {
| TypeF::Wildcard(_) => (),
TypeF::Record(rrows) => rrows.constrain_fresh_rrows_var(state, var_id),
TypeF::Array(uty)
- | TypeF::Dict(uty) => uty.constrain_fresh_rrows_var(state, var_id),
+ | TypeF::Dict { type_fields: uty, .. } => uty.constrain_fresh_rrows_var(state, var_id),
},
UnifType::Constant(_) | UnifType::Contract(..) => (),
}
diff --git a/src/typecheck/mod.rs b/src/typecheck/mod.rs
--- a/src/typecheck/mod.rs
+++ b/src/typecheck/mod.rs
@@ -2777,9 +2788,10 @@ impl ConstrainFreshERowsVar for UnifType {
| TypeF::Wildcard(_) => (),
TypeF::Enum(erows) => erows.constrain_fresh_erows_var(state, var_id),
TypeF::Record(rrows) => rrows.constrain_fresh_erows_var(state, var_id),
- TypeF::Array(uty) | TypeF::Dict(uty) => {
- uty.constrain_fresh_erows_var(state, var_id)
- }
+ TypeF::Array(uty)
+ | TypeF::Dict {
+ type_fields: uty, ..
+ } => uty.constrain_fresh_erows_var(state, var_id),
},
UnifType::Constant(_) | UnifType::Contract(..) => (),
}
diff --git a/src/typecheck/operation.rs b/src/typecheck/operation.rs
--- a/src/typecheck/operation.rs
+++ b/src/typecheck/operation.rs
@@ -104,8 +104,8 @@ pub fn get_uop_type(
let f_type = mk_uty_arrow!(TypeF::String, a.clone(), b.clone());
(
- mk_uniftype::dyn_record(a),
- mk_uty_arrow!(f_type, mk_uniftype::dyn_record(b)),
+ mk_uniftype::dict(a),
+ mk_uty_arrow!(f_type, mk_uniftype::dict(b)),
)
}
// forall a b. a -> b -> b
diff --git a/src/typecheck/operation.rs b/src/typecheck/operation.rs
--- a/src/typecheck/operation.rs
+++ b/src/typecheck/operation.rs
@@ -127,7 +127,7 @@ pub fn get_uop_type(
let ty_a = UnifType::UnifVar(state.table.fresh_type_var_id());
(
- mk_uniftype::dyn_record(ty_a),
+ mk_uniftype::dict(ty_a),
mk_uniftype::array(mk_uniftype::str()),
)
}
diff --git a/src/typecheck/operation.rs b/src/typecheck/operation.rs
--- a/src/typecheck/operation.rs
+++ b/src/typecheck/operation.rs
@@ -135,10 +135,7 @@ pub fn get_uop_type(
UnaryOp::ValuesOf() => {
let ty_a = UnifType::UnifVar(state.table.fresh_type_var_id());
- (
- mk_uniftype::dyn_record(ty_a.clone()),
- mk_uniftype::array(ty_a),
- )
+ (mk_uniftype::dict(ty_a.clone()), mk_uniftype::array(ty_a))
}
// Str -> Str
UnaryOp::StrTrim() => (mk_uniftype::str(), mk_uniftype::str()),
diff --git a/src/typecheck/operation.rs b/src/typecheck/operation.rs
--- a/src/typecheck/operation.rs
+++ b/src/typecheck/operation.rs
@@ -269,11 +266,7 @@ pub fn get_bop_type(
BinaryOp::DynAccess() => {
let res = UnifType::UnifVar(state.table.fresh_type_var_id());
- (
- mk_uniftype::str(),
- mk_uniftype::dyn_record(res.clone()),
- res,
- )
+ (mk_uniftype::str(), mk_uniftype::dict(res.clone()), res)
}
// forall a. Str -> {_ : a} -> a -> {_ : a}
BinaryOp::DynExtend {
diff --git a/src/typecheck/operation.rs b/src/typecheck/operation.rs
--- a/src/typecheck/operation.rs
+++ b/src/typecheck/operation.rs
@@ -283,8 +276,8 @@ pub fn get_bop_type(
let res = UnifType::UnifVar(state.table.fresh_type_var_id());
(
mk_uniftype::str(),
- mk_uniftype::dyn_record(res.clone()),
- mk_uty_arrow!(res.clone(), mk_uniftype::dyn_record(res)),
+ mk_uniftype::dict(res.clone()),
+ mk_uty_arrow!(res.clone(), mk_uniftype::dict(res)),
)
}
// forall a. Str -> {_ : a} -> {_ : a}
diff --git a/src/typecheck/operation.rs b/src/typecheck/operation.rs
--- a/src/typecheck/operation.rs
+++ b/src/typecheck/operation.rs
@@ -295,8 +288,8 @@ pub fn get_bop_type(
let res = UnifType::UnifVar(state.table.fresh_type_var_id());
(
mk_uniftype::str(),
- mk_uniftype::dyn_record(res.clone()),
- mk_uty_arrow!(res.clone(), mk_uniftype::dyn_record(res)),
+ mk_uniftype::dict(res.clone()),
+ mk_uty_arrow!(res.clone(), mk_uniftype::dict(res)),
)
}
// forall a. Str -> { _ : a } -> { _ : a}
diff --git a/src/typecheck/operation.rs b/src/typecheck/operation.rs
--- a/src/typecheck/operation.rs
+++ b/src/typecheck/operation.rs
@@ -304,8 +297,8 @@ pub fn get_bop_type(
let res = UnifType::UnifVar(state.table.fresh_type_var_id());
(
mk_uniftype::str(),
- mk_uniftype::dyn_record(res.clone()),
- mk_uniftype::dyn_record(res),
+ mk_uniftype::dict(res.clone()),
+ mk_uniftype::dict(res),
)
}
// forall a. Str -> {_: a} -> Bool
diff --git a/src/typecheck/operation.rs b/src/typecheck/operation.rs
--- a/src/typecheck/operation.rs
+++ b/src/typecheck/operation.rs
@@ -313,7 +306,7 @@ pub fn get_bop_type(
let ty_elt = UnifType::UnifVar(state.table.fresh_type_var_id());
(
mk_uniftype::str(),
- mk_uniftype::dyn_record(ty_elt),
+ mk_uniftype::dict(ty_elt),
mk_uniftype::bool(),
)
}
diff --git a/src/typecheck/operation.rs b/src/typecheck/operation.rs
--- a/src/typecheck/operation.rs
+++ b/src/typecheck/operation.rs
@@ -384,7 +377,7 @@ pub fn get_bop_type(
// forall a. Dyn -> {_: a} -> Dyn -> {_: a}
BinaryOp::RecordLazyAssume() => {
let ty_field = UnifType::UnifVar(state.table.fresh_type_var_id());
- let ty_dict = mk_uniftype::dyn_record(ty_field);
+ let ty_dict = mk_uniftype::dict(ty_field);
(
mk_uniftype::dynamic(),
ty_dict.clone(),
diff --git a/src/typecheck/operation.rs b/src/typecheck/operation.rs
--- a/src/typecheck/operation.rs
+++ b/src/typecheck/operation.rs
@@ -442,8 +435,8 @@ pub fn get_nop_type(
vec![
mk_uniftype::dynamic(),
mk_uniftype::dynamic(),
- mk_uniftype::dyn_record(mk_uniftype::dynamic()),
- mk_uniftype::dyn_record(mk_uniftype::dynamic()),
+ mk_uniftype::dict(mk_uniftype::dynamic()),
+ mk_uniftype::dict(mk_uniftype::dynamic()),
],
mk_uniftype::dynamic(),
),
diff --git a/src/typecheck/operation.rs b/src/typecheck/operation.rs
--- a/src/typecheck/operation.rs
+++ b/src/typecheck/operation.rs
@@ -452,7 +445,7 @@ pub fn get_nop_type(
vec![
mk_uniftype::dynamic(),
mk_uniftype::dynamic(),
- mk_uniftype::dyn_record(mk_uniftype::dynamic()),
+ mk_uniftype::dict(mk_uniftype::dynamic()),
],
mk_uniftype::dynamic(),
),
diff --git a/src/types.rs b/src/types.rs
--- a/src/types.rs
+++ b/src/types.rs
@@ -161,6 +161,28 @@ impl TryFrom<&Term> for VarKind {
}
}
+/// Flavour of a dictionary type. There are currently two ways of writing a dictionary type-ish
+/// object: as a dictionary contract `{_ | T}` or as a dictionary type `{_ : T}`. Ideally, the
+/// former wouldn't even be a type, but rather syntactic sugar for a builtin contract application,
+/// or maybe a proper AST node.
+///
+/// However, the LSP needs to handle both dictionary types and contracts specifically in order to
+/// provide good completion. As we added dictionary contracts just before 1.0 to fix a non-trivial
+/// issue with respect to polymorphic contracts ([GitHub
+/// issue](https://github.com/tweag/nickel/issues/1228)), the solution of tweaking dictionary
+/// types to be able to hold both kinds - generating a different contract for each - seemed the
+/// simplest way to preserve the user experience (LSP, handling of dictionaries when reporting a
+/// contract blame, etc.).
+///
+/// Dictionary contracts might get a proper AST node later on.
+#[derive(Clone, Debug, Copy, Eq, PartialEq)]
+pub enum DictTypeFlavour {
+ /// Dictionary type (`{_ : T}`)
+ Type,
+ /// Dictionary contract (`{_ | T}`)
+ Contract,
+}
+
/// A Nickel type.
///
/// # Generic representation (functor)
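A minimal sketch of how carrying a flavour on the `Dict` node drives display (and, analogously, which contract gets generated). `display_dict` is invented for the example; it mirrors the `Display`/pretty-printer changes in this diff:

```rust
// One AST node, two surface syntaxes: the flavour picks the separator when
// printing, and (in the real code) which builtin contract is applied.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
enum DictTypeFlavour {
    Type,     // {_ : T}
    Contract, // {_ | T}
}

fn display_dict(inner: &str, flavour: DictTypeFlavour) -> String {
    let sep = match flavour {
        DictTypeFlavour::Type => ":",
        DictTypeFlavour::Contract => "|",
    };
    format!("{{ _ {sep} {inner} }}")
}

fn main() {
    assert_eq!(display_dict("String", DictTypeFlavour::Type), "{ _ : String }");
    assert_eq!(display_dict("String", DictTypeFlavour::Contract), "{ _ | String }");
}
```

Because the flavour is `Copy` and `PartialEq`, contract equality checks (as in `typecheck/eq.rs` above) can simply require matching flavours.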
diff --git a/src/types.rs b/src/types.rs
--- a/src/types.rs
+++ b/src/types.rs
@@ -265,7 +287,10 @@ pub enum TypeF<Ty, RRows, ERows> {
/// A record type, composed of a sequence of record rows.
Record(RRows),
/// A dictionary type.
- Dict(Ty),
+ Dict {
+ type_fields: Ty,
+ flavour: DictTypeFlavour,
+ },
/// A parametrized array.
Array(Ty),
/// A type wildcard, wrapping an ID unique within a given file.
diff --git a/src/types.rs b/src/types.rs
--- a/src/types.rs
+++ b/src/types.rs
@@ -505,7 +530,13 @@ impl<Ty, RRows, ERows> TypeF<Ty, RRows, ERows> {
}),
TypeF::Enum(erows) => Ok(TypeF::Enum(f_erows(erows, state)?)),
TypeF::Record(rrows) => Ok(TypeF::Record(f_rrows(rrows, state)?)),
- TypeF::Dict(t) => Ok(TypeF::Dict(f(t, state)?)),
+ TypeF::Dict {
+ type_fields,
+ flavour: attrs,
+ } => Ok(TypeF::Dict {
+ type_fields: f(type_fields, state)?,
+ flavour: attrs,
+ }),
TypeF::Array(t) => Ok(TypeF::Array(f(t, state)?)),
TypeF::Wildcard(i) => Ok(TypeF::Wildcard(i)),
}
diff --git a/src/types.rs b/src/types.rs
--- a/src/types.rs
+++ b/src/types.rs
@@ -946,8 +977,23 @@ impl Types {
}
TypeF::Enum(ref erows) => erows.subcontract()?,
TypeF::Record(ref rrows) => rrows.subcontract(vars, pol, sy)?,
- TypeF::Dict(ref ty) => {
- mk_app!(internals::dyn_record(), ty.subcontract(vars, pol, sy)?)
+ TypeF::Dict {
+ ref type_fields,
+ flavour: DictTypeFlavour::Contract,
+ } => {
+ mk_app!(
+ internals::dict_contract(),
+ type_fields.subcontract(vars, pol, sy)?
+ )
+ }
+ TypeF::Dict {
+ ref type_fields,
+ flavour: DictTypeFlavour::Type,
+ } => {
+ mk_app!(
+ internals::dict_type(),
+ type_fields.subcontract(vars, pol, sy)?
+ )
}
TypeF::Wildcard(_) => internals::dynamic(),
};
diff --git a/src/types.rs b/src/types.rs
--- a/src/types.rs
+++ b/src/types.rs
@@ -1091,9 +1137,16 @@ impl Display for Types {
}
write!(f, ". {curr}")
}
- TypeF::Enum(row) => write!(f, "[|{row}|]"),
- TypeF::Record(row) => write!(f, "{{{row}}}"),
- TypeF::Dict(ty) => write!(f, "{{_: {ty}}}"),
+ TypeF::Enum(row) => write!(f, "[| {row} |]"),
+ TypeF::Record(row) => write!(f, "{{ {row} }}"),
+ TypeF::Dict {
+ type_fields,
+ flavour: DictTypeFlavour::Type,
+ } => write!(f, "{{ _ : {type_fields} }}"),
+ TypeF::Dict {
+ type_fields,
+ flavour: DictTypeFlavour::Contract,
+ } => write!(f, "{{ _ | {type_fields} }}"),
TypeF::Arrow(dom, codom) => match dom.types {
TypeF::Arrow(_, _) | TypeF::Forall { .. } => write!(f, "({dom}) -> {codom}"),
_ => write!(f, "{dom} -> {codom}"),
diff --git a/stdlib/internals.ncl b/stdlib/internals.ncl
--- a/stdlib/internals.ncl
+++ b/stdlib/internals.ncl
@@ -104,12 +104,25 @@
else
%blame% (%label_with_message% "not a record" label),
- "$dyn_record" = fun contract label value =>
+ # Lazy dictionary contract for `{_ | T}`
+ "$dict_contract" = fun contract label value =>
if %typeof% value == `Record then
%record_lazy_assume% (%go_dict% label) value (fun _field => contract)
else
%blame% (%label_with_message% "not a record" label),
+ # Eager dictionary contract for `{_ : T}`
+ "$dict_type" = fun contract label value =>
+ if %typeof% value == `Record then
+ %record_map%
+ value
+ (
+ fun _field field_value =>
+ %assume% contract (%go_dict% label) field_value
+ )
+ else
+ %blame% (%label_with_message% "not a record" label),
+
"$forall_tail" = fun sealing_key acc label value =>
let current_polarity = %polarity% label in
let polarity = (%lookup_type_variable% sealing_key label).polarity in
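The difference between the eager `$dict_type` contract (`%record_map%`: check every field now) and the lazy `$dict_contract` (`%record_lazy_assume%`: check a field only when it is accessed) can be mimicked in a short Rust sketch. This is an analogy, not Nickel's actual evaluation model:

```rust
use std::collections::BTreeMap;

type Check = fn(i64) -> Result<i64, String>;

// Eager, like `{_ : T}`: the check runs over all fields immediately.
fn check_eager(dict: BTreeMap<String, i64>, check: Check) -> Result<BTreeMap<String, i64>, String> {
    dict.into_iter()
        .map(|(k, v)| check(v).map(|v| (k, v)))
        .collect()
}

// Lazy, like `{_ | T}`: the check is stored and only runs on field access.
struct LazyDict {
    fields: BTreeMap<String, i64>,
    check: Check,
}

impl LazyDict {
    fn get(&self, key: &str) -> Option<Result<i64, String>> {
        self.fields.get(key).map(|&v| (self.check)(v))
    }
}

fn positive(v: i64) -> Result<i64, String> {
    if v > 0 { Ok(v) } else { Err(format!("{v} is not positive")) }
}

fn main() {
    let mut dict = BTreeMap::new();
    dict.insert("a".to_string(), 1);
    dict.insert("b".to_string(), -2);

    // Eager checking fails right away because of the bad field `b`.
    assert!(check_eager(dict.clone(), positive).is_err());

    // Lazy checking only fails once the bad field is actually accessed.
    let lazy = LazyDict { fields: dict, check: positive };
    assert_eq!(lazy.get("a"), Some(Ok(1)));
    assert!(matches!(lazy.get("b"), Some(Err(_))));
}
```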
diff --git a/stdlib/internals.ncl b/stdlib/internals.ncl
--- a/stdlib/internals.ncl
+++ b/stdlib/internals.ncl
@@ -152,8 +165,7 @@
# Recursive priorities operators
- "$rec_force" = fun value => %rec_force% (%force% value),
- "$rec_default" = fun value => %rec_default% (%force% value),
+ "$rec_force" = fun value => %rec_force% (%force% value),"$rec_default" = fun value => %rec_default% (%force% value),
# Provide access to std.contract.Equal within the initial environement. Merging
# makes use of `std.contract.Equal`, but it can't blindly substitute such an
diff --git a/stdlib/internals.ncl b/stdlib/internals.ncl
--- a/stdlib/internals.ncl
+++ b/stdlib/internals.ncl
@@ -162,3 +174,4 @@
# environment and prevents it from being shadowed.
"$stdlib_contract_equal" = std.contract.Equal,
}
+
| diff --git a/src/types.rs b/src/types.rs
--- a/src/types.rs
+++ b/src/types.rs
@@ -1125,6 +1178,7 @@ mod test {
/// Note that there are infinitely many string representations of the same type since, for
/// example, spaces are ignored: for the outcome of this function to be meaningful, the
/// original type must be written in the same way as types are formatted.
+ #[track_caller]
fn assert_format_eq(s: &str) {
let ty = parse_type(s);
assert_eq!(s, &format!("{ty}"));
diff --git a/src/types.rs b/src/types.rs
--- a/src/types.rs
+++ b/src/types.rs
@@ -1138,15 +1192,17 @@ mod test {
assert_format_eq("((Number -> Number) -> Number) -> Number");
assert_format_eq("Number -> (forall a. a -> String) -> String");
- assert_format_eq("{_: String}");
- assert_format_eq("{_: (String -> String) -> String}");
+ assert_format_eq("{ _ : String }");
+ assert_format_eq("{ _ : (String -> String) -> String }");
+ assert_format_eq("{ _ | String }");
+ assert_format_eq("{ _ | (String -> String) -> String }");
- assert_format_eq("{x: (Bool -> Bool) -> Bool, y: Bool}");
- assert_format_eq("forall r. {x: Bool, y: Bool, z: Bool ; r}");
- assert_format_eq("{x: Bool, y: Bool, z: Bool}");
+ assert_format_eq("{ x: (Bool -> Bool) -> Bool, y: Bool }");
+ assert_format_eq("forall r. { x: Bool, y: Bool, z: Bool ; r }");
+ assert_format_eq("{ x: Bool, y: Bool, z: Bool }");
- assert_format_eq("[|`a, `b, `c, `d|]");
- assert_format_eq("forall r. [|`tag1, `tag2, `tag3 ; r|]");
+ assert_format_eq("[| `a, `b, `c, `d |]");
+ assert_format_eq("forall r. [| `tag1, `tag2, `tag3 ; r |]");
assert_format_eq("Array Number");
assert_format_eq("Array (Array Number)");
diff --git a/src/types.rs b/src/types.rs
--- a/src/types.rs
+++ b/src/types.rs
@@ -1156,7 +1212,7 @@ mod test {
assert_format_eq("_");
assert_format_eq("_ -> _");
- assert_format_eq("{x: _, y: Bool}");
- assert_format_eq("{_: _}");
+ assert_format_eq("{ x: _, y: Bool }");
+ assert_format_eq("{ _ : _ }");
}
}
diff --git a/tests/integration/contracts_fail.rs b/tests/integration/contracts_fail.rs
--- a/tests/integration/contracts_fail.rs
+++ b/tests/integration/contracts_fail.rs
@@ -1,6 +1,6 @@
use assert_matches::assert_matches;
use codespan::Files;
-use nickel_lang::error::{Error, EvalError, IntoDiagnostics};
+use nickel_lang::error::{Error, EvalError, IntoDiagnostics, TypecheckError};
use nickel_lang_utilities::{eval, program_from_expr};
diff --git a/tests/integration/contracts_fail.rs b/tests/integration/contracts_fail.rs
--- a/tests/integration/contracts_fail.rs
+++ b/tests/integration/contracts_fail.rs
@@ -296,9 +296,11 @@ fn records_contracts_closed() {
fn dictionary_contracts() {
use nickel_lang::label::ty_path::Elem;
- assert_raise_blame!("%force% (({foo} | {_: Number}) & {foo = \"string\"}) true");
+ // dictionary contracts propagate through merging, as opposed to contracts derived from
+ // dictionary types
+ assert_raise_blame!("%force% (({foo} | {_ | Number}) & {foo = \"string\"}) true");
- let res = eval("%force% ({foo = 1} | {_: String}) false");
+ let res = eval("%force% ({foo = 1} | {_ | String}) false");
match &res {
Err(Error::EvalError(EvalError::BlameError {
evaluated_arg: _,
diff --git a/tests/integration/contracts_fail.rs b/tests/integration/contracts_fail.rs
--- a/tests/integration/contracts_fail.rs
+++ b/tests/integration/contracts_fail.rs
@@ -339,3 +341,16 @@ fn type_path_with_aliases() {
"let Foo = {foo: Number} in %force% (((fun x => {foo = \"a\"}) | Dyn -> Foo) null)",
);
}
+
+#[test]
+fn contracts_dont_capture_typevar() {
+ assert_matches!(
+ eval("forall a b. b -> Array b -> {_ | a}"),
+ Err(Error::TypecheckError(TypecheckError::UnboundIdentifier(ident, _))) if ident.label() == "a"
+ );
+
+ assert_matches!(
+ eval("forall a b. b -> Array b -> {foo | a -> Num}"),
+ Err(Error::TypecheckError(TypecheckError::UnboundIdentifier(ident, _))) if ident.label() == "a"
+ );
+}
diff --git a/tests/integration/pass/records.ncl b/tests/integration/pass/records.ncl
--- a/tests/integration/pass/records.ncl
+++ b/tests/integration/pass/records.ncl
@@ -118,12 +118,12 @@ let {check, ..} = import "lib/assert.ncl" in
# recursive overriding with dictionaries
# regression tests for [#892](https://github.com/tweag/nickel/issues/892)
- (({a = 1, b = a} | {_: Number}) & { a | force = 2}).b == 2,
+ (({a = 1, b = a} | {_ | Number}) & { a | force = 2}).b == 2,
({
b = { foo = c.foo },
c = {}
- } | {_: {
+ } | {_ | {
foo | default = 0
} }).b.foo == 0,
diff --git a/tests/integration/pass/records.ncl b/tests/integration/pass/records.ncl
--- a/tests/integration/pass/records.ncl
+++ b/tests/integration/pass/records.ncl
@@ -132,5 +132,9 @@ let {check, ..} = import "lib/assert.ncl" in
# regression test for [#1224](https://github.com/tweag/nickel/issues/1224)
std.record.fields ({} | { field | optional = "value" }) == [ "field" ],
+
+ # check that record type don't propagate through merging
+ ({foo = "a"} | {_ : String}) & {foo | force = 1} & {bar = false}
+ == {foo = 1, bar = false},
]
|> check
diff --git a/tests/snapshot/snapshots/snapshot__error_dictionary_contract_fail.ncl.snap b/tests/snapshot/snapshots/snapshot__error_dictionary_contract_fail.ncl.snap
--- a/tests/snapshot/snapshots/snapshot__error_dictionary_contract_fail.ncl.snap
+++ b/tests/snapshot/snapshots/snapshot__error_dictionary_contract_fail.ncl.snap
@@ -6,8 +6,8 @@ error: contract broken by a value
┌─ [INPUTS_PATH]/errors/dictionary_contract_fail.ncl:1:9
│
1 │ { foo = 1, bar = "bar" } | {_: String}
- │ ^ ------ expected dictionary field type
+ │ - ------ expected dictionary field type
│ │
- │ applied to this expression
+ │ evaluated to this expression
| Polymorphic contracts and contract propagation seem fundamentally incompatible
#1141 made dictionary contracts (or, should I say, dictionary types, as we will see that the difference matters) behave more lazily and be overriding-friendly. The gist is that `{foo = ..., bar = ..., baz = ...} | {_ : T}` is now strictly equivalent to writing `{foo | T = ..., bar | T = ..., baz | T = ...}`. This unveiled a subtle but important question about the interaction between polymorphic contracts and the fact that merging propagates contracts.
## Propagation
You can read more in RFC005, but _propagation_ in Nickel means that if I merge `{foo | Number | default = 5}` with `{foo = "a"}`, the contract will propagate to `"a"` and blame:
```
nickel> {foo | Number | default = 5} & {foo = "a"}
error: contract broken by a value
┌─ repl-input-3:1:39
│
1 │ {foo | Number | default = 5} & {foo = "a"}
│ ------ ^^^ applied to this expression
│ │
│ expected type
│
┌─ <unknown> (generated by evaluation):1:1
│
1 │ "a"
│ --- evaluated to this value
```
This is inspired by CUE and NixOS, in the spirit that merging can only refine values and strengthen the constraints (in practice, the contracts).
## Polymorphic contracts
Nickel implements polymorphic contracts which enforce parametricity. This is done thanks to a technique called sealing. Anything which enters a polymorphic function as a generic value is sealed, and this value is only unsealed when exiting the function, ensuring that a polymorphic function can't inspect a generic argument: it can only pass it around to other functions (with a matching type) or return it.
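As an informal sketch of how sealing enforces parametricity (the blame messages in the comments are illustrative, not the interpreter's actual output):

```nickel
# A parametric function just passes the sealed value along: the argument
# is sealed on entry and unsealed on exit, so this evaluates to 5.
((fun x => x) | (forall a. a -> a)) 5

# A function that inspects its generic argument has to force the sealed
# value, which blames: the function broke parametricity.
((fun x => x + 1) | (forall a. a -> a)) 5
```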
## The issue
Here is the problematic case which appeared after #1141. Recall the type signature of `record.insert`:
```nickel
insert : forall a. String -> a -> {_: a} -> {_: a}
```
Now, let's run the following example on current master:
```nickel
nickel> let singleton_foo = fun value => record.insert "foo" value {}
in (singleton_foo {sub_left = 1}) & {foo.sub_right = 2}
error: contract broken by a function
┌─ <stdlib/record.ncl>:65:29
│
65 │ : forall a. String -> a -> { _ : a } -> { _ : a }
│ - expected type of an argument of an inner call
│
┌─ <unknown> (generated by evaluation):1:1
│
1 │ x
│ - evaluated to this value
│
= This error may happen in the following situation:
[...]
```
This program looks totally valid (and it is), yet it blames `record.insert` for breaking the polymorphic contract. Here is what happens when calling `record.insert`:
1. As a polymorphic argument, `{sub_left = 1}` is sealed, becoming `seal key_a {sub_left = 1}`
2. `{_: a}` is applied to `{}`, to no effect
3. The call to the `%record_insert%` primop produces `{foo = seal key_a {sub_left = 1}}`
4. The outer dictionary contract is applied (this time corresponding to an unseal operation), giving `{foo | Unseal key_a = seal key_a {sub_left = 1}}`.
So far, so good, and if we stopped here, we would be fine. However, because of the new semantics of dictionary contracts and propagation, when we merge with `{foo = {sub_right = 2}}`, the contract attached to `foo` is lazily propagated, giving:
```nickel
{foo | Unseal key_a = (seal key_a {sub_left = 1}) & {sub_right = 2}}
```
Now, when we try to evaluate this, the interpreter has to force `seal key_a {sub_left = 1}` and complains that we are trying to use a sealed value. Intuitively, we would like to get something like `{foo = (unseal key_a (seal key_a {sub_left = 1})) & {sub_right = 2}}` instead.
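The four steps above can be written as informal successive reductions (here `seal`, `unseal` and `Unseal` are notation for this explanation, not concrete Nickel syntax):

```nickel
record.insert "foo" {sub_left = 1} {}
# step 1: the polymorphic argument is sealed on entry
# steps 3-4: the result carries the unsealing dictionary contract, lazily
{foo | Unseal key_a = seal key_a {sub_left = 1}}
# merging with {foo = {sub_right = 2}} propagates the field contract
# onto the merged value:
{foo | Unseal key_a = (seal key_a {sub_left = 1}) & {sub_right = 2}}
# forcing this merge inspects a sealed term: blame, even though the
# program never actually misused the polymorphic value
```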
## Does this have anything to do with dictionaries?
In fact, this issue hasn't much to do with dictionary contracts per se. It looks like we could reproduce this issue on records with a statically-known shape:
```
nickel>
let set_foo : forall a. a -> {foo : a} =
fun x => {foo = x}
in
(set_foo {sub_left = 1}) & {foo.sub_right = 2}
{ foo = { sub_left = 1, sub_right = 2 } }
nickel>
```
Ugh. It doesn't complain when using record types. The reason is that, although it might mostly be incidental, or even accidental, record types - once transformed into contracts - behave differently from record contracts (that is, if we used `|` instead of `:`). The checks are more eager (with respect to missing/extra record fields), they have to handle the polymorphic tail, and they basically `record.map` the contracts onto the field values, instead of using the mechanism of storing lazy contracts at the position of fields. **In particular, the field contracts derived from a record type don't propagate**. Which turns out to be the right thing in this specific case.
### Using record contracts
Ok, so let's transform the example to use record contracts instead:
```
nickel>
let set_foo : forall a. a -> {foo | a} =
fun x => {foo = x}
in
(set_foo {sub_left = 1}) & {foo.sub_right = 2}
error: unbound identifier
┌─ repl-input-6:2:37
│
2 │ let set_foo : forall a. a -> {foo | a} =
│ ^ this identifier is unbound
```
Oh. When implementing RFC002, we made the choice to scope type variables only through static types. That is, in a term `forall a. body`, all the direct descendants of `body` in the AST (`body` included) which are syntactically static types (as defined by RFC002) will have `a` as a type variable in scope. However, this stops at any node that is not a static type. `{foo | T}` isn't a static type, because of the use of `|`. So the variable doesn't cross the boundary.
Thus, currently, **the parser syntactically rejects the use of a record contract with free type variables inside**, which prevents the problematic example from even being parsed in the first place.
So, all in all, we can't trigger the issue with records only. This is due to a bunch of reasons, but mostly to the fact that record types and record contracts have different semantics and capabilities, although part of it might be a bit accidental. Record contracts are lazy and propagate, but they can't embed any polymorphic contract (well, they can have a `forall x. ...` inside, but that's not a problem: the issue is with free type variables descending from a polymorphic contract). Record types, on the other hand, can include polymorphic subcontracts, but they don't propagate. This explains why we had to wait for the change of semantics of the dictionary type, which somehow combines the two (it allows polymorphic contracts, because it is first and foremost a static type, but its implementation is lazy and it gets propagated), to unveil the interaction.
## Solutions
The heart of the problem is not to be minimized. It really seems like polymorphic contracts are fundamentally incompatible with lazy propagation: propagation ends up applying sealing and unsealing contracts to terms that had nothing to do with the polymorphic call, and it feels like type variables somehow escape their scope. It's clearly incorrect, but what to do about it is not totally clear. Here are some leads:
### The radical one: annihilation of polymorphic contracts
A radical solution is to get rid of polymorphic sealing and unsealing altogether. They are a complex part of the contract system and have been interacting in non-trivial ways with recursive overriding as well (see e.g. #1175), so it's not clear if they're paying for their complexity. This would make contracts check fewer things, but would allow types and contracts to be unified (at least at runtime) in an indistinguishable way.
### Endorse the difference between static types and contracts
Another point of view is that the (seemingly partly accidental) distinction between contracts derived from static types and pure record contracts is not that absurd in the end. This distinction might correspond to two different usages of validation, in the same way as, since RFC005, `{foo | Number = 5}` and `{foo = (5 | Number)}` don't have the same semantics anymore: the latter is not metadata, but a mere contract annotation, and doesn't propagate through merging.
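To illustrate the RFC005 distinction just mentioned (a hypothetical sketch following the semantics described in this issue, not tested output):

```nickel
# `| Number` as field metadata: the contract propagates through merging,
# so the overriding value is still checked and blames.
{foo | Number | default = 5} & {foo = "a"}

# `| Number` as a plain annotation on the value: a local check on 5 only,
# which does not propagate to the value that overrides the default.
{foo | default = (5 | Number)} & {foo = "a"}
```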
For example, would it be very intuitive that calling a function like `swap_foo_bar_num : {bar : Number, foo : Number} -> {bar : Number, foo : Number}` would suddenly forbid subsequent overriding of `foo` or `bar` with anything other than a number? This seems to be a local contract, concerned with this specific function call.
So, maybe the idea that static types correspond to local checks, which don't propagate/contaminate out of their scope, while record contracts correspond to metadata, which do, isn't so silly. In that case, one potential solution to our problem would be to introduce a dictionary contract variant `{_ | T}`, which would behave operationally as the current dictionary type, would propagate, but could not be used with free type variables. The static variant `{_ : T}` would then be reverted to its previous semantics, which is mostly to map a contract onto a record eagerly. Note that the issue of static record types or dictionary types not preserving the recursivity of records is a real one (one of the original motivations for the change), but it is orthogonal: we can very much preserve recursivity while preventing the descendant contracts from propagating.
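Concretely, the two dictionary forms under this proposal would behave as follows (a sketch; `{_ | T}` is the proposed syntax, and the expected behaviors mirror the test cases added in the accompanying patch):

```nickel
# Dictionary *contract*: lazy, attached to the fields, and propagating
# through merging, so a later override is still checked and blames.
({foo = 1} | {_ | Number}) & {foo | force = "a"}

# Dictionary *type*: eagerly maps the contract onto the current fields
# and does not propagate, so a later override escapes the check:
# this merge evaluates to {foo = 1}.
({foo = "a"} | {_ : String}) & {foo | force = 1}
```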
| Regarding our way forward, I would like to see a dictionary contract syntax `{_ | T}` anyway. I've had informal discussions with people unfamiliar with Nickel who were writing record contracts with `|`, and needing to write dictionary contracts as `{_ : T}` even outside a statically typed context was confusing to them.
Once we have this separate syntax it would be a relatively small change to revert the contract conversion of `{_ : T}` to before #1141 and keep the new lazy semantics for `{_ | T}`. So, I guess, this is just a long winded way of saying I prefer the second of your two alternatives 😅
I have another question now: let's say we apply the second proposal. How should a contract like `{foo : Number, bar | String}` behave? Should the `Number` contract of `foo` propagate or not? In fact, I've always found it dubious to use a type annotation on something that doesn't have a definition (outside of writing a record type, of course), or to mix `:` and `|`. Hence I would err on the side of this having no effect, and issue a warning, or even an error at parsing.
Yes, `{foo : Number, bar | String}` doesn't seem like a sensible thing to write in my opinion as well. I would be in favor of making that syntactically invalid, say by requiring a type annotation in a record to always be part of a definition with `=`?
Update: for now, the decision was made to go with the approach explained above. That is, there is a distinction between types and contracts: contracts propagate, while types don't. We will also add a variant of the dictionary contract, `{_ | Type}`, which will correspond to the current dictionary contract, and the dictionary type `{_ : Type}` will correspond to mapping type annotations `:` onto the values, which won't propagate.
"types::test::types_pretty_printing",
"contracts_fail::contracts_dont_capture_typevar",
"contracts_fail::dictionary_contracts",
"pass::check_file_tests_integration_pass_records_ncl",
"pretty::records",
"check_error_snapshots_tests_snapshot_inputs_errors_dictionary_contract_fail_ncl"
] | [
"environment::tests::test_deepness",
"environment::tests::test_env_base",
"environment::tests::test_clone",
"environment::tests::test_iter_elem",
"environment::tests::test_iter_layer",
"eval::merge::split::tests::all_center",
"eval::merge::split::tests::all_left",
"eval::merge::split::tests::all_right... | [] | [] | 98 |
tweag/nickel | 979 | tweag__nickel-979 | [
"961"
] | 2d6197d653ca7f7924c9d93398c37496a013bae3 | diff --git a/benches/mantis/tasks/telegraf.ncl b/benches/mantis/tasks/telegraf.ncl
--- a/benches/mantis/tasks/telegraf.ncl
+++ b/benches/mantis/tasks/telegraf.ncl
@@ -42,7 +42,7 @@ let types = import "../schemas/nomad/types.ncl" in
[outputs.influxdb]
database = "telegraf"
urls = [ "http://172.16.0.20:8428" ]
- "%m
+ "%
},
..
}
diff --git a/benches/nixpkgs/lists.ncl b/benches/nixpkgs/lists.ncl
--- a/benches/nixpkgs/lists.ncl
+++ b/benches/nixpkgs/lists.ncl
@@ -10,7 +10,7 @@
Example:
singleton "foo"
>> [ "foo" ]
- "%m
+ "%
= fun x => [x],
forEach: forall a b. (Array a) -> (a -> b) -> (Array b)
diff --git a/benches/nixpkgs/lists.ncl b/benches/nixpkgs/lists.ncl
--- a/benches/nixpkgs/lists.ncl
+++ b/benches/nixpkgs/lists.ncl
@@ -23,7 +23,7 @@
toString x
)
>> [ "1" "2" ]
- "%m
+ "%
= fun xs f => array.map f xs,
foldr: forall a b. (a -> b -> b) -> b -> (Array a) -> b
diff --git a/benches/nixpkgs/lists.ncl b/benches/nixpkgs/lists.ncl
--- a/benches/nixpkgs/lists.ncl
+++ b/benches/nixpkgs/lists.ncl
@@ -40,7 +40,7 @@
strange = foldr (fun int str => toString (int + 1) @@ str) "a"
strange [ 1 2 3 4 ]
=> "2345a"
- "%m
+ "%
= fun op nul list =>
let len = array.length list in
let rec fold_ = fun n =>
diff --git a/benches/nixpkgs/lists.ncl b/benches/nixpkgs/lists.ncl
--- a/benches/nixpkgs/lists.ncl
+++ b/benches/nixpkgs/lists.ncl
@@ -51,7 +51,7 @@
fold: forall a b. (a -> b -> b) -> b -> (Array a) -> b
| doc m%"
- `fold` is an alias of `foldr` for historic reasons "%m
+ `fold` is an alias of `foldr` for historic reasons "%
# FIXME(Profpatsch): deprecate?
= foldr,
diff --git a/benches/nixpkgs/lists.ncl b/benches/nixpkgs/lists.ncl
--- a/benches/nixpkgs/lists.ncl
+++ b/benches/nixpkgs/lists.ncl
@@ -69,7 +69,7 @@
lstrange = foldl (str: int: str + toString (int + 1)) "a"
lstrange [ 1 2 3 4 ]
=> "a2345"
- "%m
+ "%
= fun op nul list =>
let rec foldl_ = fun n =>
if n == -1
diff --git a/benches/nixpkgs/lists.ncl b/benches/nixpkgs/lists.ncl
--- a/benches/nixpkgs/lists.ncl
+++ b/benches/nixpkgs/lists.ncl
@@ -84,7 +84,7 @@
The difference is that evaluation is forced upon access. Usually used
with small whole results (in contrast with lazily-generated list or large
lists where only a part is consumed.)
- "%m
+ "%
= fun op nul list => array.foldl op nul list,
imap0: forall a b. (Num -> a -> b) -> (Array a) -> (Array b)
diff --git a/benches/nixpkgs/lists.ncl b/benches/nixpkgs/lists.ncl
--- a/benches/nixpkgs/lists.ncl
+++ b/benches/nixpkgs/lists.ncl
@@ -94,7 +94,7 @@
Example:
imap0 (fun i v => "%{v}-%{%to_string% i}") ["a", "b"]
>> [ "a-0", "b-1" ]
- "%%m
+ "%%
= fun f list => array.generate (fun n => f n (array.elem_at n list)) (array.length list),
imap1: forall a b. (Num -> a -> b) -> (Array a) -> (Array b)
diff --git a/benches/nixpkgs/lists.ncl b/benches/nixpkgs/lists.ncl
--- a/benches/nixpkgs/lists.ncl
+++ b/benches/nixpkgs/lists.ncl
@@ -104,7 +104,7 @@
Example:
imap1 (fun i v => "%{v}-%{toString i}") ["a", "b"]
>> [ "a-1", "b-2" ]
- "%%m
+ "%%
= fun f list => array.generate (fun n => f (n + 1) (array.elem_at n list)) (array.length list),
concatMap: forall a b. (a -> (Array b)) -> (Array a) -> (Array b)
diff --git a/benches/nixpkgs/lists.ncl b/benches/nixpkgs/lists.ncl
--- a/benches/nixpkgs/lists.ncl
+++ b/benches/nixpkgs/lists.ncl
@@ -114,7 +114,7 @@
Example:
concatMap (fun x => [x] @ ["z"]) ["a", "b"]
>> [ "a", "z", "b", "z" ]
- "%m
+ "%
= (fun f list => array.flatten (array.map f list)),
flatten: Dyn -> Array Dyn
diff --git a/benches/nixpkgs/lists.ncl b/benches/nixpkgs/lists.ncl
--- a/benches/nixpkgs/lists.ncl
+++ b/benches/nixpkgs/lists.ncl
@@ -127,7 +127,7 @@
>> [1, 2, 3, 4, 5]
flatten 1
>> [1]
- "%m
+ "%
= fun x =>
if builtin.is_array x
then concatMap (fun y => flatten y) (x | Array Dyn)
diff --git a/benches/nixpkgs/lists.ncl b/benches/nixpkgs/lists.ncl
--- a/benches/nixpkgs/lists.ncl
+++ b/benches/nixpkgs/lists.ncl
@@ -140,7 +140,7 @@
Example:
remove 3 [ 1, 3, 4, 3 ]
>> [ 1, 4 ]
- "%m
+ "%
= # Element to remove from the list
fun e => array.filter ((!=) e),
diff --git a/benches/nixpkgs/lists.ncl b/benches/nixpkgs/lists.ncl
--- a/benches/nixpkgs/lists.ncl
+++ b/benches/nixpkgs/lists.ncl
@@ -157,7 +157,7 @@
>> 3
findSingle (fun x => x == 3) "none" "multiple" [ 1, 9 ]
>> "none"
- "%m
+ "%
= fun
# Predicate
pred
diff --git a/benches/nixpkgs/lists.ncl b/benches/nixpkgs/lists.ncl
--- a/benches/nixpkgs/lists.ncl
+++ b/benches/nixpkgs/lists.ncl
@@ -183,7 +183,7 @@
>> 6
findFirst (fun x => x > 9) 7 [ 1, 6, 4 ]
>> 7
- "%m
+ "%
= fun
# Predicate
pred
diff --git a/benches/nixpkgs/lists.ncl b/benches/nixpkgs/lists.ncl
--- a/benches/nixpkgs/lists.ncl
+++ b/benches/nixpkgs/lists.ncl
@@ -204,7 +204,7 @@
>> true
any builtin.is_str [ 1, { } ]
>> false
- "%m
+ "%
= fun pred => foldr (fun x y => if pred x then true else y) false,
all: forall a. (a -> Bool) -> Array a -> Bool
diff --git a/benches/nixpkgs/lists.ncl b/benches/nixpkgs/lists.ncl
--- a/benches/nixpkgs/lists.ncl
+++ b/benches/nixpkgs/lists.ncl
@@ -217,7 +217,7 @@
>> true
all (fun x => x < 3) [ 1, 2, 3 ]
>> false
- "%m
+ "%
= fun pred => foldr (fun x y => if pred x then y else false) true,
count: forall a. (a -> Bool) -> Array a -> Num
diff --git a/benches/nixpkgs/lists.ncl b/benches/nixpkgs/lists.ncl
--- a/benches/nixpkgs/lists.ncl
+++ b/benches/nixpkgs/lists.ncl
@@ -228,7 +228,7 @@
Example:
count (fun x => x == 3) [ 3, 2, 3, 4, 6 ]
>> 2
- "%m
+ "%
= fun
# Predicate
pred => foldl_ (fun c x => if pred x then c + 1 else c) 0,
diff --git a/benches/nixpkgs/lists.ncl b/benches/nixpkgs/lists.ncl
--- a/benches/nixpkgs/lists.ncl
+++ b/benches/nixpkgs/lists.ncl
@@ -244,7 +244,7 @@
>> [ "foo" ]
optional' false "foo"
>> [ ]
- "%m
+ "%
= fun cond elem => if cond then [elem] else [],
optionals: forall a. Bool -> Array a -> Array a
diff --git a/benches/nixpkgs/lists.ncl b/benches/nixpkgs/lists.ncl
--- a/benches/nixpkgs/lists.ncl
+++ b/benches/nixpkgs/lists.ncl
@@ -256,7 +256,7 @@
>> [ 2, 3 ]
optionals false [ 2, 3 ]
>> [ ]
- "%m
+ "%
= fun
# Condition
cond
diff --git a/benches/nixpkgs/lists.ncl b/benches/nixpkgs/lists.ncl
--- a/benches/nixpkgs/lists.ncl
+++ b/benches/nixpkgs/lists.ncl
@@ -276,7 +276,7 @@
>> [ 1, 2 ]
toList "hi"
>> [ "hi "]
- "%m
+ "%
= fun x => if builtin.is_array x then (x | Array Dyn) else [x],
range: Num -> Num -> Array Num
diff --git a/benches/nixpkgs/lists.ncl b/benches/nixpkgs/lists.ncl
--- a/benches/nixpkgs/lists.ncl
+++ b/benches/nixpkgs/lists.ncl
@@ -288,7 +288,7 @@
>> [ 2, 3, 4 ]
range 3 2
>> [ ]
- "%m
+ "%
= fun
# First integer in the range
first
diff --git a/benches/nixpkgs/lists.ncl b/benches/nixpkgs/lists.ncl
--- a/benches/nixpkgs/lists.ncl
+++ b/benches/nixpkgs/lists.ncl
@@ -307,7 +307,7 @@
Example:
partition (fun x => x > 2) [ 5, 1, 2, 3, 4 ]
>> { right = [ 5, 3, 4 ], wrong = [ 1, 2 ] }
- "%m
+ "%
= fun pred =>
foldr (fun h t =>
if pred h
diff --git a/benches/nixpkgs/lists.ncl b/benches/nixpkgs/lists.ncl
--- a/benches/nixpkgs/lists.ncl
+++ b/benches/nixpkgs/lists.ncl
@@ -334,7 +334,7 @@
mate = [ { name = "mate", script = "gnome-session &" } ],
xfce = [ { name = "xfce", script = "xfce4-session &" } ],
}
- "%m
+ "%
= fun pred => foldl_ (fun r e =>
let key = pred e in
{ "%{key}" = (r."%{key}" @ [e]) } & (record.remove key r)
diff --git a/benches/nixpkgs/lists.ncl b/benches/nixpkgs/lists.ncl
--- a/benches/nixpkgs/lists.ncl
+++ b/benches/nixpkgs/lists.ncl
@@ -347,7 +347,7 @@
Example:
groupBy_ builtins.add 0 (fun x => boolToString (x > 2)) [ 5, 1, 2, 3, 4 ]
>> { true = 12, false = 3 }
- "%m
+ "%
= fun op nul pred lst => record.map (fun name value => foldl op nul value) (groupBy pred lst),
zipListsWith: forall a b c. (a -> b -> c) -> Array a -> Array b -> Array c
diff --git a/benches/nixpkgs/lists.ncl b/benches/nixpkgs/lists.ncl
--- a/benches/nixpkgs/lists.ncl
+++ b/benches/nixpkgs/lists.ncl
@@ -359,7 +359,7 @@
Example:
zipListsWith (fun a b => a @@ b) ["h", "l"] ["e", "o"]
>> ["he", "lo"]
- "%m
+ "%
= fun
# Function to zip elements of both lists
f
diff --git a/benches/nixpkgs/lists.ncl b/benches/nixpkgs/lists.ncl
--- a/benches/nixpkgs/lists.ncl
+++ b/benches/nixpkgs/lists.ncl
@@ -377,7 +377,7 @@
Example:
zipLists [ 1, 2 ] [ "a", "b" ]
>> [ { fst = 1, snd = "a" }, { fst = 2, snd = "b" } ]
- "%m
+ "%
= zipListsWith (fun fst snd => { fst = fst, snd = snd }),
reverseList: forall a. Array a -> Array a
diff --git a/benches/nixpkgs/lists.ncl b/benches/nixpkgs/lists.ncl
--- a/benches/nixpkgs/lists.ncl
+++ b/benches/nixpkgs/lists.ncl
@@ -387,7 +387,7 @@
Example:
reverseList [ "b", "o", "j" ]
>> [ "j", "o", "b" ]
- "%m
+ "%
= fun xs =>
let l = array.length xs in
array.generate (fun n => array.elem_at (l - n - 1) xs) l,
diff --git a/benches/nixpkgs/lists.ncl b/benches/nixpkgs/lists.ncl
--- a/benches/nixpkgs/lists.ncl
+++ b/benches/nixpkgs/lists.ncl
@@ -413,7 +413,7 @@
visited = [ "/" "/home/user" ], # elements leading to the cycle (in reverse order)
rest = [ "/home" "other" ], # everything else
- "%m
+ "%
= fun stopOnCycles before list =>
let rec dfs_ = fun us visited_ rest_ =>
let c = array.filter (fun x => before x us) visited_ in
diff --git a/benches/nixpkgs/lists.ncl b/benches/nixpkgs/lists.ncl
--- a/benches/nixpkgs/lists.ncl
+++ b/benches/nixpkgs/lists.ncl
@@ -452,7 +452,7 @@
== { result = [ "other", "/", "/home", "/home/user" ], }
toposort (a: b: a < b) [ 3, 2, 1 ] == { result = [ 1, 2, 3 ], }
- "%m
+ "%
= fun before list =>
let dfsthis = listDfs true before list in
let toporest = toposort before (dfsthis.visited @ dfsthis.rest) in
diff --git a/benches/nixpkgs/lists.ncl b/benches/nixpkgs/lists.ncl
--- a/benches/nixpkgs/lists.ncl
+++ b/benches/nixpkgs/lists.ncl
@@ -481,7 +481,7 @@
Example:
sort (a: b: a < b) [ 5, 3, 7 ]
>> [ 3, 5, 7 ]
- "%m
+ "%
= fun strictLess list =>
let len = array.length list in
let first = array.head list in
diff --git a/benches/nixpkgs/lists.ncl b/benches/nixpkgs/lists.ncl
--- a/benches/nixpkgs/lists.ncl
+++ b/benches/nixpkgs/lists.ncl
@@ -511,7 +511,7 @@
>> 1
compareLists compare [ "a", "b" ] [ "a", "c" ]
>> 1
- "%m
+ "%
= fun cmp a b =>
if a == []
then if b == []
diff --git a/benches/nixpkgs/lists.ncl b/benches/nixpkgs/lists.ncl
--- a/benches/nixpkgs/lists.ncl
+++ b/benches/nixpkgs/lists.ncl
@@ -536,7 +536,7 @@
# >> ["10.5.16.62" "10.46.133.149" "10.54.16.25"]
# naturalSort ["v0.2", "v0.15", "v0.0.9"]
# >> [ "v0.0.9", "v0.2", "v0.15" ]
-# "%m
+# "%
# # TODO: broken. how to implement it in nickel?
# = fun lst =>
# let vectorise = fun s => array.map (fun x => if builtin.is_array x then (array.head x) | Num else x | Num) (string.split "(0|[1-9][0-9]*)" s) | Array Dyn
diff --git a/benches/nixpkgs/lists.ncl b/benches/nixpkgs/lists.ncl
--- a/benches/nixpkgs/lists.ncl
+++ b/benches/nixpkgs/lists.ncl
@@ -555,7 +555,7 @@
>> [ "a", "b" ]
take 2 [ ]
>> [ ]
- "%m
+ "%
= fun
# Number of elements to take
count => sublist 0 count,
diff --git a/benches/nixpkgs/lists.ncl b/benches/nixpkgs/lists.ncl
--- a/benches/nixpkgs/lists.ncl
+++ b/benches/nixpkgs/lists.ncl
@@ -570,7 +570,7 @@
>> [ "c", "d" ]
drop 2 [ ]
>> [ ]
- "%m
+ "%
= fun
# Number of elements to drop
count
diff --git a/benches/nixpkgs/lists.ncl b/benches/nixpkgs/lists.ncl
--- a/benches/nixpkgs/lists.ncl
+++ b/benches/nixpkgs/lists.ncl
@@ -588,7 +588,7 @@
>> [ "b", "c", "d" ]
sublist 1 3 [ ]
>> [ ]
- "%m
+ "%
= fun
# Index at which to start the sublist
start
diff --git a/benches/nixpkgs/lists.ncl b/benches/nixpkgs/lists.ncl
--- a/benches/nixpkgs/lists.ncl
+++ b/benches/nixpkgs/lists.ncl
@@ -613,7 +613,7 @@
Example:
last [ 1, 2, 3 ]
>> 3
- "%m
+ "%
= fun lst =>
#assert lib.assertMsg (lst != []) "lists.last: list must not be empty!",
array.elem_at (array.length lst - 1) lst,
diff --git a/benches/nixpkgs/lists.ncl b/benches/nixpkgs/lists.ncl
--- a/benches/nixpkgs/lists.ncl
+++ b/benches/nixpkgs/lists.ncl
@@ -627,7 +627,7 @@
Example:
init [ 1, 2, 3 ]
>> [ 1, 2 ]
- "%m
+ "%
= fun lst =>
#assert lib.assertMsg (lst != []) "lists.init: list must not be empty!",
take (array.length lst - 1) lst,
diff --git a/benches/nixpkgs/lists.ncl b/benches/nixpkgs/lists.ncl
--- a/benches/nixpkgs/lists.ncl
+++ b/benches/nixpkgs/lists.ncl
@@ -639,7 +639,7 @@
Example:
unique [ 3, 2, 3, 4 ]
>> [ 3, 2, 4 ]
- "%m
+ "%
= foldl_ (fun acc e => if array.elem e acc then acc else acc @ [ e ]) [],
intersectLists: Array Dyn -> Array Dyn -> Array Dyn
diff --git a/benches/nixpkgs/lists.ncl b/benches/nixpkgs/lists.ncl
--- a/benches/nixpkgs/lists.ncl
+++ b/benches/nixpkgs/lists.ncl
@@ -649,7 +649,7 @@
Example:
intersectLists [ 1, 2, 3 ] [ 6, 3, 2 ]
>> [ 3, 2 ]
- "%m
+ "%
= fun e => array.filter (fun x => array.elem x e),
subtractLists: Array Dyn -> Array Dyn -> Array Dyn
diff --git a/benches/nixpkgs/lists.ncl b/benches/nixpkgs/lists.ncl
--- a/benches/nixpkgs/lists.ncl
+++ b/benches/nixpkgs/lists.ncl
@@ -659,14 +659,14 @@
Example:
subtractLists [ 3, 2 ] [ 1, 2, 3, 4, 5, 3 ]
>> [ 1, 4, 5 ]
- "%m
+ "%
= fun e => array.filter (fun x => !(array.elem x e)),
mutuallyExclusive: Array Dyn -> Array Dyn -> Bool
| doc m%"
Test if two lists have no common element.
It should be slightly more efficient than (intersectLists a b == [])
- "%m
+ "%
= fun a b => array.length a == 0 || !(any (fun x => array.elem x a) b),
}
diff --git a/benches/strings/interpolation.ncl b/benches/strings/interpolation.ncl
--- a/benches/strings/interpolation.ncl
+++ b/benches/strings/interpolation.ncl
@@ -104,5 +104,5 @@
Ue3d9Jd3nZeSetAzBBqk
DvN7NvTzdt4EyvUCq4Se
a9TLAq2eVr0exurkvJZN
- "%m
+ "%
}
diff --git a/doc/manual/syntax.md b/doc/manual/syntax.md
--- a/doc/manual/syntax.md
+++ b/doc/manual/syntax.md
@@ -93,7 +93,7 @@ false
### Strings
Nickel can work with sequences of characters, or strings. Strings are enclosed
-by `" ... "` for a single line string or by `m%" ... "%m` for a multiline
+by `" ... "` for a single line string or by `m%" ... "%` for a multiline
string. They can be concatenated with the operator `++`. Strings must be UTF-8
valid.
diff --git a/doc/manual/syntax.md b/doc/manual/syntax.md
--- a/doc/manual/syntax.md
+++ b/doc/manual/syntax.md
@@ -107,7 +107,7 @@ Examples:
"Hello, World!"
> m%"Well, if this isn't a multiline string?
-Yes it is, indeed it is"%m
+Yes it is, indeed it is"%
"Well, if this isn't a string?
Yes it is, indeed it is"
diff --git a/doc/manual/syntax.md b/doc/manual/syntax.md
--- a/doc/manual/syntax.md
+++ b/doc/manual/syntax.md
@@ -136,7 +136,7 @@ This line has no indentation.
This line is indented.
This line is even more indented.
This line has no more indentation.
-"%m
+"%
"This line has no indentation.
This line is indented.
This line is even more indented.
diff --git a/doc/manual/syntax.md b/doc/manual/syntax.md
--- a/doc/manual/syntax.md
+++ b/doc/manual/syntax.md
@@ -148,10 +148,10 @@ The only special sequence in a multiline string is the string interpolation.
Examples:
```text
-> m%"Multiline\nString?"%m
+> m%"Multiline\nString?"%
"Multiline\nString?"
-> m%"Multiline%{"\n"}String"%m
+> m%"Multiline%{"\n"}String"%
"Multiline
String"
```
diff --git a/doc/manual/syntax.md b/doc/manual/syntax.md
--- a/doc/manual/syntax.md
+++ b/doc/manual/syntax.md
@@ -163,16 +163,16 @@ the same amount of `%` as in the delimiters.
Examples:
```text
-> m%%"Hello World"%%m
+> m%%"Hello World"%%
"Hello World"
-> m%%%%%"Hello World"%%%%%m
+> m%%%%%"Hello World"%%%%%
"Hello World"
-> let w = "World" in m%%"Hello %{w}"%%m
+> let w = "World" in m%%"Hello %{w}"%%
"Hello %{w}"
-> let w = "World" in m%%"Hello %%{w}"%%m
+> let w = "World" in m%%"Hello %%{w}"%%
"Hello World"
```
diff --git a/doc/manual/syntax.md b/doc/manual/syntax.md
--- a/doc/manual/syntax.md
+++ b/doc/manual/syntax.md
@@ -183,14 +183,14 @@ indented string interpolation and the indentation would behave as expected:
> let log = m%"
if log:
print("log:", s)
-"%m in m%"
+"% in m%"
def concat(str_array, log=false):
res = []
for s in str_array:
%{log}
res.append(s)
return res
-"%m
+"%
"def concat(str_array, log=false):
res = []
for s in str_array:
diff --git a/doc/manual/syntax.md b/doc/manual/syntax.md
--- a/doc/manual/syntax.md
+++ b/doc/manual/syntax.md
@@ -528,7 +528,7 @@ Adding documentation can be done with `| doc < string >`. Examples:
it is based on facts rather than being invented or imagined,
and is accurate and reliable.
(Collins dictionary)
- "%m
+ "%
true
```
diff --git a/examples/config-gcc/config-gcc.ncl b/examples/config-gcc/config-gcc.ncl
--- a/examples/config-gcc/config-gcc.ncl
+++ b/examples/config-gcc/config-gcc.ncl
@@ -25,7 +25,7 @@ let GccFlag =
contract.blame_with "expected record or string" label in
let Path =
- let pattern = m%"^(.+)/([^/]+)$"%m in
+ let pattern = m%"^(.+)/([^/]+)$"% in
fun label value =>
if builtin.is_str value then
if string.is_match pattern value then
diff --git a/examples/config-gcc/config-gcc.ncl b/examples/config-gcc/config-gcc.ncl
--- a/examples/config-gcc/config-gcc.ncl
+++ b/examples/config-gcc/config-gcc.ncl
@@ -37,7 +37,7 @@ let Path =
let SharedObjectFile = fun label value =>
if builtin.is_str value then
- if string.is_match m%"\.so$"%m value then
+ if string.is_match m%"\.so$"% value then
value
else
contract.blame_with "not an .so file" label
diff --git a/rfcs/004-typechecking.md b/rfcs/004-typechecking.md
--- a/rfcs/004-typechecking.md
+++ b/rfcs/004-typechecking.md
@@ -220,14 +220,14 @@ subtractLists: forall a. Array a -> Array a -> Array a
Example:
subtractLists [ 3, 2 ] [ 1, 2, 3, 4, 5, 3 ]
>> [ 1, 4, 5 ]
- "%m
+ "%
= fun e => array.filter (fun x => !(array.elem (x | Dyn) (e | Array Dyn))),
mutuallyExclusive: forall a. Array a -> Array a -> Bool
| doc m%"
Test if two lists have no common element.
It should be slightly more efficient than (intersectLists a b == [])
- "%m
+ "%
= fun a b =>
array.length a == 0
|| !(any (fun x => array.elem (x | Dyn) (a | Array Dyn)) b),
diff --git a/rfcs/004-typechecking.md b/rfcs/004-typechecking.md
--- a/rfcs/004-typechecking.md
+++ b/rfcs/004-typechecking.md
@@ -253,7 +253,7 @@ flatten: Dyn -> Array Dyn
>> [1, 2, 3, 4, 5]
flatten 1
>> [1]
- "%m
+ "%
= fun x =>
if builtin.is_array x then
concatMap (fun y => flatten y) (x | Array Dyn)
diff --git a/rfcs/005-metadata-rework.md b/rfcs/005-metadata-rework.md
--- a/rfcs/005-metadata-rework.md
+++ b/rfcs/005-metadata-rework.md
@@ -171,7 +171,7 @@ let Package = { name | Str, drv | Drv, .. } in
},
build = m%"
%{build_inputs.foo.drv.out_path}/bin/foo $out
- "%m,
+ "%,
} & {
build_inputs = {
foo = { name = "foo", drv.out_path = "/fake/path" },
diff --git a/src/parser/grammar.lalrpop b/src/parser/grammar.lalrpop
--- a/src/parser/grammar.lalrpop
+++ b/src/parser/grammar.lalrpop
@@ -511,7 +511,7 @@ StringStart : StringKind = {
StringEnd : StringKind = {
"\"" => StringKind::Standard,
- "\"%m" => StringKind::Multiline,
+ "\"%" => StringKind::Multiline,
};
ChunkLiteral : String =
diff --git a/src/parser/grammar.lalrpop b/src/parser/grammar.lalrpop
--- a/src/parser/grammar.lalrpop
+++ b/src/parser/grammar.lalrpop
@@ -884,7 +884,7 @@ extern {
"=>" => Token::Normal(NormalToken::DoubleArrow),
"_" => Token::Normal(NormalToken::Underscore),
"\"" => Token::Normal(NormalToken::DoubleQuote),
- "\"%m" => Token::MultiStr(MultiStringToken::End),
+ "\"%" => Token::MultiStr(MultiStringToken::End),
"m%\"" => Token::Normal(NormalToken::MultiStringStart(<usize>)),
"Num" => Token::Normal(NormalToken::Num),
diff --git a/src/parser/lexer.rs b/src/parser/lexer.rs
--- a/src/parser/lexer.rs
+++ b/src/parser/lexer.rs
@@ -363,7 +363,7 @@ pub enum MultiStringToken<'input> {
/// variable number of `%` character, so the lexer matches candidate end delimiter, compare the
/// number of characters, and either emit the `End` token above, or turn the `CandidateEnd` to a
/// `FalseEnd` otherwise
- #[regex("\"%+m")]
+ #[regex("\"%+")]
CandidateEnd(&'input str),
/// Same as `CandidateEnd`, but for interpolation
#[regex("%+\\{")]
diff --git a/src/parser/lexer.rs b/src/parser/lexer.rs
--- a/src/parser/lexer.rs
+++ b/src/parser/lexer.rs
@@ -487,9 +487,9 @@ impl<'input> Lexer<'input> {
self.count = 0;
}
- fn enter_indstr(&mut self, hash_count: usize) {
+ fn enter_indstr(&mut self, percent_count: usize) {
self.enter_strlike(|lexer| ModalLexer::MultiStr(lexer.morph()));
- self.count = hash_count;
+ self.count = percent_count;
}
fn enter_normal(&mut self) {
diff --git a/src/parser/lexer.rs b/src/parser/lexer.rs
--- a/src/parser/lexer.rs
+++ b/src/parser/lexer.rs
@@ -568,7 +568,7 @@ impl<'input> Lexer<'input> {
s: &'input str,
span: Range<usize>,
) -> (Option<Token<'input>>, Range<usize>) {
- let split_at = s.len() - self.count + 1;
+ let split_at = s.len() - self.count;
let next_token = Token::MultiStr(MultiStringToken::Interpolation);
let next_span = Range {
start: span.start + split_at,
diff --git a/src/parser/lexer.rs b/src/parser/lexer.rs
--- a/src/parser/lexer.rs
+++ b/src/parser/lexer.rs
@@ -605,8 +605,12 @@ impl<'input> Iterator for Lexer<'input> {
Some(Normal(NormalToken::DoubleQuote | NormalToken::StrEnumTagBegin)) => {
self.enter_str()
}
- Some(Normal(NormalToken::MultiStringStart(hash_count))) => {
- self.enter_indstr(*hash_count)
+ Some(Normal(NormalToken::MultiStringStart(delim_size))) => {
+ // for interpolation & closing delimiters we only care about
+ // the number of `%`s (plus the opening `"` or `{`) so we
+ // drop the "kind marker" size here (i.e. the `m` character).
+ let size_without_kind_marker = delim_size - 1;
+ self.enter_indstr(size_without_kind_marker)
}
Some(Normal(NormalToken::LBrace)) => self.count += 1,
Some(Normal(NormalToken::RBrace)) => {
diff --git a/src/parser/lexer.rs b/src/parser/lexer.rs
--- a/src/parser/lexer.rs
+++ b/src/parser/lexer.rs
@@ -631,14 +635,14 @@ impl<'input> Iterator for Lexer<'input> {
// If we encounter a `CandidateInterp` token with the right number of characters, this is
// an interpolation sequence.
//
- // Note that the number of characters may be greater than `count`: in `m"%
- // %%%{foo} "%m`, the lexer will process `%%%{` as a candidate interpolation with 4
- // characters, while `count` is 1. In that case, we must emit an `%%` literal and put
- // an interpolation token in the buffer.
+ // Note that the number of characters may be greater than `count`: in `m%" %%%{foo} "%`,
+ // the lexer will process `%%%{` as a candidate interpolation with 4 characters, while
+ // `count` is 2. In that case, we must emit a `%%` literal and put an interpolation
+ // token in the buffer.
Some(MultiStr(MultiStringToken::CandidateInterpolation(s)))
- if s.len() >= (self.count - 1) =>
+ if s.len() >= self.count =>
{
- if s.len() == (self.count - 1) {
+ if s.len() == self.count {
token = Some(MultiStr(MultiStringToken::Interpolation));
self.enter_normal();
} else {
diff --git a/src/parser/lexer.rs b/src/parser/lexer.rs
--- a/src/parser/lexer.rs
+++ b/src/parser/lexer.rs
@@ -659,7 +663,7 @@ impl<'input> Iterator for Lexer<'input> {
// The interpolation token is put in the buffer such that it will be returned next
// time.
//
- // For example, in `m%%""%%%{exp}"%%m`, the `"%%%{` is a `QuotesCandidateInterpolation`
+ // For example, in `m%%""%%%{exp}"%%`, the `"%%%{` is a `QuotesCandidateInterpolation`
// which is split as a `"%` literal followed by an interpolation token.
Some(MultiStr(MultiStringToken::QuotesCandidateInterpolation(s)))
if s.len() >= self.count =>
diff --git a/src/parser/lexer.rs b/src/parser/lexer.rs
--- a/src/parser/lexer.rs
+++ b/src/parser/lexer.rs
@@ -694,14 +698,14 @@ impl<'input> Iterator for Lexer<'input> {
)));
}
}
- // If we encounter a `CandidateEnd` token with the right number of characters, this is
- // the end of a multiline string
+ // If we encounter a `CandidateEnd` token with the same number of `%`s as the
+ // starting token then it is the end of a multiline string
Some(MultiStr(MultiStringToken::CandidateEnd(s))) if s.len() == self.count => {
token = Some(MultiStr(MultiStringToken::End));
self.leave_indstr()
}
// Otherwise, it is just part of the string, so we transform the token into a
- // `FalseEnd` one
+ // `Literal` one
Some(MultiStr(MultiStringToken::CandidateEnd(s))) => {
token = Some(MultiStr(MultiStringToken::Literal(s)))
}
diff --git a/src/parser/utils.rs b/src/parser/utils.rs
--- a/src/parser/utils.rs
+++ b/src/parser/utils.rs
@@ -24,7 +24,7 @@ use crate::{
};
/// Distinguish between the standard string separators `"`/`"` and the multi-line string separators
-/// `m%"`/`"%m` in the parser.
+/// `m%"`/`"%` in the parser.
#[derive(Copy, Clone, Eq, PartialEq, Debug)]
pub enum StringKind {
Standard,
diff --git a/src/parser/utils.rs b/src/parser/utils.rs
--- a/src/parser/utils.rs
+++ b/src/parser/utils.rs
@@ -527,7 +527,7 @@ pub fn min_indent(chunks: &[StrChunk<RichTerm>]) -> usize {
/// baseline
/// ${x}
/// end
-/// "%m
+/// "%
/// ```
///
/// gives
diff --git a/src/parser/utils.rs b/src/parser/utils.rs
--- a/src/parser/utils.rs
+++ b/src/parser/utils.rs
@@ -548,7 +548,7 @@ pub fn min_indent(chunks: &[StrChunk<RichTerm>]) -> usize {
/// baseline
/// ${x} sth
/// end
-/// "%m
+/// "%
/// ```
///
/// gives
diff --git a/src/parser/utils.rs b/src/parser/utils.rs
--- a/src/parser/utils.rs
+++ b/src/parser/utils.rs
@@ -578,7 +578,7 @@ pub fn strip_indent(mut chunks: Vec<StrChunk<RichTerm>>) -> Vec<StrChunk<RichTer
// ${x} ${y}
// ${x}
// string
- // "%m
+ // "%
// ```
//
// We don't know at the time we process the expression `${x}` if it wil have to be re-indented,
diff --git a/src/pretty.rs b/src/pretty.rs
--- a/src/pretty.rs
+++ b/src/pretty.rs
@@ -12,15 +12,14 @@ fn min_interpolate_sign(text: &str) -> usize {
reg.find_iter(text)
.map(|m| {
let d = m.end() - m.start();
- // We iterate over all sequences `%+{` and `"%+m`, which could clash with the interpolation
+ // We iterate over all sequences `%+{` and `"%+`, which could clash with the interpolation
// syntax, and return the maximum number of `%` insead each sequence.
//
- // For the case of a closing delimiter `"%m`, we could actually be slightly smarter as we
+ // For the case of a closing delimiter `"%`, we could actually be slightly smarter as we
// don't necessarily need more `%`, but just a different number of `%`. For example, if the
- // string contains only one `"%%m`, then single `%` delimiters like `m%"` and `"%m` would
- // be fine. But picking the maximum
- //
- // TODO: Improve the `"%*m` case if necessary.
+ // string contains only one `"%%`, then single `%` delimiters like `m%"` and `"%` would
+ // be fine. But picking the maximum results in a simpler algorithm for now, which we can
+ // update later if necessary.
if m.as_str().ends_with('{') {
d
} else {
diff --git a/src/pretty.rs b/src/pretty.rs
--- a/src/pretty.rs
+++ b/src/pretty.rs
@@ -314,11 +313,7 @@ where
} else {
"".to_string()
},
- if multiline {
- format!("{}m", interp)
- } else {
- "".to_string()
- },
+ if multiline { interp } else { "".to_string() },
)
}
Fun(id, rt) => {
diff --git a/stdlib/array.ncl b/stdlib/array.ncl
--- a/stdlib/array.ncl
+++ b/stdlib/array.ncl
@@ -11,7 +11,7 @@
([ 1 ] | NonEmpty) =>
[ 1 ]
```
- "%m
+ "%
= fun label value =>
if %typeof% value == `Array then
if %length% value != 0 then
diff --git a/stdlib/array.ncl b/stdlib/array.ncl
--- a/stdlib/array.ncl
+++ b/stdlib/array.ncl
@@ -30,7 +30,7 @@
head [ "this is the head", "this is not" ] =>
"this is the head"
```
- "%m
+ "%
= fun l => %head% l,
tail : forall a. Array a -> Array a
diff --git a/stdlib/array.ncl b/stdlib/array.ncl
--- a/stdlib/array.ncl
+++ b/stdlib/array.ncl
@@ -42,7 +42,7 @@
tail [ 1, 2, 3 ] =>
[ 2, 3 ]
```
- "%m
+ "%
= fun l => %tail% l,
length : forall a. Array a -> Num
diff --git a/stdlib/array.ncl b/stdlib/array.ncl
--- a/stdlib/array.ncl
+++ b/stdlib/array.ncl
@@ -54,7 +54,7 @@
length [ "Hello,", " World!" ] =>
2
```
- "%m
+ "%
= fun l => %length% l,
map : forall a b. (a -> b) -> Array a -> Array b
diff --git a/stdlib/array.ncl b/stdlib/array.ncl
--- a/stdlib/array.ncl
+++ b/stdlib/array.ncl
@@ -67,7 +67,7 @@
map (fun x => x + 1) [ 1, 2, 3 ] =>
[ 2, 3, 4 ]
```
- "%m
+ "%
= fun f l => %map% l f,
elem_at : forall a. Num -> Array a -> a
diff --git a/stdlib/array.ncl b/stdlib/array.ncl
--- a/stdlib/array.ncl
+++ b/stdlib/array.ncl
@@ -79,7 +79,7 @@
elem_at 3 [ "zero" "one" "two" "three" "four" ] =>
"three"
```
- "%m
+ "%
= fun n l => %elem_at% l n,
concat : forall a. Array a -> Array a -> Array a
diff --git a/stdlib/array.ncl b/stdlib/array.ncl
--- a/stdlib/array.ncl
+++ b/stdlib/array.ncl
@@ -91,7 +91,7 @@
concat [ 1, 2, 3 ] [ 4, 5, 6 ] =>
[ 1, 2, 3, 4, 5, 6 ]
```
- "%m
+ "%
= fun l1 l2 => l1 @ l2,
foldl : forall a b. (a -> b -> a) -> a -> Array b -> a
diff --git a/stdlib/array.ncl b/stdlib/array.ncl
--- a/stdlib/array.ncl
+++ b/stdlib/array.ncl
@@ -107,7 +107,7 @@
(((0 + 1) + 2) 3) =>
6
```
- "%m
+ "%
= fun f acc l =>
if %length% l == 0 then
acc
diff --git a/stdlib/array.ncl b/stdlib/array.ncl
--- a/stdlib/array.ncl
+++ b/stdlib/array.ncl
@@ -126,7 +126,7 @@
((([] @ [3]) @ [2]) @ [1]) =>
[ 3, 2, 1 ]
```
- "%m
+ "%
= fun f fst l =>
if %length% l == 0 then
fst
diff --git a/stdlib/array.ncl b/stdlib/array.ncl
--- a/stdlib/array.ncl
+++ b/stdlib/array.ncl
@@ -142,7 +142,7 @@
cons 1 [ 2, 3 ] =>
[ 1, 2, 3 ]
```
- "%m
+ "%
= fun x l => [x] @ l,
reverse : forall a. Array a -> Array a
diff --git a/stdlib/array.ncl b/stdlib/array.ncl
--- a/stdlib/array.ncl
+++ b/stdlib/array.ncl
@@ -154,7 +154,7 @@
reverse [ 1, 2, 3 ] =>
[ 3, 2, 1 ]
```
- "%m
+ "%
= fun l => foldl (fun acc e => [e] @ acc) [] l,
filter : forall a. (a -> Bool) -> Array a -> Array a
diff --git a/stdlib/array.ncl b/stdlib/array.ncl
--- a/stdlib/array.ncl
+++ b/stdlib/array.ncl
@@ -166,7 +166,7 @@
filter (fun x => x <= 3) [ 4, 3, 2, 5, 1 ] =>
[ 3, 2, 1 ]
```
- "%m
+ "%
= fun pred l => foldl (fun acc x => if pred x then acc @ [x] else acc) [] l,
flatten : forall a. Array (Array a) -> Array a
diff --git a/stdlib/array.ncl b/stdlib/array.ncl
--- a/stdlib/array.ncl
+++ b/stdlib/array.ncl
@@ -178,7 +178,7 @@
flatten [[1, 2], [3, 4]] =>
[1, 2, 3, 4]
```
- "%m
+ "%
= fun l => fold (fun l acc => l @ acc) [] l,
all : forall a. (a -> Bool) -> Array a -> Bool
diff --git a/stdlib/array.ncl b/stdlib/array.ncl
--- a/stdlib/array.ncl
+++ b/stdlib/array.ncl
@@ -192,7 +192,7 @@
all (fun x => x < 3) [ 1, 2 3 ] =>
false
```
- "%m
+ "%
= fun pred l => fold (fun x acc => if pred x then acc else false) true l,
any : forall a. (a -> Bool) -> Array a -> Bool
diff --git a/stdlib/array.ncl b/stdlib/array.ncl
--- a/stdlib/array.ncl
+++ b/stdlib/array.ncl
@@ -206,7 +206,7 @@
any (fun x => x < 3) [ 5, 6, 7, 8 ] =>
false
```
- "%m
+ "%
= fun pred l => fold (fun x acc => if pred x then true else acc) false l,
elem : Dyn -> Array Dyn -> Bool
diff --git a/stdlib/array.ncl b/stdlib/array.ncl
--- a/stdlib/array.ncl
+++ b/stdlib/array.ncl
@@ -218,7 +218,7 @@
elem 3 [ 1, 2, 3, 4, 5 ] =>
true
```
- "%m
+ "%
= fun elt => any (fun x => x == elt),
partition : forall a. (a -> Bool) -> Array a -> {right: Array a, wrong: Array a}
diff --git a/stdlib/array.ncl b/stdlib/array.ncl
--- a/stdlib/array.ncl
+++ b/stdlib/array.ncl
@@ -231,7 +231,7 @@
partition (fun x => x < 5) [ 2, 4, 5, 3, 7, 8, 6 ] =>
{ right = [ 3, 4, 2 ], wrong = [ 6, 8, 7, 5 ] }
```
- "%m
+ "%
= fun pred l =>
let aux = fun acc x =>
if (pred x) then
diff --git a/stdlib/array.ncl b/stdlib/array.ncl
--- a/stdlib/array.ncl
+++ b/stdlib/array.ncl
@@ -251,7 +251,7 @@
generate function.id 4 =>
[ 0, 1, 2, 3 ]
```
- "%m
+ "%
= fun f n => %generate% n f,
sort | forall a. (a -> a -> [| `Lesser, `Equal, `Greater |]) -> Array a -> Array a
diff --git a/stdlib/array.ncl b/stdlib/array.ncl
--- a/stdlib/array.ncl
+++ b/stdlib/array.ncl
@@ -263,7 +263,7 @@
sort (fun x y => if x < y then `Lesser else if (x == y) then `Equal else `Greater) [ 4, 5, 1, 2 ] =>
[ 1, 2, 4, 5 ]
```
- "%m
+ "%
= fun cmp l =>
let first = %head% l in
let parts = partition (fun x => (cmp x first == `Lesser)) (%tail% l) in
diff --git a/stdlib/builtin.ncl b/stdlib/builtin.ncl
--- a/stdlib/builtin.ncl
+++ b/stdlib/builtin.ncl
@@ -11,7 +11,7 @@
is_num "Hello, World!" =>
false
```
- "%m
+ "%
= fun x => %typeof% x == `Num,
is_bool : Dyn -> Bool
diff --git a/stdlib/builtin.ncl b/stdlib/builtin.ncl
--- a/stdlib/builtin.ncl
+++ b/stdlib/builtin.ncl
@@ -25,7 +25,7 @@
is_bool 42 =>
false
```
- "%m
+ "%
= fun x => %typeof% x == `Bool,
is_str : Dyn -> Bool
diff --git a/stdlib/builtin.ncl b/stdlib/builtin.ncl
--- a/stdlib/builtin.ncl
+++ b/stdlib/builtin.ncl
@@ -39,7 +39,7 @@
is_str "Hello, World!" =>
true
```
- "%m
+ "%
= fun x => %typeof% x == `Str,
is_enum : Dyn -> Bool
diff --git a/stdlib/builtin.ncl b/stdlib/builtin.ncl
--- a/stdlib/builtin.ncl
+++ b/stdlib/builtin.ncl
@@ -53,7 +53,7 @@
is_enum `false =>
true
```
- "%m
+ "%
= fun x => %typeof% x == `Enum,
diff --git a/stdlib/builtin.ncl b/stdlib/builtin.ncl
--- a/stdlib/builtin.ncl
+++ b/stdlib/builtin.ncl
@@ -68,7 +68,7 @@
is_fun 42 =>
false
```
- "%m
+ "%
= fun x => %typeof% x == `Fun,
is_array : Dyn -> Bool
diff --git a/stdlib/builtin.ncl b/stdlib/builtin.ncl
--- a/stdlib/builtin.ncl
+++ b/stdlib/builtin.ncl
@@ -82,7 +82,7 @@
is_array 42 =>
false
```
- "%m
+ "%
= fun x => %typeof% x == `Array,
is_record : Dyn -> Bool
diff --git a/stdlib/builtin.ncl b/stdlib/builtin.ncl
--- a/stdlib/builtin.ncl
+++ b/stdlib/builtin.ncl
@@ -96,7 +96,7 @@
is_record { hello = "Hello", world = "World" } =>
true
```
- "%m
+ "%
= fun x => %typeof% x == `Record,
typeof : Dyn -> [|
diff --git a/stdlib/builtin.ncl b/stdlib/builtin.ncl
--- a/stdlib/builtin.ncl
+++ b/stdlib/builtin.ncl
@@ -120,7 +120,7 @@
typeof (fun x => x) =>
`Fun
```
- "%m
+ "%
= fun x => %typeof% x,
seq : forall a. Dyn -> a -> a
diff --git a/stdlib/builtin.ncl b/stdlib/builtin.ncl
--- a/stdlib/builtin.ncl
+++ b/stdlib/builtin.ncl
@@ -136,7 +136,7 @@
seq { tooFar = 42 / 0 } 37 =>
37
```
- "%m
+ "%
= fun x y => %seq% x y,
deep_seq : forall a. Dyn -> a -> a
diff --git a/stdlib/builtin.ncl b/stdlib/builtin.ncl
--- a/stdlib/builtin.ncl
+++ b/stdlib/builtin.ncl
@@ -152,7 +152,7 @@
deep_seq { tooFar = 42 / 0 } 37 =>
error
```
- "%m
+ "%
= fun x y => %deep_seq% x y,
hash : [| `Md5, `Sha1, `Sha256, `Sha512 |] -> Str -> Str
diff --git a/stdlib/builtin.ncl b/stdlib/builtin.ncl
--- a/stdlib/builtin.ncl
+++ b/stdlib/builtin.ncl
@@ -164,7 +164,7 @@
hash `Md5 "hunter2" =>
"2ab96390c7dbe3439de74d0c9b0b1767"
```
- "%m
+ "%
= fun type s => %hash% type s,
serialize : [| `Json, `Toml, `Yaml |] -> Dyn -> Str
diff --git a/stdlib/builtin.ncl b/stdlib/builtin.ncl
--- a/stdlib/builtin.ncl
+++ b/stdlib/builtin.ncl
@@ -179,7 +179,7 @@
"world": "World"
}"
```
- "%m
+ "%
= fun format x => %serialize% format (%force% x),
deserialize : [| `Json, `Toml, `Yaml |] -> Str -> Dyn
diff --git a/stdlib/builtin.ncl b/stdlib/builtin.ncl
--- a/stdlib/builtin.ncl
+++ b/stdlib/builtin.ncl
@@ -191,7 +191,7 @@
deserialize `Json "{ \"hello\": \"Hello\", \"world\": \"World\" }"
{ hello = "Hello", world = "World" }
```
- "%m
+ "%
= fun format x => %deserialize% format x,
to_str | string.Stringable -> Str
diff --git a/stdlib/builtin.ncl b/stdlib/builtin.ncl
--- a/stdlib/builtin.ncl
+++ b/stdlib/builtin.ncl
@@ -207,7 +207,7 @@
"Foo"
from null =>
"null"
- "%m
+ "%
= fun x => %to_str% x,
},
diff --git a/stdlib/contract.ncl b/stdlib/contract.ncl
--- a/stdlib/contract.ncl
+++ b/stdlib/contract.ncl
@@ -17,7 +17,7 @@
if value == 0 then value
else contract.blame label
```
- "%m
+ "%
= fun l => %blame% l,
blame_with
diff --git a/stdlib/contract.ncl b/stdlib/contract.ncl
--- a/stdlib/contract.ncl
+++ b/stdlib/contract.ncl
@@ -38,7 +38,7 @@
else contract.blame_with "Not zero" label in
0 | IsZero
```
- "%m
+ "%
= fun msg l => %blame% (%tag% msg l),
from_predicate
diff --git a/stdlib/contract.ncl b/stdlib/contract.ncl
--- a/stdlib/contract.ncl
+++ b/stdlib/contract.ncl
@@ -53,7 +53,7 @@
let IsZero = contract.from_predicate (fun x => x == 0) in
0 | IsZero
```
- "%m
+ "%
= fun pred l v => if pred v then v else %blame% l,
tag
diff --git a/stdlib/contract.ncl b/stdlib/contract.ncl
--- a/stdlib/contract.ncl
+++ b/stdlib/contract.ncl
@@ -76,7 +76,7 @@
value in
5 | Contract
```
- "%m
+ "%
= fun msg l => %tag% msg l,
apply
diff --git a/stdlib/contract.ncl b/stdlib/contract.ncl
--- a/stdlib/contract.ncl
+++ b/stdlib/contract.ncl
@@ -101,7 +101,7 @@
let Contract = Nullable {foo | Num} in
({foo = 1} | Contract)
```
- "%m
+ "%
= fun contract label value => %assume% contract label value,
},
}
diff --git a/stdlib/num.ncl b/stdlib/num.ncl
--- a/stdlib/num.ncl
+++ b/stdlib/num.ncl
@@ -11,7 +11,7 @@
(42 | Int) =>
42
```
- "%m
+ "%
= fun label value =>
if %typeof% value == `Num then
if value % 1 == 0 then
diff --git a/stdlib/num.ncl b/stdlib/num.ncl
--- a/stdlib/num.ncl
+++ b/stdlib/num.ncl
@@ -34,7 +34,7 @@
(-4 | Nat) =>
error
```
- "%m
+ "%
= fun label value =>
if %typeof% value == `Num then
if value % 1 == 0 && value >= 0 then
diff --git a/stdlib/num.ncl b/stdlib/num.ncl
--- a/stdlib/num.ncl
+++ b/stdlib/num.ncl
@@ -57,7 +57,7 @@
(-4 | PosNat) =>
error
```
- "%m
+ "%
= fun label value =>
if %typeof% value == `Num then
if value % 1 == 0 && value > 0 then
diff --git a/stdlib/num.ncl b/stdlib/num.ncl
--- a/stdlib/num.ncl
+++ b/stdlib/num.ncl
@@ -78,7 +78,7 @@
(0.0 | NonZero) =>
error
```
- "%m
+ "%
= fun label value =>
if %typeof% value == `Num then
if value != 0 then
diff --git a/stdlib/num.ncl b/stdlib/num.ncl
--- a/stdlib/num.ncl
+++ b/stdlib/num.ncl
@@ -99,7 +99,7 @@
is_int 1.5 =>
false
```
- "%m
+ "%
= fun x => x % 1 == 0,
min : Num -> Num -> Num
diff --git a/stdlib/num.ncl b/stdlib/num.ncl
--- a/stdlib/num.ncl
+++ b/stdlib/num.ncl
@@ -111,7 +111,7 @@
min (-1337) 42 =>
-1337
```
- "%m
+ "%
= fun x y => if x <= y then x else y,
max : Num -> Num -> Num
diff --git a/stdlib/num.ncl b/stdlib/num.ncl
--- a/stdlib/num.ncl
+++ b/stdlib/num.ncl
@@ -123,7 +123,7 @@
max (-1337) 42 =>
42
```
- "%m
+ "%
= fun x y => if x >= y then x else y,
floor : Num -> Num
diff --git a/stdlib/num.ncl b/stdlib/num.ncl
--- a/stdlib/num.ncl
+++ b/stdlib/num.ncl
@@ -135,7 +135,7 @@
floor 42.5 =>
42
```
- "%m
+ "%
= fun x =>
if x >= 0
then x - (x % 1)
diff --git a/stdlib/num.ncl b/stdlib/num.ncl
--- a/stdlib/num.ncl
+++ b/stdlib/num.ncl
@@ -152,7 +152,7 @@
abs 42 =>
42
```
- "%m
+ "%
= fun x => if x < 0 then -x else x,
fract : Num -> Num
diff --git a/stdlib/num.ncl b/stdlib/num.ncl
--- a/stdlib/num.ncl
+++ b/stdlib/num.ncl
@@ -166,7 +166,7 @@
fract 42 =>
0
```
- "%m
+ "%
= fun x => x % 1,
trunc : Num -> Num
diff --git a/stdlib/num.ncl b/stdlib/num.ncl
--- a/stdlib/num.ncl
+++ b/stdlib/num.ncl
@@ -180,7 +180,7 @@
trunc 42.5 =>
42
```
- "%m
+ "%
= fun x => x - (x % 1),
pow : Num -> Num -> Num
diff --git a/stdlib/num.ncl b/stdlib/num.ncl
--- a/stdlib/num.ncl
+++ b/stdlib/num.ncl
@@ -192,7 +192,7 @@
pow 2 8 =>
256
```
- "%m
+ "%
= fun x n => %pow% x n,
}
}
diff --git a/stdlib/record.ncl b/stdlib/record.ncl
--- a/stdlib/record.ncl
+++ b/stdlib/record.ncl
@@ -12,7 +12,7 @@
map (fun s x => x + 1) { hello = 1, world = 2 } =>
{ hello = 2, world = 3 }
```
- "%m
+ "%
= fun f r => %record_map% r f,
fields | { ; Dyn} -> Array Str
diff --git a/stdlib/record.ncl b/stdlib/record.ncl
--- a/stdlib/record.ncl
+++ b/stdlib/record.ncl
@@ -23,7 +23,7 @@
fields { one = 1, two = 2 } =>
[ "one", "two" ]
```
- "%m
+ "%
= fun r => %fields% r,
values | { ; Dyn} -> Array Dyn
diff --git a/stdlib/record.ncl b/stdlib/record.ncl
--- a/stdlib/record.ncl
+++ b/stdlib/record.ncl
@@ -34,7 +34,7 @@
values { one = 1, world = "world" }
[ 1, "world" ]
```
- "%m
+ "%
= fun r => %values% r,
has_field : forall a. Str -> {_ : a} -> Bool
diff --git a/stdlib/record.ncl b/stdlib/record.ncl
--- a/stdlib/record.ncl
+++ b/stdlib/record.ncl
@@ -47,7 +47,7 @@
has_field "one" { one = 1, two = 2 } =>
true
```
- "%m
+ "%
= fun field r => %has_field% field r,
insert : forall a. Str -> a -> {_: a} -> {_: a}
diff --git a/stdlib/record.ncl b/stdlib/record.ncl
--- a/stdlib/record.ncl
+++ b/stdlib/record.ncl
@@ -64,7 +64,7 @@
|> insert "length" 10*1000 =>
{"file.txt" = "data/text", "length" = 10000}
```
- "%%m
+ "%%
= fun field content r => %record_insert% field r content,
remove : forall a. Str -> {_: a} -> {_: a}
diff --git a/stdlib/record.ncl b/stdlib/record.ncl
--- a/stdlib/record.ncl
+++ b/stdlib/record.ncl
@@ -76,7 +76,7 @@
remove "foo" foo { foo = "foo", bar = "bar" } =>
{ bar = "bar }
```
- "%m
+ "%
= fun field r => %record_remove% field r,
update | forall a. Str -> a -> {_: a} -> {_: a}
diff --git a/stdlib/record.ncl b/stdlib/record.ncl
--- a/stdlib/record.ncl
+++ b/stdlib/record.ncl
@@ -100,7 +100,7 @@
update "bar" 1 {foo = bar + 1, bar | default = 0 } =>
{ foo = 1, bar = 1 }
```
- "%m
+ "%
= fun field content r =>
let r = if %has_field% field r then
%record_remove% field r
diff --git a/stdlib/string.ncl b/stdlib/string.ncl
--- a/stdlib/string.ncl
+++ b/stdlib/string.ncl
@@ -14,7 +14,7 @@
(true | BoolLiteral) =>
error
```
- "%m
+ "%
= fun l s =>
if %typeof% s == `Str then
if s == "true" || s == "True" then
diff --git a/stdlib/string.ncl b/stdlib/string.ncl
--- a/stdlib/string.ncl
+++ b/stdlib/string.ncl
@@ -39,8 +39,8 @@
(42 | NumLiteral) =>
error
```
- "%m
- = let pattern = m%"^[+-]?(\d+(\.\d*)?(e[+-]?\d+)?|\.\d+(e[+-]?\d+)?)$"%m in
+ "%
+ = let pattern = m%"^[+-]?(\d+(\.\d*)?(e[+-]?\d+)?|\.\d+(e[+-]?\d+)?)$"% in
let is_num_literal = %str_is_match% pattern in
fun l s =>
if %typeof% s == `Str then
diff --git a/stdlib/string.ncl b/stdlib/string.ncl
--- a/stdlib/string.ncl
+++ b/stdlib/string.ncl
@@ -66,7 +66,7 @@
(1 | CharLiteral) =>
error
```
- "%m
+ "%
= fun l s =>
if %typeof% s == `Str then
if length s == 1 then
diff --git a/stdlib/string.ncl b/stdlib/string.ncl
--- a/stdlib/string.ncl
+++ b/stdlib/string.ncl
@@ -89,7 +89,7 @@
("tag" | EnumTag) =>
error
```
- "%m
+ "%
= contract.from_predicate builtin.is_enum,
Stringable
diff --git a/stdlib/string.ncl b/stdlib/string.ncl
--- a/stdlib/string.ncl
+++ b/stdlib/string.ncl
@@ -114,7 +114,7 @@
({foo = "baz"} | Stringable) =>
error
```
- "%m
+ "%
= contract.from_predicate (fun value =>
let type = builtin.typeof value in
value == null
diff --git a/stdlib/string.ncl b/stdlib/string.ncl
--- a/stdlib/string.ncl
+++ b/stdlib/string.ncl
@@ -136,7 +136,7 @@
(42 | NonEmpty) =>
error
```
- "%m
+ "%
= fun l s =>
if %typeof% s == `Str then
if %str_length% s > 0 then
diff --git a/stdlib/string.ncl b/stdlib/string.ncl
--- a/stdlib/string.ncl
+++ b/stdlib/string.ncl
@@ -155,7 +155,7 @@
join ", " [ "Hello", "World!" ] =>
"Hello, World!"
```
- "%m
+ "%
= fun sep l =>
if %length% l == 0 then
""
diff --git a/stdlib/string.ncl b/stdlib/string.ncl
--- a/stdlib/string.ncl
+++ b/stdlib/string.ncl
@@ -173,7 +173,7 @@
split "." "1,2,3" =>
[ "1,2,3" ]
```
- "%m
+ "%
= fun sep s => %str_split% s sep,
trim : Str -> Str
diff --git a/stdlib/string.ncl b/stdlib/string.ncl
--- a/stdlib/string.ncl
+++ b/stdlib/string.ncl
@@ -187,7 +187,7 @@
trim "1 2 3 " =>
"1 2 3"
```
- "%m
+ "%
= fun s => %str_trim% s,
chars : Str -> Array Str
diff --git a/stdlib/string.ncl b/stdlib/string.ncl
--- a/stdlib/string.ncl
+++ b/stdlib/string.ncl
@@ -199,11 +199,11 @@
chars "Hello" =>
[ "H", "e", "l", "l", "o" ]
```
- "%m
+ "%
= fun s => %str_chars% s,
code | CharLiteral -> Num
- | doc m%"
+ | doc m%%"
Results in the ascii code of the given character.
For example:
diff --git a/stdlib/string.ncl b/stdlib/string.ncl
--- a/stdlib/string.ncl
+++ b/stdlib/string.ncl
@@ -215,11 +215,11 @@
code "å" =>
error
```
- "%m
+ "%%
= fun s => %char_code% s,
from_code | Num -> CharLiteral
- | doc m%"
+ | doc m%%"
Results in the character for a given ascii code. Any number outside the ascii range results in an error.
For example:
diff --git a/stdlib/string.ncl b/stdlib/string.ncl
--- a/stdlib/string.ncl
+++ b/stdlib/string.ncl
@@ -231,7 +231,7 @@
from_code 128 =>
error
```
- "%m
+ "%%
= fun s => %char_from_code% s,
uppercase : Str -> Str
diff --git a/stdlib/string.ncl b/stdlib/string.ncl
--- a/stdlib/string.ncl
+++ b/stdlib/string.ncl
@@ -248,7 +248,7 @@
uppercase "." =>
"."
```
- "%m
+ "%
= fun s => %str_uppercase% s,
lowercase : Str -> Str
diff --git a/stdlib/string.ncl b/stdlib/string.ncl
--- a/stdlib/string.ncl
+++ b/stdlib/string.ncl
@@ -265,7 +265,7 @@
lowercase "." =>
"."
```
- "%m
+ "%
= fun s => %str_lowercase% s,
contains: Str -> Str -> Bool
diff --git a/stdlib/string.ncl b/stdlib/string.ncl
--- a/stdlib/string.ncl
+++ b/stdlib/string.ncl
@@ -281,7 +281,7 @@
contains "ghj" "abcdef" =>
false
```
- "%m
+ "%
= fun subs s => %str_contains% s subs,
replace: Str -> Str -> Str -> Str
diff --git a/stdlib/string.ncl b/stdlib/string.ncl
--- a/stdlib/string.ncl
+++ b/stdlib/string.ncl
@@ -295,7 +295,7 @@
replace "" "A" "abcdef" =>
"AaAbAcAdAeAfA"
```
- "%m
+ "%
= fun pattern replace s =>
%str_replace% s pattern replace,
diff --git a/stdlib/string.ncl b/stdlib/string.ncl
--- a/stdlib/string.ncl
+++ b/stdlib/string.ncl
@@ -310,7 +310,7 @@
replace_regex "\\d+" "\"a\" is not" "This 37 is a number." =>
"This \"a\" is not a number."
```
- "%m
+ "%
= fun pattern replace s =>
%str_replace_regex% s pattern replace,
diff --git a/stdlib/string.ncl b/stdlib/string.ncl
--- a/stdlib/string.ncl
+++ b/stdlib/string.ncl
@@ -346,7 +346,7 @@
["0", "42", "0.5"] |> array.all is_number' =>
true
```
- "%m
+ "%
= fun regex => %str_is_match% regex,
match : Str -> Str -> {match: Str, index: Num, groups: Array Str}
diff --git a/stdlib/string.ncl b/stdlib/string.ncl
--- a/stdlib/string.ncl
+++ b/stdlib/string.ncl
@@ -365,7 +365,7 @@
Note that this function may perform better by sharing its partial application between multiple calls,
because in this case the underlying regular expression will only be compiled once (See the documentation
of `string.is_match` for more details).
- "%m
+ "%
= fun regex => %str_match% regex,
length : Str -> Num
diff --git a/stdlib/string.ncl b/stdlib/string.ncl
--- a/stdlib/string.ncl
+++ b/stdlib/string.ncl
@@ -379,7 +379,7 @@
length "hi" =>
2
```
- "%m
+ "%
= fun s => %str_length% s,
substring: Num -> Num -> Str -> Str
diff --git a/stdlib/string.ncl b/stdlib/string.ncl
--- a/stdlib/string.ncl
+++ b/stdlib/string.ncl
@@ -395,7 +395,7 @@
substring (-3) 4 "abcdef" =>
error
```
- "%m
+ "%
= fun start end s => %str_substr% s start end,
from | Stringable -> Str
diff --git a/stdlib/string.ncl b/stdlib/string.ncl
--- a/stdlib/string.ncl
+++ b/stdlib/string.ncl
@@ -411,7 +411,7 @@
"Foo"
from null =>
"null"
- "%m
+ "%
= fun x => %to_str% x,
from_num | Num -> Str
diff --git a/stdlib/string.ncl b/stdlib/string.ncl
--- a/stdlib/string.ncl
+++ b/stdlib/string.ncl
@@ -423,7 +423,7 @@
from_num 42 =>
"42"
```
- "%m
+ "%
= from,
# from_enum | < | Dyn> -> Str = fun tag => %to_str% tag,
diff --git a/stdlib/string.ncl b/stdlib/string.ncl
--- a/stdlib/string.ncl
+++ b/stdlib/string.ncl
@@ -436,7 +436,7 @@
from_enum `MyEnum =>
"MyEnum"
```
- "%m
+ "%
= from,
from_bool | Bool -> Str
diff --git a/stdlib/string.ncl b/stdlib/string.ncl
--- a/stdlib/string.ncl
+++ b/stdlib/string.ncl
@@ -448,7 +448,7 @@
from_bool true =>
"true"
```
- "%m
+ "%
= from,
to_num | NumLiteral -> Num
diff --git a/stdlib/string.ncl b/stdlib/string.ncl
--- a/stdlib/string.ncl
+++ b/stdlib/string.ncl
@@ -460,7 +460,7 @@
to_num "123" =>
123
```
- "%m
+ "%
= fun s => %num_from_str% s,
to_bool | BoolLiteral -> Bool
diff --git a/stdlib/string.ncl b/stdlib/string.ncl
--- a/stdlib/string.ncl
+++ b/stdlib/string.ncl
@@ -476,7 +476,7 @@
to_bool "false" =>
false
```
- "%m
+ "%
= fun s => s == "true",
# to_enum | Str -> < | Dyn> = fun s => %enum_from_str% s,
diff --git a/stdlib/string.ncl b/stdlib/string.ncl
--- a/stdlib/string.ncl
+++ b/stdlib/string.ncl
@@ -489,7 +489,7 @@
to_enum "Hello" =>
`Hello
```
- "%m
+ "%
= fun s => %enum_from_str% s,
}
}
| diff --git a/benches/mantis/deploy.ncl b/benches/mantis/deploy.ncl
--- a/benches/mantis/deploy.ncl
+++ b/benches/mantis/deploy.ncl
@@ -51,7 +51,7 @@ fun vars =>
mantis.blockchains.testnet-internal-nomad.bootstrap-nodes = [
%{string.join ",\n" bootstrapNodes."%{vars.namespace}"})
]
- "%m,
+ "%,
logLevel | default = "INFO",
loggers = defaultLoggers,
job = vars.job,
diff --git a/src/parser/tests.rs b/src/parser/tests.rs
--- a/src/parser/tests.rs
+++ b/src/parser/tests.rs
@@ -197,7 +197,7 @@ fn enum_terms() {
("whitespace between backtick & identifier", "` test"),
("invalid identifier", "`$s"),
("empty raw identifier", "`"),
- ("multiline string", "`m%\"words\"%m"),
+ ("multiline string", "`m%\"words\"%"),
("interpolation", "`\"%{x}\""),
];
diff --git a/src/parser/tests.rs b/src/parser/tests.rs
--- a/src/parser/tests.rs
+++ b/src/parser/tests.rs
@@ -337,9 +337,9 @@ fn string_lexing() {
);
assert_eq!(
- lex_without_pos(r##"m%""%"%m"##),
+ lex_without_pos(r#"m%%""%"%%"#),
Ok(vec![
- Token::Normal(NormalToken::MultiStringStart(3)),
+ Token::Normal(NormalToken::MultiStringStart(4)),
Token::MultiStr(MultiStringToken::Literal("\"%")),
Token::MultiStr(MultiStringToken::End),
])
diff --git a/src/parser/tests.rs b/src/parser/tests.rs
--- a/src/parser/tests.rs
+++ b/src/parser/tests.rs
@@ -398,19 +398,19 @@ fn ascii_escape() {
assert_eq!(parse_without_pos("\"\\x08\""), mk_single_chunk("\x08"));
assert_eq!(parse_without_pos("\"\\x7F\""), mk_single_chunk("\x7F"));
- assert_eq!(parse_without_pos("m%\"\\x[f\"%m"), mk_single_chunk("\\x[f"));
- assert_eq!(parse_without_pos("m%\"\\x0\"%m"), mk_single_chunk("\\x0"));
- assert_eq!(parse_without_pos("m%\"\\x0z\"%m"), mk_single_chunk("\\x0z"));
- assert_eq!(parse_without_pos("m%\"\\x00\"%m"), mk_single_chunk("\\x00"));
- assert_eq!(parse_without_pos("m%\"\\x08\"%m"), mk_single_chunk("\\x08"));
- assert_eq!(parse_without_pos("m%\"\\x7F\"%m"), mk_single_chunk("\\x7F"));
+ assert_eq!(parse_without_pos("m%\"\\x[f\"%"), mk_single_chunk("\\x[f"));
+ assert_eq!(parse_without_pos("m%\"\\x0\"%"), mk_single_chunk("\\x0"));
+ assert_eq!(parse_without_pos("m%\"\\x0z\"%"), mk_single_chunk("\\x0z"));
+ assert_eq!(parse_without_pos("m%\"\\x00\"%"), mk_single_chunk("\\x00"));
+ assert_eq!(parse_without_pos("m%\"\\x08\"%"), mk_single_chunk("\\x08"));
+ assert_eq!(parse_without_pos("m%\"\\x7F\"%"), mk_single_chunk("\\x7F"));
}
/// Regression test for [#230](https://github.com/tweag/nickel/issues/230).
#[test]
fn multiline_str_escape() {
assert_eq!(
- parse_without_pos(r##"m%"%Hel%%lo%%%"%m"##),
+ parse_without_pos(r##"m%"%Hel%%lo%%%"%"##),
mk_single_chunk("%Hel%%lo%%%"),
);
}
diff --git a/tests/integration/pass/strings.ncl b/tests/integration/pass/strings.ncl
--- a/tests/integration/pass/strings.ncl
+++ b/tests/integration/pass/strings.ncl
@@ -12,17 +12,17 @@ let {check, ..} = import "testlib.ncl" in
== "Hello, world! Welcome in the world-universe",
# regression test for issue #361 (https://github.com/tweag/nickel/issues/361)
- m%""%{"foo"}""%m == "\"foo\"",
- m%"""%m == "\"",
- m%""%"%"%"%m == "\"%\"%\"%",
+ m%""%{"foo"}""% == "\"foo\"",
+ m%"""% == "\"",
# regression test for issue #596 (https://github.com/tweag/nickel/issues/596)
- let s = "Hello" in m%%""%%{s}" World"%%m == "\"Hello\" World",
- let s = "Hello" in m%%""%%%{s}" World"%%m == "\"%Hello\" World",
- m%%""%%s"%%m == "\"%%s",
+ let s = "Hello" in m%%""%%{s}" World"%% == "\"Hello\" World",
+ let s = "Hello" in m%%""%%%{s}" World"%% == "\"%Hello\" World",
+ m%"%s"% == "%s",
+ m%%"%%s"%% == "%%s",
# regression test for issue #659 (https://github.com/tweag/nickel/issues/659)
- let b = "x" in m%"a%%{b}c"%m == "a%xc",
- m%"%Hel%%{"1"}lo%%%{"2"}"%m == "%Hel%1lo%%2",
+ let b = "x" in m%"a%%{b}c"% == "a%xc",
+ m%"%Hel%%{"1"}lo%%%{"2"}"% == "%Hel%1lo%%2",
]
|> check
diff --git a/tests/integration/pass/testlib.ncl b/tests/integration/pass/testlib.ncl
--- a/tests/integration/pass/testlib.ncl
+++ b/tests/integration/pass/testlib.ncl
@@ -7,7 +7,7 @@
However, we apply this contract to each test instead because it provides
fine-grained error reporting, pinpointoing the exact expression that failed
in an array of tests, as opposed to the pure boolean solution.
- "%m
+ "%
= fun l x => x || %blame% l,
# We can't use a static type because x | Assert is not of type bool.
# We could use additional contracts to make it work but it's not worth the
diff --git a/tests/integration/pass/testlib.ncl b/tests/integration/pass/testlib.ncl
--- a/tests/integration/pass/testlib.ncl
+++ b/tests/integration/pass/testlib.ncl
@@ -25,6 +25,6 @@
```
But will give more precise error reporting when one test fails.
- "%m
+ "%
= array.foldl (fun acc test => (test | Assert) && acc) true,
}
| Simpler closing delimiters for special strings
**Is your feature request related to a problem? Please describe.**
Currently, the delimiters for multiline strings are `m%"`/`"%m` (or with more consecutive `%`, as long as the number of `%` in the opening and closing delimiters matches).
Repeating the `m` is not very useful (having the `%` is, though, to avoid having to escape `"` inside multiline strings). New team members also sometimes couldn't remember whether the closing delimiter was `"%m` or `"m%`.
This syntax scheme also doesn't scale well to other types of strings: we can imagine having raw strings at some point, using the same scheme `r%"`, but also language-specific strings to enable highlighting in the editor, such as `cpp-lang%"`, as well as other multi-character delimiters as proposed in #948. With several alphanumeric characters in the delimiter, this scheme becomes really clunky.
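For context, a minimal sketch of the pre-change syntax (the `name` binding and string contents are hypothetical, shown only to illustrate the delimiters):

```nickel
# Pre-change syntax: the closing delimiter mirrors the opening `m%"`,
# so each string ends with `"%m` (or `"%%m`, `"%%%m`, ... to match).
let name = "world" in
m%"No escaping needed for "quotes"; %{name} is interpolated."%m
```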
**Describe the solution you'd like**
For multiline strings, and all future special string literals, use the same closing delimiter `"%` (or `"%...%`), dropping the `m`. This is backward-incompatible. We could still allow both `"%m` and `"%` to remedy this, but honestly we're at a stage where we can afford such breaking changes, and this will prove much better in the long term than having to put up with a legacy alternative syntax.
For the record, this is what Rust does with raw strings `r#"..."#`, as well as C++ (where you don't have to repeat the `R` or the `u8` at the end of special strings).
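Under the proposal, the closing delimiter drops the `m`, as exercised by the updated tests in the patch above; a minimal sketch (bindings are hypothetical):

```nickel
# Proposed syntax: close with `"%` (or `"%%`, ... matching the opening run of `%`).
let name = "world" in
[
  m%"Hello, %{name}!"%,          # previously: m%"Hello, %{name}!"%m
  m%%""%%{name}" is quoted"%%,   # doubled `%`: interpolate with `%%{...}`
]
```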
| 2022-12-07T18:58:58 | 0.2 | 2d6197d653ca7f7924c9d93398c37496a013bae3 | [
"parser::tests::ascii_escape",
"parser::tests::multiline_str_escape",
"parser::tests::string_lexing",
"pass::annot_parsing",
"pass::complete",
"pass::importing",
"pass::functions",
"pass::builtins",
"pass::arrays",
"pass::quote_in_indentifier",
"pass::multiple_overrides",
"pass::recursive_let"... | [
"environment::tests::test_clone",
"environment::tests::test_deepness",
"environment::tests::test_env_base",
"environment::tests::test_iter_layer",
"environment::tests::test_iter_elem",
"eval::merge::hashmap::tests::all_center",
"eval::merge::hashmap::tests::all_left",
"eval::merge::hashmap::tests::mix... | [] | [] | 99 |