Dataset columns:
added: string (date), from 2025-04-01 04:05:38 to 2025-04-01 07:14:06
created: timestamp[us] (date), from 2001-10-09 16:19:16 to 2025-01-01 03:51:31
id: string, lengths 4 to 10
metadata: dict
source: string, 2 classes
text: string, lengths 0 to 1.61M
2025-04-01T04:35:24.320391
2020-12-23T01:21:48
773350171
{ "authors": [ "Veykril" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10494", "repo": "rust-analyzer/rust-analyzer", "url": "https://github.com/rust-analyzer/rust-analyzer/pull/7010" }
gharchive/pull-request
Update ungrammar for const block patterns Fixes #6848 Adds const blocks and const block patterns to the AST and parses them. Blocked on https://github.com/rust-analyzer/ungrammar/pull/17/, will merge that PR there once this one gets the OK so I can remove the local ungrammar dependency path and fix the Cargo.lock. bors r=matklad
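For readers unfamiliar with the feature being parsed here, a rough sketch of the syntax follows. This is an illustration, not a test from the PR; const blocks have only ever been usable behind nightly feature gates whose names have shifted over time (the expression form was later stabilized, the pattern form was not), so take the exact attributes as assumptions tied to the era of this PR.

// Nightly-only sketch of const block syntax.
#![feature(inline_const)]
#![feature(inline_const_pat)]

fn main() {
    // A const block expression: evaluated at compile time.
    let x = const { 1 + 2 };

    // A const block pattern in a match arm.
    match x {
        const { 3 } => println!("matched the const block pattern"),
        _ => println!("no match"),
    }
}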
2025-04-01T04:35:24.329072
2024-03-12T05:08:15
2180742059
{ "authors": [ "coveralls", "junderw", "oneforalone", "sanket1729" ], "license": "CC0-1.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10495", "repo": "rust-bitcoin/rust-bitcoin", "url": "https://github.com/rust-bitcoin/rust-bitcoin/pull/2574" }
gharchive/pull-request
add brc20 inscription example A simple brc20 transfer inscription example, as it can also be an example for taproot script-path spending. Pull Request Test Coverage Report for Build <PHONE_NUMBER> Details 0 of 0 changed or added relevant lines in 0 files are covered. No unchanged relevant lines lost coverage. Overall coverage remained the same at 84.043% Totals Change from base Build <PHONE_NUMBER>: 0.0% Covered Lines: 19540 Relevant Lines: 23250 💛 - Coveralls Hello @oneforalone, thanks for the contribution. I don't feel I know enough about BRC-20 to review this PR. While I am not opposed to inscriptions in principle, I don't feel it belongs as a rust-bitcoin example. We also don't have examples for creating lightning transactions, liquid multisig creation, or any other exotic scheme. I would recommend creating an example crate using rust-bitcoin demonstrating the flow for creating inscriptions. I would be happy to direct all inscription-related questions/queries to that repo. Actually, I did push the code to my personal repo for a brc20 demo (https://github.com/oneforalone/brc20-demo.git). I don't feel it belongs as a rust-bitcoin example. We also don't have examples for creating lightning transactions, liquid multisig creation, or any other exotic scheme. Okay, got it, but what if we make this brc20 example a taproot script-path spending example? Okay, got it, but what if we make this brc20 example a taproot script-path spending example? Usually, examples should be minimalist and hyper-focused on the topic they wish to cover. Someone who says "I want to try a script path spend, but I don't know how." and finds this example should not be required to then ask themselves "ok, what is brc20? how does it work?" A script path spend example should ideally just have a single CHECKSIG, or maybe a multisig spend using CHECKSIGADD (since that would be the most likely use case), but even that is stretching it a bit. Okay, thanks for your explanation; I'm closing this PR now.
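As an aside for readers, here is a rough sketch of the kind of hyper-focused script-path example the reviewer describes: a taproot output whose script tree is a single <key> OP_CHECKSIG leaf. It is written against the rust-bitcoin API, but exact module paths and method signatures vary between versions, so treat them as assumptions rather than a verbatim example from the repository.

use bitcoin::opcodes::all::OP_CHECKSIG;
use bitcoin::script::Builder;
use bitcoin::secp256k1::{Secp256k1, XOnlyPublicKey};
use bitcoin::taproot::{TaprootBuilder, TaprootSpendInfo};

// Sketch: build taproot spend info whose script tree is one
// <x-only pubkey> OP_CHECKSIG leaf.
fn single_checksig_taproot(
    internal_key: XOnlyPublicKey,
    leaf_key: XOnlyPublicKey,
) -> TaprootSpendInfo {
    let secp = Secp256k1::verification_only();
    let leaf = Builder::new()
        .push_x_only_key(&leaf_key)
        .push_opcode(OP_CHECKSIG)
        .into_script();
    TaprootBuilder::new()
        .add_leaf(0, leaf)
        .expect("a leaf at depth 0 is always valid")
        .finalize(&secp, internal_key)
        .expect("a single-leaf tree is complete")
}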
2025-04-01T04:35:24.335436
2024-10-31T01:42:32
2625748680
{ "authors": [ "apoelstra", "jamillambert", "tcharding", "yancyribbens" ], "license": "CC0-1.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10496", "repo": "rust-bitcoin/rust-bitcoin", "url": "https://github.com/rust-bitcoin/rust-bitcoin/pull/3539" }
gharchive/pull-request
Improve the amount module Improve the amount module by doing: Patch 1: Add/update from_int_btc and from_int_btc_const functions Patch 2: Add type to debug output for Amount Patch 3: Fix incorrect docs Patch 4: Make docs use correct style and be uniform across the two amount types Patch 5: Fix docs on FromStr Sorry in advance for asking you to review this @apoelstra. I threw up #3541 as well, which is the same as this without the last 5 patches, in case there is any pushback against them. The from_int stuff is quite different and will need review, please. As for the div stuff, I think we can do it as a follow-up? I believe this PR can be reviewed for merge, please; the checked_div_by_weight rename is left for the other issue (#3563). The change to from_int_btc to address an overflow from a number of BTC larger than the max supply seems like an unnecessary change to the API. The change to from_int_btc to address an overflow from a number of BTC larger than the max supply seems like an unnecessary change to the API. I actually prefer this over a panic since it allows downstream to handle such an overflow. Looking over the comments, the changes to the API look like they are worthwhile. Ideally there would be a deprecation cycle to give a clear upgrade path, although I'm not sure of a good way to do that in this case where the name isn't being changed, only the behavior. 3df5aa144a6366f5be53fee09dbd6979eb985cc6 looks good except that the new docs on div have a run-on sentence. (Though maybe it's OK since we have to update those anyway when we split the checked method.) There were no acks so I fixed the run-on sentence. One day I'll learn how to use full stops and commas - don't hold your breath though. Thanks! Will ACK, but will give a couple days for Jamil and Yancy to chime in. Looks good.
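To make the overflow discussion above concrete, here is a minimal standalone sketch (not the actual rust-bitcoin code; the names merely mirror the thread) of the fallible-constructor shape being debated: returning a Result lets downstream handle an out-of-range BTC count instead of panicking.

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct Amount(u64); // satoshis

#[derive(Debug)]
struct OutOfRangeError;

impl Amount {
    const SATS_PER_BTC: u64 = 100_000_000;

    /// Fallible conversion from a whole number of BTC: overflow becomes
    /// an error the caller can handle, rather than a panic.
    fn from_int_btc(btc: u64) -> Result<Amount, OutOfRangeError> {
        btc.checked_mul(Self::SATS_PER_BTC)
            .map(Amount)
            .ok_or(OutOfRangeError)
    }
}

fn main() {
    assert!(Amount::from_int_btc(21_000_000).is_ok());
    assert!(Amount::from_int_btc(u64::MAX).is_err()); // handled, no panic
}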
2025-04-01T04:35:24.368077
2019-08-12T16:23:00
479736086
{ "authors": [ "cramertj", "stbuehler" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10497", "repo": "rust-lang-nursery/futures-rs", "url": "https://github.com/rust-lang-nursery/futures-rs/pull/1797" }
gharchive/pull-request
Fix use-after-free on panic during ArcWake::wake_by_ref Wrap temporary Arc<>s in ManuallyDrop early instead of calling forget() later: that way, even when a panic unwinds, it doesn't drop a refcount it doesn't actually own. It also means wake_by_ref doesn't need an unwind section anymore. The same applies in increase_refcount (although Arc::clone should only abort, not unwind). Nice! Would you mind adding a test for this? Added a test (if you prefer it in a separate commit before/after the fix I can split it, of course). The test fails for me without the fix applied: error: process didn't exit successfully: `[...]/futures/target/debug/deps/arc_wake-e1af0914f1200023` (signal: 11, SIGSEGV: invalid memory reference) Wonderful, thank you!
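A minimal sketch of the pattern the commit message describes (simplified, not the actual futures-rs source): because the reconstructed Arc sits inside ManuallyDrop from the very first moment, a panic that unwinds out of wake_by_ref cannot run a destructor for a reference count the function never owned.

use std::mem::ManuallyDrop;
use std::sync::Arc;

trait ArcWake {
    fn wake_by_ref(arc_self: &Arc<Self>);
}

/// Called through a RawWaker vtable with the Arc's data pointer.
/// Safety: `data` must have come from `Arc::into_raw`.
unsafe fn wake_by_ref_raw<T: ArcWake>(data: *const ()) {
    // Wrap in ManuallyDrop immediately: we only borrow this Arc, so even
    // if T::wake_by_ref panics and unwinds, the strong count is untouched
    // and no unwind guard (forget-on-exit) is needed.
    let arc = ManuallyDrop::new(Arc::<T>::from_raw(data.cast()));
    T::wake_by_ref(&arc);
}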
2025-04-01T04:35:24.369632
2016-03-29T12:52:09
144255541
{ "authors": [ "Kimundi", "SethDusek", "SoniEx2" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10498", "repo": "rust-lang-nursery/lazy-static.rs", "url": "https://github.com/rust-lang-nursery/lazy-static.rs/issues/39" }
gharchive/issue
Mutability support Any way to support DerefMut? Lazy statics behave like regular statics in the sense of only allowing shared, &T access to a value. If you want mutability you need to put your values behind a Mutex, for example:

#[macro_use]
extern crate lazy_static;

use std::sync::Mutex;

lazy_static! {
    static ref X: Mutex<u32> = Mutex::new(0);
}

fn main() {
    *X.lock().unwrap() = 100;
}

Yes, but regular statics do allow mutability. Shouldn't lazy_static also allow mutable statics?
2025-04-01T04:35:24.372875
2017-06-05T07:28:02
233514567
{ "authors": [ "crumblingstatue", "est31" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10499", "repo": "rust-lang-nursery/rustfmt", "url": "https://github.com/rust-lang-nursery/rustfmt/issues/1632" }
gharchive/issue
rustfmt writes to files even if they weren't changed This means that if you, for example, run cargo fmt, it will write to all files in the project, meaning cargo build will have to recompile your project even though nothing changed. I'm not sure what effects this has on incremental compilation. Does it matter what files were changed, or does it work on the semantic data? AFAIK incremental compilation is based on hashing, so there should be no effect on that front. However, incremental compilation will be turned off by default on release builds, so it does have an effect here.
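A minimal sketch of the fix the issue implies (hypothetical, not rustfmt's actual code): compare the formatted output against the file on disk and skip the write when nothing changed, leaving the mtime alone so cargo build sees no modification.

use std::fs;
use std::io;
use std::path::Path;

/// Write `formatted` to `path` only if it differs from the current
/// contents. Returns whether a write happened.
fn write_if_changed(path: &Path, formatted: &str) -> io::Result<bool> {
    let current = fs::read_to_string(path).unwrap_or_default();
    if current == formatted {
        return Ok(false); // no write: mtime untouched, no rebuild triggered
    }
    fs::write(path, formatted)?;
    Ok(true)
}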
2025-04-01T04:35:24.374211
2017-07-05T16:06:18
240700634
{ "authors": [ "topecongiro" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10500", "repo": "rust-lang-nursery/rustfmt", "url": "https://github.com/rust-lang-nursery/rustfmt/pull/1769" }
gharchive/pull-request
Comment vertical alignment This PR implements vertical alignment for comments after elements in list-like structures which use write_list (struct, enum, function call, generics, etc.). Added commits to use a closure instead of a function and to add tests. As you mentioned, the first version had a bug where an aligned item could exceed max_width. The added commit fixes this by considering the width of comments as well as the width of items.
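For illustration only (a made-up snippet, not a test case from the PR), the feature pads trailing comments after list items so they line up in a column:

// Before: each comment starts wherever its item happens to end.
struct ConfigBefore {
    name: String, // display name
    retries: u32, // how many times to retry
    verbose: bool, // enable extra logging
}

// After: trailing comments are vertically aligned, while items whose
// comment would push past max_width are left unaligned.
struct ConfigAfter {
    name: String,  // display name
    retries: u32,  // how many times to retry
    verbose: bool, // enable extra logging
}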
2025-04-01T04:35:24.376178
2016-01-01T23:54:23
124567592
{ "authors": [ "LukasKalbertodt", "brson", "tbu-" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10501", "repo": "rust-lang-nursery/rustup.rs", "url": "https://github.com/rust-lang-nursery/rustup.rs/issues/43" }
gharchive/issue
Description is confusing for people who don't know what multirust is The readme says "Multirust-rs is a reimplementation of multirust in rust", but it doesn't say what multirust is. For someone coming in cold, it doesn't offer any clues as to why they want to be using multirust-rs, or what they might want to use it for. /u/othermike on Reddit And maybe add a "reasons to switch from original multirust" section to README.md. The readme has been rewritten.
2025-04-01T04:35:24.390075
2022-11-01T09:41:51
1431147789
{ "authors": [ "alexjurkiewicz", "ehuss", "yanns" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10502", "repo": "rust-lang/cargo", "url": "https://github.com/rust-lang/cargo/issues/11320" }
gharchive/issue
Since update to Ventura, compilation failing with ld: framework not found Security Problem Since update from macos to Ventura, I cannot compile any cargo projects. The compilation is failing with: ld: framework not found Security Steps $ cargo new --bin hello Created binary (application) `hello` package $ cd hello $ cargo run Compiling hello v0.1.0 (/Users/yannsimon/projects/rust/hello) error: linking with `clang` failed: exit status: 1 | = note: "clang" "-m64" "-arch" "x86_64" "/var/folders/hn/zkgzcdkn49x3xc8ftxf408tr0000gn/T/rustcDXqqBA/symbols.o" "/Users/yannsimon/projects/rust/hello/target/debug/deps/hello-5fcd4c46a90f3099.1ty3db8ccstq54hu.rcgu.o" "/Users/yannsimon/projects/rust/hello/target/debug/deps/hello-5fcd4c46a90f3099.3l0el9rhcus2s8bb.rcgu.o" "/Users/yannsimon/projects/rust/hello/target/debug/deps/hello-5fcd4c46a90f3099.3yc6oulqpabn4tz7.rcgu.o" "/Users/yannsimon/projects/rust/hello/target/debug/deps/hello-5fcd4c46a90f3099.4ct2vhgf5g3b917t.rcgu.o" "/Users/yannsimon/projects/rust/hello/target/debug/deps/hello-5fcd4c46a90f3099.4hxk5lqwm4nr9jeb.rcgu.o" "/Users/yannsimon/projects/rust/hello/target/debug/deps/hello-5fcd4c46a90f3099.bvqrr199b4tl9hn.rcgu.o" "/Users/yannsimon/projects/rust/hello/target/debug/deps/hello-5fcd4c46a90f3099.cksen5lx2kwwd0u.rcgu.o" "/Users/yannsimon/projects/rust/hello/target/debug/deps/hello-5fcd4c46a90f3099.in0wedfw8k7ia79.rcgu.o" "/Users/yannsimon/projects/rust/hello/target/debug/deps/hello-5fcd4c46a90f3099.2y0g7sbr5nf93vr6.rcgu.o" "-L" "/Users/yannsimon/projects/rust/hello/target/debug/deps" "-L" "/Users/yannsimon/.rustup/toolchains/stable-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib" "/Users/yannsimon/.rustup/toolchains/stable-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib/libstd-0f7ee384278ce82b.rlib" "/Users/yannsimon/.rustup/toolchains/stable-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib/libpanic_unwind-6023318e4257fdb3.rlib" "/Users/yannsimon/.rustup/toolchains/stable-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib/libobject-50ed95d28fda9497.rlib" "/Users/yannsimon/.rustup/toolchains/stable-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib/libmemchr-114781e2905bc242.rlib" "/Users/yannsimon/.rustup/toolchains/stable-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib/libaddr2line-841a5df74cbbcf8e.rlib" "/Users/yannsimon/.rustup/toolchains/stable-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib/libgimli-9b35810dd2e8e276.rlib" "/Users/yannsimon/.rustup/toolchains/stable-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib/librustc_demangle-d44decaafa04c51d.rlib" "/Users/yannsimon/.rustup/toolchains/stable-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib/libstd_detect-aa335e35e4a7724c.rlib" "/Users/yannsimon/.rustup/toolchains/stable-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib/libhashbrown-7ac72202be300078.rlib" "/Users/yannsimon/.rustup/toolchains/stable-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib/libminiz_oxide-2930c6f21f36f92f.rlib" "/Users/yannsimon/.rustup/toolchains/stable-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib/libadler-654445a53da668f3.rlib" "/Users/yannsimon/.rustup/toolchains/stable-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib/librustc_std_workspace_alloc-ce034a3eed8d4113.rlib" "/Users/yannsimon/.rustup/toolchains/stable-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib/libunwind-51412ab8efb0f481.rlib" 
"/Users/yannsimon/.rustup/toolchains/stable-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib/libcfg_if-1c20aac4d9e33893.rlib" "/Users/yannsimon/.rustup/toolchains/stable-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib/liblibc-5559092a2ede5191.rlib" "/Users/yannsimon/.rustup/toolchains/stable-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib/liballoc-05250b6a4768a099.rlib" "/Users/yannsimon/.rustup/toolchains/stable-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib/librustc_std_workspace_core-10f98b32877a2067.rlib" "/Users/yannsimon/.rustup/toolchains/stable-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib/libcore-56d27115b82c9961.rlib" "/Users/yannsimon/.rustup/toolchains/stable-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib/libcompiler_builtins-8b28a0a374c38504.rlib" "-lSystem" "-lresolv" "-lc" "-lm" "-liconv" "-L" "/Users/yannsimon/.rustup/toolchains/stable-x86_64-apple-darwin/lib/rustlib/x86_64-apple-darwin/lib" "-o" "/Users/yannsimon/projects/rust/hello/target/debug/deps/hello-5fcd4c46a90f3099" "-Wl,-dead_strip" "-nodefaultlibs" = note: ld: library not found for -lSystem clang-12: error: linker command failed with exit code 1 (use -v to see invocation) error: could not compile `hello` due to previous error Possible Solution(s) No response Notes I've install Xcode, xcodebuild. I've tried to re-install them. I could not fix that issue. Version cargo 1.64.0 (387270bc7 2022-09-16) release: 1.64.0 commit-hash: 387270bc7f446d17869c7f208207c73231d6a252 commit-date: 2022-09-16 host: x86_64-apple-darwin libgit2: 1.4.2 (sys:0.14.2 vendored) libcurl: 7.84.0 (sys:0.4.55+curl-7.83.1 system ssl:(SecureTransport) LibreSSL/3.3.6) os: Mac OS 13.0.0 [64-bit] Sorry you're having that trouble. Can you run a few commands and paste the output here? xcodebuild -version xcode-select -p which clang clang --version To fix the first issue: $ sudo xcode-select -s /Applications/Xcode.app/Contents/Developer $ sudo xcodebuild -license accept $ xcodebuild -version Xcode 14.0.1 Build version 14A400 $ which clang /usr/local/opt/llvm@12/bin/clang $ clang --version Homebrew clang version 12.0.1 This is using clang from homebrew. I would check your PATH to remove it so that the xcode version in /usr/bin is used instead. IIRC, homebrew does not place clang in /usr/local/bin by default. Thanks for the help! ❤️ I've tried the following, but it does not help: $ /usr/bin/clang --version Apple clang version 14.0.0 (clang-14<IP_ADDRESS>) Target: x86_64-apple-darwin22.1.0 Thread model: posix InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin $ export PATH="/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin:$PATH" $ clang --version clang --version Apple clang version 14.0.0 (clang-14<IP_ADDRESS>) Target: x86_64-apple-darwin22.1.0 Thread model: posix InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin I've removed /usr/local/opt/llvm@12/bin from the $PATH and now it's working. Thanks a lot! This can apparently also happen if you have /Library/Developer/CommandLineTools/usr/bin/ in your path
2025-04-01T04:35:24.396330
2023-02-28T01:15:02
1602209018
{ "authors": [ "weihanglo", "zkat" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10503", "repo": "rust-lang/cargo", "url": "https://github.com/rust-lang/cargo/issues/11779" }
gharchive/issue
Workspace dependency features don't resolve correctly with target-dependent features Problem When using [workspace.dependencies] and foo = { workspace = true }, resolver v2 doesn't seem to correctly load only the features for the current target. Everything works fine if the package version is specified directly in the subcrate. For example, given this Cargo.toml: [target.'cfg(not(target_arch = "wasm32"))'.dependencies] foo = { workspace = true } [target.'cfg(target_arch = "wasm32")'.dependencies] foo = { workspace = true, default-features = false } Then if the crate is built with cargo build --target wasm32-unknown-unknown, foo will fail to build because it tries to compile wasm-incompatible code. At the same time, the following works as expected: [target.'cfg(not(target_arch = "wasm32"))'.dependencies] foo = { version = "1.2.3" } [target.'cfg(target_arch = "wasm32")'.dependencies] foo = { version = "1.2.3", default-features = false } Steps Create a workspace with a [workspace.dependencies] field. I only tested this on a non-virtual workspace (one with a "root" crate). Add async-tar-wasm = "0.4.2-wasm.1". Create a subcrate in that workspace with the following: [target.'cfg(not(target_arch = "wasm32"))'.dependencies] async-tar-wasm = { workspace = true } [target.'cfg(target_arch = "wasm32")'.dependencies] async-tar-wasm = { workspace = true, default-features = false } Build with cargo build --target wasm32-unknown-unknown Possible Solution(s) No response Notes No response Version ❯ cargo version --verbose cargo 1.67.1 (8ecd4f20a 2023-01-10) release: 1.67.1 commit-hash: 8ecd4f20a9efb626975ac18a016d480dc7183d9b commit-date: 2023-01-10 host: x86_64-pc-windows-msvc libgit2: 1.5.0 (sys:0.16.0 vendored) libcurl: 7.86.0-DEV (sys:0.4.59+curl-7.86.0 vendored ssl:Schannel) os: Windows 10.0.22621 (Windows 10 Enterprise) [64-bit] Thank you for the issue report. This is the expected behavior, as features are additive. The workspace dependency didn't opt out of default features, so inheriting dependencies cannot turn them off. We made an improvement that will ship in the next version; see the changelog of 1.69. This comment https://github.com/rust-lang/cargo/pull/11409#issuecomment-1374225756 also summarizes each combination very well. Got it! Looks like this is all addressed, then. It was definitely surprising, though!
2025-04-01T04:35:24.406565
2023-05-23T12:35:37
1721994960
{ "authors": [ "Centri3", "ehuss", "lee-orr" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10504", "repo": "rust-lang/cargo", "url": "https://github.com/rust-lang/cargo/issues/12171" }
gharchive/issue
Cargo erroneously strips the hash(?) from a dylib's final artifact Problem When there are one or more dependencies that are dynamically linked but not cdylibs, Cargo will copy their final artifact to the same directory as the binary but remove the hash(?) - even if this is what is imported by the binary. Due to this, it will fail to find the libraries and exit. Steps cargo init Add a dependency to Cargo.toml; it can be any you wish, as long as it exports something that isn't a macro (i.e., it'll be in the final binary). cargo run I'll use cargo vendor here instead of finding a crate on crates.io that is only a dylib, but it should function the same AFAIK. cargo vendor Go to vendor/<name>/Cargo.toml and make it a dylib by adding [lib] crate-type = ["dylib"] If your crate already has [lib], check to make sure this won't break anything. You also may need to remove #![no_std] if the crate specifies it. 6. Make files in .cargo-checksum.json an empty array. We are purposefully editing it. 7. Run with the vendored dependency (and RUSTFLAGS="-Cprefer-dynamic -Crpath"). This should fail, but this is expected because std (which would be dynamically linked here) isn't in target. You'll need to copy it from elsewhere. But here, it will also fail if you add std because the dependency we added to Cargo.toml isn't in target either! Except it is: with the name <name>.dll, NOT <name>-<hash>.dll, which is what the binary searches for. E: Minimal repro: https://github.com/Centri3/aaaaaa Possible Solution(s) Cargo should not strip the hash(?) from the filename. Notes Cargo incorrectly outputs a future error warning that there are colliding filenames; this is Cargo's fault, so it should not do this here, and it should not become an error (at least in this case). Cargo should also copy the std library when it's dynamically linked, to spare the developer any pain. I haven't tested this outside of x86_64-pc-windows-gnu under wine, though running the same program on a W10 machine outputs the same error. Version cargo 1.71.0-nightly (09276c703 2023-05-16) release: 1.71.0-nightly commit-hash: 09276c703a473ab33daaeb94917232e80eefd628 commit-date: 2023-05-16 host: x86_64-unknown-linux-gnu libgit2: 1.6.4 (sys:0.17.1 vendored) libcurl: 8.0.1-DEV (sys:0.4.61+curl-8.0.1 vendored ssl:OpenSSL/1.1.1t) ssl: OpenSSL 1.1.1t 7 Feb 2023 os: Linux Mint 21.1 (vera) [64-bit] Ok, I've actually just tested this on x86_64-unknown-linux-gnu and the same issue arises; I'm not sure why though, since something like bevy_dylib has always worked fine. Maybe this is just an issue with vendored dependencies in particular? Looks like this is an issue somewhere around https://github.com/rust-lang/cargo/blob/5a396277e88f4fb3b7e75a1f30d6f384442ee12a/src/cargo/core/compiler/mod.rs#L577-L580, can somebody confirm this? I think this is a relatively subtle and complex issue, so I don't think just keeping the hash is quite the right thing to do. In general, the story around dylibs in Rust is incomplete, particularly the handling of rpath. It may be that the correct solution is to keep the path if rpath is set in the profile, but I think it would take some digging to understand how things should work, and what the risks are to changing them. so I don't think just keeping the hash is quite the right thing to do In this case, it's the easiest. Specifying extra_filename (which is what Cargo does) adds the hash, and the resulting artifact's import table links to it with that hash.
So this is about the only thing Cargo can reasonably do, other than, I suppose, removing the hash entirely and instead using the crate's version, or just dealing with name conflicts. but I think it would take some digging to understand how things should work, and what the risks are to changing them. An easy escape hatch would be to make it an unstable option. To my knowledge, linking to a dylib currently will always use the same name as the filename of the artifact rustc creates, even if it includes a suffix at the end (as otherwise, how would it import it?). I'm not sure about bevy_dylib and others though; cargo may not pass extra_filename in that case (though using the original filename should still work fine). Alternatively, I was thinking maybe you could specify a directory every dylib would be in, like deps in target. But that would likely be a change in rustc itself and not Cargo. @Centri3 - bevy_dylib works only with cargo run because that sets the environment to search for dependencies in the deps folder, which has the hashed version...
2025-04-01T04:35:24.407850
2019-11-12T17:51:13
521704068
{ "authors": [ "ehuss", "stevenroose" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10505", "repo": "rust-lang/cargo", "url": "https://github.com/rust-lang/cargo/issues/7583" }
gharchive/issue
FR: cargo install --do-not-error-when-already-installed This is really basic, guys. Thanks, this will be fixed once #7560 is merged.
2025-04-01T04:35:24.410253
2020-02-02T17:29:10
558728572
{ "authors": [ "epage", "inodentry" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10506", "repo": "rust-lang/cargo", "url": "https://github.com/rust-lang/cargo/issues/7854" }
gharchive/issue
Feature request: visual indication of proc-macro transitive dependencies It would be useful to have a visual indication of which of your crate's (transitive) dependencies are only used at compile time (by proc-macros) and do not contribute to the code of the compiled binary. This would allow easily judging which dependencies of a project actually contribute to code size. One solution to this problem would be to visually indicate such dependencies with some marker (and maybe also use a different text colour) during the build process and to show a separate number in the total count of dependencies to be built. There might also be other places in the UI where dependency info is indicated where this can be shown. Perhaps crates.io could also benefit from such information, to allow easily judging how much a library might contribute to code bloat (count of non-proc-macro deps) and compile time (all deps). I feel like our build output is already fairly noisy and would be a poor spot for this. cargo tree does some rendering of information but not this exact use case. It'd likely be good for there to be experimentation out of tree to see what we should build directly into cargo for this.
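As a sketch of the out-of-tree experimentation suggested above (hypothetical code, assuming the serde_json crate is available), one can already list which packages in the graph are proc-macros by reading cargo's stable metadata format; marking their transitive, compile-time-only dependencies would build on the same data.

use std::process::Command;

fn main() {
    // Run cargo's stable metadata command and parse its JSON output.
    let out = Command::new("cargo")
        .args(["metadata", "--format-version", "1"])
        .output()
        .expect("failed to run cargo metadata");
    let meta: serde_json::Value =
        serde_json::from_slice(&out.stdout).expect("invalid metadata JSON");

    for pkg in meta["packages"].as_array().into_iter().flatten() {
        // A package is a proc-macro if any of its targets has kind "proc-macro".
        let is_proc_macro = pkg["targets"].as_array().into_iter().flatten().any(|t| {
            t["kind"]
                .as_array()
                .into_iter()
                .flatten()
                .any(|k| k.as_str() == Some("proc-macro"))
        });
        if is_proc_macro {
            println!("compile-time only: {}", pkg["name"].as_str().unwrap_or("?"));
        }
    }
}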
2025-04-01T04:35:24.439932
2020-02-21T19:02:35
569129421
{ "authors": [ "1sra3l", "CleanCut", "MDGSF", "Razican", "Xanewok", "benmarten", "ehuss", "est31", "gz", "jdm", "johanlindfors", "jplatte", "simon-an", "thomwiggers", "zicklag" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10507", "repo": "rust-lang/cargo", "url": "https://github.com/rust-lang/cargo/issues/7914" }
gharchive/issue
Tracking issue for -Z features=itarget Implementation: #7820 Documentation: https://doc.rust-lang.org/nightly/cargo/reference/unstable.html#features Summary -Z features=itarget causes the feature resolver to ignore features for target-specific dependencies for targets that don't match the current compile target. For example:

[dependencies.common]
version = "1.0"
features = ["f1"]

[target.'cfg(windows)'.dependencies.common]
version = "1.0"
features = ["f2"]

When building this example for a non-Windows platform, the f2 feature will not be enabled. Unresolved issues [ ] Update cargo metadata [ ] Does this cause existing projects to fail? Projects can add missing features as a backwards-compatible workaround. However, if it causes problems for too many projects, we may need to find some way to opt in. Does this affect lockfiles too? As in: if this feature is enabled, will a lockfile on Windows look different from a lockfile on Linux? Or is it only affecting behaviour of the actual build? It does not affect Cargo.lock. This seems to work, at least for my scenario:

[target.'cfg(macos)'.dependencies]
sdl2 = { version = "0.32", features = ["bundled"] }

[target.'cfg(windows)'.dependencies]
sdl2 = { version = "0.32", features = ["bundled", "static-link"] }

Building with: cargo +nightly run -Z features=itarget But I'm still a bit confused about the proposed solution. This clearly feels like a bug in the current feature resolver, but this fix proposes that we specify a different resolver? Or is this just a temporary solution while this fix gets merged into the "original" resolver? @johanlindfors it seems to work for me. What platform are you targeting? What did you see that made you think it isn't working? One thing: cfg(macos) is not a valid expression. It should be cfg(target_os = "macos"). Yes, there is a new resolver; it is not a "bug" per se in the current resolver. The dependency resolver still needs to resolve all platforms for a stable dependency graph. These features also need to be resolved for multiple targets independently (host and target). It may be possible to do it with the current resolver, but I think it would require a multi-pass solution, so I don't think there would be any benefit, and a lot of downsides. cfg([some target_os value]) is actually a valid cfg expression in current Rust code as far as I can tell. unix and windows are special cases. You can read more about them here. Those are target families, whereas macos is a target os within the unix family. @johanlindfors Yes, -Z features=itarget is a temporary solution for trying out the new solver while it is still unstable. It will replace the current solver as the default if things go to plan. @ehuss You are right, I was using the cfg(target_os = "macos") syntax, but when copying the code into the comment on GitHub I manually "shortened" it, thinking I was being smart; turns out, as usual, that was not the case!
:) The crater run for this has finished: https://crater-reports.s3.amazonaws.com/pr-69560/index.html There are about 9 regressions, which is a bit more than I was hoping for: https://github.com/FauxFaux/quad-image https://github.com/Leinnan/slavic_castles https://github.com/chrisabruce/libp2p-ios https://github.com/dmweis/lunar_lander https://github.com/plaptov/ant_sim_rs https://github.com/tomaka/2018-rustrush-demo arwa 0.1.0 label_attribute 0.1.1 zrs 0.1.5 I am not able to get this to work: [target.'cfg(macos)'.dependencies.opencv] version = "0.32.0" default-features = false features = ['opencv-4'] [target.'cfg(linux)'.dependencies.opencv] version = "0.32.0" default-features = false features = ['opencv-32'] cargo build -Z features=itarget Anything wrong with the syntax? @benmarten See above. cfg(macos) is not a valid cfg expression. @ehuss Thanks, that works now on my mac and insider docker/linux. [target.'cfg(target_os = "macos")'.dependencies.opencv] version = "0.32.0" default-features = false features = ['opencv-4'] [target.'cfg(target_os = "linux")'.dependencies.opencv] version = "0.32.0" default-features = false features = ['opencv-32'] rustup override set nightly cargo +nightly build -Z features=itarget I'm getting a panic from cargo when enabling the itarget feature and trying to build: ProcessExecutionError: Command line: ['/home/gz/.cargo/bin/xargo', 'build', '--target', 'x86_64-custom', '--color', 'always', '-Zfeatures=itarget'] Exit code: 101 Stderr: | warning: profiles for the non root package will be ignored, specify profiles at the workspace root: | package: /home/gz/workspace/besp/lib/node-replication/Cargo.toml | workspace: /home/gz/workspace/besp/Cargo.toml | thread 'main' panicked at 'features did not find PackageId { name: "libc", version: "0.2.66", source: "registry `https://github.com/rust-lang/crates.io-index`" } false', src/tools/cargo/src/cargo/core/resolver/features.rs:220:17 | stack backtrace: | 0: backtrace::backtrace::libunwind::trace | at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.44/src/backtrace/libunwind.rs:86 | 1: backtrace::backtrace::trace_unsynchronized | at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.44/src/backtrace/mod.rs:66 | 2: std::sys_common::backtrace::_print_fmt | at src/libstd/sys_common/backtrace.rs:78 | 3: <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt | at src/libstd/sys_common/backtrace.rs:59 | 4: core::fmt::write | at src/libcore/fmt/mod.rs:1069 | 5: std::io::Write::write_fmt | at src/libstd/io/mod.rs:1427 | 6: std::sys_common::backtrace::_print | at src/libstd/sys_common/backtrace.rs:62 | 7: std::sys_common::backtrace::print | at src/libstd/sys_common/backtrace.rs:49 | 8: std::panicking::default_hook::{{closure}} | at src/libstd/panicking.rs:198 | 9: std::panicking::default_hook | at src/libstd/panicking.rs:218 | 10: std::panicking::rust_panic_with_hook | at src/libstd/panicking.rs:511 | 11: rust_begin_unwind | at src/libstd/panicking.rs:419 | 12: std::panicking::begin_panic_fmt | at src/libstd/panicking.rs:373 | 13: cargo::core::resolver::features::ResolvedFeatures::activated_features_int | 14: cargo::core::compiler::unit_dependencies::new_unit_dep_with_profile | 15: cargo::core::compiler::unit_dependencies::compute_deps | 16: cargo::core::compiler::unit_dependencies::deps_of | 17: cargo::core::compiler::unit_dependencies::deps_of | 18: cargo::core::compiler::unit_dependencies::deps_of | 19: cargo::core::compiler::unit_dependencies::deps_of | 20: 
cargo::core::compiler::unit_dependencies::deps_of | 21: cargo::core::compiler::unit_dependencies::deps_of | 22: cargo::core::compiler::unit_dependencies::deps_of_roots | 23: cargo::core::compiler::unit_dependencies::build_unit_dependencies | 24: cargo::ops::cargo_compile::compile_ws | 25: cargo::ops::cargo_compile::compile | 26: cargo::commands::build::exec | 27: cargo::cli::main | 28: cargo::main | 29: std::rt::lang_start::{{closure}} | 30: std::rt::lang_start_internal::{{closure}} | at src/libstd/rt.rs:52 | 31: std::panicking::try::do_call | at src/libstd/panicking.rs:331 | 32: std::panicking::try | at src/libstd/panicking.rs:274 | 33: std::panic::catch_unwind | at src/libstd/panic.rs:394 | 34: std::rt::lang_start_internal | at src/libstd/rt.rs:51 | 35: main | 36: __libc_start_main | 37: <unknown> | note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace. There are no target specific dependencies in the particular lib that is being compiled (node-replication) but there are a few libc dependencies in transitive crates that are declared i.e., like this: [target.'cfg(unix)'.dependencies] libc = "0.2" @gz Which version are you using? Is this a project I can look at? If not, can you create a reproduction? Or maybe show the dependency tree and the relevant dependency declarations. Also, it looks like you are using a custom JSON target spec, can you include that, too? @ehuss I put a minimal example here https://github.com/gz/itarget-bug that triggers the panic for me. It seems the culprit is when I try to include the hashbrown dependency. @gz TYVM for the repro! I posted a fix at #8048. As a workaround, you can pass -Zfeatures=itarget,host_dep or -Zfeatures=all (requires recent nightly such as nightly-2020-03-27). Thanks for the fix (and the feature, it simplifies my cargo dependencies quite a lot). I can compile my project now with -Zfeatures=all. This works great for gfx. It removes the need for us to add specific features that must manually be provided based on the platform that you are building on ( see https://github.com/gfx-rs/gfx/pull/3151#discussion_r399609493 ). Not sure if this was mentioned and/or expected but it seems that doc command is not adapted yet. 
For the manifest:

[package]
name = "sequoia-openpgp-default"
version = "0.1.0"
authors = ["Sequoia Developers <EMAIL_ADDRESS>"]
edition = "2018"

[target.'cfg(not(windows))'.dependencies.sequoia-openpgp]
git = "https://gitlab.com/sequoia-pgp/sequoia"
default-features = false
features = ["crypto-nettle"]

[target.'cfg(windows)'.dependencies.sequoia-openpgp]
git = "https://gitlab.com/sequoia-pgp/sequoia"
default-features = false
features = ["crypto-cng"]

I got PS C:\Users\Xanewok\repos\sequoia\openpgp-default> rustc +nightly --version --verbose rustc 1.49.0-nightly (8dae8cdcc 2020-10-12) binary: rustc commit-hash: 8dae8cdcc8fa879cea6a4bbbfa5b32e97be4c306 commit-date: 2020-10-12 host: x86_64-pc-windows-gnu release: 1.49.0-nightly LLVM version: 11.0 PS C:\Users\Xanewok\repos\sequoia\openpgp-default> cargo +nightly --version cargo 1.48.0-nightly (9d1a4863a 2020-10-05) PS C:\Users\Xanewok\repos\sequoia\openpgp-default> cargo +nightly check --no-default-features -Zfeatures=itarget Finished dev [unoptimized + debuginfo] target(s) in 0.17s PS C:\Users\Xanewok\repos\sequoia\openpgp-default> cargo +nightly doc --no-default-features -Zfeatures=itarget thread 'main' panicked at 'activated_features for invalid package: features did not find PackageId { name: "nettle", version: "7.0.0", source: "registry `https://github.com/rust-lang/crates.io-index`" } false', src\tools\cargo\src/cargo\core\resolver\features.rs:227:14 note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace which at first glance suggests that activated_features contains an entry for the nettle package, which should only be activated under cfg(not(windows)) and is correctly skipped for the check build.
[target.'cfg(features = "fltk")'.dependencies] optional = true # specific fltk git repos I want a library to expose GUI elements if those features are requested by the end user You can already do this by adding the dependency in the feature definition, right?

[features]
my_feature = ["my_dependency"]

[dependencies]
my_dependency = { version = "1.0", optional = true }

Sorry I was not clear enough; I meant to ONLY include it for the requested features. Similar to compiling with OS-specific crates, but only if features are requested. If this is already possible I missed it. FWIW I thought doing something like

[dependencies]
# things the library needs

[dependencies.my_feature]
# stuff to include ONLY when they want the GUI aspect

It seems to me like something like this should be possible, but it may not be. @Razican I think I misunderstood what I was doing; I think I untangled what I was trying to do... sorry for the clutter :sweat_smile:
2025-04-01T04:35:24.446877
2021-06-27T08:36:19
930879600
{ "authors": [ "ChrisDenton", "ehuss" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10508", "repo": "rust-lang/cargo", "url": "https://github.com/rust-lang/cargo/issues/9630" }
gharchive/issue
Windows environment variables are case preserving (but case-insensitive) I recently submitted a PR to stop Rust ASCII upper casing environment variables: https://github.com/rust-lang/rust/pull/85270. This causes the following test to fail. https://github.com/rust-lang/cargo/blob/36917524b96ebe701b5006b8773042dace8bca05/tests/testsuite/tool_paths.rs#L360-L373 I think it's because something relies on a new process upper casing key names? Which only happened before because Rust manually ASCII upper cased keys when starting a new process. Hm, that seems to be the assumption here: https://github.com/rust-lang/cargo/blob/36917524b96ebe701b5006b8773042dace8bca05/src/cargo/util/config/mod.rs#L698-L703 Whereas Windows itself does no such conversion. cc @ehuss ? (only because you most recently touched the relevant test 🙂) Here's a quick summary of the issue the linked PR seeks to address (using a few simple Rust programs you can try):

listenv.rs:

fn main() {
    for (key, _) in std::env::vars() {
        println!("{}", key);
    }
}

spawn.rs:

fn main() -> std::io::Result<()> {
    std::process::Command::new(r".\listenv.exe").spawn()?.wait()?;
    Ok(())
}

withenv.rs:

fn main() -> std::io::Result<()> {
    std::process::Command::new(r".\listenv.exe").env("hello", "world").spawn()?.wait()?;
    Ok(())
}

Compile all these with Rust:

Z:\test> rustc listenv.rs
Z:\test> rustc spawn.rs
Z:\test> rustc withenv.rs

If you run .\listenv.exe you should see environment keys with lower case values (e.g. "windir", "Path", "ProgramFiles", etc). The same goes for .\spawn.exe. However .\withenv.exe invokes Rust's ASCII upper casing behaviour and therefore all keys are upper cased (or were, if my PR gets merged). Note that in any case, std::env::var(key) will work case-insensitively. I posted #9646 to disable the test on Windows for now. Once that is merged, I'll update rust-lang/rust, and then re-approve your PR. Afterwards, I'll work on fixing Cargo to work with the preserved casing. Thanks! I've added a comment to my PR with a (hopefully accurate) summary of the issue.
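For readers, a tiny sketch of the lookup semantics under discussion (illustrative only, not the std implementation; real Windows comparison uses a wider case folding than ASCII): names compare case-insensitively while their stored casing is preserved.

/// Find `key` among case-preserved (name, value) pairs, comparing names
/// case-insensitively without ever rewriting their casing.
fn get_env_case_insensitive<'a>(vars: &'a [(String, String)], key: &str) -> Option<&'a str> {
    vars.iter()
        .find(|(name, _)| name.eq_ignore_ascii_case(key))
        .map(|(_, value)| value.as_str())
}

fn main() {
    let vars = vec![
        ("Path".to_string(), r"C:\Windows;C:\tools".to_string()),
        ("windir".to_string(), r"C:\Windows".to_string()),
    ];
    // "PATH", "Path", and "path" all resolve to the same variable...
    assert_eq!(get_env_case_insensitive(&vars, "PATH"), Some(r"C:\Windows;C:\tools"));
    // ...while the stored name keeps its original casing.
    assert_eq!(vars[0].0, "Path");
}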
2025-04-01T04:35:24.473190
2022-09-08T18:43:03
1366826293
{ "authors": [ "Eh2406", "bors", "ehuss", "epage", "ijackson", "jonhoo", "rust-highfive", "weihanglo" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10509", "repo": "rust-lang/cargo", "url": "https://github.com/rust-lang/cargo/pull/11062" }
gharchive/pull-request
fix(publish): Block until it is in index Originally, crates.io would block on publish requests until the publish was complete, giving cargo publish this behavior by extension. When crates.io switched to asynchronous publishing, this intermittently broke people's workflows when publishing multiple crates. I say intermittent because it usually works until it doesn't, and it is unclear why to the end user because the crate will be published by the time they check. In the end, callers tend to either put in timeouts (and pray), poll the server's API, or use the crates-index crate to poll the index. This isn't sufficient because:
- For any new interested party, this is a pit of failure they'll fall into.
- crates-index has re-implemented index support incorrectly in the past, currently doesn't handle auth, doesn't support git-cli, etc.
- None of these previous options work if we were to implement workspace-publish support (#1169).
- The new sparse registry might increase the publish times, making the delay easier to hit manually.
- The new sparse registry goes through CDNs, so checking the server's API might not be sufficient.
- Once the sparse registry is available, crates-index users will find out when the package is ready in git, but it might not be ready through the sparse registry because of CDNs.
So now cargo will block until it sees the package in the index. This is checking via the index instead of server APIs in case there are propagation delays. This has the side effect of being noisy because of all of the "Updating index" messages. This is done unconditionally because cargo used to block and that didn't seem to be a problem, blocking by default is the less error-prone case, and there doesn't seem to be enough justification for a "don't block" flag. In reviewing this change, be sure to look at the individual commits: the first makes it possible to write the tests for this, the second adds a test that shows the current behavior, and the third updates the test to the expected behavior, showing all of this works. Fixes #9507 r? @weihanglo (rust-highfive has picked a reviewer for you, use r? to override) Hmm, hadn't run all of the tests before posting and hadn't considered there isn't a functioning-enough registry to pull from. Open to people's thoughts on doing some kind of hack for this (env variable, unstable flag to not block, etc) or to try to add an insta-stable flag for controlling blocking. Hmm, hadn't run all of the tests before posting and hadn't considered there isn't a functioning-enough registry to pull from. How hard would it be to implement a real publish API? I suspect a simplified implementation should be relatively short (maybe 40-50 lines?). I think the steps would roughly be: Read the data sent by cargo, and deserialize the JSON. Save the .crate file into the correct directory. Convert the publish JSON to the index JSON. I think this should be a relatively basic copy of fields into a new serde_json::Value. The only one I think that needs real translation is the registry field of a dependency. I think the cksum would also need to be computed. Add that JSON to the index, and commit it. The fact that almost all of our tests do not require actually running a web server is kind of nice for performance and reliability. An alternative implementation to support the tests would be to add a flag for skipping the new delay. We should also keep in mind in designing this that not all registries necessarily publish promptly.
it is perfectly reasonable for a registry that accepts the publish API to require some kind of intervention before it appears in the index. I could see a registry requiring a user to review the package they just uploaded to verify that it looks correct before adding it to the index. A registry could also do an automated or manual security audit before it is added. We should also keep in mind in designing this that not all registries necessarily publish promptly. For this use case, it seems like we'd want this to be part of registry configuration, as passing in a --wait=no every time doesn't seem right. Maybe we should start out with both an unstable flag and an unstable config value so any users can unblock themselves, and we can also use this as a way to collect feedback on what is needed and not block this on naming or other details like that. We should also keep in mind in designing this that not all registries necessarily publish promptly. To start speculating on an RFC... Thinking about this more, I think the right place for a registry to tell us how long to wait is in the response to the /api/v1/crates/new call, either in the response object or in headers. I see a couple of reasonable things a registry could want to tell a user about how long to wait: Retry for some expected period of time (the behavior you're adding here), possibly with the ability for a registry to estimate how long the retries should run for. A URL that needs to be interacted with in order for progress to be made, for the cases where it requires the author to do a manual review. A URL that can be used to track the progress of publication (possibly with some specification for how cargo can use it as a retry loop). Clarification that the retry loop is unlikely to be productive, say if all publishes require manual review by the registry. On the other hand, we can change the default behavior now and add such configurability in a backwards compatible way in the future. I wonder if we could express all of these with semi-standard HTTP response codes + headers:

Retry for some expected period of time, possibly with an estimate of how long the retries should run for:
202 Accepted
Retry-After: Fri, 07 Nov 2022 23:59:59 GMT
Content-Location: https://crates.io/api/v1/crates/$crate/$new_version

A URL that needs to be interacted with in order for progress to be made (e.g. a manual review by the author):
202 Accepted
Location: https://registry.example.com/approve/$crate/$new_version

A URL that can be used to track the progress of publication:
202 Accepted
X-Cargo-Track-Progress: v1
Content-Location: https://crates.io/api/v1/crates/$crate/$new_version

Clarification that the retry loop is unlikely to be productive:
202 Accepted

And to add one: "available now" could be 201 Created. I was curious why this waits after publish, instead of waiting for dependencies to be available? The original intent I had for #9507 is to only block publishing to wait for dependencies to become available. That way, it doesn't affect the majority of cases where someone is publishing a single package. Do you have any thoughts on those approaches? I'm a little concerned about always blocking until it is available.
There are times where there can be a significant delay, and the user may not care, but would be bothered if publishing gets "stuck". I can see some benefit of blocking before finishing, such as someone wanting to publish an announcement. But I'm a bit wary of forcing that behavior. We should also keep in mind in designing this that not all registries necessarily publish promptly. it is perfectly reasonable for a registry that accepts the publish API to require some kind of intervention before it appears in the index. I could see a registry requiring a user to review the package they just uploaded to verify that it looks correct before adding it to the index. A registry could also do an automated or manual security audit before it is added. Indeed. But right now such a registry would be unusable with cargo, because to publish a workspace of interdependent crates one would have to wait for the human review cycle to complete before publishing the next crate. To support a registry with this kind of behaviour, it would be necessary to allow cargo publish of a crate whose dependencies are not yet available via the published view of the registry. :umbrella: The latest upstream changes (presumably #11111) made this pull request unmergeable. Please resolve the merge conflicts. :umbrella: The latest upstream changes (presumably #11069) made this pull request unmergeable. Please resolve the merge conflicts. This patch changes the behaviour of cargo publish as listed below: By default, synchronously wait for the publish to propagate to the crates.io index. The default timeout is 60 seconds. The polling interval is 1 second. To opt out, set the unstable config publish.timeout = 0. Ed summarizes this feature very well in the PR description. I'd recommend taking a look. I am going to propose this to be merged, but 1.65.0 is going to release on November 3rd; do we want to postpone the merge until after that to maximize the test window? @rfcbot merge I am going to propose this to be merged, but 1.65.0 is going to release on November 3rd; do we want to postpone the merge until after that to maximize the test window? If we quickly get votes and don't wait for the 10 day period, we'll get 9 weeks of testing; 7 or so if votes take some time and we wait for the full FCP period. Seeing as we've discussed this and the plan, I'm assuming the vote on this is uncontroversial and we can forego the 10 day waiting period. Either way, if people want to wait until after November 3rd, I can understand. I would ask that we just change the default to 0 and merge as-is, because this has already had 3 different semantic merge conflicts, one that took a decent amount of time to resolve. @rfcbot concern update-issue I'm trying to test this out, but I can't seem to get it to work. I opened #11253 for what I'm seeing. Can someone take a look? @rfcbot resolve update-issue One thing we may want to look at separately is the UI while it is waiting. Cargo just keeps repeating Updating crates.io index. I'm wondering if it might be better to suppress the Updating message? That could maybe be implemented by temporarily enabling quiet mode. I'm also wondering about maybe showing some kind of busy indicator (maybe a progress bar that ticks forward each second)? I'm also a little concerned that it feels like there is essentially no test for this feature (that exercises what users will experience). Would someone be willing to follow up with a test for it?
I think it shouldn't be too difficult to change the PUT handler to optionally delay the write_to_index call (it could maybe toss it into a thread or something to run in the background). The recent Cargo meeting concluded that we want to move forward and maximize the testing period for this default timeout setting before it hits the next stable version. I'll merge it now then. @bors r+ :pushpin: Commit f2fc5ca86d9efbc7dbf8c5d88734ecb498718413 has been approved by weihanglo It is now in the queue for this repository. :hourglass: Testing commit f2fc5ca86d9efbc7dbf8c5d88734ecb498718413 with merge 7e484fc1a766f56dbc95380f45719698e0c82749... :sunny: Test successful - checks-actions Approved by: weihanglo Pushing 7e484fc1a766f56dbc95380f45719698e0c82749 to master...
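A hedged sketch of the block-until-in-index behavior this thread settles on (illustrative, not cargo's implementation): after the upload succeeds, poll the index until the new version appears or a timeout elapses.

use std::time::{Duration, Instant};

/// Poll `in_index` (e.g. a closure that re-fetches the index and checks
/// for the new version) until it reports true or `timeout` elapses.
fn wait_for_publish<F>(mut in_index: F, timeout: Duration, poll: Duration) -> bool
where
    F: FnMut() -> bool,
{
    let start = Instant::now();
    loop {
        if in_index() {
            return true; // safe to publish the next crate in a workspace
        }
        if start.elapsed() >= timeout {
            return false; // give up; the publish may still complete later
        }
        std::thread::sleep(poll);
    }
}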
2025-04-01T04:35:24.479032
2018-08-01T00:19:53
346403769
{ "authors": [ "alexcrichton", "bors" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10510", "repo": "rust-lang/cargo", "url": "https://github.com/rust-lang/cargo/pull/5845" }
gharchive/pull-request
Rename --prepare-for to --edition, drop arg This commit tweaks the UI of cargo fix for the edition. Previously you'd execute cargo fix --prepare-for 2018, but that's a lot of typing! Plus it's some manual data that Cargo can already infer. Instead, after this commit, you now type cargo fix --edition, and that's it! The idea is that this'll tell Cargo to fix code for the next edition, inferring whatever edition is in use and figuring out what to pass to rustc. Functionality-wise this should be the exact same as --prepare-for 2018, though. If others agree w/ this change I'll send a PR to the edition guide after this merges! Updated! Hm, I haven't figured out a great place to print out the "Upgrading..." message, but @killercup did you have a particular location in mind? @bors: r+ Ok, I'm gonna go ahead and try to squeeze this in for the preview, but we can of course continue to iterate in-tree! :pushpin: Commit b2b120e9fd2e2f777ee53fc9456e7f31224c81d2 has been approved by alexcrichton :hourglass: Testing commit b2b120e9fd2e2f777ee53fc9456e7f31224c81d2 with merge e3a90f2097e7323583f61eec0ea8f0601b93b186... :sunny: Test successful - status-appveyor, status-travis Approved by: alexcrichton Pushing e3a90f2097e7323583f61eec0ea8f0601b93b186 to master...
2025-04-01T04:35:24.495325
2022-07-01T13:56:20
1291445112
{ "authors": [ "RalfJung", "bors" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10511", "repo": "rust-lang/miri", "url": "https://github.com/rust-lang/miri/pull/2296" }
gharchive/pull-request
./miri improvements I have needed to run something with many different seeds often enough that I would like an easier way to do it. ;) So now we have ./miri many-seeds. Also I made the script less dependent on the working directory, so calling it from a different directory should work properly now even if that other directory does not have the same rustup override as the one where Miri lives. @bors r+ :pushpin: Commit 9bc7938bc27d3757278c55839c1865ee74c50baa has been approved by RalfJung :hourglass: Testing commit 9bc7938bc27d3757278c55839c1865ee74c50baa with merge 5815d8d81cbe9b5db3d70dca1cee87d17e259afe... :sunny: Test successful - checks-actions Approved by: RalfJung Pushing 5815d8d81cbe9b5db3d70dca1cee87d17e259afe to master...
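The real many-seeds command is part of the shell script, but purely to illustrate the idea (a hypothetical rewrite, with the seed range and flag wiring as assumptions), a seed sweep amounts to re-running the command with a different -Zmiri-seed in MIRIFLAGS each time:

use std::process::Command;

fn main() {
    let mut args = std::env::args().skip(1);
    let program = args.next().expect("usage: many-seeds <command> [args...]");
    let rest: Vec<String> = args.collect();
    for seed in 0..64 {
        // Each run gets a different interleaving/rng seed via MIRIFLAGS.
        let status = Command::new(&program)
            .args(&rest)
            .env("MIRIFLAGS", format!("-Zmiri-seed={seed}"))
            .status()
            .expect("failed to spawn command");
        if !status.success() {
            eprintln!("seed {seed} failed");
            std::process::exit(1);
        }
    }
}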
2025-04-01T04:35:24.499578
2021-04-12T00:58:18
855459525
{ "authors": [ "boomshroom", "calebzulawski" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10512", "repo": "rust-lang/portable-simd", "url": "https://github.com/rust-lang/portable-simd/issues/93" }
gharchive/issue
Add "common" shuffles The shuffle API in #62 can handle nearly any hardware shuffle, but it's a bit clumsy to use (and the API requires full const generics). A few common cases of shuffles should be provided as a simpler API: [x] reverse [x] rotate [ ] shift (like rotate but insert 0s) [ ] align (or maybe rotate2?) see #78 [x] interleave/deinterleave If the shift amount is a constant, then it should be possible to implement these as const functions that output the corresponding array to passe into shuffle's const parameter. As an example, it's possible to implement alignr (x86 doesn't have an alignl) with const fn alignr<const LANES: usize>(shift: usize) -> [u32; LANES] { let mut indices = [0; LANES]; let mut block = 0; while block < LANES { let mut idx = 0; while idx < 16 && idx < LANES { // x86 chunks its vectors into 16 byte chunks for alignr let offset = if shift + idx >= 16 { LANES + (shift + idx) % 16 } else { shift + idx }; indices[idx + block] = (offset + block) as u32; idx += 1; } block += 16; } indices } After some testing, LLVM does appear to recognize the array that results as an alignr and simplifies it down. I wonder if there's any point adding shift and align? They can both be implemented manually, or they can be implemented via rotate and select, which I would think the compiler could optimize. They're not the most obvious functions, so it might actually be clearer manually implementing them than having to reference the docs.
2025-04-01T04:35:24.512677
2017-01-20T18:01:35
202204279
{ "authors": [ "BurntSushi", "bors", "shepmaster" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10513", "repo": "rust-lang/regex", "url": "https://github.com/rust-lang/regex/pull/331" }
gharchive/pull-request
Add categories to Cargo.toml Hi! crates.io now supports categories, which are a curated list of topics aimed at helping an end-user coming to crates.io looking for "a crate to do ______". We're sending pull requests to selected crates to add categories in order to help populate the categories and seed their usefulness. We've made a guess at the best category/categories for this crate; if it doesn't fit, please feel free to take a look at all the available categories and their descriptions and the slug values that should be specified in your Cargo.toml and pick different ones. If you have a category in mind that isn't available, you can send a PR to this file on crates.io to propose additional categories. Crates can have up to 5 categories, and uploading categories to crates.io currently requires publishing a new version with a cargo nightly from 2017-01-18 or later (it needs to contain this PR). We've published a blog post with further details about categories. The blog post also talks about the new crates.io support for CI badges, which you may be interested in adding as well. Please let me know if you have any questions! @bors r+ :pushpin: Commit c4e596f has been approved by BurntSushi Thanks! :-) :hourglass: Testing commit c4e596f with merge e3c845c... @bors retry @bors r- @bors r+ :pushpin: Commit c4e596f has been approved by BurntSushi :sunny: Test successful - status-appveyor, status-travis Approved by: BurntSushi Pushing e3c845c7a947d587b4e348d35a89417721222faf to master...
2025-04-01T04:35:24.551912
2015-05-22T19:40:07
79582379
{ "authors": [ "cmr", "metaljoe" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10514", "repo": "rust-lang/rust-installer", "url": "https://github.com/rust-lang/rust-installer/issues/35" }
gharchive/issue
Mac OS X installer does not document supported versions & fails with unhelpful message on old Mac Reported by @mcclure on https://github.com/rust-lang/rust/issues/25704 If I go to rust-lang.org, I see a big friendly blue "Install" button labeled with: Recommended Version: 1.0.0 (Mac installer) Problem 1: You do not clarify on the web page which versions of Mac the installer runs on (OS version, and if 10.6 or older is supported, 32- or 64- bit). Problem 2: When I click the "install" button and run the pkg, it quickly halts with this message: Notice the buttons are grayed out— clicking on them does nothing. This error message makes no sense at all. For context, I am running 10.6.8 on 64-bit intel (this is fairly old in Mac years, so it would be unsurprising if the Rust binaries do not support it). While the Installer is running, these messages appear in Console.app: Expected behavior: If the installer fails, it should be with an error message that makes sense. If my operating system version is not supported, I would expect the Rust web page or installer to simply tell me so. I've just downloaded the 1.1.0 installer as I wanted to give Rust a try, and I've got the same problem as described above. This is also on OS X 10.6.8 with both 64- and 32-bit installers.
2025-04-01T04:35:24.558434
2022-03-07T00:24:13
1160763217
{ "authors": [ "Jemoka", "brotzeit" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10515", "repo": "rust-lang/rust-mode", "url": "https://github.com/rust-lang/rust-mode/issues/442" }
gharchive/issue
(cargo-locate-project) returns 101 over tramp Hi everyone! Thanks so much for the great support for Rust on emacs. (rust-test) (rust-build) etc. or any other function that calls (cargo-locate-project) fails when called over a file on tramp (remote file access protocol.) This renders rust-mode unable to build with a workflow involving remote files. Some version info, in case it is necessary: rust-mode 1.0.2 emacs 28.0.91 host machine: macOS 11.6, rust 1.58.1 (db9d1b20b 2022-01-20) remote machine: Arch Linux with systemd 249 (249.2-1-arch), rust 1.56.1 (59eed8a2a 2021-11-01) And here's the error: rust-buffer-project: ‘cargo locate-project’ returned 101 status: error: could not find `Cargo.toml` in `/Users/houliu` or any parent directory Thank you very much! Does it work when you set (setq rust-cargo-bin "~/.cargo/bin/cargo") in your config ? it unfortunately doesn't, because ~ defaults to local machine and not tramp. Hey folks! Just wanted to ask if there's any forward motion on this topic? Thanks so much! #449 it's not ideal, but at least you should now be able to run cargo commands remotely. Got it, thank you very much. Feel free to close if there is no further progress; I will reopen if I see anything with #449. Thanks so much for your help! I'll keep the issue open until there's a better fix. Can you also open a pr with your fix here ? Drop the file-remote-p condition in rust-buffer-project and replace call-process with process-file ? Sure, work just started pretty heavily for me but I will do this when the weekend hits. I forgot I wrote this and added it to https://github.com/rust-lang/rust-mode/pull/457 :D But thanks for the issue.
2025-04-01T04:35:25.073606
2016-03-28T20:17:45
144072435
{ "authors": [ "Jayflux", "pravic", "steveklabnik" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10516", "repo": "rust-lang/sublime-rust", "url": "https://github.com/rust-lang/sublime-rust/pull/77" }
gharchive/pull-request
Add Cargo-Doc, Cargo-Release and Cargo-Update to the Cargo.sublime-build To skip manually writing these commands in shell. Cargo-Doc with --no-deps because it is most common choice, I guess. Bump: @steveklabnik @brson Cargo-Doc with --no-deps because it is most common choice, I guess. I am not sure about this, but I am also not sure how to even figure that out. @brson what do you think? Of course we can add more commands (God bless the fuzzy search!). Since I'm in doubt how to provide user input for ST's build system args. Cargo Doc and Cargo release are added now so closing this
2025-04-01T04:35:25.114783
2019-08-28T22:49:40
486646073
{ "authors": [ "Barronli", "phil-opp" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10517", "repo": "rust-osdev/bootloader", "url": "https://github.com/rust-osdev/bootloader/issues/78" }
gharchive/issue
Build successfully with benign errors in WSL

When I build the bootloader under WSL (Windows Subsystem for Linux), it finishes while reporting errors:

```
Compiling bootloader v0.8.0 (/mnt/d/devs/Rust/bootloader)
error: failed to remove /mnt/d/devs/Rust/bootloader/target/x86_64-bootloader/release/deps/bootloader-589b595378e59014.bootloader.3pdwijpa-cgu.0.rcgu.o: Input/output error (os error 5)
error: failed to remove /mnt/d/devs/Rust/bootloader/target/x86_64-bootloader/release/deps/bootloader-589b595378e59014.bootloader.3pdwijpa-cgu.1.rcgu.o: Input/output error (os error 5)
error: failed to remove /mnt/d/devs/Rust/bootloader/target/x86_64-bootloader/release/deps/bootloader-589b595378e59014.bootloader.3pdwijpa-cgu.12.rcgu.o: Input/output error (os error 5)
error: aborting due to 3 previous errors
Finished release [optimized + debuginfo] target(s) in 3.83s
```

It does not report the errors if I enter the build command again. The mentioned files can be removed manually since they have all the r/w file permissions set. The same errors are reported when building blog_os as well, and they do not block anything; i.e., blog_os can run/test in WSL as expected. The same situation has happened since post-02.

I think os error 5 is an "access denied" error. I'm not sure what exactly causes this, but it looks like rustc itself causes this issue when trying to remove old compilation artifacts. Maybe it's related to https://github.com/rust-lang/rust/issues/48700?

@phil-opp You are right, and it is a rustc related issue. I saw the evidence in the stack trace of codegen when it tries to remove the intermediate objects after linking. But this issue is different from the one you mentioned in rust-lang/rust#48700, since I am not using VSCode. I suspected it was due to my antivirus software, but the problem persists after I disabled the file scanning. Anyway, I am closing this issue, and will open one in rust-lang. Btw, the debug build (without passing "--release" to cargo) does not have the issue.

Thanks for the update! Let me know if there is anything that I can do in this project to prevent or work around this issue.

It looks like if I build the project without the "--release" option, it does not report the error, but I don't know how to force rustc (or cargo) to work the same way with the "--release" option. I tried adding the following two lines (both and either) in Cargo.toml but got the same error. I guess there are more differences between the release and dev profiles that I failed to find out by googling.

```
[profile.release]
panic = "abort"
lto = false
+ debug = true
+ opt-level = 0
```

For blog_os purposes, I don't know if it is possible to set the dependency on bootloader's debug build. I tried to use the following cargo feature in the blog_os Cargo.toml but failed too.

```toml
cargo-features = ["profile-overrides"]

[profile.release.overrides.bootloader]
debug = true
opt-level = 0
```

Then I realized that bootimage itself actually builds the kernel and bootloader too, and always builds the bootloader in release mode. I commented out the line with "--release" in bootimage's builder.rs; then I can build blog_os without the said errors, but I cannot run the OS correctly. I am afraid the --release build of bootloader is necessary?

Yeah, it seems like --release is currently necessary for some reason, but I'm not sure why. I suspect that it has something to do with the assembly code that loads the rest of the bootloader. Either way, this issue doesn't sound like it would be really fixed with a debug build. Even if it doesn't happen for this crate and compiler version, it might happen again with different versions.
2025-04-01T04:35:25.117842
2023-03-05T08:01:28
1610097354
{ "authors": [ "devsnek", "phil-opp" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10518", "repo": "rust-osdev/bootloader", "url": "https://github.com/rust-osdev/bootloader/pull/346" }
gharchive/pull-request
kernel image fields & zero out rbp

These changes stemmed from my efforts to add stack traces to my kernel:

- kernel_addr + kernel_len for building a slice that holds the elf
- kernel_image_offset for computing offsets from the instruction pointers in stack frames
- convention says that you should zero rbp before jumping into your actual code so that you know where to stop the stack trace

There is some prior discussion about a similar change in https://github.com/rust-osdev/bootloader/pull/177. Back then I argued that we might not want to guarantee that we keep the full kernel ELF file in physical memory and instead make it configurable. My original thought was that we might want to optimize the memory consumption. However, we haven't implemented such an optimization in the past two years, so I think I'm fine with merging this as-is now.

@phil-opp is anything still remaining here? Could it be merged?

Thanks for the adjustments! Looks good, sorry for the slow review.
2025-04-01T04:35:25.120495
2020-05-16T19:43:37
619542782
{ "authors": [ "phil-opp", "vlovich" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10519", "repo": "rust-osdev/cargo-xbuild", "url": "https://github.com/rust-osdev/cargo-xbuild/issues/78" }
gharchive/issue
cargo xtest doesn't work under WSL1 with native QEMU

1. Install QEMU natively on Windows
2. Under WSL, set up cargo
3. In ~/bin/qemu-system-x86_64 put `exec /mnt/c/Program\ Files/qemu/qemu-system-x86_64.exe "$@"` and make the file executable
4. run cargo xrun
5. run cargo xtest

The xrun command works because it provides a relative path to the target to run to qemu. However xtest fails because it provides an absolute path, which native QEMU doesn't understand. Passing a relative path just like xrun does should fix the issue.

```
Running: `qemu-system-x86_64 -drive format=raw,file=/mnt/c/projects/blog_os/target/x86_64-blog_os/debug/deps/bootimage-blog_os-f9b6181f9decdfac.bin -device isa-debug-exit,iobase=0xf4,iosize=0x04 -serial stdio -display none`
C:\Program Files\qemu\qemu-system-x86_64.exe: -drive format=raw,file=/mnt/c/projects/blog_os/target/x86_64-blog_os/debug/deps/bootimage-blog_os-f9b6181f9decdfac.bin: Could not open '/mnt/c/open/blog_os/target/x86_64-blog_os/debug/deps/bootimage-blog_os-f9b6181f9decdfac.bin': The system cannot find the path specified.
```

This seems to be an issue of the bootimage tool and not of cargo-xbuild. Let me try to transfer the issue to the bootimage repository.
2025-04-01T04:35:25.161794
2023-03-26T11:30:33
1640872218
{ "authors": [ "jon-chuang", "olexiyb", "setzer22" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10521", "repo": "rustformers/llama-rs", "url": "https://github.com/rustformers/llama-rs/issues/77" }
gharchive/issue
Swap strategy for infinite output

As discussed in https://github.com/ggerganov/llama.cpp/issues/71#issuecomment-1483907574

The idea is to achieve a naive implementation for infinite output generation using a strategy that simply clears the context window (you can keep the original prompt around), and starts adding new tokens. This is a hack that doesn't properly leverage the advantages of the attention mechanism: when the context window gets full, the transformer's hidden state has information about more than just the last 2048 tokens, because this information is indirectly embedded in the outputs of the self-attention mechanism. For example, if token 25 attended to tokens 10 and 12, even when tokens 10 and 12 fall outside the context window, a lot of information about these tokens will still be encoded at position 25. A solution that slides the context window would achieve a gradually "fading" context window, instead of something where the transformer 100% forgets about a word the moment a token falls outside of context. I have some reason to suspect systems like ChatGPT are relying on a mechanism like this, based on their ability to consistently recall parts of the conversation that occurred way before the token window was exceeded. However, I'm not knowledgeable enough to figure out if there's a way to actually make this work, given the fact that the positional encoding function used in LLaMA (RoPE) is absolute, not relative.

By doing the swap trick proposed here, the transformer will effectively forget all prior context whenever the swap occurs, and there will be a lag spike due to the last few tokens having to be reprocessed. So this is very much non-ideal. However, since llama.cpp has recently implemented this, I feel like we should at least add this naive version too until someone can figure out a real solution.

Yes, llama.cpp implements a "hacky" method like this: it takes the last $k$ tokens + first $n$ "prompt" tokens when the window becomes full.

There is a pull request to solve this, please review https://github.com/rustformers/llm/pull/424
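To make the strategy above concrete, here is a rough sketch (written in Python purely for illustration, not in the crate's Rust) of the naive swap: keep the first n_keep "prompt" tokens, and once the window fills up, drop everything except those plus a tail of recent tokens before continuing. The next_token function is a dummy stand-in for the model's sampling call, not an actual llama-rs or llama.cpp API, and the "keep half of the remaining context" split is only one possible choice.

```python
# Hypothetical sketch of the naive "context swap" for infinite generation.
# next_token() is a placeholder for a real model call, NOT a llama-rs API.
def next_token(context):
    return len(context) % 7  # dummy value so the sketch actually runs

def generate_forever(prompt_tokens, n_ctx=2048, n_keep=None, max_steps=10_000):
    n_keep = len(prompt_tokens) if n_keep is None else n_keep
    context = list(prompt_tokens)
    for _ in range(max_steps):
        if len(context) >= n_ctx:
            # Window full: keep the first n_keep prompt tokens plus the last
            # half of the remaining context, then keep generating. In a real
            # implementation the kept tokens must be re-evaluated here, which
            # causes the lag spike mentioned above.
            half = max(1, (n_ctx - n_keep) // 2)
            context = context[:n_keep] + context[-half:]
        tok = next_token(context)
        context.append(tok)
        yield tok

# Example: print the first few "tokens" of an endless stream.
for i, tok in enumerate(generate_forever([1, 2, 3])):
    if i >= 5:
        break
    print(tok)
```

This mirrors the "first $n$ prompt tokens + last $k$ tokens" behaviour mentioned for llama.cpp, at the cost of forgetting everything in between whenever the swap happens.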
2025-04-01T04:35:25.168066
2019-04-17T07:48:24
434315295
{ "authors": [ "alexcrichton", "c410-f3r", "dbrgn" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10522", "repo": "rustwasm/wasm-bindgen", "url": "https://github.com/rustwasm/wasm-bindgen/issues/1468" }
gharchive/issue
Typescript files: Add docs for methods

💡 Feature description

In the Typescript declaration files that are generated, the struct docstrings are included but the method docstrings aren't. It would be great if methods were documented too.

💻 Basic example

Rust:

```rust
/// The context object containing the state.
#[wasm_bindgen]
pub struct ComposeArea {
    window: web_sys::Window,
    document: web_sys::Document,
    wrapper_id: String,
    selection_range: Option<Range>,
}

#[wasm_bindgen]
impl ComposeArea {
    /// Insert plain text at the current caret position.
    pub fn insert_text(&mut self, text: &str) {
        let text_node = self.document.create_text_node(text);
        self.insert_node(text_node.unchecked_ref());
    }
}
```

Current type declaration:

```typescript
/**
 * The context object containing the state.
 */
export class ComposeArea {
  insert_text(text: string): void;
}
```

Desired type declaration:

```typescript
/**
 * The context object containing the state.
 */
export class ComposeArea {
  /**
   * Insert plain text at the current caret position.
   */
  insert_text(text: string): void;
}
```

Thanks for the report! I've transferred this issue to the wasm-bindgen repository since this is where the fix will go, but I believe this is definitely something we should fix! wasm-bindgen already generates docs for JS and it shouldn't be too difficult to implement it for TS.

If no one is going to take on this issue, I will gladly send a PR.

Indeed yeah, and PRs are most welcome!

Cool, thanks @c410-f3r!
2025-04-01T04:35:25.175445
2019-09-19T18:07:57
495950125
{ "authors": [ "alexcrichton", "andimarek", "garrettmaring", "terwer" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10523", "repo": "rustwasm/wasm-bindgen", "url": "https://github.com/rustwasm/wasm-bindgen/issues/1775" }
gharchive/issue
How do you use a Rust struct with a String field?

Summary

How do you use a Rust struct with a String field using wasm-bindgen? The String type seems to be supported for function parameters and return values. https://rustwasm.github.io/docs/wasm-bindgen/reference/types/string.html

Additional Details

```rust
#[wasm_bindgen]
pub struct Data {
    id: String,
}
```

Thanks for the report! For this you'll want to use getters and setters, and that should do the trick!

That worked! Thank you.

Hi @garrettmaring can you share some details how exactly you solved it with getters and setters? thanks

Sure!

```rust
// doesn't work...
#[wasm_bindgen]
struct Data {
    pub id: String,
}
```

You'll get the error `error[E0277]: the trait bound std::string::String: std::marker::Copy is not satisfied`, since the String type in Rust isn't implicitly copyable. I had to read up on the difference between Copy and Clone to understand that I couldn't just implement Copy but rather needed to use .clone() to explicitly copy it. Thankfully, wasm-bindgen gives us a simple way to do it.

```rust
#[wasm_bindgen]
struct Data {
    id: String, // ensure that the field is private
}

#[wasm_bindgen]
impl Data {
    #[wasm_bindgen(getter)]
    pub fn id(&self) -> String {
        self.id.clone()
    }

    #[wasm_bindgen(setter)]
    pub fn set_id(&mut self, id: String) {
        self.id = id;
    }
}
```

There are some interesting things that you can do with getters and setters that are documented here.

@alexcrichton would it be feasible for wasm-bindgen to generate this code if a struct implements Clone?

It's plausible, yeah! It's something though we've avoided doing historically because a Clone implementation can often be accidentally quite expensive, so we tend to prefer to request that users do so manually to ensure they know the cost they're opting into. Now that being said, it'd be a neat feature to do something like #[wasm_bindgen(getter_setter_with_clone)] or something like that so the boilerplate could be drastically reduced.

Thanks @garrettmaring, it works.
2025-04-01T04:35:25.180500
2020-09-05T09:49:34
694048575
{ "authors": [ "richardrl", "rusty1s" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10524", "repo": "rusty1s/pytorch_geometric", "url": "https://github.com/rusty1s/pytorch_geometric/issues/1611" }
gharchive/issue
On the fly dataset generation

In the docs, it says: https://pytorch-geometric.readthedocs.io/en/latest/notes/create_dataset.html?highlight=on the fly#frequently-asked-questions

```python
from torch_geometric.data import Data, DataLoader

data_list = [Data(...), ..., Data(...)]
loader = DataLoader(data_list, batch_size=32)
```

that the above is the recommended way to create on the fly datasets. But that is not exactly correct, right? Because for me, on the fly means that we generate every batch on the fly. The example code assumes you generate all the data at once.

I am interested in quite a different use case; for me, I want to sample a fresh batch every time I do backprop. E.g., my neural network will likely never see the exact batch twice. This means I should generate batches on the fly, instead of generating all the data one time like in the example code snippet and loading a DataLoader. What's the appropriate way to do this, while still using the Batch class?

I guess I could generate data as lists on the fly and use from_data_list to create batch objects... If that doesn't have much overhead that would probably be the right way.

Your solution should work fine. If you really want to create batches on the fly, you can override the collate function of torch.utils.data.DataLoader, e.g.:

```python
from torch.utils.data import DataLoader

def collate(batch):
    # Create and return your batch object.

loader = DataLoader(range(num_examples), batch_size=1, collate_fn=collate)
```

Hi, this is not possible right now because the DataLoader class has the collate_fn hardcoded, instead of passing it in as a default kwarg. You need to use the DataLoader directly from PyTorch: from torch.utils.data import DataLoader
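For what it's worth, here is one way the collate idea sketched above could be filled in, assuming PyG's Batch.from_data_list and a hypothetical make_data(idx) helper that builds a fresh Data object per index; treat it as a sketch of the approach discussed in this thread, not an official PyG recipe.

```python
import torch
from torch.utils.data import DataLoader
from torch_geometric.data import Data, Batch

def make_data(idx):
    # Hypothetical on-the-fly sample generator: a tiny random graph.
    x = torch.randn(4, 16)                    # 4 nodes, 16 features each
    edge_index = torch.randint(0, 4, (2, 8))  # 8 random edges
    return Data(x=x, edge_index=edge_index)

def collate(indices):
    # Build a fresh Batch every time the loader asks for one, so the
    # network never sees exactly the same batch twice.
    return Batch.from_data_list([make_data(i) for i in indices])

num_examples = 1000
loader = DataLoader(range(num_examples), batch_size=32, collate_fn=collate)

for batch in loader:
    pass  # batch is a torch_geometric.data.Batch, regenerated on every pass
```

Since collate_fn runs in the loader's worker processes, the per-batch generation cost can also be hidden behind num_workers if it becomes a bottleneck.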
2025-04-01T04:35:25.182195
2019-07-08T13:07:11
465247653
{ "authors": [ "jlevy44", "rusty1s" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10525", "repo": "rusty1s/pytorch_geometric", "url": "https://github.com/rusty1s/pytorch_geometric/issues/497" }
gharchive/issue
Bayesian GCN https://arxiv.org/pdf/1811.11103.pdf https://rlgm.github.io/papers/64.pdf It would be nice to also have bayesian estimator methods that would allow simulations of graphs. I think it could well supplement some of the other statistical models out there for learning on graphs. Feel free to submit PRs as well for any features you would like to see in PyG ;)
2025-04-01T04:35:25.187317
2020-12-03T10:35:32
756073378
{ "authors": [ "markoaamunkajo" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10526", "repo": "ruuvi/com.ruuvi.station", "url": "https://github.com/ruuvi/com.ruuvi.station/issues/195" }
gharchive/issue
1.4.2. Sync gets stuck on saving fw 3.29.1

Description: OnePlus 7T Pro (replicated on Samsung S8). User reported that Sync gets stuck on downloading data from a fw 3.29.1 tag to Ruuvi Station.

Actual: Press the sync button; the process connects and starts downloading, but when it gets to the end it doesn't enable the OK button. If the user exits via the back button on the phone, he will see that no data was downloaded to the graphs.

Expected: Syncs data successfully.

Fixed in 1.4.2. QA passed
2025-04-01T04:35:25.192524
2022-04-30T07:44:24
1221768462
{ "authors": [ "alongnice", "rvaiya" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10527", "repo": "rvaiya/warpd", "url": "https://github.com/rvaiya/warpd/issues/102" }
gharchive/issue
the function request and a bug

The function request: First of all, this project is really cool. My problem is that I have two screens and my operating system is UOS. I set the shortcut keys to open warpd, but each time it only runs on one of my screens, the one where the mouse was originally (I'm not sure). I would like the developers to consider this application scenario if possible.

By the way, I have installed the Kali Linux system on my laptop. In the prompt mode, there is no response when I press the position I want to click. There are two questions in total, and I can ask them separately if necessary.

There is a dedicated screen mode (s). Please see the man page for the details.
2025-04-01T04:35:25.312631
2018-07-05T19:19:29
338695426
{ "authors": [ "romainsimon", "ryanmcdermott" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10528", "repo": "ryanmcdermott/js-fp-algorithms", "url": "https://github.com/ryanmcdermott/js-fp-algorithms/issues/1" }
gharchive/issue
One liner ;)

https://github.com/ryanmcdermott/js-fp-algorithms/blob/c90671460400df271512e62f7e29857a810f64f0/uniq/index.js#L16

const uniq = list => [...new Set(list)];

That's definitely smaller :) Does it preserve order of the initial array?

Yes it does!

Cool, I didn't know that; in Python and in Set Theory in general, sets aren't ordered. Feel free to open a PR, thanks!
2025-04-01T04:35:25.336603
2015-04-22T13:42:57
70129657
{ "authors": [ "ImaginaryDevelopment", "ryanrodemoyer" ], "license": "bsd-3-clause", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10529", "repo": "ryanrodemoyer/AltProvidersForAspNetIdentity2", "url": "https://github.com/ryanrodemoyer/AltProvidersForAspNetIdentity2/issues/1" }
gharchive/issue
GlobalAsax

This looks really awesome and helpful; however, I have a few questions. Where is the code that hooks these custom classes into the ASP.NET authorization/authentication? There doesn't seem to be a global.asax.cs anywhere, or an app_startup folder?

Unfortunately, I haven't worked on this project recently. I changed jobs and work has taken priority over most everything else. I should start back on this project as there seems to be interest based on GitHub activity and my StackOverflow post that prompted me to start this project in the first place.
2025-04-01T04:35:25.339322
2022-03-21T03:59:58
1174856826
{ "authors": [ "moltar", "ryansmith94" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10530", "repo": "ryansmith94/rulr", "url": "https://github.com/ryansmith94/rulr/issues/649" }
gharchive/issue
Missing repo from NPM NPM package has no repo URLs defined. See: https://www.npmjs.com/package/rulr :tada: This issue has been resolved in version 8.7.1 :tada: The release is available on: npm package (@latest dist-tag) GitHub release Your semantic-release bot :package::rocket:
2025-04-01T04:35:25.341174
2020-01-16T19:39:29
551013022
{ "authors": [ "aguilera51284", "ryansolid" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10531", "repo": "ryansolid/solid", "url": "https://github.com/ryansolid/solid/issues/114" }
gharchive/issue
Plugin

I found this router for Solid: solid-router. Would this apply to the related projects section?

You are absolutely right. I want to do more to facilitate the community here and prevent things from drying up. I did reach out to find out the status of that project when it was unclear if it was being maintained. I think I need to endeavor to embrace these projects more. I will add a community section to the readme as I am made aware of more interesting projects. Honestly I need all the help I can get, and if contributing to the core is difficult, community projects are a great way. I think we should make this shift moving forward. And I will close this as done for now.
2025-04-01T04:35:25.379047
2017-02-14T03:06:56
207405363
{ "authors": [ "KevinYuk", "arunreddy", "bsrinivas8687", "oscfri", "rykov8" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10532", "repo": "rykov8/ssd_keras", "url": "https://github.com/rykov8/ssd_keras/issues/38" }
gharchive/issue
[question]The code is written by tensorflow API, not pure Keras API. It can only run on tensorflow, not theano? e.g.: ssd_training.py. A lot of code are used by tensorflow API. Not pure keras API. @KevinYuk yes, you're right. Forward pass should work with Theano backend also but I've never tested it. Training part is Tensorflow only because of the lack of some methods in Keras backend. @rykov8 Thanks for your response and and great work. I try to run forward code on theano, it seems that something wrong. I can run the forward by the keras config below(backend is theano, but image_dim_ordering is th. Or it can not run): { "image_dim_ordering": "th", "epsilon": 1e-07, "floatx": "float32", "backend": "theano" } But it cannot correctly predict the picture as tensorflow. I found that ssd.py code is written by keras API, but ssd_layers.py and ssd_utils.py are written by tf API(Just focus on froward, not trainning). E.g.: In ssd_layers.py, I found the code below: if K.backend() == 'tensorflow': pattern = [tf.shape(x)[0], 1, 1] prior_boxes_tensor = tf.tile(prior_boxes_tensor, pattern) elif K.backend() == 'theano': #TODO pass return prior_boxes_tensor It seems that theano is not implemented. Is that Ok? Or do you have any idea how to fix it? Thanks again for your great work!!! @KevinYuk @rykov8 Thanks a lot for porting SSD to keras. Using the bleeding edge theano I was able to get the SSD example working. But the detection is not bad. Is it a bug due to lack of theano support? Please refer the screenshot: @arunreddy I can also get the same result as you. But that's not right. You can check your code, you will find that it still run on the tensorflow, not theano. You keras config is like below, right? { "image_dim_ordering": "tf", "epsilon": 1e-07, "floatx": "float32", "backend": "theano" } But image_dim_ordering=tf will direct the code logic go to the tensorflow, not theano. class Convolution2D(Layer): '''Convolution operator for filtering windows of two-dimensional inputs. When using this layer as the first layer in a model, provide the keyword argument `input_shape` (tuple of integers, does not include the sample axis), e.g. `input_shape=(3, 128, 128)` for 128x128 RGB pictures. # Examples ```python # apply a 3x3 convolution with 64 output filters on a 256x256 image: model = Sequential() model.add(Convolution2D(64, 3, 3, border_mode='same', input_shape=(3, 256, 256))) # now model.output_shape == (None, 64, 256, 256) # add a 3x3 convolution on top, with 32 output filters: model.add(Convolution2D(32, 3, 3, border_mode='same')) # now model.output_shape == (None, 32, 256, 256) ``` # Arguments nb_filter: Number of convolution filters to use. nb_row: Number of rows in the convolution kernel. nb_col: Number of columns in the convolution kernel. init: name of initialization function for the weights of the layer (see [initializations](../initializations.md)), or alternatively, Theano function to use for weights initialization. This parameter is only relevant if you don't pass a `weights` argument. activation: name of activation function to use (see [activations](../activations.md)), or alternatively, elementwise Theano function. If you don't specify anything, no activation is applied (ie. "linear" activation: a(x) = x). weights: list of numpy arrays to set as initial weights. border_mode: 'valid' or 'same'. subsample: tuple of length 2. Factor by which to subsample output. Also called strides elsewhere. W_regularizer: instance of [WeightRegularizer](../regularizers.md) (eg. 
L1 or L2 regularization), applied to the main weights matrix. b_regularizer: instance of [WeightRegularizer](../regularizers.md), applied to the bias. activity_regularizer: instance of [ActivityRegularizer](../regularizers.md), applied to the network output. W_constraint: instance of the [constraints](../constraints.md) module (eg. maxnorm, nonneg), applied to the main weights matrix. b_constraint: instance of the [constraints](../constraints.md) module, applied to the bias. dim_ordering: 'th' or 'tf'. In 'th' mode, the channels dimension (the depth) is at index 1, in 'tf' mode is it at index 3. It defaults to the `image_dim_ordering` value found in your Keras config file at `~/.keras/keras.json`. If you never set it, then it will be "tf". bias: whether to include a bias (i.e. make the layer affine rather than linear). # Input shape 4D tensor with shape: `(samples, channels, rows, cols)` if dim_ordering='th' or 4D tensor with shape: `(samples, rows, cols, channels)` if dim_ordering='tf'. # Output shape 4D tensor with shape: `(samples, nb_filter, new_rows, new_cols)` if dim_ordering='th' or 4D tensor with shape: `(samples, new_rows, new_cols, nb_filter)` if dim_ordering='tf'. `rows` and `cols` values might have changed due to padding. ''' def __init__(self, nb_filter, nb_row, nb_col, init='glorot_uniform', activation='linear', weights=None, border_mode='valid', subsample=(1, 1), dim_ordering='default', W_regularizer=None, b_regularizer=None, activity_regularizer=None, W_constraint=None, b_constraint=None, bias=True, **kwargs): if dim_ordering == 'default': dim_ordering = K.image_dim_ordering() image_dim_ordering will direct you code run theano. @KevinYuk sorry, I forgot about ssd_layers.py and ssd_utils.py, I am afraid, you'll have to reimplement the necessary parts in Theano, in case of ssd_layers.py it should be simple. You just need to right the similar thing to pattern = [tf.shape(x)[0], 1, 1] prior_boxes_tensor = tf.tile(prior_boxes_tensor, pattern) in Theano under the appropriate brach of if-else. Probably, there can be a simple fix using Keras backend also, I will examine it later, or you can make a PR, if you manage to fix it. In case of ssd_utils.py the situation is more complicated, because here I rely on tf.image.non_max_suppression that is definitely not presented in Keras backend, so, you need to reimplement this part in pure Theano. @KevinYuk @arunreddy I believe, the problem lies in convolutional weights. As you know, actually Theano performs vanilla convolution, but Tensorflow performs cross-correlation. But it is quite simple to fix: you just need to flip the kernels like: theano_kernel = tf_kernel[::-1, ::-1, :, :] Then you need to transpose the kernel in order to match Theano dim_ordering, probably like this: theano_kernel = theano_kernel.transpose(2, 3, 0, 1) Please, check the correct order of axes in Theano, here I assume them to be (old_channels, new_channels, width, height), but probably, I'm wrong. I believe, that here is the example of kernel changing. @rykov8 Thanks for your valuable input. If you could finish the theano version, that would be better. Let me summarize the work if we want to enable the foward(aka: inference) of SSD on theano. complete the code for theano in ssd_layers.py file. Only include the last several lines; re-implement non max suppression algorithm by theano API; Then we can use it in ssd_utils.py file; adjust convolutional weights. Still there are some question below, please help check them: Q1. 
in ssd_layers.py file, we just need to complete the code below, nothing else?

```python
......
elif K.backend() == 'theano':
    #TODO
    pass
return prior_boxes_tensor
```

Q2. in ssd_utils.py file, we just need to re-implement the non max suppression algorithm with the theano API and replace the tf API with theano, nothing else? I found that there are decode_boxes, encode_box, assign_boxes, iou functions and so on. We just keep them the same, right?

Q3. in ssd_utils.py file, some vars are initialized by the tf API like below:

```python
self.boxes = tf.placeholder(dtype='float32', shape=(None, 4))
self.scores = tf.placeholder(dtype='float32', shape=(None,))
```

should we be concerned about it or not?

Q4. Regarding the convolutional weights, do we need to modify weights_SSD300.hdf5 for theano? Or just keep it the same as before? I have found that the theano arch doesn't recognize it: ValueError: Input dimension mis-match. (input[0].shape[1] = 3, input[1].shape[1] = 64)

Q5. Regarding prior_boxes_ssd300.pkl, I find that it only has a relationship with the ssd network topology. However, if I directly use it in theano, is it still ok?

Thanks a lot. Kevin.

Any progress of this issue? I'm also trying to get this code to work with Theano but have stumbled into the same issues.

@oscfri I believe we can successfully run inference of SSD based on theano if we follow the steps below (but now, I am afraid that I don't have so much time to implement it):

1. convert weights_SSD300.hdf5 from tensorflow shape to theano shape (I have successfully converted it, the code is at the end of this comment);
2. re-implement the two SSD layers from ssd_layers.py. They are the Normalize and PriorBox layers, which are used by the ssd.py models;
3. re-implement the BBox function from ssd_utils.py. The original code is written for tensorflow.

So much work to do if we want to use it on theano.

Note: code for converting the weights (I have successfully done that):

```python
import h5py
import numpy as np

dst_f = h5py.File("th_weights_SSD300.hdf5", "w")

### add attr first
grp_list = ['conv1_1', 'conv1_2', 'conv2_1', 'conv2_2', 'conv3_1', 'conv3_2', 'conv3_3', 'conv4_1', 'conv4_2', 'conv4_3', 'conv4_3_norm', 'conv4_3_norm_mbox_conf', 'conv4_3_norm_mbox_conf_flat', 'conv4_3_norm_mbox_loc', 'conv4_3_norm_mbox_loc_flat', 'conv4_3_norm_mbox_priorbox', 'conv5_1', 'conv5_2', 'conv5_3', 'conv6_1', 'conv6_2', 'conv6_2_mbox_conf', 'conv6_2_mbox_conf_flat', 'conv6_2_mbox_loc', 'conv6_2_mbox_loc_flat', 'conv6_2_mbox_priorbox', 'conv7_1', 'conv7_2', 'conv7_2_mbox_conf', 'conv7_2_mbox_conf_flat', 'conv7_2_mbox_loc', 'conv7_2_mbox_loc_flat', 'conv7_2_mbox_priorbox', 'conv8_1', 'conv8_2', 'conv8_2_mbox_conf', 'conv8_2_mbox_conf_flat', 'conv8_2_mbox_loc', 'conv8_2_mbox_loc_flat', 'conv8_2_mbox_priorbox', 'fc6', 'fc7', 'fc7_mbox_conf', 'fc7_mbox_conf_flat', 'fc7_mbox_loc', 'fc7_mbox_loc_flat', 'fc7_mbox_priorbox', 'input_2', 'mbox_conf', 'mbox_conf_final', 'mbox_conf_logits', 'mbox_loc', 'mbox_loc_final', 'mbox_priorbox', 'pool1', 'pool2', 'pool3', 'pool4', 'pool5', 'pool6', 'pool6_mbox_conf_flat', 'pool6_mbox_loc_flat', 'pool6_mbox_priorbox', 'pool6_reshaped', 'predictions', 'zeropadding2d_3']
dst_f.attrs['layer_names'] = grp_list

### read src weights_SSD300.hdf5
src_f = h5py.File("weights_SSD300.hdf5")

# visit each group
for grp in grp_list:
    dset = src_f[grp]
    attr_data_list = dset.attrs['weight_names']
    # create grp first
    dst_grp = dst_f.create_group(grp)
    # create grp attributes
    dst_grp.attrs['weight_names'] = attr_data_list
    for attr_data in attr_data_list:
        dataset_name = grp + '/' + attr_data
        dset1 = src_f[dataset_name]
        if 4 == dset1.ndim:
            # convert from hdf5 to numpy data format
            np_dat = np.array(dset1)
            # transpose and reshape
            idx0 = 3
            idx1 = 2
            idx2 = 0
            idx3 = 1
            np_dat = np.transpose(np_dat, (idx0, idx1, idx2, idx3)).reshape(np_dat.shape[idx0], np_dat.shape[idx1], np_dat.shape[idx2], np_dat.shape[idx3])
            # write it back to dst_f hdf5
            dset2 = dst_f.create_dataset(dataset_name, data=np_dat)
            # print np_dat
        else:
            np_dat = np.array(dset1)
            dset2 = dst_f.create_dataset(dataset_name, data=np_dat)
```

Thanks! I will see if I can manage to implement step 2 and 3.
@rykov8 I am unable to load the model with 'th' image_dim_ordering and 'tensorflow' backend:

```
Using TensorFlow backend.
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcudnn.so locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcurand.so locally
Traceback (most recent call last):
  File "./test.py", line 30, in <module>
    model = SSD300(input_shape, num_classes=NUM_CLASSES)
  File "/home/dummy/ssd_keras/ssd.py", line 77, in SSD300
    net['conv4_3_norm'] = Normalize(20, name='conv4_3_norm')(net['conv4_3'])
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/topology.py", line 572, in __call__
    self.add_inbound_node(inbound_layers, node_indices, tensor_indices)
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/topology.py", line 635, in add_inbound_node
    Node.create_node(self, inbound_layers, node_indices, tensor_indices)
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/topology.py", line 166, in create_node
    output_tensors = to_list(outbound_layer.call(input_tensors[0], mask=input_masks[0]))
  File "/home/dummy/ssd_keras/ssd_layers.py", line 43, in call
    output *= self.gamma
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/math_ops.py", line 814, in binary_op_wrapper
    return func(x, y, name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/math_ops.py", line 987, in _mul_dispatch
    return gen_math_ops.mul(x, y, name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_math_ops.py", line 1613, in mul
    result = _op_def_lib.apply_op("Mul", x=x, y=y, name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 759, in apply_op
    op_def=op_def)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2242, in create_op
    set_shapes_for_outputs(ret)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1617, in set_shapes_for_outputs
    shapes = shape_func(op)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1568, in call_with_requiring
    return call_cpp_shape_fn(op, require_shape_fn=True)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/common_shapes.py", line 610, in call_cpp_shape_fn
    debug_python_shape_fn, require_shape_fn)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/common_shapes.py", line 675, in _call_cpp_shape_fn_impl
    raise ValueError(err.message)
ValueError: Dimensions must be equal, but are 38 and 512 for 'mul' (op: 'Mul') with input shapes: [?,512,38,38], [512].
```

Is there any way to use the model with 'th' image_dim_ordering?
2025-04-01T04:35:25.388611
2023-07-02T05:33:52
1784463466
{ "authors": [ "ryojp" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10533", "repo": "ryojp/todo", "url": "https://github.com/ryojp/todo/pull/48" }
gharchive/pull-request
actions: use buildx for caching

I found a Zenn post that says the build of docker-compose was indeed accelerated by using the buildx cache. This PR intends to verify it, as this solves #19.

It took 5:12 to finish the job this time: https://github.com/ryojp/todo/actions/runs/5434957534

The second time took 3:56: https://github.com/ryojp/todo/actions/runs/5434957534/attempts/2

The step breakdown shows that the Build the [api] image with cache and Build the [frontend] image with cache steps were accelerated by using the cache. However, the main step Build and run the tests was not faster. I'm suspecting this is because docker-compose was used instead of docker compose, which is a Docker plugin. I'll verify this in the next commit.

Now, we have two separate tests, i.e., api and frontend. Let's divide them into two jobs that run in parallel. It took 3:13 for both tests to finish: https://github.com/ryojp/todo/actions/runs/5435029707

If you look at the job carefully, you may find the Start frontend step takes more than 1 minute even though the image should have been built in the previous step. I'm guessing this is because buildx's result was not used by docker-compose up --build in this step. So, let's remove --build from this. After all, we should have built the image in the previous step.

Two things:

1. Removing --build did not work as expected. The build did run, and that was why the Start frontend step took as long as before.
2. Adding the --load option to the previous buildx build did work, and the Start frontend step took only 14 seconds this time.

Final result was 2:18 with caches used: https://github.com/ryojp/todo/actions/runs/5435169049

More than 2x latency reduction from 4:48 before this PR: https://github.com/ryojp/todo/actions/runs/5434319170
2025-04-01T04:35:25.452169
2021-05-11T20:08:01
888466390
{ "authors": [ "s-r-x", "weilinzung" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10534", "repo": "s-r-x/bull-monitor", "url": "https://github.com/s-r-x/bull-monitor/issues/2" }
gharchive/issue
Retry multiple failed jobs feature Hi, Can this have a retry multiple failed jobs feature? I think the table should be easy to migrate with checkbox: https://material-ui.com/components/tables/#data-table Thanks! Hi, Can this have a retry multiple failed jobs feature? I think the table should be easy to migrate with checkbox: https://material-ui.com/components/tables/#data-table Thanks! hello. thanks for the issue. i'll check that out @s-r-x Also, maybe another request, would be nice to be able to download multiple jobs as JSON file. Hi, Can this have a retry multiple failed jobs feature? I think the table should be easy to migrate with checkbox: https://material-ui.com/components/tables/#data-table Thanks! added in 0.14.0! for this json thing please open up a separate issue if you're still interested in it
2025-04-01T04:35:25.454261
2021-03-22T18:18:45
837990629
{ "authors": [ "s0ftik3", "vitallii-t" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10535", "repo": "s0ftik3/reverso-api", "url": "https://github.com/s0ftik3/reverso-api/issues/6" }
gharchive/issue
Requests limit

Hi, does Reverso have some request limits?

I don't know of any limitations. Actually, it's the same as just visiting the website. If they have any limitations for plain visitors, you will encounter them.
2025-04-01T04:35:25.493415
2018-03-04T22:15:42
302128001
{ "authors": [ "AlexIljin", "WinningAddicted", "ajdani", "harshavardhana" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10537", "repo": "s3git/s3git", "url": "https://github.com/s3git/s3git/issues/23" }
gharchive/issue
Please add s3git.exe for release v0.9.2

On the Releases page there is no Windows executable for the latest version 0.9.2. Could this issue be solved? It would be great if you provided the newest version as a Windows executable.

GOOS=windows go build should give you that @AlexIljin @ajdani

I don't have Go installed, nor am I interested in installing it. I was just pointing out that the Windows executable was supplied for the previous releases, but is missing from the latest one. But thanks for the reply! I had forgotten all about this project since March, when I created this issue. Come to think of it, I don't even have a Linux box to follow your advice, and I'm pretty sure that pasting GOOS=windows ... into cmd.exe would result in an error due to invalid syntax, even on a machine with the Go compiler installed.

GOOS=windows go build should give you that @AlexIljin @ajdani

Thanks for your hint. I tried it under an Ubuntu system and cannot compile it; I cannot compile it under my Windows system either. I don't know what I'm doing wrong, maybe some misconfiguration with cgo or something else.
2025-04-01T04:35:25.514725
2021-01-08T07:22:18
781910582
{ "authors": [ "harshad71", "ritvikmahajan17", "shahednasser" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10538", "repo": "sButtons/sbuttons", "url": "https://github.com/sButtons/sbuttons/issues/1224" }
gharchive/issue
[BUTTON IDEA]: button shadow of familiar color of the button go into disco mode while hover.

The button's shadow, in a color similar to the button, goes into disco mode on hover.
2025-04-01T04:35:25.586818
2015-03-13T12:12:55
61050751
{ "authors": [ "jjeroennl", "sabastiaan" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10539", "repo": "sabastiaan/EntityBasedKitty", "url": "https://github.com/sabastiaan/EntityBasedKitty/issues/1" }
gharchive/issue
Geen issues (no issues)

:-1: U no plan? fucke you

No no thenk joe :-1: You should report bugs here :(
2025-04-01T04:35:25.654201
2024-02-02T12:31:15
2114826340
{ "authors": [ "ElvisKrop", "Uxio0" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10540", "repo": "safe-global/safe-singleton-factory", "url": "https://github.com/safe-global/safe-singleton-factory/issues/359" }
gharchive/issue
[New chain]: Berachain Artio

Chain Name: Berachain Artio
RPC URL: https://artio.rpc.berachain.com

The chain must be added to https://chainlist.org/
- [X] I confirm that the chain is added to chainlist

After creating the issue, a bot will estimate the required pre-fund and post it in the comments. Please check this checkbox after you send the pre-fund.
- [x] I sent the pre-fund in accordance with Github Action's comment

Relevant information: https://chainlist.org/chain/80085

0x20cefa870d837dc8828896a2a972ef663abf7744e039b05a41c87981c3bffa75
2025-04-01T04:35:25.655835
2023-07-17T14:44:29
1807948469
{ "authors": [ "Uxio0", "ersanyakit" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10541", "repo": "safe-global/safe-singleton-factory", "url": "https://github.com/safe-global/safe-singleton-factory/pull/196" }
gharchive/pull-request
Add missing deployments Closes #195 Closes #101 Closes #72 Closes #66 Closes #60 Closes #59 @Uxio0 @mmv08 thank you very much
2025-04-01T04:35:25.678785
2018-12-09T00:28:11
388973110
{ "authors": [ "nitaybz", "sagilo" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10542", "repo": "sagilo/pyswitcherv2", "url": "https://github.com/sagilo/pyswitcherv2/issues/14" }
gharchive/issue
No matter what I tried, I can't get the info from the pcap file Hey, I'm really enthusiastic about connecting the Switcher v2 to Homebridge thanks to your plugin... but I can't seem to get the credentials I need... please help... I only get to this point:

2018-12-09 00:21:49 | mode: parse_pcap_file
Loading and parsing pcap file: file.pcap
[+] attempting to load file.pcap
[+] found valid header
[+] loaded 681 packets
[+] finished loading savefile.
Not control command, continuing to next packet, command: 25186
Not control command, continuing to next packet, command: 25186
Not control command, continuing to next packet, command: 25186
Not control command, continuing to next packet, command: 25186
Not control command, continuing to next packet, command: 25186
Not control command, continuing to next packet, command: 14384
Not control command, continuing to next packet, command: 25186
Not control command, continuing to next packet, command: 25186
Not control command, continuing to next packet, command: 14384
Not control command, continuing to next packet, command: 25186
Not control command, continuing to next packet, command: 14384
Not control command, continuing to next packet, command: 12341
Not control command, continuing to next packet, command: 13667
Not control command, continuing to next packet, command: 13667
Not control command, continuing to next packet, command: 12341
Not control command, continuing to next packet, command: 12341
Not control command, continuing to next packet, command: 13369
Not control command, continuing to next packet, command: 13369
Not control command, continuing to next packet, command: 12341
Not control command, continuing to next packet, command: 12341
...
ERROR: ERROR: Didn't find relevant ids in pcap file

I tried many times to record the packets via tPacketCapture like you suggested... I even tried to sniff the iPhone via Wireshark... what else can I do to get those? Hi. Did you enable local usage and make sure the target IP was local? I did... I can even see some of the packets that are pointed at the device... I can send you some of the packets if you want to review them. Please do, send them to <EMAIL_ADDRESS>. Thanks. Did you get my email? Any luck with the packets? Got it. I'll update as soon as I have any news. Thanks! I've pushed 3eda02f1ed4d085b3ef47ab6b1b7c6646c33f078 which should fix the issue. The pcap file you've sent me is corrupted. Please try capturing using an Android device and parse with the latest version. Feel free to reopen if the issue persists. Thanks! It works!
2025-04-01T04:35:25.680988
2017-01-05T19:22:43
199037173
{ "authors": [ "c-cube", "sagotch" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10543", "repo": "sagotch/ISO8601.ml", "url": "https://github.com/sagotch/ISO8601.ml/issues/7" }
gharchive/issue
printing durations? I found nothing about durations (https://en.wikipedia.org/wiki/ISO_8601#Durations). If you want, I just wrote a function for "human readable durations", which would also be nice to have. It is indeed a part of the RFC that is not implemented, since I focused on what I needed and never took the time to complete it (yet). Any contribution is very welcome (as long as it sticks to the spirit of what is already implemented and respects the RFC, obviously), so feel free to add this function if you know what to do 👍 Can you assign me? ^^ I will see if I have the time, not that I promise anything
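A minimal sketch of what an ISO 8601 duration printer could look like in OCaml, assuming the duration arrives as a number of seconds (a float, as elsewhere in this library); the function name is hypothetical, not part of the API:

```ocaml
(* Hypothetical printer for an ISO 8601 duration given in seconds. *)
let string_of_duration secs =
  let t = int_of_float secs in
  let days = t / 86400
  and hours = t mod 86400 / 3600
  and minutes = t mod 3600 / 60
  and seconds = t mod 60 in
  Printf.sprintf "P%dDT%dH%dM%dS" days hours minutes seconds
```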
2025-04-01T04:35:25.682315
2022-10-02T11:31:54
1393733515
{ "authors": [ "YasharF", "friendofdog" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10544", "repo": "sahat/hackathon-starter", "url": "https://github.com/sahat/hackathon-starter/pull/1202" }
gharchive/pull-request
fix: correct linting errors Corrected linting in test/app.js. Moving the require to top-level module scope did not break tests or seem to have adverse effects on the app running locally. Also corrected some linting errors in other files. Closes #1199 We need to rewrite a good amount of the code to address the vulnerable, deprecated, and out-of-date dependencies. We will address the linting issues after the changes are made to the new code.
2025-04-01T04:35:25.690129
2022-03-25T14:24:10
1180857694
{ "authors": [ "a3de25", "sailro" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10545", "repo": "sailro/EscapeFromTarkov-Trainer", "url": "https://github.com/sailro/EscapeFromTarkov-Trainer/issues/158" }
gharchive/issue
How can I enable those features? I know that when I open Alt+Right I get this menu. How can I get those advanced features like wallhack thickness, color, maximum distance, show box, etc.? Plus, I need to know how I can change things like player speed or aimbot speed. P.S. Sorry for my bad English. I only have something like this. That's because you are using an obsolete version of the trainer. I do not even know how that's possible. Only use this for installing: https://github.com/sailro/EscapeFromTarkov-Trainer/releases/tag/installer-1.9 and do not download unknown stuff elsewhere. I actually downloaded this obsolete version from here. I just downloaded from the link you gave me and it's still the same menu. That's because you are not running the proper EFT... Ah, another idea: you have a super old EFT, so the installer is using an old branch. Show us your installer window, please. I use a cracked version of the game. That's bad; we do not support piracy. Closing this thread.
2025-04-01T04:35:25.694301
2022-10-07T05:46:20
1400652131
{ "authors": [ "hi-anusha", "sainik-khaddar" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10546", "repo": "sainik-khaddar/CodeHUB", "url": "https://github.com/sainik-khaddar/CodeHUB/issues/139" }
gharchive/issue
P.R. was submitted to a repository that has been excluded from Hacktoberfest as it does not follow the values of the event. Hello, I am getting a message on the Hacktoberfest website saying my P.R. was submitted to a repository that has been excluded from Hacktoberfest as it does not follow the values of the event, and that it will not count towards participation. So is this repository not a part of Hacktoberfest? I am attaching the screenshot for your reference. Anyone else facing this issue? Maybe it is due to spam PRs. "Maybe it is due to spam PRs" Does that mean this repository is excluded from Hacktoberfest? So our P.R.s won't contribute towards Hacktoberfest?
2025-04-01T04:35:25.699915
2024-06-20T12:46:36
2364357099
{ "authors": [ "antoineco", "dmbfm" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10547", "repo": "sainnhe/everforest", "url": "https://github.com/sainnhe/everforest/pull/138" }
gharchive/pull-request
Link DiagnosticUnnecessary to TSComment Description As per the neovim documentation, the DiagnosticUnnecessary highlight group corresponds to "unnecessary or unused code." So I think it makes sense to render such code as comments (i.e., dimmed) rather than the default warning with yellow underlines. This is similar to what VS Code does, I believe, and creates much less visual pollution. Screenshots Before: After: Good point. I hadn't thought about the use case demonstrated in your screenshot, where the code may be unnecessary on one platform but not on another. I think this is reasonable. My suggestion would be to remove the declaration altogether, since DiagnosticUnnecessary is linked to Comment by default. I was testing this change and ran into this: In this case I find it a little weird that the warning is rendered as a comment. It seems DiagnosticUnnecessary is applied to both inactive code and unused variables, and I'm not 100% sure they should be rendered the same way. I'm starting to think maybe this PR is not the correct way to handle this after all, and to dim only the inactive code we need to use some plugin like https://github.com/zbirenbaum/neodim to match the warning with a regular expression, since DiagnosticUnnecessary is too broad for that. One thing I thought of: is there a way we can render these sections greyed/dimmed but not in italics? This is how VS Code renders it (using everforest for VS Code): It would be cool if we could just dim it, but I don't think that can be done without plugins. Yes, absolutely, it can be achieved by creating (instead of linking) a highlight group similar to Comment, but with italic disabled. I like the idea. Here is an example of how you could achieve this:

call everforest#highlight('DiagnosticUnnecessary', s:palette.grey1, s:palette.none)
2025-04-01T04:35:25.702471
2023-10-26T16:56:45
1963982773
{ "authors": [ "Elijah699", "medha-ab" ], "license": "CC0-1.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10548", "repo": "saintmalik/awesome-oss-docs", "url": "https://github.com/saintmalik/awesome-oss-docs/pull/175" }
gharchive/pull-request
Added hashicorp docs Summary Added HashiCorp documentation entries in the specified format and integrated these entries into the project's index Did you read the CONTRIBUTING.md? yes #154 @medha-ab good job 👏. You took your time to add new Hashicorp Docs
2025-04-01T04:35:25.704841
2020-11-24T11:57:42
749643799
{ "authors": [ "coveralls", "ctrlcctrlv", "dthadi3" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10549", "repo": "saitoha/libsixel", "url": "https://github.com/saitoha/libsixel/pull/140" }
gharchive/pull-request
Travis-ci: added support for ppc64le Added Power (ppc64le) support to the travis.yml file. This is part of the Ubuntu distribution for ppc64le. This helps us simplify testing later when distributions are re-building and re-releasing. Coverage remained the same at 0.0% when pulling 319385cd1825ef0fba99e36a174b4ac99b4296c8 on dthadi3:ppc64le into 6a5be8b72d84037b83a5ea838e17bcf372ab1d5f on saitoha:master. @dthadi3 This has been merged in the fork (libsixel#4). I'd encourage you to start considering the fork upstream for Ubuntu. It's actually receiving security and other patches. (See #154.) I don't know though if I'll sign up for Travis CI. Is this not doable with GitHub Actions? I thought Travis put up a bunch of limitations recently on FOSS developers?
2025-04-01T04:35:25.712259
2017-09-22T05:46:39
259709427
{ "authors": [ "Ujjaval91", "mish15" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10550", "repo": "sajari/simple-linkedin-php", "url": "https://github.com/sajari/simple-linkedin-php/issues/3" }
gharchive/issue
How to post on my LinkedIn account. Hello, I need to post an article/post using this library. I have installed it via Composer. Please suggest how to use/call the function to share a post. Also, I have tried with almost the same example, where I got an error like "Exception class not found". I checked your code and found that you have extended the "Exception" class (class LinkedInException extends Exception {}), but I cannot find that class in your library. Kindly assist. @Ujjaval91 would love to help, but our API access got cut off; we don't know why, as no one would respond to us. Feel free to modify this code as you wish, it's getting pretty old now. OK, no problem. I just want to know where I can get the "Exception" class that the child class extends. Please suggest.
2025-04-01T04:35:25.713970
2022-06-10T08:43:26
1267246733
{ "authors": [ "mpellicer" ], "license": "ECL-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10551", "repo": "sakaiproject/sakai", "url": "https://github.com/sakaiproject/sakai/pull/10640" }
gharchive/pull-request
SAK-44984 Group Member Selection: Keep the selector open to save many clicks. This seems smart to me... it will now be more similar to the old-style selector. Thanks! You're welcome, this saves many clicks.
2025-04-01T04:35:25.716337
2020-02-06T23:04:26
561322213
{ "authors": [ "ern", "ottenhoff", "steveswinsburg" ], "license": "ECL-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10552", "repo": "sakaiproject/sakai", "url": "https://github.com/sakaiproject/sakai/pull/7855" }
gharchive/pull-request
Move grades-rest from gb1 to gradebookng @ern where should this relocated REST endpoint live? Here is an overview of what got changed by this pull request:

Issues
======
- Added 30

Complexity increasing per file
==============================
- gradebookng/grades-rest/src/main/java/org/sakaiproject/gradebook/entity/Student.java  1
- gradebookng/grades-rest/src/main/java/org/sakaiproject/gradebook/entity/Category.java  1
- gradebookng/grades-rest/src/main/java/org/sakaiproject/gradebook/entity/GradesEntityProvider.java  24
- gradebookng/grades-rest/src/main/java/org/sakaiproject/gradebook/entity/User.java  10
- gradebookng/grades-rest/src/main/java/org/sakaiproject/gradebook/logic/ExternalLogic.java  60
- gradebookng/grades-rest/src/main/java/org/sakaiproject/gradebook/entity/GradebookItemScore.java  15
- gradebookng/grades-rest/src/main/java/org/sakaiproject/gradebook/entity/SparseGradebookItem.java  4
- gradebookng/grades-rest/src/main/java/org/sakaiproject/gradebook/entity/Course.java  7
- gradebookng/grades-rest/src/main/java/org/sakaiproject/gradebook/entity/GradebookItem.java  10
- gradebookng/grades-rest/src/main/java/org/sakaiproject/gradebook/entity/Gradebook.java  7

See the complete overview on Codacy

A few things: these classes could be Lombok'd, reducing their size dramatically, and Lombok provides safer implementations of equals and hashCode. Instead of creating a duplicate gradebook endpoint, we should probably add these to the existing one to reduce confusion.
2025-04-01T04:35:25.750669
2022-09-01T11:15:15
1358677736
{ "authors": [ "IKarbowiak", "fowczarek", "korycins" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10553", "repo": "saleor/saleor", "url": "https://github.com/saleor/saleor/pull/10531" }
gharchive/pull-request
Attribute refactoring The goal is to simplify the attribute models. The current attribute model relations are complex and really hard to understand. This causes a high entry threshold for this part of the code and makes it hard to maintain. Right now we have attributes available for the Product, ProductVariant, and Page models. For each of these models, we have additional models to keep the relation to the attributes. For each model the structure is the same, so the problem and potential solution will be shown only on the Product model, but it is applicable to all of them. The chart below shows what the current solution looks like:

```mermaid
classDiagram
    Attribute "1" -- "0..*" AttributeValue
    Attribute "1" -- "0..*" AttributeProduct
    AttributeProduct "0..*" -- "1" ProductType
    AssignedProductAttribute "0..*" -- AttributeProduct
    AssignedProductAttribute "0..*" -- "1" Product
    AssignedProductAttribute "0..*" -- "0..*" AssignedProductAttributeValue
    AssignedProductAttributeValue "0..*" -- "1" AttributeValue
    Product "0..*" -- "1" ProductType
    class AttributeProduct{ sort_order }
    class AssignedProductAttributeValue{ sort_order }
    class AttributeValue{ sort_order }
```

The proposal is to simplify it to:

```mermaid
classDiagram
    Attribute "1" -- "0..*" AttributeValue
    Attribute "1" -- "0..*" AttributeProduct
    AttributeProduct "0..*" -- "1" ProductType
    AttributeValue "1" -- "0..*" AssignedProductAttributeValue
    AssignedProductAttributeValue "0..*" -- "1" Product
    Product "0..*" -- "1" ProductType
    Product "0..*" -- "0..*" Attribute
    class AttributeProduct{ sort_order }
    class AssignedProductAttributeValue{ sort_order }
    class AttributeValue{ sort_order }
```

Database changes
To apply the proposed changes, the following steps need to be performed:
1. Add a Product - Attribute relation.
2. Go through all AttributeProduct and, based on the attribute and assigned_products fields, create relations between attributes and products.
3. Add a product field to the AssignedProductAttributeValue model.
4. Fulfill the AssignedProductAttributeValue.product fields based on the AssignedProductAttributeValue.assignment.product field values.
5. Drop the AttributeProduct.assigned_products field.
6. Drop the AssignedProductAttribute model.

The same applies for the ProductVariant and Page relations.

Dataloaders changes
- ProductAttributesByProductTypeIdLoader - remains unchanged

To remove:
- AttributeProductsByProductTypeIdLoader
- AssignedProductAttributesByProductIdLoader
- AttributeValuesByAssignedProductAttributeIdLoader

To add:
- AttributeValuesByProductIdLoader

To change:
- SelectedAttributesByProductIdLoader - will be significantly simplified, as it will be a direct relation between Product and Attribute.
  - Current solution: we need to go through AssignedProductAttribute --> ProductAttribute, AssignedProductAttributeValue --> Attribute, AttributeValue
  - New solution: Product --> Attribute, AttributeValuesByProductIdLoader --> Attribute, AttributeValue

⚠️ The same reduction applies for ProductVariant and Page.
⚠️ As a result we will end up with 3 data loaders for each model instead of 5, with much simpler logic.
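A minimal sketch of what the simplified through-model could look like in Django's ORM (which Saleor uses); the related_name values here are illustrative assumptions, not the final code:

```python
# Hypothetical sketch of the proposed direct relation; names are assumptions.
from django.db import models

class AssignedProductAttributeValue(models.Model):
    # Direct links replace the old AssignedProductAttribute hop.
    value = models.ForeignKey(
        "AttributeValue", on_delete=models.CASCADE, related_name="+"
    )
    product = models.ForeignKey(
        "Product", on_delete=models.CASCADE, related_name="attributevalues"
    )
    sort_order = models.IntegerField(null=True, blank=True)
```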
What we'll gain 🎉
- Simplification of the model structure
- Easier code to maintain
- Significant reduction of the entry threshold for attributes
- Reduction of the number of data loaders (2 fewer per model) and simplification of the data loader code - easier to update and to find bugs

Impact
[ ] New migrations
[ ] New/Updated API fields or mutations
[ ] Deprecated API fields or mutations
[ ] Removed API types, fields, or mutations
[ ] Documentation needs to be updated

Pull Request Checklist
[ ] Privileged queries and mutations are guarded by proper permission checks
[ ] Database queries are optimized and the number of queries is constant
[ ] Database migration files are up to date
[ ] The changes are tested
[ ] GraphQL schema and type definitions are up to date
[ ] Changes are mentioned in the changelog

LGTM :tada: @IKarbowiak great work! Looks great. I am super happy that we are going to simplify the attributes flow!
2025-04-01T04:35:25.766767
2024-06-21T11:19:47
2366327684
{ "authors": [ "TomSesselmann", "gtabboud" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10554", "repo": "salesforce-experiencecloud/notificationbell", "url": "https://github.com/salesforce-experiencecloud/notificationbell/pull/3" }
gharchive/pull-request
Paragraph font size fix The DXP styling hook name is incorrect for paragraph font sizes. Fixed and included in a new commit: https://github.com/salesforce-experiencecloud/notificationbell/commit/1355c3386aa8778bbceb8a82c6aa7d430b848d65 @TomSesselmann Thank you for catching these typos / copy-paste errors! You're always welcome to code-review my other apps 😉 Could always use another pair of eyes! Thanks again!
2025-04-01T04:35:25.767846
2019-03-13T14:49:31
420542564
{ "authors": [ "interactivellama", "sharaththegeek" ], "license": "bsd-3-clause", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10555", "repo": "salesforce/design-system-react", "url": "https://github.com/salesforce/design-system-react/pull/1841" }
gharchive/pull-request
Corrected SLDS Close Alert Button placement Corrected the placement of the "Toggle Alert" button in SLDSAlert -> Close Alert so that it is not overlapped by the alert when it appears. Corrects issue #1840 @sharaththegeek Travis CI is saying there are lint issues.
2025-04-01T04:35:25.776802
2024-12-19T20:09:01
2751315341
{ "authors": [ "jaube-litify", "sanalpanicker" ], "license": "bsd-3-clause", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10556", "repo": "salesforce/design-system-react", "url": "https://github.com/salesforce/design-system-react/pull/3168" }
gharchive/pull-request
Add nanoid, remove deprecated shortId Fixes # Additional description Adding nanoid and removing deprecated shortid CONTRIBUTOR checklist (do not remove) Please complete for every pull request [ ] First-time contributors should sign the Contributor License Agreement. If you haven't before, wait a few minutes and a bot will comment on this pull request with instructions. [x] npm run lint:fix has been run and linting passes. [x] Mocha, Jest (Storyshots), and components/component-docs.json CI tests pass (npm test). [ ] Tests have been added for new props to prevent regressions in the future. See readme. [ ] Review the appropriate Storybook stories. Open http://localhost:9001/. [x] Review tests are passing in the browser. Open http://localhost:8001/. [x] Review markup conforms to SLDS by looking at DOM snapshot strings. REVIEWER checklist (do not remove) [ ] CircleCI tests pass. This includes linting, Mocha, Jest, Storyshots, and components/component-docs.json tests. [ ] Tests have been added for new props to prevent regressions in the future. See readme. [ ] Review the appropriate Storybook stories. Open http://localhost:9001/. [ ] The Accessibility panel of each Storybook story has 0 violations (aXe). Open http://localhost:9001/. [ ] Review tests are passing in the browser. Open http://localhost:8001/. [ ] Review markup conforms to SLDS by looking at DOM snapshot strings. Required only if there are markup / UX changes [ ] Add year-first date and commit SHA to last-slds-markup-review in package.json and push. [ ] Request a review of the deployed Heroku app by the Salesforce UX Accessibility Team. [ ] Add year-first review date, and commit SHA, last-accessibility-review, to package.json and push. [ ] While the contributor's branch is checked out, run npm run local-update within locally cloned site repo to confirm the site will function correctly at the next release. @interactivellama it looks like last time you were able to override the CLA app since @sanalpanicker is a current salesforce employee: https://github.com/salesforce/design-system-react/pull/3155#issuecomment-2332414014. Is that something you can do here as well?
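For context, the swap this PR describes is a one-line change at each call site. A minimal sketch, assuming the standard nanoid API (the variable name is illustrative):

```typescript
// Before (deprecated): import shortid from 'shortid'; shortid.generate();
import { nanoid } from 'nanoid';

const generatedId: string = nanoid(); // e.g. "V1StGXR8_Z5jdHi6B-myT"
```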
2025-04-01T04:35:25.824667
2022-02-06T20:41:22
1125311666
{ "authors": [ "alelordelo" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10558", "repo": "salgum1114/react-design-editor", "url": "https://github.com/salgum1114/react-design-editor/issues/230" }
gharchive/issue
Custom HTML5 code editor not working What I did Copied the HTML5/JS/CSS from the example below: https://codepen.io/p5js/pen/wreBKy Result The HTML5 layer shows as empty: https://gyazo.com/c151ae383b86d80b94935ce727f0cb21 Is there any example of what needs to be done to load HTML5/JS/CSS? @salgum1114, here is another example of HTML5/JS: https://drive.google.com/file/d/1hVA0M5NMV6BnY0WLSPPMmOUEr7SGErd1/view?usp=sharing The HTML opens fine when clicked: https://gyazo.com/9c4effaa1963fe4c10dbdd5418218171 Then I copied the HTML5/JS to the editor, but it doesn't show: https://gyazo.com/201cad1a312f36dc5c035c35a9328d0e Am I doing anything wrong?
2025-04-01T04:35:25.845802
2017-06-06T23:00:51
234049408
{ "authors": [ "ceciliacsilva", "salman-bhai" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10559", "repo": "salman-bhai/WhatsApp-Web", "url": "https://github.com/salman-bhai/WhatsApp-Web/issues/9" }
gharchive/issue
[branch bhai] source activate I'm trying to run the branch bhai. The command "source activate wa-web" isn't working ("bash: activate: File or directory not found"). If I ignore this error, the program wa.py runs, opens the browser, and I instantiate a session on the mobile, but commands do not work right. For example: #gc -- getChats = "Failed to get messages", probably because there is no directory named chat. Is this normal? What can I do? Sorry for my bad English. Hey, are you running the command inside the folder? Also, make sure you've installed virtualenv. Run the command pip install virtualenv. Also, did you log in via the browser? Hi, I'm using it now; the environment seems to work. I'm authenticating through the browser. The main program still doesn't respond the way I expect. Do the commands work there? If you type "Bot start", are the messages answered by the XML? Hey, so the bot functionality doesn't work! I'll probably get around to it one day! I still need to write that AIML code, but other features, like sending a message, do work! Thanks! =) Hey, pull the latest code; I've created the folder as well! You can send a Pull Request for the Get Chats feature.
2025-04-01T04:35:25.854266
2018-11-30T20:35:50
386340932
{ "authors": [ "Shinigami92" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10560", "repo": "salsita/node-pg-migrate", "url": "https://github.com/salsita/node-pg-migrate/issues/365" }
gharchive/issue
Make PgType a runtime-accessible object You already have the definition for all the types in your index.d.ts enum PgType. But if I use this, I get, for example: TypeError: Cannot read property 'UUID' of undefined. Can you provide an object PgType and export it, so we can use the following code:

```typescript
import { ColumnDefinitions, MigrationBuilder, PgLiteral, PgType } from 'node-pg-migrate';

export const shorthands: ColumnDefinitions | undefined = {
  id: {
    type: PgType.UUID,
    //    ~~~~~~~~~~~
    primaryKey: true,
    default: new PgLiteral('uuid_generate_v4()')
  }
};

export function up(pgm: MigrationBuilder): void {
  pgm.createTable('entity', { id: { type: 'id' } });
}

export function down(pgm: MigrationBuilder): void {
  pgm.dropTable('entity');
}
```

Thank you for all your efforts lately for me. You have helped me with your project in several of my projects, both work and hobby projects. I am happy to continue using your work.
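What is being requested is essentially a value export mirroring the type declaration. A hedged sketch of what that could look like (this is the requested shape, not the library's actual export; the members shown are examples):

```typescript
// Hypothetical runtime counterpart of the PgType enum from index.d.ts.
export const PgType = {
  UUID: 'uuid',
  TEXT: 'text',
  INTEGER: 'integer',
  // ... remaining PostgreSQL type names
} as const;

// Declaration merging keeps PgType usable as both a value and a type.
export type PgType = (typeof PgType)[keyof typeof PgType];
```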
2025-04-01T04:35:25.892299
2016-03-23T18:10:28
143038143
{ "authors": [ "dmurphy18" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10561", "repo": "saltstack/salt-pack", "url": "https://github.com/saltstack/salt-pack/issues/56" }
gharchive/issue
Fix missing KillMode=process on Debian 8 Fix missing KillMode=process on Debian 8, 2015.5.5 - 2015.5.8, and 2015.8.0 - 2015.8.3. The issue was due to picking up Joe Healy's Debian 2015.5.3 code for Salt, which has this error. It was corrected in 2015.5.9 and 2015.8.4. From the email thread: The problem is with the KillMode in the service unit on 2015.8.3. In 2015.8.3, the Debian source does have a service unit which has KillMode=process, but that is not the service unit that is in the package. Because of this, the unit reverts to the default, which is control-group. What KillMode=control-group does: when you stop or restart the service, everything in the cgroup gets a SIGTERM, causing all the processes to stop. If you use process, only the parent process gets a SIGTERM, allowing the rest of the processes in the cgroup to finish what they are doing and exit gracefully. This is why 2015.8.5 -> 2015.8.7 works: it has KillMode=process when it stops the service before upgrading. The only way that I can think of to fix this, if Debian packaging works the way I think it does, is to have a package that is a dependency of salt-minion and drops a drop-in file to /lib/systemd/system/salt-minion.d/killmode.conf that only contains

[Service]
KillMode=process

and then does a daemon reload. This way, when the install goes to install the new salt-minion, we make sure that the running process has KillMode set to process instead of control-group.

root@deb:~# systemctl show -p KillMode salt-minion
KillMode=process

Then the process running the pkg.install on the salt-minion will be able to finish running, and not be killed off while running through dpkg. -- Daniel Wallace @gtmanfred Not going to do this; will issue a warning in the release notes for 2015.5.10 and 2015.8.8.
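For anyone applying the workaround by hand, a hedged sketch of the drop-in approach described above. Paths follow the thread, but note the conventional drop-in directory for a unit is <unit>.service.d, so verify the path on your system:

```sh
# Create a systemd drop-in that pins KillMode for the running salt-minion.
mkdir -p /lib/systemd/system/salt-minion.service.d
cat > /lib/systemd/system/salt-minion.service.d/killmode.conf <<'EOF'
[Service]
KillMode=process
EOF
systemctl daemon-reload
systemctl show -p KillMode salt-minion   # should now print KillMode=process
```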
2025-04-01T04:35:25.974949
2024-09-09T06:39:49
2513048004
{ "authors": [ "mshahmeer02", "sambowenhughes" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10562", "repo": "sambowenhughes/a-react-video-editor", "url": "https://github.com/sambowenhughes/a-react-video-editor/issues/2" }
gharchive/issue
Access Code not working I recently purchased your video editor source code and received a verification code via email. However, I am encountering issues when trying to access the source code on your website. I've double-checked for typos and tried using different browsers, but the issue persists. Additionally, I noticed that there's no customer support link provided on your website or in the confirmation email. Could you please assist me in accessing the source code or provide an alternative method to download it? Also, it would be helpful if you could provide a dedicated support contact for future reference. This issue has now been resolved ✅
2025-04-01T04:35:26.023447
2015-09-15T09:14:42
106516203
{ "authors": [ "maqsoodumt", "sameersbn" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10563", "repo": "sameersbn/docker-squid", "url": "https://github.com/sameersbn/docker-squid/issues/7" }
gharchive/issue
ACL How to manage ACLs? @maqsoodumt you need to configure them in squid.conf. Refer to https://github.com/sameersbn/docker-squid#configuration. Thanks again for your response, and sorry for interrupting you... I am unable to save changes in squid.conf... can you please help in detail? Dear Sameer! Hope you are doing well. I am Maqsood Ahmad, from Lahore, working as Asst. Manager Networks at UMT, exploring container technology. In short, I am using your "sameersbn/squid:3.3.8" and it is working fine, but I can't mount and edit the "/etc/squid3/squid.conf" file to add/modify my ACLs and other stuff. Please help with how to do that. FYI:

root@DoCon:~# docker images
REPOSITORY        TAG     IMAGE ID       CREATED       VIRTUAL SIZE
sameersbn/squid   3.3.8   e6b98d4632e2   2 weeks ago   214.5 MB
root@DoCon:~# docker ps -a
CONTAINER ID   IMAGE                   COMMAND                CREATED          STATUS          PORTS                         NAMES
a9ed6d2c90bc   sameersbn/squid:3.3.8   "/sbin/entrypoint.sh   16 minutes ago   Up 10 minutes   <IP_ADDRESS>:3128->3128/tcp   squid
root@DoCon:~# docker -v
Docker version 1.7.0, build 0baf609
root@DoCon:~# docker info
Containers: 1
Images: 15
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 17
 Dirperm1 Supported: true
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.16.0-30-generic
Operating System: Ubuntu 14.04.2 LTS
CPUs: 1
Total Memory: 986.9 MiB
Name: DoCon
ID: GASE:O3A6:QTP7:LJPO:K66Q:VBID:QECJ:QMZM:AY7U:3Q7Q:TDKJ:2L5X
WARNING: No swap limit support

1. Download the squid.conf file:

   wget https://raw.githubusercontent.com/sameersbn/docker-squid/master/squid.conf

2. Start squid using docker:

   docker run --name squid -d --restart=always \
     --publish 3128:3128 \
     --volume $PWD/squid.conf:/etc/squid3/squid.conf \
     --volume /srv/docker/squid/cache:/var/spool/squid3 \
     sameersbn/squid:3.3.8

3. Edit squid.conf on the host as required.
4. Reload the squid configuration for the changes to take effect:

   docker kill -s HUP squid

For information on how to configure squid, please refer to the squid documentation. Thanks a lot for such a superb response... let me try it first... will respond in detail... Oh my God!!!! I did it... Thanks a lot Sameer!!!! You know, I had been working on this project for a couple of days and was really sleepless, because I was unable to modify the default settings in squid.conf. One thing more: do you suggest and recommend running squid as a proxy server at university level as a docker image? Thanks and regards, Maqsood @maqsoodumt Sure. I don't see any reason not to. Thanks for all your support and suggestions... may I know a bit about you, your expertise, interests and hobbies? I like close issues. You mean binary/digital 0/1? ... What about Indo-Pak relations? That's one issue that does not seem like it will be resolved anytime soon. You know what Vajpayee ji, ex-Indian PM, said once while visiting Pakistan: "friends can be changed but neighbours cannot be"...
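Since the original question ("How to manage ACLs?") is only answered by pointing at squid.conf, here is a minimal sketch of what an ACL block there could look like; the network range is an example, not from the thread:

```
# Allow clients from a trusted LAN, deny everyone else.
acl localnet src 192.168.1.0/24
http_access allow localnet
http_access deny all
```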
2025-04-01T04:35:26.025808
2021-01-04T22:56:49
778430849
{ "authors": [ "samhh" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10564", "repo": "samhh/tshm", "url": "https://github.com/samhh/tshm/issues/21" }
gharchive/issue
Destructured tuples e.g. uncurry2:

```typescript
export declare const uncurry2: <A, B, C>(f: (a: A) => (b: B) => C) => ([a, b]: [A, B]) => C
```

Should also check if other destructuring syntax can appear in declarations. https://github.com/samhh/tshm/commit/bfa6c86462e824b03ddbad23a218f7adca68b452
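On the follow-up question, one other form that can appear in declaration files is object destructuring in a parameter position. A hypothetical example (not from tshm's test suite):

```typescript
export declare const magnitude: ({ x, y }: { x: number; y: number }) => number
```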
2025-04-01T04:35:26.033091
2018-01-09T02:27:54
286950969
{ "authors": [ "FIISHxMAN", "ayy1337", "feder102", "nicoan777", "trueToastedCode", "yosefmahmoud" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10565", "repo": "sammchardy/python-binance", "url": "https://github.com/sammchardy/python-binance/issues/91" }
gharchive/issue
Order price rounding? Trying to make an order at 1726 sats on a market with min price 1 sat and tickSize 1 sat, and it rounds down to 1700 sats. Is this the exchange's doing or the API module's? I had an issue formatting small-price coins. I used this code in order to fix it:

```python
import math  # needed for math.floor below

def step_size_to_precision(ss):
    return ss.find('1') - 1

def format_value(val, step_size_str):
    precision = step_size_to_precision(step_size_str)
    if precision > 0:
        return "{:0.0{}f}".format(val, precision)
    return math.floor(int(val))
```

@sammchardy thanks for the code and @FIISHxMAN thanks for the mod. It seems that there was a problem when a coin's step size was 1; FIISHxMAN's mod fixed it. @ayy1337 you're most likely running into weird floating-point rounding issues. Does something like the following suit your purpose? You can pass the stepSize or tickSize values along with your computed quantity and price and have them formatted perfectly.

```python
def step_size_to_precision(ss):
    return ss.find('1') - 1

def format_value(val, step_size_str):
    precision = step_size_to_precision(step_size_str)
    return "{:0.0{}f}".format(val, precision)

step_size = "0.0010000"
quantity = 0.1231241
print(format_value(quantity, step_size))

step_size = "0.0000100"
quantity = 0.1231241
print(format_value(quantity, step_size))
```

Very good! Thank you very much! @sammchardy, thanks for your contribution here. I think this should be submitted as a PR and replace round_step_size, as it is much better than round_step_size regarding performance. We can also take advantage of Python's built-in decimal library, ensuring accurate decimal representations:

```python
from decimal import Decimal, ROUND_HALF_UP, ROUND_DOWN

def floor_to_step(quantity: Decimal, step: Decimal) -> Decimal:
    return (quantity / step).quantize(1, rounding=ROUND_DOWN) * step

def round_to_step(quantity: Decimal, step: Decimal) -> Decimal:
    return (quantity / step).quantize(1, rounding=ROUND_HALF_UP) * step

quantity = Decimal('3.281')
step = Decimal('0.05')
print(floor_to_step(quantity, step))  # 3.25
print(round_to_step(quantity, step))  # 3.30
```
2025-04-01T04:35:26.042066
2019-06-28T14:17:27
462057067
{ "authors": [ "NathanReb", "emillon" ], "license": "ISC", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10566", "repo": "samoht/dune-release", "url": "https://github.com/samoht/dune-release/issues/159" }
gharchive/issue
Prompts and notices are tricky to differentiate Hi! I'm really enjoying the output that dune-release displays - it makes it super clear what it's doing and what I have left to do (not much, it seems). However, I am having trouble making distinctions between prompts, lines where it's doing network calls, and information messages. The current output is like this:

Push tag v0.1.0 to git@github.com:mirage/hacl.git? [Y/n]
Pushing tag v0.1.0 to git@github.com:mirage/hacl.git
Create release v0.1.0 on git@github.com:mirage/hacl.git? [Y/n]
Creating release v0.1.0 on git@github.com:mirage/hacl.git via github's API
Succesfully created release with id 18294940

How about some ASCII art:

[?] Push tag v0.1.0 to git@github.com:mirage/hacl.git? [Y/n]
... Pushing tag v0.1.0 to git@github.com:mirage/hacl.git
[+] Done
[?] Create release v0.1.0 on git@github.com:mirage/hacl.git? [Y/n]
... Creating release v0.1.0 on git@github.com:mirage/hacl.git via github's API
[+] Succesfully created release with id 18294940

Or some Unicode art:

❓ Push tag v0.1.0 to git@github.com:mirage/hacl.git? [Y/n]
⌛ Pushing tag v0.1.0 to git@github.com:mirage/hacl.git
✔️ Done
❓ Create release v0.1.0 on git@github.com:mirage/hacl.git? [Y/n]
⌛ Creating release v0.1.0 on git@github.com:mirage/hacl.git via github's API
✔️ Succesfully created release with id 18294940

(colors can help, but are a bit less accessible) Let me know what you think. Thanks! Thanks, some ASCII art (or anything really) would be great indeed!
2025-04-01T04:35:26.045280
2016-11-17T21:45:47
190173355
{ "authors": [ "Nicrob64", "samshadwell" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10567", "repo": "samshadwell/TrumpScript", "url": "https://github.com/samshadwell/TrumpScript/issues/194" }
gharchive/issue
Add support for delayed returns It would be nice to have the ability to delay showing your function returns until after Clinton's 33k emails have been revealed, or until America has been made sufficiently great. See a76f078
2025-04-01T04:35:26.092993
2015-11-20T04:44:13
117964446
{ "authors": [ "Dejal", "nriley", "samuelclay" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10568", "repo": "samuelclay/NewsBlur", "url": "https://github.com/samuelclay/NewsBlur/issues/826" }
gharchive/issue
iOS: add support for key commands iOS 8 & 9 can access functions and navigate via key commands using an attached hardware keyboard, and iOS 9 can display a bezel to list those shortcuts when the Cmd key is held down. That would be very useful to support. Imagine using arrow keys to navigate around the feeds and stories, Cmd-F to go to the Find field, Cmd-N to add a new feed, and more. Yep, @nriley brought this up yesterday. Let him work on it, he's happy to contribute. Making some progress. The biggest issues I've had thus far are: (a) somehow the view controller isn't getting keyboard focus and I have to tap on the story before I can use the keyboard commands (right now I have it attached to StoryPageControl). (b) I'm not sure if I'm triggering all of the read/unread machinery correctly as it's distributed among a number of classes and is pretty hard to follow. I can either try to dig some more myself, or I can put my code up for review and see if you have any ideas about how I can rewrite it so it's clearer what's going on. I fixed the keyboard focus issue (had forgotten about the existence of UIResponder). There may be some places where the responder isn't getting set correctly now, but at least this is a start. I realize it seems the terminology is inconsistent between the iOS and Web app. iOS calls the RSS-feed-text "Story" (in both code and the UI) whereas the Web calls it "Feed"; there is no version of the Web's "Story" on iOS. Confusing but if nobody's complained, maybe nobody cares? :-) Anyway, think this is good enough for a first shot; please report bugs! Things seem pretty reliable now and I can do most of my newsreading entirely from the keyboard. (Note that I read almost exclusively from "All Stories"). I've seen some visual artifacts and multiple copies of the same story, but I don't think my code is responsible, although I still don't entirely understand the changes I needed to make to unread handling. There definitely seem to be some races where the unread indicator in the story list and in the header are inconsistent, but again… I'm now covering essentially anything I'd ever do with NewsBlur, so I'm going to stop now :-) If nothing else it makes navigating in the simulator a lot faster! I can squash my commits into a single one if you'd prefer. Now committed to the 5.1 branch. I'm not aware of any remaining issues, but please feel free to report.
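For readers unfamiliar with the API being discussed, a minimal sketch of registering key commands on a responder, assuming UIKit as of iOS 9 (shown in Swift; the selector names are hypothetical, not NewsBlur's actual ones):

```swift
import UIKit

class StoryPageControl: UIViewController {
    // Becoming first responder is what fixed the keyboard-focus issue above.
    override var canBecomeFirstResponder: Bool { return true }

    override var keyCommands: [UIKeyCommand]? {
        return [
            UIKeyCommand(input: "n", modifierFlags: .command,
                         action: #selector(addSite),
                         discoverabilityTitle: "Add Site"),
            UIKeyCommand(input: UIKeyCommand.inputDownArrow, modifierFlags: [],
                         action: #selector(selectNextStory))
        ]
    }

    @objc func addSite() { /* ... */ }
    @objc func selectNextStory() { /* ... */ }
}
```

Holding Cmd with a hardware keyboard attached is what makes iOS 9 display the bezel listing the discoverabilityTitle entries.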
2025-04-01T04:35:26.105022
2021-06-10T07:43:26
917022028
{ "authors": [ "BayerSe", "PrettyWood", "isac322", "zulrang" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10569", "repo": "samuelcolvin/pydantic", "url": "https://github.com/samuelcolvin/pydantic/issues/2897" }
gharchive/issue
cannot create weak reference when mapping sqlalchemy to pydantic Bug We are trying to map SQLAlchemy queries to pydantic models as e.g. discussed here. We're using pydantic==1.8.2 and sqlalchemy==1.4.17. This works well when we use the pydantic.dataclasses.dataclass decorator. It fails, however, when we subclass from pydantic.BaseModel. There, we get: TypeError: cannot create weak reference to 'Data2' object. I was unsure if I should open this issue here or at sqlalchemy, so if you think the other issue tracker is more suitable, please let me know and I'll approach them.

```python
import pydantic
from sqlalchemy import Column, String, create_engine, select
from sqlalchemy.orm import registry, Session

mapper_registry = registry()
Base = mapper_registry.generate_base()

class Table(Base):
    __tablename__ = "table"
    id = Column(String, primary_key=True)

@pydantic.dataclasses.dataclass
class Data1:
    id: str

class Data2(pydantic.BaseModel):
    id: str

if __name__ == "__main__":
    engine = create_engine("sqlite:///:memory:", echo=True)
    Base.metadata.create_all(engine)
    select_id = select(Table.id).subquery()
    mapper_registry.map_imperatively(Data1, select_id)
    mapper_registry.map_imperatively(Data2, select_id)
    with Session(engine) as session:
        session.add(Table(id="id"))
        q1 = session.query(Data1).all()
        q2 = session.query(Data2).all()
```

Output of python -c "import pydantic.utils; print(pydantic.utils.version_info())":

pydantic version: 1.8.2
pydantic compiled: True
install path: /home/bay2rng/.local/share/virtualenvs/p356_rtp_production_scheduling-03q6S6S0/lib/python3.9/site-packages/pydantic
python version: 3.9.4 (default, Apr 9 2021, 01:10:48) [GCC 7.5.0]
platform: Linux-4.15.0-142-generic-x86_64-with-glibc2.27
optional deps. installed: ['typing-extensions']

Hi @BayerSe Can you try with https://github.com/samuelcolvin/pydantic/discussions/2857#discussioncomment-802379? Hi @PrettyWood thanks for the quick feedback! This indeed resolved the first error! Now I'm getting: ValueError: "Data2" object has no field "_sa_instance_state" You are not using sqlalchemy + pydantic correctly. You are not supposed to use the BaseModel directly in your query.

```python
from sqlalchemy import Column, String, create_engine, select
from sqlalchemy.orm import registry, Session

import pydantic
from pydantic.dataclasses import dataclass

mapper_registry = registry()
Base = mapper_registry.generate_base()

class Table(Base):
    __tablename__ = "table"
    id = Column(String, primary_key=True)

@dataclass
class DataCls:
    id: str

class Data(pydantic.BaseModel):
    id: str

    class Config:
        orm_mode = True

engine = create_engine("sqlite:///:memory:", echo=True)
Base.metadata.create_all(engine)
select_id = select(Table.id).subquery()
mapper_registry.map_imperatively(DataCls, select_id)

with Session(engine) as session:
    session.add(Table(id="id"))
    q1 = session.query(DataCls).one()
    pydantic_q1 = Data.from_orm(q1)
```

Check the doc Thank you for the pointer! The interim step with mapper_registry.map_imperatively(DataCls, select_id) is not even necessary.
This here works perfectly:

```python
import pydantic
from sqlalchemy import Column, String, create_engine, select
from sqlalchemy.orm import registry, Session

mapper_registry = registry()
Base = mapper_registry.generate_base()

class Table(Base):
    __tablename__ = "table"
    id = Column(String, primary_key=True)

class Data(pydantic.BaseModel):
    id: str

    class Config:
        orm_mode = True

engine = create_engine("sqlite:///:memory:", echo=True)
Base.metadata.create_all(engine)
select_id = select(Table.id).subquery()

with Session(engine) as session:
    session.add(Table(id="id"))
    q1 = session.query(select_id).one()
    pydantic_q1 = Data.from_orm(q1)
```

@BayerSe @PrettyWood You don't have to use declarative mapping and from_orm. Imperative mapping is more performant than the declarative one in this case because you don't have to parse the object via from_orm. In this way, you can use all the validation functionality of pydantic.

```python
from typing import Any

import pydantic
from sqlalchemy import Column, String, Table, create_engine
from sqlalchemy.orm import Session, registry
from sqlalchemy.orm.base import DEFAULT_STATE_ATTR

mapper_registry = registry()

@pydantic.dataclasses.dataclass
class Data1:
    id: str

SQLAlchemyBaseModel = pydantic.create_model(
    'SQLAlchemyBaseModel',
    **{DEFAULT_STATE_ATTR: (Any, pydantic.PrivateAttr())},
    __slots__='__weakref__',
    __abstract__=True,
)

class Data2(SQLAlchemyBaseModel):
    id: str

_table = Table(
    'table',
    mapper_registry.metadata,
    Column('id', String, primary_key=True),
)

if __name__ == "__main__":
    mapper_registry.map_imperatively(Data1, _table)
    mapper_registry.map_imperatively(Data2, _table)
    engine = create_engine("sqlite:///:memory:", echo=True)
    mapper_registry.metadata.create_all(engine)
    with Session(engine) as session:
        session.add(Data2(id="id"))
        q1 = session.query(Data1).all()
        q2 = session.query(Data2).all()
```

Thank you, @isac322; that looks interesting as well. For now we settled on the from_orm version, but I will keep it in mind when touching this area of the code again. (Quoting isac322's imperative-mapping example above.) Your example doesn't work.
When just trying to instantiate the model, you get:

/lib/python3.9/site-packages/sqlalchemy/orm/base.py in _is_mapped_class(entity)
    357     return (
    358         insp is not None
--> 359         and not insp.is_clause_element
    360         and (insp.is_mapper or insp.is_aliased_class)
    361     )

AttributeError: 'tuple' object has no attribute 'is_clause_element'

(Quoting isac322's imperative-mapping example above.) Your example doesn't work:

<ipython-input-2-7c51403c4f17>:16: RuntimeWarning: fields may not start with an underscore, ignoring "_sa_instance_state"
  SQLAlchemyBaseModel = pydantic.create_model(
<ipython-input-2-7c51403c4f17>:16: RuntimeWarning: fields may not start with an underscore, ignoring "__slots__"
  SQLAlchemyBaseModel = pydantic.create_model(
<ipython-input-2-7c51403c4f17>:16: RuntimeWarning: fields may not start with an underscore, ignoring "__abstract__"
  SQLAlchemyBaseModel = pydantic.create_model(
2025-04-01T04:35:26.115251
2019-08-13T18:16:21
480296572
{ "authors": [ "samuelhorwitz" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10570", "repo": "samuelhorwitz/phosphorescence", "url": "https://github.com/samuelhorwitz/phosphorescence/issues/109" }
gharchive/issue
Add Black Hole Recordings playlists
Vocal Trance Black Hole Recordings: 6yYS66EEthJ3IumzWck2RR
#trancefamily Black Hole Recordings: 2BfXSgtwWVSoHwVhaiv8B1
Chill Zone Black Hole Recordings (add to chill sublist): 0H06AQrAegyWpBCPMNeEEK
In Trance We Trust Black Hole Recordings: 3KbMl9OnWlatRacjUIggMb
Beyond Black Hole Recordings (maybe add to psy sublist): 6nypwwASuMcehflKFWg1ak
Trance Classics Black Hole Recordings: 2tR4ZJDVn0yfzKfMASxNKf
ISOS Black Hole Recordings: 3jGlUfHVREAnejiwcGFpdP
Magik: The Collection Black Hole Recordings: 49Lnw1ECPaG50tlOdXxVQh
Magik Muzik Black Hole Recordings: 0Y7EUWeyD7XyzCzCM3J835
AVA Recordings Releases BHR: 5zH4xCy1FIMT6qJHb30AKE
GO Music BHR: 5uxBAEP4UJYCkVrlVdJkcJ
Damaged Records BHR: 4zYrEUz6iXL2HfZEmCwsMk
Create Music BHR: 3sUqXHaAiOnRHQjuhGV0Pz
Magic Island BHR: 7KpH2J1h9XBnxmQufTUJaN
Outburst Records BHR: 1e4uXoD44ZsGuY8294Ue0z
Pure Trance BHR: 2FM8SDrT0SiCnxgSHOmNSI
RIDE recordings BHR: 6Tbc3nulovy3JgdNs6HQDA
Touchstone recordings BHR: 1oLqtz5JNlP14Marsgw7kF
Subculture BHR: 5tKZEIOyPwv31RHezjONmC
Grotesque BHR: 1NHD0e01T6rpNR9XfREFBK
Universal Nation BHR: 6uSUUGBBJcLU8wQKwUNHXg

Let's also look into other labels/sublabels as well, and maybe take out all the unofficial trance playlists (we can see if we lose a lot of good stuff without them). Also:

Planet Love Armada: 78OCCNfXrsTQwbEANRmXT8
Gabriel & Dresden Armada: 6hmVOoZlfhIEi2o0kGXPif
Vandit Armada: 74M99JYCke9OrhwELydMt9
Subculture Armada: 7fNe00ee3Z1VoeiSlkdj1q
S107 Armada: 5a4xpPH9SrQRyB0ZAYT8H4
Coldharbour Chill Armada: 74qhFAWiX6m6mVESyfPFTv
Club Elite Armada: 2FdXvK4KRjhfMgAevzFXSG
Perfecto Records Armada: 2EXxdhxKwuRj5fdf34oXLu
Armada Captivating: 2kfuxKRMIUElCjV2APAnP0
Susana Armada: 0m5YUdPTQ2Kv8NlkOCMIw4
Aropa Armada: 2RqBNPNAjV81rWtW5m9tgv
AVA Armada: 0W8lpEbHyJReUMg4518qff
Alex MORPH: 6xjDy6fq4f5nivbLyeYxcY
FSOE Armada: 0oMexzEZus9I57WvJ5p3to
Magic Island Armada: 3d5QfTlrrRfWh2YPjyGXnR
Perfecto Fluoro Armada: 7MDfRaaO0aKTzAdD8Oxtz1

It looks like Armada Elements made a lot of playlists private, but I subscribed years ago and still have access? I like the idea of hunting down as many playlists as possible that are run by labels or artists over public playlists, so I should definitely keep digging, but this list is already pretty good. Maybe I should revisit also pulling in Spotify trance-by-genre spidering?

Armada ASOT Imprint Releases: 2wRre8dlV8eqnfkVIb7dME

Closing for now; this is an open-ended thing but there is no point in this issue sitting around.
2025-04-01T04:35:26.141824
2018-07-03T17:43:05
337995438
{ "authors": [ "linyuqing97", "samwhiting" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10571", "repo": "samwhiting/gnuradio-doa", "url": "https://github.com/samwhiting/gnuradio-doa/issues/4" }
gharchive/issue
Some questions Hello Sam: Thanks for sharing your method and programs. We are also studying Capon DoA and trying to implement it on two USRP E310 devices with two antennas on each device. This is our first time working with these devices and GNU Radio. We have been trying to play around with the GRCs you provided in the folder, especially "master.grc". Could you please explain why sometimes we are getting negative radians? If the receiver is 90 degrees in front of the transmitter then the result we are getting is within the expected range; however, if we increase or decrease the angle, the results become less accurate and somehow negative. Why is that so? Also, could you explain a little bit of how the arrow block works? We would like to have some visualization in our program later. Thanks, and we really appreciate your work. I think we were shifting the angle into the [-pi, pi] region, so negative angles are to be expected. If they don't line up with what you expect, it could easily be a multipath problem or perhaps a problem with phase coherency. For us, we called the transmitter-at-broadside case "0 degrees," but you may have set up the antenna array a different way or defined it differently. In this case, you'd need to adjust things to match whatever geometry you've set up. The arrow GUI block was never stable. We ended up moving to an Android interface early on and abandoned that side of the project. Thanks for the response, appreciate it. Just one more question: what distance between these two antennas should we keep when implementing your program? Should it be half a wavelength?
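To make the sign convention concrete, here is a toy NumPy sketch of a Capon (MVDR) spatial spectrum for a 2-element array with half-wavelength spacing. It is illustrative only, not the GNU Radio block from this repository: element spacing, calibration and phase coherence are assumed ideal, which real E310 hardware will not be.

```python
import numpy as np

def capon_spectrum(x, n_elements=2, spacing_wavelengths=0.5, n_angles=181):
    """x is an (n_elements, N) array of complex baseband samples."""
    R = x @ x.conj().T / x.shape[1]          # sample covariance
    R_inv = np.linalg.pinv(R)
    thetas = np.linspace(-np.pi / 2, np.pi / 2, n_angles)
    p = np.empty(n_angles)
    for i, theta in enumerate(thetas):
        # steering vector; broadside (perpendicular to the array) is 0 rad
        a = np.exp(-2j * np.pi * spacing_wavelengths *
                   np.arange(n_elements) * np.sin(theta))
        p[i] = 1.0 / np.real(a.conj() @ R_inv @ a)
    return thetas, p

# The estimate is the angle that maximises the spectrum, so it naturally
# lands in [-pi/2, pi/2]: negative values just mean the source sits on the
# other side of broadside.
```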
2025-04-01T04:35:26.157458
2015-05-05T02:15:32
73180084
{ "authors": [ "san650", "williamsbdev" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10572", "repo": "san650/ember-cli-page-object", "url": "https://github.com/san650/ember-cli-page-object/issues/1" }
gharchive/issue
[ENHANCEMENT] move code to test-support directory This looks like a great project! I'm excited to try this out. Something that I noticed was that all the code is in the addon directory. It is not really needed outside of testing. Because of that, I might recommend moving the code into the test-support directory so as not to bloat the production build of your hosting app. I was unaware of this feature until @bcardarella pointed it out in an addon that I am heavily involved with. I also wrote a really quick blog post (mainly for myself) about the feature. Again, looking forward to trying out this addon. Thanks! Thanks for your suggestions, I'll move the code to test-support. Can you point me to some documentation about that folder? I couldn't find good documentation on how to build an ember-cli addon focused on creating testing helpers. I'm really glad you liked the project and I really appreciate your feedback! There isn't much documentation yet. Projects are the best documentation we have right now. You can check out the addon I referenced above, the blog post I wrote, or this example. That is where I stole my inspiration from. If you'd like to pair up sometime, let me know.
2025-04-01T04:35:26.159070
2016-01-31T20:23:45
130177151
{ "authors": [ "juanazam", "san650" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10573", "repo": "san650/ember-cli-page-object", "url": "https://github.com/san650/ember-cli-page-object/pull/110" }
gharchive/pull-request
Assert predicates and queries only match one element [x] Predicates and queries throw error when DOM lookups return more than one element. [x] multiple: true option overrides this behavior. @san650 I implemented most of the tweaks, there might still be room for improvement. @juanazam :trophy: :trophy: :trophy: :tada: :tada: :beers:
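For anyone skimming later, the behaviour change looks roughly like this in a page object (a hypothetical example, and the import style below is the modern module import, which varies by version):

```js
import { create, text } from 'ember-cli-page-object';

const page = create({
  firstTitle: text('.title'),                       // now throws if '.title' matches several elements
  allTitles: text('.title', { multiple: true })     // opt back in to getting every match as an array
});

// e.g. in a test: assert.deepEqual(page.allTitles, ['Post 1', 'Post 2']);
```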
2025-04-01T04:35:26.161373
2018-05-08T01:20:21
321010709
{ "authors": [ "dfreeman", "ro0gr", "san650" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10574", "repo": "san650/ember-cli-page-object", "url": "https://github.com/san650/ember-cli-page-object/pull/401" }
gharchive/pull-request
Improve FastBoot compatibility This should fix #397. Newer versions of FastBoot don't set process.env.EMBER_CLI_FASTBOOT—instead there's only a single build step, and the runtime code is expected to be guarded by a typeof FastBoot check if it's not FastBoot-compatible. I do think the change proposed in #399 is a good idea independent of this, but it looks like it got held up because of code coverage issues, and this solves the more immediate problem. Thanks @dfreeman for the fix, I'll check the failing test from master. @dfreeman I've just merged https://github.com/san650/ember-cli-page-object/pull/400 which moves e-c-p-o to the "addon-test-support/". I think this PR is not needed anymore? Seems like it. Thanks @ro0gr
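The guard pattern referred to above is the standard one for single-build FastBoot setups; browser-only code gets wrapped like this (a minimal illustration, not code from this addon):

```js
// Newer FastBoot defines a global `FastBoot` at runtime instead of setting
// process.env.EMBER_CLI_FASTBOOT at build time, so runtime code checks for it.
if (typeof FastBoot === 'undefined') {
  // browser-only code path, e.g. anything touching window/document
  document.title = 'Hello';
}
```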
2025-04-01T04:35:26.182209
2021-10-15T20:51:44
1027798511
{ "authors": [ "aislinblack", "stefan-philip" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10575", "repo": "sandboxnu/knowyouroptions", "url": "https://github.com/sandboxnu/knowyouroptions/pull/2" }
gharchive/pull-request
Navigation Bar Proof of Concept The sidebar definitely isn't perfect, but we wanted to put this up so that the other frontend group would have access to the style libraries This is ready for review, but designs still don't match because Ally and I are having trouble accessing them currently To be honest, I'm indifferent about alphabetizing CSS attributes. If people feel that its important/worth it, maybe we can add it as a linter setting or a pre-commit hook
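If the team ever does decide to enforce alphabetical CSS properties, one way to do it as a lint rule (assuming stylelint is acceptable in the toolchain; this is a sketch, not something already wired into this repo) would be a .stylelintrc like:

```json
{
  "plugins": ["stylelint-order"],
  "rules": {
    "order/properties-alphabetical-order": true
  }
}
```

That keeps the ordering out of code review entirely, and the same rule can run from a pre-commit hook.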
2025-04-01T04:35:26.197534
2022-01-01T22:08:29
1091902989
{ "authors": [ "MicroJoe", "sandrotosi" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10576", "repo": "sandrotosi/pypi2deb", "url": "https://github.com/sandrotosi/pypi2deb/pull/10" }
gharchive/pull-request
dpt post hook: use gbp import-orig instead of manual commands Per DPT policy, packages should use GBP already. Use gbp import-orig instead of manually creating a pristine tar. Shell variables have been quoted in order to improve safety. The shell option has been set to error in case of an undefined variable. The git branch -m command has been used instead of rm + git checkout -b. If you're still interested in this, please open an MR at https://salsa.debian.org/python-team/tools/pypi2deb -- thanks
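For context, the shape of the change described above is roughly the following; variable names, paths and the branch name here are illustrative, not the actual hook shipped by pypi2deb:

```sh
#!/bin/sh
# Fail on use of undefined variables, and quote everything that expands.
set -u

tarball="$1"
package_dir="$2"

cd "$package_dir"
git branch -m master debian/master          # instead of rm + git checkout -b
gbp import-orig --pristine-tar "$tarball"   # replaces the manual pristine-tar steps
```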
2025-04-01T04:35:26.198887
2015-09-03T01:31:08
104603907
{ "authors": [ "kentonv", "ocdtrekkie" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10577", "repo": "sandstorm-io/sandstorm-app-market", "url": "https://github.com/sandstorm-io/sandstorm-app-market/issues/9" }
gharchive/issue
Feature Request: Link to licenses Ideally, the license field should link to the full text of a license. For bonus points, hovering over the license could show a tosdr-style summary of the license in question. Also, a hub page linking to these would also then be an awesome helper to link developers to in the publishing guide to help them decide on a license. We can pretty easily arrange to link these to the respective OSI pages.
2025-04-01T04:35:26.211853
2022-08-30T17:41:56
1356100698
{ "authors": [ "gischer", "kentonv", "ocdtrekkie" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10578", "repo": "sandstorm-io/sandstorm-app-market", "url": "https://github.com/sandstorm-io/sandstorm-app-market/pull/143" }
gharchive/pull-request
Update sandstorm-app-market to Meteor v 2.3.5 While doing this I also: removed digilord:faker. This was known to cause build issues. I didn't replace it since there's no active code in the repository that uses it as far as I can see. Changed all modules to use ES5. I strongly prefer this anyway, but I don't know how else to get npm-style imports working correctly. Moved simpl-schema, underscore, and meteor-node-stubs to using their npm versions, as the Meteor-specific packages are well out of date. The Meteor community as a whole has moved in this direction. Added a line to make-bundle.sh that will add the above packages via the meteor npm subcommand. I also add the @babel/runtime this way. It seems required these days. This isn't a small change, and I'm not entirely certain you'll want them. I have a burning need for at least this much, though, so I will probably keep using this fork, and you are welcome to them if you want them. I endorse the idea of converting this app to be a Sandstorm app, by the way, but that seemed way too much for a PR, and not something I would have the time for right now, either. Crap. I just checked and you are correct. Ergh, I don't know how that happened, I thought I was super careful to specify 2.3.5 Garrr. I'm checking now if I can just change the meteor version with meteor update --release 2.5.3 If so, I will just do that and check it in. Well, I'm not normally dyslexic, but I guess it can land on anyone. I'll be happy to change it, but I'm also willing to wait for @kentonv to say he needs it. So... Believe it or not, apps.sandstorm.io today is being served as a totally static web site. There is no Meteor server nor any MongoDB running behind it. I was able to pull that off because the only features that needed the server and database were the ratings and reviews... I just removed those. The actual app database is read directly from app-index.sandstorm.io, which is technically also static content (updated whenever a new app is published). I specifically did this so I could get away with never touching it again. :) No server means no possibility of security issues. I appreciate the intent to help here, and I'm fine with merging this to help anyone else who wants to run their own app store instances, but I'm inclined not to touch the official store deployment, just on the "if it ain't broke don't fix it" principle. :) @ocdtrekkie Please feel free to take ownership of this repo and merge changes as you see fit. This is where I removed the need for a server BTW: https://github.com/sandstorm-io/sandstorm-app-market/commit/784906c4451527cb8ab0c2b64fda7c3725a0ba33 I guess that means we can safely go with the 2.7.3 flavor, @gischer, since there's no longer a production Mongo database to worry about here. This makes all kinds of sense, give certain headaches I’ve recently been forced into by changes in Meteor. Nevertheless, I think that in some unspecified future, I will probably fork the App Market and make it into a Sandstorm app. ------ Original Message ------ From: "Jacob Weisz" @.> To: "sandstorm-io/sandstorm-app-market" @.> Cc: "gischer" @.>; "Mention" @.> Sent: 11/12/2022 9:06:18 AM Subject: Re: [sandstorm-io/sandstorm-app-market] Update sandstorm-app-market to Meteor v 2.3.5 (PR #143) I guess that means we can safely go with the 2.7.3 flavor, @gischer https://github.com/gischer, since there's no longer a production Mongo database to worry about here. 
@gischer Depending on your needs/goals, I think it'd be ideal to transition the official app store to that model some day as well, if you'd like to work within the official repo here, we probably could accommodate that. Considering that the official market now operates as a static site, presumably it could be replaced by a Sandstorm app using the static web publishing feature.
2025-04-01T04:35:26.215868
2022-05-02T12:40:01
1222838483
{ "authors": [ "priyanka-surana", "tkchafin" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10579", "repo": "sanger-tol/genomenote", "url": "https://github.com/sanger-tol/genomenote/issues/13" }
gharchive/issue
Transcript mappability Description of feature This is ideally done with RNA-seq data. It is equal to the percentage of reads mapped to the genome. Possible software: WGSIM According to this paper, 'mappability' = "the fraction of reads derived from a transcript that aligned to the original transcript", so I think this just involves taking the FASTA + GFF and extracting out the transcripts (as FASTA) Some tools I found so far: GenMap: https://academic.oup.com/bioinformatics/article/36/12/3687/5815974 GEM mappability: https://evodify.com/gem-mappability/ Umap (I think the one shown on UCSC mappability track): https://academic.oup.com/nar/article/46/20/e120/5086676 mapCounter: https://github.com/shahcompbio/hmmcopy_utils has a module and uses bigwig
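A rough outline of one possible workflow, purely as a sketch; none of these tool choices are mandated by the issue and file names are placeholders:

```sh
# 1. pull transcript sequences out of the assembly using the annotation
gffread -w transcripts.fa -g genome.fa annotation.gff

# 2. map real or simulated RNA-seq reads back to the genome
minimap2 -ax splice genome.fa reads.fq | samtools sort -o aln.bam

# 3. "mappability" here is just the fraction of reads that aligned
samtools flagstat aln.bam   # read the "mapped (...%)" line
```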
2025-04-01T04:35:26.222508
2023-07-13T11:22:30
1802807464
{ "authors": [ "KatyTaylor", "andrewsparkes" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10580", "repo": "sanger/limber", "url": "https://github.com/sanger/limber/issues/1331" }
gharchive/issue
DPL-827 Save data against LRC Bank Stock tubes User story As a user of the scRNA Core Banking pipeline, I would like some key data to be saved against the LRC Bank Stock tubes, so that I can more easily perform the data transfer to the BioResource team in https://github.com/sanger/sequencescape/issues/3850 Who are the primary contacts for this story Lesley, Laura, Katy Who is the nominated tester for UAT Lesley, Laura (or, might be more of a technical story that we test within the team - it will then be tested when we retrieve the data in DPL-812) Acceptance criteria When the LRC PBMC Bank plate --> LRC Bank Stock tubes transfer takes place: [ ] Cell count is stored against the LRC Bank Stock tubes as the sum of the cell counts from the source wells. [ ] Viability is stored against the LRC Bank Stock tubes as the average (mean) of the viability from the source wells. [ ] Volume is stored against the LRC Bank Stock tubes as the number of source wells multiplied by the volume (TBC) transferred by the robot each time Dependencies This story is blocked by the following dependencies: https://github.com/sanger/limber/issues/1314 Additional context Cell count and viability data is stored in the QC Results table. Volume data is stored against the receptacle (well). Questions: It talks about LRC Bank Stock tubes, are these the Seq and Spare tubes? How do we get the volume transferred? Is this hardcoded in the Tube purpose config? Or entered on the labware creator screen by the user? Have put this story on hold whilst we develop the reporting story https://github.com/sanger/sequencescape/issues/3850 Possibly don't need to store the cell count and viability if can get the up to date QC data for that PBMC well at the point of creating a report. And volume is hardcoded to 135ul (Lesley) so no real need to store that on the tube either. The complexity originally envisioned where we'd have to calculate values has gone now the samples are pooled BEFORE the Cellaca QC is done (pooling on Blood Bank to PBMC Bank step, then Cellaca on PBMC Bank), so no need for calculating values from multiple wells now. Superceded by story https://github.com/sanger/limber/issues/1439 which fetches the information at the point of creating the report.
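Had the original acceptance criteria gone ahead, the aggregation itself would have been straightforward; a rough Ruby sketch of the three values (class, method and attribute names here are invented for illustration, not the actual Limber/Sequencescape API):

```ruby
# Returns the values the story asked to store against an LRC Bank Stock tube.
def qc_values_for(source_wells, volume_per_well)
  {
    cell_count: source_wells.sum(&:cell_count),                    # sum over source wells
    viability: source_wells.sum(&:viability) / source_wells.size.to_f,  # mean viability
    volume: source_wells.size * volume_per_well                    # wells x transferred volume
  }
end
```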
2025-04-01T04:35:26.230813
2024-10-28T19:00:40
2619287567
{ "authors": [ "scala-steward" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10581", "repo": "sangria-graphql/sangria-federated", "url": "https://github.com/sangria-graphql/sangria-federated/pull/350" }
gharchive/pull-request
Update sbt, scripted-plugin to 1.10.4 About this PR 📦 Updates org.scala-sbt:sbt org.scala-sbt:scripted-plugin from 1.10.3 to 1.10.4 📜 GitHub Release Notes - Version Diff Usage ✅ Please merge! I'll automatically update this PR to resolve conflicts as long as you don't change it yourself. If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below. Configure Scala Steward for your repository with a .scala-steward.conf file. Have a fantastic day writing Scala! ⚙ Adjust future updates Add this to your .scala-steward.conf file to ignore future updates of this dependency: updates.ignore = [ { groupId = "org.scala-sbt" } ] Or, add this to slow down future updates of this dependency: dependencyOverrides = [{ pullRequests = { frequency = "30 days" }, dependency = { groupId = "org.scala-sbt" } }] labels: library-update, early-semver-patch, semver-spec-patch, version-scheme:early-semver, commit-count:1 Superseded by #353.
2025-04-01T04:35:26.251985
2023-10-06T15:17:54
1930411056
{ "authors": [ "bllfoad", "jprinsen", "kiurious", "mateusriff", "paoyangt", "vntw" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10582", "repo": "sanity-io/sanity-plugin-internationalized-array", "url": "https://github.com/sanity-io/sanity-plugin-internationalized-array/issues/47" }
gharchive/issue
defaultLanguages option causes weird behaviours (duplicate keys, unable to delete documents) Hey there, thanks for the plugin, it worked nicely while not using the defaultLanguages config option. 😅 Issues When using it, I encountered some problems: Issue 1 - Duplicate keys Creating a new document via the UI and with the option configured, this is the result for all translated fields: Looking at the data (*[_type == "project"] { translatedString1, translatedString2, translatedStringText } - same for every field): "translatedString1": [ { "_key": "en", "_type": "internationalizedArrayStringValue" }, { "_type": "internationalizedArrayStringValue", "_key": "en" } ], When creating a document, there are two POST requests https://<id>.api.sanity.io/v2022-09-09/data/mutate/production?tag=sanity.studio.document.commit&returnIds=true&visibility=async&skipCrossDatasetReferenceValidation=true containing sth. like the snippet below, so makes sense that keys are duplicated (I also observed the same field being patched twice in the same request, e.g. translatedString2[-1]): { "patch": { "insert": { "after": "translatedString2[-1]", "items": [ { "_type": "internationalizedArrayStringValue", "_key": "en" } ] }, "id": "drafts.<same-id>" } }, { "patch": { "insert": { "after": "translatedString2[-1]", "items": [ { "_type": "internationalizedArrayStringValue", "_key": "en" } ] }, "id": "drafts.<same-id>" } }, I can generate unique keys and fix the "deduped" key to be of a valid language via the UI, but… for now I'll go without the config option and having to click the languages every time Issue 2 - Document deletion not possible Deleting a document in this state is not possible, the delete request gets aborted and a POST request with createIfNotExists for every translated field follows. Removing the defaultLanguages option will delete it successfully. I know there were issues before that covered this, but it's still happening for me (in this state) Reproduction I created a reproduction repo based on this template with a few added fields in the project document where this behaviour should be observable (it is for me every time, using Firefox). You should only have to enter valid API credentials. Using the latest version of the plugin (1.10.3). If you have any further questions, let me know! Thanks for your input! I'm also facing this exact issue. It does seem to be a problem with the plugin. Hi! I am having the same problem. I've been having this for some time and ignoring, but now that is time to deliver the project, while bug hunting I found this thread. This is the schema: export default defineType({ name: "pages", type: "document", title: "Pages", fields: [ defineField({ name: "title", type: "internationalizedArrayString", title: "Title", }), defineField({ name: "slug", type: "slug", title: "Slug", description: "This is used to generate the URL for this category", validation: (Rule) => Rule.required(), options: { source: "name", maxLength: 44, }, }), ], preview: { select: { title: "title", }, prepare(selection) { const { title } = selection; return { ...selection, title: title[1].value || "Untitled", }; }, }, }); And this is my config: internationalizedArray({ languages: [ { id: "el", title: "Greek" }, { id: "en", title: "English" }, ], defaultLanguages: ["en"], fieldTypes: ["string"], }), I tried adding a _key to the schema fields but it was not recognized. Perhaps I was doing it wrong but I couldn't find a way to add it manually. Any help would be appreciated any one found the solution for this? 
Issue is still present to this day. Anyone found the solution for this? Have not entirely worked this through, but could this be related to maintaining multiple language files (e.g. languages.json and languages.js)? Didn't find a solution. I just ended up removing the defaultLanguages from the config and moved on with the project in other ways. Yes, seems to be the only option left on the table. Thanks for letting us know.
2025-04-01T04:35:26.261069
2017-03-17T19:46:25
215104709
{ "authors": [ "santagada", "sejktmcsdlmfhsc" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10583", "repo": "santagada/xontrib-powerline", "url": "https://github.com/santagada/xontrib-powerline/issues/9" }
gharchive/issue
environment variable $HOME missing on windows @santagada I was so excited about my xonsh powerline setup at home, that I tried to migrate it to my win box at work. It turned out that was actually simple: the only thing missing for it to work on that system was the environment variable $HOME. I quickly patched it in my .xonshrc with the two lines below. I am not sure though, if that would be a xontrib-powerline or rather a xonsh fix? import os $HOME=os.path.expanduser("~") I think this is mostly for XONSH, I'm impressed it doesn't support it by default, can you open a ticket on xonsh and link to this so we can track it? If I get time I will add this inside powerline for now, but it really shouldn't be my responsibility. I agree it should probably be added in xonsh itself; once it is there, then there would be no need for adding it in xontrib powerline Sure: https://github.com/xonsh/xonsh/issues/2334 version 0.3.3 hopefully fixes this.
2025-04-01T04:35:26.275008
2018-05-29T17:09:36
327411199
{ "authors": [ "DandiestSquare1", "joelkim", "michaelazer", "santosjorge" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10585", "repo": "santosjorge/cufflinks", "url": "https://github.com/santosjorge/cufflinks/issues/113" }
gharchive/issue
qf.iplot() --> error: PlotlyDictKeyError: 'showlegend' is not allowed in 'increasing' Can you point me in the right direction to resolve this error I get from qf.iplot()? Thank you for the wonderful library and documentation and fantastic presentation at pycon. Your work effort here makes data analysis with plotly much more efficient and for this I thank you. Issue running qf.iplot() in Jupyter Notebook produces error Code that produces error import pandas as pd import cufflinks as cf import numpy as np df=cf.datagen.ohlc() qf=cf.QuantFig(df,title='First Quant Figure',legend='top',name='GS') qf.add_bollinger_bands(periods=20,boll_std=2,colors=['magenta','grey'],fill=True) qf.iplot() Error PlotlyDictKeyError: 'showlegend' is not allowed in 'increasing' Path To Error: ['data'][0]['increasing']['showlegend'] Valid attributes for 'increasing' at path ['data'][0]['increasing'] under parents ['figure', 'data', 'candlestick']: ['fillcolor', 'line'] Run <increasing-object>.help('attribute') on any of the above. '' is the object at ['data'][0]['increasing'] complete code and error output.txt my environment: python: 3.6.4 |Anaconda, Inc.| (default, Jan 16 2018, 10:22:32) [MSC v.1900 64 bit (AMD64)] IPython: (6, 2, 1, '') pandas: 0.22.0 cufflinks: 0.12.1 numpy: 1.14.0 +1 me too. my environment Python 3.6.5 :: Anaconda custom (64-bit) plotly 2.7.0 py36_1 anaconda cufflinks-py 0.12.1 py_0 conda-forge Nevermind. I see the issue now. Will fix it. Can you share your solution, because I believe I have the same issue? Thanks
2025-04-01T04:35:26.318568
2023-08-31T11:14:46
1875318342
{ "authors": [ "Pavan-SAP" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10586", "repo": "sap-contributions/await-check-suites", "url": "https://github.com/sap-contributions/await-check-suites/pull/2" }
gharchive/pull-request
Update action.yml Update action definition to run on node20 automatically. LGTM: https://github.com/Pavan-SAP/redis-operator/actions/runs/6036387737/job/16378576949 Included in https://github.com/sap-contributions/await-check-suites/pull/3
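For reference, the relevant part of an action.yml for a JavaScript action on the Node 20 runtime looks roughly like this (the entry-point path below is a typical layout, not necessarily the one used by this action):

```yaml
runs:
  using: node20
  main: dist/index.js
```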
2025-04-01T04:35:26.321750
2015-01-10T13:02:50
53957624
{ "authors": [ "jvisser", "sapegin" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10587", "repo": "sapegin/grunt-bower-concat", "url": "https://github.com/sapegin/grunt-bower-concat/issues/40" }
gharchive/issue
Support any file type Make it possible to support any file type, for example: filetypes: { "css" : "dist/main.css", "js" : "dist/main.js", "scss" : "src/sass/_bowerdeps.scss" } Possibly make the key a regex for file matching. And not exit with an error when main files of a specific type are not found in a module as happens in the current version (or add a force option and make it a warning instead of an error that fails the build). Good idea for pull request. I don’t think that it’s a good idea. You have to either define main files manually or exclude those packages. I mentioned the no main files error because I think it is related. I agree with you that no main files specified at all in the bower component should throw an error. But in the case where there are main files specified but they don't match the file type filter it should at most give a warning imo. This is at this time already a problem for example with bower components that specify non css/js files as main (this fails the build because bower_concat cannot find main files for css/js). You’re right. It’s obviously a bug.
2025-04-01T04:35:26.335769
2021-11-23T08:23:38
1060952022
{ "authors": [ "AndreaBorgia-Abo", "DerBrutus", "sandraros" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10588", "repo": "sapmentors/abap2xlsx", "url": "https://github.com/sapmentors/abap2xlsx/issues/895" }
gharchive/issue
Problem with SharedString parsing. Hello everyone, I use the class zcl_excel_reader_huge_file. I have a strange problem with an Excel file. I think it is due to strange entries in the sharedStrings.xml. The following text is in a cell: In the sharedStrings.xml the following appears: As a result, there are two rows in the internal table for the cell text "Material Number" ("M", "aterial number"). Now the keys of the rows in the internal table are wrong. Can anyone help? Many greetings Brutus P.S. I use the Class I will check the Issue, thank you. Here is the testfile. testFile.xlsx P.S. With the class zcl_excel_reader_2007 it works fine. Hi, this was just fixed through #897. Could you please check if it solves your issue? If yes, please close it. Thank you. This is the testfile above with Excel: and with my test program: It should indeed be ok, right, @DerBrutus?
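The screenshots are missing here, but the symptom described ("M" plus "aterial number" as two rows) matches a shared string stored as multiple rich-text runs. A reconstruction of what such an entry in sharedStrings.xml can look like (the run split and formatting are assumptions, not the exact content of the attached file):

```xml
<!-- One shared string <si> split into two <r> runs; a reader has to
     concatenate the <t> values into a single cell string. -->
<si>
  <r><t>M</t></r>
  <r><rPr><b/></rPr><t>aterial Number</t></r>
</si>
```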
2025-04-01T04:35:26.349178
2021-03-24T18:42:27
840044492
{ "authors": [ "ElminaIusifova", "smellems" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10589", "repo": "sara-sabr/auto-evaluation-devops-self-assessment", "url": "https://github.com/sara-sabr/auto-evaluation-devops-self-assessment/issues/39" }
gharchive/issue
Remove doubled title 1. remove doubled DevOps Self-Assessment title 2. remove home 3. increase title font size Removed extra app name and made title bigger. Home is still there. Related to #41
2025-04-01T04:35:26.415835
2024-07-04T09:09:33
2390350146
{ "authors": [ "MyNameTony1000", "nex3" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10591", "repo": "sass/node-sass", "url": "https://github.com/sass/node-sass/issues/3422" }
gharchive/issue
npm warn deprecated npm warn deprecated<EMAIL_ADDRESS>This module is not supported, and leaks memory. Do not use it. Check out lru-cache if you want a good and tested way to coalesce async requests by a key value, which is much more comprehensive and powerful. npm warn deprecated<EMAIL_ADDRESS>This package is no longer supported. npm warn deprecated<EMAIL_ADDRESS>Glob versions prior to v9 are no longer supported npm warn deprecated<EMAIL_ADDRESS>Glob versions prior to v9 are no longer supported npm warn deprecated<EMAIL_ADDRESS>Rimraf versions prior to v4 are no longer supported npm ls core-js outputs: --<EMAIL_ADDRESS> --<EMAIL_ADDRESS> --<EMAIL_ADDRESS> --<EMAIL_ADDRESS> --<EMAIL_ADDRESS> --<EMAIL_ADDRESS> --<EMAIL_ADDRESS> --<EMAIL_ADDRESS> --<EMAIL_ADDRESS>deduped --<EMAIL_ADDRESS> --<EMAIL_ADDRESS> --<EMAIL_ADDRESS> --<EMAIL_ADDRESS>deduped --<EMAIL_ADDRESS> --<EMAIL_ADDRESS> --<EMAIL_ADDRESS> --<EMAIL_ADDRESS> --<EMAIL_ADDRESS>deduped --<EMAIL_ADDRESS> --<EMAIL_ADDRESS> --<EMAIL_ADDRESS> --<EMAIL_ADDRESS> --<EMAIL_ADDRESS>deduped --<EMAIL_ADDRESS> --<EMAIL_ADDRESS> --<EMAIL_ADDRESS>deduped --<EMAIL_ADDRESS> --<EMAIL_ADDRESS>deduped --<EMAIL_ADDRESS> --<EMAIL_ADDRESS> --<EMAIL_ADDRESS> --<EMAIL_ADDRESS> --<EMAIL_ADDRESS>deduped --<EMAIL_ADDRESS>deduped --<EMAIL_ADDRESS> --<EMAIL_ADDRESS> --<EMAIL_ADDRESS> --<EMAIL_ADDRESS> --<EMAIL_ADDRESS>deduped --<EMAIL_ADDRESS>deduped --<EMAIL_ADDRESS>deduped How can I fix this moment? Node Sass is deprecated. Please use the sass package instead.
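For anyone landing here, the switch itself is usually just a dependency swap, assuming Sass is consumed through a build tool rather than node-sass's JavaScript API directly:

```sh
# one possible migration path; adjust --save-dev to match how node-sass was installed
npm uninstall node-sass
npm install --save-dev sass
# most build tools (e.g. webpack's sass-loader) pick up `sass` automatically,
# but any explicit require('node-sass') calls have to be updated by hand
```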
2025-04-01T04:35:26.417851
2020-04-30T19:35:18
610334023
{ "authors": [ "saper", "xzyfer" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10592", "repo": "sass/node-sass", "url": "https://github.com/sass/node-sass/pull/2911" }
gharchive/pull-request
Add .cirrus.yml for FreeBSD (amd64, i386) CI/CD This is an update to #2592 I think I've clicked all the buttons.
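For readers unfamiliar with Cirrus CI, a FreeBSD task generally takes roughly this shape; the image family, package names and scripts below are placeholders, not necessarily what this PR added:

```yaml
task:
  name: FreeBSD amd64
  freebsd_instance:
    image_family: freebsd-12-1
  install_script: pkg install -y node npm python
  node_modules_script: npm install --unsafe-perm
  test_script: npm test
```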
2025-04-01T04:35:26.460534
2023-03-21T17:41:20
1634424943
{ "authors": [ "JosephDHenry", "smorrisj" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10593", "repo": "sassoftware/vscode-sas-extension", "url": "https://github.com/sassoftware/vscode-sas-extension/issues/184" }
gharchive/issue
Add a way to specify SAS Options for Session Is your feature request related to a problem? Please describe. Currently there is no way to specify SAS options for a session startup. All connections are made with default SAS options. Describe the solution you'd like The connection profile should have a section for SAS options that would be specified on Session startup. This would most likely be a JSON map like: { "options": { "MEMSIZE": "0" } } @JosephDHenry would they need to be defined as key/value pair objects? Theoretically could also do: {"options": ["-NONEWS", "-MEMSIZE 0"]} I suppose it's a question of whether or not we want to be able to interact with these in a deeper way than just passing them around as switches. Yeah, so the array of options like you mentioned works well because that is the format that the SAS compute API requires.
2025-04-01T04:35:26.462562
2020-11-18T23:19:02
746116500
{ "authors": [ "astrojuanlu" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10594", "repo": "satellogic/orbit-predictor", "url": "https://github.com/satellogic/orbit-predictor/issues/108" }
gharchive/issue
Migrate off Travis CI Travis CI officially became unbearable https://blog.travis-ci.com/2020-11-02-travis-ci-new-billing so long, and thanks for all the fish. In progress in #109. Done in #109.
2025-04-01T04:35:26.581751
2019-06-27T17:46:33
461663595
{ "authors": [ "saubyk" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10595", "repo": "saubyk/c-lightning-REST", "url": "https://github.com/saubyk/c-lightning-REST/issues/4" }
gharchive/issue
Authentication for API access Implement macaroons for a bearer token based access solution. A macaroon called access.macaroon, should be generated when the server is run for the first time. The generated macaroon should be passed in the request headers of the APIs for verification and authentication. macaroon library which can be utilized: https://github.com/go-macaroon/js-macaroon Sample implementations (Thanks Oli @guggero) https://github.com/guggero/cryptography-toolkit Macaroon based authentication implemented.
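To make the intended flow concrete, here is a deliberately simplified Node stand-in: mint a bearer secret on first run, then check it from a request header. A real implementation would mint and verify actual macaroons (e.g. with the js-macaroon library linked above) rather than the bare HMAC used here, and the 'macaroon' header name is an assumption:

```js
const crypto = require('crypto');
const fs = require('fs');

// Run once on first server start: write a root key and an access token file
// (the "access.macaroon" equivalent in this simplified sketch).
function createAccessToken(rootKeyPath, tokenPath) {
  const rootKey = crypto.randomBytes(32);
  const token = crypto.createHmac('sha256', rootKey).update('access').digest('hex');
  fs.writeFileSync(rootKeyPath, rootKey);
  fs.writeFileSync(tokenPath, token);
}

// Check the token presented in the request headers against the root key.
function verify(req, rootKeyPath) {
  const rootKey = fs.readFileSync(rootKeyPath);
  const expected = crypto.createHmac('sha256', rootKey).update('access').digest('hex');
  const presented = String(req.headers['macaroon'] || '');
  if (presented.length !== expected.length) return false;
  return crypto.timingSafeEqual(Buffer.from(expected), Buffer.from(presented));
}
```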
2025-04-01T04:35:26.590537
2023-02-01T16:51:00
1566459028
{ "authors": [ "Dr-Emann", "saulius" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10596", "repo": "saulius/croaring-rs", "url": "https://github.com/saulius/croaring-rs/pull/97" }
gharchive/pull-request
Update CRoaring, and add Frozen View support Remove dependency on libc: the only thing we used was c_char, and we can get that from std::ffi Add Bitmap::get_frozen_serialized_size_in_bytes (to augment Bitmap::get_serialized_size_in_bytes) Add Bitmap::serialize_frozen_into (see doc comments for why the strange API) Add new type BitmapView<'a> It implements Deref<Target=Bitmap>, but not DerefMut, so only read-only methods can be called It has two constructors: BitmapView::deserialize and BitmapView::deserialize_frozen. The naming in CRoaring is pretty confusing: roaring_bitmap_portable_deserialize_frozen returns a read-only bitmap for a bitmap serialized with roaring_bitmap_portable_serialize (frozen seeming to refer to the read-only-ness, and is totally unrelated to the "frozen format") roaring_bitmap_frozen_view returns a read-only bitmap for a bitmap serialized with roaring_bitmap_frozen_serialize (frozen seeming to refer to the "frozen format") Implement Binary Operations (BitAnd, BitOr, etc) with a macro, to implement all the combinations of operations between Bitmap, &Bitmap, BitmapView For testing purposes, add serialized bitmaps created in c, and test they can be deserialized into a view Replace testing read_file with std::fs::read Add a justfile, with commands to regenerate bundled bindgen file, and to generate the c serialized bitmaps for testing Fixes #96 My laptop and usual server machine are both ARM, and didn't have any trouble, but was able to reproduce with an x64 VM. Hah, I think this is a bug in CRoaring, I'm pretty sure this is the unaligned loads biting them. Going to try to reproduce the same in c and report to CRoaring. ==30395== Process terminating with default action of signal 11 (SIGSEGV): dumping core ==30395== General Protection Fault ==30395== at 0x2A1028: _mm256_load_si256 (avxintrin.h:917) ==30395== by 0x2A1028: _avx2_bitset_container_equals (roaring.c:11320) ==30395== by 0x2A113A: bitset_container_equals (roaring.c:11344) Thanks for this awesome contribution, I will dedicate some time to review it! Also, it's great that a CRoaring issue was caught along the way 🤞
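A sketch of how the new read-only view is meant to be used, based only on the names listed above; exact signatures (whether the constructors are unsafe, return Option, and how serialize_frozen_into manages its aligned buffer) should be taken from the doc comments, not from here:

```rust
use croaring::{Bitmap, BitmapView};

fn read_only_roundtrip() {
    let bitmap = Bitmap::of(&[1, 2, 3, 100, 1000]);
    let bytes = bitmap.serialize(); // ordinary portable serialization

    // Zero-copy, read-only view over the serialized bytes; Deref<Target = Bitmap>
    // exposes all non-mutating methods, but nothing that writes.
    let view = BitmapView::deserialize(&bytes);
    assert!(view.contains(100));
    assert_eq!(view.cardinality(), bitmap.cardinality());

    // The binary-operator impls cover views as well as owned bitmaps.
    let combined = &*view | &bitmap;
    assert_eq!(combined.cardinality(), bitmap.cardinality());
}
```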
2025-04-01T04:35:26.695572
2015-12-02T21:46:52
120042998
{ "authors": [ "sayanee", "shannonfoster", "simobasso" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10597", "repo": "sayanee/angularjs-pdf", "url": "https://github.com/sayanee/angularjs-pdf/issues/77" }
gharchive/issue
Trying to implement this as a 'preview', server is returning pdf content, not a url... best way to implement I need to use this as a method to preview a working file and the backend hands me the pdf content rather than a url to the file. How would I use this directive without a file path? Are you using Uint8Array then? See some instructions from the pdf.js dependency Thank you for the response.... is there an example I can follow? Could you try this code snippet from another previous issue as suggested? He also wanted to use base64 or Uint8Array? Seems like you need to create a blob: currentBlob = new Blob([result], {type: 'application/pdf'}); $scope.pdfUrl = URL.createObjectURL(currentBlob); @shannonfoster did it work? We can add it to the docs? :+1: Yes, I was also thinking of adding it to the readme.md Hi @shannonfoster - we added it to the docs. Hope this solved your issue! We are closing this issue, but please feel free to re-open and let us know the issue at hand :smile:
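A slightly fuller version of that snippet, assuming the backend endpoint returns raw PDF bytes (the endpoint path here is made up for illustration):

```js
// Ask AngularJS for the raw bytes rather than a string, then wrap them in a blob
// and hand the object URL to the directive via $scope.pdfUrl.
$http.get('/api/report/preview', { responseType: 'arraybuffer' })
  .then(function (response) {
    var blob = new Blob([response.data], { type: 'application/pdf' });
    $scope.pdfUrl = URL.createObjectURL(blob);
  });
```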
2025-04-01T04:35:26.700796
2019-11-20T16:27:41
525916715
{ "authors": [ "armengau", "astro-blue", "moustakas", "sbailey" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10598", "repo": "sbailey/prospect", "url": "https://github.com/sbailey/prospect/issues/19" }
gharchive/issue
Adding a sky spectrum It may be useful to add a sky spectrum so that one can check whether or not emission line features of objects are at the wavelengths of strong sky emission lines. This can give some hints about whether observed emission line features are due to residuals of sky lines. @sbailey: to my understanding, this is already covered by the noise display (from ivar). I suppose that ivar is derived at some point in the pipeline from the sky spectra anyway? Or am I missing something? I agree that the ivar spectrum should be dominated by the sky spectrum. Formally, the ivar includes CCD read noise, object Poisson noise, and cosmics, not just the noise from the sky spectrum, even if for most wavelengths of most targets it is dominated by the sky spectrum. Since every additional thing to plot adds more data to the html download package and can impact GUI performance, I suggest that if we do add a sky spectrum, that it just be an average/typical sky spectrum across all of the spectra, and not a per-target unique sky spectrum. Hi all, I understand that for most of the objects, the sky dominates the noise. My concern is that after the sky subtraction, there will be some residuals left in the targeted spectra. The amplitudes of the residuals may not be proportional to the sky flux and therefore to the noise. Having a sky spectrum for comparison may help people to check whether or not the observed emission line feature is directly on top of a sky line.
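A toy illustration of the "average/typical sky" idea, independent of any particular pipeline code; it only assumes per-target sky spectra are available on a common wavelength grid:

```python
import numpy as np

def typical_sky(sky_flux):
    """sky_flux: (n_targets, n_wavelengths) array of per-target sky spectra.

    Returns a single representative spectrum to ship with the viewer instead
    of one per target; the median is a bit more robust than the mean against
    the occasional bad fiber.
    """
    return np.median(sky_flux, axis=0)

# Overlaying typical_sky(...) under the object spectrum would flag emission
# features that sit right on top of strong sky lines.
```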