Dataset columns: added (string, date; min 2025-04-01 04:05:38, max 2025-04-01 07:14:06), created (timestamp[us], date; min 2001-10-09 16:19:16, max 2025-01-01 03:51:31), id (string, length 4–10), metadata (dict), source (string, 2 classes), text (string, length 0–1.61M)
2025-04-01T06:40:11.526319
2024-06-20T08:16:04
2363822272
{ "authors": [ "pernielsentikaer" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10057", "repo": "raycast/extensions", "url": "https://github.com/raycast/extensions/pull/13060" }
gharchive/pull-request
Update qrcode-generator extension Description Screencast Checklist [ ] I read the extension guidelines [ ] I read the documentation about publishing [ ] I ran npm run build and tested this distribution build in Raycast [ ] I checked that files in the assets folder are used by the extension itself [ ] I checked that assets used by the README are placed outside of the metadata folder Hi @Melvynx and @darmiel 👋 I updated the extension, do you mind checking it out 🙂
2025-04-01T06:40:11.532219
2024-10-02T11:25:21
2561317500
{ "authors": [ "pernielsentikaer", "xmok" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10058", "repo": "raycast/extensions", "url": "https://github.com/raycast/extensions/pull/14762" }
gharchive/pull-request
[Typeform Navigator] choose endpoint + update logo + cache + add icons + move to hooks + show basic responses Description I wanted a basic overview of Responses in a certain form which is why we have the following CHANGELOG: Choose EU or Default API Updated Typeform logo Items are now cached for a better experience Added more icons for list items Added a very basic view for Responses Updated dependencies + removed got By utilizing useFetch, we have 2 slightly generic hooks which provide us with pagination and error handling. We might need to bring back got in the future if any "POST" endpoints are added but that's a problem to discuss later. Screencast Unfortunately, I can not share responses from a real form so I am sharing a screencast from a sample account: https://github.com/user-attachments/assets/ba51424e-acde-4ecc-97bc-396a32307146 Checklist [x] I read the extension guidelines [x] I read the documentation about publishing [x] I ran npm run build and tested this distribution build in Raycast [x] I checked that files in the assets folder are used by the extension itself [x] I checked that assets used by the README are placed outside of the metadata folder Thanks for the update @xmok @jdvr do you want to check this?
2025-04-01T06:40:11.536961
2024-11-19T16:09:21
2672718947
{ "authors": [ "Keyruu", "theherk" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10059", "repo": "raycast/extensions", "url": "https://github.com/raycast/extensions/pull/15443" }
gharchive/pull-request
Remove duplicate bookmarks from search results. The bookmark search was showing duplicate entries when a URL was bookmarked multiple times. This was caused by the query returning all bookmark entries for a URL, including tag-related entries. The fix uses a window function (ROW_NUMBER) to partition bookmarks by their foreign key (URL reference) and selects only the entry with the lowest ID for each URL. This ensures each bookmarked URL appears only once in the results while preserving all bookmark metadata including optional titles. fixes #15442 Description Screencast Checklist [x] I read the extension guidelines [x] I read the documentation about publishing [x] I ran npm run build and tested this distribution build in Raycast [x] I checked that files in the assets folder are used by the extension itself [x] I checked that assets used by the README are placed outside of the metadata folder I was also getting some db locked and read issues periodically, so I added some retry logic. Awesome stuff man! Thanks for the contribution.
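The window-function fix described above can be mimicked outside SQL; here is a minimal Rust sketch (hypothetical row shape, not the extension's actual schema) that keeps only the lowest-id row per URL, in the spirit of ROW_NUMBER() OVER (PARTITION BY url ORDER BY id) filtered to row_number = 1:

```rust
use std::collections::HashMap;

// Hypothetical bookmark rows: (id, url, title). Duplicate urls come from tag entries.
fn dedup_bookmarks<'a>(rows: Vec<(u32, &'a str, &'a str)>) -> Vec<(u32, &'a str, &'a str)> {
    let mut best: HashMap<&'a str, (u32, &'a str, &'a str)> = HashMap::new();
    for row in rows {
        best.entry(row.1)
            .and_modify(|kept| {
                // like ORDER BY id: keep the entry with the lowest id per url
                if row.0 < kept.0 {
                    *kept = row;
                }
            })
            .or_insert(row);
    }
    let mut out: Vec<_> = best.into_values().collect();
    out.sort_by_key(|r| r.0); // deterministic output order
    out
}

fn main() {
    let rows = vec![
        (3, "https://a.example", "A (tag row)"),
        (1, "https://a.example", "A"),
        (2, "https://b.example", "B"),
    ];
    assert_eq!(
        dedup_bookmarks(rows),
        vec![(1, "https://a.example", "A"), (2, "https://b.example", "B")]
    );
}
```

Keeping the lowest id per URL matches the PR's stated choice of selecting one canonical entry while preserving its metadata.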
2025-04-01T06:40:11.538748
2022-09-20T20:28:08
1379957202
{ "authors": [ "rben01" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10060", "repo": "raycast/extensions", "url": "https://github.com/raycast/extensions/pull/2965" }
gharchive/pull-request
Updated Any Website Search extension Extension: extensions/any-website-search Description The initial release had a bug that ignored the selected search suggestion; this PR fixes it. It also adds a setting for the search-suggestions provider and improves how the DDG bang-related intelligence works. Can you also make it so that when I switch to another engine in the dropdown, the search results are refreshed? The suggestions always come from the user's configured search-suggestions provider: Google or DuckDuckGo (or none). The selected site in the dropdown is only the "destination" and is not used to fetch suggestions, so there is nothing to refresh when changing the search engine.
2025-04-01T06:40:11.542331
2023-08-31T23:54:18
1876451216
{ "authors": [ "trillhause" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10061", "repo": "raycast/extensions", "url": "https://github.com/raycast/extensions/pull/8179" }
gharchive/pull-request
Update obsidian-smart-capture extension Description Adding support to fetch links from more browsers: "Arc", "Edge", "Firefox", "Brave" & "Opera" Checklist [x] I read the extension guidelines [x] I read the documentation about publishing [x] I ran npm run build and tested this distribution build in Raycast [x] I checked that files in the assets folder are used by the extension itself [x] I checked that assets used by the README are placed outside of the metadata folder @pernielsentikaer I planned to use them in Readme but forgot to update it.
2025-04-01T06:40:11.582730
2022-04-20T03:57:48
1209128957
{ "authors": [ "cuviper", "scuwan" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10062", "repo": "rayon-rs/rayon", "url": "https://github.com/rayon-rs/rayon/issues/929" }
gharchive/issue
rayon thread pool coredump Hey, I have a problem with the rayon thread pool: a probabilistic coredump. The code looks like: rayon::ThreadPoolBuilder::new() .stack_size(8 * 1024 * 1024) .num_threads((num_cpus::get() * 6 / 8).min(32)) .panic_handler(rayon_panic_handler) .build() .expect("Failed to initialize a thread pool for worker") thread_pool.install(move || { loop { //debug!("mine_one_unchecked"); let block_header = BlockHeader::mine_once_unchecked(&block_template, &terminator_clone, &mut thread_rng())?; //debug!("mine_one_unchecked end"); // Ensure the share difficulty target is met. if N::posw().verify( block_header.height(), share_difficulty, &[*block_header.to_header_root().unwrap(), *block_header.nonce()], block_header.proof(), ) { return Ok::<(N::PoSWNonce, PoSWProof<N>, u64), anyhow::Error>(( block_header.nonce(), block_header.proof().clone(), block_header.proof().to_proof_difficulty()?, )); } } }) The backtrace: (gdb) bt #0 <alloc::vec::Vec<T,A> as core::ops::deref::Deref>::deref (self=0xf58017d4cd80e144) at /rustc/f1edd0429582dd29cccacaf50fd134b05593bd9c/library/alloc/src/vec/mod.rs:2402 #1 <alloc::vec::Vec<T,A> as core::ops::index::Index<I>>::index (self=0xf58017d4cd80e144, index=1) at /rustc/f1edd0429582dd29cccacaf50fd134b05593bd9c/library/alloc/src/vec/mod.rs:2496 #2 rayon_core::sleep::Sleep::wake_specific_thread (self=0xf58017d4cd80e134, index=1) at /mnt/fstar/.aleo/aleo1/.cargo/registry/src/mirrors.sjtug.sjtu.edu.cn-7a04d2510079875b/rayon-core-1.9.1/src/sleep/mod.rs:355 #3 0x000055e3c542dbe0 in rayon_core::sleep::Sleep::notify_worker_latch_is_set (self=0xf58017d4cd80e134, target_worker_index=1) at /mnt/fstar/.aleo/aleo1/.cargo/registry/src/mirrors.sjtug.sjtu.edu.cn-7a04d2510079875b/rayon-core-1.9.1/src/sleep/mod.rs:245 #4 rayon_core::registry::Registry::notify_worker_latch_is_set (target_worker_index=1, self=<optimized out>) at /mnt/fstar/.aleo/aleo1/.cargo/registry/src/mirrors.sjtug.sjtu.edu.cn-7a04d2510079875b/rayon-core-1.9.1/src/registry.rs:544 #5 
<rayon_core::latch::SpinLatch as rayon_core::latch::Latch>::set (self=0x7facd9bec448) at /mnt/fstar/.aleo/aleo1/.cargo/registry/src/mirrors.sjtug.sjtu.edu.cn-7a04d2510079875b/rayon-core-1.9.1/src/latch.rs:214 #6 <rayon_core::job::StackJob<L,F,R> as rayon_core::job::Job>::execute (this=0x7facd9bec448) at /mnt/fstar/.aleo/aleo1/.cargo/registry/src/mirrors.sjtug.sjtu.edu.cn-7a04d2510079875b/rayon-core-1.9.1/src/job.rs:123 #7 0x000055e3c53da4b1 in rayon_core::job::JobRef::execute (self=<optimized out>) at /mnt/fstar/.aleo/aleo1/.cargo/registry/src/mirrors.sjtug.sjtu.edu.cn-7a04d2510079875b/rayon-core-1.9.1/src/job.rs:59 #8 rayon_core::registry::WorkerThread::execute (self=<optimized out>, job=...) at /mnt/fstar/.aleo/aleo1/.cargo/registry/src/mirrors.sjtug.sjtu.edu.cn-7a04d2510079875b/rayon-core-1.9.1/src/registry.rs:749 #9 rayon_core::registry::WorkerThread::wait_until_cold (self=<optimized out>, latch=<optimized out>) at /mnt/fstar/.aleo/aleo1/.cargo/registry/src/mirrors.sjtug.sjtu.edu.cn-7a04d2510079875b/rayon-core-1.9.1/src/registry.rs:726 #10 0x000055e3c5633534 in rayon_core::registry::WorkerThread::wait_until (self=0x7facd8be4800, latch=<optimized out>) at /mnt/fstar/.aleo/aleo1/.cargo/registry/src/mirrors.sjtug.sjtu.edu.cn-7a04d2510079875b/rayon-core-1.9.1/src/registry.rs:700 #11 rayon_core::registry::main_loop (registry=..., index=9, worker=...) at /mnt/fstar/.aleo/aleo1/.cargo/registry/src/mirrors.sjtug.sjtu.edu.cn-7a04d2510079875b/rayon-core-1.9.1/src/registry.rs:833 #12 rayon_core::registry::ThreadBuilder::run (self=...) at /mnt/fstar/.aleo/aleo1/.cargo/registry/src/mirrors.sjtug.sjtu.edu.cn-7a04d2510079875b/rayon-core-1.9.1/src/registry.rs:55 #13 0x000055e3c5635581 in <rayon_core::registry::DefaultSpawn as rayon_core::registry::ThreadSpawn>::spawn::{{closure}} () at /mnt/fstar/.aleo/aleo1/.cargo/registry/src/mirrors.sjtug.sjtu.edu.cn-7a04d2510079875b/rayon-core-1.9.1/src/registry.rs:100 #14 std::sys_common::backtrace::__rust_begin_short_backtrace (f=...) 
at /rustc/f1edd0429582dd29cccacaf50fd134b05593bd9c/library/std/src/sys_common/backtrace.rs:123 #15 0x000055e3c5630994 in std::thread::Builder::spawn_unchecked::{{closure}}::{{closure}} () at /rustc/f1edd0429582dd29cccacaf50fd134b05593bd9c/library/std/src/thread/mod.rs:483 #16 <core::panic::unwind_safe::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once (self=..., _args=<optimized out>) at /rustc/f1edd0429582dd29cccacaf50fd134b05593bd9c/library/core/src/panic/unwind_safe.rs:271 #17 std::panicking::try::do_call (data=<optimized out>) at /rustc/f1edd0429582dd29cccacaf50fd134b05593bd9c/library/std/src/panicking.rs:403 #18 std::panicking::try (f=...) at /rustc/f1edd0429582dd29cccacaf50fd134b05593bd9c/library/std/src/panicking.rs:367 #19 std::panic::catch_unwind (f=...) at /rustc/f1edd0429582dd29cccacaf50fd134b05593bd9c/library/std/src/panic.rs:133 #20 std::thread::Builder::spawn_unchecked::{{closure}} () at /rustc/f1edd0429582dd29cccacaf50fd134b05593bd9c/library/std/src/thread/mod.rs:482 #21 core::ops::function::FnOnce::call_once{{vtable-shim}} () at /rustc/f1edd0429582dd29cccacaf50fd134b05593bd9c/library/core/src/ops/function.rs:227 #22 0x000055e3c585ce05 in <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once () at /rustc/f1edd0429582dd29cccacaf50fd134b05593bd9c/library/alloc/src/boxed.rs:1691 #23 <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once () at /rustc/f1edd0429582dd29cccacaf50fd134b05593bd9c/library/alloc/src/boxed.rs:1691 #24 std::sys::unix::thread::Thread::new::thread_start () at library/std/src/sys/unix/thread.rs:106 #25 0x00007fb41d7696db in start_thread (arg=0x7facd8be5700) at pthread_create.c:463 #26 0x00007fb41cef061f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95 https://github.com/rayon-rs/rayon/blob/v1.5.1/rayon-core/src/sleep/mod.rs#L355 index out of range cpu: 3970x 7542 with hyperthreading open This seems similar to #913 and #919, but I was never able to reproduce those 
myself. Do you have a full example that you can share for your case? my code is based on a complex library(https://github.com/AleoHQ/snarkVM), It is the client part of a complex system. I don't know if the code will help, It cannot run alone. I will try to reproduce it in a simple code, and if I reproduce it, I will share the code with you. thanks for your reply!!! Interesting, #913 also happens to be related to Aleo... but #919 is not, so there still might be a more general bug here. diff --git a/rayon-core/src/registry.rs b/rayon-core/src/registry.rs index 4dd7971..06f21da 100644 --- a/rayon-core/src/registry.rs +++ b/rayon-core/src/registry.rs @@ -437,8 +437,10 @@ impl Registry { unsafe { let worker_thread = WorkerThread::current(); if worker_thread.is_null() { println!("in_worker_cold"); self.in_worker_cold(op) } else if (*worker_thread).registry().id() != self.id() { println!("in_worker_cross"); self.in_worker_cross(&*worker_thread, op) } else { // Perfectly valid to give them a `&T`: this is the diff --git a/rayon-core/src/thread_pool/mod.rs b/rayon-core/src/thread_pool/mod.rs index 5edaedc..23f7ab7 100644 --- a/rayon-core/src/thread_pool/mod.rs +++ b/rayon-core/src/thread_pool/mod.rs @@ -108,6 +108,7 @@ impl ThreadPool { OP: FnOnce() -> R + Send, R: Send, { println!("install"); self.registry.in_worker(|_, _| op()) } i can't find any extra pool.install invoke in func "BlockHeader::mine_once_unchecked" ThreadPool::join, scope, and scope_fifo each call install as well. If this is specifically an issue between multiple pools, per in_worker_cross, that's new information. Can you capture a backtrace of all threads? In gdb that's thread apply all backtrace (t a a bt for short), and you can use logging to write that to a text file. 
By the way, I have edited your comments to use fenced code blocks for readability, as described here: https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/creating-and-highlighting-code-blocks Thanks, I will try to capture a backtrace. I tested a patch below; the coredump frequency has decreased. diff --git a/rayon-core/src/job.rs b/rayon-core/src/job.rs index a71f1b0..9dd1aa4 100644 --- a/rayon-core/src/job.rs +++ b/rayon-core/src/job.rs @@ -4,6 +4,7 @@ use crossbeam_deque::{Injector, Steal}; use std::any::Any; use std::cell::UnsafeCell; use std::mem; +use std::sync::Mutex; pub(super) enum JobResult<T> { None, @@ -73,6 +74,7 @@ where pub(super) latch: L, func: UnsafeCell<Option<F>>, result: UnsafeCell<JobResult<R>>, + m: Mutex<bool>, } impl<L, F, R> StackJob<L, F, R> @@ -86,6 +88,7 @@ where latch, func: UnsafeCell::new(Some(func)), result: UnsafeCell::new(JobResult::None), + m: Mutex::new(false), } } @@ -114,6 +117,15 @@ where } let this = &*this; + { + let mut guard = this.m.lock().unwrap(); + if *guard { + println!("job has been comsumed."); + return + } + *guard = true; + } + let _holder = this.latch.hold(); let abort = unwind::AbortIfPanic; let func = (*this.func.get()).take().unwrap(); (*this.result.get()) = match unwind::halt_unwinding(call(func)) { diff --git a/rayon-core/src/latch.rs b/rayon-core/src/latch.rs index 1d573b7..c4f3275 100644 --- a/rayon-core/src/latch.rs +++ b/rayon-core/src/latch.rs @@ -41,6 +41,9 @@ pub(super) trait Latch { /// and it's typically better to read all the fields you will need /// to access *before* a latch is set! 
fn set(&self); + fn hold(&self) -> Option<Arc<Registry>> { + None + } } pub(super) trait AsCoreLatch { @@ -214,6 +217,11 @@ impl<'r> Latch for SpinLatch<'r> { registry.notify_worker_latch_is_set(target_worker_index); } } + + #[inline] + fn hold(&self) -> Option<Arc<Registry>> { + Some(Arc::clone(self.registry)) + } } On the 3970X platform, Coredump disappeared for 2 days + #!/bin/bash export RUST_BACKTRACE=full ulimit -c unlimited x=1 name="sh-94" if [ $# == 1 ];then x=$1 echo $x-$1 fi while true do for ((i=1; i<=$x; i++)) do echo "process $i" ./target/debug/worker --address aleo1nuhl5vf8xdldzxsfnzsnsdgfvqkuufzex2598fzjuxkq2qrl5qzqupr666 --tcp_server "<IP_ADDRESS>:4133" --ssl_server "<IP_ADDRESS>:4134" --ssl --custom_name $name --parallel_num 6 >> log_$i 2>&1 & # sudo prlimit --core=unlimited --pid $! done wait cur_dateTime="`date +"%Y-%m-%d %H:%M:%S"`" echo "restart $cur_dateTime" >> running_history done aleo1@x3970x-94:~/snarkOS$ cat running_history restart 2022-04-24 15:06:30 restart 2022-04-24 18:58:54 restart 2022-04-25 10:59:33 restart 2022-04-25 17:47:02 restart 2022-04-26 03:13:02 restart 2022-04-26 13:33:20 restart 2022-04-26 14:40:14 restart 2022-04-26 17:18:15 aleo1@x3970x-94:~/snarkOS$ aleo1@x3970x-94:~/snarkOS$ aleo1@x3970x-94:~/snarkOS$ date Fri Apr 29 09:59:04 CST 2022 On the 3990X platform, Coredump occurred once in two days. Unfortunately, I did not grab the Coredump file when the coredump occurred. At first I suspected that stackJob was consumed twice, but I didn't catch the log "job has been comsumed" when I coredump on 3990x. so my guess was wrong. Can you try it with just the "hold" addition? There are comments in SpinLatch::set about making sure the pool is kept alive, dating back to #739. In review, it still seems like we're doing the right thing there, but I could be wrong. the coredump frequency has decreased in my case. Coredump occurred once in two days. Oh, it's still not completely fixed? 
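The "hold" idea being tested above amounts to taking an owned Arc clone before the notification runs, so the registry cannot be freed mid-notify even if the waiting thread exits first. A minimal std-only sketch of that pattern (hypothetical Pool type and notify_with_hold helper, not rayon's actual Registry):

```rust
use std::sync::Arc;
use std::thread;

// Hypothetical stand-in for rayon's Registry; not the real type.
struct Pool {
    name: String,
}

// Clone the Arc *before* spawning the notifier, so the pool outlives
// the owner's handle for the duration of the notification.
fn notify_with_hold(pool: &Arc<Pool>) -> thread::JoinHandle<usize> {
    let held = Arc::clone(pool); // the "hold": an owned handle for the notifier
    thread::spawn(move || held.name.len())
}

fn main() {
    let pool = Arc::new(Pool { name: "worker-pool".into() });
    let h = notify_with_hold(&pool);
    drop(pool); // the "waiting thread" exits and drops its handle early
    // The allocation stayed alive through the held clone.
    assert_eq!(h.join().unwrap(), 11);
}
```

In safe code the clone is the only way to express this; the subtlety in rayon is that the latch code is unsafe and was relying on a reference whose backing handle could be dropped concurrently.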
In that case, it might be that the Mutex is adding enough synchronization to make the race condition harder to hit, but that's just a guess. I've tried that; the mutex lock I added didn't work, but the "holder" seems to be working, because the coredump frequency has dropped significantly on both of my platforms. It is still not completely fixed, though, for the same reason. ## 3970x aleo1@x3970x-94:~/snarkOS$ cat running_history restart 2022-04-24 15:06:30 restart 2022-04-24 18:58:54 restart 2022-04-25 10:59:33 restart 2022-04-25 17:47:02 restart 2022-04-26 03:13:02 restart 2022-04-26 13:33:20 restart 2022-04-26 14:40:14 restart 2022-04-26 17:18:15 restart 2022-05-01 08:31:29 restart 2022-05-04 02:25:54 aleo1@x3970x-94:~/snarkOS$ date Thu May 5 09:51:18 CST 2022 ## 3990x since 2022-04-28 ps@filecoin-21891:~/6pool-worker-file$ cat running_history restart 2022-05-03 05:15:09 restart 2022-05-04 08:54:47 ps@filecoin-21891:~/6pool-worker-file$ date Thu May 5 09:52:40 CST 2022 [Thread debugging using libthread_db enabled] Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1". Core was generated by `./target/debug/worker --address aleo1nuhl5vf8xdldzxsfnzsnsdgfvqkuufzex2598fzjux'. Program terminated with signal SIGSEGV, Segmentation fault. #0 <alloc::vec::Vec<T,A> as core::ops::deref::Deref>::deref (self=0x1b8) at /rustc/f1edd0429582dd29cccacaf50fd134b05593bd9c/library/alloc/src/vec/mod.rs:2402 2402 /rustc/f1edd0429582dd29cccacaf50fd134b05593bd9c/library/alloc/src/vec/mod.rs: No such file or directory. [Current thread is 1 (Thread 0x7f92f87fb700 (LWP 118675))] warning: Missing auto-load script at offset 0 in section .debug_gdb_scripts of file /mnt/fstar/.aleo/aleo1/snarkOS/target/debug/worker. 
Use `info auto-load python-scripts [REGEXP]' to list them. (gdb) bt #0 <alloc::vec::Vec<T,A> as core::ops::deref::Deref>::deref (self=0x1b8) at /rustc/f1edd0429582dd29cccacaf50fd134b05593bd9c/library/alloc/src/vec/mod.rs:2402 #1 <alloc::vec::Vec<T,A> as core::ops::index::Index<I>>::index (self=0x1b8, index=3) at /rustc/f1edd0429582dd29cccacaf50fd134b05593bd9c/library/alloc/src/vec/mod.rs:2496 #2 rayon_core::sleep::Sleep::wake_specific_thread (self=0x1a8, index=3) at /mnt/fstar/.aleo/aleo1/rayon/rayon-core/src/sleep/mod.rs:357 #3 0x00005588f821a238 in rayon_core::sleep::Sleep::notify_worker_latch_is_set (self=0x1a8, target_worker_index=3) at /mnt/fstar/.aleo/aleo1/rayon/rayon-core/src/sleep/mod.rs:247 #4 rayon_core::registry::Registry::notify_worker_latch_is_set (target_worker_index=3, self=<optimized out>) at /mnt/fstar/.aleo/aleo1/rayon/rayon-core/src/registry.rs:546 #5 <rayon_core::latch::SpinLatch as rayon_core::latch::Latch>::set (self=0x7f92505f6138) at /mnt/fstar/.aleo/aleo1/rayon/rayon-core/src/latch.rs:217 #6 <rayon_core::job::StackJob<L,F,R> as rayon_core::job::Job>::execute (this=0x7f92505f6138) at /mnt/fstar/.aleo/aleo1/rayon/rayon-core/src/job.rs:135 #7 0x00005588f81fb4de in rayon_core::job::JobRef::execute (self=<optimized out>) at /mnt/fstar/.aleo/aleo1/rayon/rayon-core/src/job.rs:60 #8 rayon_core::registry::WorkerThread::execute (self=<optimized out>, job=...) at /mnt/fstar/.aleo/aleo1/rayon/rayon-core/src/registry.rs:751 #9 rayon_core::registry::WorkerThread::wait_until_cold (self=<optimized out>, latch=<optimized out>) at /mnt/fstar/.aleo/aleo1/rayon/rayon-core/src/registry.rs:728 #10 0x00005588f845caf4 in rayon_core::registry::WorkerThread::wait_until (self=0x7f92f87fa880, latch=<optimized out>) at /mnt/fstar/.aleo/aleo1/rayon/rayon-core/src/registry.rs:702 #11 rayon_core::registry::main_loop (registry=..., index=1, worker=...) at /mnt/fstar/.aleo/aleo1/rayon/rayon-core/src/registry.rs:835 #12 rayon_core::registry::ThreadBuilder::run (self=...) 
at /mnt/fstar/.aleo/aleo1/rayon/rayon-core/src/registry.rs:55 #13 0x00005588f845a211 in <rayon_core::registry::DefaultSpawn as rayon_core::registry::ThreadSpawn>::spawn::{{closure}} () at /mnt/fstar/.aleo/aleo1/rayon/rayon-core/src/registry.rs:100 #14 std::sys_common::backtrace::__rust_begin_short_backtrace (f=...) at /rustc/f1edd0429582dd29cccacaf50fd134b05593bd9c/library/std/src/sys_common/backtrace.rs:123 #15 0x00005588f845db84 in std::thread::Builder::spawn_unchecked::{{closure}}::{{closure}} () at /rustc/f1edd0429582dd29cccacaf50fd134b05593bd9c/library/std/src/thread/mod.rs:483 #16 <core::panic::unwind_safe::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once (self=..., _args=<optimized out>) at /rustc/f1edd0429582dd29cccacaf50fd134b05593bd9c/library/core/src/panic/unwind_safe.rs:271 #17 std::panicking::try::do_call (data=<optimized out>) at /rustc/f1edd0429582dd29cccacaf50fd134b05593bd9c/library/std/src/panicking.rs:403 #18 std::panicking::try (f=...) at /rustc/f1edd0429582dd29cccacaf50fd134b05593bd9c/library/std/src/panicking.rs:367 #19 std::panic::catch_unwind (f=...) 
at /rustc/f1edd0429582dd29cccacaf50fd134b05593bd9c/library/std/src/panic.rs:133 #20 std::thread::Builder::spawn_unchecked::{{closure}} () at /rustc/f1edd0429582dd29cccacaf50fd134b05593bd9c/library/std/src/thread/mod.rs:482 #21 core::ops::function::FnOnce::call_once{{vtable-shim}} () at /rustc/f1edd0429582dd29cccacaf50fd134b05593bd9c/library/core/src/ops/function.rs:227 #22 0x00005588f8679d05 in <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once () at /rustc/f1edd0429582dd29cccacaf50fd134b05593bd9c/library/alloc/src/boxed.rs:1691 #23 <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once () at /rustc/f1edd0429582dd29cccacaf50fd134b05593bd9c/library/alloc/src/boxed.rs:1691 #24 std::sys::unix::thread::Thread::new::thread_start () at library/std/src/sys/unix/thread.rs:106 #25 0x00007f940454d6db in start_thread (arg=0x7f92f87fb700) at pthread_create.c:463 #26 0x00007f9403cd461f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95 I would still like to see thread apply all backtrace too. I would still like to see thread apply all backtrace gdb.txt I am testing another patch:rayon-coredump.diff.txt         My basic idea is to keep the "SpinLatch instance (here)" alive until "latch.set() (here)" is executed. This ensures that the calling thread waits here for the computed thread to terminate the latch.set() call.         If the patch works, I think there is a real possibility of a potential problem with the code handling race conditions, although it is not easy to spot.         
The 24 hour test results look normal so far, I will continue to test for a period of time until something goes wrong or the problem does not recur for a long time diff --git a/rayon-core/src/latch.rs b/rayon-core/src/latch.rs index 1d573b7..0ecf381 100644 --- a/rayon-core/src/latch.rs +++ b/rayon-core/src/latch.rs @@ -143,6 +143,7 @@ pub(super) struct SpinLatch<'r> { registry: &'r Arc<Registry>, target_worker_index: usize, cross: bool, + life_lock: Mutex<()>, } impl<'r> SpinLatch<'r> { @@ -157,6 +158,7 @@ impl<'r> SpinLatch<'r> { registry: thread.registry(), target_worker_index: thread.index(), cross: false, + life_lock: Mutex::new(()), } } @@ -165,9 +167,16 @@ impl<'r> SpinLatch<'r> { /// safely call the notification. #[inline] pub(super) fn cross(thread: &'r WorkerThread) -> SpinLatch<'r> { + //SpinLatch { + // cross: true, + // ..SpinLatch::new(thread) + //} SpinLatch { + core_latch: CoreLatch::new(), + registry: thread.registry(), + target_worker_index: thread.index(), cross: true, - ..SpinLatch::new(thread) + life_lock: Mutex::new(()), } } @@ -187,6 +196,7 @@ impl<'r> AsCoreLatch for SpinLatch<'r> { impl<'r> Latch for SpinLatch<'r> { #[inline] fn set(&self) { + let _life = self.life_lock.lock().unwrap(); let cross_registry; let registry = if self.cross { @@ -216,6 +226,17 @@ impl<'r> Latch for SpinLatch<'r> { } } +impl<'r> Drop for SpinLatch<'r> { + fn drop(&mut self) { + { + let _life = self.life_lock.lock().unwrap(); + } + //if self.cross { + // println!("Drop SpinLatch"); + //} + } +} + /// A Latch starts as false and eventually becomes true. You can block /// until it becomes true. #[derive(Debug)] gdb.txt Wow, 832 threads total, and 688 of them are rayon threads?!? Then there are 141 tokio threads, 2 threads that are just starting, and the main thread. Technically there's no reason why that many threads should cause memory problems, but you're severely oversubscribed unless most of those are idle. 
I do see 529 in rayon_core::sleep::Sleep::sleep, leaving 159 active rayon threads. If nothing else, this will cause a lot of context switching and exacerbate whatever race condition we're facing. Yes, I have 12 globally independent rayon pools concurrently executing "BlockHeader::mine_once_unchecked", and BlockHeader::mine_once_unchecked creates temporary pools during execution (although the number is fixed at 48?). My 3900X platform has 64/128 (HyperThread) cores. The patch seems to work fine; the coredump has not appeared so far. 
# 3970x platform

```
aleo1@x3970x-94:~/snarkOS$ cat running_history
restart 2022-04-24 15:06:30
restart 2022-04-24 18:58:54
restart 2022-04-25 10:59:33
restart 2022-04-25 17:47:02
restart 2022-04-26 03:13:02
restart 2022-04-26 13:33:20
restart 2022-04-26 14:40:14
restart 2022-04-26 17:18:15
restart 2022-05-01 08:31:29
restart 2022-05-04 02:25:54
start 2022-05-05 13:57:53
aleo1@x3970x-94:~/snarkOS$ date
Mon May 9 11:15:43 CST 2022
```

# 3990x platform

```
restart 2022-05-03 05:15:09
restart 2022-05-04 08:54:47
start 2022-05-05 14:51:45
ps@filecoin-21891:~/6pool-worker-file$ date
2022年 05月 09日 星期一 11:17:02 CST
```

# running script

```bash
#!/bin/bash
cur_dateTime="`date +"%Y-%m-%d %H:%M:%S"`"
echo "start $cur_dateTime" >> running_history
export RUST_BACKTRACE=full
ulimit -c unlimited
x=1
name="sh-91"
# hk
#tcp_server="<IP_ADDRESS>:4133"
#ssl_server="<IP_ADDRESS>:4134"
# china
tcp_server="<IP_ADDRESS>:4133"
ssl_server="<IP_ADDRESS>:4134"
if [ $# == 1 ];then
    x=$1
    echo $x-$1
fi
while true
do
    for ((i=1; i<=$x; i++))
    do
        echo "process $i"
        ./worker --address aleo1nuhl5vf8xdldzxsfnzsnsdgfvqkuufzex2598fzjuxkq2qrl5qzqupr666 --tcp_server $tcp_server --ssl_server $ssl_server --ssl --custom_name $name --parallel_num 12 >> log_$i 2>&1 &
        # sudo prlimit --core=unlimited --pid $!
    done
    wait
    cur_dateTime="`date +"%Y-%m-%d %H:%M:%S"`"
    echo "restart $cur_dateTime" >> running_history
done
```

I think I found the issue -- could you see if this fixes it for you?

```diff
diff --git a/rayon-core/src/latch.rs b/rayon-core/src/latch.rs
index 1d573b781612..b84fbe371ee3 100644
--- a/rayon-core/src/latch.rs
+++ b/rayon-core/src/latch.rs
@@ -189,19 +189,21 @@ impl<'r> Latch for SpinLatch<'r> {
     fn set(&self) {
         let cross_registry;

-        let registry = if self.cross {
+        let registry: &Registry = if self.cross {
             // Ensure the registry stays alive while we notify it.
             // Otherwise, it would be possible that we set the spin
             // latch and the other thread sees it and exits, causing
             // the registry to be deallocated, all before we get a
             // chance to invoke `registry.notify_worker_latch_is_set`.
             cross_registry = Arc::clone(self.registry);
-            &cross_registry
+            &*cross_registry
         } else {
             // If this is not a "cross-registry" spin-latch, then the
             // thread which is performing `set` is itself ensuring
-            // that the registry stays alive.
-            self.registry
+            // that the registry stays alive. However, that doesn't
+            // include this *particular* `Arc` handle if the waiting
+            // thread then exits, so we must completely dereference it.
+            &**self.registry
         };

         let target_worker_index = self.target_worker_index;
```

Ok, I will test and give feedback. Haha, `*&` is confusing in Rust.

Yeah, that `*` invokes Deref, then `&` to capture an inner reference.

I have applied the patch above to start the test, and I think you found the real problem: the `Arc` handle's lifecycle has no guarantee.

```
aleo1@x3970x-94:~/rayon$ git diff
```

```diff
diff --git a/rayon-core/src/latch.rs b/rayon-core/src/latch.rs
index 1d573b7..b84fbe3 100644
--- a/rayon-core/src/latch.rs
+++ b/rayon-core/src/latch.rs
@@ -189,19 +189,21 @@ impl<'r> Latch for SpinLatch<'r> {
     fn set(&self) {
         let cross_registry;

-        let registry = if self.cross {
+        let registry: &Registry = if self.cross {
             // Ensure the registry stays alive while we notify it.
             // Otherwise, it would be possible that we set the spin
             // latch and the other thread sees it and exits, causing
             // the registry to be deallocated, all before we get a
             // chance to invoke `registry.notify_worker_latch_is_set`.
             cross_registry = Arc::clone(self.registry);
-            &cross_registry
+            &*cross_registry
         } else {
             // If this is not a "cross-registry" spin-latch, then the
             // thread which is performing `set` is itself ensuring
-            // that the registry stays alive.
-            self.registry
+            // that the registry stays alive. However, that doesn't
+            // include this *particular* `Arc` handle if the waiting
+            // thread then exits, so we must completely dereference it.
+            &**self.registry
         };

         let target_worker_index = self.target_worker_index;
```

```
aleo1@x3970x-94:~/snarkOS$ cat running_history
restart 2022-04-24 15:06:30
restart 2022-04-24 18:58:54
restart 2022-04-25 10:59:33
restart 2022-04-25 17:47:02
restart 2022-04-26 03:13:02
restart 2022-04-26 13:33:20
restart 2022-04-26 14:40:14
restart 2022-04-26 17:18:15
restart 2022-05-01 08:31:29
restart 2022-05-04 02:25:54
start 2022-05-05 13:57:53
start 2022-05-11 09:48:25
ps@filecoin-21891:~/6pool-worker-file$ cat running_history
restart 2022-05-03 05:15:09
restart 2022-05-04 08:54:47
start 2022-05-05 14:51:45
start 2022-05-11 09:58:30
```

https://github.com/rayon-rs/rayon/issues/929#issuecomment-1123104488

All tests seem OK. In addition to the two machines previously tested, six more machines participated in the test.

```
ps@filecoin-21891:~/6pool-worker-file$ cat running_history
restart 2022-05-03 05:15:09
restart 2022-05-04 08:54:47
start 2022-05-05 14:51:45
start 2022-05-11 09:58:30
ps@filecoin-21891:~/6pool-worker-file$ date
2022年 05月 12日 星期四 09:27:28 CST
aleo1@x3970x-94:~/snarkOS$ cat running_history
restart 2022-04-24 15:06:30
restart 2022-04-24 18:58:54
restart 2022-04-25 10:59:33
restart 2022-04-25 17:47:02
restart 2022-04-26 03:13:02
restart 2022-04-26 13:33:20
restart 2022-04-26 14:40:14
restart 2022-04-26 17:18:15
restart 2022-05-01 08:31:29
restart 2022-05-04 02:25:54
start 2022-05-05 13:57:53
start 2022-05-11 09:48:25
aleo1@x3970x-94:~/snarkOS$ date
Thu May 12 09:27:26 CST 2022
```

@cuviper
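The essence of the patch can be reproduced outside rayon. The sketch below is illustrative, not rayon's real types: it shows the keep-alive pattern from the fix, where the notifying thread clones the `Arc` into a local so it holds its own strong reference for the whole notification, and then works through a plain `&Registry` rather than through a borrowed `Arc` handle that the waiting thread might drop the moment it sees the latch set.

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;

// Stand-in for rayon's Registry: just one flag a waiting thread spins on.
struct Registry {
    latch_set: AtomicBool,
}

impl Registry {
    fn notify_worker_latch_is_set(&self) {
        self.latch_set.store(true, Ordering::SeqCst);
    }
}

// The pattern from the fix: clone the Arc into a local first, so *this*
// thread owns a strong reference until notification finishes, then fully
// dereference it to a plain `&Registry`.
fn set_latch(shared: &Arc<Registry>) {
    let keep_alive: Arc<Registry> = Arc::clone(shared);
    let registry: &Registry = &*keep_alive; // &Registry, not &Arc<Registry>
    registry.notify_worker_latch_is_set();
    // `keep_alive` drops here, only after the registry has been used.
}

fn demo() -> bool {
    let registry = Arc::new(Registry {
        latch_set: AtomicBool::new(false),
    });
    let worker = {
        let registry = Arc::clone(&registry);
        thread::spawn(move || set_latch(&registry))
    };
    worker.join().unwrap();
    registry.latch_set.load(Ordering::SeqCst)
}

fn main() {
    println!("latch set: {}", demo()); // prints "latch set: true"
}
```

In the real bug the subtlety is the non-cross branch: the worker's own registry reference keeps the registry alive in general, but not through the specific `Arc` handle stored in the latch, which is why the fix dereferences it completely instead of handing out `&Arc<Registry>`.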
2025-04-01T06:40:11.587401
2022-11-30T03:56:08
1469009342
{ "authors": [ "JeffM2501", "raysan5" ], "license": "Zlib", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10063", "repo": "raysan5/raylib-game-template", "url": "https://github.com/raysan5/raylib-game-template/pull/12" }
gharchive/pull-request
[Cleanup] use C code that is also compatible with C++
While the template is valid C code, it does not work well when converted to C++. Some users would like to use the template as the starting point for C++ code. This PR makes a few small changes to the C code to make it also compatible with C++:
- use the scene enums everywhere, not a mix of enums and numbers (also makes the code read better)
- don't pass the address of unnamed temp variables into functions; allocate an actual structure

These changes allow the C files to be simply renamed to C++ and compiled without issues.
@JeffM2501 Thanks! I'm used to using integers for enum types, maybe it's about time to change to a stronger typing mode... :)
2025-04-01T06:40:11.588582
2024-10-23T23:38:53
2610048868
{ "authors": [ "JeffM2501", "raysan5" ], "license": "Zlib", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10064", "repo": "raysan5/raylib", "url": "https://github.com/raysan5/raylib/pull/4425" }
gharchive/pull-request
[RTEXTURES] Remove the panorama cubemap layout option The panorama cube map option was not implemented, so this PR removes it from the enumeration in raylib.h so that users do not get confused and try to use it. I left a todo in the code for some aspiring developer to finish. @JeffM2501 thanks for the review!
2025-04-01T06:40:11.599189
2024-03-13T20:07:48
2184801440
{ "authors": [ "razterizer" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10065", "repo": "razterizer/8Beat", "url": "https://github.com/razterizer/8Beat/issues/31" }
gharchive/issue
Optimize memory usage in ChipTuneEngine. Optimize memory usage in ChipTuneEngine so that, instead of generating all waveforms per note first and then playing them, we create a waveform one or a few notes or beats before it is about to be played, and then free up the memory as soon as the note has finished playing. Optionally, group together sounds that are identical for more efficient memory usage.
2025-04-01T06:40:11.600389
2024-11-07T07:45:24
2640170137
{ "authors": [ "razterizer" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10066", "repo": "razterizer/TextUR", "url": "https://github.com/razterizer/TextUR/issues/5" }
gharchive/issue
Add editor-in-editor textel editor To make freehand painting much easier. Also indicate if the supplied material index already exists or not. Use key 'E' (for edit).
2025-04-01T06:40:11.605401
2020-08-10T10:31:05
676033631
{ "authors": [ "noud", "rb1193" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10067", "repo": "rb1193/laravel-hydra", "url": "https://github.com/rb1193/laravel-hydra/issues/21" }
gharchive/issue
docs I am really new to Hydra and JSON as a whole. I managed to install the package and I get:

```json
{
  "@context": {
    "@vocab": "http:\/\/politiebureaus.localhost\/docs",
    "hydra": "http:\/\/www.w3.org\/ns\/hydra\/core#",
    "rdf": "http:\/\/www.w3.org\/1999\/02\/22-rdf-syntax-ns#"
  },
  "@id": "http:\/\/politiebureaus.localhost\/docs",
  "@type": "hydra:ApiDocumentation",
  "hydra:title": "Politie Hydra API",
  "hydra:description": "Politie API that conforms to the Hydra specification",
  "hydra:entrypoint": null,
  "hydra:supportedClass": "[]"
}
```

As shown, I can set the description etc. in .env. I had expected my App\Models classes in hydra:supportedClass, or something in the entrypoint? Is there some documentation telling me what to do next? Or is there an example project using this package? Any other suggestions? Thanks, Noud. Sorry, I didn't get too far with this project, I was trying something as a proof of concept and it hasn't really worked out. I'm going to archive the project as I don't really have too much time to invest in it at the moment. If you're looking for a resource to build a Hydra API then I'd recommend you get started with API Platform, if you haven't seen it already. If it has to be Laravel I don't think there's much out there, and my personal feeling after trying this project is that Laravel's opinionated nature makes working with Hydra very difficult. By all means fork the repo and see how you get on, though. Sorry I can't be more helpful right now.
2025-04-01T06:40:11.606887
2024-10-30T17:55:09
2624885902
{ "authors": [ "MorgunovE" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10068", "repo": "rbabari/Test-1277-A13", "url": "https://github.com/rbabari/Test-1277-A13/issues/10" }
gharchive/issue
need aprove and pull request need aprove and pull request Originally posted by @MorgunovE in https://github.com/rbabari/Test-1277-A13/issues/3#issuecomment-2447939967 pull request approved
2025-04-01T06:40:11.653092
2020-02-01T07:33:20
558500980
{ "authors": [ "rbw" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10069", "repo": "rbw/snow", "url": "https://github.com/rbw/snow/issues/16" }
gharchive/issue
Add development scripts scripts/install - Install dependencies in a virtual environment. scripts/test - Run the test suite. scripts/lint - Run the code linting. scripts/publish - Publish the latest version to PyPI. Fixed by https://github.com/rbw/snow/pull/44
2025-04-01T06:40:11.659074
2023-11-19T00:46:04
2000641896
{ "authors": [ "rcasia" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10070", "repo": "rcasia/neotest-bash", "url": "https://github.com/rcasia/neotest-bash/issues/13" }
gharchive/issue
bashunit does not provide a test reports option yet Currently bashunit does not have an option to output test report files. As a workaround (see #12) this adapter runs a test command per test function name. This allows us to have individual results and individual outputs. With a test report we would be able to at least:
- Run all tests in a file with just 1 command
- Run all test files in a directory with just 1 command
- Show inline errors about what went wrong when running a test

Hi everyone, After careful consideration and further usage of the adapter, as well as observing increased community adoption, I have decided that the proposed functionality to generate test report files is no longer necessary for our project. The same goals can be achieved without this specific integration. Therefore, I am closing this issue. I greatly appreciate all the time and effort that everyone has put into this discussion and proposal.
2025-04-01T06:40:11.693034
2017-10-08T14:07:08
263720497
{ "authors": [ "Ojimadu", "jimscarver", "lapin7" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10071", "repo": "rchain/Members", "url": "https://github.com/rchain/Members/issues/120" }
gharchive/issue
Use Docusign for signing documents. We have now several documents that need a signature of either one person or more persons: SoW W9 tax form So we don't have to paste a Signature.jpg in documents anymore. Evan is in takes care of the process You do not need to use Docusign to sign documents. You can sign in a google doc by Insert Drawing, and Line, Scribble tool before exporting as PDF or using Adobe acrobat, Tools, Sign. Don't you think signing documents digitally would be easier to verify?
2025-04-01T06:40:11.703346
2024-01-04T10:06:59
2065358272
{ "authors": [ "jeff-godwin", "paulati" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10072", "repo": "rchereji/plot2DO", "url": "https://github.com/rchereji/plot2DO/issues/17" }
gharchive/issue
Errors in ploting dyads Hello team, I'm facing an issue while plotting dyad density (the occupancy plots come out fine). My data is from MNase-seq in yeast. The error messages are - Error in (function (classes, fdef, mtable) : unable to find an inherited method for function ‘coverage’ for signature ‘"NULL"’ Calls: Main ... ComputeNormalizationFactors -> coverage -> Could you please let me know what could be the issue? I can provide other information as required. Many thanks, Jeff Hi Jeffrey, based on the error, it looks like the reads are NULL. You get reads object from input file path, genome name, and annotations: rawReads <- LoadReads(params$inputFilePath, params$genome, annotations) reads <- CleanReads(rawReads, annotations$chrLen, params$lMin, params$lMax) Then, you call ComputeNormalizationFactors with reads as parameter: ComputeNormalizationFactors <- function(reads) { occ <- coverage(reads) If you want to share raw data and annotations with me, I can debug it. Best, Paula Hi Paula, Thanks for the reply. I have managed to troubleshoot the problem. It was coming from an issue with the BAM file index. It had been overwritten by the script. Thanks, Jeff
2025-04-01T06:40:11.735840
2015-03-15T07:04:37
61764552
{ "authors": [ "agundy", "seveibar" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10073", "repo": "rcos/Observatory3", "url": "https://github.com/rcos/Observatory3/pull/23" }
gharchive/pull-request
User profile Small fixes to enable users to click to see a profile and makes sure a user has a github username when they create an account. Looks good!
2025-04-01T06:40:11.746227
2023-03-23T20:45:48
1638290558
{ "authors": [ "aliciaaevans" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10074", "repo": "rcsb/py-rcsb_utils_targets", "url": "https://github.com/rcsb/py-rcsb_utils_targets/pull/9" }
gharchive/pull-request
Pharos-targets: Download sql file to separate dir The changes to https://github.com/rcsb/py-rcsb_workflow/pull/7 did the trick for stashing Pharos-targets data, however, it unnecessarily included the downloaded latest.sql.gz and pharos-update.sql in the file. It caused a 500 error from GitHub when trying to push 7 GB worth of 50 MB part files. This change puts the downloaded SQL dump into a separate directory and keeps Pharos-targets for what will be stashed and used by PharosTargetActivityProvider. This doesn't affect the rest of the code because it specifically uses the tdd files in Pharos-targets. That isn't changing, just the location of the SQL file that is downloaded to generate those files (via MySQL).
2025-04-01T06:40:11.763399
2022-10-17T11:25:35
1411402746
{ "authors": [ "Clueliss" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10075", "repo": "rdf4cpp/rdf4cpp", "url": "https://github.com/rdf4cpp/rdf4cpp/pull/82" }
gharchive/pull-request
Feature: Numeric Datatypes Implements the remaining xsd and owl datatypes and makes the appropriate datatype arbitrary precision. Additionally fixes the performance of numeric serialization by utilizing <charconv> Some tests missing. Tests failing because didn't implement fetchcontent stuff yet Also: will use the test from the other older numeric types PR Btw, compile times are really suffering. Maybe we should include explicit template specializations in a cpp file afterall. Although I'm not entirely certain that this would help TODOs: [ ] fix compile times [ ] fix float,double to_string [ ] make build without conan work Summed up, I think we can just remove the extra CMake file for datatypes, right? Yes, I just forgot to delete it 😅
2025-04-01T06:40:11.782619
2023-07-05T06:47:10
1788886333
{ "authors": [ "dwrobel", "pradeeptakdas" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10076", "repo": "rdkcentral/rdkservices", "url": "https://github.com/rdkcentral/rdkservices/pull/4185" }
gharchive/pull-request
RDKDEV-774 Add support for dm-verity based bundles Adds support for dm-verity[1] based encrypted bundles. It uses OMI[2] service to mount encrypted bundles. [1] https://docs.kernel.org/admin-guide/device-mapper/verity.html [2] https://code.rdkcentral.com/r/plugins/gitiles/rdk/components/generic/rdk-oe/meta-cmf/+/refs/heads/rdk-next/recipes-support/omi/omi.bb Signed-off-by: Damian Wrobel<EMAIL_ADDRESS>Signed-off-by: Damian Wrobel<EMAIL_ADDRESS>Change-Id: Iac4e76b9dcbe411ee4226a24a9b1ea8f197df681 Internal ticket for tracking https://ccp.sys.comcast.net/browse/RDKCOM-4033 Counterpart for the main branch: https://github.com/rdkcentral/rdkservices/pull/4177.
2025-04-01T06:40:11.788411
2023-06-02T18:25:54
1738625129
{ "authors": [ "vfscalfani", "wopozka" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10077", "repo": "rdkit/rdkit", "url": "https://github.com/rdkit/rdkit/issues/6436" }
gharchive/issue
InChI calculation for Sulfur with valence 4 and 6 omits hydrogens Describe the bug For molecules with sulfur with valence 4 and 6, the valence is reduced to 2 when computing an InChI and hydrogens are lost. I looked through the InChI Standard Valences, and I don't believe this is a normalization change (Appendix I in InChI technical manual) https://www.inchi-trust.org/all-downloadable-versions/

To Reproduce

```python
mol1 = Chem.MolFromSmiles('CN(C)[SH3]')
for atom in mol1.GetAtoms():
    print(atom.GetSymbol(), atom.GetTotalValence(), atom.GetTotalNumHs())
# C 4 3
# N 3 0
# C 4 3
# S 4 3

Chem.MolToInchi(mol1)
# InChI=1S/C2H7NS/c1-3(2)4/h4H,1-2H3

mol2 = Chem.MolFromSmiles('CN(C)[SH5]')
for atom in mol2.GetAtoms():
    print(atom.GetSymbol(), atom.GetTotalValence(), atom.GetTotalNumHs())
# C 4 3
# N 3 0
# C 4 3
# S 6 5

print(Chem.MolToInchi(mol2))
# InChI=1S/C2H7NS/c1-3(2)4/h4H,1-2H3
```

Expected behavior Here are the InChIs that I was expecting: For CN(C)[SH3] ---> InChI=1S/C2H9NS/c1-3(2)4/h1-2,4H3 For CN(C)[SH5] ---> InChI=1S/C2H11NS/c1-3(2)4/h1-2H3,4H5

Configuration (please complete the following information): RDKit version: 2023.03.1 OS: Ubuntu 22.04 Python version (if relevant): 3.11.3 Are you using conda? Yes If you are using conda, which channel did you install the rdkit from? conda-forge If you are not using conda: how did you install the RDKit? N/A

It is more a question for the InChI library authors than for RDKit itself, but on the other hand, the molecules you wrote are not correct, as in the form you wrote them a charge is missing.
2025-04-01T06:40:11.807238
2018-06-03T09:12:46
328812780
{ "authors": [ "ryanflorence", "zoontek" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10078", "repo": "reach/router", "url": "https://github.com/reach/router/pull/34" }
gharchive/pull-request
Install prop-types as peer, babel-plugin-dev-expression as dev Hello! I noticed this two errors in your package.json. As prop-types is used in your codebase, it makes it impossible to use tools like BundlePhobia: https://bundlephobia.com/result?p=@reach/router thanks, looks like you've got prop-types as a normal, not peer dependency? @ryanflorence Yes, it was the point of @sheepsteak, I change that to respect the prop-types project documentation.
2025-04-01T06:40:11.811205
2019-05-18T08:40:48
445694323
{ "authors": [ "jacques-trixta", "julienben" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10079", "repo": "react-boilerplate/react-boilerplate", "url": "https://github.com/react-boilerplate/react-boilerplate/issues/2657" }
gharchive/issue
Consider creating a truffle framework box on this template Is your feature request related to a problem? Please describe. So dapps (decentralized applications) are growing and are using react to create web apps to connect to the ethereum network. Truffle is the framework used to assist with this. there are templates called truffle boxes to create apps. I honestly think your react template is the best template out there and think you should integrate your amazing template Describe the solution you'd like I wanted to actually take a version of your boilerplate I have and create a box on it. But I thought I would let you know, maybe you want to, contribute yourselves. Describe alternatives you've considered Here are the steps to create a box. If you don't want to do this. Let me know, ill attempt to create one based on your template. Additional context Here is the react box I am using at the moment. Excellent! We'd love to be be part of a Truffle box for dapps! Personally what little time I can dedicate to OS work I try to dedicate to this project directly but I'd be happy to help out if any questions/issues come up. I'll keep this open as I'm sure the rbp+dapps intersection will be relevant to a few people who come here. Closing for lack of activity.
2025-04-01T06:40:11.814451
2018-11-20T23:32:36
382891496
{ "authors": [ "coveralls", "gretzky" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10080", "repo": "react-boilerplate/react-boilerplate", "url": "https://github.com/react-boilerplate/react-boilerplate/pull/2482" }
gharchive/pull-request
Update issue templates We should consider adding 2 separate templates for bug and feature to further guide people when they visit with their ideas/problems. imo the more guidance the better. Thanks @jwinn ! Coverage remained the same at 100.0% when pulling c236d01eba4e778dc048f6ce16e3cb5e65ea0fc2 on update-templates into 435e07b10b769da07c912ffb439dcd233fd3dbbb on master.
2025-04-01T06:40:11.819898
2017-05-12T13:29:20
228292234
{ "authors": [ "eduardoleal", "kelset", "onlydave" ], "license": "bsd-2-clause", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10081", "repo": "react-community/react-navigation", "url": "https://github.com/react-community/react-navigation/issues/1489" }
gharchive/issue
Parent Navigator of Nested loses path params Off the back of #1232

Simplified Example

```js
const AppStackNavigation = StackNavigator({
  Profile: {
    path: 'profile/:profileId',
    screen: TabNavigator({
      overview: {
        path: 'overview',
        screen: () => <View/>,
      },
      activity: {
        path: 'activity',
        screen: () => <View/>,
      },
    }),
  },
});
```

Paths and their results, using AppStackNavigation.router.getActionForPathAndParams:
- profile/123 => {routeName: 'Profile', params: {profileId: undefined}} ❌
- profile/123/ => {routeName: 'Profile', params: {profileId: undefined}} ❌
- profile/123/anything => {routeName: 'Profile', params: {profileId: 123}} 👍
- profile/123/activity => {routeName: 'Profile', params: {profileId: 123}, action: {routeName: 'activity'}} 👍

Simplified the results a bit to make them easier to understand. But basically I would have thought that stackitem/someparam as a path would still work if the stackitem is a navigator.

any update? Hi there @onlydave, In an effort to clean up this project and prioritize a bit, since this issue had no follow-up since my last comment I'm going to close it. If you are still having the issue (especially if it's a bug report) please check the current open issues, and if you can't find a duplicate, open a new one that uses the new Issue Template to provide some more details to help us solve it.
2025-04-01T06:40:11.821893
2017-08-23T19:01:42
252383756
{ "authors": [ "limeytrader007", "matthamil" ], "license": "bsd-2-clause", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10082", "repo": "react-community/react-navigation", "url": "https://github.com/react-community/react-navigation/issues/2453" }
gharchive/issue
How are we supposed to set this up when we don't use an EXPO project? Your example illustrates the assumption that the user has an App.js file built from Expo. How do we use the DrawerNavigator when we have index.ios.js and index.android.js? All I get is Native module cannot be null. Thanks Let those files, index.ios.js and index.android.js render a component which eventually renders your navigator, or render your navigator directly inside the render() method of your root component. Navigators are React components and can be rendered like any other component.
2025-04-01T06:40:11.837281
2020-07-23T01:48:48
664151336
{ "authors": [ "larissa-n", "laurentsenta", "zy-zero" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10083", "repo": "react-dnd/react-dnd", "url": "https://github.com/react-dnd/react-dnd/issues/2684" }
gharchive/issue
How can I use it on a mobile phone? Is your feature request related to a problem? Please describe. It looks like the react-dnd library does not support mobile devices. Can you provide any idea of how to implement it on mobile? Describe the solution you'd like I think your event listeners do not listen for the touch events (touch start, move and end), right? Not sure. Has anyone had a chance to look into this yet? I'm not a maintainer but have you tried the TouchBackend? It's documented on the first page, it might be helpful: https://react-dnd.github.io/react-dnd/docs/overview#backends The library currently ships with the HTML backend, which should be sufficient for most web applications. There is also a Touch backend that can be used for mobile web applications.
2025-04-01T06:40:11.848247
2020-02-09T16:37:24
562203591
{ "authors": [ "bluebill1049", "kotarella1110", "sigfriedCub1990" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10084", "repo": "react-hook-form/react-hook-form-devtools", "url": "https://github.com/react-hook-form/react-hook-form-devtools/issues/1" }
gharchive/issue
Contributor Hi @bluebill1049 , I've been working with react-hook-forms for a while and the results are great. In return I would love to contribute to this repo in any way I can. Cheers, sigfried. aha that would be awesome! I will keep you in the loop on what we can work together. you can take a preview on V1 there is a branch now. It's going to be an interesting and useful project :) It's going to be an interesting and useful project :) Indeed :+1: Hey @sigfriedCub1990 you want to help on the filter functionality? the filter name part?

```
cd app
yarn
yarn start
```

everything lives inside the devTools folder. Sure thing @bluebill1049 I'll look into it today's afternoon :) @kotarella1110 if you got sometimes, please help improve the types :) ❤️ @bluebill1049 yeah! i will improve types 💚 amazing! @kotarella1110 🙏 it's an ongoing thing, let's close this issue for now. feel free to send PR to improve this devtool
2025-04-01T06:40:11.883737
2018-06-18T07:40:50
333162912
{ "authors": [ "Esxiel", "Sebosam", "anton128", "deejaygeroso", "thomasw", "vonovak", "webdevfarhan" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10085", "repo": "react-native-community/google-signin", "url": "https://github.com/react-native-community/google-signin/issues/425" }
gharchive/issue
How To Get Refresh Token on Both iOS and Android Is it possible to get a refresh token with this package? Apparently it is not included in the documentation, so I'm assuming it is not configured to return a refresh token. And is it also possible to get the expiration date for the access token on Android? I've read that it is only available on iOS. Environment react-native : 0.55.0 react-native-google-signin : 0.12.0 react : 16.3.0-alpha.0

I found that this was fixed in #398. Should I fork the repo, or is there a better solution that can pull the newest commits?

I might be able to answer this question if you can provide some context about what you're trying to achieve. Are you looking for a refresh token because you're trying to do some offline API access, server-side perhaps?

I also have this problem. I am using the Google Contacts API v3 to get the contact list (name, photo). To get a contact photo I need an access_token, but it expires soon. How can I get a refresh_token to access the Google API when I need it?

I'm also facing the same problem; I'm wondering why I do not receive a refreshToken. Just like the post above, I'm looking for the refresh token in order to refresh the access token once it has been found to be expired.

related: https://stackoverflow.com/questions/35154509/android-how-to-get-refresh-token-by-google-sign-in-api

Thanks for the quick reply! I sent the serverAuthCode in a request and I only got a new accessToken. I tried some other things and I've just remembered that I only get a refreshToken on the first request. I got the refreshToken when I sent the serverAuthCode immediately on the first authentication.

Just for clarity, the access token you receive from GoogleSignin.getTokens() will only last for about an hour. To get a new access_token we need to send the serverAuthCode to https://oauth2.googleapis.com/token with the fields: client_id, client_secret, code (this is the serverAuthCode), grant_type (its value should be authorization_code), and redirect_uri (can be set from the developers console). Remember to only use the serverAuthCode that you get on your first attempt, when you allowed your app the permissions for the FIRST TIME, otherwise you will get a grant error every time. After getting the refresh_token, we need to get the new access_token using the refresh token we just got: replace the value of grant_type from authorization_code to refresh_token, also replace the code field with refresh_token, fill in its value, and send a POST request to the same URL. You will get a fresh access_token that will be valid for 1 hour.
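The exchange described above is ordinary OAuth 2.0 form posting against `https://oauth2.googleapis.com/token`. As a rough sketch (the credentials and redirect URI below are placeholders, and real values must be percent-encoded before sending, which is omitted here), the two request bodies differ only in the `grant_type` and the code/token field:

```rust
// Body for the first exchange: trade the one-time serverAuthCode
// (grant_type=authorization_code) for an access_token + refresh_token.
fn auth_code_body(
    client_id: &str,
    client_secret: &str,
    server_auth_code: &str,
    redirect_uri: &str,
) -> String {
    format!(
        "client_id={client_id}&client_secret={client_secret}&code={server_auth_code}\
         &grant_type=authorization_code&redirect_uri={redirect_uri}"
    )
}

// Body for later refreshes: grant_type switches to refresh_token and
// the stored refresh_token replaces the code field.
fn refresh_body(client_id: &str, client_secret: &str, refresh_token: &str) -> String {
    format!(
        "client_id={client_id}&client_secret={client_secret}\
         &refresh_token={refresh_token}&grant_type=refresh_token"
    )
}

fn main() {
    // Placeholder values for illustration only.
    let first = auth_code_body(
        "my-client-id",
        "my-secret",
        "code-from-first-sign-in",
        "https://example.com/callback",
    );
    let later = refresh_body("my-client-id", "my-secret", "stored-refresh-token");
    assert!(first.contains("grant_type=authorization_code"));
    assert!(later.contains("grant_type=refresh_token"));
    println!("{later}");
}
```

Each body is sent as `application/x-www-form-urlencoded` in a POST; the JSON response to the first exchange carries the refresh_token to store, and each refresh response carries a new short-lived access_token.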
2025-04-01T06:40:11.885125
2019-10-21T13:59:53
509982630
{ "authors": [ "PrantikMondal", "rewieer" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10086", "repo": "react-native-community/react-native-modal", "url": "https://github.com/react-native-community/react-native-modal/issues/363" }
gharchive/issue
Modal close animation from bottom to top not working animationOut={'slideOutUp'} is not working. Please reopen an issue following the issue template.
2025-04-01T06:40:11.891563
2020-01-02T15:35:09
544626950
{ "authors": [ "kopax", "msand" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10087", "repo": "react-native-community/react-native-svg", "url": "https://github.com/react-native-community/react-native-svg/issues/1234" }
gharchive/issue
Broken example Is this lib still maintained? The example is broken on both Android/iOS and web. What is the current way of loading an SVG in react-native + web? Thanks and best! Updated the example a bit, can you try again: https://snack.expo.io/@msand/react-native-svg-example The best way depends a bit on the use case. If the content is static, then I would recommend using react-native-svg-transformer with plain .svg files. If you need dynamic content, then you can import the elements from react-native-svg and create normal react components. They should work in react-native, react-native-web and plain react-dom (requires aliasing react-native-web to react instead) based projects. Thanks for updating the example. Many of the examples from the README.md produce errors, such as the non-existing SvgCss and SvgXml imported from react-native-svg. I was not able to import from .svg files, even though I did expo install react-native-svg. What worked best for me was recreating the SVG in React with Path from react-native-svg. I found that animateTransform does not exist in the library, while I wanted to use my SVG for the splash screen and the animation is required to make it spin. Why are some imports not defined in the main library? If possible I'd like to repair the import so I can do the animation. This is an example of the error I have when importing an SVG on the web: bundle.js:69367 Warning: </static/media/splashScreen.0f923900.svg /> is using incorrect casing. Use PascalCase for React components, or lowercase for HTML elements. in /static/media/splashScreen.0f923900.svg (at SplashScreen/index.js:34) Seems more like a react-native-svg-transformer related issue, or a naming issue. Are you trying to use SvgCss / SvgXml in an Expo web project? They aren't defined for the web yet, as there are several processing modes to choose from, and you can easily implement your own: https://github.com/react-native-community/react-native-svg/issues/1230 Yes, I was using it for the web. But that was a fallback decision, after I noticed that Platform.OS from react-native-web in Expo SDK 36 was always pointing to web, even when started in the Expo client on an iPhone or Android. Otherwise I would want the import of the .svg to work for the web; by work I mean importing a ReactComponent instead of showing a path string.
2025-04-01T06:40:11.894591
2017-03-16T19:57:12
214820811
{ "authors": [ "CharlesMangwa", "Druux", "you-fail-me" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10088", "repo": "react-native-community/react-native-tab-view", "url": "https://github.com/react-native-community/react-native-tab-view/issues/173" }
gharchive/issue
How to configure the tab slide transition? When I set a new index on the tab view's state (i.e. switch tabs programmatically, from a button click) the tabs switch too fast for me; I'd like a slower transition. The docs say there's a configureTransition callback, but don't say how it should be written or what the 'transition configuration' it returns should look like. Could you please shed some light on it and update the docs? Hi! You can use configureTransition as follows: configureTransition={() => ({ timing: Animated.spring, tension: 300, friction: 35, })} If it helps, you also have the TransitionConfigurator type in Flow, and you can check out how the callback is used under the hood right here :) @CharlesMangwa does this still work as described? For me the configureTransition prop has no effect. It doesn't get executed at all.
2025-04-01T06:40:11.908811
2023-04-04T18:54:54
1654410429
{ "authors": [ "mikehardy", "sohail-p" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10089", "repo": "react-native-device-info/react-native-device-info", "url": "https://github.com/react-native-device-info/react-native-device-info/pull/1529" }
gharchive/pull-request
feat(windows): implementation for useBatteryLevel & useBatteryLevelIsLow Description Fixes #1528 add subscription to Battery.ReportUpdated in Initialize and send event RNDeviceInfo_batteryLevelDidChange with latest battery level check for low battery condition of <20 and send event RNDeviceInfo_batteryLevelIsLow with latest battery level update readme to reflect added functionality Compatibility OS Existing Implemented iOS ✅ ❌ Android ✅ ❌ Windows ❌ ✅ Checklist [x] I have tested this on a device/simulator for each compatible OS [x] I added the documentation in README.md [ ] I updated the typings files (privateTypes.ts, types.ts) [ ] I added a sample use of the API (example/App.js) Code looks fine, thanks for the PR! The failure seems unrelated to the functionality: $ patch-package 'patch-package' is not recognized as an internal or external command, This seems like some sort of systemic issue but why is it just popping up now? I use patch-package all the time :-) - this should be working... I assume you have built + tested this locally? It all works for you? @mikehardy I was also trying to figure this out, but it seems to be some issue with the GitHub workflow, as the same run passes on my fork of the repo. GitHub workflows are sometimes wacky though, so I can't say for sure. You can check the E2E run details here - https://github.com/sohail-p/react-native-device-info/actions/runs/4613458100/jobs/8155450012?pr=2 I have tested Android and Windows locally but not iOS On the last re-run it failed on iOS because hashFiles took too long on the iOS worker 😆 - re-running, but obviously that has nothing to do with your PR; will merge regardless of status on the iOS run since Windows made it through this time, and it will auto-release Thanks @mikehardy
2025-04-01T06:40:11.910651
2021-03-12T03:31:53
829724866
{ "authors": [ "nlok5923", "pranshuchittora" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10090", "repo": "react-native-elements/playground", "url": "https://github.com/react-native-elements/playground/pull/38" }
gharchive/pull-request
fix part of #3: created separate CSS file for explore page Refers: #3 Changes: made a separate CSS file for the explore page. @pranshuchittora sir, please review this PR. Please pull the new changes @pranshuchittora sir, please review this PR.
2025-04-01T06:40:11.915303
2022-10-11T16:01:52
1404887915
{ "authors": [ "WillyRamirez", "bawb89", "kappu72" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10091", "repo": "react-native-maps/react-native-maps", "url": "https://github.com/react-native-maps/react-native-maps/pull/4478" }
gharchive/pull-request
fix: ios onRegionChange stops firing when animateCamera is called … Does any other open PR do the same thing? No What issue is this PR fixing? [iOS] onRegionChangeComplete is not firing at all (please link the issue here) #4110 How did you test this PR? On iOS, when calling animateCamera repeatedly, the event never stops firing as it did before. Create a map, create a flow of animateCamera called repeatedly, and print the result of onRegionChangeComplete. Tested on a simulator iPhone 13 iOS 15.4 and a real device iPhone Xr iOS 15.5 Can a maintainer please look at this? Hi @Simon-TechForm and @christopherdro, who is reviewing PRs on this lib currently? Could perhaps one of you be so kind as to review this one? We have a lot riding on this bug getting solved. Thanks!! @kappu72 Thanks again for the PR. I forked the branch from your PR and I receive the following error in my app when opening the screen that contains the map: View config getter callback for component AIRGoogleMap must be a function (received undefined) Is this something you're familiar with?
2025-04-01T06:40:11.918777
2021-03-26T12:52:34
841931888
{ "authors": [ "HSalaila", "gp3gp3gp3" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10092", "repo": "react-native-svg/react-native-svg", "url": "https://github.com/react-native-svg/react-native-svg/issues/1555" }
gharchive/issue
fix: Allow ',' in viewBox attribute Hi! 👋 Firstly, thanks for your work on this project! 🙂 Today I used patch-package to patch<EMAIL_ADDRESS>for the project I'm working on. I have added a replace regex to remove commas (,) in viewBox. According to MDN, viewBox numbers are separated by whitespace and/or a comma. Here is the diff that solved my problem: diff --git a/node_modules/react-native-svg/src/lib/extract/extractViewBox.ts b/node_modules/react-native-svg/src/lib/extract/extractViewBox.ts index c4515b5..8053617 100644 --- a/node_modules/react-native-svg/src/lib/extract/extractViewBox.ts +++ b/node_modules/react-native-svg/src/lib/extract/extractViewBox.ts @@ -38,7 +38,7 @@ export default function extractViewBox(props: { const params = (Array.isArray(viewBox) ? viewBox - : viewBox.trim().split(spacesRegExp) + : viewBox.trim().replace(/,/g, '').split(spacesRegExp) ).map(Number); if (params.length !== 4 || params.some(isNaN)) { This issue body was partially generated by patch-package. Let me know if you want me to create a pull request or not. This patch does not account for numbers that are separated by commas but not whitespace (for example: 0, 0, 400,69.83316168898044); updating it to the following will address that issue. viewBox.trim().replace(/,/g, ' ').replace(/ +/g, ' ').split(spacesRegExp)
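Putting the two comments together, a minimal standalone sketch of a comma-and-whitespace-tolerant viewBox parser could look like the following. This is not the library's actual implementation; the function name is illustrative:

```javascript
// Sketch of a viewBox parser that accepts whitespace- and/or comma-separated
// values, as the SVG spec allows. Not taken from react-native-svg itself.
function parseViewBox(viewBox) {
  const params = viewBox
    .trim()
    .replace(/,/g, ' ') // turn commas into spaces, even when no whitespace surrounds them
    .split(/\s+/)
    .map(Number);
  if (params.length !== 4 || params.some(Number.isNaN)) {
    throw new Error(`Invalid viewBox: "${viewBox}"`);
  }
  return params;
}

// Both separator styles yield the same result:
// parseViewBox('0, 0, 400,69.83') → [0, 0, 400, 69.83]
// parseViewBox('0 0 400 69.83')   → [0, 0, 400, 69.83]
```

Replacing commas with spaces (rather than deleting them, as the original diff did) is what handles the `400,69.83` case, since deleting the comma would fuse the two numbers into one token.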
2025-04-01T06:40:11.932136
2022-04-22T07:22:58
1211930608
{ "authors": [ "Adnan-Bacic", "freeboub", "hueniverse" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10093", "repo": "react-native-video/react-native-video", "url": "https://github.com/react-native-video/react-native-video/issues/2655" }
gharchive/issue
Minimum React Native version support I would like to only support RN versions from the past year. The thinking is that if you are not upgrading your RN app for more than a year, you are also probably not upgrading core dependencies like this one. This would put us at RN v0.64. This means publishing patches to older major versions if the latest has more recent requirements. For example, if we decide that v7 only supports RN 0.68 with the new architecture, we will continue to support v6 for RN versions up to a year old. Thoughts? I agree it can be problematic to maintain old react-native versions. In all the RN modules I've seen, they say they don't support version ... Now with turbo modules and the new architecture, I am not sure of the exact impact in terms of dependencies. Maybe @douglowder can give some clues. react-native-maps already does something similar, even more strictly actually: https://github.com/react-native-maps/react-native-maps#compatibility
2025-04-01T06:40:11.934478
2022-08-15T15:19:55
1339133662
{ "authors": [ "freeboub", "ubaid-wp" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10094", "repo": "react-native-video/react-native-video", "url": "https://github.com/react-native-video/react-native-video/issues/2814" }
gharchive/issue
not displaying properly on android Bug ## Android Environment info react-native version "react-native": "^0.64.2", "react-native-video": "^5.2.0", it takes time to display a video in a FlatList with images. To be followed in: https://github.com/react-native-video/react-native-video/issues/2668
2025-04-01T06:40:11.936151
2023-08-08T11:22:53
1841114174
{ "authors": [ "TheAlmightyBob", "nj0034" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10095", "repo": "react-native-webview/react-native-webview", "url": "https://github.com/react-native-webview/react-native-webview/issues/3090" }
gharchive/issue
Stopping momentum scroll triggers click event on Android #863 Same issue, but not fixed yet react-native version: 0.71.6 react-native-webview version: 11.26.0 I just tried to reproduce this and I see exactly the same behavior in Chrome. If you see different behavior between the WebView and Chrome, please share a sample repro.
2025-04-01T06:40:11.941488
2018-11-29T15:06:01
386340499
{ "authors": [ "Xyzor", "brentvatne", "satya164" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10096", "repo": "react-navigation/react-navigation-tabs", "url": "https://github.com/react-navigation/react-navigation-tabs/issues/73" }
gharchive/issue
You can't navigate back from the last tab to the first on iOS, if the tab has auto width Current Behavior When I give a width: 'auto' or width: null, flex: 1 to the tabStyle, I can't navigate back from the last page if there are not enough tabs to overflow the screen. Expected Behavior Either to be able to scroll the tabbar, or to not move the selected tab item to the left side of the screen like on android. How to reproduce In the demo, on iOS select the Settings2 menu and try to navigate back to the Home menu. A device which is big enough either in portrait or landscape to show all tabs. Your Environment software version react-navigation 2.18.2 react-native 0.57 node 10.13.0 npm or yarn yarn @satya164 - not sure what the expected behavior is for top tabs or if this is supported. @Xyzor it might be helpful if you share a mockup or description of what you're trying to accomplish My main goal is to use flex: 1 on tabStyle, instead of a fixed width, because with a fixed width the tab labels either break into new lines or some tabs are unnecessarily wide. On android the demo is working because when I select a tab, it doesn't move to the left. The width depends on the width of the tab bar. Dynamic width for the tab items is not supported because it's very complicated to implement. I'm not sure what you're trying to achieve, but you probably need to customize the tab bar itself or use a custom tab bar. @satya164 Thanks for the info.
2025-04-01T06:40:11.992223
2020-03-22T19:03:37
585788576
{ "authors": [ "RWOverdijk", "soroushm" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10097", "repo": "react-navigation/web", "url": "https://github.com/react-navigation/web/issues/50" }
gharchive/issue
Direct link not building hierarchy as desired Context/flow If it helps, this is what I'm trying to accomplish: Venues shows a list of venues Clicking on one of the venues takes you to VenueDetails At the venue you can add stuff to your cart, and open up your cart, which takes you to OrderDetails I need a way for the url to remember I took this route so I can go back down the chain in case the user refreshes or opens the link in a new tab. Description I've set up my linking as follows: export default function (containerRef) { return useLinking(containerRef, { prefixes: [Linking.makeUrl('/')], config: { Root: { initialRouteName: 'Venues', path: '', screens: { Venues: '', VenueDetails: { path: 'venue/:slug', parse: { slug: String }, }, OrderDetails: { path: 'venue/:slug/order-overview', parse: { slug: String }, }, } } } }) } What I want is to open OrderDetails when I navigate to /venue/some-slug/order-overview. If I reverse the order in the useLinking object (so: OrderDetails, VenueDetails and then Venue) this works, but it doesn't see that VenueDetails is the previous page. How can I tell react-navigation/linking that going "back" from OrderDetails means it should go to VenueDetails? And that going back from VenueDetails means it should go to Venue? Can I define a parent somehow? Maybe I need to put it in the url somehow? Note: it works fine when starting at the home page and doing the normal navigation flow. It's only when I open up a nested page directly that this fails. Is this repo still being worked on by the way? The last release was a while ago. I'd like to help but I don't know where to start. I believe this repo is deprecated and was moved into react-navigation; this file is for the web: https://github.com/react-navigation/react-navigation/blob/master/packages/native/src/useLinking.tsx
2025-04-01T06:40:12.007273
2023-04-11T14:16:04
1662592497
{ "authors": [ "C5H8NNaO4", "FujiwaraChoki", "MatchuPitchu", "ShaofeiZi", "bodinsamuel", "humphd", "jeremyckahn", "jtsorlinis", "kkn1125", "lritter79" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10098", "repo": "react-syntax-highlighter/react-syntax-highlighter", "url": "https://github.com/react-syntax-highlighter/react-syntax-highlighter/issues/513" }
gharchive/issue
Using PrismLight results in regular [object Object] output (but only in production) Describe the bug I'm using import { PrismLight as SyntaxHighlighter } from 'react-syntax-highlighter'; which results in regular [object Object] [object Object] ... output (around 7 of 10 refreshes of my webpage) for my code blocks (but only in production). Notice: import { Prism as SyntaxHighlighter } from 'react-syntax-highlighter'; has the same problem. Every 4th to 5th refresh, it's working correctly, otherwise it's rendering [object Object] ... To Reproduce Steps to reproduce the behavior: This is my implementation, notice that the processedCode output of the process function is always correct (also in production). But even without this process function, the problem persists. import { PrismLight as SyntaxHighlighter } from 'react-syntax-highlighter'; import tsx from 'react-syntax-highlighter/dist/esm/languages/prism/tsx'; import typescript from 'react-syntax-highlighter/dist/esm/languages/prism/typescript'; import { vs, vscDarkPlus } from 'react-syntax-highlighter/dist/esm/styles/prism'; SyntaxHighlighter.registerLanguage('tsx', tsx); SyntaxHighlighter.registerLanguage('typescript', typescript); interface ICodeBlock { code: string; language: 'typescript' | 'tsx'; } const process = (code = '') => { let skippedLeadingEmptyLines = false; let lastNonEmptyLineIndex = 0; let minRawStringIndentation = Number.MAX_SAFE_INTEGER; let numberOfRemovedLines = 0; const processNonEmptyLine = (line: string, index: number) => { // keep track of the index of the last non-empty line lastNonEmptyLineIndex = index - numberOfRemovedLines; // determine the minimum indentation level minRawStringIndentation = Math.min(minRawStringIndentation, Math.max(0, line.search(/\S/))); // return the processed line return [line.trimEnd()]; }; // split code into lines const codeLines = code.split('\n'); // remove empty lines, and process non-empty lines const nonEmptyLinesAtStart = codeLines.flatMap((line, index) => { 
if (!skippedLeadingEmptyLines) { if (line.match(/^\s*$/)) { numberOfRemovedLines += 1; return []; } skippedLeadingEmptyLines = true; return processNonEmptyLine(line, index); } if (line.match(/^\s*$/)) return ['']; return processNonEmptyLine(line, index); }); const nonEmptyLinesStartAndEnd = nonEmptyLinesAtStart.slice(0, lastNonEmptyLineIndex + 1); // If there are no non-empty lines, return an empty string if (nonEmptyLinesStartAndEnd.length === 0) return ''; const nonRawStringIndentationLines = minRawStringIndentation !== 0 ? nonEmptyLinesStartAndEnd.map((line) => line.substring(minRawStringIndentation)) : nonEmptyLinesStartAndEnd; return nonRawStringIndentationLines.join('\n'); }; export const CodeBlock = ({ code, language }: ICodeBlock) => { const { isLight } = useThemeContext(); const processedCode = process(code); const theme = isLight ? vs : vscDarkPlus; return ( <pre className={classes.pre}> <SyntaxHighlighter language={language} style={theme}> {processedCode} </SyntaxHighlighter> </pre> ); }; It's working when I do a bad workaround to force reloading of the component: // ... useEffect(() => { const timerId = setTimeout(() => setIsReloaded(true), 0); return () => clearTimeout(timerId); }, []); return isReloaded ? ( <pre className={classes.pre}> <SyntaxHighlighter language={language} style={theme}> {processedCode} </SyntaxHighlighter> </pre> ) : null; Expected behavior Output my code string (it's a simple code string of a React Component) instead of [object Object] in some cases. Oddly enough in my dev environment it's always working fine, only in production is the rendering issue with [object Object] Screenshots Desktop (please complete the following information): Browser firefox Version 111.0.1 (64-Bit) I'm getting this issue on a non-local environment as well We see this same issue intermittently with PrismAsyncLight as well. 
For others with this issue, using @fenkx's fork fixes the issue, the easiest way to switch is just change this line in your package.json: "react-syntax-highlighter": "npm<EMAIL_ADDRESS> I have the same issue but I guess this package is not maintained anymore 😕 I had the same problem, but the above scenario didn't solve my problem. +1 May I add my opinion here? I'm experiencing the same issue. Hopefully, it has been resolved, but I want to share this for anyone who might be facing the same problem. I think the following procedure is causing the issue: Open a page with target="_blank" (in this case, the user only opens the tab without switching to it). Once the tab is loaded and the user switches to it, the code is displayed as [Object object]. Therefore, I have made it re-render when the user switches tabs. Below is a part of the code I wrote. function CodeBlock({/* ... */}) { const [codeBlock, setCodeBlock] = useState<ReactElement | null>(null); useEffect(() => { renderCodeBlock(); document.addEventListener('visibilitychange', renderCodeBlock); return () => { setCodeBlock(() => null); document.removeEventListener('visibilitychange', renderCodeBlock); }; }, []); function renderCodeBlock() { setCodeBlock(() => ( <Box component={SyntaxHighlighter} language={language} showLineNumbers style={atomDark} > {code.trim()} </Box> )); } return ( <Box> {codeBlock === null ? ( <CircularProgress /> ) : ( codeBlock )} </Box> ); } Please note that this text has been translated and might contain some unnatural parts. This happens to me as well, what can I do about this? I'm experiencing this issue as well. I was able to work around it by adding this to my index.tsx file before I render my React app: import { PrismAsyncLight as SyntaxHighlighter } from 'react-syntax-highlighter' ReactDOM.createRoot(document.createElement('div')).render( <SyntaxHighlighter language="" children={''} /> ) Can confirm, the workaround by jeremyckahn works with SSR and production. 
No more problems when using hydrateRoot.
2025-04-01T06:40:12.012480
2018-10-24T16:17:01
373564533
{ "authors": [ "Mukesh-Bhootra", "tannerlinsley" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10099", "repo": "react-tools/react-table", "url": "https://github.com/react-tools/react-table/issues/1167" }
gharchive/issue
Vertical scrollbar hiding last column data Describe the bug The vertical scrollbar hides the last column's data if we have a horizontal bar too and the last column's width is reduced. To Reproduce Any react table with 3 or more columns: increase the width of the middle column and reduce the width of the last column; when you scroll right to the end you will find one or two characters hiding under the vertical scroll bar. Expected behavior The vertical scrollbar should sit past the last column's width and should not hide the details shown in the last column. Codesandbox! You can use the https://codesandbox.io/s/o5np5p0nrz codesandbox and increase the width of the columns. Adding a screenshot too. Screenshots Desktop (please complete the following information): Windows / Chrome Browser - chrome Version 68.0.3440.106 RT wasn't designed for inline-scrolling, hence the pagination, so unfortunately, there is no official support for this use-case. You're more than welcome to explore and ask in the forum though! https://spectrum.chat/react-table @Mulli can you suggest how to do this? Looking for a few workarounds
2025-04-01T06:40:12.015331
2019-12-21T09:00:58
541303755
{ "authors": [ "devbrunopaula", "hetmann", "litehacker" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10100", "repo": "react-ui-kit/dribbble2react", "url": "https://github.com/react-ui-kit/dribbble2react/issues/50" }
gharchive/issue
Errors after downloading JavaScript bundle Warning: React.createElement: type is invalid -- expected a string (for built-in components) or a class/function (for composite components) but got: %s.%s%s, undefined, You likely forgot to export your component from the file it's defined in, or you might have mixed up default and named imports. Check your code at Button.js:38., in Button (at Welcome.js:144) in RCTView (at Block.js:155) in Block (at Welcome.js:143) in RCTView (at Block.js:155) in Block (at Welcome.js:129) in Welcome (at SceneView.js:9) in SceneView (at StackViewLayout.tsx:899) in RCTView (at StackViewLayout.tsx:898) in RCTView (at StackViewLayout.tsx:897) in RCTView (at createAnimatedComponent.js:151) in AnimatedComponent (at StackViewCard.tsx:106) in RCTView (at createAnimatedComponent.js:151) in AnimatedComponent (at screens.native.js:71) in Screen (at StackViewCard.tsx:93) in Card (at createPointerEventsContainer.tsx:95) in Container (at StackViewLayout.tsx:985) in RCTView (at screens.native.js:101) in ScreenContainer (at StackViewLayout.tsx:394) in RCTView (at createAnimatedComponent.js:151) in AnimatedComponent (at StackViewLayout.tsx:384) in PanGestureHandler (at StackViewLayout.tsx:377) in StackViewLayout (at withOrientation.js:30) in withOrientation (at StackView.tsx:104) in RCTView (at Transitioner.tsx:267) in Transitioner (at StackView.tsx:41) in StackView (at createNavigator.js:80) in Navigator (at createKeyboardAwareNavigator.js:12) in KeyboardAwareNavigator (at createAppContainer.js:430) in NavigationContainer (at App.js:62) in RCTView (at Block.js:155) in Block (at App.js:61) in App (at withExpoRoot.js:26) in RootErrorBoundary (at withExpoRoot.js:25) in ExpoRoot (at renderApplication.js:40) in RCTView (at AppContainer.js:101) in RCTView (at AppContainer.js:119) in AppContainer (at renderApplication.js:39) ` Also this one: Element type is invalid: expected a string (for built-in components) or a class/function (for composite components) but 
got: undefined. You likely forgot to export your component from the file it's defined in, or you might have mixed up default and named imports. Check the render method of `Button`. @litehacker hey, which version of Expo and React-Native are you using? any word on this bug? I got the same bug "expo": "^36.0.0", "react": "16.9.0", @devbrunopaula I think this is because the current code base is based on an older expo version. I'll update the code base to the latest versions.
2025-04-01T06:40:12.027727
2022-10-17T11:21:42
1411397637
{ "authors": [ "Ezekiel8807" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10101", "repo": "reactdeveloperske/reactdevske-website", "url": "https://github.com/reactdeveloperske/reactdevske-website/pull/89" }
gharchive/pull-request
added logo component and made the logo component spin Fixes Issue PR to fix issue #84 Changes proposed added a logo component in the components directory and made the logo spin Requirements Create a component named Logo in the components folder that implements the screenshots below. It should accept a size as a prop for the different sizes on the header and footer. The logo should rotate similar to how it does in a fresh create react app installation as shown on this GIF. Obtain the logo from the Figma design. Use THIS VIDEO as a guide Acceptance Criteria [x] The implementation should match the design. Screenshots Note to reviewers The logo spin animation was implemented using the Tailwind CSS animation class. OK, I will do that now.
2025-04-01T06:40:12.040803
2017-10-23T00:34:04
267512645
{ "authors": [ "Throne3d", "ekmartin", "pkovac", "sethlesky" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10102", "repo": "reactiflux/discord-irc", "url": "https://github.com/reactiflux/discord-irc/issues/330" }
gharchive/issue
Memory leak leads to heap overflow 14 Oct 15:06:57 - Received: :nova.esper.net PONG nova.esper.net :115367 <--- Last few GCs ---> [2539:0x4299a10]<PHONE_NUMBER> ms: Mark-sweep 1325.7 (1350.2) -> 1323.7 (1350.2) MB, 718.2 / 0.1 ms allocation failure GC in old space requested [2539:0x4299a10]<PHONE_NUMBER> ms: Mark-sweep 1323.7 (1350.2) -> 1323.6 (1350.2) MB, 977.4 / 0.0 ms allocation failure GC in old space requested [2539:0x4299a10]<PHONE_NUMBER> ms: Mark-sweep 1323.6 (1350.2) -> 1323.6 (1350.2) MB, 979.5 / 0.0 ms last resort [2539:0x4299a10]<PHONE_NUMBER> ms: Mark-sweep 1323.6 (1350.2) -> 1323.3 (1350.2) MB, 969.6 / 0.0 ms last resort <--- JS stacktrace ---> ==== JS stack trace ========================================= Security context: 0x2a90f6a29891 2: heartbeat [/home/sfnet-discord-bot/node-v8.1.2-linux-x64/lib/node_modules/discord-irc/node_modules/discord.js/src/client/websocket/WebSocketConnection.js:~407] [pc=0x1141bd0b5fab](this=0x3bc473b2d581 <an EventEmitter with map 0x263359c3e3d1>,time=0x222777302311 ) 3: arguments adaptor frame: 0->1 4: _onTimeout [/home/sfnet-discord-bot/node-v8.1.2-linux-x64/lib/node_modu... FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory 1: node::Abort() [node] 2: 0x13647ec [node] 3: v8::Utils::ReportOOMFailure(char const*, bool) [node] 4: v8::internal::V8::FatalProcessOutOfMemory(char const*, bool) [node] 5: v8::internal::Factory::NewUninitializedFixedArray(int) [node] 6: 0xe90ca3 [node] 7: v8::internal::Runtime_GrowArrayElements(int, v8::internal::Object**, v8::internal::Isolate*) [node] 8: 0x1141bc30437d Aborted I don't understand much of what's written here. From the stack trace, it looks like… the very last… call… was from discord.js? I don't think that helps track the problem down, though. Do you have any more information on the problem? I've never noticed it, so I wonder if you're doing something weird in your setup (are you on a low-memory machine?). 
No, I was running into the Node v8 default heap limit. There's not enough info here, but you can probably find it fairly quickly if you try pulling in heapdump from npm and take some snapshots as the bot is running: https://blog.risingstack.com/finding-a-memory-leak-in-node-js/ I'll look into it a bit more at some point, but my fix for now was to just stick discord-irc into a systemd unit so it autorestarts when this happens. Have there been any updates on this? I'm having what appears to be a similar issue as seen in the following log: error Command failed with exit code 134. Aborted 10: 0x15a8c4bc3b67 9: 0x8e62c6 [/nodejs/bin/node] 8: node::StringBytes::Encode(v8::Isolate*, char const*, unsigned long, node::encoding, v8::Local<v8::Value>*) [/nodejs/bin/node] 7: v8::String::NewFromUtf8(v8::Isolate*, char const*, v8::NewStringType, int) [/nodejs/bin/node] 6: v8::internal::Factory::NewStringFromUtf8(v8::internal::Vector<char const>, v8::internal::PretenureFlag) [/nodejs/bin/node] 5: v8::internal::Factory::NewRawTwoByteString(int, v8::internal::PretenureFlag) [/nodejs/bin/node] 4: v8::internal::V8::FatalProcessOutOfMemory(char const*, bool) [/nodejs/bin/node] 3: v8::Utils::ReportOOMFailure(char const*, bool) [/nodejs/bin/node] 2: 0x8ccf9c [/nodejs/bin/node] 1: node::Abort() [/nodejs/bin/node] FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory 3: unpack [/app/node_modules/discord.js/src/client/websocket/WebSocketConnection.js:~170] [pc=0x15a8c49070bd](this=0x29ae8a484649 <EventEmitter map =… 2: arguments adaptor frame: 0->3 1: toString [buffer.js:~609] [pc=0x15a8c495b041](this=0x253f2b47cc99 <Uint8Array map = 0x32dd30a43a71>,encoding=0x282dd32822d1 <undefined>,start=0x282dd32822d1 <undefined>,end=0x282dd32822d1 <undefined>) Security context: 0x2af5574a58b9 <JSObject> ==== JS stack trace ========================================= <— JS stacktrace —> [27:0x3001070] 70137123 ms: Mark-sweep 1292.3 (1470.9) -> 1292.3 (1470.4) MB, 2666.6 / 0.0 
ms last resort GC in old space requested [27:0x3001070] 70134456 ms: Mark-sweep 1292.3 (1511.9) -> 1292.3 (1470.9) MB, 2829.5 / 0.0 ms last resort GC in old space requested [27:0x3001070] 70131626 ms: Mark-sweep 1292.4 (1509.9) -> 1292.3 (1511.9) MB, 2763.2 / 0.0 ms allocation failure GC in old space requested <— Last few GCs —> Looking through the discord.js issues related to memory leaks [0] it seems like they cache a lot of things without actually clearing the cache. It doesn't seem like this is something they intend to fix: https://github.com/discordjs/discord.js/issues/1409#issuecomment-433648959 On the other hand, it's possible to disable some of these caches [1], so maybe that's what we want to do? [0] https://github.com/discordjs/discord.js/search?q=memory+leak&type=Issues [1] https://github.com/discordjs/discord.js/pull/2883 @ekmartin Have you had any luck with disabling the cache? It looks like the fork was abandoned due to unexpected behavior when disabling stores. https://github.com/discordjs/discord.js/pull/2883#issuecomment-433649130 Our Discord bot's heap can grow to 2 GB in a couple of hours, resulting in a crash. And we have under 50 Discord groups using the bot, so this type of rapid memory expansion seems odd, given there are some bots with pretty massive usage. Thanks for pointing me to those threads. 👍 Is this still an issue? I'm looking into updating discord.js, but I don't know if they've made any improvements here.
2025-04-01T06:40:12.043596
2020-08-09T04:34:21
675634764
{ "authors": [ "Throne3d", "coveralls" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10103", "repo": "reactiflux/discord-irc", "url": "https://github.com/reactiflux/discord-irc/pull/555" }
gharchive/pull-request
Upgrade discord.js to version 12 Also updates test stubs. WIP: [ ] Manually test most main cases (pings of users + roles, emoji use, multi-channeling, join messages) [x] Investigate hanging tests (looks to be due to Bot Events) Coverage increased (+0.03%) to 96.884% when pulling 6a986152eddf95f1c1d67ea67daf28e687fd3010 on upgrade/discordjs-12 into ee2d70fac12860b9a78043b7f121979929842493 on master.
2025-04-01T06:40:12.052440
2020-07-27T07:39:08
666064080
{ "authors": [ "balibebas", "janus-reith" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10104", "repo": "reactioncommerce/example-storefront", "url": "https://github.com/reactioncommerce/example-storefront/issues/708" }
gharchive/issue
Add Maintenance Page Feature Name Add Coming Soon and Maintenance Pages Summary description Provide two pages for stores which are either (1) not yet launched or (2) undergoing maintenance. In both cases provide the user an experience that allows the user to take some action which continues to grow the business (e.g. a launch email sign-up form). Rationale for why this feature is necessary It's standard fare for e-commerce sites. The launch page is especially helpful for allowing store operators to log in and view the store in production before it's launched. Expected use cases Prospective customers may sign up for email notification/newsletter prior to launch. Store operators may log in from Coming Soon if they've been given the credentials by the admin. Admins may demo a production-ready product to stakeholders and gather feedback. Shoppers receive a more professional experience when the site is down. Here's an example Coming Soon page I mocked up using Chakra UI: Thanks for taking the time to file this issue and preparing a mockup @balibebas! The example storefront, as the name says, is just an example implementation meant to be customized. I don't really consider such a screen a priority at all as it is easy to add one based on individual requirements, although I also wouldn't consider it an issue to showcase this functionality if someone wants to do a PR. The issue I see is that this would either involve some more work or give a false sense of security. To offer a proper maintenance mode, there would be adjustments on the API to maybe store this on the shop document, some admin UI to toggle this, and then it could be used during getStaticProps in the storefront to present such a maintenance screen. But the API would actually need to take that into consideration and deny access to most of the queries apart from shop for non-authorized users. 
Also, as we use static generation where possible, still allowing storefront access to some authenticated users could be another challenge and would probably be solved by using Next.js preview mode. Honestly I wouldn't expect what I just described to be implemented anytime soon, although I guess such a maintenance mode would be a welcome addition if someone wants to provide PRs for that. Wonder what @mikemurray and @focusaurus think about this. A really quick and simple solution instead could be to just add some MAINTENANCE_MODE env variable and accompanying pages + redirects, if others agree that this provides any value to the way they work with the storefront. Similar to how the IOU payment example works for the API plugins, I feel it would be good to keep a custom landing page of some sort present so this rather typical use case will always be considered as refactoring of the app and docs occurs. Given that not everyone needs a fancy landing page, providing some sort of experience for API outages or errors would be nice. So having a maintenance page which appears when one clicks a product during an outage would be useful for most uses. Closing due to lack of activity
2025-04-01T06:40:12.064025
2018-04-23T22:22:25
317004222
{ "authors": [ "mikemurray", "spencern" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10105", "repo": "reactioncommerce/reaction-next-starterkit", "url": "https://github.com/reactioncommerce/reaction-next-starterkit/issues/56" }
gharchive/issue
Add segment compatible analytics event tracking to the Product Grid We need to start tracking ecommerce analytics events in our starterkit. Initially we'll be tracking all of the Segment V2 Ecommerce Events. You can read the docs for that here: https://segment.com/docs/spec/ecommerce/v2/ Start by adding documentation for the events tracked to our event tracking/analytics documentation, explaining the properties tracked and any mapping decisions that were made. For our current implementation of the Product Grid, we'll need to track two events: Product List Viewed Product Clicked @spencern The product price may be a range, should I take the min or max or neither? I'd take the min if a range exists maybe create a constant for this that we can permit customization of in the .env file later as well?
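The Segment V2 spec linked above defines the property names for these events. This is a minimal sketch of the payload shape, written in Python rather than the starterkit's JavaScript; the price-range handling follows the "take the min" comment above, while the `list_id` property and the input dict shape are assumptions for illustration:

```python
def product_clicked_event(product, list_id="product-grid"):
    """Build a Segment-style "Product Clicked" track payload.

    `product` is a hypothetical dict with "id", "name", and either a
    single "price" or a "prices" range; not the starterkit's real model.
    """
    prices = product.get("prices") or [product.get("price")]
    return {
        "event": "Product Clicked",
        "properties": {
            "product_id": product["id"],
            "name": product["name"],
            # For a price range, track the minimum (per the thread above).
            "price": min(p for p in prices if p is not None),
            "list_id": list_id,
        },
    }
```

A "Product List Viewed" event would be built the same way, with a list of such product payloads under its `products` property.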
2025-04-01T06:40:12.066341
2016-01-25T20:35:52
128632318
{ "authors": [ "davekonopka", "jmound", "sairez" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10106", "repo": "reactiveops/st2-pack-omnia", "url": "https://github.com/reactiveops/st2-pack-omnia/pull/2" }
gharchive/pull-request
Sync repo with latest from st2-chatops-aliases master This PR syncs a few stray commits from st2-chatops-aliases master branch committed there after creating this cloned repo. This is the first step in shuttering the st2-chatops-aliases forked community repo and using this repo exclusively for our Stackstorm setups. Boxed seems to have been using the use-existing-dyn-inventory branch on this repo. GMR has been using the st2-chatops-aliases repo master branch. With this sync up we can switch GMR over to this repo and delete the st2-chatops-aliases repo. We can decide how to handle Boxed's branch separately from this PR. fwiw, the actions in Boxed branch will probably be moved into the boxed-infrastructure repo. they are highly customized in a way I'm not sure we can abstract out. Looks good to me.
2025-04-01T06:40:12.067683
2015-09-12T08:33:49
106141358
{ "authors": [ "hglattergotz", "kahlil" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10107", "repo": "reactivepod/fido", "url": "https://github.com/reactivepod/fido/issues/1" }
gharchive/issue
Refactor to ES6 Either simply write the entire source in ES6 and require Node 4 in the package.json or implement transpilation with Babel. This has been done in my recent clean up in https://github.com/reactivepod/fido/pull/4 via require('babel/register');.
2025-04-01T06:40:12.076931
2017-08-24T23:14:50
252757039
{ "authors": [ "coveralls", "dnfclas", "kentcb" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10108", "repo": "reactiveui/ReactiveUI", "url": "https://github.com/reactiveui/ReactiveUI/pull/1433" }
gharchive/pull-request
fix: remove duplicate ComponentModelTypeConverter What kind of change does this PR introduce? (Bug fix, feature, docs update, ...) fix What is the current behavior? (You can also link to an open issue here) The ReactiveUI.Wpf project includes ComponentModelTypeConverter even though it's already in ReactiveUI. What is the new behavior (if this is a feature change)? Remove ComponentModelTypeConverter from ReactiveUI.WPF. Also, fix comments in the copy inside ReactiveUI because it actually has nothing to do with WPF. What might this PR break? Nothing realistic. Please check if the PR fulfills these requirements [ ] The commit follows our guidelines: https://github.com/reactiveui/reactiveui#contribute [ ] Tests for the changes have been added (for bug fixes / features) [ ] Docs have been added / updated (for bug fixes / features) Other information: @kentcb, Thanks for having already signed the Contribution License Agreement. Your agreement was validated by .NET Foundation. We will now review your pull request. Thanks, .NET Foundation Pull Request Bot Changes Unknown when pulling 88a668b9490a0b9214b726cc1dc5d17e7eda0df7 on kentcb:component-model-type-converter into reactiveui:develop.
2025-04-01T06:40:12.078875
2017-06-07T07:31:29
234117220
{ "authors": [ "claydiffrient", "diasbruno", "jamesjjk" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10109", "repo": "reactjs/react-modal", "url": "https://github.com/reactjs/react-modal/issues/396" }
gharchive/issue
Farewell For all the details check out my Medium post. The tl;dr is basically that I’m overwhelmed and out of time so I’m turning everything over to @diasbruno as I step away from React Modal. Thanks for everything and goodbye! Thank you so much for your time managing react-modal. @claydiffrient Awesome work!
2025-04-01T06:40:12.081759
2017-03-02T13:13:21
211387267
{ "authors": [ "diasbruno" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10110", "repo": "reactjs/react-modal", "url": "https://github.com/reactjs/react-modal/pull/341" }
gharchive/pull-request
[chore] added missing babel transformer plugin. Running webpack --config webpack.config.js fails due to missing babel's spread transformer. ERROR in ./examples/basic/app.js Module build failed: SyntaxError: Unexpected token (16:20) 14 | 15 | openModal: function() { > 16 | this.setState({ ...this.state, modalIsOpen: true }); | ^ 17 | }, 18 | 19 | closeModal: function() { Acceptance Checklist: [x] All commits have been squashed to one. [x] The commit message follows the guidelines in CONTRIBUTING.md. [x] Documentation (README.md) and examples have been updated as needed. [x] If this is a code change, a spec testing the functionality has been added. [x] If the commit message has [changed] or [removed], there is an upgrade path above. This will no longer be necessary.
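For context, in that era of Babel (v6) the missing transform was typically enabled in `.babelrc`. A sketch of such a config; the presets shown are assumptions, and only the spread plugin line relates to this PR:

```json
{
  "presets": ["es2015", "react"],
  "plugins": ["transform-object-rest-spread"]
}
```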
2025-04-01T06:40:12.162112
2023-02-16T13:40:19
1587698785
{ "authors": [ "Sachin-chaurasiya", "atapas", "koustov", "siddhantsiddh15" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10112", "repo": "reactplay/react-play", "url": "https://github.com/reactplay/react-play/pull/974" }
gharchive/pull-request
[#2PlaysAMonth]: Image Gallery - Create a responsive image gallery by using the free Unsplash API First thing, PLEASE READ THIS: ReactPlay Code Review Checklist Description The project uses the Unsplash API, which is fetched using axios and then populated in the Photo component. The following things have been used in this project: useState useEffect React Form Component Refactoring Material UI icon Axios npm module CSS Grid Infinite scroll functionality (this feature is still not showing in the react play app but is working on the local file system; I am working on it) The app is responsive and redirects to the individual images and profiles when they are clicked. Fixes #910 Type of change [ ] New feature (non-breaking change which adds functionality) How Has This Been Tested? To see the working of the app, repeat the following: Go to the app. Some images are loaded on the screen by default Search for any keyword. The screen will update with new images. Hover over any image, the cursor will turn to a pointer. On clicking any image, the page will redirect to that image. On hovering, the profile name and likes will come in front of the image. On clicking the profile photo, the profile of the image contributor will open The page on scrolling down should show more images. Checklist: [ ] I have performed a self-review of my own code [ ] I have commented my code, particularly in hard-to-understand areas [ ] I have made corresponding changes to the documentation [ ] My changes generate no new warnings [ ] Any dependent changes have been merged and published in downstream modules Screenshots or example output @siddhantsiddh15 , Thanks for the PR. I would request you change the PR title to "[#2PlaysAMonth]: Image Gallery - Create a responsive image gallery by using the free Unsplash API" and the second thing is to link your PR with the issue by adding the below line in the PR description. 
Example: Fixes #910 Greetings, I have updated the pull request name and linked the issue with the issue number Thanks @siddhantsiddh15 , Please format and lint the code by following this guide. https://github.com/reactplay/react-play#format-the-code Greetings Have formatted the code as per the guidelines as mentioned here https://github.com/reactplay/react-play#format-the-code and then pushed the code in the branch. Thanks @siddhantsiddh15 Kindly resolve the merge conflict These errors are not in my edited files, what can I do to run the react play? @siddhantsiddh15 Catch me up on Discord today to close it. Link to the video as I was facing repeated issues in creating an account on Stack Stream. Thank you for merging my branch into main. Can I close this pull request now? Inspect I have kicked off a build. Please check if it is successful and test if things are fine. @siddhantsiddh15 here is the preview build.. I see the changes are breaking styles. Please take a look https://react-play-git-fork-siddhantsiddh15-unsplash-8f3010-reactplayio.vercel.app/ Have recorded the video here . The website is responsive and is working correctly. @siddhantsiddh15 almost there.. please add a cover image Also edit your play from localhost and add the stream recording. Ping here when you're done, will merge it. @siddhantsiddh15 let us know when done Have added the cover image. Have added the cover image. Thanks! The cover image should be in KBs, please reduce the size. Also confirm that your demo recording has been added to the play by editing it. Greetings, I have not added the demo recording in the Play. I have updated the size of the cover.png to 13 kb. Regards Greetings, I have not added the demo recording in the Play. I have updated the size of the cover.png to 135 kb. Regards Can you please add the demo recording too.. then it's all done. Cannot do it. Having difficulty. Can we skip the recording portion? Cannot do it. Having difficulty. Can we skip the recording portion? 
Ok, no worries, no pressure. It's still valuable to get your work in. I am just curious about what kind of issues you are facing. If you can post about it in our Discord, I may try the resolution.. In fact, you can add the recording after merge too...(before 5th March) I am having a slow internet connection due to the place I have travelled to; the recording is a big file to upload. Hey @siddhantsiddh15 , this play looks cool. I will be waiting for the video link to be updated before merging it to production branch Okay, will update it by tonight. Hey @siddhantsiddh15 , this play looks cool. I will be waiting for the video link to be updated before merging it to production branch I have uploaded here the updated video of the play. I have checked the responsiveness and working of my play. The delay from my side was unintended; I have uploaded the video as soon as I got a good internet connection. Well the video should be on https://stack-stream.com/ I can get this play merged if everything is ok however please record a stackstream video before EOD @atapas you need to unblock in order to merge this PR @siddhantsiddh15 please record demo on stackstream
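The search flow described in this thread (default images, keyword search, paginated infinite scroll) boils down to building paginated requests against the Unsplash search endpoint. This is a sketch of that URL construction in Python; the actual play uses axios in JavaScript, and the `per_page` default and placeholder access key are assumptions:

```python
from urllib.parse import urlencode

UNSPLASH_SEARCH = "https://api.unsplash.com/search/photos"


def build_search_url(query, page=1, per_page=12, client_id="YOUR_ACCESS_KEY"):
    """Build the Unsplash search URL for one page of results.

    Each infinite-scroll event bumps `page` and re-fetches; the response's
    `results` array is then appended to the already-rendered grid.
    """
    params = {
        "query": query,
        "page": page,
        "per_page": per_page,
        "client_id": client_id,  # hypothetical placeholder key
    }
    return f"{UNSPLASH_SEARCH}?{urlencode(params)}"
```

The JavaScript version would pass the same parameters as axios's `params` option instead of encoding them by hand.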
2025-04-01T06:40:12.173408
2017-02-21T16:19:21
209195477
{ "authors": [ "JonSilver", "TheSharpieOne", "Y-Taras", "dhanyn10", "eddywashere", "nathfy", "piavgh", "softmixt" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10113", "repo": "reactstrap/reactstrap", "url": "https://github.com/reactstrap/reactstrap/issues/336" }
gharchive/issue
Reactstrap and React-router 4.0.0-beta.6 - active Issue description components: Navlink Steps to reproduce issue I'm using Reactstrap and React-router 4.0.0-beta.6 in the educational project that is located on gitlab with custom domain - http://harazd.org/. According to Reactstrap docs: that's the way I should use active navlink import { NavLink } from 'reactstrap' ... <NavLink href="#" active>Link</NavLink> According to React-router v4 docs: import { NavLink } from 'react-router-dom' ... <NavLink to="/about" activeClassName="active">About</NavLink> So how should I implement the NavLink active state and use react-router? The simplest answer is that you can't use reactstrap navlink active prop when passing in react-router navlink. And that's fine because it's just 1 class active and react-router NavLink can take care of that with the activeClassName prop. To use both, you'll need to rename one of those and use the tag prop in reactstrap NavLink. import { NavLink } from 'reactstrap'; import { NavLink as RRNavLink } from 'react-router-dom'; <NavLink to="/about" activeClassName="active" tag={RRNavLink}>About</NavLink> Thanks for a quick solution (You can paste your answer here: http://stackoverflow.com/questions/42372179/reactstrap-and-react-router-4-0-0-beta-6-active-navlink). And one more thing : ... - root path is always active for some reason.. The root path is always active because '/' is in every path. I am not too familiar with react-router-dom and its NavLink, it looks like it has an exact prop which may be what you are looking for. Here's my router code, so I do use exact for root path. 
```jsx
<BrowserRouter>
  <div className='app'>
    <Switch>
      <Route exact path='/' component={Landing} />
      <Route path='/products' component={Products} />
      <Route path='/services' component={Services} />
      <Route path='/price' component={Price} />
      <Route path='/contacts' component={Contacts} />
      <Route component={NoMatch} />
    </Switch>
  </div>
</BrowserRouter>
```
@TheSharpieOne yes adding exact to root Navlink helped) Previously you were using <NavLink> from react-router-dom. From that file, it looks like it creates the Route and the Link for you in one go. react-router v4 has a ton of changes and I am not familiar with it enough to know how to use it compared to previous version so I can only offer some limited help with it.
2025-04-01T06:40:12.200888
2020-05-12T08:37:43
616470240
{ "authors": [ "hfjn", "swenzel" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10114", "repo": "real-digital/esque", "url": "https://github.com/real-digital/esque/pull/148" }
gharchive/pull-request
Proposal for environment context This is a proposal for a new environment context which takes precedence over the config file. We had the need for a context which can be configured for automation without touching files. Happy to hear your feedback. Hmm instead of going through all that trouble of creating and mapping config variables, you could also create an environment variable ESQUE_CONFIG_YAML that can hold the whole esque config as a YAML string. That would be more dynamic, future-proof and would only require minimal changes. Hm. I get your point, in general. But that would only get rid of the need to write a file, not to create a YAML in general. :D You wrote: without touching files. So I thought, that was your problem :smile: Well, although I don't see how setting 15 environment variables is easier than creating a yaml string, I'm always a fan of "letting the user choose". So I'm not entirely against it. I'm just afraid that we might have to rename, add and/or remove some of the variables while our config evolves. We do have a migration mechanism for file based configs but not for environment variables. Do you think a separate command to add a section to the config would help you? Something like esque config add-context foo --bootstrap-servers broker1,broker2 --schema-registry registry ... I think it's easier to keep a command stable than it is to keep the environment variables the same over time. Guess I should have been clearer. Sorry for that. :D I think you're right with your approach of not wanting to make it unnecessarily complicated and I think the idea of just adding another command is good. I'll see what I can come up with over the weekend. 😊
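The "creating and mapping config variables" approach under discussion can be sketched as follows; the `ESQUE_` variable names and the precedence rule are assumptions for illustration, not the PR's actual implementation:

```python
import os

# Hypothetical prefix; the real PR defines its own variable names.
ENV_PREFIX = "ESQUE_"


def context_from_env(environ=None):
    """Collect ESQUE_*-prefixed variables into a context dict.

    e.g. ESQUE_BOOTSTRAP_SERVERS=broker1,broker2 becomes
    {"bootstrap_servers": ["broker1", "broker2"]}.
    """
    environ = os.environ if environ is None else environ
    ctx = {}
    for key, value in environ.items():
        if not key.startswith(ENV_PREFIX):
            continue
        name = key[len(ENV_PREFIX):].lower()
        # Comma-separated values become lists (e.g. broker addresses).
        ctx[name] = value.split(",") if "," in value else value
    return ctx


def effective_context(file_ctx, environ=None):
    """The environment context takes precedence over the file-based one."""
    merged = dict(file_ctx)
    merged.update(context_from_env(environ))
    return merged
```

This is the kind of mapping boilerplate the `ESQUE_CONFIG_YAML` counter-proposal would avoid, at the cost of asking users to assemble a YAML string instead.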
2025-04-01T06:40:12.204299
2015-07-31T16:34:09
98426054
{ "authors": [ "Hemanth-Eduru", "buybackoff", "lygstate", "mjpt777", "tmontgomery" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10115", "repo": "real-logic/Aeron", "url": "https://github.com/real-logic/Aeron/issues/154" }
gharchive/issue
C API For easiest integration with Go (via cgo), Erlang, and Rust (via FFI), a C API (not C++) is needed. This is a simple wrapper around C++. .NET users will also be very happy to use Aeron via P/Invoke! Do you have any ETA? A C API will instantly make Aeron as ubiquitous as ZMQ; native ports like #35 are not needed as much as a simple API. This is still a plan. However, we have not had time to get to it. Any update on the C API? Currently, a C API is planned for the media driver. And we would like to extend that API to cover the Aeron client API. However, no timeline is set for that yet. Closing until someone is willing to sponsor the work. Now the C API is able to happen :) Preview in 1.28.X releases. Should be feature complete from 1.30.0.
2025-04-01T06:40:12.213406
2015-04-13T21:17:02
68196476
{ "authors": [ "dylan-k", "realpants" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10116", "repo": "real-pants/Real-Pants", "url": "https://github.com/real-pants/Real-Pants/issues/81" }
gharchive/issue
Constrain Mailing list form in Sidebar Is it possible to make the mailing list form only 300px wide, as the ads are? There's a custom CSS field in the plugin settings. I entered .et_bloom { width: 300px; } but no result Where did you enter that CSS? It's not a good idea to edit the stylesheets directly, as they are replaced with each new build of the theme. Changes that are not part of that build would be overwritten. Do you need provisions for manually inserting css? There's a CSS field within the plugin settings. In other words, when designing the look of the plugin, there are a host of options for customizing the colors and text fields. Then there is a "Custom CSS" field. Can this be closed? Dimensions look great now.
2025-04-01T06:40:12.232282
2024-08-15T09:05:42
2467695136
{ "authors": [ "asomers", "casept", "realchonk" ], "license": "BSD-2-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10117", "repo": "realchonk/fuse-ufs", "url": "https://github.com/realchonk/fuse-ufs/issues/54" }
gharchive/issue
Crash on trying to run file on sparse3 Crash encountered when running file on sparse3 from ufs-big test image Backtrace with RUST_LOG=trace and RUST_BACKTRACE=full: https://gist.github.com/casept/4f9f18b75458f4801746d3597e29c2d1 What system were you running it on? Can you please show us uname -a? Fedora 40, uname: Linux l13 6.10.3-200.fc40.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Aug 5 14:30:00 UTC 2024 x86_64 GNU/Linux This is now fixed with #63.
2025-04-01T06:40:12.243363
2024-02-01T21:36:28
2113528855
{ "authors": [ "CLAassistant", "realeyes-mike-patterson" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10118", "repo": "realeyes-media/scte35-decoder-multiplatform", "url": "https://github.com/realeyes-media/scte35-decoder-multiplatform/pull/10" }
gharchive/pull-request
Fix Unit Tests and Remove Usage of Unsigned Byte

- Remove UByte array usages since it is experimental
- Remove Robolectric
- Update unit tests with Android mocking of Base64
- Tests successfully run in the terminal

Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. You have signed the CLA already but the status is still pending? Let us recheck it.
2025-04-01T06:40:12.249121
2017-07-14T01:25:17
242872720
{ "authors": [ "pipsqueaker", "realitix" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10119", "repo": "realitix/vulkan", "url": "https://github.com/realitix/vulkan/pull/12" }
gharchive/pull-request
Clarify error message I just wanted to clarify the error message that shows up when the library can't find the .so/.dll. The previous error message implied that the error had something to do with the vulkan version (false, as far as I can tell) and also didn't really indicate that the library just may not have been in the loading path. It's a really small thing, but I thought I'd submit a PR anyways. Also, random question: Why is there so much code duplication between vulkan.template.py and __init__.py? Hello @pipsqueaker. Thank you very much for your pull request. You are right about the error message, your version is a lot better. Nevertheless, you don't need to update the __init__.py file. This file is automatically generated from the vulkan.template.py which is a jinja2 template. When you run the generator script, it will do it for you. So what you can do is only update the template file and then I will regenerate the module. Thanks a lot for the contribution. @realitix Alright, just messed with my history a bit so that only vulkan.template.py is edited Thanks @pipsqueaker for the contribution!
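The generation workflow described here (edit vulkan.template.py, run the generator, never touch __init__.py directly) can be illustrated with a stdlib stand-in; the real project uses jinja2, and the message text below is only an approximation of the clarified error, not the project's actual wording:

```python
from string import Template

# string.Template stands in for jinja2 here, purely to keep this sketch
# dependency-free; the placeholder syntax differs ($name vs {{ name }}).
ERROR_TEMPLATE = Template(
    "OSError: Cannot find Vulkan loader ($library). "
    "Is it installed and on the library search path?"
)


def render_error(library_name):
    """Render the generated error message for a given loader filename."""
    return ERROR_TEMPLATE.substitute(library=library_name)
```

The point of the workflow is that only the template is hand-edited; anything generated from it, like __init__.py in the real project, is rebuilt rather than patched.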
2025-04-01T06:40:12.256024
2018-06-12T11:34:22
331549234
{ "authors": [ "3tty0n", "xuwei-k" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10120", "repo": "reallylabs/jwt-scala", "url": "https://github.com/reallylabs/jwt-scala/issues/16" }
gharchive/issue
Could you publish it in Scala 2.12 ? jwt-scala_2.11 and jwt-scala_2.10 are already published, but we want to use your library in 2.12. Could you publish this library in Scala 2.12 ? forked and published for Scala 2.12 and 2.13 https://github.com/xuwei-k/jwt-scala/tree/v1.4.0 https://repo1.maven.org/maven2/com/github/xuwei-k/jwt-scala_2.12/ https://repo1.maven.org/maven2/com/github/xuwei-k/jwt-scala_2.13/
2025-04-01T06:40:12.332956
2017-07-10T21:16:16
241852671
{ "authors": [ "austinzheng", "bdash" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10121", "repo": "realm/realm-object-store", "url": "https://github.com/realm/realm-object-store/pull/494" }
gharchive/pull-request
Change object store to allow sync user auth URL to be modified With this PR, getting a user with a different auth URL than the one it was originally given updates the URL, instead of throwing an exception. This fixes the issue (commonly encountered during development) when a user is opened with one URL, persisted, the server is moved or SSL is enabled, and an attempt to log in the user with the new address is made. This does not fix the corner case where two different ROSes have users with the same user ID, and the user wishes to be logged into both simultaneously, but this case wasn't supported before, nor is it supported by any of the other subsystem code. Did we get clarity on whether identity is expected to be unique across different servers? If that's not guaranteed, is this change safe? It's complicated. The default implementation of identity is a UUID. However, there's the possibility of plugging a module into ROS that allows you to specify arbitrary identities instead of using the built-in system, so a conflict is theoretically possible. Even if it is, though, using Realms across multiple servers concurrently is something we intentionally de-emphasized when designing v1 of ROS, so I don't think it's a supported use case anyways. It's something that Realm Browser inherently needs to support. Back to the drawing board, then.
2025-04-01T06:40:12.360460
2019-10-02T17:14:40
501618519
{ "authors": [ "NathanReb", "samoht" ], "license": "ISC", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10122", "repo": "realworldocaml/mdx", "url": "https://github.com/realworldocaml/mdx/pull/187" }
gharchive/pull-request
Switch to using ocaml-ci-scripts This seems like a more reliable CI until we can use ocaml-ci. This will allow us to easily add revdeps builds btw which could prove useful once we want to stabilize mdx's API and avoid breaking users' tests! Opam has a make built-in variable that opam lint suggests we use instead of the raw "make". Now that we use the ci-scripts, it's properly picked up and triggers a build failure if we don't have a compliant opam file. Ok I've read https://github.com/realworldocaml/mdx/pull/185#discussion_r330673627 which answers my question. However this is quite fragile as it will break if you run this in a duniverse setting (with a toplevel call to dune runtest). What do you mean? Do you mean that it will break if we vendor mdx in a duniverse? Aliases aren't resolved within the duniverse so it won't run the tests in this case. I agree it is a bit fragile and I wish there was a way to tell dune about the ocaml-mdx -> ocaml-mdx-test binary dependency but unfortunately there isn't any atm. I tried a couple things and they ended up suffering from the same race condition. Also when using mdx in duniverse mode, the ocaml-mdx rule adds a (package mdx) dependency to the generated rules, thus solving the dependency issue. We can't do that outside a duniverse because (package ...) deps only work for local packages, not opam ones. There's one last thing I can try which is to add an explicit dependency on the install alias for all runtest aliases that rely on ocaml-mdx but that is a bit tedious and doesn't solve the issue in a generic way either but at least running dune runtest should work then. Let's merge that as it already fixes the opam test runs and I'll try that solution in a separate PR! Just had a quick look and the (alias install) dependency probably won't work as we generate most of the rules for runtest. I just mean that we should report that issue upstream as it should be fixed properly at one point :-)
2025-04-01T06:40:12.388300
2018-11-20T14:15:09
382682448
{ "authors": [ "tiborsimko" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10123", "repo": "reanahub/reana", "url": "https://github.com/reanahub/reana/pull/115" }
gharchive/pull-request
cli: reana-dev git-log New command git-log showing information about commits. Signed-off-by: Tibor Simko<EMAIL_ADDRESS> Example output: $ reana-dev git-log [reana-workflow-engine-yadage] git log -n 5 --graph --decorate ... * (upstream/pr/91) 34b1c8e tasks: add stop_workflow from reana_commons, Diego Rodriguez, 6 hours ago * (HEAD -> master, tag: v0.4.0, upstream/pr/90, upstream/master, origin/master, origin/HEAD) 11305f8 release: v0.4.0, Dinos Kousidis, 2 weeks ago * (upstream/pr/88) 720819b installation: upgrade REANA-Commons, Diego Rodriguez, 5 weeks ago * 8f13434 publisher: update to kombu producer, Diego Rodriguez, 5 weeks ago * (upstream/pr/89) 32251ab installation: bump reana-commons include pkg data, Diego Rodriguez, 6 weeks ago [reana-workflow-engine-serial] git log -n 5 --graph --decorate ... * (upstream/pr/52) e77af4e tasks: use stop_workflow from reana-commons, Diego Rodriguez, 5 hours ago * ada286d tasks: introduce a stop_workflow task, Diego Rodriguez, 21 hours ago * 522d3f1 config: disable task prefetching by process, Diego Rodriguez, 27 hours ago * 0e114d1 config: disable broker pool, Diego Rodriguez, 28 hours ago * 08f28d2 tasks: add revoke handler, Diego Rodriguez, 7 days ago [reana-job-controller] git log -n 5 --graph --decorate ... * (upstream/pr/98) d660142 api: return a dict on k8s_instantiate_job, Diego Rodriguez, 21 hours ago * e4a18d1 global: use flask run to start application, Diego Rodriguez, 5 days ago * c6b1578 tests: make application testable, Jan Okraska, 8 days ago * 82fc7bc api: add delete job endpoint, Diego Rodriguez, 8 days ago * (HEAD -> master, tag: v0.4.0, upstream/pr/97, upstream/master, origin/master, origin/HEAD) 527cc1b release: v0.4.0, Tibor Simko, 2 weeks ago [pytest-reana] git log -n 5 --graph --decorate ... 
* (upstream/pr/31) ff2f2df fixtures: renaming of operational parameters, Dinos Kousidis, 23 hours ago * (HEAD -> master, upstream/master, origin/master, origin/HEAD) 4ceb1e7 release: v0.5.0.dev201811191, Diego Rodriguez, 24 hours ago | * (upstream/pr/30) 70b1f6e release: v0.5.0.dev20181119.1, Diego Rodriguez, 24 hours ago |/ * d0d1920 fixtures: fix expose yadage workflow fixture, Diego Rodriguez, 24 hours ago * (upstream/pr/29) ce31f8a release: v0.5.0.dev20181119, Dinos Kousidis, 27 hours ago [reana-workflow-monitor] git log -n 5 --graph --decorate ... * (HEAD -> master, tag: v0.4.0, upstream/pr/26, upstream/master, origin/master, origin/HEAD) c242d4f release: v0.4.0, Dinos Kousidis, 2 weeks ago * (upstream/pr/25) b8d4d1c global: license change to MIT License, Tibor Simko, 6 weeks ago * (upstream/pr/24) 924ef8d docs: new logo, panel verbiage and links, Tibor Simko, 5 months ago * (upstream/pr/23) c884a9b docs: author ORCID links, Tibor Simko, 7 months ago * (tag: v0.2.0, upstream/pr/22) 8f5b219 release: v0.2.0, Dinos Kousidis, 7 months ago [reana-server] git log -n 5 --graph --decorate ... * (HEAD -> master, upstream/pr/109, upstream/master, origin/master, origin/HEAD) 698a34e installation: fix pytest-reana dependency version, Tibor Simko, 3 hours ago * (upstream/pr/104) a2cb517 api: new rest api endpoint which returns wf params, Rokas Maciulaitis, 24 hours ago | * (upstream/pr/108) a92b154 api: automatic openapi specs passing to reana-commons, Rokas Maciulaitis, 4 days ago |/ | * (upstream/pr/107) cd155f9 api: automatic openapi specs passing to reana-commons, Rokas Maciulaitis, 4 days ago | * 068441c api: new rest api endpoint which returns wf params, Rokas Maciulaitis, 5 days ago |/ [reana-message-broker] git log -n 5 --graph --decorate ... 
* (HEAD -> master, tag: v0.4.0, upstream/pr/16, upstream/master, origin/master, origin/HEAD) 7f18a21 release: v0.4.0, Tibor Simko, 2 weeks ago * (upstream/pr/15, upstream/license-change) 1da1398 global: license change to MIT License, Tibor Simko, 6 weeks ago * (upstream/pr/14, upstream/docs-logo-panel-links) 9d0bee3 docs: new logo, panel verbiage and links, Tibor Simko, 5 months ago * (upstream/pr/13, upstream/docs-authors-orcid) 6661833 docs: author ORCID links, Tibor Simko, 7 months ago * (tag: v0.2.0, upstream/pr/12) 1f7b82d release: v0.2.0, Tibor Simko, 7 months ago [reana-workflow-controller] git log -n 5 --graph --decorate ... * (upstream/pr/141) c1f00aa tests: utilities, Dinos Kousidis, 3 hours ago * efca3e4 rest: set_workflow_status parameters description, Dinos Kousidis, 4 hours ago * 120db5c rest: allow deletion of already deleted workflows, Dinos Kousidis, 4 hours ago * a9721db tests: workspace deletion, Dinos Kousidis, 5 hours ago * 74f9e53 rest: allow access to deleted workflows, Dinos Kousidis, 5 hours ago [reana-workflow-engine-cwl] git log -n 5 --graph --decorate ... * (upstream/pr/70) c92165a tasks: add stop_workflow task from reana_commons, Diego Rodriguez, 6 hours ago * (HEAD -> master, tag: v0.4.0, upstream/pr/68, upstream/master, origin/master, origin/HEAD) 7b4b7d5 release: v0.4.0, Dinos Kousidis, 2 weeks ago * (upstream/pr/66) e6cc7f9 installation: upgrade REANA-Commons, Diego Rodriguez, 5 weeks ago * a5bb2ee publisher: use Kombu publisher, Diego Rodriguez, 5 weeks ago * (upstream/pr/67) 696d526 installation: bump reana-commons version, Diego Rodriguez, 6 weeks ago [reana-commons] git log -n 5 --graph --decorate ... 
* (upstream/pr/66) 1c6ccb5 tasks: introduce common task to stop workflows, Diego Rodriguez, 5 hours ago * cd856de api: update openapi specs, Jan Okraska, 20 hours ago | * (HEAD -> master, upstream/pr/65, upstream/master, origin/master, origin/HEAD) e720eaa release: 0.5.0.dev20181116, Dinos Kousidis, 26 hours ago | * 6cdb587 installation: bump pytest-reana, Dinos Kousidis, 27 hours ago | * (upstream/pr/63) 9746ec4 api: new rest api endpoint, Rokas Maciulaitis, 4 days ago |/ [reana-db] git log -n 5 --graph --decorate ... * (HEAD -> master, upstream/pr/21, upstream/master, origin/master, origin/HEAD, installation-version) 0b0854b release: v0.5.0.dev20181116, Dinos Kousidis, 4 days ago * (upstream/pr/19) 4a5ebdf models: renaming operational parameters, Rokas Maciulaitis, 4 days ago | * (upstream/pr/20) 7f3621e models: addition of stopped WorkflowStatus, Diego Rodriguez, 6 days ago |/ * (upstream/pr/15) 3e7a812 models: addition of deleted WorkflowStatus, Dinos Kousidis, 12 days ago * (tag: v0.4.0, upstream/pr/17) e2bb3e6 release: v0.4.0, Tibor Simko, 2 weeks ago
2025-04-01T06:40:12.395369
2019-10-10T14:30:14
505310723
{ "authors": [ "cfillion", "nofishonfriday", "tormyvancool" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10124", "repo": "reaper-oss/sws", "url": "https://github.com/reaper-oss/sws/issues/1204" }
gharchive/issue
Playlist management: errors in cropping, appending or pasting I have issues with the playlist manager. Once a playlist is made, it executes fine. But once I choose Crop, Append or Paste, the final result is totally incorrect: items are missing, or only partially copied and pasted. I attached a .zip file containing a video that shows you the details. I would like to know if this is an issue or if I am operating it badly, and what you suggest as the best approach. Reaper-Playlist-Issue.zip Interesting, can you share the project RPP? Are any media items grouped? Interesting, can you share that TEST.RPP project? Are any media items grouped? Sorry for the late reply. About the media items: none are grouped. Yes sure, the TEST.RPP is here in attachment in the ZIP file TEST.zip I cannot reproduce the bug with SWS v2.10.0. "Paste playlist at edit cursor" behaves as expected here with that project. There is a possibility some REAPER setting is interfering. Can you share your reaper.ini as well? I can reproduce if I have Preferences (-> Project) -> Media Items Defaults: "Overlap and crossfade items when splitting" (length is set to 0:00.010 here) enabled when pasting. I have that pref assigned to the toolbar button shown in the gif. https://i.imgur.com/PeL633Z.gif Fixed in this build: sws-<IP_ADDRESS>-Windows-x64-e12769f5.exe. Yep it works perfectly. Thanks a million! P.S. Testing it closely, I noticed that when you Crop project on new tab, the zones are replicated, but they refer to the original project's positions rather than the new ones. Thus it will be necessary to manually move or recreate the zones. Did you get the same result? "Crop project to playlist" and "Crop project to playlist (new project tab)" should produce the same output.
2025-04-01T06:40:12.449880
2015-09-21T21:22:54
107601150
{ "authors": [ "reckart", "shyamupa" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10125", "repo": "reckart/tt4j", "url": "https://github.com/reckart/tt4j/issues/23" }
gharchive/issue
Calling treetagger with untokenized text? Hi, the example on the tt4j homepage shows tokenized input being handled by treetagger. How do I give untokenized text to treetagger so that it both tokenizes and POS tags? Thanks Shyam See https://reckart.github.io/tt4j/tokenizer.html
2025-04-01T06:40:12.457934
2024-08-10T07:48:52
2458966962
{ "authors": [ "Rakesh9100", "aditya-bhaumik" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10126", "repo": "recodehive/resume-pitch", "url": "https://github.com/recodehive/resume-pitch/pull/49" }
gharchive/pull-request
level 3 pull request not merged yet @sanjay-kv please review and merge this pull request https://github.com/Rakesh9100/CalcDiverse/pull/1820 This is the pull request I had created @sanjay-kv I resolved the conflicts, please review and merge when you are free PR is getting reviewed one by one, you can check the PR before accepting the points request from the contributors @sanjay-kv You can remove the point label from here
2025-04-01T06:40:12.460441
2024-11-08T10:49:31
2643658209
{ "authors": [ "oleksandr-danylchenko", "rsimon" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10127", "repo": "recogito/text-annotator-js", "url": "https://github.com/recogito/text-annotator-js/pull/177" }
gharchive/pull-request
Simplified arrow middleware definition Issue In https://github.com/oleksandr-danylchenko/text-annotator-js/commit/ce7014d6f74d929488cb0c277f7015a5abc90b0e#r148871559 I spotted that the arrow middleware definition can be simplified to its default form. That should be safe for the floating itself because the arrow is a "Data Middleware" that only populates the context with the positioning props, but doesn't change the behavior. Demo https://github.com/user-attachments/assets/ab1dab97-7e4e-41e5-8737-66a55e218372 Tested the same thing in Annotorious - yep, works :-) Thanks!
2025-04-01T06:40:12.464970
2022-03-25T04:54:36
1180344326
{ "authors": [ "dcfidalgo", "dvsrepo", "whaowhao" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10128", "repo": "recognai/rubrix", "url": "https://github.com/recognai/rubrix/issues/1309" }
gharchive/issue
Utilize GPT3 embedding / classification API for more automated bulk labelling Keyword-based bulk labeling using Rubrix Rules is still too slow if I were to label thousands of texts for classification - at maximum, I can label 20 at a time, since keyword matching is noisy with regard to the ground-truth label. I have played with GPT3 embeddings, where applying UMAP on texts results in clear clusters with semantically similar texts in each cluster. If one could create Rubrix Rules based on a selected area on a scatterplot of text embeddings, then one could label hundreds of texts at once, and could easily finish labeling 50,000 data points in a day with high quality. And that'd be a game-changer. Adding on to that, I also played with the GPT3 classification API, which gives pretty accurate few-shot classifications. If that were added to the pipeline, it'd expedite labeling even further, making it possible to label thousands of texts at once. I have the GPT3 embedding + classification pipeline done in a notebook; is there a plan on Rubrix's side to look into utilizing GPT3? Hey @whaowhao Thank you for bringing this up! We are working on a tutorial in which we show how you can use Epoxy to achieve the same goal you are mentioning. The workflow would be something like this: Come up with a few rules trying to cover semantically diverse records Provide the weak label matrix and your embeddings of choice to Epoxy (with some thresholds) Get back an enhanced weak label matrix Hopefully, we can share it with you soon, we would love to get your feedback. Thanks @whaowhao ! Adding to what @dcfidalgo mentions, we've also been discussing the ability of "labelling-by-drawing" based on a 2-D embedding-based representation (UMAP or otherwise). This will be in the roadmap but not immediately. Meanwhile I'd be really interested in collaborating with/supporting you on experiments with GPT3, would you be interested? Hi @dvsrepo yeah let's chat
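The "label everything inside a selected area of the embedding scatterplot" idea discussed above can be sketched minimally as follows. This is a hypothetical illustration only — `label_selection`, its signature, and the rectangle-based selection are made up for this sketch and are not part of Rubrix's API:

```python
def label_selection(embeddings_2d, selection_box, label, labels):
    """Assign `label` to every record whose 2-D (e.g. UMAP-reduced)
    embedding falls inside a user-selected rectangle -- a minimal
    stand-in for lasso-selection on a scatterplot."""
    (x0, y0), (x1, y1) = selection_box  # lower-left and upper-right corners
    for i, (x, y) in enumerate(embeddings_2d):
        if x0 <= x <= x1 and y0 <= y <= y1:
            labels[i] = label
    return labels

# Hundreds of points inside one box could be labeled in a single gesture:
points = [(0.1, 0.2), (0.5, 0.6), (5.0, 5.0)]
result = label_selection(points, ((0.0, 0.0), (1.0, 1.0)), "positive", [None] * 3)
# -> ['positive', 'positive', None]
```

A real implementation would use an arbitrary lasso polygon rather than a rectangle, but the core operation — bulk assignment over a spatial selection — is the same.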
2025-04-01T06:40:12.468701
2018-03-02T14:36:42
301786422
{ "authors": [ "OndraFiedler", "alepinzon" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10129", "repo": "recombee/java-api-client", "url": "https://github.com/recombee/java-api-client/issues/3" }
gharchive/issue
valuesAreSet field is always false. Hi, I'm trying to create a RecommendItemsToUser request and set returnProperties to true, but the valuesAreSet field in the recommendation response always comes back as false. When I try to execute recommendation.getValues() it always throws an IllegalStateException. I checked the response body from the API call and it does not contain the valuesAreSet property in the payload.

Recombee API response payload:

```json
{"recommId": "8b904849-19ad-47d9-90f2-b024ca67726f", "recomms": [{"values": {"manufacturer_code": 1}, "id": "123"}]}
```

Request:

```java
final RecommendItemsToUser cf = new RecommendItemsToUser("CF", 5)
    .setReturnProperties(true);
```

I confirm the bug, we will fix it ASAP. Thanks @OndraFiedler It is fixed in https://github.com/recombee/java-api-client/releases/tag/v2.0.1. The new version has been pushed to the central repository, but it can take some time (up to a few hours) before it is available. Thanks for reporting!
2025-04-01T06:40:12.490828
2022-10-17T19:21:31
1412090532
{ "authors": [ "chssn" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10130", "repo": "reconmap/reconmap", "url": "https://github.com/reconmap/reconmap/pull/123" }
gharchive/pull-request
docker-compose not recognising boolean values When running `docker-compose up -d` the following error is observed:

```
ERROR: The Compose file './docker-compose.yml' is invalid because: services.keycloak.environment.KC_HTTP_ENABLED contains false, which is an invalid type, it should be a string, number, or a null
```

Remediation is to encapsulate any boolean values in single quotes so they are treated as strings, as noted here. Behaviour seen in commit 57d1665d323048c9feaf34a9fabe6cc454ec4ccc Running Docker version 20.10.19, build d85ef84 on Ubuntu 22.04.1 Just saw that a similar pull request has already been responded to
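The root cause is that YAML parses a bare `false` to a boolean, while `'false'` stays a string, and the Compose schema only accepts strings, numbers, or null for environment values. Here is a rough, hand-rolled approximation of that type check (purely illustrative — not Compose's actual implementation):

```python
def validate_environment(env):
    """Approximate the Compose schema rule: environment values must be
    strings, numbers, or null.  Check bool first, because in Python a
    bool is also an int."""
    for key, value in env.items():
        if isinstance(value, bool) or not isinstance(value, (str, int, float, type(None))):
            raise TypeError(
                f"environment.{key} contains {value!r}, which is an "
                "invalid type, it should be a string, number, or a null"
            )

validate_environment({"KC_HTTP_ENABLED": "false"})  # quoted in YAML -> string: OK
try:
    validate_environment({"KC_HTTP_ENABLED": False})  # bare false -> bool: rejected
except TypeError as e:
    print(e)
```

This mirrors why the single-quote fix in the PR makes the file validate: the value's type changes, not its meaning to Keycloak, which reads the env var as text either way.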
2025-04-01T06:40:12.547395
2024-02-08T15:36:04
2125473133
{ "authors": [ "zdtsw" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10131", "repo": "red-hat-data-services/rhods-operator", "url": "https://github.com/red-hat-data-services/rhods-operator/pull/190" }
gharchive/pull-request
[backport]: from 2.6 to 2.7 (#173) fix(trustyai): prometheus rules for probe update(trusty): prometheus to use job instead of instance name for record rules this missed getting into main before the 2.7 branch was cut
2025-04-01T06:40:12.606596
2019-04-06T10:29:22
430020512
{ "authors": [ "redboltz" ], "license": "BSL-1.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10132", "repo": "redboltz/mqtt_cpp", "url": "https://github.com/redboltz/mqtt_cpp/pull/208" }
gharchive/pull-request
Added async and sync client wrappers. Added new async test mechanism. See #196. This PR improves the async/sync APIs.

Design decision

I chose the public inheritance and `void function() = delete` approach.

Why not use composition and forwarding? This approach requires many forwarding functions. If a function's parameters change, many parts of the code need to change. It is difficult to maintain.

Code image (details are omitted intentionally):

```cpp
// existing classes
class client : public endpoint {};

// new classes by composition
class sync_client {
public:
    void publish(...) { c_->publish(...); }
private:
    std::shared_ptr<client> c_;
};
```

Why not use private inheritance and `using`? Class endpoint uses std::enable_shared_from_this.

```cpp
// existing classes
class endpoint : public std::enable_shared_from_this<endpoint> {};
class client : public endpoint {};

// new classes by private inheritance
class sync_client : private client {
public:
    using client::publish;
};
```

With private inheritance, shared_from_this() throws a bad_weak_ptr exception at runtime.

Why use `void function_name() = delete`? `=delete` only checks the function name, so I chose the simplest signature and return type. Consider if I used the complete function signature and return type for `=delete`: if the function has overloads, which one should be chosen? I think there is no appropriate answer.

```cpp
// existing classes
class endpoint : public std::enable_shared_from_this<endpoint> {};
class client : public endpoint {};

// new classes by public inheritance
class sync_client : public client {
public:
    void async_publish() = delete; // always no parameters and returns void
};
```

Why not create async_client and sync_client as the base classes of client? It would require a big design change, and I don't have a good design concept for it.
2025-04-01T06:40:12.660957
2023-06-19T15:56:12
1763809983
{ "authors": [ "rhtap-qe-bots" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10133", "repo": "redhat-appstudio-qe/devfile-sample-hello-world", "url": "https://github.com/redhat-appstudio-qe/devfile-sample-hello-world/pull/11000" }
gharchive/pull-request
Appstudio update test-component-pac-zvdm Pipelines as Code configuration proposal To start the PipelineRun, add a new comment with content /ok-to-test For more detailed information about running a PipelineRun, please refer to Pipelines as Code documentation Running the PipelineRun To customize the proposed PipelineRuns after merge, please refer to Build Pipeline customization Pipelines as Code CI/test-component-pac-zvdm-on-pull-request has successfully validated your commit. StatusDurationName ✅ Succeeded 7 seconds init ✅ Succeeded 12 seconds clone-repository ✅ Succeeded 29 seconds build-container ✅ Succeeded 20 seconds inspect-image ✅ Succeeded 1 minute deprecated-base-image-check ✅ Succeeded 3 minutes clair-scan ✅ Succeeded 52 seconds clamav-scan ✅ Succeeded 17 seconds sbom-json-check ✅ Succeeded 17 seconds label-check ✅ Succeeded 8 seconds show-sbom ✅ Succeeded 9 seconds show-summary Pipelines as Code CI/test-component-pac-zvdm-on-pull-request has failed. StatusDurationName --- --- init Pipelines as Code CI/test-component-pac-zvdm-on-pull-request has successfully validated your commit. StatusDurationName ✅ Succeeded 7 seconds init ✅ Succeeded 21 seconds clone-repository ✅ Succeeded 25 seconds build-container ✅ Succeeded 12 seconds deprecated-base-image-check ✅ Succeeded 12 seconds inspect-image ✅ Succeeded 11 seconds clair-scan ✅ Succeeded 37 seconds clamav-scan ✅ Succeeded 11 seconds sbom-json-check ✅ Succeeded 26 seconds label-check ✅ Succeeded 6 seconds show-sbom ✅ Succeeded 5 seconds show-summary Pipelines as Code CI/test-component-pac-zvdm-on-pull-request has successfully validated your commit. 
2025-04-01T06:40:12.693448
2023-01-04T09:03:19
1518587243
{ "authors": [ "jkopriva" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10134", "repo": "redhat-appstudio-qe/devfile-sample-hello-world", "url": "https://github.com/redhat-appstudio-qe/devfile-sample-hello-world/pull/1141" }
gharchive/pull-request
Appstudio update test-component-pac-kjnh Pipelines as Code configuration proposal Pipelines as Code CI/test-component-pac-kjnh-on-pull-request has successfully validated your commit. StatusDurationName ✅ Succeeded 9 seconds appstudio-init ✅ Succeeded 9 seconds clone-repository ✅ Succeeded 18 seconds appstudio-configure-build ✅ Succeeded 19 seconds sast-snyk-check ✅ Succeeded 43 seconds build-container ✅ Succeeded 20 seconds sanity-inspect-image ✅ Succeeded 17 seconds deprecated-base-image-check ✅ Succeeded 49 seconds clamav-scan ✅ Succeeded 17 seconds clair-scan ✅ Succeeded 23 seconds sbom-json-check ✅ Succeeded 12 seconds sanity-label-check ✅ Succeeded 11 seconds sanity-optional-label-check ✅ Succeeded 8 seconds show-summary Pipelines as Code CI/test-component-pac-kjnh-on-pull-request has successfully validated your commit. StatusDurationName ✅ Succeeded 9 seconds appstudio-init ✅ Succeeded 9 seconds clone-repository ✅ Succeeded 18 seconds appstudio-configure-build ✅ Succeeded 19 seconds sast-snyk-check ✅ Succeeded 43 seconds build-container ✅ Succeeded 20 seconds sanity-inspect-image ✅ Succeeded 17 seconds clair-scan ✅ Succeeded 49 seconds clamav-scan ✅ Succeeded 17 seconds deprecated-base-image-check ✅ Succeeded 23 seconds sbom-json-check ✅ Succeeded 11 seconds sanity-optional-label-check ✅ Succeeded 12 seconds sanity-label-check ✅ Succeeded 8 seconds show-summary Pipelines as Code CI/test-component-pac-kjnh-on-pull-request has successfully validated your commit. 
StatusDurationName ✅ Succeeded 8 seconds appstudio-init ✅ Succeeded 15 seconds clone-repository ✅ Succeeded 13 seconds sast-snyk-check ✅ Succeeded 14 seconds appstudio-configure-build ✅ Succeeded 45 seconds build-container ✅ Succeeded 24 seconds sanity-inspect-image ✅ Succeeded 22 seconds deprecated-base-image-check ✅ Succeeded 47 seconds clamav-scan ✅ Succeeded 17 seconds clair-scan ✅ Succeeded 27 seconds sbom-json-check ✅ Succeeded 13 seconds sanity-optional-label-check ✅ Succeeded 11 seconds sanity-label-check ✅ Succeeded 6 seconds show-summary Pipelines as Code CI/test-component-pac-kjnh-on-pull-request has successfully validated your commit. StatusDurationName ✅ Succeeded 8 seconds appstudio-init ✅ Succeeded 15 seconds clone-repository ✅ Succeeded 14 seconds appstudio-configure-build ✅ Succeeded 13 seconds sast-snyk-check ✅ Succeeded 45 seconds build-container ✅ Succeeded 22 seconds deprecated-base-image-check ✅ Succeeded 24 seconds sanity-inspect-image ✅ Succeeded 47 seconds clamav-scan ✅ Succeeded 17 seconds clair-scan ✅ Succeeded 27 seconds sbom-json-check ✅ Succeeded 13 seconds sanity-optional-label-check ✅ Succeeded 11 seconds sanity-label-check ✅ Succeeded 6 seconds show-summary
2025-04-01T06:40:12.703049
2023-02-22T19:43:12
1595725735
{ "authors": [ "redhat-appstudio-qe-bot2" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10135", "repo": "redhat-appstudio-qe/hacbs-test-project", "url": "https://github.com/redhat-appstudio-qe/hacbs-test-project/pull/32" }
gharchive/pull-request
Appstudio update mvp-test-component Pipelines as Code configuration proposal Pipelines as Code CI/mvp-test-component-on-pull-request has successfully validated your commit. StatusDurationName ✅ Succeeded 10 seconds init ✅ Succeeded 18 seconds clone-repository ✅ Succeeded 1 minute build-container ✅ Succeeded 12 seconds sanity-inspect-image ✅ Succeeded 11 seconds deprecated-base-image-check ✅ Succeeded 58 seconds clamav-scan ✅ Succeeded 16 seconds clair-scan ✅ Succeeded 9 seconds sbom-json-check ✅ Succeeded 17 seconds sanity-label-check ✅ Succeeded 15 seconds sanity-optional-label-check ✅ Succeeded 6 seconds show-summary
2025-04-01T06:40:12.704203
2022-07-26T17:44:32
1318566968
{ "authors": [ "mayurwaghmode" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10136", "repo": "redhat-appstudio/application-service", "url": "https://github.com/redhat-appstudio/application-service/pull/156" }
gharchive/pull-request
Added ppc64le support This PR will add multi-architecture support to the application-service operator image /retest
2025-04-01T06:40:12.721748
2024-07-03T07:39:20
2387910778
{ "authors": [ "bamachrn", "manish-jangra" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10137", "repo": "redhat-appstudio/infra-deployments", "url": "https://github.com/redhat-appstudio/infra-deployments/pull/3997" }
gharchive/pull-request
KFLUXINFRA-651: Adding Instance Types for Multi-Platform Builds Adding two types of instances: Higher Memory (1:4 ratio for cpu:memory) and Higher CPU (1:2 ratio for cpu:memory). Note Some users may require more memory and less CPU, making memory-optimized instances the better choice for them. On the other hand, some users may need more CPU power but less memory, in which case CPU-optimized instances would be a good fit. Therefore, a combination of memory-optimized and CPU-optimized instances is essential to cater to different user needs. In the label naming, m refers to memory optimised and c refers to compute optimised.

Memory Optimised (not AWS terminology; their memory-optimized class starts from r)

| Multi-Platform Label | Instance Type | Architecture | CPU | Memory (GB) |
| --- | --- | --- | --- | --- |
| linux-mlarge/amd64 | m5a.large | AMD64 | 2 | 8 |
| linux-mxlarge/amd64 | m6a.xlarge | AMD64 | 4 | 16 |
| linux-m2xlarge/amd64 | m6a.2xlarge | AMD64 | 8 | 32 |
| linux-m4xlarge/amd64 | m6a.4xlarge | AMD64 | 16 | 64 |
| linux-m8xlarge/amd64 | m6a.8xlarge | AMD64 | 32 | 128 |
| linux-mlarge/arm64 | m6g.large | ARM64 | 2 | 8 |
| linux-mxlarge/arm64 | m6g.xlarge | ARM64 | 4 | 16 |
| linux-m2xlarge/arm64 | m6g.2xlarge | ARM64 | 8 | 32 |
| linux-m4xlarge/arm64 | m6g.4xlarge | ARM64 | 16 | 64 |
| linux-m8xlarge/arm64 | m6g.8xlarge | ARM64 | 32 | 128 |

CPU Optimised (again not AWS terminology)

| Multi-Platform Label | Instance Type | Architecture | CPU | Memory (GB) |
| --- | --- | --- | --- | --- |
| linux-clarge/amd64 | c6a.xlarge | AMD64 | 4 | 8 |
| linux-cxlarge/amd64 | c6a.2xlarge | AMD64 | 8 | 16 |
| linux-c2xlarge/amd64 | c6a.4xlarge | AMD64 | 16 | 32 |
| linux-c4xlarge/amd64 | c6a.8xlarge | AMD64 | 32 | 64 |
| linux-clarge/arm64 | c6g.xlarge | ARM64 | 4 | 8 |
| linux-cxlarge/arm64 | c6g.2xlarge | ARM64 | 8 | 16 |
| linux-c2xlarge/arm64 | c6g.4xlarge | ARM64 | 16 | 32 |
| linux-c4xlarge/arm64 | c6g.8xlarge | ARM64 | 32 | 64 |

Suggestion We can minimize code duplication by utilizing default values for common parameters like imageId, subnetId, region, etc., unless specified otherwise. One thought: we will have quay.io with AWS internal communication right? otherwise how about using enhanced network enabled instances (with *n) for better push pull speeds?
One thought: we will have quay.io with AWS internal communication right? otherwise how about using enhanced network enabled instances (with *n) for better push pull speeds? ENA is enabled for the instance types we are using m6a.xxxxx, m6g.xxxxx, c6a.xxxxx and c6g.xxxxx. Reference AWS Document -- https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking-ena.html /lgtm
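The naming scheme in the tables above (memory-optimised labels at a 1:4 cpu:memory ratio, CPU-optimised at 1:2) can be illustrated with a small helper. This is purely illustrative — `platform_label` is a made-up function, not part of this PR or of the multi-platform controller:

```python
# cpu-count -> label stem, mirroring the tables in the PR description.
MEMORY_OPT = {2: "mlarge", 4: "mxlarge", 8: "m2xlarge", 16: "m4xlarge", 32: "m8xlarge"}  # 1:4 cpu:mem
CPU_OPT = {4: "clarge", 8: "cxlarge", 16: "c2xlarge", 32: "c4xlarge"}  # 1:2 cpu:mem

def platform_label(cpus, memory_gb, arch="amd64"):
    """Return a multi-platform label such as 'linux-m2xlarge/amd64':
    memory-optimised when the requested memory is 4x the CPUs,
    CPU-optimised when it is 2x."""
    if memory_gb == 4 * cpus and cpus in MEMORY_OPT:
        return f"linux-{MEMORY_OPT[cpus]}/{arch}"
    if memory_gb == 2 * cpus and cpus in CPU_OPT:
        return f"linux-{CPU_OPT[cpus]}/{arch}"
    raise ValueError("no matching instance class for this cpu/memory shape")

print(platform_label(8, 32))           # -> linux-m2xlarge/amd64
print(platform_label(8, 16, "arm64"))  # -> linux-cxlarge/arm64
```

The point of the two families is exactly this mapping: a build that needs 8 CPUs and 32 GB picks an `m*` label, while one that needs 8 CPUs and only 16 GB picks a cheaper `c*` label.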
2025-04-01T06:40:12.726992
2024-12-20T12:43:26
2752668445
{ "authors": [ "johnbieren", "mmalina" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10138", "repo": "redhat-appstudio/infra-deployments", "url": "https://github.com/redhat-appstudio/infra-deployments/pull/5199" }
gharchive/pull-request
Promote release-service from staging to production Included PRs: https://github.com/konflux-ci/release-service/pull/638 https://github.com/konflux-ci/release-service/pull/637 https://github.com/konflux-ci/release-service/pull/636 https://github.com/konflux-ci/release-service/pull/635 https://github.com/konflux-ci/release-service/pull/633 https://github.com/konflux-ci/release-service/pull/630 https://github.com/konflux-ci/release-service/pull/629 https://github.com/konflux-ci/release-service/pull/632 https://github.com/konflux-ci/release-service/pull/628 https://github.com/konflux-ci/release-service/pull/627 https://github.com/konflux-ci/release-service/pull/626 https://github.com/konflux-ci/release-service/pull/622 /lgtm
2025-04-01T06:40:12.728576
2022-09-01T07:44:33
1358402092
{ "authors": [ "psturc", "tkdchen" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10139", "repo": "redhat-appstudio/infra-deployments", "url": "https://github.com/redhat-appstudio/infra-deployments/pull/684" }
gharchive/pull-request
fix: increase timeout for e2e Why Currently there's a 1h (default) timeout for running e2e. Since the number of tests is growing and they take more time to finish, we are sometimes hitting the limit. If that happens, tests get interrupted and a CI check is marked as failed. This is a temporary solution until we enable parallel e2e test runs for infra-deployments (which were already enabled for the e2e-tests repo) /lgtm Ran 87 of 92 Specs in 4424.652 seconds SUCCESS! -- 87 Passed | 0 Failed | 0 Pending | 5 Skipped
2025-04-01T06:40:12.731904
2021-08-27T18:48:01
981505573
{ "authors": [ "davidkarlsen" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10140", "repo": "redhat-cop/group-sync-operator", "url": "https://github.com/redhat-cop/group-sync-operator/pull/118" }
gharchive/pull-request
Add auth-option to authenticate as githubapp Fixes #112. Adds support for authenticating as a GitHub App, which offers: more fine-grained permissions (avoiding PATs) increased security, using a private key increased API quota The GitHub client is now wired up/facilitated through the go-githubapp library, which offers caching, metrics etc. Note: Draft PR - not yet tested, for feedback. go-githubapp now has a convenience function to configure the transport. I think this is good to go, but have not yet been able to test it. That should be it. I've updated the docs and tested it in my 4.6.x cluster, using app-based auth: group-sync-operator-controller-manager-54d5874d76-kh5cq manager 2021-09-06T17:31:04.501Z INFO controllers.GroupSync Beginning Sync {"groupsync": "group-sync-operator/github-groupsync", "Provider": "github"} group-sync-operator-controller-manager-54d5874d76-kh5cq manager 2021-09-06T17:31:06.354Z INFO controllers.GroupSync Sync Completed Successfully {"groupsync": "group-sync-operator/github-groupsync", "Provider": "github", "Groups Created or Updated": 1} CC @sabre1041 @raffaelespazzoli @sabre1041 I think all points have been covered now - thanks for the input!
2025-04-01T06:40:12.736214
2021-09-06T10:54:31
989044632
{ "authors": [ "craicoverflow" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10141", "repo": "redhat-developer/app-services-cli", "url": "https://github.com/redhat-developer/app-services-cli/issues/1029" }
gharchive/issue
Run go mod tidy on each pull request and fail the build if there are changes We will be outputting the vendor directory in the midstream repo, and this will prevent any possible conflicts. Actually this is not needed: https://github.blog/changelog/2020-10-19-dependabot-go-mod-tidy-and-vendor-support/
2025-04-01T06:40:12.739325
2022-03-28T10:57:12
1183258588
{ "authors": [ "wtrocki" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10142", "repo": "redhat-developer/app-services-cli", "url": "https://github.com/redhat-developer/app-services-cli/pull/1493" }
gharchive/pull-request
fix: add server side backed up metrics Motivation This change uses backend-compiled metrics for consumer groups. This has been in production for quite a while and has been extensively tested. Verification Get a Kafka instance Create a new topic test Create kcat.properties based on the kcat guide Run `kcat -b <yourhostname> -F ./kcat.properties -P -t test` Run rhoas kafka consumer-group list Run rhoas kafka consumer-group describe I have used the latest version of the SDK. We need to wait to make sure that it is currently deployed to production @rkpattnaik780 FYI @mikeedgar API is working fine and was tested end to end
2025-04-01T06:40:12.740482
2021-04-21T10:06:53
863700015
{ "authors": [ "wtrocki" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10143", "repo": "redhat-developer/app-services-operator", "url": "https://github.com/redhat-developer/app-services-operator/issues/189" }
gharchive/issue
Provide simplified ability to switch between staging and production environments We can add an env variable into OLM as an example (it can be empty) and then scripts can patch it (as discussed with @b1zzu) I will verify if that really works and document this in the contributing docs Added the env variable directly to the OLM
2025-04-01T06:40:12.750950
2024-01-31T11:00:38
2109837968
{ "authors": [ "Srivaralakshmi", "anandf" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10144", "repo": "redhat-developer/gitops-operator", "url": "https://github.com/redhat-developer/gitops-operator/pull/650" }
gharchive/pull-request
doc: OpenShift argocd CLI client command reference documentation What type of PR is this? /kind documentation What does this PR do / why we need it: Have you updated the necessary documentation? [ ] Documentation update is required by this PR. [ ] Documentation has been updated. Which issue(s) this PR fixes: Fixes #? Test acceptance criteria: [ ] Unit Test [ ] E2E Test How to test changes / Special notes to the reviewer: @anandf How about the updates to these sections? Update these existing sections: Creating an application by using the oc tool https://docs.openshift.com/gitops/1.11/argocd_applications/deploying-a-spring-boot-application-with-argo-cd.html#creating-an-application-by-using-the-oc-tool_deploying-a-spring-boot-application-with-argo-cd https://docs.openshift.com/gitops/1.11/declarative_clusterconfig/configuring-an-openshift-cluster-by-deploying-an-application-with-cluster-configurations.html#creating-an-application-by-using-the-oc-tool_configuring-an-openshift-cluster-by-deploying-an-application-with-cluster-configurations Synchronizing your application with your Git repository https://docs.openshift.com/gitops/1.11/declarative_clusterconfig/configuring-an-openshift-cluster-by-deploying-an-application-with-cluster-configurations.html#synchronizing-your-application-application-with-your-git-repository_configuring-an-openshift-cluster-by-deploying-an-application-with-cluster-configurations @anandf Can we add some content for these sections, if they are relevant at all? About the OpenShift argo cd CLI (I think this is a nice to have section) Logging in to the OpenShift argo cd CLI using a web browser usage instructions - information that must go as admonitions such as Note, Important, Tip, caution, or warning. @anandf How about the updates to these sections? 
Update these existing sections: Creating an application by using the oc tool https://docs.openshift.com/gitops/1.11/argocd_applications/deploying-a-spring-boot-application-with-argo-cd.html#creating-an-application-by-using-the-oc-tool_deploying-a-spring-boot-application-with-argo-cd https://docs.openshift.com/gitops/1.11/declarative_clusterconfig/configuring-an-openshift-cluster-by-deploying-an-application-with-cluster-configurations.html#creating-an-application-by-using-the-oc-tool_configuring-an-openshift-cluster-by-deploying-an-application-with-cluster-configurations Synchronizing your application with your Git repository https://docs.openshift.com/gitops/1.11/declarative_clusterconfig/configuring-an-openshift-cluster-by-deploying-an-application-with-cluster-configurations.html#synchronizing-your-application-application-with-your-git-repository_configuring-an-openshift-cluster-by-deploying-an-application-with-cluster-configurations Added sections for creating and syncing an app using the CLI in both normal and core modes. @anandf Can we add some content for these sections, if they are relevant at all? Logging in to the OpenShift argo cd CLI using a web browser -> This is not applicable. One has to log in via the CLI itself. It's not possible to log in via a web browser the way it is with oc login
2025-04-01T06:40:12.759820
2022-05-31T12:18:45
1253785178
{ "authors": [ "kadel", "rm3l" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10145", "repo": "redhat-developer/odo", "url": "https://github.com/redhat-developer/odo/issues/5776" }
gharchive/issue
odo dev: add flag to run a non-default build command. User Story As an odo user I want to be able to execute an alternative build command instead of the default one So that I can run my build with different options (different configuration, flags, runtime versions, etc.) without modifying the default command. Example: The Go devfile has the following default build command:

- exec:
    commandLine: GOCACHE=${PROJECT_SOURCE}/.cache go build main.go
    component: runtime
    group:
      isDefault: true
      kind: build
    workingDir: ${PROJECT_SOURCE}
  id: build

In some situations, I want to pass arguments to the linker to, for example, set the string value of a variable. This can be done by adding -ldflags="-X github.com/redhat-developer/odo/pkg/segment.writeKey=foo". Currently I have to edit the commandLine in order to be able to execute the build with different arguments. Instead of that, it would be nice to be able to add an extra build command like this:

- exec:
    commandLine: GOCACHE=${PROJECT_SOURCE}/.cache go build -ldflags="-X github.com/redhat-developer/odo/pkg/segment.writeKey=foo" main.go
    component: runtime
    group:
      isDefault: false
      kind: build
    workingDir: ${PROJECT_SOURCE}
  id: build-with-key

And then simply execute odo dev --build-command build-with-key. When I want to switch back to using the default command, I stop the running odo dev command and start it again, but this time without the --build-command flag. Acceptance Criteria [ ] odo dev should have a --build-command flag that controls what command is used to build the application. /kind user-story TODO (to not forget): As commented in [1]: we should try to harmonize the behaviour between commands in pkg/libdevfile/libdevfile.go. Deploy is using getDefaultCommand, while Build/Test/Run are using getCommandAssociatedToGroup [1] https://github.com/redhat-developer/odo/pull/5768#discussion_r895480523
2025-04-01T06:40:12.760897
2018-05-14T14:59:48
322855232
{ "authors": [ "kadel" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10146", "repo": "redhat-developer/odo", "url": "https://github.com/redhat-developer/odo/pull/452" }
gharchive/pull-request
fix release scripts - 'odo version' output changed 😞 forgot to change one place similar to #451
2025-04-01T06:40:12.762474
2022-04-25T08:56:52
1214188266
{ "authors": [ "fbricon", "slemeur" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10147", "repo": "redhat-developer/openshift-dd-ext", "url": "https://github.com/redhat-developer/openshift-dd-ext/issues/21" }
gharchive/issue
Provide tooltips on icons As a user, the icons themselves are not going to be sufficient to tell me what actions are behind each button. It would be nicer to provide tooltips on the buttons, so that the user can discover what the button will do without having to click on it.
2025-04-01T06:40:12.770318
2024-09-26T10:28:44
2550180431
{ "authors": [ "hmanwani-rh", "pabel-rh" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10148", "repo": "redhat-developer/red-hat-developers-documentation-rhdh", "url": "https://github.com/redhat-developer/red-hat-developers-documentation-rhdh/pull/538" }
gharchive/pull-request
RHIDP-3377: Adding cross-links to the Installation titles IMPORTANT: Do Not Merge - To be merged by Docs Team Only Version(s): 1.2, 1.3 Issue: RHIDP-3377 Reviews: [x] Docs review: @hmanwani-rh /cherry-pick release-1.3 /cherry-pick 1.2.x
2025-04-01T06:40:12.771856
2024-11-12T15:26:55
2652518906
{ "authors": [ "hmanwani-rh" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10149", "repo": "redhat-developer/red-hat-developers-documentation-rhdh", "url": "https://github.com/redhat-developer/red-hat-developers-documentation-rhdh/pull/698" }
gharchive/pull-request
Added discover content I’m creating this PR because the About RHDH title is already published on docs.redhat.com, but currently, it doesn’t contain any content. For now, I’m adding content to the page, which can be refined and improved later. cc @jmagak Closing this PR, since the title is now unpublished in Pantheon.
2025-04-01T06:40:12.824579
2016-06-17T10:17:34
160856652
{ "authors": [ "ALRubinger", "tnozicka", "xcoulon" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10150", "repo": "redhat-kontinuity/catapult", "url": "https://github.com/redhat-kontinuity/catapult/issues/128" }
gharchive/issue
Trigger a first build upon fling completion Create a new Build resource from the generated BuildConfig resource in the OpenShift project. Would adding this trigger (https://github.com/tnozicka/openshift-templates/blob/master/pipeline-template.yaml#L34) be enough? 6d85abfc3440d379e7135325f7005bbf4b80e57c
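For reference, a BuildConfig trigger of the kind linked above could be sketched as follows. This is only an illustration of the OpenShift trigger mechanism — the resource name and the choice of trigger types are assumptions, not taken from the linked template:

```yaml
# Sketch of a BuildConfig trigger stanza (values are illustrative):
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: generated-app
spec:
  triggers:
  - type: ConfigChange   # fires a first build when the BuildConfig is created
  - type: ImageChange    # rebuilds when the referenced builder image updates
```

A ConfigChange trigger is the piece that directly addresses "trigger a first build upon completion", since it fires as soon as the BuildConfig resource exists.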
2025-04-01T06:40:12.842182
2021-11-14T20:23:45
1053032322
{ "authors": [ "tonytcampbell", "vfunction" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10151", "repo": "redhat-openshift-ecosystem/certified-operators", "url": "https://github.com/redhat-openshift-ecosystem/certified-operators/pull/119" }
gharchive/pull-request
operator vfunction-server-operator (v2.3.594) New operator bundle Name: vfunction-server-operator Version: v2.3.594 Certification project: 5f64dc1e0a4d06443d2818ad Test result URL: https://catalog.redhat.com/api/containers/v1/projects/certification/test-results/id/61917043d115ef379a598e5e Test logs URL: https://catalog.redhat.com/api/containers/v1/projects/certification/artifacts/id/61917042d6b39d004292bad1 Hi, I'm 100% sure that my CI tests passed okay (even 3 times...) on my local OpenShift cluster with the most up-to-date operator-ci-pipeline.yml and tasks from the operator-pipelines repo. I did have some issues with the timeout being exceeded on both ScorecardBasicSpecCheck & ScorecardOlmSuiteCheck before I updated. (I couldn't access the test result URL or the logs URL above to see the exact errors for this run - to be sure they are the same, though.) Before starting the long journey of opening a support case for this, I just wanted to make sure that the errors are not on your end, e.g. the test cluster not being updated correctly. Could you please verify that tests can pass at all now for ScorecardBasicSpecCheck & ScorecardOlmSuiteCheck on your testing cluster? Thanks. Now I can also see in the logs that both ScorecardBasicSpecCheck and ScorecardOlmSuiteCheck failed on the 30-second timeout (as I expected and mentioned in my previous comment). As I also mentioned - I had this error too until I pulled a new version from operator-pipelines and applied all pipelines and tasks on the OpenShift cluster.
From the last Perflight.log: time="2021-11-15T13:58:35Z" level=info msg="running check: ScorecardBasicSpecCheck" time="2021-11-15T13:58:35Z" level=debug msg="Running operator-sdk scorecard check for quay.io/operator-pipeline-prod/vfunction-server-operator:v2.3.594" time="2021-11-15T13:58:35Z" level=debug msg="--selector=[test=basic-check-spec-test]" time="2021-11-15T13:58:35Z" level=trace msg="running scorecard with the following invocation[operator-sdk scorecard --output json --selector=test=basic-check-spec-test --kubeconfig /tmp/kubeconfig-3421170286 --namespace default --service-account default --config /tmp/scorecard-test-config-3472466720.yaml --verbose /tmp/preflight-3179944778/fs]" time="2021-11-15T13:59:05Z" level=error msg="stdout: " time="2021-11-15T13:59:05Z" level=error msg="stderr: time="2021-11-15T13:58:35Z" level=debug msg="Debug logging is set"\nError: error running tests context deadline exceeded\nUsage:\n operator-sdk scorecard [flags]\n\nFlags:\n -c, --config string path to scorecard config file\n -h, --help help for scorecard\n --kubeconfig string kubeconfig path\n -L, --list Option to enable listing which tests are run\n -n, --namespace string namespace to run the test images in\n -o, --output string Output format for results. Valid values: text, json, xunit (default "text")\n -l, --selector string label selector to determine which tests are run\n -s, --service-account string Service account to use for tests (default "default")\n -x, --skip-cleanup Disable resource cleanup after tests are run\n -b, --storage-image string Storage image to be used by the Scorecard pod (default "docker.io/library/busybox@sha256:c71cb4f7e8ececaffb34037c2637dc86820e4185100e18b4d02d613a9bd772af")\n -t, --test-output string Test output directory. 
(default "test-output")\n -u, --untar-image string Untar image to be used by the Scorecard pod (default "registry.access.redhat.com/ubi8@sha256:910f6bc0b5ae9b555eb91b88d28d568099b060088616eba2867b07ab6ea457c7")\n -w, --wait-time duration seconds to wait for tests to complete. Example: 35s (default 30s)\n\nGlobal Flags:\n --plugins strings plugin keys to be used for this subcommand execution\n --verbose Enable verbose logging\n\ntime="2021-11-15T13:59:05Z" level=fatal msg="error running tests context deadline exceeded"\n" time="2021-11-15T13:59:05Z" level=info msg="check completed: ScorecardBasicSpecCheck" ERROR="failed to run operator-sdk scorecard: exit status 1" result="failed to run operator-sdk scorecard: exit status 1" time="2021-11-15T13:59:05Z" level=info msg="running check: ScorecardOlmSuiteCheck" time="2021-11-15T13:59:05Z" level=debug msg="Running operator-sdk scorecard Check for quay.io/operator-pipeline-prod/vfunction-server-operator:v2.3.594" time="2021-11-15T13:59:05Z" level=debug msg="--selector=[suite=olm]" time="2021-11-15T13:59:05Z" level=trace msg="running scorecard with the following invocation[operator-sdk scorecard --output json --selector=suite=olm --kubeconfig /tmp/kubeconfig-3421170286 --namespace default --service-account default --config /tmp/scorecard-test-config-2813129432.yaml --verbose /tmp/preflight-3179944778/fs]" time="2021-11-15T13:59:36Z" level=error msg="stdout: " time="2021-11-15T13:59:36Z" level=error msg="stderr: time="2021-11-15T13:59:05Z" level=debug msg="Debug logging is set"\nError: error running tests context deadline exceeded\nUsage:\n operator-sdk scorecard [flags]\n\nFlags:\n -c, --config string path to scorecard config file\n -h, --help help for scorecard\n --kubeconfig string kubeconfig path\n -L, --list Option to enable listing which tests are run\n -n, --namespace string namespace to run the test images in\n -o, --output string Output format for results. 
Valid values: text, json, xunit (default "text")\n -l, --selector string label selector to determine which tests are run\n -s, --service-account string Service account to use for tests (default "default")\n -x, --skip-cleanup Disable resource cleanup after tests are run\n -b, --storage-image string Storage image to be used by the Scorecard pod (default "docker.io/library/busybox@sha256:c71cb4f7e8ececaffb34037c2637dc86820e4185100e18b4d02d613a9bd772af")\n -t, --test-output string Test output directory. (default "test-output")\n -u, --untar-image string Untar image to be used by the Scorecard pod (default "registry.access.redhat.com/ubi8@sha256:910f6bc0b5ae9b555eb91b88d28d568099b060088616eba2867b07ab6ea457c7")\n -w, --wait-time duration seconds to wait for tests to complete. Example: 35s (default 30s)\n\nGlobal Flags:\n --plugins strings plugin keys to be used for this subcommand execution\n --verbose Enable verbose logging\n\ntime="2021-11-15T13:59:36Z" level=fatal msg="error running tests context deadline exceeded"\n" time="2021-11-15T13:59:36Z" level=info msg="check completed: ScorecardOlmSuiteCheck" ERROR="failed to run operator-sdk scorecard: exit status 1" result="failed to run operator-sdk scorecard: exit status 1" Thank you. We are looking into this on our side. Hi, Any news regarding this? Thanks.
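As a side note, the scorecard help text quoted in the log above shows a -w/--wait-time flag with a default of 30s, which is exactly the "context deadline exceeded" window being hit. Requesting a longer timeout would look roughly like this — a sketch based only on that quoted help output; the 300s value and the bundle path are illustrative:

```
operator-sdk scorecard \
  --selector=test=basic-check-spec-test \
  --wait-time 300s \
  <path-to-bundle-dir>
```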
2025-04-01T06:40:12.855418
2022-07-14T23:46:52
1305416246
{ "authors": [ "framework-automation", "selvamt94" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10152", "repo": "redhat-openshift-ecosystem/community-operators-prod", "url": "https://github.com/redhat-openshift-ecosystem/community-operators-prod/pull/1414" }
gharchive/pull-request
operator neuvector-community-operator (1.3.5) Signed-off-by: selvamt94 <EMAIL_ADDRESS> Thanks for submitting your Operator. Please check the below list before you create your Pull Request. New Submissions [x] Are you familiar with our contribution guidelines? [x] Have you packaged and deployed your Operator for Operator Framework? [x] Is your submission signed? [x] Is operator icon set? Updates to existing Operators [x] Did you create a ci.yaml file according to the update instructions? [x] Is your new CSV pointing to the previous version with the replaces property if you chose replaces-mode via the updateGraph property in ci.yaml? [x] Is your new CSV referenced in the appropriate channel defined in the package.yaml or annotations.yaml ? [x] Have you tested an update to your Operator when deployed via OLM? [x] Is your submission signed? Your submission should not [x] Modify more than one operator [x] Modify an Operator you don't own [x] Rename an operator - please remove and add with a different name instead [x] Modify any files outside the above mentioned folders [x] Contain more than one commit. Please squash your commits. Operator Description must contain (in order) [x] Description about the managed Application and where to find more information [x] Features and capabilities of your Operator and how to use it [x] Any manual steps about potential pre-requisites for using your Operator Operator Metadata should contain [x] Human readable name and 1-liner description about your Operator [x] Valid category name1 [x] One of the pre-defined capability levels2 [x] Links to the maintainer, source code and documentation [x] Example templates for all Custom Resource Definitions intended to be used [x] A quadratic logo Remember that you can preview your CSV here. -- 1 If you feel your Operator does not fit any of the pre-defined categories, file an issue against this repo and explain your need 2 For more information see here /merge possible /merge possible
2025-04-01T06:40:12.871475
2023-10-17T13:57:57
1947502468
{ "authors": [ "framework-automation", "git-hyagi" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10153", "repo": "redhat-openshift-ecosystem/community-operators-prod", "url": "https://github.com/redhat-openshift-ecosystem/community-operators-prod/pull/3465" }
gharchive/pull-request
operator pulp-operator (1.0.0-beta.2) Thanks for submitting your Operator. Please check the below list before you create your Pull Request. New Submissions [ ] Are you familiar with our contribution guidelines? [ ] Have you packaged and deployed your Operator for Operator Framework? [ ] Have you tested your Operator in all supported installation modes? [ ] Have you considered whether you want to use semantic versioning order? [ ] Is your submission signed? [ ] Is operator icon set? Updates to existing Operators [ ] Did you create a ci.yaml file according to the update instructions? [ ] Is your new CSV pointing to the previous version with the replaces property if you chose replaces-mode via the updateGraph property in ci.yaml? [ ] Is your new CSV referenced in the appropriate channel defined in the package.yaml or annotations.yaml ? [ ] Have you tested an update to your Operator when deployed via OLM? [ ] Is your submission signed? Your submission should not [ ] Modify more than one operator [ ] Modify an Operator you don't own [ ] Rename an operator - please remove and add with a different name instead [ ] Modify any files outside the above mentioned folders [ ] Contain more than one commit. Please squash your commits. Operator Description must contain (in order) [ ] Description of the managed Application and where to find more information [ ] Features and capabilities of your Operator and how to use it [ ] Any manual steps about potential pre-requisites for using your Operator Operator Metadata should contain [ ] Human readable name and 1-liner description about your Operator [ ] Valid category name1 [ ] One of the pre-defined capability levels2 [ ] Links to the maintainer, source code and documentation [ ] Example templates for all Custom Resource Definitions intended to be used [ ] A quadratic logo Remember that you can preview your CSV here. 
-- 1 If you feel your Operator does not fit any of the pre-defined categories, file an issue against this repo and explain your need 2 For more information see here /merge possible
2025-04-01T06:40:12.872605
2023-11-10T20:59:55
1988387321
{ "authors": [ "djzager", "framework-automation" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10154", "repo": "redhat-openshift-ecosystem/community-operators-prod", "url": "https://github.com/redhat-openshift-ecosystem/community-operators-prod/pull/3595" }
gharchive/pull-request
operator [CI] konveyor-operator Just switching to use semver-mode. /merge possible /merge possible
2025-04-01T06:40:12.882079
2023-12-18T13:04:06
2046624214
{ "authors": [ "iblancasa" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10155", "repo": "redhat-openshift-ecosystem/community-operators-prod", "url": "https://github.com/redhat-openshift-ecosystem/community-operators-prod/pull/3757" }
gharchive/pull-request
operator jaeger (1.52.0) Thanks for submitting your Operator. Please check the below list before you create your Pull Request. New Submissions [ ] Are you familiar with our contribution guidelines? [ ] Are you familiar with our operator pipeline? [ ] Have you tested your Operator in all supported installation modes? [ ] Have you considered whether you want to use semantic versioning order? [ ] Is your submission signed? [ ] Is operator icon set? Your submission should not [ ] Add more than one operator bundle per PR [ ] Modify any operator [ ] Rename an operator [ ] Modify any files outside the above mentioned folders [ ] Contain more than one commit. Please squash your commits. Operator Description must contain (in order) [ ] Description of the managed Application and where to find more information [ ] Features and capabilities of your Operator and how to use it [ ] Any manual steps about potential pre-requisites for using your Operator Operator Metadata should contain [ ] Human readable name and 1-liner description about your Operator [ ] Valid category name1 [ ] One of the pre-defined capability levels2 [ ] Links to the maintainer, source code and documentation [ ] Example templates for all Custom Resource Definitions intended to be used [ ] A quadratic logo Remember that you can preview your CSV here. -- 1 If you feel your Operator does not fit any of the pre-defined categories, file an issue against this repo and explain your need 2 For more information see here /pipeline restart community-hosted-pipelin /pipeline restart community-hosted-pipeline
2025-04-01T06:40:12.884794
2023-07-13T15:04:52
1803228341
{ "authors": [ "jnunyez" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10156", "repo": "redhat-partner-solutions/vse-carslab-hub", "url": "https://github.com/redhat-partner-solutions/vse-carslab-hub/pull/85" }
gharchive/pull-request
[DRAFT]: Manifests to install preGA sync operator from custom operator catalog TODO: Understand where to place the added extra-manifests for day-2 installation as a policy. We need to figure out how to add those manifests as Policies or PolicyGenTemplate (PGT) objects referenced from the SiteConfig.
2025-04-01T06:40:12.898352
2022-10-11T07:03:43
1404110242
{ "authors": [ "FKolwa", "miyunari" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:10157", "repo": "redhatcloudx/rhelocator", "url": "https://github.com/redhatcloudx/rhelocator/pull/37" }
gharchive/pull-request
add gcp image list command to CLI Hey @major @FKolwa , I am a bit confused right now. We assign the registered GitHub secret secrets.GOOGLE_APPLICATION_CREDENTIALS to an environment variable GCP_APP_CREDENTIALS, but we never use it. In the get_google_images function we create a compute_v1.ImagesClient client - is it supposed to use those credentials? I went to the Google docs, but they didn't really help me. :) Can you give me some hints? :sweat_smile: @miyunari Yup that is correct! It isn't necessary to reference the ENV in code. In fact you won't find any of the other secrets either (like AWS_ACCESS_KEY_ID). Most (if not all) cloud provider CLIs authenticate using local configurations that are loaded into ENVs at runtime. Usually you would use your user credentials to sign into Google Cloud, but with an automated service like this a service account can be used. The fact that "GOOGLE_APPLICATION_CREDENTIALS" is set for the workflow step means that the gcloud cli will be able to read it and use it to authenticate with the cloud provider API. I'll leave you a link in case you want to read more about Google Application Default Credentials (ADC): https://cloud.google.com/docs/authentication/provide-credentials-adc#local-key A small hint though: The env needs to be mapped to "GOOGLE_APPLICATION_CREDENTIALS" in the workflow context as well for it to properly work. Hope this helps a little! Thank you @FKolwa ! That was exactly the information I was looking for :smile: But now, there is another issue :sweat_smile: Unfortunately it's now unclear to me how to test my changes. I got this reference from major, but I don't see the correlation: https://github.com/redhatcloudx/rhelocator/blob/b97967e77354d0331f0f7bc4a607c00b4b1eea16/tests/test_cli.py#L77-L103 @miyunari Haha yes this is a bit confusing tbh! The gcloud implementation is pretty rough at this point.
get_google_images currently queries all images in the rhel-cloud project and returns everything that isn't deprecated. In the scope of this ticket, my requirement for an end-to-end test would be: query the correct API endpoint by calling get_google_images via 'runner.invoke', parse the JSON data, and confirm that none of the returned images contain the first-level key status with the value 'DEPRECATED'. In this case you can copy/paste most of what @major wrote for the azure test! For the offline test: copy the structure of the e2e test you just wrote, then create a new mock for the google images in conftest (you can take a look at the AWS mockups; you need to create a new list of mocked images in a JSON format and create a new fixture that is passed to your offline test). Now for the tricky part: How do you know what data structure to expect from the Google API? Well, if you call gcloud and query for images within the 'rhel-cloud' project you will receive something like this: { "architecture": "X86_64", "archiveSizeBytes": "4184623872", "creationTimestamp": "2022-09-20T16:32:45.492-07:00", "description": "Red Hat, Red Hat Enterprise Linux, 9, x86_64 built on 20220920, supports Shielded VM features", "diskSizeGb": "20", "family": "rhel-9", "guestOsFeatures": [ { "type": "UEFI_COMPATIBLE" }, { "type": "VIRTIO_SCSI_MULTIQUEUE" }, { "type": "SEV_CAPABLE" }, { "type": "GVNIC" } ], "id": "2043557223711896434", "kind": "compute#image", "labelFingerprint": "42WmSpB8rSM=", "licenseCodes": [ "7883559014960410759" ], "licenses": [ "https://www.googleapis.com/compute/beta/projects/rhel-cloud/global/licenses/rhel-9-server" ], "name": "rhel-9-v20220920", "rawDisk": { "containerType": "TAR", "source": "" }, "rolloutOverride": { "defaultRolloutTime": "2022-09-25T15:32:42Z", "locationRolloutPolicies": { "zones/asia-east1-a": "2022-09-22T04:32:42Z", ....
"zones/us-west4-c": "2022-09-25T04:32:42Z" } }, "selfLink": "https://www.googleapis.com/compute/beta/projects/rhel-cloud/global/images/rhel-9-v20220920", "sourceType": "RAW", "status": "READY", "storageLocations": [ "eu", "asia", "us" ] } At this point we don't extract any specific information from this returned data (like we do for AWS) and this is not within the scope of your ticket so feel free to create a minimal test mockup version of this data structure that only contains something like { "status": "READY" } Oh boy, that was my mistake. I must have been looking at two things at the same time and put the wrong variable name in the actions workflow. 🤦🏻‍♂️ @major I think you were right :smile: . I shortened the variable name, because I thought we have to store it in config.py and use it somewhere :woman_facepalming: Oh, I see I have some merge conflicts, will try to resolve