Dataset columns:
added: string (date; 2025-04-01 04:05:38 to 2025-04-01 07:14:06)
created: timestamp[us] (date; 2001-10-09 16:19:16 to 2025-01-01 03:51:31)
id: string (lengths 4 to 10)
metadata: dict
source: string (2 classes)
text: string (lengths 0 to 1.61M)
2025-04-01T04:35:32.648943
2021-06-04T01:02:51
910976454
{ "authors": [ "jstarry", "t-nelson" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10914", "repo": "solana-labs/solana-program-library", "url": "https://github.com/solana-labs/solana-program-library/issues/1856" }
gharchive/issue
spl-token-cli: no signers found error is returned for null signer

Problem

The owner argument is used as the default signer for many token commands. Some of those commands require owner to be a signer, but the owner signer may actually be a NullSigner, which doesn't produce a valid signature. When this happens, an unhelpful "no signers found" error is returned to the user.

Proposed Changes

Remove the owner global argument and add more explicit argument names to the commands that used it.

@t-nelson I assume we need to preserve backwards compatibility here?

> @t-nelson I assume we need to preserve backwards compatibility here?

Yeah that's why I haven't already yanked out owner :slightly_smiling_face:

When is owner a signer, we aren't sign-only/offline and NullSigner valid?

owner was made to be allowed to be NullSigner because it is used in cases where it doesn't actually sign

> Yeah that's why I haven't already yanked out owner 🙂

Ok, I'll add new names that alias to owner then if that sounds good to you

> When is owner a signer, we aren't sign-only/offline and NullSigner valid?
> owner was made to be allowed to be NullSigner because it is used in cases where it doesn't actually sign

I think it's intended to be the signer when minting tokens but I could be wrong. There isn't a mint authority option. I think it's also intended to be the signer for wrap but I could be wrong about that too.

> I think it's intended to be the signer when minting tokens but I could be wrong. There isn't a mint authority option. I think it's also intended to be the signer for wrap but I could be wrong about that too.

So it's a matter of not failing when the user passes a pubkey instead of a keypair when we aren't offline?

> So it's a matter of not failing when the user passes a pubkey instead of a keypair when we aren't offline?

Yes, exactly. I looked through the code and didn't see any quick fix for that.

> Yes, exactly. I looked through the code and didn't see any quick fix for that.

Can we improve the SignerFromPathConfig::allow_null_signer logic when resolving owner?

Yeah but it requires a change from clap utils I think. Only way to tell that we got a NullSigner back is to sign something and check for default signature.

> Yeah but it requires a change from clap utils I think. Only way to tell that we got a NullSigner back is to sign something and check for default signature.

Mmm... I was hoping we knew ahead of time whether a NullSigner was appropriate in the given case and let resolution fail.
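A minimal sketch of the "sign something and check for the default signature" heuristic described above. The helper name is hypothetical; only the solana-sdk Signer trait and Signature type are assumed:

```rust
use solana_sdk::signature::{Signature, Signer};

// Heuristic from the discussion: a NullSigner "signs" by returning the
// default (all-zero) signature, so probing with a throwaway message and
// comparing against Signature::default() reveals it after resolution.
fn looks_like_null_signer(signer: &dyn Signer) -> bool {
    match signer.try_sign_message(b"probe") {
        Ok(sig) => sig == Signature::default(),
        Err(_) => false,
    }
}
```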
2025-04-01T04:35:32.741147
2022-12-16T12:04:09
1500099827
{ "authors": [ "mcintyre94", "script8888" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10917", "repo": "solana-labs/wallet-adapter", "url": "https://github.com/solana-labs/wallet-adapter/issues/684" }
gharchive/issue
Phantom iOS connect requires an extra tap

Describe the bug

Opening this as an issue because there's some fiddly stateful logic to work through.

In #673 we added a redirect to Phantom's wallet browser when connecting to Phantom in iOS Safari. This change introduced an autoConnect function, which is called both when the wallet is first restored from localstorage and when a new wallet is selected, and just replaces existing logic to call MWA autoconnect or adapter connect. In Phantom this autoConnect is set to a no-op on iOS Safari, because when the page first loads we can't redirect to the Phantom universal link (nor would we want to, it'd make it impossible to disconnect from the dapp in Safari).

This means that we need to select Phantom in the wallet modal (this triggers the no-op), and then click the connect/multi button to trigger the redirect. We should be able to redirect on select, because this is a result of user action. But currently it's a no-op so we don't.

To Reproduce

Steps to reproduce the behavior:
- Open a dapp using the latest dependencies (eg a starter from this repo) in iOS Safari
- Connect to Phantom

Expected behavior

Selecting the Phantom wallet should be sufficient to trigger the redirect.

Screenshots

Current UX: https://user-images.githubusercontent.com/1711350/208091596-8eb85722-b6c4-4e29-9cbc-906c5050979d.mov

Additional context

Basically the data flow going on here is:

On page load:
- WalletProvider
  - Get wallet name from localstorage (initial value of walletName)
  - Set adapter from wallet name
- WalletProviderBase
  - Set wallet from adapter (prop)
  - Run a useEffect when wallet changes which calls autoConnect
- Here the behaviour is correct: we want a no-op for Phantom on iOS

On wallet select:
- WalletProvider
  - Update wallet name on select (call setWalletName)
  - Set adapter from wallet name
- WalletProviderBase
  - Set wallet from adapter (prop)
  - Run a useEffect when wallet changes which calls autoConnect
- Here it's incorrect: we'd prefer to call connect

Considerations:
- The autoConnect function is by default just a call to connect. But Phantom and MWA have custom implementations. For MWA the current behaviour is correct in both cases and needs to stay the same. For Phantom it's incorrect only on wallet select.
- There are no other calls to setWalletName except to set it to null.

Suggested solutions (a sketch of the first one follows this record):
- We could extend the autoConnect function call to pass an indication of whether the wallet has been selected automatically or by user action. Then the Phantom adapter could differentiate based on that. We could probably achieve this by having the select function set some ref that indicates a selection has been made. This would be a one-way switch: once a select call has been made it'd be set to true and all future changes would be assumed to be by user select. This does rely on the fact that no other code sets another wallet elsewhere though. If we added any other code setting the wallet without user input it'd have to reset the ref.
- We could try to remove some of the reactive code, and write an explicit behaviour on first connect that only runs once (when we have a wallet to connect to). And then write an explicit function for select that also connects, rather than being through this chain of updates. But a lot of code relies on the intermediate values (adapter and wallet), so we'd still have to maintain them in all cases. And I think this would probably be very difficult to do without breaking anything.

How did you get it to request connect in the Phantom app?
mine just takes me to the Phantom website, can you show code please? @script8888 We recently updated Wallet Adapter to add this: https://github.com/solana-labs/wallet-adapter/pull/673 You'll need to update the wallet-adapter packages and then it should just work. If you're using NPM then npm upgrade @solana/wallet-adapter-base @solana/wallet-adapter-react @solana/wallet-adapter-react-ui @solana/wallet-adapter-wallets should do it :) Awesome, I am on the latest version though. I read through #673 but can't seem to understand how to use the autoConnect. Currently from the browser, I'm redirected to the Phantom app when I click on the button, but I'm not given the modal to connect the wallet in the app as seen at 0:08 of your video. @script8888 would you mind opening a question on solana.stackexchange.com please? If you can include your code (or better yet a repro in CodeSandbox or similar) hopefully we'll be able to get to the bottom of it. We'll need more detail than makes sense to go into here though. sure https://solana.stackexchange.com/questions/4882/how-to-initiate-connect-wallet-on-phantom-mobile-app This is a video of how it currently behaves
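A sketch of the first suggested solution (the one-way select flag). Everything here is illustrative — the real WalletProvider internals differ, and the real autoConnect currently takes no arguments:

```ts
import { useCallback, useEffect, useRef, useState } from 'react';

// Illustrative adapter shape; the real one lives in @solana/wallet-adapter-base.
type Adapter = { autoConnect(opts: { userSelected: boolean }): Promise<void> } | null;

function useWalletSelect(adapter: Adapter) {
    const [walletName, setWalletName] = useState<string | null>(null);
    // One-way switch: flipped the first time the user selects a wallet,
    // so a user-driven select can be told apart from a localStorage restore.
    const userSelectedRef = useRef(false);

    const select = useCallback((name: string) => {
        userSelectedRef.current = true; // never reset back to false
        setWalletName(name);
    }, []);

    useEffect(() => {
        // Hypothetical extended autoConnect: Phantom on iOS Safari could
        // redirect when userSelected is true, and stay a no-op otherwise.
        adapter?.autoConnect({ userSelected: userSelectedRef.current }).catch(() => {});
    }, [adapter]);

    return { walletName, select };
}
```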
2025-04-01T04:35:32.746344
2017-04-02T17:26:28
218779197
{ "authors": [ "HerodotThukydides", "soldair" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10918", "repo": "soldair/node-walkdir", "url": "https://github.com/soldair/node-walkdir/issues/30" }
gharchive/issue
Async Problems

Hey, I am quite new to Node and have slight trouble using your module asynchronously. I tried to get help in books and Stack Overflow, but I don't know how to deal with "walk". What I would like to do is fill an array with data (which comes with the help of walkdir) and return it after the walkdir is finished and the data is processed.

```js
var myThing = (parameter) => {
  // should be filled
  var data = [];
  walk(directory, function(path, stat) {
    [...] // look at a file, process it and eventually push into "data"
    data.push({"path" : path, [...] });
    [...]
  });
  // after "walk" finished, the filled "data" should be returned
  return data;
}
```

hey. welcome! it totally makes sense you are having issues here. This module is quite old and has gathered a few idiosyncrasies of its own

```js
walk(directory, function(path, stat) {
  // do stuff
}).on('end', function() {
  // im all done!
})
```

also if you use async functions in javascript you can't use return. (you can if you use a version of node that supports async/await but that's a whole other js lesson) so you'll have to write myThing a little differently.

```js
var myThing = function(directory, callback) {
  var data = []
  walk(directory, function(path, stat) {
    // do stuff
    data.push({"path" : path, stat: stat });
  }).on('end', function() {
    // im all done!
    callback(false, data)
  }).on('error', function(err) {
    callback(err)
  })
}
```

now when you want to call myThing you get the result from the callback

```js
myThing('/', function(err, paths) {
  if (err) console.log('there was an error ', err)
  console.log('all done got paths ', paths)
})
```

Wow, thank you so much for taking the time and explaining it so well. I will try it out!
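Since the thread mentions async/await as "a whole other js lesson", here is a hedged sketch of the same pattern wrapped in a Promise (illustrative only; assumes a Node version with async/await support and the walkdir event API shown above):

```js
const walk = require('walkdir');

// Wrap the event-based walk in a Promise so callers can simply await it.
function collectPaths(directory) {
  return new Promise((resolve, reject) => {
    const data = [];
    walk(directory, (path, stat) => {
      data.push({ path: path, stat: stat });
    }).on('end', () => resolve(data))
      .on('error', (err) => reject(err));
  });
}

// Usage (inside an async function):
// const paths = await collectPaths('/');
```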
2025-04-01T04:35:32.766058
2015-10-30T17:03:51
114312454
{ "authors": [ "barbieri", "ceolin", "edersondisouza" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10919", "repo": "solettaproject/soletta", "url": "https://github.com/solettaproject/soletta/pull/1005" }
gharchive/pull-request
Oauth v2 from v1

Removed commit that adds default pages for common errors
[oauth] Fix url callback
[oauth] Use urandom
[oauth] Keep digests' reference (to delete on close)
[oauth] Removed authenticated port (using the error port)
[oauth] Fix possible null pointer dereferences
[server] Fix namespace validation
Replace namespace with basename because uncrustify complains
Several small improvements/fixes spotted on review

@barbieri I've checked microhttpd and it seems to have no way to check which interface the request came in on. The given url has neither the hostname nor the port (it starts from /). I'll see about adding it to microhttpd.

I guess you'd need to use the connection fd and get the address from it:

```c
const union MHD_ConnectionInfo *info =
    MHD_get_connection_info(conn, MHD_CONNECTION_INFO_CONNECTION_FD);
union {
    struct sockaddr_in6 in6;
    struct sockaddr_in in4;
} addr;
socklen_t addrlen = sizeof(addr);

r = getsockname(info->connect_fd, (struct sockaddr *)&addr, &addrlen);
if (r < 0 || addrlen > sizeof(addr))
    return -EINVAL;

if (addr.in4.sin_family == AF_INET) {
    /* use addr.in4 ... */
} else if (addr.in6.sin6_family == AF_INET6) {
    /* use addr.in6 ... */
}
```

@edersondisouza v1_parse_response is only called from v1_request_finished, which is the callback of a request

@ceolin so, there's no way one could fake an OAuth response, send it to the device, and have soletta try to parse it? If so, it's ok. I was concerned whether something sent via the network could trick soletta into doing something unexpected.

@edersondisouza actually someone can make a request to the authorize_token url and give us bogus data, but the worst that will happen is the data gets invalidated, because the service will reject that token

@barbieri https://github.com/ceolin/soletta/commit/7efb474863f8b5e1912ec1b596b3bb3e9e3f1cb8 is that api ok for you? I intend to push the oauth branch with this additional commit.

I think it should be struct sol_network_addr_to_str instead of a char *.

pushed with the suggestion. Thanks guys for the review
2025-04-01T04:35:32.807582
2015-06-30T20:18:36
92189634
{ "authors": [ "adammathys", "aldesantis" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10920", "repo": "solidusio-contrib/solidus_virtual_gift_card", "url": "https://github.com/solidusio-contrib/solidus_virtual_gift_card/issues/2" }
gharchive/issue
Update README Add information about current state of the extension. This has been brought back to life through #59 and is now passing on all maintained Solidus versions which, at the time of writing, are 2.5 to 2.9 and master, so we can keep the readme as is.
2025-04-01T04:35:32.852435
2017-06-26T16:01:19
238590705
{ "authors": [ "RalfJung", "oli-obk" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10923", "repo": "solson/miri", "url": "https://github.com/solson/miri/pull/218" }
gharchive/pull-request
Initial work towards checking const eval rules in miri cc #217 These checks are hard to test, because rustc will do them for us before miri ever runs. I don't entirely understand that const_env function. What does that have to do with constant evaluation? I somewhat expected a flag, which would also make it possible to manually test const_env mode on a given rs file. the function returns true if we are in a frame which computes a constant. It also needs a check for checking if any function further up the stack does this. The idea is that we first start checking miri's evaluation of constants within a miri evaluation of a normal program. Testing ctfe would require a new binary that doesn't process main, as main doesn't fulfill the ctfe requirement of being a const fn. The idea is that we first start checking miri's evaluation of constants within a miri evaluation of a normal program. Oh, I see. I didn't catch this. Btw, I assume this also rules out pointer arithmetic? (Like the stuff I did to make HashMap work.) Btw, I assume this also rules out pointer arithmetic? (Like the stuff I did to make HashMap work.) Yea, that definitely can't work right now.
2025-04-01T04:35:32.865686
2023-03-07T20:56:15
1614196721
{ "authors": [ "Lalaland", "guolin1" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10924", "repo": "som-shahlab/femr", "url": "https://github.com/som-shahlab/femr/issues/101" }
gharchive/issue
clmbr_compute_representations creates multiple patient representations for the same index time Describe the bug clmbr_compute_representations can sometimes create multiple patient representations for the same index time, leading to extra rows in the feature matrix or a mismatch between the number of patient representations and the number of labels. Steps to reproduce the bug Jupyter notebook for demonstration: /local-scratch/nigam/projects/lguo/femr-test/featurize.ipynb tested using piton_new3_extract Environment info conda activate /local-scratch/nigam/envs/lguo/femr (latest femr pull) I think I just fixed this with https://github.com/som-shahlab/femr/pull/102. Can you do me a favor and test your notebook from scratch again from the latest commit? looks good. Thanks!
2025-04-01T04:35:32.967090
2024-05-27T12:21:40
2319031040
{ "authors": [ "bibhuprasad-hcl", "divyagayathri-hcl" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10925", "repo": "sonic-net/sonic-pins", "url": "https://github.com/sonic-net/sonic-pins/pull/151" }
gharchive/pull-request
[gutil] Introduce bare-bones library for semantic versioning. PR Description : [gutil] Introduce bare-bones library for semantic versioning. Build Results : bibhuprasad@eaba4ee0062c:/sonic/src/sonic-p4rt/sonic-pins$ bazel build $BAZEL_BUILD_OPTS ... INFO: Analyzed 246 targets (2 packages loaded, 280 targets configured). INFO: Found 246 targets... INFO: Elapsed time: 27.581s, Critical Path: 23.09s INFO: 27 processes: 3 internal, 24 linux-sandbox. INFO: Build completed successfully, 27 total actions bibhuprasad@eaba4ee0062c:/sonic/src/sonic-p4rt/sonic-pins$ UT Results : bibhuprasad@eaba4ee0062c:/sonic/src/sonic-p4rt/sonic-pins$ bazel test $BAZEL_BUILD_OPTS gutil/... INFO: Analyzed 20 targets (0 packages loaded, 0 targets configured). INFO: Found 13 targets and 7 test targets... INFO: Elapsed time: 0.985s, Critical Path: 0.37s INFO: 5 processes: 1 internal, 4 linux-sandbox. INFO: Build completed successfully, 5 total actions //gutil:collections_test (cached) PASSED in 0.5s //gutil:io_test (cached) PASSED in 0.4s //gutil:status_matchers_test (cached) PASSED in 0.4s //gutil:proto_matchers_test PASSED in 0.4s //gutil:proto_test PASSED in 0.2s //gutil:table_entry_key_test PASSED in 0.1s //gutil:version_test PASSED in 0.2s Executed 4 out of 7 tests: 7 tests pass. INFO: Build completed successfully, 5 total actions bibhuprasad@eaba4ee0062c:/sonic/src/sonic-p4rt/sonic-pins$ bazel test $BAZEL_BUILD_OPTS p4_pdpi/... INFO: Analyzed 80 targets (0 packages loaded, 0 targets configured). INFO: Found 46 targets and 34 test targets... INFO: Elapsed time: 16.122s, Critical Path: 15.44s INFO: 21 processes: 34 linux-sandbox. INFO: Build completed successfully, 21 total actions //p4_pdpi:sequencing_util_test (cached) PASSED in 0.5s //p4_pdpi/netaddr:ipv4_address_and_network_address_test (cached) PASSED in 0.5s //p4_pdpi/netaddr:ipv6_address_test (cached) PASSED in 0.2s //p4_pdpi/netaddr:mac_address_test (cached) PASSED in 0.2s //p4_pdpi/string_encodings:bit_string_test (cached) PASSED in 0.4s //p4_pdpi/string_encodings:byte_string_test (cached) PASSED in 0.7s //p4_pdpi/string_encodings:decimal_string_test (cached) PASSED in 0.2s //p4_pdpi/string_encodings:decimal_string_test_runner (cached) PASSED in 0.1s //p4_pdpi/string_encodings:hex_string_test (cached) PASSED in 0.1s //p4_pdpi/string_encodings:hex_string_test_runner (cached) PASSED in 0.1s //p4_pdpi/string_encodings:readable_byte_string_test (cached) PASSED in 0.3s //p4_pdpi/testing:main_pd_test (cached) PASSED in 0.0s //p4_pdpi/testing:mock_p4_runtime_server_test (cached) PASSED in 0.3s //p4_pdpi/utils:annotation_parser_test (cached) PASSED in 0.4s //p4_pdpi:ir_tools_test PASSED in 0.3s //p4_pdpi/packetlib:packetlib_fuzzer_test PASSED in 15.4s //p4_pdpi/packetlib:packetlib_test PASSED in 0.1s //p4_pdpi/packetlib:packetlib_test_runner PASSED in 0.0s //p4_pdpi/packetlib:packetlib_unit_test PASSED in 0.8s //p4_pdpi/testing:helper_function_test PASSED in 0.4s //p4_pdpi/testing:info_test PASSED in 0.1s //p4_pdpi/testing:info_test_runner PASSED in 0.0s //p4_pdpi/testing:packet_io_test PASSED in 0.1s //p4_pdpi/testing:packet_io_test_runner PASSED in 0.0s //p4_pdpi/testing:rpc_test PASSED in 0.1s //p4_pdpi/testing:rpc_test_runner PASSED in 0.1s //p4_pdpi/testing:sequencing_test PASSED in 0.1s //p4_pdpi/testing:sequencing_test_runner PASSED in 0.1s //p4_pdpi/testing:sequencing_util_test PASSED in 0.0s //p4_pdpi/testing:sequencing_util_test_runner PASSED in 0.0s //p4_pdpi/testing:table_entry_gunit_test PASSED in 0.3s //p4_pdpi/testing:table_entry_test PASSED in 0.1s 
//p4_pdpi/testing:table_entry_test_runner PASSED in 0.1s //p4_pdpi/utils:ir_test PASSED in 0.3s Executed 20 out of 34 tests: 34 tests pass. INFO: Build completed successfully, 21 total actions bibhuprasad@eaba4ee0062c:/sonic/src/sonic-p4rt/sonic-pins$ bazel test $BAZEL_BUILD_OPTS p4rt_app/... INFO: Analyzed 106 targets (0 packages loaded, 0 targets configured). INFO: Found 68 targets and 38 test targets... INFO: Elapsed time: 27.580s, Critical Path: 5.04s INFO: 33 processes: 1 internal, 18 linux-sandbox, 14 local. INFO: Build completed successfully, 33 total actions //p4rt_app/sonic:app_db_acl_def_table_manager_test (cached) PASSED in 0.4s //p4rt_app/sonic:state_verification_test (cached) PASSED in 0.5s //p4rt_app/sonic/adapters:fake_sonic_db_table_test (cached) PASSED in 0.2s //p4rt_app/tests/lib:app_db_entry_builder_test (cached) PASSED in 0.1s //p4rt_app/utils:event_data_tracker_test (cached) PASSED in 0.1s //p4rt_app/utils:table_utility_test (cached) PASSED in 0.8s //p4rt_app/event_monitoring:app_state_db_port_table_event_test PASSED in 0.5s //p4rt_app/event_monitoring:app_state_db_send_to_ingress_port_table_event_test PASSED in 0.5s //p4rt_app/event_monitoring:config_db_node_cfg_table_event_test PASSED in 0.5s //p4rt_app/event_monitoring:config_db_port_table_event_test PASSED in 0.5s //p4rt_app/event_monitoring:debug_data_dump_events_test PASSED in 0.5s //p4rt_app/event_monitoring:state_verification_events_test PASSED in 0.5s //p4rt_app/p4runtime:ir_translation_test PASSED in 0.4s //p4rt_app/p4runtime:p4info_verification_schema_test PASSED in 0.4s //p4rt_app/p4runtime:p4info_verification_test PASSED in 0.4s //p4rt_app/p4runtime:packetio_helpers_test PASSED in 0.5s //p4rt_app/sonic:app_db_manager_test PASSED in 0.4s //p4rt_app/sonic:app_db_to_pdpi_ir_translator_test PASSED in 0.4s //p4rt_app/sonic:hashing_test PASSED in 0.4s //p4rt_app/sonic:packetio_impl_test PASSED in 0.3s //p4rt_app/sonic:packetio_port_test PASSED in 0.3s //p4rt_app/sonic:response_handler_test PASSED in 0.4s //p4rt_app/sonic:vrf_entry_translation_test PASSED in 0.4s //p4rt_app/tests:acl_table_test PASSED in 1.1s //p4rt_app/tests:action_set_test PASSED in 0.7s //p4rt_app/tests:api_access_test PASSED in 0.6s //p4rt_app/tests:arbitration_test PASSED in 0.7s //p4rt_app/tests:fixed_l3_tables_test PASSED in 2.4s //p4rt_app/tests:forwarding_pipeline_config_test PASSED in 3.4s //p4rt_app/tests:grpc_behavior_test PASSED in 5.0s //p4rt_app/tests:p4_constraints_test PASSED in 0.1s //p4rt_app/tests:p4_constraints_test_runner PASSED in 0.2s //p4rt_app/tests:p4_programs_test PASSED in 1.2s //p4rt_app/tests:packetio_test PASSED in 3.3s //p4rt_app/tests:port_name_and_id_test PASSED in 0.8s //p4rt_app/tests:response_path_test PASSED in 2.0s //p4rt_app/tests:role_test PASSED in 0.7s //p4rt_app/tests:state_verification_test PASSED in 0.9s Executed 32 out of 38 tests: 38 tests pass. INFO: Build completed successfully, 33 total actions bibhuprasad@eaba4ee0062c:/sonic/src/sonic-p4rt/sonic-pins$ Co-authored-by: Steffen Smolka<EMAIL_ADDRESS> @kishanps, Could you please review. Thanks.
2025-04-01T04:35:32.969896
2024-03-14T16:54:02
2186837047
{ "authors": [ "Giluerre", "vjardin" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10926", "repo": "sonic-net/sonic-platform-vpp", "url": "https://github.com/sonic-net/sonic-platform-vpp/pull/95" }
gharchive/pull-request
Update sonic-buildimage to the latest 202311 build: 4ae9a3d1e1 This pull request makes the following changes: Fixes issue: #94 I have created an issue for the solution I am submitting Hi @Giluerre, same as I did for pull request !92; I understand that your pull request cannot be handled if you do not sign the CLA, see the former links. /easycla Hi @vjardin, I believed that it should be alright since the EasyCLA pipeline was passing. Now it is all green (even the bot message). Thanks for letting me know.
2025-04-01T04:35:33.027519
2022-04-10T19:47:57
1199154066
{ "authors": [ "anhuwi", "daveking7" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10927", "repo": "sopra-fs22-group-07/server", "url": "https://github.com/sopra-fs22-group-07/server/pull/96" }
gharchive/pull-request
86 develop rest interface in gamecontroller added GameController and GameService Some parts open to discuss agree with the need to first test the GameController. Should I open a new task for that (and reference this branch)?

> agree with the need to first test the GameController. Should I open a new task for that (and reference this branch)?

I'd say so
2025-04-01T04:35:33.043091
2019-12-05T03:40:31
533104115
{ "authors": [ "scivola", "sorami" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10928", "repo": "sorami/sudachi.rs", "url": "https://github.com/sorami/sudachi.rs/issues/4" }
gharchive/issue
failed to fetch_dictionary.sh Because src/resources is missing, fetch_dictionary.sh fails to move the dictionary file into the directory. Putting an empty .keep file into the directory will resolve this problem. Thanks for reporting this!
2025-04-01T04:35:33.043978
2022-11-22T08:10:10
1459352330
{ "authors": [ "kamilsa" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10929", "repo": "soramitsu/kagome", "url": "https://github.com/soramitsu/kagome/issues/1418" }
gharchive/issue
Fix sending too many catch up requests When the KAGOME node sends too many catch-up requests, our reputation is decreased and no one sends us commit messages, which leads to lag in block finalization. We need to restrict sending too many catch-up requests. Also fix that we receive commits but do not apply them.
2025-04-01T04:35:33.048707
2019-09-16T19:17:27
494228855
{ "authors": [ "DarkDimius", "bradleybuda" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10930", "repo": "sorbet/sorbet", "url": "https://github.com/sorbet/sorbet/issues/1785" }
gharchive/issue
Make Ruby Method instances aware of their param and return types

Problem

This code fails type checking:

```ruby
# typed: strict
require 'cgi'

x = T.let(%w[foo bar baz], T::Array[String])
y = x.map { |k| CGI.escape(k) }
z = x.map(&CGI.method(:escape)) # error here
```

Proposed solution

I don't know enough about sorbet internals to know if this makes sense, but giving the Ruby Method type (which is the return value from CGI.method(:escape) in this example) type parameters similar to a Proc's parameters might allow Method#to_proc to do the right thing here.

I'd expect this to be super hard.
2025-04-01T04:35:33.059569
2020-05-11T10:03:20
615757048
{ "authors": [ "isaquebezerra", "senior-js", "tacman" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10931", "repo": "sorich87/bootstrap-tour", "url": "https://github.com/sorich87/bootstrap-tour/issues/739" }
gharchive/issue
module not working with Bootstrap 4.3.1 and jQuery 3.4.1 I see this error: Same error here try this plugin: https://kamranahmed.info/driver.js/ @isaquebezerra Thanks, although I found Bootstrap Tourist before: https://github.com/IGreatlyDislikeJavascript/bootstrap-tourist same. I switched to Driver.js, and so far it seems to be working well.
2025-04-01T04:35:33.091729
2018-11-07T05:52:28
378151847
{ "authors": [ "dereuromark", "iijiang", "sven7777" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10932", "repo": "soullivaneuh/composer-versions-check", "url": "https://github.com/soullivaneuh/composer-versions-check/issues/57" }
gharchive/issue
Failed to remove the package. composer global remove sllh/composer-versions-check

```
Changed current directory to /Users/yijiang/.composer
Loading composer repositories with package information
Updating dependencies (including require-dev)
Package operations: 0 installs, 0 updates, 1 removal
Removing sllh/composer-versions-check (v2.0.3)
Writing lock file
Generating autoload files
Changelogs summary:
sllh/composer-versions-check removed (installed version was v2.0.3)

Fatal error: Uncaught Error: Class 'SLLH\ComposerVersionsCheck\OutdatedPackage' not found in /Users/yijiang/.composer/vendor/sllh/composer-versions-check/src/VersionsCheck.php:62
Stack trace:
#0 phar:///usr/local/bin/composer/src/Composer/Plugin/PluginManager.php(195) : eval()'d code(128): SLLH\ComposerVersionsCheck\VersionsCheck->checkPackages(Object(Composer\Repository\ComposerRepository), Object(Composer\Repository\InstalledFilesystemRepository), Object(Composer\Package\RootPackage))
#1 phar:///usr/local/bin/composer/src/Composer/Plugin/PluginManager.php(195) : eval()'d code(90): SLLH\ComposerVersionsCheck\VersionsCheckPlugin_composer_tmp1->checkVersions(Object(Composer\Repository\RepositoryManager), Object(Composer\Package\RootPackage))
#2 [internal function]: SLLH\ComposerVersionsCheck\VersionsCheckPlugin_composer_tmp1->postUpdate(Object(Composer\Script\Event))
#3 phar:///usr/local/bin/composer/src/Composer/EventDispatcher/EventDispatcher.php(176): call_user_func(Array, Object(Composer\Script\Event))
#4 phar:///usr/l in /Users/yijiang/.composer/vendor/sllh/composer-versions-check/src/VersionsCheck.php on line 62
Yis-MacBook-Pro-2:docs yijiang$ composer update
```

I believe if you add this to the beginning of the activate method, the issue should be resolved:

```php
// guard for self-update problem
if (__CLASS__ !== 'SLLH\ComposerVersionsCheck\VersionsCheckPlugin') {
    return $this->disable();
}
```

I have a PR in to fix this but it doesn't look like there has been much activity on this repo sadly.

This project has clearly been dead for years. I created a new fork ( https://github.com/dereuromark/composer-versions-check ). Let's contribute there with PRs.
2025-04-01T04:35:33.099155
2017-02-20T15:48:44
208916908
{ "authors": [ "borisfom", "harjatinsingh", "iassael", "ngimel", "soumith" ], "license": "bsd-2-clause", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10933", "repo": "soumith/cudnn.torch", "url": "https://github.com/soumith/cudnn.torch/issues/333" }
gharchive/issue
Potential CuDNN 5.1 issue Trying to access any function on :reset() or :weights() causes an error. If I have some time in the next weeks I'll try to see the differences in the CuDNN 5.1 API.

```
THCudaCheck FAIL file=/opt/torch/extra/cutorch/lib/THC/generic/THCTensorMath.cu line=35 error=8 : invalid device function
/opt/torch/install/bin/luajit: /opt/torch/install/share/lua/5.1/cudnn/RNN.lua:67: cuda runtime error (8) : invalid device function at /opt/torch/extra/cutorch/lib/THC/generic/THCTensorMath.cu:35
stack traceback:
[C]: in function 'zero'
/opt/torch/install/share/lua/5.1/cudnn/RNN.lua:67: in function 'reset'
/opt/torch/install/share/lua/5.1/cudnn/RNN.lua:41: in function '__init'
/opt/torch/install/share/lua/5.1/cudnn/BGRU.lua:4: in function '__init'
/opt/torch/install/share/lua/5.1/torch/init.lua:91: in function </opt/torch/install/share/lua/5.1/torch/init.lua:87>
[C]: in function 'BGRU'
```

invalid device function usually points to an unsupported GPU. What GPU are you using? This was speed of light (as always). Hmm, we are using a P100; this error only started happening after moving to CuDNN 5.1, where I also get nil values returned from cudnnGetFilterNdDescriptor calls. Did you encounter any issues with 5.1? I don't have a P100 :) I personally haven't used the RNN stuff much, but I know Natalia has been using cudnn.torch's RNN API on P100 for some language modeling stuff, and it works. Do tests pass for you? i.e. https://github.com/soumith/cudnn.torch/blob/master/test/test_rnn.lua ? Everything gets an error; it used to work like a charm until CuDNN 5.0. Which version is she using? I'll try to do some debugging over the weekend. she uses 5.1 too afaik. Since you have a P100, you could also try out the R6 RC via the R6 branch. I have no idea what state it is in. 5.1 worked for me the last time I checked (a couple weeks back). There were no changes in 5.1 (compared to 5) in rnn related apis (other than a fix to a nondeterministic dropout behavior). @soumith I switched to the R6 RC but I get the same behavior. I also tried it with a Maxwell arch, where I get errors when I try to access :weights(). I'll have a closer look over the weekend. @iassael, On master, commits #323 and #324 break things for me (apparently, tonumber does not work on Ubuntu 16.04 and luajit the way it does for the commit author); with those commits reverted, :weights gives expected outputs. R6 though does not have those commits and works for me as is. And btw I'm on a P100 and I don't have invalid device errors - maybe something is wrong with your cutorch install, or your cudnn 5.1 is not for cuda 8.0 (there are different versions for cuda 7.5 and cuda 8.0)? @ngimel, @soumith: commits #323 and #324 break things on Ubuntu 14 as well. Has nothing to do with cudnn 5.1. Looks like it has to do with Lua 5.2 vs. Lua 5.1 though - @soumith, how should we fix that? fixed in master I am having the same issue. I just switched from a GTX 970 to a GTX 1080 Ti. What is the fix?
2025-04-01T04:35:33.125582
2018-07-05T05:15:14
338429490
{ "authors": [ "alcen", "remo5000" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10934", "repo": "source-academy/cadet-frontend", "url": "https://github.com/source-academy/cadet-frontend/issues/160" }
gharchive/issue
Don't use location as a variable name It clashes with window.location, which is accessible from anywhere as a global. Resolved!
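A quick illustration of the clash (hypothetical code, not from the repo):

```ts
// `location` is already a browser global (window.location), so reusing
// the name invites subtle bugs.
function report() {
    const location = 'act1/scene2'; // shadows window.location in this scope
    console.log(location);          // fine here...
}
// ...but anywhere the local declaration is forgotten, `location` silently
// resolves to window.location, and assigning to it navigates the page.
```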
2025-04-01T04:35:33.135765
2021-11-07T09:17:31
1046698115
{ "authors": [ "angelsl", "chownces", "coveralls" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10935", "repo": "source-academy/frontend", "url": "https://github.com/source-academy/frontend/pull/2010" }
gharchive/pull-request
Fix autocomplete in AssessmentWorkspace

Description

Previously, the AssessmentWorkspace and GradingWorkspace autocomplete were broken as the Variant received from the backend was null instead of default. As a result, chapterName became something like 3_null and the builtinSuggestions became an empty array as seen below:

```ts
// line 135 WorkspaceSaga.ts
let chapterName = context.chapter.toString();
if (context.variant !== 'default') {
    chapterName += '_' + context.variant;
}
const builtinSuggestions = Documentation.builtins[chapterName] || [];
```

Type of change

[x] Bug fix (non-breaking change which fixes an issue)

How to test

Test that the default chapter builtins show up in the autocomplete in AssessmentWorkspace and GradingWorkspace

Checklist

[x] I have tested this code

Pull Request Test Coverage Report for Build<PHONE_NUMBER>

0 of 1 (0.0%) changed or added relevant lines in 1 file are covered.
1 unchanged line in 1 file lost coverage.
Overall coverage decreased (-0.006%) to 30.949%

Changes Missing Coverage: src/commons/sagas/RequestsSaga.ts — 0 of 1 changed/added lines covered (0.0%)
Files with Coverage Reduction: src/commons/sagas/RequestsSaga.ts — 1 new missed line (14.75%)

Totals
Change from base Build<PHONE_NUMBER>: -0.006%
Covered Lines: 3470
Relevant Lines: 10454

💛 - Coveralls

Oops, didn't see this.. I'll use my fix instead, since it also deals with null in execTimeMs
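A sketch of the fix direction (illustrative only — the actual patch may differ; `context` and `Documentation` are the saga-local names from the snippet above):

```ts
// Hypothetical guard: coerce a null/undefined variant from the backend
// to 'default' so the documentation lookup key stays well-formed.
const variant = context.variant ?? 'default';
let chapterName = context.chapter.toString();
if (variant !== 'default') {
    chapterName += '_' + variant;
}
const builtinSuggestions = Documentation.builtins[chapterName] || [];
```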
2025-04-01T04:35:33.144236
2024-03-13T07:33:47
2183306705
{ "authors": [ "arpit1503khanna", "yeshamavani" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10936", "repo": "sourcefuse/loopback4-ratelimiter", "url": "https://github.com/sourcefuse/loopback4-ratelimiter/pull/161" }
gharchive/pull-request
feat(deps): update lb4 dependencies BREAKING CHANGE: Loopback package version update Description Update the lb4 dependencies as lb4 dropped support for node 16 or lower Fixes #159 Type of change Please delete options that are not relevant. [ ] Bug fix (non-breaking change which fixes an issue) [ ] New feature (non-breaking change which adds functionality) [x] Breaking change (fix or feature that would cause existing functionality to not work as expected) [ ] Intermediate change (work in progress) How Has This Been Tested? Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce. Please also list any relevant details for your test configuration [ ] Test A [ ] Test B Checklist: [ ] Performed a self-review of my own code [ ] npm test passes on your machine [ ] New tests added or existing tests modified to cover all changes [ ] Code conforms with the style guide [ ] API Documentation in code was updated [ ] Any dependent changes have been merged and published in downstream modules :tada: This PR is included in version 7.0.0 :tada: The release is available on: npm package (@latest dist-tag) GitHub release Your semantic-release bot :package::rocket:
2025-04-01T04:35:33.146264
2024-06-27T12:29:13
2378026311
{ "authors": [ "Kvnsm", "love-linger" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10937", "repo": "sourcegit-scm/sourcegit", "url": "https://github.com/sourcegit-scm/sourcegit/issues/214" }
gharchive/issue
[Feature] Switch button on unstaged / staged block Would it be possible to have a switch button to display the "file name / full file path" for the "staged / unstaged" block? Thanks a lot You can change the display mode by this button: Oooh really! Appreciate it! Sorry for the disturbance ^^
2025-04-01T04:35:33.148980
2024-03-19T23:06:13
2196207034
{ "authors": [ "Xees", "vadi2" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10938", "repo": "sourcegraph/cody", "url": "https://github.com/sourcegraph/cody/issues/3472" }
gharchive/issue
bug: Output limit and stoppage during the rewrite of a 200 line class. Version 1.8.3 Describe the bug Whenever I try to refactor a class of 200 lines or so, Cody sadly stops writing at about line 100, which makes it hard to use for anything other than small modifications to functions or within large context windows. And if asked to continue, it messes up the chat window frame and the context of its current work, like so: Expected behavior It should continue the output for the full class, or give me a button to continue generating like ChatGPT. Additional context I'm using Claude 3 Opus, which should have a massive context window and output window. While it's not ideal, you can type continue and it will continue generating output as a workaround.
2025-04-01T04:35:33.155568
2024-06-27T18:08:10
2378787686
{ "authors": [ "SamEdwardes", "jdorfman" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10939", "repo": "sourcegraph/cody", "url": "https://github.com/sourcegraph/cody/issues/4708" }
gharchive/issue
feedback: support for Code - OSS Version 1.24.0 Areas for Improvement [ ] UI/UX [X] Onboarding [ ] Docs [ ] Chat [ ] Commands [ ] Context [ ] Response Quality [X] Other What needs to be improved? Please describe how this affects the user experience and include a screenshot. I am using a fork of VS Code (Code - OSS Version: 1.90.0). I can install the Cody extension from Open VSX (https://open-vsx.org/extension/sourcegraph/cody-ai). When I try to login with GitHub, after logging in the browser tries to direct me to the "official" VS Code installed on my computer. Is there a way I can get this working with my fork? Describe the solution you'd like to see Support for forks of VS Code. Describe any alternatives that could be considered No response Additional context No response Remove stale Hey @SamEdwardes try logging in with a token: Generate the token/key: https://sourcegraph.com/user/settings/tokens URL to use: https://sourcegraph.com LMK if that works or not. Thank you I will give this a try. Sounds good. Feel free to close if it works :) Thank you @jdorfman. I tried today with an access token and it worked!
2025-04-01T04:35:33.163608
2018-08-23T14:20:08
353403811
{ "authors": [ "EmpireJones", "doxxx", "ehames", "harikb", "joshua", "keegancsmith", "slimsag" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10940", "repo": "sourcegraph/go-langserver", "url": "https://github.com/sourcegraph/go-langserver/issues/310" }
gharchive/issue
High memory usage

I'm using Atom and its go-langserver integration. Memory usage is close to 18Gb now as I type this. I cannot capture the memory/CPU profiles because the port is not open.

```
$ go tool pprof -svg $GOPATH/bin/go-langserver http://localhost:6060/debug/pprof/profile > cpu.svg
Fetching profile over HTTP from http://localhost:6060/debug/pprof/profile
http://localhost:6060/debug/pprof/profile: Get http://localhost:6060/debug/pprof/profile: dial tcp [::1]:6060: connect: connection refused
failed to fetch any source profiles
```

BTW, this was triggered when I changed a function name, and many compilation errors were triggered. After 4 or 5 minutes, memory went down to 1Gb, but my computer was quite slow in the meantime.

I'm experiencing the same issue but also with very high CPU usage as well. This issue is a lot more noticeable with diagnostics enabled.

aah, yes this is very likely related to having diagnostics enabled. When you enable diagnostics it enables the typechecker, which can be a huge memory hog. Unfortunately there is no quick fix here, other than disabling diagnostics.

yeah this is a pretty bad problem for us, since the typechecking is so useful :)

I think the future is bright (once we have time to implement it), since the caching stuff in go has a lot more useful information for us, which means we can probably rely on the on-disk caching go has now.

This seems to be closely related to #209. Both issues are due to the typechecker.

I have to periodically kill the language server and live with missing features with vscode. Is there anything I can help with? What logs/traces can I extract the next time this happens?

Having the same issue here. It sometimes takes up all available memory (e.g. 30GB), resulting in OS freezing. Just a guess, but this feels more like a bug than an issue of something being inefficient. Any details I can provide?

You can set "go.languageServerFlags": ["-pprof", ":6060"] in your VS Code settings and then follow the steps in the README to capture a heap profile and upload the SVG. That would tell us where the memory is allocated. I agree this looks more like a regression.

If the memory usage is coming from typechecking and not a regression in e.g. leaking memory, then we likely cannot do anything yet. The long term fix for this will be in the official Go language server which the Go developers are working on actively (it is a difficult problem to solve).

I've been using the language server since yesterday and it was relatively well behaved, using up to only a few hundred MB. This morning, I started making some edits and the language server started consuming 80-100% CPU and the memory spiked up to 5GB. I managed to capture a heap snapshot: heap.zip

This is probably a better CPU profile than the previous one: cpu.zip

This time the heap grew to 10GB: heap.zip

@doxxx Your first heap profile shows the langserver using 1.3GB, and Go having allocated 2.4GB (1.1GB unused but not yet released back to the OS). Your second heap profile shows the langserver using 4GB, and Go having allocated 8GB (again, 4GB unused but not yet released back to the OS). Both traces show the memory was allocated in the golang.org/x/tools/go/loader package, which is the entrypoint for type checking. This is unfortunate and a known issue, but expected currently. It'll improve in the future when the official Go language server is released. If you notice the memory usage does not drop down after a minute or two, that would indicate a leak and a bug we could fix, however.
2025-04-01T04:35:33.214416
2016-10-27T22:37:37
185797569
{ "authors": [ "Tyriar", "julianlam", "parisk", "piggyslasher", "wilrnh" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10941", "repo": "sourcelair/xterm.js", "url": "https://github.com/sourcelair/xterm.js/issues/329" }
gharchive/issue
Tmux ignores mouse scroll events/shits itself on selection events Details Browser and browser version: Chrome 54 OS version: OSX 10.11.6 xterm.js version: demo @ https://github.com/sourcelair/xterm.js/commit/996c641a790a0b27c61c62e02b806cadd4bb3f7a Steps to reproduce Start demo tmux attach Enable mouse mode in tmux set -g mouse on Scroll or select text in terminal window Microsoft/vscode#14627 Likely related to https://github.com/sourcelair/xterm.js/issues/426 Thanks for reporting this @wilrnh. Scrolling seems to be working fine for me. I ran into a few issues though, which seem to be related to #434 (rendering issues). Does anything happen at all when scrolling after running set -g mouse on in tmux? @parisk according to OP's repro steps, he is using set -g mouse on Working for me with set -g mouse on. Vim scrolling is also working flawlessly AFAICS. Can you paste your .tmux.conf file? Fixed in https://github.com/sourcelair/xterm.js/pull/670
2025-04-01T04:35:33.224928
2018-03-12T10:14:32
304310449
{ "authors": [ "LaHaine", "rouson" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10942", "repo": "sourceryinstitute/OpenCoarrays", "url": "https://github.com/sourceryinstitute/OpenCoarrays/issues/517" }
gharchive/issue
Defect: mvapich2 build fails to run tests Please make sure to put any logs, terminal output, or code in fenced code blocks. Please also read the contributing guidelines before submitting a new issue. Defect/Bug Report The title of the issue should start with Defect: followed by a succinct title. OpenCoarrays Version: 2.0.0 Fortran Compiler: GNU 7.2.0 C compiler used for building lib: GNU 7.2.0 Installation method: install.sh Output of uname -a: Linux obst25.zeuthen.desy.de 3.10.0-693.17.1.el7.x86_64 #1 SMP Thu Jan 25 04:11:40 CST 2018 x86_64 x86_64 x86_64 GNU/Linux MPI library being used: mvapich2 2.2 Machine architecture and number of physical cores: x86_64, 16 cores Version of CMake: cmake 3.9.2 from OpenHPC Observed Behavior The build finishes, but there are linking problems when trying to build the tests. Expected Behavior Tests should build and execute fine. Steps to Reproduce Install CentOS 7.4, add the OpenHPC repository, and build with the attached spec file with the --define mpi_family\ mvapich2 switch. Attached is the build log: build.log.txt And the rpm SPEC file: opencoarrays.spec.txt The output of caf -w is caf wraps /opt/ohpc/pub/compiler/gcc/7.2.0/bin/gfortran and caf -s is: /opt/ohpc/pub/compiler/gcc/7.2.0/bin/gfortran -I/builddir/build/BUILD/OpenCoarrays-2.0.0/prerequisites/builds/opencoarrays/2.0.0/include/OpenCoarrays-2.0.0_GNU-7.2.0 -fcoarray=lib -Wl,-rpath -Wl,/opt/ohpc/pub/mpi/mvapich2-gnu7/2.2/lib -Wl,--enable-new-dtags ${@} /builddir/build/BUILD/OpenCoarrays-2.0.0/prerequisites/builds/opencoarrays/2.0.0/lib64/libcaf_mpi.a /opt/ohpc/pub/mpi/mvapich2-gnu7/2.2/lib/libmpifort.so /opt/ohpc/pub/mpi/mvapich2-gnu7/2.2/lib/libmpi.so OK, I have researched this a bit: mvapich2 2.2 seems to be too old and doesn't have this binding yet. Maybe a configure check should be added for this? @LaHaine Feel free to submit a pull request with the suggested configure check. Given other commitments, I don't think anyone else will do so. For now, I'll close this issue now that you have determined the cause.
2025-04-01T04:35:33.228108
2022-02-03T13:03:16
1123061188
{ "authors": [ "kitchen-porter" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10943", "repo": "sous-chefs/sql_server", "url": "https://github.com/sous-chefs/sql_server/pull/173" }
gharchive/pull-request
Automated PR: Standardising Files This PR will standardise the files we have with out agreed spec in sous-chefs/repo-management. This repo has been identified by topic(s) of chef-cookbook Released as: 7.1.1
2025-04-01T04:35:33.233957
2023-05-14T07:40:51
1708849551
{ "authors": [ "souvikinator", "that-ambuj" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10944", "repo": "souvikinator/notion-to-md", "url": "https://github.com/souvikinator/notion-to-md/issues/83" }
gharchive/issue
Improvement: Use string union for Block Type instead of string Hi, I have noticed that the setCustomTransformer method takes its first argument as a string naming the type of block to apply the transformer function (the second argument) to, and that string can be anything arbitrary. However there can be spelling mistakes by the users of this library, and in order to leverage typescript's type safety and tsserver's intellisense, I suggest we use a string union for the first argument of setCustomTransformer so that users get good autocompletion. The code could look like this: type BlockType = "image" | "video" | "file" | "pdf" | .... | string (I've added the last union member as a string so that there is flexibility to use any arbitrary string at the risk of the user of this library, while also providing intellisense to the users who would otherwise have to guess what string to put here.) and we could change the type definition for setCustomTransformer to: setCustomTransformer(type: BlockType, ...): ... Note that this string union can also be used for the type key in the BlockObjectResponse type, and that'll reduce a lot of redundant code in that type and make it easier to use and understand. That's something I have faced. I was wondering if we can use existing types from the notion SDK for this. Hi, I also looked into the official notion SDK for the types but unfortunately, there aren't any such types in their library. I guess we'll have to do this by hand. Got done with most of it; should be fine for now. Feel free to add if you find some missing. Should be live in the patch release. So it seems like the intellisense is not working, most probably because of the generic type string. Alright, we are good to go ig. So I'll publish a new patch v3.0.1. Hi, thanks for implementing this one, I thought I was going to do the PR stuff but I see you've already done it. Now that I think of it, allowing users to put any random string without any warning is a little risky and I think it's up to you if you want to allow this behaviour, but I'm glad that people will get good intellisense thanks to you! I think we'll be fine and in case of any problem, always ready to fix :mechanic:

> but I'm glad that people will get good intellisense thanks to you!

Thanks to you :)
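The thread notes that a bare `| string` member absorbs the literals and kills intellisense. A minimal sketch of the commonly used workaround (illustrative; the transformer signature here is simplified, not notion-to-md's actual one):

```ts
// `(string & {})` is assignable from any string but is not collapsed
// into the literal members, so editors keep offering the completions
// while arbitrary block types remain legal at the caller's own risk.
type BlockType =
  | "image"
  | "video"
  | "file"
  | "pdf"
  | (string & {});

declare function setCustomTransformer(
  type: BlockType,
  transformer: (block: unknown) => string | Promise<string>
): void;

setCustomTransformer("image", () => "![img](...)");  // autocompleted
setCustomTransformer("my_custom_block", () => "");   // still allowed
```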
2025-04-01T04:35:33.243543
2018-10-18T08:07:54
371408803
{ "authors": [ "lennerd", "opyh" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10946", "repo": "sozialhelden/wheelmap-react-frontend", "url": "https://github.com/sozialhelden/wheelmap-react-frontend/pull/153" }
gharchive/pull-request
Add loading indicator for dynamically loaded map. Could you make the background gray so the design has more plasticity? It could also have a much bigger size. Styling fixed @opyh.
2025-04-01T04:35:33.279039
2024-01-02T13:16:21
2062407938
{ "authors": [ "Emisse", "Errant-4", "Hreno", "K-Dynamic", "Peptide90", "UbaserB", "juliangiebel", "metalgearsloth" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10947", "repo": "space-wizards/space-station-14", "url": "https://github.com/space-wizards/space-station-14/issues/23374" }
gharchive/issue
need station anchors A physical machine or structure that, when powered/activated, makes the grid way more massive and very hard to move. Good excuse for a big space anchor as well, maybe some sort of animation where it lowers down into space or something, whatever. This is needed so shitters in shuttles don't giga move the station around, but it can also be toggled for sabotage or whatever else. I've been requesting this for years but never made a proper issue for it. Could this just be added to the big Gravity Generator? I agree, the station gets annoying to be near as both crew and nukies. Would thematically make sense to tie this to the grav gen, but probably better to map multiple subspace anchors for redundancy. I'm excited to map these when they exist. Wonder if this is as simple as adding a structure with an absurdly large mass number... wonder if it passes it through to the grid or if grid weight is purely based on tiles. Not checking but nothing is ever simple :'(

> Wonder if this is as simple as adding a structure with an absurdly large mass number... wonder if it passes it through to the grid or if grid weight is purely based on tiles. Not checking but nothing is ever simple :'(

Motion damping has already been added to ShuttleComponent in #18697, so I think this would only need a new device that can override that value for a specific grid. Also ideally you'd split out the gravity generator UI as I think that sort of "delayed turn-on / off" functionality is useful for here + IFF. Will also need it for the trade station. okay, after the trade station got merged, we REALLY need this. People are literally driving the outpost to the station and then not even using the cargo shuttle to travel. People have already started to dock them to the station permanently by adding a shuttle console to the outpost and extending the cargo dock.. omg. Suppose we add an anchor to the trade outpost. Won't the same people just turn it off? If this is no longer a convenience feature but a gameplay limitation, it needs to be either added directly to the map prototypes (this can be done right now, I think), be a result of the station grid's "mass", or there needs to be some compelling downside to turning off the anchors. You could set the station anchor on the trade station to require centcomm access 😄 Make sure that the station's anchor also requires Captain access so the QM can't turn it off :D We have anchors now.
2025-04-01T04:35:33.282370
2024-01-18T09:58:32
2087927047
{ "authors": [ "Scribbles0", "UbaserB", "gordod3" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10948", "repo": "space-wizards/space-station-14", "url": "https://github.com/space-wizards/space-station-14/issues/24227" }
gharchive/issue
Carrot Juice has Oculine, but doesn't treat blindness When you metabolize carrot juice you get 0.15 units of Oculine, but that's too little to treat blindness from welding, no matter how much you drink. Either make Oculine work from 0.15 units, make carrot juice metabolize into more Oculine (at least 0.50 units), or remove Oculine from carrot juice's metabolism. turns out carrot has no more oculine anymore, weird

> turns out carrot has no more oculine anymore, weird

when you metabolize carrot juice you get oculine but it's not enough to heal anything. I'm going to make a PR fixing this
2025-04-01T04:35:33.284109
2024-04-20T11:45:12
2254516522
{ "authors": [ "UbaserB", "deltanedas" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10949", "repo": "space-wizards/space-station-14", "url": "https://github.com/space-wizards/space-station-14/issues/27162" }
gharchive/issue
Can't clean puddles under morgues Description Works for everything else though... Screenshots morgues are extremely sus and should be made anchorable structures
2025-04-01T04:35:33.288604
2022-01-18T05:12:42
1106485189
{ "authors": [ "RandomDeathMatch" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10950", "repo": "space-wizards/space-station-14", "url": "https://github.com/space-wizards/space-station-14/issues/6223" }
gharchive/issue
Chemistry Metal Foam + Smoke looping interaction Description Make a cryo beaker containing the reagents for metal foam and smoke. Additionally, you can add other stuff like uncombined ClF3 to make explosions. When you throw the cryobeaker, it releases liquid metal foam. The liquid metal foam then causes people it hits to release smoke. The smoke causes liquid metal foam, and this bounces back and forth until you run out of reagents. Although I am not 100% sure if this is how it works, since it often seems to create less foam than it should. I tested this on localhost and it seemed to work fine (powerful, about as OP as ClF3 reagent smoke, but I thought it was working as intended). However, when it hit a ton of people on the server it seemed to bug out a ton (looped waaaay too much before stopping; lagged the server for a little bit). I am not sure what happened, as I have tried it with a bunch of Urist McHands in a line and I don't think I ever got this result. Screenshots Additional context Got it to work on localhost and have screenshots now. This is from 1 thrown cryo and the Urists arranged how they are pictured; I am pretty sure that is the exact mix I used for this effect.
2025-04-01T04:35:33.293148
2023-04-21T01:11:01
1677613609
{ "authors": [ "Emisse", "EmoGarbage404" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10951", "repo": "space-wizards/space-station-14", "url": "https://github.com/space-wizards/space-station-14/pull/15597" }
gharchive/pull-request
Lipid Extractor About the PR Basically a port of https://github.com/tgstation/tgstation/commit/48370e5a35a19eab427d3e403b653e65fa391ca2 Adds the Lipid Extractor, to be unlocked by science. Simply hop in and let it cook the fat off. The beauty of science. Media [x] I have added screenshots/videos to this PR showcasing its changes ingame, or this PR does not require an ingame showcase Changelog :cl: add: Added the lipid extractor. Simply place a well-fed spaceman inside and let the excess food fall off into meat. Unlock it at science today! worked fine for me when testing locally
2025-04-01T04:35:33.294362
2020-09-09T03:12:30
696360588
{ "authors": [ "DTanxxx" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10952", "repo": "space-wizards/space-station-14", "url": "https://github.com/space-wizards/space-station-14/pull/2051" }
gharchive/pull-request
Make more Fields VV and more editable Added some more VV and VVAccess.ReadWrite. However some may be inappropriate and may need to be changed back. And for the engine: [https://github.com/space-wizards/RobustToolbox/pull/1284]
2025-04-01T04:35:33.301194
2024-08-17T20:26:56
2471660218
{ "authors": [ "Brandon-Huu", "Hreno", "Plykiya" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10953", "repo": "space-wizards/space-station-14", "url": "https://github.com/space-wizards/space-station-14/pull/31132" }
gharchive/pull-request
Add the ability for admin ghosts and cyborgs to buckle About the PR It gives the ability for admin ghosts and cyborgs to buckle entities to chairs etc. Why / Balance Fixes #30898 Fixes #31109 Technical details The Buckle component was missing for these two entities. Media Peek 2024-08-17 22-20.webm Requirements [X] I have read and I am following the Pull Request Guidelines. I understand that not doing so may get my pr closed at maintainer’s discretion [X] I have added screenshots/videos to this PR showcasing its changes ingame, or this PR does not require an ingame showcase Breaking changes Changelog Can you buckle borgs to beds now? Screencast from 08-17-2024 04:54:07 PM.webm Current master branch: Screencast from 08-17-2024 04:56:47 PM.webm Hm, I see. This is going to be more tricky, then. oh, this was the issue? let me know if you want me to push the fix for it I'll close this. @Plykiya Feel free to tackle this in the way Brandon-Huu suggests, as I don't have enough knowledge of the codebase yet.
2025-04-01T04:35:33.306327
2022-10-26T16:12:50
1424318658
{ "authors": [ "GGAlanSmithee", "MatiFalcone", "markrufino" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10954", "repo": "spacebudz/lucid", "url": "https://github.com/spacebudz/lucid/issues/106" }
gharchive/issue
Package does not work with next@latest (v. 13) I've submitted an issue in the next.js repo, because the bug only appears once I updated to their latest build, but I'm reporting it here as well, because looking at this code there might be an actual issue with setting global.Headers in node.js versions >= 16.15.0, if it is readonly, as the error suggests: documentation. I've actually not been able to reproduce this in isolation, so might be nothing. There is a bit of conversation going on in the linked issue. It looks like the issue is a more generic one, and that this project - in that case - is not compatible with node 18. Will check this out when I get time. I am having the same issue. @MatiFalcone check the linked issue for updates @alessandrokonrad Maybe the fix is to conditionally polyfill only if not defined? I can send a PR for it Looks like this is what next.js does https://github.com/vercel/next.js/blob/227c35c0e281b38ff4f36b590624ffb802f5a8c7/packages/next/server/node-polyfill-fetch.js#L3 Sent a PR. No idea if this is correct or not though. @alessandrokonrad @MatiFalcone I have updated my example with the latest version of lucid (0.7.0) and next.js 13 and it works fine! The example is just sending a transaction, so I cannot account for more complex use cases, but at least that one does work. @GGAlanSmithee stumbled upon this issue as well and reverted to 0.7.0 and it works. Thanks.
2025-04-01T04:35:33.313217
2020-04-22T12:59:28
604737778
{ "authors": [ "y0sher" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10955", "repo": "spacemeshos/go-spacemesh", "url": "https://github.com/spacemeshos/go-spacemesh/pull/2001" }
gharchive/pull-request
protoc is now saved locally and used from there if possible. Motivation easy building. Closes #1678 Closes #2000 Changes protoc is installed to the devtools folder and used from there if possible. Test Plan not testing scripts bors merge LGTM, tested on ubuntu and it resolves the issue I reported in #1678 thanks for that :)
2025-04-01T04:35:33.315983
2024-09-23T07:34:29
2541810907
{ "authors": [ "fasmat" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10956", "repo": "spacemeshos/go-spacemesh", "url": "https://github.com/spacemeshos/go-spacemesh/pull/6343" }
gharchive/pull-request
chore: Remove unused environment variable in CI step Motivation codecov/codecov-action@v4 doesn't need the defined environment variable, it gains access to the token via the token argument passed to it. Description Just removed the unneeded environment variable. Test Plan Existing tests pass TODO [x] Explain motivation or link existing issue(s) [x] Test changes and document test plan [x] Update documentation as needed [x] Update changelog as needed bors merge
2025-04-01T04:35:33.394165
2018-07-27T20:12:07
345352406
{ "authors": [ "pllim", "stscicrawford" ], "license": "CC-BY-4.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10957", "repo": "spacetelescope/style-guides", "url": "https://github.com/spacetelescope/style-guides/issues/5" }
gharchive/issue
python style guide Create a style guide outlining best practices for writing python code and packages This issue is too broad. There are now several pages on different aspects of Python coding and packaging. Any remaining concerns should be separate issue(s).
2025-04-01T04:35:33.455839
2021-09-03T05:47:13
987420561
{ "authors": [ "cartographer16", "janardanroy", "spareeth" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10958", "repo": "spareeth/PySEBAL_dev", "url": "https://github.com/spareeth/PySEBAL_dev/issues/1" }
gharchive/issue
PySEBAL run error Hi, I have installed pySEBAL following this link: https://pysebal-doc.readthedocs.io/en/install/installation.html#installation-in-linux Checked the installation with import pysebal_py3 and there is no error. But when giving the test run as python Run_py3.py , I am getting the following error. How to solve this error? Hi I see that you are using Linux, Did you change the Run_py3.py to link to the excel file for linux? It is in the test data folder. Hi @spareeth, PySEBAL ran the test_data without any problems. But when I run PySEBAL with my own data, I get the same error as @janardanroy in the "log" file. I made the necessary changes. My file path in Run_py3.py does not seem to be a problem. As you can see, it produces a few outputs, but does not run after the "Meteo Part1" section. (I got my Meteo_Input data from an Eddy Covariance Flux Tower) What should I do for this error? I would be glad if you help. Have a nice day. My error was related to the path only. It is running now.
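Since both reports above came down to misconfigured file paths, a pre-flight path check is a cheap safeguard. The following is only an illustrative sketch; the variable names and paths are hypothetical and are not part of PySEBAL's actual Run_py3.py.

import os
import sys

# Hypothetical paths -- substitute whatever is configured in Run_py3.py
excel_file = "/home/user/pysebal/input_data.xlsx"
meteo_dir = "/home/user/pysebal/Meteo_Input"

# Fail fast with a readable message instead of an error deep inside PySEBAL
for path in (excel_file, meteo_dir):
    if not os.path.exists(path):
        sys.exit("Configured path does not exist: " + path)
print("All configured paths resolved; safe to run PySEBAL.")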
2025-04-01T04:35:33.559096
2015-01-13T17:35:23
54222798
{ "authors": [ "ab", "b1nary", "flavorjones", "fulldecent", "hone", "idyll", "jirutka", "jrochkind", "knu", "rklemme", "tenderlove", "veillard", "yob" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10959", "repo": "sparklemotion/nokogiri", "url": "https://github.com/sparklemotion/nokogiri/issues/1220" }
gharchive/issue
RFC: Nokogiri should use system libraries by default if they are of acceptable vintage Proposed behavior change As discussed in #936, I'd like Nokogiri to only use the bundled versions of libxml2/libxslt if the system versions: either don't exist, or they exist, but are of unsupported or known-worse vintage than the bundled libraries. unsupported vintage would be a new minor release bump, which historically has indicated that libxml2 contains non-backwards-compatible functionality changes. For example, 2.10.0 would be an unsupported version to the Nokogiri that bundles version 2.9.2. known-worse vintage would be anything older than the bundled libraries, which are likely to contain bugs or vulnerabilities. For example, 2.9.1 would be a known-worse version to the Nokogiri that bundles 2.9.2. What's an "acceptable vintage"? Said another way, I'd like Nokogiri to use the system library if its version is greater than the installed version while not being a minor version bump. Specifically, the logic we should use for Nokogiri 1.6.5, which bundles libxml 2.9.2, would be: system_library_version > 2.9.2 && system_library_version < 2.10.0 Show me the behavior, please For example:
Feature: Installation of Nokogiri 1.6.5, which bundles libxml 2.9.2, should defer to newer versions of libxml2 if they are likely to contain security patches and are also likely to be functional.
  In order to make my business more secure
  As a system administrator or a developer
  I want Nokogiri to use the most-secure version of libxml2 available that won't also break my application

  Scenario: There are no system libraries installed.
    Given libxml2 does not exist on the target system
    When we `gem install nokogiri`
    Then Nokogiri should compile and use its bundled version of libxml 2.9.2

  Scenario: There is a deprecated version of libxml2 installed
    Given libxml 2.8.0 is installed on the target system
    When we `gem install nokogiri`
    Then Nokogiri should compile and use its bundled version of libxml 2.9.2

  Scenario: There is an older version of libxml2 installed
    Given libxml 2.9.0 is installed on the target system
    When we `gem install nokogiri`
    Then Nokogiri should compile and use its bundled version of libxml 2.9.2

  Scenario: There is the same version of libxml2 installed, which doesn't contain our backported security fixes (if any)
    Given stock libxml 2.9.2 is installed on the target system
    When we `gem install nokogiri`
    Then Nokogiri should compile and use its bundled version of libxml 2.9.2

  Scenario: There is a newer version of libxml2 installed
    Given libxml 2.9.3 is installed on the target system
    When we `gem install nokogiri`
    Then Nokogiri should use the system-installed version of libxml 2.9.3

  Scenario: There is an untested version of libxml2 installed
    Given libxml 2.10.0 is installed on the target system
    When we `gem install nokogiri`
    Then Nokogiri should compile and use its bundled version of libxml 2.9.2
What about command-line options to the gem install process? We should still support the --use-system-libraries option, which will always attempt to use the system libraries, even if they're of known-buggy or untested vintage. We should obviously then emit some warning message. We should also introduce a --use-bundled-libraries option, which would support the current behavior, which is to always attempt to use the bundled libraries. This information should be available in the output of nokogiri -v. The default behavior would be as described above. Will I need to re-install Nokogiri if I upgrade system libraries?
I think it needs to be a requirement that, given Nokogiri decides to use bundled libraries at install time, if acceptable-vintage system libraries are upgraded (or installed), then Nokogiri should decide at runtime to use those system libraries. Unfortunately, I don't know how easy that will be to do. Currently, Nokogiri statically links libxml2 and libxslt into the C extension, for some valid reasons. We'll need to switch back to loading shared libraries and tackle those other reasons. I'll try to dig up a link to the email thread that discusses static-versus-shared linking. This is very encouraging -- I think it would improve things greatly for nokogiri users in several ways, take care of a couple different pain points, if it can be pulled off technically. Not just security, but the pain of 4+ minute nokogiri installs, which has become a significant nokogiri pain for me and others. I can't speak to the feasibility of pulling it off technically, but thanks for looking into it. It does seem like it could be challenging to determine exactly when the installed version is 'good enough', I thought last time I looked the nokogiri build actually had some custom patches and such. If it does work, nokogiri on install could even warn the user when using its own build, that they can avoid this by upgrading system libxml to X, and consider a proper system libxml to be the preferred solution where possible. I am not sure that it needs to be an absolute requirement that a previously installed nokogiri using bundled libraries will automatically use appropriate system libraries once they become available. I think a reinstall of nokogiri is acceptable in that case. On the other hand, it probably does need to be a requirement that for a nokogiri using a system install of libxml 2.9.2, when the system upgrades to 2.9.3 the installed nokogiri needs to begin using it (in case it was upgraded for security reasons for instance). And that might amount to the same problem. Jonathan Will I need to re-install Nokogiri if I upgrade system libraries? ... Yes... This really feels like it's going to be dangerous. I can imagine situations where something triggers an upgrade of libxml and you take down apps because nokogiri isn't expecting it. Imagine a situation where you are working on a dozen little apps and then Apple does something that bumps system libxml to a version Nokogiri doesn't support. I would then need to gem update all of them. Even if they are using different versions of libxml. This behaviour will be hard to track down. Especially for someone just learning Ruby. Which is where one needs the current system of bundled libxml the most. The burden of complicated installation should be taken on by the people who understand it. If a person wants to use system XML then they should be responsible for configuring that. @jrochkind - Thanks for your comments. Agree that "good enough" may be hard to define for edge cases, but I think that the patches we backport are generally (at least by most *nix vendors or maintainers) backported anyway, and so the heuristic of "same minor release but newer" will likely work in most or all cases. In fact, we may want to defer to the system library in case of a tie, presuming that the distro vendor/maintainer is doing a more-thorough job than Team Nokogiri (which, frankly, they ought to be). With respect to the "Should Nokogiri require a re-install", your comments and @idyll's appear to be in agreement.
However, I believe that most system admins and people-responsible-for-security would disagree and want an automatic upgrade. In particular, libxml2 shouldn't require a recompilation to load against a newer version of a shared object library (especially if it's on the same minor-release branch). If it wasn't safe to upgrade dynamic libraries, your machine would stop working every time you updated libxml2, as this gist demonstrates: https://gist.github.com/flavorjones/cc1b82365d1d29f6fb3f Keep the comments coming! I use linux in development and production so tend to have system copies of libxml2 available and prefer to use them if possible. If nothing else, it makes bundling and installing nokogiri so much quicker. This proposal sounds reasonable to me. In particular, libxml2 shouldn't require a recompilation to load against a newer version of a shared object library Yes, I think you are quite right there. On heroku, when this got changed we started using the NOKOGIRI_USE_SYSTEM_LIBRARIES env var to force using the system libraries. We currently rely on the distro (ubuntu) in this case to provide libxml2 and libxml2-dev packages. I'd like to avoid forcing every app to compile libxml2 on the platform when it's readily available. So, I have some questions with this change: Will NOKOGIRI_USE_SYSTEM_LIBRARIES remain around as an env var we can hook into? If not, do we need to use the flag --use-system-libraries? I assume for using this with bundler we'd need to use RUBYGEMS_OPT or something similar. Do other gems honour this? If so, it seems like we might want to just restrict this to nokogiri. If the user ends up with the system libxml2, they will get a warning message and there's nothing they can really do about it. This would be annoying per build. Is there a way we can phrase it differently so it's clear this is expected? Thanks for raising this issue and all your hard work on nokogiri. <3 <3 <3 Hey @hone! Thanks for jumping in! As someone who works with the Cloud Foundry buildpacks team, I FEEL YOUR PAIN. Yes, the env var will continue to be the implementation that's used by the commandline options. If it ever changes, I'm extremely motivated to make sure that Heroku and Cloud Foundry know about it well ahead of time. As far as the wording of the warning message, we'll of course omit the ominous language if they're on an acceptable vintage. But I think we'll continue to want to make it obvious whether people are using system- or bundled-libraries, both at runtime and at installtime. I'm open to suggestions! @flavorjones thanks for the response. Yeah, message clarity makes sense. What kind of message copy are you thinking of? I'm worried about people filing support tickets based off of the warnings and the fact that they can't really do much about it and it's expected behaviour on our part. Libxml2/libxslt maintainer here :-) The problem of bundling is that then there is no way to get fixes deployed on security issues. Most of the significant and hence risky changes done in the last 5 years or so for libxml2 were done to fix security issues, about entities processing. It did cause regressions, sometimes hard to work around, but the balance is apps vulnerability. Do you prefer an app which may fail in some corner case or an app which stays vulnerable to attacks? Unless the system libraries are really behind, it makes no sense at this point to not use the system libraries unless your framework provides for carrying updates automatically, which I assume from context is not the case.
There are no big changes planned for either lib, it's really maintenance and security at this point. So definitely supportive of the change, Daniel @veillard, Thanks so much for your comments, I really appreciate that you took the time to weigh in! For historical context, the contributing factors that led us to bundle libxml2 were: known-bad, really old versions of libxml2 in the wild (who could forget 2.6.16?) new versions that we didn't support yet at the time (2.7 and 2.8 took time before Nokogiri supported them, and 2.9 had xpath curiosities before 2.9.2) The second case should be reasonably rare, and so let's not optimize for it (though we reacted to it at the time). To that end, it makes sense for Nokogiri to bundle a modern version of libxml2 for people running on older systems (I'm looking at you, RHEL6), while also maintaining the ability to roll forward to newer versions easily. I'm so glad you're supportive of the proposal! Cheers, -mike This proposal makes much sense to me! I suggest considering a small change in the version test example: system_library_version >= 2.9.2 && system_library_version < 2.10.0 In other words: if the system library version is the same as the bundled version, use the system version. This would have the advantage of not loading two different instances of the same version of the shared lib in case other software is using the system library. There may be a risk that the system version is different despite the same version number. But if we go down that route we can never use a system version. So this is the magic behind why I never ran into any problems with nokogiri, and it just gets better. This sounds good, looking forward to worrying even less. I completely agree with @veillard about security concerns. This is also the reason why package maintainers in Gentoo generally try to avoid bundled dependencies. I’m happy for this proposal! :smile_cat: @rklemme -- Agree. As our thinking has evolved on this topic, I agree with you (as I stated above at https://github.com/sparklemotion/nokogiri/issues/1220#issuecomment-69809954). @jirutka -- Agree with you both. Thanks for the feedback! For everyone following: this will not be in 1.6.6 (the next point release), but I'm hoping for it to be part of the 1.7.0 release soon thereafter. :+1: Hello all. Since the posting of this issue, GitHub has introduced reactions. If you are in favor of or opposed to the solution and course of action from OP, you are welcome to click the button. A little encouragement here will go a long way. I see two open problems in this discussion: 1. What happens if we depend on system libxml and Apple updates it? Programs that depend on nokogiri (and which did not use --use-system-libraries) may break. Also, every other program on the computer that depends on libxml may break. Lots of stuff will break. Then users will use Google, find this repo and complain on GitHub Issues. Then we will update the error message to be more helpful and those users will update their apps. Last year Apple introduced "System Integrity Protection". I recommend that we do not hypothesize about the ways that things might get broken in the future, since for sure they will get broken somehow. PROPOSED BEHAVIOR: After install, naively use system libxml. If something doesn't work then crash with an error message. 2. We don't have a pull request There is clearly value to this proposal.
Just reducing the carbon footprint of Travis CI running thousands of builds per second from developers that forgot to use NOKOGIRI_USE_SYSTEM_LIBRARIES will be enough to add a few seconds to your life and the lives of everyone you know. It is a lot easier to accept a PR than an issue. PROPOSED BEHAVIOR: After install, naively use system libxml. If something doesn't work then crash with an error message. The upside to this is people can upgrade libxml2 without upgrading Nokogiri. The problem is that if something doesn't work in the new version of libxml2, it crashes with a SEGV. We can't know what future versions will do, so we can't give the user any more useful info than a SEGV. In fact, this issue was so common we introduced the "Nokogiri was built against LibXML version XXX but loaded XXX" warning that people have come to love so much. We've also had an opposite problem, where Apple didn't upgrade libxml2 for ages and we got stuck supporting that version because we wanted to work with system libraries. I think Mike's proposal makes sense, and I think we should work with system libraries if they fit the version criteria he's specified. IMO, we should use system libraries and let the user upgrade system libraries, but we shouldn't go out of our way (at runtime) to deal with the situation where upgrading the system libraries breaks something. If the new system library breaks something, they should report the issue and we can deal with it. Whether dealing with it means fixing our code, or declaring that version to be unsupported. Hopefully that makes sense? I'd like to propose closing this with no action being taken. Given: Nokogiri v1.11 ships precompiled binaries for most platforms, non-native "ruby" platform gem downloads constitute ~3.5% of Nokogiri 1.11 downloads ... this work feels less urgent and less impactful, and so I'd like to propose closing this with no further action. I'll close in a few days, to give folks an opportunity to argue that this work would be more impactful than some of the other work queued up for v1.12 and subsequent releases, including HTML5 support, performance work, and better handling of exceptions during operations involving callbacks.
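To make the "acceptable vintage" rule above concrete, here is a minimal sketch of the decision in Python. The real check would live in Nokogiri's Ruby build scripts, so treat this as illustrative pseudocode; it also adopts the >= tie-breaking tweak suggested in the thread.

# Sketch of the proposed rule: defer to the system libxml2 when it is at
# least the bundled version and still within the same minor-release series.
BUNDLED = (2, 9, 2)  # libxml2 version bundled with Nokogiri 1.6.5

def use_system_libxml2(system_version):
    """system_version is a (major, minor, patch) tuple, or None if absent."""
    if system_version is None:
        return False  # no system library at all: compile the bundled copy
    next_minor = (BUNDLED[0], BUNDLED[1] + 1, 0)
    # Ties defer to the distro copy (>=); a minor bump is untested (<).
    return BUNDLED <= system_version < next_minor

assert use_system_libxml2((2, 9, 3)) is True    # newer same-series: system
assert use_system_libxml2((2, 9, 0)) is False   # known-worse: bundled
assert use_system_libxml2((2, 10, 0)) is False  # untested minor: bundled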
2025-04-01T04:35:33.638273
2021-05-08T11:11:39
880677496
{ "authors": [ "craigraw", "keblek" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10960", "repo": "sparrowwallet/sparrow", "url": "https://github.com/sparrowwallet/sparrow/issues/124" }
gharchive/issue
Feature Request: Add many to the send page In the send page you can add new addresses one by one. This works fine if you only want to add one or two more outputs. If you are batching a large tx with many outputs (like batching several LN channel opens or a coinjoin) then this can be tedious and potentially error prone. Please add an Add Many option where I can add a list of addresses and their amounts, either by opening a file with a list, or by pasting them into a text box. For example, I can compile a list of addresses and their amounts in a text editor and have the wallet open it and construct a tx from it. An example would be something like the following; bc1...gg 1,000,000 bc1...nm 1,000,000 Added in f23a891. The addresses and amounts can either be added in the spreadsheet grid in the wallet, or by loading a CSV file with the equivalent tabular data in it. Nice! Lightning batchers will appreciate this.
2025-04-01T04:35:33.644842
2020-11-23T14:51:19
748867599
{ "authors": [ "6102bitcoin", "craigraw", "louneskmt" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10961", "repo": "sparrowwallet/sparrow", "url": "https://github.com/sparrowwallet/sparrow/issues/39" }
gharchive/issue
Sparrow and Electrum produce different signatures Hi! I recently tried to sign a message from Sparrow, to prove my ownership of an address, but the signature got rejected on the website. So I tried with Electrum: I exported the wallet, opened it in Electrum, and signed the same message with the same address, and it resulted in a different signature, which has been accepted by the website. For example, let's sign the message containing just the word Test with this address 36TcRXxXDgwv5J78EL8QnTTyh5owFAXpb9: Electrum sig: ID22tR17Iv4JDZOPTDznywOrmP6YsJEhEvlQhvYjqdO4VCiFJOGXiTpZcxzzdOgb4U4/5BsCbnHg/E1xDddgKLY= Sparrow sig: I21pjaclbCUb+agjKkFTJSHdKYPACzDjsrHdimpuXIEBSDb1kicauaWHZzo+he397Isug//jNNUdF22wf2QQQeY= They are different, and each one is considered invalid when verified in the software that did not produce it. First off, that's strange - I am able to verify the Electrum sig using Sparrow. Can you try again? The longer answer is that, unfortunately, there is no universal standard for message signing for non-P2PKH (addresses that start with 3 or bc1), and in fact Core does not support signing messages with these addresses. Electrum and Trezor followed different paths that unfortunately create different signatures. You can read more about it here: https://github.com/bitcoinjs/bitcoinjs-message/pull/29 Sparrow tries to be standards compliant, and so follows the Trezor path (which is at least documented in a BIP: https://github.com/bitcoin/bips/blob/master/bip-0137.mediawiki) in terms of generating signatures. Sparrow will however validate signatures using the Electrum format as well. First off, that's strange - I am able to verify the Electrum sig using Sparrow. Can you try again? Just tried again: it appears that Sparrow can verify the Electrum sig while Electrum can't verify the Sparrow sig. Thanks for the multiple details and links. Indeed, I tried a few online tools to verify Bitcoin signatures, and none of them give the same result (one accepts both sigs, while another accepts only one out of 2, and others neither of them). Also, I'd suggest displaying the hash of the message to be signed in Sparrow, as HW like Ledger display it on their screen, to allow the user to verify it directly on the device. We actually don't know if we sign the right message and not another one, as we cannot compare hashes. Thanks a lot again. Outstanding Action: Add note that electrum can't verify sparrow sigs? Proposed Priority: Medium (Could cause confusion / frustration) As of dbfed31 Sparrow now allows toggling between the Electrum and BIP137 signature formats: Signatures from messages signed with non-P2PKH addresses can now be verified in Electrum. Closing this off as resolved. Awesome work, thanks @craigraw!
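The format difference craigraw describes is visible in the very first byte of the base64 signature. Based on the header-byte ranges defined in BIP137 (with Electrum reusing the compressed-P2PKH range for every address type), a rough classifier could look like the following sketch; it inspects the header only and is not a verifier.

import base64

def signature_flavor(sig_b64):
    """Guess the signing convention from a signed-message signature's header byte."""
    header = base64.b64decode(sig_b64)[0]
    if 27 <= header <= 30:
        return "P2PKH, uncompressed key"
    if 31 <= header <= 34:
        return "P2PKH compressed range (what Electrum emits for all address types)"
    if 35 <= header <= 38:
        return "Segwit P2SH per BIP137 (what Sparrow/Trezor emit for 3... addresses)"
    if 39 <= header <= 42:
        return "Segwit Bech32 per BIP137"
    return "unknown header byte: %d" % header

# The two signatures from this thread classify differently, as expected
print(signature_flavor("ID22tR17Iv4JDZOPTDznywOrmP6YsJEhEvlQhvYjqdO4VCiFJOGXiTpZcxzzdOgb4U4/5BsCbnHg/E1xDddgKLY="))
print(signature_flavor("I21pjaclbCUb+agjKkFTJSHdKYPACzDjsrHdimpuXIEBSDb1kicauaWHZzo+he397Isug//jNNUdF22wf2QQQeY="))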
2025-04-01T04:35:33.665422
2024-12-26T19:50:58
2760094311
{ "authors": [ "SantiagoBolumburu", "bidonazzo", "sickmz" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10962", "repo": "spartan-ng/spartan", "url": "https://github.com/spartan-ng/spartan/issues/545" }
gharchive/issue
Invalid package name "@spartan-ng/brain/collapsible" of package "<EMAIL_ADDRESS>name can only contain URL-friendly characters Please provide the environment you discovered this bug in. Node version: v20.18.1 NPM version: 10.8.2 Angular version: 19.0.6 Which area/package is the issue in? collapsible Description I created a new app with ng new, to reproduce this error I don't understand. I followed all the steps of spartan/ui and also of the tailwind configuration. When I get to the final part of adding the primitives, with the ng g @spartan-ng/cli:ui command, a list pops up where I choose the primitive to install. If I choose a simple primitive such as a button, it is installed without any problems in libs/ui. If I try to install a collapsible or accordion I get the following error:
npm error code EINVALIDPACKAGENAME
npm error Invalid package name "@spartan-ng/brain/collapsible" of package "<EMAIL_ADDRESS>name can only contain URL-friendly characters.
npm error A complete log of this run can be found in: /home/bidonazzo/.npm/_logs/2024-12-26T19_38_48_400Z-debug-0.log
Command failed: npm install
Please provide the exception or error you saw
0 verbose cli /usr/bin/node /usr/bin/npm
1 info using<EMAIL_ADDRESS>
2 info using<EMAIL_ADDRESS>
3 silly config load:file:/usr/lib/node_modules/npm/npmrc
4 silly config load:file:/home/bidonazzo/spartan-test/.npmrc
5 silly config load:file:/home/bidonazzo/.npmrc
6 silly config load:file:/usr/etc/npmrc
7 verbose title npm install
8 verbose argv "install"
9 verbose logfile logs-max:10 dir:/home/bidonazzo/.npm/_logs/2024-12-26T19_38_48_400Z-
10 verbose logfile /home/bidonazzo/.npm/_logs/2024-12-26T19_38_48_400Z-debug-0.log
11 silly packumentCache heap:2197815296 maxSize:549453824 maxEntrySize:274726912
12 silly logfile start cleaning logs, removing 1 files
13 silly logfile done cleaning log files
14 silly idealTree buildDeps
15 verbose stack Error: Invalid package name "@spartan-ng/brain/collapsible" of package "<EMAIL_ADDRESS>name can only contain URL-friendly characters.
15 verbose stack at invalidPackageName (/usr/lib/node_modules/npm/node_modules/npm-package-arg/lib/npa.js:111:15)
15 verbose stack at Result.setName (/usr/lib/node_modules/npm/node_modules/npm-package-arg/lib/npa.js:158:11)
15 verbose stack at new Result (/usr/lib/node_modules/npm/node_modules/npm-package-arg/lib/npa.js:147:10)
15 verbose stack at Function.resolve (/usr/lib/node_modules/npm/node_modules/npm-package-arg/lib/npa.js:59:15)
15 verbose stack at #nodeFromEdge (/usr/lib/node_modules/npm/node_modules/@npmcli/arborist/lib/arborist/build-ideal-tree.js:1036:22)
15 verbose stack at #buildDepStep (/usr/lib/node_modules/npm/node_modules/@npmcli/arborist/lib/arborist/build-ideal-tree.js:901:35)
15 verbose stack at #buildDeps (/usr/lib/node_modules/npm/node_modules/@npmcli/arborist/lib/arborist/build-ideal-tree.js:754:30)
15 verbose stack at Arborist.buildIdealTree (/usr/lib/node_modules/npm/node_modules/@npmcli/arborist/lib/arborist/build-ideal-tree.js:181:28)
15 verbose stack at async Promise.all (index 1)
15 verbose stack at async Arborist.reify (/usr/lib/node_modules/npm/node_modules/@npmcli/arborist/lib/arborist/reify.js:131:5)
16 error code EINVALIDPACKAGENAME
17 error Invalid package name "@spartan-ng/brain/collapsible" of package "<EMAIL_ADDRESS>name can only contain URL-friendly characters.
18 silly unfinished npm timer reify<PHONE_NUMBER>563
19 silly unfinished npm timer reify:loadTrees<PHONE_NUMBER>570
20 silly unfinished npm timer idealTree:buildDeps<PHONE_NUMBER>958
21 silly unfinished npm timer idealTree:#root<PHONE_NUMBER>959
22 verbose cwd /home/bidonazzo/spartan-test
23 verbose os Linux 6.5.11-7-pve
24 verbose node v20.18.1
25 verbose npm v10.8.2
26 verbose exit 1
27 verbose code 1
28 error A complete log of this run can be found in: /home/bidonazzo/.npm/_logs/2024-12-26T19_38_48_400Z-debug-0.log
Other information No response I would be willing to submit a PR to fix this issue [X] Yes [ ] No looks similar to #532 looks similar to #532 I have the same problem @bidonazzo when trying to set up a new angular project. Attempted the solution proposed in #532 without success.
2025-04-01T04:35:33.681799
2024-01-19T00:06:54
2089316304
{ "authors": [ "Nielsvanpach", "kapi14" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10963", "repo": "spatie/holidays", "url": "https://github.com/spatie/holidays/pull/112" }
gharchive/pull-request
Add Ukrainian holidays Hi Spatie! There is one extra function for calculating the correct Easter date due to the Meeus Julian algorithm or Revised Julian calendar. Could you look at the failing tests? Thanks! Hi @Nielsvanpach, Sure, should be good now. PR looks good! 👍 I'll leave this one open until I've seen some approvals! Thanks for reviewing. Thanks!
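For reference, the Meeus Julian algorithm mentioned here is only a few lines. spatie/holidays itself is PHP, so the following Python transcription is illustrative only; the fixed 13-day Julian-to-Gregorian offset it assumes is valid for the years 1900-2099.

from datetime import date, timedelta

def orthodox_easter(year):
    """Meeus' Julian algorithm, shifted onto the Gregorian calendar."""
    a, b, c = year % 4, year % 7, year % 19
    d = (19 * c + 15) % 30
    e = (2 * a + 4 * b - d + 34) % 7
    month, day = divmod(d + e + 114, 31)
    julian = date(year, month, day + 1)
    return julian + timedelta(days=13)  # Julian -> Gregorian, 1900-2099 only

assert orthodox_easter(2024) == date(2024, 5, 5)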
2025-04-01T04:35:33.684086
2022-03-30T14:38:10
1186563783
{ "authors": [ "AlexVanderbist", "sanadhiflaoui" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10964", "repo": "spatie/ignition", "url": "https://github.com/spatie/ignition/issues/116" }
gharchive/issue
NotFoundError: Failed to execute 'removeChild' on 'Node': The node to be removed is not a child of this node. Please include some context and the contents of the console in your browser's developer tools. JavaScript Error Error: Failed to execute 'removeChild' on 'Node': The node to be removed is not a child of this node. at Rl (http://localhost:8000/api/quiz/:148:100946) at Cc (http://localhost:8000/api/quiz/:148:112448) at t.unstable_runWithPriority (http://localhost:8000/api/quiz/:148:11072) at $a (http://localhost:8000/api/quiz/:148:56140) at Lc (http://localhost:8000/api/quiz/:148:110483) at Ec (http://localhost:8000/api/quiz/:148:106602) at http://localhost:8000/api/quiz/:148:56362 at t.unstable_runWithPriority (http://localhost:8000/api/quiz/:148:11072) at $a (http://localhost:8000/api/quiz/:148:56140) at qa (http://localhost:8000/api/quiz/:148:56308) Reproduction Steps Please tell us what you were doing when this error occurred, so we can more easily debug it and find a solution. … User Agent Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/99.0.4844.84 Safari/537.36 Duplicate of #109
2025-04-01T04:35:34.089159
2022-05-13T14:35:14
1235327849
{ "authors": [ "Adel-Moumen", "RD0617", "anautsch" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10965", "repo": "speechbrain/speechbrain", "url": "https://github.com/speechbrain/speechbrain/issues/1397" }
gharchive/issue
Doesn't download models Hello! I'm trying to write a program to recognize speech and language. I'm using Python for this with SpeechBrain, but when I use this example from the internet:
import torchaudio
from speechbrain.pretrained import EncoderClassifier

classifier = EncoderClassifier.from_hparams(source="speechbrain/lang-id-commonlanguage_ecapa", savedir="pretrained_models/lang-id-commonlanguage_ecapa")
out_prob, score, index, text_lab = classifier.classify_file('speechbrain/lang-id-commonlanguage_ecapa/example-it.wav')
print(text_lab)
out_prob, score, index, text_lab = classifier.classify_file('speechbrain/lang-id-commonlanguage_ecapa/example-fr.wav')
print(text_lab)
the download speed drops to zero after a couple of seconds. What is the problem? And how can I use a local model from my PC without downloading? Hi @RD0617 thanks for reaching out. I ran the code on my end (created a tmp folder to run it in there), and couldn't reproduce the error. Is it still persisting on your end? About downloading, at some point you might need to download a pretrained model to use it. There are two ways to get a pretrained model from HuggingFace: use the identifier source="speechbrain/lang-id-commonlanguage_ecapa" and let python handle it, or use git and download it manually upfront, ensuring the savedir="pretrained_models/lang-id-commonlanguage_ecapa" is met, so the python scripts see there's nothing to do for them (since everything is downloaded already). Example for the latter:
mkdir pretrained_models && cd pretrained_models
git clone https://huggingface.co/speechbrain/lang-id-commonlanguage_ecapa
cd ..
# python -> run the example snippet
That way, you are in full control of the download process. Hello, is the issue still up? Many thanks! Hello, it seems that there has been no activity for a long time, so I am closing this issue. Feel free to reopen it if you still encounter the problem. Thank you. 🙂
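Building on the answer above, once the model has been cloned manually it should also be possible to point source at the local directory so that no network fetch is attempted at all. This is an untested sketch; the path assumes the git clone layout shown above.

from speechbrain.pretrained import EncoderClassifier

# Load entirely from the pre-downloaded local copy (no HuggingFace access)
classifier = EncoderClassifier.from_hparams(
    source="pretrained_models/lang-id-commonlanguage_ecapa",
    savedir="pretrained_models/lang-id-commonlanguage_ecapa",
)
out_prob, score, index, text_lab = classifier.classify_file("example-it.wav")
print(text_lab)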
2025-04-01T04:35:34.095333
2024-09-19T00:30:46
2534999079
{ "authors": [ "mravanelli" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10966", "repo": "speechbrain/speechbrain", "url": "https://github.com/speechbrain/speechbrain/pull/2691" }
gharchive/pull-request
Update README.md What does this PR do? Fixes #<issue_number> Before submitting [ ] Did you read the contributor guideline? [ ] Did you make sure your PR does only one thing, instead of bundling different changes together? [ ] Did you make sure to update the documentation with your changes? (if necessary) [ ] Did you write any new necessary tests? (not for typos and docs) [ ] Did you verify new and existing tests pass locally with your changes? [ ] Did you list all the breaking changes introduced by this pull request? [ ] Does your code adhere to project-specific code style and conventions? PR review Reviewer checklist [ ] Is this pull request ready for review? (if not, please submit in draft mode) [ ] Check that all items from Before submitting are resolved [ ] Make sure the title is self-explanatory and the description concisely explains the PR [ ] Add labels and milestones (and optionally projects) to the PR so it can be classified [ ] Confirm that the changes adhere to compatibility requirements (e.g., Python version, platform) [ ] Review the self-review checklist to ensure the code is ready for review I slightly modified the README to accomodate a comment made by a reviewer of our JMLR paper submission.
2025-04-01T04:35:34.103238
2018-02-11T17:19:13
296206936
{ "authors": [ "Pyknic", "minborg", "muhdkhokhar" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10967", "repo": "speedment/speedment", "url": "https://github.com/speedment/speedment/issues/588" }
gharchive/issue
java.lang.UnsupportedOperationException I am getting java.lang.UnsupportedOperationException when I try to generate code from postgres 10 Full stack trace:
java.lang.UnsupportedOperationException
at java.util.Collections$UnmodifiableMap.put(Collections.java:1457)
at com.speedment.generator.standard.lifecycle.GeneratedMetadataTranslator.makeCodeGenModel(GeneratedMetadataTranslator.java:78)
at com.speedment.generator.standard.lifecycle.GeneratedMetadataTranslator.makeCodeGenModel(GeneratedMetadataTranslator.java:46)
at com.speedment.generator.translator.AbstractJavaClassTranslator.get(AbstractJavaClassTranslator.java:144)
at com.speedment.generator.translator.AbstractJavaClassTranslator.get(AbstractJavaClassTranslator.java:62)
at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1374)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
at com.speedment.generator.core.internal.translator.TranslatorManagerHelper.accept(TranslatorManagerHelper.java:142)
at com.speedment.generator.core.internal.translator.DefaultTranslatorManager.accept(DefaultTranslatorManager.java:58)
at com.speedment.tool.core.internal.util.ConfigFileHelper.generateSources(ConfigFileHelper.java:416)
at com.speedment.tool.core.internal.component.UserInterfaceComponentImpl.lambda$generate$10(UserInterfaceComponentImpl.java:358)
at java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:822)
at java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:797)
at java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:443)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
ok. Thanks. I have scheduled this issue for release 3.0.22 Can you upload the speedment.json file for your project to this issue? speedment.json file uploaded as requested speedment.zip Thanks I think I can see the problem. It is a bug introduced when we added a version check to make sure the code is executed with the same version of Speedment as it was generated with. I will fix this now and make sure it is included in the 3.0.22 release. Until then, you can either go back to 3.0.20, which doesn't have this issue, or build Speedment 3.0.22-SNAPSHOT yourself by checking out the develop-3.0 branch. Hm, it didn't fix the problem entirely. It seems to be related to generating code from the tool. Using mvn speedment:generate seems to work. I have fixed this now. I can verify that I get the same exception with 3.0.21 and it doesn't show up in 3.0.22-SNAPSHOT. We should do an immediate release of 3.0.22 since this will affect most users.
2025-04-01T04:35:34.123334
2018-02-26T23:32:13
300437154
{ "authors": [ "eparis", "prologic" ], "license": "bsd-3-clause", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10968", "repo": "spf13/pflag", "url": "https://github.com/spf13/pflag/issues/159" }
gharchive/issue
sub-commands with global options + help? Is this doable with pflag? Like what cobra supports, but in a more minimal/library form. (cobra is way overkill for my use-case) pflag has FlagSets which you can manage as you see fit. But the concept of subcommands and global flags and all of that is a cobra concept. You're going to have to rebuild cobra :-( I see; Thanks!
2025-04-01T04:35:34.127173
2015-07-27T08:32:42
97412103
{ "authors": [ "lauraluiz", "schleichardt" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10969", "repo": "sphereio/sphere-sunrise", "url": "https://github.com/sphereio/sphere-sunrise/issues/137" }
gharchive/issue
Requests to SPHERE.IO are slow @lauraluiz The requests to sphere on the PDP are quite slow, can we look into this together? I would love to know what it is before the release of SDK 1.0.0-RC1. The PDP is using Query instead of Search for suggestions. This was fixed recently.
2025-04-01T04:35:34.128695
2020-08-16T01:29:27
679670034
{ "authors": [ "sphillips431" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10970", "repo": "sphillips431/website", "url": "https://github.com/sphillips431/website/pull/2" }
gharchive/pull-request
updating readme from example branch updates I updated the readme file. Please review and accept. #thoughts I probably shouldn't, but okay 👍
2025-04-01T04:35:34.137248
2022-05-27T14:00:14
1250805453
{ "authors": [ "jdknight", "secbyd" ], "license": "BSD-2-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10971", "repo": "sphinx-contrib/confluencebuilder", "url": "https://github.com/sphinx-contrib/confluencebuilder/issues/656" }
gharchive/issue
Access requirements on Confluence Could you please elaborate on the minimum access requirements the Sphinx for Confluence account needs to publish/delete pages? I created a separate user with a PAT and granted it full admin access to a particular space. While debugging, the first request is successful (HTTP 200) but the second request results in an HTTP 403.
DEBUG:urllib3.connectionpool:https://CONFSERVER:443 "GET /rest/api/space?spaceKey=TAT&limit=1 HTTP/1.1" 200 None
DEBUG:urllib3.connectionpool:https://CONFSERVER:443 "GET /rest/api/search?cql=type%3Dspace+and+space.title~%22TEST%22&limit=2 HTTP/1.1" 403 None
The second REST request is accessible for a different account, but as that account has privileged access, it is difficult to determine which right to grant the Sphinx account. PS It does not seem right to search for space.title, as space.title and spaceKey are usually different. Changing confluence_space_key in conf.py to the actual space.title still resulted in a 403, which rules out a potential problem there. Any suggestions would be highly appreciated. The initial "successful" request (200) is a bit misleading. The first request will search for a space with a key value TAT for the configured user/PAT. While the request for the space has been successfully invoked, it is reporting that no such space key exists ("200 None"). This is either: The space with the key TAT does not exist on the Confluence instance; or, The user requesting the search does not have permission to view the Confluence instance. In this case, it appears to be the latter, since the fallback search request results in an authentication issue. Note that the secondary search request only occurs if the first space search attempt fails. Its purpose is solely to help find possible other spaces with similar matching names based off the configured space key. The space's title value was chosen in this case over the space's key value. I would agree that both are completely different, but Confluence's search API does not provide a way to wildcard search on space keys (please, correct me if I am wrong here). The main reason why the alternative search was added was that a couple of times, users reported configuration issues only to find out that they copy-pasted the space's title value over the key value -- this was an attempt to help present a more descriptive error case to these users. I have always found it interesting that an explicit search API call (/wiki/rest/api/search) can throw an authentication error, but a space search (/wiki/rest/api/space) will not. I am sure Atlassian has its reasons for this. As for permission settings, it is important that the user is given access (or is part of a group) with the following options checked for a site: Space: View; Page: Add, Delete; Attachments: Add, Delete. Note that even if a user is given "Admin" access, that does not mean the user has permission to edit pages (although I would assume search requests should be succeeding). If a user is configured with the above permissions and has a PAT matching the value set in confluence_publish_token (and other authentication options are not used, such as confluence_server_user), authentication should succeed. If it does not, the only other thing I can think of off hand is that a proxy is being used and the Authorization header is not being forwarded to Confluence. Two updates have been made associated with this issue.
First, the implementation has been updated for users running in debug mode to clarify what is happening (if users see one request pass and another fail, debug logs will reflect why, to help advanced users understand the URL response codes). Second, a new Publishing permissions section has been added to the documentation to provide clarity to users/system administrators on the permissions needed for this extension. Hopefully this provides additional information to users related to access requirements. Marking as closed.
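For troubleshooting cases like this one, the PAT's effective permissions can be probed independently of Sphinx by replaying the same request the extension makes first (the endpoint is the one visible in the debug log above). A sketch using requests; the base URL, space key, and token are placeholders.

import requests

BASE = "https://CONFSERVER"              # placeholder Confluence base URL
TOKEN = "your-personal-access-token"     # placeholder PAT
headers = {"Authorization": "Bearer " + TOKEN}

# Same space lookup the extension performs first
r = requests.get(BASE + "/rest/api/space",
                 params={"spaceKey": "TAT", "limit": 1}, headers=headers)
# Empty "results" on a 200 means the key is wrong or the user lacks view rights
print(r.status_code, r.json().get("results"))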
2025-04-01T04:35:34.157299
2024-06-05T17:10:49
2336423337
{ "authors": [ "ShravanSunder", "j-mendez" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10972", "repo": "spider-rs/spider-clients", "url": "https://github.com/spider-rs/spider-clients/issues/3" }
gharchive/issue
crawl_url does not have any types for params The API is missing types for a lot of method parameters. Thank you for the issue, this is now fixed in v0.0.23.
2025-04-01T04:35:34.162422
2024-09-13T09:32:48
2524362069
{ "authors": [ "MachinistJohn", "spimort" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10973", "repo": "spimort/TerraBrush", "url": "https://github.com/spimort/TerraBrush/issues/128" }
gharchive/issue
water shader work around not an issue just an idea. why not query the engine for its version number and have two separate shaders. then just load the one for the corresponding godot version. that way you can support everyone in one addon.
func get_godot_version():
    var version = Engine.get_version_info()
    print("Godot version: %s.%s.%s" % [version.major, version.minor, version.patch])
    # If you want to include the status (e.g., beta, rc)
    if version.status:
        print("Status: ", version.status)
That's a great idea! I'll try to work on that 🙂 This has been done with https://github.com/spimort/TerraBrush/pull/131 Thanks for the suggestion!
2025-04-01T04:35:34.176562
2018-05-30T12:58:21
327719088
{ "authors": [ "dorbin", "lwander" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10974", "repo": "spinnaker/spinnaker.github.io", "url": "https://github.com/spinnaker/spinnaker.github.io/issues/810" }
gharchive/issue
Document how to tweak/operationalize spinnaker Namely has given a great writeup of their Spinnaker journey. In it they cover a lot of configuration changes/tweaks that would be useful to know of without having to read Spinnaker's codebase. We should at the very least document these somewhere in a separate "setup" heading. So I don't forget, it seems
redis:
  configuration:
    secure: true
is important in some cases... need to dig into why Resolving this issue, per #811 See Productionize Spinnaker.
2025-04-01T04:35:34.190685
2024-09-04T12:23:21
2505201247
{ "authors": [ "samsonasik" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10975", "repo": "spiral/framework", "url": "https://github.com/spiral/framework/pull/1136" }
gharchive/pull-request
Apply Rector on src/*/tests for tests inside each component Q A QA? ✔️ Register the tests directory inside each of the components so it is processed by Rector as well. Ready to merge 👍
2025-04-01T04:35:34.208486
2019-10-31T20:04:58
515709425
{ "authors": [ "P4T12ICK", "patel-bhavin" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10976", "repo": "splunk/security-content", "url": "https://github.com/splunk/security-content/pull/241" }
gharchive/pull-request
updating generate to comment out description in transforms btool says invalid key "description", so commenting out that key as a fix. Looks good.
2025-04-01T04:35:34.218106
2021-11-09T10:47:55
1048455948
{ "authors": [ "P4T12ICK", "tccontre" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10977", "repo": "splunk/security_content", "url": "https://github.com/splunk/security_content/pull/1787" }
gharchive/pull-request
fix_issues fixing issues modified: detections/endpoint/cmdline_tool_not_executed_in_cmd_shell.yml (fix description) modified: detections/endpoint/office_application_spawn_rundll32_process.yml (minor change) renamed and improved the detections renamed: detections/endpoint/regsvr32_silent_param_dll_loading.yml -> detections/endpoint/regsvr32_silent_and_install_param_dll_loading.yml renamed: detections/endpoint/suspicious_icedid_regsvr32_cmdline.yml -> detections/endpoint/regsvr32_with_known_silent_switch_cmdline.yml renamed: tests/endpoint/regsvr32_silent_param_dll_loading.test.yml -> tests/endpoint/regsvr32_silent_and_install_param_dll_loading.test.yml renamed: tests/endpoint/suspicious_icedid_regsvr32_cmdline.test.yml -> tests/endpoint/regsvr32_with_known_silent_switch_cmdline.test.yml This PR was created by Automated Detection Testing Service :robot: Tested detections: Regsvr32 Silent and Install Param Dll Loading :white_check_mark: diskUsage: 528384 runDuration: 6.074s scanCount: 6697 This PR was created by Automated Detection Testing Service :robot: Tested detections: Regsvr32 with Known Silent Switch Cmdline :x: diskUsage: 606208 runDuration: 56.358000000000004s scanCount: 98068 This PR was created by Automated Detection Testing Service :robot: Tested detections: Regsvr32 with Known Silent Switch Cmdline :x: diskUsage: 528384 runDuration: 53.591s scanCount: 98068 This PR was created by Automated Detection Testing Service :robot: Tested detections: Office Application Drop Executable :white_check_mark: diskUsage: 983040 runDuration: 31.345000000000002s scanCount: 28118 This PR was created by Automated Detection Testing Service :robot: Tested detections: Regsvr32 with Known Silent Switch Cmdline :white_check_mark: diskUsage: 610304 runDuration: 57.120000000000005s scanCount: 98255 This PR was created by Automated Detection Testing Service :robot: Tested detections: Regsvr32 with Known Silent Switch Cmdline :white_check_mark: diskUsage: 507904 runDuration: 53.632s scanCount: 98068 This PR was created by Automated Detection Testing Service :robot: Tested detections: Regsvr32 Silent and Install Param Dll Loading :white_check_mark: diskUsage: 606208 runDuration: 5.4430000000000005s scanCount: 6737
2025-04-01T04:35:34.219335
2021-12-15T03:52:45
1080532933
{ "authors": [ "d1vious", "dleung-splunk" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10978", "repo": "splunk/security_content", "url": "https://github.com/splunk/security_content/pull/1889" }
gharchive/pull-request
fixes a logic bug with the API that caused CI to fail. The logic was incorrect in generate.py. Looks good
2025-04-01T04:35:34.224877
2020-03-10T08:58:08
578427684
{ "authors": [ "bmillemathias-1a" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10979", "repo": "splunk/splunk-connect-for-kubernetes", "url": "https://github.com/splunk/splunk-connect-for-kubernetes/issues/343" }
gharchive/issue
Openshift deployment not complete? (I'm starting to play with splunk / splunk-connect-for-k8s, sorry if I'm wrong.)
What happened:
Platform: Openshift 4.3
I deployed the manifests from master (I don't use helm for deployment, although I use it to render manifests), and I got a permission issue like this on the logging pods:
Error creating: pods "release-name-splunk-kubernetes-logging-" is forbidden: unable to validate against any security context constraint:
[spec.volumes[0]: Invalid value: "hostPath": hostPath volumes are not allowed to be used
spec.volumes[1]: Invalid value: "hostPath": hostPath volumes are not allowed to be used
spec.volumes[2]: Invalid value: "hostPath": hostPath volumes are not allowed to be used
spec.containers[0].securityContext.securityContext.runAsUser: Invalid value: 0: must be in the ranges: [1000580000,<PHONE_NUMBER>]
spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed]
I found the solution was to execute the steps given on https://blog.openshift.com/splunk-connect-for-openshift-logging-part/
oc create sa splunk-kubernetes-logging # not necessary it seems
oc adm policy add-scc-to-user privileged -z splunk-kubernetes-logging # name may vary depending on the
What you expected to happen: These steps to be documented in this repo or done automatically.
How to reproduce it (as minimally and precisely as possible):
modify splunk-connect-for-kubernetes/helm-chart/splunk-connect-for-kubernetes/values.yaml
mkdir rendered-manifests
helm template -n '1-4-0' -f values.yaml . --output-dir ./rendered-manifests
find ./rendered-manifests -type f | xargs -n1 oc apply -f
....
oc get pods # shows no logger pods are running
oc get events # shows the error
Anything else we need to know?: I saw there was some work done on openshift support, but I'm not sure whether it is complete. I've modified values.yaml to have openshift: true
Environment:
Kubernetes version (use kubectl version): openshift 4.3, kube v1.16.2
Ruby version (use ruby --version): n/a
OS (e.g: cat /etc/os-release):
Splunk version: 7.3.x
Others: I destroyed and recreated all manifests several times and got the same issue.
I removed the whole project, recreated everything from scratch, and it worked fine... sorry for the noise
2025-04-01T04:35:34.258022
2015-08-17T08:14:11
101366025
{ "authors": [ "spooning" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10980", "repo": "spooning/SeleniumLibrary", "url": "https://github.com/spooning/SeleniumLibrary/issues/38" }
gharchive/issue
Start server keyword
Originally submitted to Google Code by @yanne on 8 May 2009. This could also be exposed as a module-level function.
Originally submitted to Google Code by @yanne on 13 May 2009. Implemented in r172.
2025-04-01T04:35:34.265661
2020-10-09T06:15:31
717883408
{ "authors": [ "GithubBoat", "abh80" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10981", "repo": "spotify-api/spotify-api.js", "url": "https://github.com/spotify-api/spotify-api.js/pull/3" }
gharchive/pull-request
update new docs Added new docs @abh80, Hey thanks for submitting this Pull Request, please wait while a collaborator reviews your PR
2025-04-01T04:35:34.274149
2016-10-11T12:40:06
182253075
{ "authors": [ "codecov-io", "davidxia", "dmandalidis" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10982", "repo": "spotify/docker-client", "url": "https://github.com/spotify/docker-client/pull/524" }
gharchive/pull-request
Use AuthConfig for service creation in swarm mode (closes #523). Removal of obsolete ServiceCreateOptions.
Current coverage is 45.92% (diff: 100%). Merging #524 into master will increase coverage by 0.09%.
@@            master    #524   diff @@
==========================================
  Files          131     130     -1
  Lines         4294    4285     -9
  Methods          0       0
  Messages         0       0
  Branches       636     634     -2
==========================================
  Hits          1968    1968
+ Misses        2138    2129     -9
  Partials       188     188
Powered by Codecov. Last update bd67c8c...f4fe5d7
@davidxia hey it's me again (if you please). Thanks in advance.
@davidxia @mattnworb Can you please check this?
@dmandalidis Thanks for the PR. Sorry for the delay.
2025-04-01T04:35:34.278634
2020-09-11T15:46:10
699469333
{ "authors": [ "mattnworb", "ptahchiev" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10983", "repo": "spotify/dockerfile-maven", "url": "https://github.com/spotify/dockerfile-maven/issues/362" }
gharchive/issue
Very slow
So I have the following layout (super standard maven build):
- pom.xml
- src/main/docker/Dockerfile
- many more files/folders in src
- target/myapp.jar
I have configured the dockerfile-maven-plugin as such:
<plugin>
  <groupId>com.spotify</groupId>
  <artifactId>dockerfile-maven-plugin</artifactId>
  <configuration>
    <buildArgs>
      <JAR_FILE>${project.build.finalName}.jar</JAR_FILE>
    </buildArgs>
    <contextDirectory>${basedir}</contextDirectory>
    <dockerfile>src/main/docker/Dockerfile</dockerfile>
    <noCache>true</noCache>
    <repository>nemesis/${project.artifactId}</repository>
  </configuration>
</plugin>
Now when I run mvn dockerfile:build, every time it takes 2 minutes before it even starts building the image! I think it is because it tries to add all the files in . since . is my contextDirectory. But I have no other option - my Dockerfile is in src and my .jar file is in target, so my only option is to have the contextDirectory set to ".". Am I right? How can I make it faster?
A better solution is to include only target/app.jar and have everything else excluded by default. If you take a look at the docs, you can put something like this in your .dockerignore file:
target
!target/app.jar
to ignore everything under target except a given path.
2025-04-01T04:35:34.286611
2018-02-02T21:40:05
294031737
{ "authors": [ "codecov-io", "pettermahlen" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10984", "repo": "spotify/mobius", "url": "https://github.com/spotify/mobius/pull/6" }
gharchive/pull-request
add a couple of tests and fix an InitSpec bug
If an exception was expected but didn't happen, InitSpec wouldn't fail. Fixed that and added validation of that behaviour for both InitSpec and UpdateSpec.
Codecov Report
Merging #6 into master will increase coverage by 0.58%. The diff coverage is 100%.
@@ Coverage Diff @@
##           master      #6     +/-   ##
============================================
+ Coverage     73.03%  73.62%   +0.58%
  Complexity      196     196
============================================
  Files            60      60
  Lines          1198    1202      +4
  Branches         60      61      +1
============================================
+ Hits            875     885     +10
+ Misses          308     303      -5
+ Partials         15      14      -1
Impacted Files | Coverage Δ | Complexity Δ
.../main/java/com/spotify/mobius/test/UpdateSpec.java | 100% <ø> (+10.52%) | 5 <0> (ø) :arrow_down:
...rc/main/java/com/spotify/mobius/test/InitSpec.java | 80% <100%> (+17.5%) | 2 <0> (ø) :arrow_down:
Continue to review full report at Codecov.
Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 786125a...c9a9d9e. Read the comment docs.
2025-04-01T04:35:34.290761
2020-12-05T02:04:21
757511175
{ "authors": [ "scala-steward" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10985", "repo": "spotify/ratatool", "url": "https://github.com/spotify/ratatool/pull/357" }
gharchive/pull-request
Update google-api-services-bigquery to v2-rev20201030-1.31.0 Updates com.google.apis:google-api-services-bigquery from v2-rev20200925-1.30.10 to v2-rev20201030-1.31.0. I'll automatically update this PR to resolve conflicts as long as you don't change it yourself. If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below. Configure Scala Steward for your repository with a .scala-steward.conf file. Have a fantastic day writing Scala! Ignore future updates Add this to your .scala-steward.conf file to ignore future updates of this dependency: updates.ignore = [ { groupId = "com.google.apis", artifactId = "google-api-services-bigquery" } ] labels: library-update Superseded by #372.
2025-04-01T04:35:34.293968
2021-11-02T02:46:25
1041828135
{ "authors": [ "scala-steward" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10986", "repo": "spotify/ratatool", "url": "https://github.com/spotify/ratatool/pull/500" }
gharchive/pull-request
Update scala-library to 2.13.7 Updates org.scala-lang:scala-library from 2.13.6 to 2.13.7. I'll automatically update this PR to resolve conflicts as long as you don't change it yourself. If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below. Configure Scala Steward for your repository with a .scala-steward.conf file. Have a fantastic day writing Scala! Ignore future updates Add this to your .scala-steward.conf file to ignore future updates of this dependency: updates.ignore = [ { groupId = "org.scala-lang", artifactId = "scala-library" } ] labels: library-update, early-semver-patch, semver-spec-patch Superseded by #513.
2025-04-01T04:35:34.317782
2020-06-16T15:51:22
639782584
{ "authors": [ "TomBPotochek", "petergmurphy" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10987", "repo": "spotlightify/spotlightify", "url": "https://github.com/spotlightify/spotlightify/issues/48" }
gharchive/issue
[Issue] Access Token Not Refreshing
App fails to refresh the API user token if it has been open for longer than one hour. At the moment the app requires a restart for the token to be refreshed. This needs to be fixed. The function to refresh tokens can be found inside app.py on the master branch.
I get the same problem. The console yields the following lines:
[WARNING] Could not refresh user API token
HTTP Error for GET to https://api.spotify.com/v1/me/player returned 401 due to: The access token expired
This has been fixed
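For context, a refresh against Spotify's token endpoint generally looks like the minimal sketch below. This is an illustration rather than Spotlightify's actual app.py code; the credential variables are assumed to come from the app's own configuration.

```python
# Hedged sketch of refreshing a Spotify access token; refresh_token, client_id
# and client_secret are assumed to come from the app's own configuration.
import time
import requests

TOKEN_URL = "https://accounts.spotify.com/api/token"

def refresh_access_token(refresh_token, client_id, client_secret):
    # The token endpoint exchanges a long-lived refresh token for a new
    # short-lived access token (expires_in is in seconds, typically 3600).
    resp = requests.post(
        TOKEN_URL,
        data={"grant_type": "refresh_token", "refresh_token": refresh_token},
        auth=(client_id, client_secret),
    )
    resp.raise_for_status()
    payload = resp.json()
    # Returning the expiry lets callers refresh proactively instead of
    # waiting for a 401 like the one in the log above.
    return payload["access_token"], time.time() + payload["expires_in"]
```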
2025-04-01T04:35:34.383578
2017-07-13T21:36:42
242837801
{ "authors": [ "sabbyanandan", "trisberg" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10988", "repo": "spring-cloud/spring-cloud-dataflow-server-kubernetes", "url": "https://github.com/spring-cloud/spring-cloud-dataflow-server-kubernetes/issues/205" }
gharchive/issue
Test SCDF deployment on Platform9/AWS Try deploying using Kube2go - https://kube2go.io/ This is not applicable anymore.
2025-04-01T04:35:34.397023
2019-05-21T20:04:14
446800363
{ "authors": [ "PadmaGit2", "hariharakumar", "rishuyadav", "spencergibb" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10989", "repo": "spring-cloud/spring-cloud-gateway", "url": "https://github.com/spring-cloud/spring-cloud-gateway/issues/1072" }
gharchive/issue
Document Weight predicate
Hi, I am trying to customize or use any available predicate for A/B testing - for example, for the same route, send 10% of traffic to destination B and the remaining 90% to A.
There is a Weight predicate (while trying to find the docs, I couldn't, so I'll reuse this issue for that.)
Under spring.cloud.gateway.routes:
- id: weight_high_test
  uri: ${test.uri}
  predicates:
  - Host=**.weighthigh.org
  - Weight=group1, 8
- id: weight_low_test
  uri: ${test.uri}
  predicates:
  - Host=**.weightlow.org
  - Weight=group1, 2
The weights are calculated per group.
Thank you Spencer. @spencergibb, I saw something about ThrottleRoutePredicateFactory - is that old? I don't find it in the master branch anymore.
It's only a sample.
@spencergibb - Thank you - I think this works, but I was wondering how this uses the weight - does it take the total # of requests in a day, or in an hour, and split it into X% to service A and Y% to service B? Does it reset the count if I restart the gateway application? I am using a config server to hold the application.yml and using actuator refresh for gateway to pick it up. I wanted to know how the gateway behaves - does it consider the requests in a day or an hour and split?
@PadmaGit2: My use case is similar to yours. Can you tell me about your experience with the weight-based routing implementation in production? Did you find answers for this part of your question: I was wondering how this uses the weight - does it take the total # of requests in a day or an hour and split it into X% to service A and Y% to service B? Does it reset the count if I restart the gateway application? Thanks in advance.
I also have a quite similar use case. @hariharakumar @PadmaGit2, can you please share more about your experience with the weight-based routing implementation? I was also facing difficulties implementing it in a configuration Java class; I want to make the URI and split percentages dynamic. Thanks
2025-04-01T04:35:34.399695
2021-05-27T14:01:25
903805193
{ "authors": [ "jaredstehler", "spencergibb", "steven-sheehy" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10990", "repo": "spring-cloud/spring-cloud-kubernetes", "url": "https://github.com/spring-cloud/spring-cloud-kubernetes/pull/805" }
gharchive/pull-request
Handle case where given entity to check readiness for is null/missing resolves https://github.com/spring-cloud/spring-cloud-kubernetes/issues/804 can you add a test? @jaredstehler Any update? Do you need any help on this? We're impacted by this as well so I can take this over if you don't have bandwidth to add those tests. let me take a crack at it this evening; i'll punt it over to you if I can't get something working. added a test @spencergibb this should be good to go now? Is this still needed since fabric8 kubernetes-client 5.5.0 is now integrated with the fix to not throw IllegalArgumentException?
2025-04-01T04:35:34.409064
2017-12-12T05:36:08
281263853
{ "authors": [ "fahimfarookme", "philwebb", "ryanjbaxter", "spencergibb" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10991", "repo": "spring-cloud/spring-cloud-netflix", "url": "https://github.com/spring-cloud/spring-cloud-netflix/issues/2538" }
gharchive/issue
RefreshablePeerEurekaNodesTests has 2 broken tests in master
updatedWhenAvailabilityZoneChanged() and updatedWhenRegionChanged()
Tests commented out in 170739e662d1a57e6013b7f1599f1e58409bb87e
/cc @fahimfarookme @ryanjbaxter
Thanks. I started to look at this and then had to get on my flight back from Spring One. You would have thought that the build on CircleCI would have caught this. I went back and looked at the PR and realized that the build was never run... weird. https://github.com/spring-cloud/spring-cloud-netflix/pull/2455
The build on CircleCI passed in the PR. I'm looking into the issue on master. My current observation is that only updatedWhenAvailabilityZoneChanged fails, not updatedWhenRegionChanged. Please find below the analysis so far on the failing updatedWhenAvailabilityZoneChanged.
This test case is passing on my PR branch - which is based on spring-boot 1.x / spring-cloud-commons 1.x. Once merged, it's failing only on master - which is based on spring-boot 2.x / spring-cloud-commons 2.x.
On master, EurekaClientConfigBean#availabilityZones is not being updated except for the very first EnvironmentChangeEvent, hence the above test case is failing.
This is in fact a generic issue - that is, in spring-boot 2.x / spring-cloud-commons 2.x, Maps in @ConfigurationProperties are not being updated from the second EnvironmentChangeEvent onwards. I did a PoC here to confirm that - the same set of test cases passed here (spring-boot 1.x), but failed here (spring-boot 2.x). i.e. the following test fails in spring-boot 2.x, but passes in spring-boot 1.x:
@ConfigurationProperties
class Props {
    private String str;
    private Map<String, String> map = new HashMap<>();
    ...
}

// first EnvironmentChangeEvent
TestPropertyValues.of("str=str-value").applyTo(this.environment);
context.publishEvent(new EnvironmentChangeEvent(..."str"));
// 'str-value' is reflected in @ConfigurationProperties bean - in both spring-boot 1x and 2x

// second EnvironmentChangeEvent - updates the map
TestPropertyValues.of("map.key=map-value").applyTo(this.environment);
context.publishEvent(new EnvironmentChangeEvent(..."map.key"));
// 'map-value' is NOT reflected in @ConfigurationProperties bean - in spring-boot 2x

// fails only in spring-boot 2x
assertEquals("map-value", context.getBean(Props.class).getMap().get("key"));
Can this be a side effect of spring-cloud-context's upgrade to spring-boot-2 and part of this #10685? Maybe @spencergibb @dsyer can shed some light? I will anyway investigate it further tonight.
Sorry for the delayed response. The failing test cases above are due to a spring-boot-2 bug. PR raised here: #13344. We should be able to uncomment the test cases once #13344 is merged.
Great, thanks for looking at this, we will monitor the PR in Boot.
https://github.com/spring-projects/spring-boot/pull/13344 is fixed. I removed the @Ignores from RefreshablePeerEurekaNodesTests and they seem OK now. :+1:
2025-04-01T04:35:34.417216
2022-02-17T11:47:00
1141234609
{ "authors": [ "lrozek", "marcingrzejszczak" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10992", "repo": "spring-cloud/spring-cloud-sleuth", "url": "https://github.com/spring-cloud/spring-cloud-sleuth/issues/2117" }
gharchive/issue
Add trace context propagation for redis streams
Describe the solution you'd like: Instrumentation is missing for Redis streams that would inject tracing headers into the stream entry on the producer side and, on the consumer side, would extract these headers or start a new trace context when missing.
I created a sample project with two applications, one for a producer and one for a consumer, which use redis streams for communication. I also outlined how the propagation could be achieved. Please refer to https://github.com/lrozek/redis-stream-trace-context-propagation
Spring Cloud Sleuth is feature complete, we will not be adding any new features. You can check this doc for more info: https://github.com/spring-cloud/spring-cloud-sleuth/tree/main#spring-cloud-sleuth. You can file an issue to lettuce I guess.
@marcingrzejszczak I read https://github.com/spring-cloud/spring-cloud-sleuth/tree/main and I am not sure if I get it right. Is Spring Cloud Sleuth to be completely replaced by https://micrometer.io/, with no further development, bug fixes etc. of Sleuth? Is Sleuth reaching its end of life? I also disagree that Spring Cloud Sleuth is feature complete, otherwise I would not have filed this request. Should my project start a transition from Spring Cloud Sleuth to Micrometer then? I assume that ingress and egress instrumentation for redis streams is not present in Micrometer yet, is it?
Sleuth 3.1.x is the last version of Sleuth that we support. We will apply bug fixes to that version.
I understand, however from our perspective it is feature complete. That means we're not planning any features - it doesn't mean that we couldn't add any new features. We're not planning on releasing Sleuth 3.2.x nor 4.0.0.
For Spring Framework 6.0 and Spring Boot 3.0 you will have to transition to Micrometer Tracing (the API is very similar to Sleuth's). We will not have instrumentations in Micrometer; the instrumentations will be in the given projects. For example in the Lettuce project that I have already pointed you to.
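The propagation idea itself is simple enough to sketch outside Spring. The snippet below is an illustrative Python sketch using redis-py, not the Kotlin/Spring sample from the linked repo; the stream and payload field names are assumptions, with the trace field name following the W3C Trace Context header.

```python
# Illustrative sketch of trace-context propagation over Redis Streams using
# redis-py; field and stream names are assumptions, not the linked sample's code.
import redis

r = redis.Redis()

def produce(stream, payload, traceparent):
    # Producer side: inject the trace context as an extra field on the entry.
    r.xadd(stream, {"traceparent": traceparent, "payload": payload})

def consume(stream, last_id="0"):
    # Consumer side: extract the trace context if present; a tracer would
    # start a fresh trace when the field is missing.
    for _name, entries in r.xread({stream: last_id}, block=1000) or []:
        for entry_id, fields in entries:
            yield entry_id, fields.get(b"traceparent"), fields.get(b"payload")
```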
2025-04-01T04:35:34.418685
2020-02-27T18:09:32
572261108
{ "authors": [ "garyrussell", "sobychacko" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10993", "repo": "spring-cloud/spring-cloud-stream-binder-kafka", "url": "https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/pull/855" }
gharchive/pull-request
GH-853: Don't propagate out "internal" headers Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/853 Merged upstream.
2025-04-01T04:35:34.424401
2023-05-21T16:19:09
1718558599
{ "authors": [ "Scoobi-wisdoom" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10994", "repo": "spring-guides/gs-async-method", "url": "https://github.com/spring-guides/gs-async-method/pull/27" }
gharchive/pull-request
Fix documentation on default TaskExecutor
SimpleAsyncTaskExecutor -> ThreadPoolTaskExecutor
I have figured out that the default TaskExecutor is not SimpleAsyncTaskExecutor but ThreadPoolTaskExecutor by reading these references:
SimpleAsyncTaskExecutor fires up only 8 threads - first answer
SimpleAsyncTaskExecutor fires up only 8 threads - second answer
Also, I have checked it myself using demo code (environment: Kotlin 1.7.22, Spring Boot 3.0.5 & 2.5.13). Below is my code for the test.
@Configuration
@EnableAsync
class SpringAsyncConfig {
}

@Service
class AsyncService(
) {
    @Async
    fun calculate() {
        TimeUnit.SECONDS.sleep(5)
        val currentThread = Thread.currentThread()
        println("@Async annotated method: ${currentThread.threadGroup}, ${currentThread.name}")
    }
}

@RestController
@RequestMapping("/demo")
class AsyncController(
    private val asyncService: AsyncService,
) {
    @GetMapping("/async")
    fun async() {
        asyncService.calculate()
        val currentThread = Thread.currentThread()
        println("Controller method: ${currentThread.threadGroup}, ${currentThread.name}")
    }
}

@Component
@Order(Ordered.HIGHEST_PRECEDENCE)
class ThreadPoolTaskExecutorBeanProcessor(
    @Autowired private val applicationContext: ApplicationContext
) : BeanPostProcessor {
    override fun postProcessBeforeInitialization(bean: Any, beanName: String): Any? {
        return bean
    }

    override fun postProcessAfterInitialization(bean: Any, beanName: String): Any? {
        return if (bean is ThreadPoolTaskExecutor) {
            createProxy(bean)
        } else bean
    }

    private fun createProxy(executor: ThreadPoolTaskExecutor): Any {
        val proxy: ThreadPoolTaskExecutorProxy = ThreadPoolTaskExecutorProxy()
        proxy.corePoolSize = executor.corePoolSize
        proxy.maxPoolSize = executor.maxPoolSize
        proxy.initialize()
        logg.info("create proxy:")
        logg.info("${proxy.corePoolSize}")
        return proxy
    }

    companion object {
        private val logg = LoggerFactory.getLogger(ThreadPoolTaskExecutorBeanProcessor::class.java)
    }
}

class ThreadPoolTaskExecutorProxy : ThreadPoolTaskExecutor()
I have called the API defined on the rest controller, and the result is below. As you can see, the thread's name is ThreadPoolTaskExecutorProxy, which is a proxy for ThreadPoolTaskExecutor, not for SimpleAsyncTaskExecutor. I have also defined a proxy for SimpleAsyncTaskExecutor, but I don't think I have to add that code here. It is obvious that the default TaskExecutor is not SimpleAsyncTaskExecutor.
Controller method: java.lang.ThreadGroup[name=main,maxpri=10], http-nio-8080-exec-1
@Async annotated method: java.lang.ThreadGroup[name=main,maxpri=10], ThreadPoolTaskExecutorProxy-1
I am opening a different pull request. The file change in the documentation in this commit is wrong.
2025-04-01T04:35:34.431992
2022-12-28T12:30:19
1512749690
{ "authors": [ "Buzzardo", "marcusfey" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10995", "repo": "spring-guides/gs-consuming-rest", "url": "https://github.com/spring-guides/gs-consuming-rest/issues/52" }
gharchive/issue
Port already in use due to local quoters
I tried to run this example with quoters running locally as well - as suggested in "Fetching a REST Resource". The (consumer) application failed to start due to: Web server failed to start. Port 8080 was already in use.
IMHO there is no reason why this consumer application should fire up a web server. Thus the solution for me was to change the main method to:
new SpringApplicationBuilder(GsConsumingRestApplication.class)
    .web(WebApplicationType.NONE) // .REACTIVE, .SERVLET
    .run(args);
Would it be okay to include this in the repo?
I would welcome that change as a PR. You are correct that there's no reason to fire up a web server for this application, and the reactive approach is a good way to avoid that unnecessary overhead. I'm adding the good first issue tag, in case someone wants to use it as their first contribution to Spring.
Alternatively, you could change the spring-boot-starter-web dependency to spring-boot-starter-json. This includes all the dependencies needed for the consuming app, but doesn't add a web server to the classpath.
You are right. However, the solution proposed in the initial comment is better, because it removes the unnecessary overhead of starting a web server.
2025-04-01T04:35:34.434679
2019-07-24T22:36:23
472579059
{ "authors": [ "StevenGeens", "gregturn" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10996", "repo": "spring-guides/gs-producing-web-service", "url": "https://github.com/spring-guides/gs-producing-web-service/issues/24" }
gharchive/issue
jaxb2-maven-plugin generates sources in incorrect location
The jaxb2-maven-plugin is configured to output the generated sources and not clear the output directory:
<outputDirectory>${project.basedir}/src/main/java</outputDirectory>
<clearOutputDir>false</clearOutputDir>
This is not desirable for 2 reasons:
1. It's generally considered undesirable to store generated code together with the normal project code. (See this stackoverflow question for a discussion.)
2. Because the code generation shouldn't delete the normal sources, the clearOutputDir option is set to false. This has the unhappy side effect that certain changes to the XSD would not be correctly represented in the generated code. (e.g.: Deletion of an XML element in the XSD would not remove the generated java class.)
I've created pull request #23 for this.
Thanks @StevenGeens.
2025-04-01T04:35:34.446334
2017-04-03T17:24:19
219004473
{ "authors": [ "artembilan", "garyrussell" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10997", "repo": "spring-projects/spring-amqp", "url": "https://github.com/spring-projects/spring-amqp/pull/605" }
gharchive/pull-request
AMQP-725: Suppress WARN Log for Normal Cancel
JIRA: https://jira.spring.io/browse/AMQP-725
Lower the log level to DEBUG when a consumer cancellation is expected, such as during shutdown or changing queues. Cherry-pick to 1.7.x.
Cherry-picked as https://github.com/spring-projects/spring-amqp/commit/9c55e8cf1c2782ce32945c7225972568bd81f736 with minor conflicts resolved.
2025-04-01T04:35:34.610620
2022-05-09T19:27:04
1230152993
{ "authors": [ "MariuszCwikla", "bishopmate", "dariuszkuc", "ragavlatha" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10998", "repo": "spring-projects/spring-graphql", "url": "https://github.com/spring-projects/spring-graphql/issues/384" }
gharchive/issue
Example to use Spring GraphQL with a federation library
Is there any example project with Spring GraphQL showing how to configure the service in order to be used with a federation library like Apollo?
@ragavlatha It's pretty straightforward. Have a look at my repository that I used a few weeks ago to report an issue: https://github.com/MariuszCwikla/spring-graphql-federation-union-issue
@Bean
public GraphQlSourceBuilderCustomizer exposeFederation() {
    return builder -> {
        builder.schemaFactory((registry, wiring) ->
            Federation.transform(registry, wiring).build()
        );
    };
}
This repo does not show how to build a federated graph, but it's actually straightforward to do it with Apollo Router.
Hello 👋 I created an example integration under https://github.com/apollographql/federation-jvm-spring-example.
https://github.com/apollographql/federation-jvm-spring-example link is broken @dariuszkuc
2025-04-01T04:35:34.634503
2017-04-05T10:02:05
219530570
{ "authors": [ "artembilan", "deatharvest", "garyrussell" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:10999", "repo": "spring-projects/spring-kafka", "url": "https://github.com/spring-projects/spring-kafka/issues/272" }
gharchive/issue
Could I control message size for a consumer? Use the API or set consumer config?
"control" in what way? Please don't ask questions in GitHub issues. There are external community resources for this: StackOverflow, the Apache Kafka mailing list, etc. We use GH issues for bugs and feature requests. Closing as non-relevant.
2025-04-01T04:35:34.643532
2017-02-28T02:14:05
210666676
{ "authors": [ "MysteryAngle", "emailalosha", "frankskywalker", "jgrandja", "otinanism" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11000", "repo": "spring-projects/spring-security-oauth", "url": "https://github.com/spring-projects/spring-security-oauth/issues/993" }
gharchive/issue
ResourceServerProperties DEFAULT filterOrder is not 0.
Spring Boot 1.5.1, spring-cloud-dependencies: Camden.SR5
ResourceServerProperties is activated when I use both the authorization service and the resource service on the same server at the same time, but the filterOrder default value is SecurityProperties.ACCESS_OVERRIDE_ORDER - 1, which causes the AnyRequestMatcher of the custom WebSecurityConfigurerAdapter to match first, so OAuth2AuthenticationProcessingFilter will never be used.
@MysteryAngle I ran into the same issue when upgrading to Spring Boot 1.5.1. I'm currently trying to work around this issue. If you have a working example, could you share some info here?
@MysteryAngle @frankskywalker The default order for the Resource Server filter chain has been changed in Spring Boot 1.5.0. It's documented here. To restore the previous order, take a look at this comment. Essentially, add this configuration in your application.yml:
security.oauth2.resource.filter-order=3
I'm going to close this issue as this should solve it for you.
@jgrandja It did solve my problem, thanks. But I still don't understand the reason for this behavior. My UserService is both an AuthenticationService and a ResourceService, and after I updated to 1.5.1, I found that the OAuth2AuthenticationProcessingFilter was missing from the Spring Security filter chain. Would you kindly explain why the filter order causes its absence in the filter chain? I'm struggling to understand the overall structure and design of Spring OAuth after going through all the official docs. Should I read the source code, or is there any other material I should look at first?
With 1.4.3 I have OAuth2AuthenticationProcessingFilter in the filter chain. With 1.5.1 the OAuth2AuthenticationProcessingFilter is missing.
@frankskywalker Do you have a custom WebSecurityConfigurerAdapter configured in your app? If you do (which I'm thinking you do), then it may be the first one in the ordered FilterChainProxy.filterChains list. And likely, it has an AnyRequestMatcher which will match all the time and ignore any other SecurityFilterChain in that list, including the @EnableResourceServer SecurityFilterChain. So the OAuth2AuthenticationProcessingFilter is not actually missing; in fact the filter chain it's associated with is never getting applied/called because of the ordering.
It works for me, thanks.
@jgrandja Thanks for the explanation.
@frankskywalker Do you have form login enabled? Do you have a working example that I can take a look at? I am also trying to combine auth and resource server, and when I change the filter order as suggested I can't get to the login page.
For the latest release of Spring Boot 2.5.1 and the corresponding Spring Security starter which I am working on right now, the property security.oauth2.resource.filter-order=3 does not exist. So you can use the annotation below on top of the resource server config class:
@Order(value = 3)
2025-04-01T04:35:34.661559
2020-09-14T15:30:29
701208408
{ "authors": [ "mp911de" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11001", "repo": "spring-projects/spring-vault", "url": "https://github.com/spring-projects/spring-vault/pull/579" }
gharchive/pull-request
Add documentation for KeyValue, PKI, Token, and Transit backend usage Added documentation for the secret backends that are primarily used and that require the most explanation to help understand their purpose and how they are used. @buzzardo care to review the newly added documentation in regard to legibility and typos? Thanks a lot. All suggestions are now in place and we can move forward and merge this change. That's merged now.
2025-04-01T04:35:34.687781
2017-10-24T14:10:50
268047720
{ "authors": [ "gasman", "loicteixeira" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11002", "repo": "springload/wagtaildraftail", "url": "https://github.com/springload/wagtaildraftail/pull/63" }
gharchive/pull-request
Implement Wagtail 1.12 'features' lists Add support for the features kwarg on DraftailTextField and DraftailTextBlock (and within WAGTAILADMIN_RICH_TEXT_EDITORS). This was introduced on RichTextField and RichTextBlock in Wagtail 1.12 to allow configuring the feature set of rich text fields in an editor-agnostic way, with the intention that it would allow a clean transition from Hallo.js to something better (such as Draftail :-) ) @thibaudcolas The front-end tests are failing. The PR doesn't change any front-end file but does change the default set of features I guess, so does that mean the snapshots need to be updated? Coverage is down mostly because tests are run with the py36-dj111-wt110 environment so all the Wagtail >= 1.12 branches aren't run. Overall coverage will need some work anyway so I'll merge this regardless. Thanks again @gasman!
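For reference, the features list added by this PR would be used along the lines of the hedged sketch below; the import path and feature names are illustrative assumptions rather than the package's documented API.

```python
# Hypothetical usage sketch of the features kwarg added in this PR; the import
# path and feature names are assumptions for illustration.
from django.db import models
from wagtaildraftail.fields import DraftailTextField  # assumed import path

class BlogPage(models.Model):
    # Restrict the editor to an explicit, editor-agnostic feature set,
    # mirroring Wagtail 1.12's RichTextField(features=...) behaviour.
    body = DraftailTextField(features=['h2', 'bold', 'italic', 'link'], blank=True)
```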
2025-04-01T04:35:34.694549
2015-04-29T15:09:26
71900810
{ "authors": [ "igalic" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11003", "repo": "spuder/puppet-gitlab", "url": "https://github.com/spuder/puppet-gitlab/pull/148" }
gharchive/pull-request
logic: make it possible to actually use an external pg db
Given our validation logic, it's currently impossible to use an external PostgreSQL database. Also, the error message was misleading with regard to the name of the postgresql_enable option.
\o/
2025-04-01T04:35:34.702599
2018-11-27T00:00:12
384562206
{ "authors": [ "coveralls", "grahamlyons", "spulec" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11004", "repo": "spulec/moto", "url": "https://github.com/spulec/moto/pull/1969" }
gharchive/pull-request
Return the deleted stacks in the list stacks call
This matches the behaviour of the AWS API. Fixes #1968
Coverage decreased (-0.07%) to 92.427% when pulling 4de92accab606e5a21409c36f682939c0c5f1027 on grahamlyons:list-deleted-cloudformation-stacks into ed861ecae1039a048a6350a4ff832ef094cdf2c2 on spulec:master.
Thanks!
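A test for this behavior could look roughly like the sketch below. It is an illustrative guess, not the PR's actual test, and the template body is a placeholder chosen so CloudFormation has a resource to create.

```python
# Hedged sketch of the behaviour this PR implements: deleted stacks should
# still show up in ListStacks with a DELETE_COMPLETE status, as on real AWS.
import json
import boto3
from moto import mock_cloudformation

template = json.dumps({"Resources": {"Topic": {"Type": "AWS::SNS::Topic"}}})

@mock_cloudformation
def test_deleted_stack_is_listed():
    cf = boto3.client("cloudformation", region_name="us-east-1")
    cf.create_stack(StackName="demo", TemplateBody=template)
    cf.delete_stack(StackName="demo")
    statuses = [s["StackStatus"] for s in cf.list_stacks()["StackSummaries"]]
    assert "DELETE_COMPLETE" in statuses
```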
2025-04-01T04:35:34.703618
2021-05-18T16:48:41
894582729
{ "authors": [ "andfoy" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11005", "repo": "spyder-ide/pywinpty", "url": "https://github.com/spyder-ide/pywinpty/pull/154" }
gharchive/pull-request
PR: Add requires-python to Cargo.toml Fixes #153 We need to wait until Github Actions is back on
2025-04-01T04:35:34.712499
2020-05-05T12:52:25
612578823
{ "authors": [ "bgallois", "goanpeca" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11006", "repo": "spyder-ide/spyder-vim", "url": "https://github.com/spyder-ide/spyder-vim/pull/44" }
gharchive/pull-request
PR: Update plugin to work with Spyder 4 Updated plugin to work with Spyder 4, fixes #43 @bgallois thanks for this! Will review this sometime this week. Thanks again :-) I think we will probably remove Spyder 3 compat, besides that looks good! @goanpeca thanks! I fixed the formatting and the key sequence in the shortcut. The spyder 3 compatibility is removed from the test and fixed for the plugin.
2025-04-01T04:35:34.729818
2020-06-10T06:54:36
635981781
{ "authors": [ "bramson", "ccordoba12", "dalthviz", "datiti", "hpfmn", "nbud" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11007", "repo": "spyder-ide/spyder", "url": "https://github.com/spyder-ide/spyder/issues/12990" }
gharchive/issue
Display issues with multiple screens
Issue Report Checklist
[x] Searched the issues page for similar reports
[x] Read the relevant sections of the Spyder Troubleshooting Guide and followed its advice
[x] Reproduced the issue after updating with conda update spyder (or pip, if not using Anaconda)
[x] Could not reproduce inside jupyter qtconsole (if console-related)
[x] Tried basic troubleshooting (if a bug/error)
[x] Restarted Spyder
[x] Reset preferences with spyder --reset
[x] Reinstalled the latest version of Anaconda
[x] Tried the other applicable steps from the Troubleshooting Guide
[x] Completed the Problem Description, Steps to Reproduce and Version sections below
Problem Description
I have two screens:
Screen 1, of native resolution 3840x2160
Screen 2, of native resolution 1920x1080
When I start Spyder with screen 1 turned off (laptop lid closed), the Spyder display is bad: some fonts are massive (which prevents resizing internal windows in some cases), and the pyplot windows open off screen. Tweaking the DPI options in the settings did not fix the issue. Opening Spyder when both screens are on fixes the issue.
Screenshot (after spyder reset):
What steps reproduce the problem? With screen 1 off, open Spyder.
What is the expected output? What do you see instead? Expected a decent rendering; I get a poor rendering instead.
Paste Traceback/Error Below (if applicable): No error in the internal console. Here is some debug information:
>>> spy.window.screen.size()
PyQt5.QtCore.QSize(1920, 1080)
>>> spy.window.screen.virtualSize()
PyQt5.QtCore.QSize(1920, 1080)
>>> spy.window.screen.geometry()
PyQt5.QtCore.QRect(0, 0, 1920, 1080)
>>> spy.window.screen.name()
'\\\\.\\DISPLAY2'
>>> spy.window.screen.logicalDotsPerInch()
96.0
>>> spy.window.screen.physicalDotsPerInch()
93.61458995237565
Versions
Spyder version: 4.1.3 (obtained from anaconda channel in conda)
Python version: 3.7.6 64-bit
Qt version: Qt 5.9.6
PyQt version: PyQt5 5.9.2
Operating System name/version: Windows 10
Dependencies
# Mandatory:
atomicwrites >=1.2.0 : 1.4.0 (OK)
chardet >=2.0.0 : 3.0.4 (OK)
cloudpickle >=0.5.0 : 1.4.1 (OK)
diff_match_patch >=20181111 : 20181111 (OK)
intervaltree : None (OK)
IPython >=4.0 : 7.13.0 (OK)
jedi =0.15.2 : 0.15.2 (OK)
nbconvert >=4.0 : 5.6.1 (OK)
numpydoc >=0.6.0 : 0.9.2 (OK)
paramiko >=2.4.0 : 2.7.1 (OK)
parso =0.5.2 : 0.5.2 (OK)
pexpect >=4.4.0 : 4.8.0 (OK)
pickleshare >=0.4 : 0.7.5 (OK)
psutil >=5.3 : 5.7.0 (OK)
pygments >=2.0 : 2.6.1 (OK)
pylint >=0.25 : 2.5.2 (OK)
pyls >=0.31.9;<0.32.0 : 0.31.10 (OK)
qdarkstyle >=2.8 : 2.8.1 (OK)
qtawesome >=0.5.7 : 0.7.0 (OK)
qtconsole >=4.6.0 : 4.7.4 (OK)
qtpy >=1.5.0 : 1.9.0 (OK)
rtree >=0.8.3 : 0.9.4 (OK)
sphinx >=0.6.6 : 3.0.4 (OK)
spyder_kernels >=1.9.1;<1.10.0 : 1.9.1 (OK)
watchdog : None (OK)
zmq >=17 : 18.1.1 (OK)
# Optional:
cython >=0.21 : 0.29.17 (OK)
matplotlib >=2.0.0 : 3.1.3 (OK)
numpy >=1.7 : 1.18.1 (OK)
pandas >=0.13.1 : 1.0.3 (OK)
scipy >=0.17.0 : 1.4.1 (OK)
sympy >=0.7.3 : 1.5.1 (OK)
Hi @dalthviz, I have tried the "normal", "auto" and "fixed" DPI scaling settings (with a restart between each), with no success.
Thanks for the info @nbud! We will be checking this one in a future release.
I found a fix: right-click on the Spyder shortcut, Properties, then the Compatibility tab, Change high DPI settings. Tick Override high DPI scaling behaviour, select Scaling performed by System.
Thanks nbud
Thanks for the info @nbud @datiti on how to work around the issue!
Same problems here on Linux. Spyder messes with my custom-set environment variables and destroys Qt's per-screen scaling. It works as expected if I comment out this code: https://github.com/spyder-ide/spyder/blob/bbefcebd48a93cd08f2f17be86d3ce7478e5130f/spyder/app/start.py#L98 It should probably check if the environment variables are set before overwriting them. I could create a PR for that.
Hi @hpfmn thanks for the feedback! We set the env vars using the values in the preferences, but maybe, as you suggest, we need to handle the case where these values are being set outside Spyder. @ccordoba12 what do you think?
If you're referring to adding a check in the else part of the code you referenced above, then I agree with adding a check to see if those variables are set before overwriting them.
Well, it's 3.5 years later, and I have the same problem. I tried the fix by @nbud, but it didn't change anything. I've posted my details on other open issues of the same problem, so I'll just leave it that this work-around is not working in my case... nor are any settings in the app. Spyder 5.5.1 on Windows 11.
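The env-var handling hpfmn describes can be sketched as below; this is a hedged illustration of the "check before overwriting" idea, not Spyder's actual start.py code.

```python
# Hedged sketch of the proposed fix: only set Qt's scaling variables when the
# user hasn't already exported them, so per-screen scaling isn't destroyed.
import os

def apply_scale_factor_defaults():
    # QT_AUTO_SCREEN_SCALE_FACTOR enables Qt 5's automatic per-screen DPI
    # scaling; setdefault leaves any user-provided value untouched.
    os.environ.setdefault("QT_AUTO_SCREEN_SCALE_FACTOR", "1")
```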
2025-04-01T04:35:34.738750
2020-06-11T17:53:55
637215131
{ "authors": [ "ccordoba12", "dalthviz", "oscargus" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11008", "repo": "spyder-ide/spyder", "url": "https://github.com/spyder-ide/spyder/issues/13001" }
gharchive/issue
Spyder freezes if you continuously press profiler action 'Expand one level down'
Issue Report Checklist
[x] Searched the issues page for similar reports
[x] Read the relevant sections of the Spyder Troubleshooting Guide and followed its advice
[x] Reproduced the issue after updating with conda update spyder (or pip, if not using Anaconda)
[x] Could not reproduce inside jupyter qtconsole (if console-related)
[x] Tried basic troubleshooting (if a bug/error)
[x] Restarted Spyder
[x] Reset preferences with spyder --reset
[x] Reinstalled the latest version of Anaconda
[x] Tried the other applicable steps from the Troubleshooting Guide
[x] Completed the Problem Description, Steps to Reproduce and Version sections below
Problem Description
What steps reproduce the problem?
1. Profile a file whose content is, for example:
import matplotlib.pyplot as plt
import numpy as np
t = np.arange(0.0, 7.0, 0.01)
s = 1 + np.sin(2*np.pi*t)
plt.plot(t, s)
plt.xlabel('time (s)')
plt.ylabel('voltage (mV)')
plt.title('About as simple as it gets, folks')
plt.grid(True)
plt.savefig("test.png")
plt.show()
2. After the profile process finishes, click the option Expand one level down multiple times.
3. Freeze and possible crash.
What is the expected output? What do you see instead?
Versions
Spyder version: 4.1.3
Python version: 3.7.6
Qt version: 5.9.6
PyQt version: 5.9.2
Operating System name/version: Windows 10
Same as #9719? I believe continuously parsing the strings when sorting contributes significantly to this. Better to keep sorting data separately. (I haven't checked the code recently though.) Also, the tree view is not really that fast in itself when there are many items.
Same as #9719? Yes, let's close this one as a duplicate of that issue.
I believe continuously parsing the strings when sorting contributes significantly to this. Better to keep sorting data separately.
Good point, we should do that in a thread.
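The "keep sorting data separately" suggestion could look like the hedged sketch below: parse each displayed value once when the model is built, and sort on the cached key rather than re-parsing strings inside the comparator. This is an illustration of the idea, not Spyder's profiler code.

```python
# Hedged sketch: cache a numeric sort key per row instead of parsing the
# display string every time the comparator runs.
def build_rows(raw_rows):
    rows = []
    for name, time_str in raw_rows:
        rows.append({
            "name": name,
            "time_str": time_str,                     # what the tree displays
            "time_key": float(time_str.rstrip("s")),  # parsed exactly once
        })
    return rows

def sort_rows(rows):
    # Sorting now touches only precomputed floats, which stays cheap even
    # when the tree is expanded level by level.
    return sorted(rows, key=lambda r: r["time_key"])
```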
2025-04-01T04:35:34.742391
2022-01-28T12:42:44
1117361737
{ "authors": [ "ccordoba12", "dalthviz", "mmagnuski" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11009", "repo": "spyder-ide/spyder", "url": "https://github.com/spyder-ide/spyder/issues/17255" }
gharchive/issue
Is there an option to reopen a closed console?
I accidentally closed a console that was running computations that take some time. Is there an option to reopen the closed console? (I looked around in the docs and issues to find an answer to this but I couldn't.) If not, it might be a good idea to ask the user for confirmation (like when one tries to restart the console kernel).
Hi @mmagnuski, thank you for the feedback! We have an option to ask the user for confirmation when closing, but I think it is not activated by default. To activate it you will need to go to Preferences > IPython Console > Display > Interface and check the Ask for confirmation before closing option. Let us know if that helps!
To complement @dalthviz's answer above, I'd like to say that this: Is there an option to reopen the closed console? is not possible. When you close a console, its associated kernel is killed. So it's not possible to recover the computations you were running on it.
Thanks @dalthviz and @ccordoba12, I'll set this option! Thanks, that's what I thought, because CPU usage dropped after I closed the console. But it's good at the same time - I can use this to circumvent the problem I have with resetting the kernel. :)
2025-04-01T04:35:34.759984
2015-02-17T22:11:04
57990447
{ "authors": [ "cocofifi", "spyder-issues-migrator", "winstreak" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11010", "repo": "spyder-ide/spyder", "url": "https://github.com/spyder-ide/spyder/issues/2026" }
gharchive/issue
File Explorer doesn't work across network shares on Windows
From <EMAIL_ADDRESS> on 2014-10-26T14:40:01Z
Spyder Version: 2.3.0
Python Version: 2.7.6
Qt Version: 4.8.4, PyQt4 (API v2) 4.9.6 on Windows
pyflakes >=0.5.0: None (OK)
pep8 >=0.6 : None (NOK)
IPython >=0.13 : 1.1.0 (OK)
pygments >=1.6 : 1.6 (OK)
sphinx >=0.6.6 : 1.2.1 (OK)
psutil >=0.3 : 1.2.1 (OK)
rope >=0.9.2 : 0.9.4 (OK)
matplotlib >=1.0: 1.3.1 (OK)
sympy >=0.7.0 : <IP_ADDRESS> (OK)
pylint >=0.25 : 1.0.0 (OK)
What steps will reproduce the problem?
1. Set Global Working Directory to a network folder. For example: \\foobar\c\Users\Dave\Documents\data\Imaging\Python
2. Choose "File explorer tab"
What is the expected output? What do you see instead? I expect to see the files in that directory. Instead, I see nothing (blank).
Please provide any additional information below. It works fine for local folders, but not for network shares on other machines.
Original issue: http://code.google.com/p/spyderlib/issues/detail?id=2026
From ccordoba12 on 2014-10-29T09:56:22Z
Not a top priority for us, sorry :-)
Status: HelpNeeded
Labels: Cat-Miscelleneous
From <EMAIL_ADDRESS> on 2014-10-29T10:09:01Z
Oh, but it's so easy to do. Just call a different API that supports it. Point me at the module where it calls the OS file I/O and I'll show you how to change it. (I'm far too inexperienced a Python programmer to hack the Spyder code myself, but I'm familiar with the different Windows calls that do and don't support network I/O.)
From <EMAIL_ADDRESS> on 2014-10-29T12:11:27Z
Looking into it, it may be as simple as using forward slashes (/) on Windows instead of backslashes (\). See http://stackoverflow.com/questions/7169845/using-python-how-can-i-access-a-shared-folder-on-windows-network
I was able to reproduce the OP's original issue, and the forward slash workaround didn't resolve the file explorer tab display issue. However, I was able to work around the issue by mapping the network share to a new drive letter in Windows and then setting that as your global working directory: \\foobar\c\Users\Dave\Documents\data\Imaging\Python -> mapped to Z:\ and it works fine.
Can confirm the workaround; changing the preferences about the global working directory works.
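The forward-slash idea from the thread is easy to sanity-check in plain Python, as in the sketch below; the share name is the example path from the report, so treat it as illustrative only.

```python
# Quick sanity check of UNC path handling in plain Python on Windows; the
# share name comes from the report and is illustrative only.
import os

# Both spellings address the same share; os.listdir accepts either form.
print(os.listdir(r"\\foobar\c\Users"))
print(os.listdir("//foobar/c/Users"))
```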
2025-04-01T04:35:34.771072
2015-02-16T23:37:04
57864694
{ "authors": [ "spyder-issues-migrator" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11011", "repo": "spyder-ide/spyder", "url": "https://github.com/spyder-ide/spyder/issues/386" }
gharchive/issue
Object inspector prevents entering left parenthesis
From <EMAIL_ADDRESS> on 2010-09-20T05:33:55Z
After some commands, entering a left parenthesis after commands does not work in the ipython-shell any more.
What steps will reproduce the problem?
start a fresh ipython-shell in spyder
import os
os.system('ls')
figure() # on my system, I cannot enter the left parenthesis
What is the expected output? What do you see instead? I cannot enter the left parenthesis after the figure-command and other valid commands, but after invalid commands like "figu()" I can enter the left parenthesis. I think this has something to do with the object inspector, because it turns mad (shows either "no documentation available" even if there is a docstring, or shows the documentation of a previously entered command instead of the current command).
What version of the product are you using? On what operating system? I'm using spyder2.0beta3 on a ubuntu10.04 64-bit system. IPython options: -pylab -q4thread -colors LightBG -xmode Plain -editor gedit
Please provide any additional information below. This occurred also with other commands than "os.system", but on my system this bug is very reproducible. When I disable the "balloon-tips", the problem disappears... (-> bug in object explorer?)
Original issue: http://code.google.com/p/spyderlib/issues/detail?id=386
From pierre.raybaut on 2010-09-20T06:16:19Z
Could you please show the 'internal console' ('View' menu > Internal console) and post its contents here: there's probably some traceback.
From <EMAIL_ADDRESS> on 2010-09-20T06:29:25Z
This is the internal console's output:
Python 2.6.5 (r265:79063, Apr 16 2010, 13:57:41) [GCC 4.4.3] on linux2
Type "copyright", "credits" or "license" for more information.
>>> Traceback (most recent call last):
  File "/usr/local/lib/python2.6/dist-packages/spyderlib/widgets/shell.py", line 300, in keyPressEvent
    self.postprocess_keyevent(event)
  File "/usr/local/lib/python2.6/dist-packages/spyderlib/widgets/shell.py", line 787, in postprocess_keyevent
    ShellBaseWidget.postprocess_keyevent(self, event)
  File "/usr/local/lib/python2.6/dist-packages/spyderlib/widgets/shell.py", line 444, in postprocess_keyevent
    self._key_parenleft(text)
  File "/usr/local/lib/python2.6/dist-packages/spyderlib/widgets/shell.py", line 864, in _key_parenleft
    self.show_docstring(self.get_last_obj(), call=True)
  File "/usr/local/lib/python2.6/dist-packages/spyderlib/widgets/shell.py", line 1005, in show_docstring
    self.inspector.set_object_text(text)
  File "/usr/local/lib/python2.6/dist-packages/spyderlib/plugins/inspector.py", line 221, in set_object_text
    self.show_help(text)
  File "/usr/local/lib/python2.6/dist-packages/spyderlib/plugins/inspector.py", line 342, in show_help
    self.editor.set_text(hlp_text)
  File "/usr/local/lib/python2.6/dist-packages/spyderlib/widgets/codeeditor/codeeditor.py", line 857, in set_text
    self.setPlainText(text)
TypeError: QPlainTextEdit.setPlainText(QString): argument 1 has unexpected type 'bool'
From <EMAIL_ADDRESS> on 2010-09-20T23:02:01Z
Hello, just to inform, I often hit the same problem, also in pdb sessions, and didn't know how to reproduce it.
From pierre.raybaut on 2010-09-25T07:18:56Z
Status: Fixed
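The traceback shows a non-string (a bool) reaching setPlainText from the docstring lookup. A defensive guard along the lines of the sketch below would avoid the TypeError; this is a hedged guess at the shape of the fix, not the actual patch that closed the issue.

```python
# Hedged guess at the failure mode: the docstring lookup can return False/None,
# which then reaches QPlainTextEdit.setPlainText and raises the TypeError above.
def set_help_text(editor, hlp_text):
    if not isinstance(hlp_text, str):        # e.g. False when no doc was found
        hlp_text = "No documentation available"
    editor.setPlainText(hlp_text)
```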
2025-04-01T04:35:34.775171
2017-02-22T22:57:00
209608682
{ "authors": [ "ccordoba12", "charlesfox97", "jitseniesen" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11012", "repo": "spyder-ide/spyder", "url": "https://github.com/spyder-ide/spyder/issues/4190" }
gharchive/issue
Update conflict
I'm getting a package conflict that I'm not sure how to resolve.
I ran activate py3 (py3 is a python 3 environment).
conda update spyder returns "all requested packages already installed": spyder 3.1.2
conda install spyder=3.1.3 returns "The following specifications were found to be in conflict: python 3.4*, spyder 3.1.3* -> python 2.7"
Spyder Version: 3.1.3
Python Version: 3.4.5
Qt Version: 5.6.2
PyQt Version: 5.6.0
Operating system: Windows 10
Python 3.4 is not supported anymore by Continuum, so no new Python packages of any kind will be created for that version, sorry.
To be more explicit, you need to update python with conda update python and then you will be able to update spyder.
Not exactly @jitseniesen: you'd need to create a new conda environment, then install Python 3.5 or 3.6 on it, and finally install Spyder.
2025-04-01T04:35:34.796463
2019-06-22T16:28:29
459489873
{ "authors": [ "ccordoba12", "oscargus" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11013", "repo": "spyder-ide/spyder", "url": "https://github.com/spyder-ide/spyder/issues/9644" }
gharchive/issue
Pylint output window is not inheriting the dark theme
Issue Report Checklist
[x] Searched the issues page for similar reports
[x] Read the relevant sections of the Spyder Troubleshooting Guide and followed its advice
[ ] Reproduced the issue after updating with conda update spyder (or pip, if not using Anaconda)
[ ] Could not reproduce inside jupyter qtconsole (if console-related)
[x] Tried basic troubleshooting (if a bug/error)
[x] Restarted Spyder
[ ] Reset preferences with spyder --reset
[ ] Reinstalled the latest version of Anaconda
[ ] Tried the other applicable steps from the Troubleshooting Guide
[x] Completed the Problem Description, Steps to Reproduce and Version sections below
Problem Description
The Pylint output window is not dark when using the dark theme (on my Windows 10 machine) and running the latest version of the master branch. This happens both with the automatic theme from the default startup (which gives dark) and with an explicitly set dark theme (after restart). Not a big issue, but I noted that this was mentioned in #8068, so it seems to have been fixed at some stage.
What steps reproduce the problem?
1. Clone the repository
2. Follow the contributing guidelines to install dependencies (with the exception of cloning my newly forked copy and opting for Python 3.7 instead of 3.6)
3. Run python bootstrap.py
(Btw, conda install -c spyder-ide spyder-kernels=1.* did not install anything additional.)
What is the expected output? What do you see instead? A dark themed Pylint output window.
We'll take a look at this in a future beta. @dalthviz, please take a look at this one.
Fixed by PR #9877
2025-04-01T04:35:34.800951
2019-08-01T06:05:07
475489487
{ "authors": [ "ccordoba12", "parichaya" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11014", "repo": "spyder-ide/spyder", "url": "https://github.com/spyder-ide/spyder/issues/9937" }
gharchive/issue
Error when starting Spyder

Description

What steps will reproduce the problem?

Just start Spyder, and this error pops up without doing anything. Note: the Spyder IDE does start in the background, and continues to work after closing this dialog.

Traceback

      File "/home/parichay/code/spyderz/spyder4/lib/python3.7/site-packages/spyder/plugins/editor/plugin.py", line 302, in report_open_file
        language.lower()])
      File "/home/parichay/code/spyderz/spyder4/lib/python3.7/site-packages/spyder/plugins/editor/plugin.py", line 326, in lsp_server_ready
        editorstack.notify_server_ready(language, configuration)
      File "/home/parichay/code/spyderz/spyder4/lib/python3.7/site-packages/spyder/plugins/editor/widgets/editor.py", line 1557, in notify_server_ready
        editor.start_lsp_services(config)
      File "/home/parichay/code/spyderz/spyder4/lib/python3.7/site-packages/spyder/plugins/editor/widgets/codeeditor.py", line 871, in start_lsp_services
        self.parse_lsp_config(config)
      File "/home/parichay/code/spyderz/spyder4/lib/python3.7/site-packages/spyder/plugins/editor/widgets/codeeditor.py", line 888, in parse_lsp_config
        self.will_save_notify = sync_options['willSave']
    KeyError: 'willSave'
    WARNING:spyder.widgets.github.backend:Failed to send bug report on Github. response={'code': 401, 'json': {'message': 'Bad credentials', 'documentation_url': 'https://developer.github.com/v3'}}

Versions

- Spyder version: 4.0.0b3
- Python version: 3.7.4
- Qt version: 5.12.4
- PyQt5 version: 5.12.3
- Operating System: Linux 4.4.0-157-generic

Dependencies

- pygments >=2.0: 2.4.2 (OK)
- qdarkstyle >=2.6.4: 2.7 (OK)
- sphinx >=0.6.6: 2.1.2 (OK)
- psutil >=0.3: 5.6.3 (OK)
- pyls >=0.27.0: v0.28.1 (OK)
- nbconvert >=4.0: 5.5.0 (OK)
- pandas >=0.13.1: None (NOK)
- numpy >=1.7: None (NOK)
- sympy >=0.7.3: None (NOK)
- cython >=0.21: None (NOK)
- qtconsole >=4.5.0: 4.5.2 (OK)
- IPython >=4.0: 7.7.0 (OK)
- matplotlib >=2.0.0: None (NOK)
- pylint >=0.25: 2.3.1 (OK)

Update: I installed using pip and not conda. Please do let me know if you need any additional info. Thanks.

This will be fixed in our next beta, to be released today or tomorrow. For now, please downgrade the following package to avoid this error:

    pip install python-language-server==0.27

Thank you!
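The failing line indexes a key that the newer language server apparently stopped sending (downgrading pyls avoids the error). A hedged sketch of a tolerant lookup, with the config key names beyond 'willSave' assumed from the LSP textDocumentSync options rather than taken from Spyder's code, and not necessarily the fix that shipped:

```python
def parse_lsp_config(self, config):
    # Treat every optional textDocumentSync capability as absent-by-default
    # instead of indexing it directly, so a server version that omits a key
    # (as the pyls 0.28.1 in this report seems to do for 'willSave')
    # no longer raises KeyError on startup.
    sync_options = config.get('textDocumentSync', {})  # assumed key layout
    self.will_save_notify = sync_options.get('willSave', False)
    self.will_save_until_notify = sync_options.get('willSaveWaitUntil', False)
```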
2025-04-01T04:35:34.846651
2020-12-18T12:26:22
770847854
{ "authors": [ "LowlyDBA", "andrekamman", "sqlworldwide" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11015", "repo": "sqlcollaborative/DataSaturdays", "url": "https://github.com/sqlcollaborative/DataSaturdays/issues/24" }
gharchive/issue
Lunch Options

- Bring your own lunch (yes/no)
- If no, list the different options to choose from

It would be neat to be able to say "No" but "I'll be going to restaurant xx and am open for others to join", with a counter so people can see how many have opted in to the ad-hoc offsite option (at NYC SQL Saturdays I almost always went out for food to get some air and take advantage of the numerous non-sandwich options).

There are many events where no good options are available around the venue. Plus, we hold sponsor sessions; that way attendees can grab lunch and listen to them. We charge a premium $$ for those sessions.

Yeah, I had a feeling NYC might be too much of an edge case 😢

One of the bugs that annoyed me the most on the SQLSat platform was that you couldn't switch off the lunch option. Many event locations have their own caterer, so you can't bring your own lunch.
2025-04-01T04:35:34.848870
2017-03-17T01:05:01
214881187
{ "authors": [ "niphlod" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11016", "repo": "sqlcollaborative/dbatools", "url": "https://github.com/sqlcollaborative/dbatools/pull/972" }
gharchive/pull-request
fixes #970

Fixes #970

Changes proposed in this pull request:

- avoid long waits to build up file lists: everything is pipelined, so concurrent alterations of the underlying file structure are less likely to impact the command itself
- new logging subsystem
- returns the full names of the removed files and directories for empty-folder removals
- avoids top-down recursion when removing empty folders (which ultimately was the reason behind the relative slowness described)

tl;dr: on 302k files nested in 5k dirs on a local SSD, the process went from 20 min to 6 min; the bottom-up idea is sketched below.

@FriedrichWeinmann: I'd need a crash course on how to handle possible throws with the new system. Is Stop-Function the solution there?
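The PR itself is PowerShell, but the key idea is language-neutral: prune empty directories bottom-up in a single streamed pass instead of repeatedly rescanning the tree from the top. A minimal Python sketch of that approach (a hypothetical helper, not the dbatools code):

```python
import os

def remove_empty_dirs(root):
    """Yield each removed directory, streaming results as we go."""
    # Walking bottom-up means children are pruned before their parents are
    # tried, so a chain of nested empty folders disappears in one pass.
    for dirpath, _dirnames, _filenames in os.walk(root, topdown=False):
        try:
            os.rmdir(dirpath)  # succeeds only when the directory is empty
            yield dirpath
        except OSError:
            pass  # still contains files or non-empty subfolders; keep it
```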
2025-04-01T04:35:34.852942
2016-04-11T09:29:15
147367827
{ "authors": [ "BornaP", "maxcnunes" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11017", "repo": "sqlectron/sqlectron-gui", "url": "https://github.com/sqlectron/sqlectron-gui/issues/108" }
gharchive/issue
Typography in database list

I agree with the statement that the previous style used too small a font, but the current setup makes it look weird: the database metadata titles look like they have a bigger font than the database names (although they don't) because they are weighted. Since they form a sub-list, maybe it's better to give them normal weight, or put the database names in a bigger font. Also, maybe the currently active databases should be bold. The issue is here so we can discuss it, since it's more a matter of UX.

Sure. I was not totally fine with this result, but it was already better than the previous one. What do you think about this attempt? It uses the same font size as the children and has a collapse icon.

Yeah, this one looks much better. The collapse icon should definitely stay. Maybe the child items should be indented, so it feels more like they are children of the item above them; I think that's a best practice when it comes to tree structures. Probably adding left padding would do the trick, so that child items are aligned with the first/second letter of the parent node.

Cool. I'm gonna try that later today then. I hope to finish it today, to release a new version. It's been a while since the last release.

I have included a left margin for the menu, and also changed the color of the metadata titles so they draw a bit less attention than the database title.

Great, this looks much better now :+1:
2025-04-01T04:35:34.866039
2023-12-20T14:31:26
2050664993
{ "authors": [ "murphyke" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11018", "repo": "sqlpad/sqlpad", "url": "https://github.com/sqlpad/sqlpad/issues/1213" }
gharchive/issue
Bug: BigQuery driver: existence of search indexes breaks getSchema

@rickbergfalk I have a one-liner fix for this. If you think you would integrate it, I will open a PR. Otherwise (and in the meantime) we'll just create our own private fork of sqlpad until we find a replacement.

The issue is that if a search index exists in a BigQuery dataset, the schema panel cannot be populated at all - an error message is shown in place of the schema tree. Part of the search index implementation in BigQuery apparently involves creating a behind-the-scenes table which nonetheless is visible in {dataset}.__TABLES__ as if it were a user-created table. But other parts of the BigQuery API disagree that this thing is actually a table (hence the error), and in any case, users would not want to see it in the schema.

This was resolved by 14e79086948c3a84a3c215b351e28456967cac9c.

Thanks for accepting the PR, Rick.
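The actual one-liner lives in the linked commit; purely as an illustration of the mismatch described above, here is a hedged Python sketch (SQLPad's driver is JavaScript, the helper name list_real_tables is hypothetical, and the exact error type is an assumption) that drops entries __TABLES__ reports but the Tables API refuses to describe:

```python
from google.cloud import bigquery
from google.api_core.exceptions import NotFound

client = bigquery.Client()

def list_real_tables(project, dataset):
    """Yield only the tables the Tables API agrees exist."""
    rows = client.query(
        f"SELECT table_id FROM `{project}.{dataset}.__TABLES__`"
    ).result()
    for row in rows:
        try:
            # Full metadata lookup; assumed to fail for the hidden
            # search-index backing table the issue describes.
            yield client.get_table(f"{project}.{dataset}.{row.table_id}")
        except NotFound:
            continue  # not a real user table; keep it out of the schema tree
```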