Dataset columns:
- added: string (date), ranging from 2025-04-01 04:05:38 to 2025-04-01 07:14:06
- created: timestamp[us] (date), ranging from 2001-10-09 16:19:16 to 2025-01-01 03:51:31
- id: string, lengths 4 to 10
- metadata: dict
- source: string, 2 classes
- text: string, lengths 0 to 1.61M
2025-04-01T06:37:42.056491
2023-12-05T21:12:28
2027130632
{ "authors": [ "krysal" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3062", "repo": "WordPress/openverse", "url": "https://github.com/WordPress/openverse/pull/3465" }
gharchive/pull-request
Add catalog to dependencies to update by Renovate Description I noticed the Catalog was missing from these updates, but also why is it limited to development dependencies? Checklist [x] My pull request has a descriptive title (not a vague title like Update index.md). [x] My pull request targets the default branch of the repository (main) or a parent feature branch. [x] My commit messages follow best practices. [x] My code follows the established code style of the repository. [ ] I added or updated tests for the changes I made (if applicable). [ ] I added or updated documentation (if applicable). [ ] I tried running the project locally and verified that there are no visible errors. [ ] I ran the DAG documentation generator (if applicable). Developer Certificate of Origin Developer Certificate of Origin Version 1.1 Copyright (C) 2004, 2006 The Linux Foundation and its contributors. 1 Letterman Drive Suite D4700 San Francisco, CA, 94129 Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Developer's Certificate of Origin 1.1 By making a contribution to this project, I certify that: (a) The contribution was created in whole or in part by me and I have the right to submit it under the open source license indicated in the file; or (b) The contribution is based upon previous work that, to the best of my knowledge, is covered under an appropriate open source license and I have the right under that license to submit that work with modifications, whether created in whole or in part by me, under the same open source license (unless I am permitted to submit under a different license), as indicated in the file; or (c) The contribution was provided directly to me by some other person who certified (a), (b) or (c) and I have not modified it. (d) I understand and agree that this project and the contribution are public and that a record of the contribution (including all personal information I submit with it, including my sign-off) is maintained indefinitely and may be redistributed consistent with this project or the open source license(s) involved. @AetherUnbound I'm referring to that group of dependencies; it includes all the Python projects, not only the catalog, so it looked strange to me. https://github.com/WordPress/openverse/blob/9b44496703708454198cb80dc18a630f63ed64e6/.github/renovate.json#L67-L70 @dhruvkb I believe that is only for putting the label on the files that are under the catalog folder 🤔
2025-04-01T06:37:42.269002
2019-03-21T14:18:05
423752806
{ "authors": [ "mikaelcom", "tbl0605" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3064", "repo": "WsdlToPhp/PackageGenerator", "url": "https://github.com/WsdlToPhp/PackageGenerator/issues/185" }
gharchive/issue
undefined StructType class Hi, when we generate PHP classes for our "roc.wsdl" web service, we end up with a method \ServiceType\Operation::setSoapHeaderMessage(\StructType\Max2048TextType $message, $nameSpace = 'urn:amc:ci', $mustUnderstand = false, $actor = null) that uses an unknown \StructType\Max2048TextType type for its "$message" parameter. All our .xsd and .wsdl files can be found at: https://drive.google.com/drive/folders/1m-OK8vhcMVsWKa_pyyesQK81rtTyc-PV Note that after retrieving these .xsd and .wsdl files, you must first replace all "__LOCAL_PATH__" occurrences with the new local path of those files. Thank you for your help :) Thierry. Is it possible for you to use the feature/issue-185 branch with: php console g:p \ --urlorpath=roc.wsdl \ --destination=./ \ --composer-name=roc/soap \ --force in order to validate the fix before merging it :) Thanks PS: imports use relative paths, so you don't need the full path in your XSD; just put the file name if it is in the same directory as the WSDL. Hi @mikaelcom, thank you very much, I confirm that your patch is working great, the issue is fixed for me ;) Thierry.
2025-04-01T06:37:42.274473
2016-08-24T01:55:24
172847539
{ "authors": [ "Servovicis", "Wurmatron" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3065", "repo": "Wurmcraft/WurmTweaks", "url": "https://github.com/Wurmcraft/WurmTweaks/issues/98" }
gharchive/issue
Food Crafting Food crafting for pills can be done with 1oz or the full 160oz. Currently there is no easy way to handle NBT data in Forge recipes without some work.
2025-04-01T06:37:42.296034
2023-05-25T16:03:09
1726101918
{ "authors": [ "Wyrine" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3066", "repo": "Wyrine/mp3-cutting", "url": "https://github.com/Wyrine/mp3-cutting/issues/3" }
gharchive/issue
Support other audio types This should support other common types such as mp4. Ideally this would also support audio types coming from WhatsApp. 48fd65b addresses this.
2025-04-01T06:37:42.303364
2017-05-17T12:04:09
229329291
{ "authors": [ "MaryanMorel", "Mbompr" ], "license": "bsd-3-clause", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3067", "repo": "X-DataInitiative/tick", "url": "https://github.com/X-DataInitiative/tick/pull/19" }
gharchive/pull-request
TICK-358 longitudinal features lagger Add a preprocessor to lag exposure features matrices @MaryanMorel The two commits should have been squashed https://twitter.com/jamesfublo/status/402407321265274881?ref_src=twsrc^tfw&ref_url=http%3A%2F%2Fjamescooke.info%2Fgit-to-squash-or-not-to-squash.html Also each commit should contain the task name it is related to :) I know, my bad… I saw that when it was too late, and I didn't want to mess with the master to make it right :(
2025-04-01T06:37:42.327336
2024-11-09T01:36:44
2645459312
{ "authors": [ "coveralls", "dachengx" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3068", "repo": "XENONnT/axidence", "url": "https://github.com/XENONnT/axidence/pull/93" }
gharchive/pull-request
Remove unused configs This will change the lineage, but it is fine. Pull Request Test Coverage Report for Build<PHONE_NUMBER>8 Details 0 of 0 changed or added relevant lines in 0 files are covered. No unchanged relevant lines lost coverage. Overall coverage decreased (-0.02%) to 89.763% Totals Change from base Build<PHONE_NUMBER>2: -0.02% Covered Lines: 1210 Relevant Lines: 1348 💛 - Coveralls
2025-04-01T06:37:42.354736
2017-01-28T01:02:23
203776106
{ "authors": [ "ChrisBuchholz", "Gminfly", "keith" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3069", "repo": "XVimProject/XVim", "url": "https://github.com/XVimProject/XVim/issues/1041" }
gharchive/issue
Should this be put on the App Store? Extensions for Xcode are still allowed with Apps from the App Store and can be enabled in the System Settings. Would this not be a viable route for XVim? P.S. For reference see Swiftify If possible, I think it's a great idea. See https://github.com/XVimProject/XVim/issues/964 on why this is not possible.
2025-04-01T06:37:42.431720
2023-08-29T14:08:15
1871758211
{ "authors": [ "jowerner", "js-xc", "rschwietzke" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3070", "repo": "Xceptance/XLT", "url": "https://github.com/Xceptance/XLT/issues/416" }
gharchive/issue
Comparison Report: Shorten help text that is always visible The section help texts are sometimes quite lengthy. Experienced users won't need that description any longer and might be annoyed by always having to scroll past the text. What about making the larger part of the text appear on click only, as already done in the load test report? PO: Agree. Did the same for the trend report to keep the appearance similar for all test types.
2025-04-01T06:37:42.436034
2022-04-23T22:54:57
1213472596
{ "authors": [ "Cyco0815", "Xeio" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3071", "repo": "Xeio/WanderLost", "url": "https://github.com/Xeio/WanderLost/pull/56" }
gharchive/pull-request
ActiveMerchantGrid UI improvements Added horizontal separators between each merchant. Also added auto-sort on active merchants so the highest voted merchant is always on top; merchants with negative votes are greyed out depending on the number of votes (#37). Zones are now clickable and open the image in a new tab (#40). Only clickable in the merchants table, not in the UpdateMerchants page, to avoid confusion. A star icon now appears before notified cards and rapports (#43). So just to be clear how this works, it greys out only the 2nd/3rd/etc row? Looks good though, other than the extra comment I marked above. I'm assuming it can be removed since the sorting still appears to be working. Related I should really add some dummy test images for the fake zones... Yes, it only greys out additional merchants. I found it looks really odd if the first merchant is greyed out as well. On that note: I chose not to completely hide/remove merchants over the downvote threshold, since it might be confusing to see merchants disappear (and users can't suggest them again, they will be blocked in MerchantHub), and I didn't want to add logic to completely remove a merchant server-side. Oh yeah, the marked line can be removed, the actual sorting happens in the view, not the list. That's from testing, I forgot to remove that.
2025-04-01T06:37:42.445635
2024-01-14T20:23:17
2080884175
{ "authors": [ "Gitii", "Xetera" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3072", "repo": "Xetera/kindle-api", "url": "https://github.com/Xetera/kindle-api/pull/2" }
gharchive/pull-request
Allow external http client and add custom request options Hi, Thanks for creating this npm module and sharing it with the world! I've used it and made a few small improvements: Aligned spacing & formatting (more by accident) Exported HttpClient and added a clientFactory option for Kindle class. That is very useful when you want to NOT use the tls-client-api server but the shared library. Added pagination handling to Kindle class. The default behaviour hasn't changed, there is an on-demand option for it. Moved fetching of books into a separate function (fetchBooks) to reduce complexity (increased due to pagination handling). Added query and filtering options (also includes the pagination). Please take a look at it and I hope that we merge them! Thanks! This is excellent, thanks a lot for the PR. I've been meaning to make tls-client-api optional/more modular but just haven't had the motivation for it since I solved my own problem with the proxy. One thought I have is since this includes pagination, I feel like it's the kind of thing that should be turned on by default. It feels a little bit strange to pass a false boolean flag to turn something on. Can we flip this behavior around to have pagination on by default so it can be released as the next major version to prevent breaking changes for existing users? @Xetera Do you have a preferred formatter / code style that I can use for this project? I am fighting with my vscode right now, because there are no project specific settings. And my defaults don't seem to match yours. And I also noticed that the package scripts are a bit messed up. build:all calls yarn but the package manager is pnpm. 2 spaces, semi, trailing commas is good. It's basically the changes you've got in your PR so that's no problem. I'd also like to stick to pnpm. Sorry, just feeling the embarrassment of having what I thought would just be a personal project get used by others I guess 😮‍💨 2 spaces, semi, trailing commas is good. It's basically the changes you've got in your PR so that's no problem. I'd also like to stick to pnpm. Sorry, just feeling the embarrassment of having what I thought would just be a personal project get used by others I guess 😮‍💨 No worries, this is our free time and not work 😁 pnpm is fine. I've pushed a new commit that fixes it. I also got sidetracked a little bit and wrote some tests. I will push that commit, too. Please take a look at it. I use msw to mock the network requests. That way the real Api is not part of the test and the tests can be run by anyone. I can revert that commit if you do not like it. But I think it's worth it. Looks great, I've used msw a couple times before and it's awesome so I don't mind it. Happy to merge this if you feel it's good to go I think the PR is ready.
2025-04-01T06:37:42.463935
2023-06-26T03:28:01
1773766928
{ "authors": [ "XieShaosong", "dashuaip" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3073", "repo": "XieShaosong/aubo_robot_ros2", "url": "https://github.com/XieShaosong/aubo_robot_ros2/issues/3" }
gharchive/issue
rosdep install problem I encountered the problem of not being able to find a package at this step. Could you please help me take a look? ros2@ubuntu:~/aubo_ros2_ws$ rosdep install --from-paths src --ignore-src --rosdistro foxy -r -y ERROR: the following packages/stacks could not have their rosdep keys resolved to system dependencies: aubo_ros2_moveit_config: Cannot locate rosdep definition for [warehouse_ros_mongo] Continuing to install resolvable dependencies... #All required rosdeps installed successfully I am using ubuntu 20.04 + ros2 foxy. You can try to install it manually: sudo apt install ros-foxy-warehouse-ros-mongo Thank you. It's useful.
2025-04-01T06:37:42.472962
2022-04-17T08:48:03
1206319358
{ "authors": [ "DouDinai", "Orchidaceae", "zl200881" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3075", "repo": "Xilinx/Vitis-AI", "url": "https://github.com/Xilinx/Vitis-AI/issues/761" }
gharchive/issue
PyTorch Quantization - [VAIQ_WARN]: Node ouptut tensor is not quantized I'm trying to quantize and export a simple 2-layer MLP, code seen below: class Feedforward(torch.nn.Module): def __init__(self, input_size, hidden_size): super(Feedforward, self).__init__() self.input_size = input_size self.hidden_size = hidden_size self.fc1 = torch.nn.Linear(self.input_size, self.hidden_size) self.relu = torch.nn.ReLU() self.fc2 = torch.nn.Linear(self.hidden_size, 1) self.sigmoid = torch.nn.Sigmoid() def forward(self, x): hidden = self.fc1(x) relu = self.relu(hidden) output = self.fc2(relu) output = self.sigmoid(output) return output I use the following code to quantize and export my model: model = load_model() input = torch.randn([2,2,2]) model.eval() quantizer = torch_quantizer('calib', model, (input)) quant_model = quantizer.quant_model quantizer.export_quant_config() input = torch.randn([1,2,2]) quantizer = torch_quantizer('test', model, (input)) quantizer.export_xmodel() I get the following warning when I run export_quant_config(): [VAIQ_WARN]: Node ouptut tensor is not quantized: Feedforward::input_0 type: input and the same warning for all layers and parameters in my model. When I debug the function I can gather the following quantization info about my model: > (Pdb) TORCHQuantizer._QuantInfo > {'param': {'Feedforward::fc1.weight': [8, None], 'Feedforward::fc1.bias': [8, None], 'Feedforward::fc2.weight': [8, None], 'Feedforward::fc2.bias': [8, None]}, 'output': {'Feedforward::input_0': [8, None], 'Feedforward::Feedforward/ReLU[relu]/input': [8, None], 'Feedforward::Feedforward/Linear[fc2]/51': [8, None]}, 'input': {}} I'm guessing that the bnfp value should not be None but I have no idea how to properly quantize my model. Any suggestions? I have the same question [VAIQ_WARN]: Node ouptut tensor is not quantized: YoloBody::input_0 type: input [VAIQ_WARN]: Node ouptut tensor is not quantized: YoloBody::YoloBody/Focus[backbone]/CSPDarknet[backbone]/Focus[stem]/input.1 type: concat [VAIQ_WARN]: Node ouptut tensor is not quantized: YoloBody::YoloBody/Focus[backbone]/CSPDarknet[backbone]/Focus[stem]/BaseConv[conv]/Conv2d[conv]/input.2 type: conv2d [VAIQ_WARN]: Node ouptut tensor is not quantized: YoloBody::YoloBody/Focus[backbone]/CSPDarknet[backbone]/Focus[stem]/BaseConv[conv]/SiLU[act]/input.3 type: elemwise_mul …… And the final tip is [VAIQ_WARN]: Quantization is not performed completely, check if model inference function is called!!! A model forward loop for evaluation is needed before exporting the quantization config. Please refer to the function "evaluate" in the example resnet18_quant.py.
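The resolution of this thread (run a forward loop over calibration data before exporting) can be illustrated with a short sketch. This is not code from the issue: the import and call signature follow the Vitis AI pytorch_nndct examples, the Feedforward class is the 2-layer MLP defined above, and the hidden size and number of calibration passes are illustrative assumptions.

```python
import torch
from pytorch_nndct.apis import torch_quantizer  # Vitis AI quantizer API, as used in the vendor examples

model = Feedforward(input_size=2, hidden_size=4)  # the MLP from the issue; sizes chosen for illustration
model.eval()

calib_input = torch.randn([2, 2, 2])
quantizer = torch_quantizer('calib', model, (calib_input,))
quant_model = quantizer.quant_model

# The missing step: run forward passes through quant_model on calibration data
# so the quantizer can observe activations and fill in the bnfp values that were None.
with torch.no_grad():
    for _ in range(10):
        quant_model(torch.randn([2, 2, 2]))

quantizer.export_quant_config()
```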
2025-04-01T06:37:42.525443
2019-07-15T19:55:41
468307087
{ "authors": [ "ArtemGr", "Xudong-Huang" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3076", "repo": "Xudong-Huang/conetty", "url": "https://github.com/Xudong-Huang/conetty/pull/2" }
gharchive/pull-request
Use BufReader::get_mut to write to a socket On rustc 1.37.0-nightly (5f9c0448d 2019-06-25) the build fails with error[E0596]: cannot borrow data in a `&` reference as mutable --> /root/.cargo/git/checkouts/conetty-a662beb3b89f3dc0/7acf957/src/tcp_client.rs:54:9 | 54 | s.get_ref().write_all(&(req.finish(id)))?; | ^^^^^^^^^^^ cannot borrow as mutable error: aborting due to previous error Thanks!
2025-04-01T06:37:42.528107
2020-09-07T09:39:17
694917648
{ "authors": [ "JUGGHM", "kuwt", "yiranzhong" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3077", "repo": "XuelianCheng/LidarStereoNet", "url": "https://github.com/XuelianCheng/LidarStereoNet/issues/8" }
gharchive/issue
meaning of >2px, >3px, >5px I cannot find the meaning of >2px, >3px, >5px either in KITTI or in "Depth Map Prediction from a Single Image using a Multi-Scale Deep Network". Could you explain where I could find the information? I guess it is disparity error? I guess it is disparity error? What is the ground truth for the comparison? Why does the input lidar have an invalid Abs Rel but some error in >2px, >3px and >5px? "What is the ground truth for the comparison?" --KITTI stereo provides disparity ground truth for the evaluation. "Why does the input lidar have an invalid Abs Rel but some error in >2px, >3px and >5px?" --It is because the input lidar and the ground truth are not dense. In this case, we do not evaluate the Abs Rel metric on the input lidar. For bad-pixel-rate metrics, since they compute the error ratios regardless of the densities of the inputs, we provide the results in our paper.
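For readers unsure what the thread settles on, here is a minimal numpy sketch of these metrics as they are commonly computed for disparity maps: >Npx is the bad-pixel rate (fraction of valid ground-truth pixels whose absolute disparity error exceeds N pixels) and Abs Rel is the mean relative error over the same valid pixels. Function and variable names are illustrative, not from the LidarStereoNet code, and KITTI's official outlier definition adds a relative-error condition that is omitted here for simplicity.

```python
import numpy as np

def disparity_metrics(pred, gt):
    """Bad-pixel rates (>2px, >3px, >5px) and Abs Rel, evaluated only where ground truth exists."""
    valid = gt > 0                                   # sparse GT: zeros mark pixels with no disparity
    err = np.abs(pred[valid] - gt[valid])
    metrics = {f">{t}px": float((err > t).mean()) for t in (2, 3, 5)}
    metrics["abs_rel"] = float((err / gt[valid]).mean())
    return metrics

# toy example: dense prediction, sparse ground truth with two valid pixels
pred = np.full((4, 4), 10.0)
gt = np.zeros((4, 4)); gt[1, 1] = 13.0; gt[2, 3] = 10.5
print(disparity_metrics(pred, gt))  # {'>2px': 0.5, '>3px': 0.0, '>5px': 0.0, 'abs_rel': ~0.14}
```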
2025-04-01T06:37:42.535062
2022-02-19T23:45:19
1144874166
{ "authors": [ "Kfeavel", "vannaka" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3078", "repo": "XyrisOS/xyris", "url": "https://github.com/XyrisOS/xyris/issues/386" }
gharchive/issue
Wrong macro used for loop condition in paging.cpp:initDirectory() https://github.com/XyrisOS/xyris/blob/adf3ae0afff96e621f22f5f2c49e58e8d7b59203/Kernel/Memory/paging.cpp#L134 I believe the upper bound for the iterator of this for loop is meant to be ARCH_PAGE_TABLE_ENTRIES. The current code works because the two macros happen to have the same value. You would be correct! Thank you for opening an issue, I'll merge in a fix soon.
2025-04-01T06:37:42.536257
2022-01-25T03:30:50
1113383509
{ "authors": [ "Kfeavel" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3079", "repo": "XyrisOS/xyris", "url": "https://github.com/XyrisOS/xyris/pull/368" }
gharchive/pull-request
Resolve "Replace liballoc with Custom Heap" Closes #367 Will write some unit tests for this soon. Thanks to @micahswitzer for writing a basic one used for testing (janky-heap-tests).
2025-04-01T06:37:42.550475
2013-04-06T12:52:57
12877316
{ "authors": [ "ozh" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3081", "repo": "YOURLS/YOURLS", "url": "https://github.com/YOURLS/YOURLS/issues/1277" }
gharchive/issue
Streamline API return codes & output it as HTTP header This is a COPY of Issue 1277: Streamline API return codes & output it as HTTP header, filed on Google Code before the project was moved to GitHub. Submitted on 2013-01-11T21:01:11.000Z by <EMAIL_ADDRESS> Status: Accepted Please review the original issue and especially its comments. Comments here on closed issues will be ignored. Thanks. Original description Currently the API result arrays contain either 'statusCode', 'errorCode' or nothing. - make it consistent: 'statusCode' and 'message' for all methods - (in case a custom method doesn't implement that, output default status & message) - use it as an HTTP header in the output This might induce some breakage for people using the API: make a detailed blog post about it Closed (or dismissed, really) via #3233
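As a rough illustration of what the proposal would buy API consumers, here is a hypothetical Python client that prefers a status header and falls back to the JSON body. The X-YOURLS-Status header name is an assumption drawn from the proposal, not existing YOURLS behaviour, and the request parameters are illustrative only.

```python
import requests

# Hypothetical consumer of the proposed behaviour: every API response carries a
# consistent statusCode/message pair, optionally mirrored in an HTTP header.
resp = requests.get(
    "https://sho.rt/yourls-api.php",
    params={"action": "url-stats", "shorturl": "abc", "format": "json", "signature": "..."},
)
body = resp.json()
status = resp.headers.get("X-YOURLS-Status") or body.get("statusCode")  # the header is the proposed addition
print(status, body.get("message", ""))
```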
2025-04-01T06:37:42.562596
2018-01-12T21:54:00
288257126
{ "authors": [ "Crisses", "PopVeKind", "ayyoovod", "boumaj123", "johnaweiss", "ozh" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3082", "repo": "YOURLS/YOURLS", "url": "https://github.com/YOURLS/YOURLS/issues/2354" }
gharchive/issue
No redirection when using "www" Hello; Our short URLs are not properly redirecting when using "www". I can access the home page with or without "www" but it returns back to the home page if I use "www" with the shortened URL. Thanks for any feedback. YES! I've been pulling my hair out over this. Thanks @codegrrrl for your time and expertise 👍 I would like to know what is properly redirecting when using "www". From the OP. I'm on the home stretch of a PR refactoring the HTTP(S) scheme usage. This involves removing the YOURLS_SITE constant from config.php. So this topic is important during testing. What should a very vanilla install do? I believe what ozh reported HERE is the correct behavior and this issue is Not a Bug! If YOURLS_SITE is defined as http://sho.rt does [properly redirecting] include http://redirect.sho.rt, http://w3.sho.rt, or http://www.sho.rt? The domain sho.rt and the domain www.sho.rt are two different domains, suggesting the need for Multi-Domain Support #560, without hacking the core files. A simple, standard 301 redirect in .htaccess or the server block from www. to non-www. would do. This is used on many WordPress sites and is easy to do. Howbeit, it does add an extra 301. # FROM www. --TO-- NO www. -- In .htaccess RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC] RewriteRule ^(.*)$ http://%1/$1 [R=301,L] If the domain www. is a special case in YOURLS, what is the configuration for a site that does NOT want to serve www. requests? (Personally I want www. requests to be redirected to the non-www. "home page" where I have WordPress waiting to serve their needs.) If you can, delay a little, or temporarily use one of the above options. I expect to release the HTTP(S) refactoring PR and HTTPS Plugin in January 2019. Then all my effort will be directed at the YOURLS MultiSite Plugin. MultiSite is 99% done and working on my production server with three different domains. It can easily handle a configuration as the OP describes. Expect MultiSite February 2019. @ozh I believe the OP's issue is Not a Bug but a Multi-Domain Support #560 configuration error (see No. 1 above). What should a very vanilla install do with two different domains? I believe what you reported HERE is the correct behavior. Social media sites are mangling non-www URLs. So if you say “site.net/nick” it’s showing that, but when someone clicks it’s sending them to http(s)://www.site.net/nick This is an issue on twitter and facebook. The fix is to say “http(s)://site.net/nick” and not omit the http(s) portion OR — fix the site to accept the www portion if you have a dedicated domain for your shorturls (like many main sites both www and non-www resolve to the same/main site). So I’m interested in whether this has been resolved in a graceful way or if it’s still pending. (I’m working on a cell phone right now with reading glasses in a shell app so it’s not like it’s a great time to read a ton of text lol — I may have to hack it and get back to updates and installations/fixes later). I'm looking forward to a refactored YOURLS which does this "correctly." But for now, @codegrrrl 's "dirty" fix saved me! Everybody: this issue has been fixed with latest commits. Example: https://www.ozh.in/xt and https://ozh.in/xt both point here (ozh.in is running current master version. I will probably release a new release sometime soon)
2025-04-01T06:37:42.710516
2023-11-23T09:55:27
2007844574
{ "authors": [ "Yan-Thomas", "alexanderniebuhr" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3083", "repo": "Yan-Thomas/lunaria", "url": "https://github.com/Yan-Thomas/lunaria/issues/52" }
gharchive/issue
package manager not found using create-lunaria ┌ create-lunaria │ ◇ Which @lunariajs package would you like to set up? │ @lunariajs/core │ ◇ Where should we set up Lunaria? │ ./lunaria │ ◇ Does your project use TypeScript? │ Yes │ ◇ Do you wish to install @lunariajs/core and its dependencies? │ Yes └ Could not find your package manager. Setup wizard cancelled. npm ERR! code 1 npm ERR! path /Users/alexanderniebuhr/Developer/tmp npm ERR! command failed npm ERR! command sh -c create-lunaria npm ERR! Thanks for reporting!
2025-04-01T06:37:42.730444
2015-05-17T08:30:04
77254462
{ "authors": [ "Hobbix", "Pikachuy", "SuperSajuuk", "Yonezpt", "ireun" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3084", "repo": "YePpHa/YouTubeCenter", "url": "https://github.com/YePpHa/YouTubeCenter/issues/1823" }
gharchive/issue
[Build 525] HTML5 player reloads when forcing the flash player When forcing the flash player, every time you load a video, a couple of seconds of it play in the new player, then it reloads and shows the flash player. This is mildly annoying, but I'm afraid it might be eating up resources that shouldn't even be a concern. Then again, I don't know much about this so I could be entirely wrong. That is intentional and is not a bug. YouTube is automatically loading the HTML5 player, but your extension doesn't force Flash Player until the page stops loading. You should really be using the HTML5 player though, as it has the benefit of letting you watch videos at higher frame rates, instead of just a fixed 30fps. The only reason I'm not using the HTML5 player is that it doesn't allow the progress bar to hide, and therefore I can only watch a letterboxed version of the video instead of the size I'm currently using with the flash player. @Pikachuy It hides for me. If it's not disappearing, just move your cursor around the player, then move it off and it should disappear. Make sure you've also set the option to hide the progress bar. HTML5 player is superior to Flash, due to high fps playback, so you should definitely be using it over Flash. :) In the HTML5 player there is a big problem with tearing (a vertical sync issue) if the Windows user has disabled the Aero style, which many people do right away. So the HTML5 player is unusable for me; that's why I only use the Flash Player, which supports vertical sync. The problem with refreshing the page is very annoying. If possible, try to fix it. Thank you. @Hobbix Keep in mind that YouTube intends to drop the flash player support for their website soon, that is why they are moving towards the HTML5 player completely and you won't be able to use their flash player any longer when that time comes. Yup, it's not because of YTC.. Closing...
2025-04-01T06:37:42.745594
2020-05-07T13:02:11
614040345
{ "authors": [ "hbarisik", "nsano-rururu" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3085", "repo": "Yelp/elastalert", "url": "https://github.com/Yelp/elastalert/issues/2788" }
gharchive/issue
Referencing ElasticSearch query in a json file Referencing an ElasticSearch query in the filter would make life really easy. I know that a Kibana dashboard could be used, but I think this alternative also forces using an ElastAlert .yaml rule file. I think it would be much simpler if we could have some placeholders in our ElasticSearch query that ElastAlert replaces. We would reference this .json file in our filter section. I appreciate your feedback on this. Yelp/elastalert is no longer maintained. Please use jertel/elastalert2. https://github.com/jertel/elastalert2
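To make the request concrete, here is a rough Python sketch of the kind of placeholder substitution being asked for. This is not an existing ElastAlert feature; load_query is a hypothetical helper, and the file name and placeholder are illustrative. It only shows the idea of loading a query template from a JSON file and filling placeholders before using it as a filter clause.

```python
import json
from string import Template

def load_query(path, **params):
    """Hypothetical helper: read an Elasticsearch query template and fill in $placeholders."""
    with open(path) as f:
        template = Template(f.read())          # placeholders written as $name inside the JSON file
    return json.loads(template.substitute(**params))

# usage sketch, assuming query.json contains {"query": {"term": {"host": "$hostname"}}}
# filter_clause = load_query("query.json", hostname="web-01")
```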
2025-04-01T06:37:42.746796
2017-02-23T09:05:41
209702173
{ "authors": [ "ramey" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3086", "repo": "Yelp/elastalert", "url": "https://github.com/Yelp/elastalert/pull/914" }
gharchive/pull-request
Adding Exotel sms Alerter support for alerting This pull request adds SMS support for the Exotel SMS API (https://www.exotel.in). I have tested it and it works well in production @danielpops can you please merge this.
2025-04-01T06:37:42.749167
2017-07-19T12:34:47
244025227
{ "authors": [ "madelaney", "solarkennedy" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3087", "repo": "Yelp/puppet-uchiwa", "url": "https://github.com/Yelp/puppet-uchiwa/issues/84" }
gharchive/issue
Support for FreeBSD. All, I'd like to use this puppet module on FreeBSD but there is no support. Would you be okay with a pull request that addresses this issue? Mike D. I would accept any PR that expands the OS support of this module.
2025-04-01T06:37:42.755323
2017-10-26T16:08:08
268821051
{ "authors": [ "YerkoPalma", "yoshuawuyts" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3088", "repo": "YerkoPalma/choo-tts", "url": "https://github.com/YerkoPalma/choo-tts/issues/4" }
gharchive/issue
missing link to github from npm Noticed you're missing a field to link to github from https://www.npmjs.com/package/choo-tts. Can't PR right now, but might be worth fixing. Thanks! That's because package.json is missing a repository property, right? I think it's happening in all my repos :anguished: Thanks for reporting!
2025-04-01T06:37:42.887618
2023-08-05T21:27:48
1837944346
{ "authors": [ "YongzhiWu", "YuetianZhou" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3089", "repo": "YongzhiWu/OFDM_ISAC_simulator", "url": "https://github.com/YongzhiWu/OFDM_ISAC_simulator/issues/1" }
gharchive/issue
Is there a reference paper for this code? Hello, I would like to study it alongside the literature. Is there a corresponding reference? You can refer to: [1] Waveform Design and Signal Processing Aspects for Fusion of Wireless Communications and Radar Sensing [2] Performance analysis of joint radar and communication using OFDM and OTFS
2025-04-01T06:37:42.908544
2022-12-10T13:56:18
1488509434
{ "authors": [ "Megalonumber0ne", "YoshiCrafter29" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3090", "repo": "YoshiCrafter29/CodenameEngine", "url": "https://github.com/YoshiCrafter29/CodenameEngine/issues/5" }
gharchive/issue
Compiling help: [Did what was told to and still got errors / source/funkin/system/MusicBeatState.hx:89: characters 43-45 : Expected )] I have everything installed from the original source code for FNF, ran the update.bat and then tried to compile in debug, "Dbeta" and normal compile and all of them gave the below error source/funkin/system/MusicBeatState.hx:89: characters 43-45 : Expected ) [EDIT]: I tried running the code on windows powershell and the command prompt and CMD crashes instead of giving me the above error, powershell is the one which gives me the error [x] Windows [ ] Mac [ ] Linux [ ] HTML5 Seems like your Haxe version does not support obj is Type statements Are you sure you're using the latest version available? If not, you can update to 4.2.5 here I just realised that I'm using haxe 4.1.5, thank you! I got a whole slew of errors after updating my version of haxe to 4.2.5 so I'm just going to wait for a full release Update your lime version. haxelib install lime Alright thank you
2025-04-01T06:37:42.912366
2019-06-04T08:46:22
451871136
{ "authors": [ "comicaza", "tux3" ], "license": "isc", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3091", "repo": "YosysHQ/yosys", "url": "https://github.com/YosysHQ/yosys/pull/1062" }
gharchive/pull-request
README.md: Missing formatting for <size> This is a drive-by PR to fix an invisible reference to <size> due to missing formatting. I do not know the answer
2025-04-01T06:37:42.919126
2019-07-11T10:48:05
466816388
{ "authors": [ "eddiehung", "janrinze", "smunaut", "whitequark" ], "license": "isc", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3092", "repo": "YosysHQ/yosys", "url": "https://github.com/YosysHQ/yosys/pull/1183" }
gharchive/pull-request
synth_ice40: switch -relut to be always on As far as I know this switch either does nothing or makes the design faster. People like @smunaut have been using it extensively and it appears robust. As I mentioned on IRC, wouldn't it be better to ignore the "-relut" option and not remove it? Just to not break all existing scripts that have it. Otherwise you need to test yosys's version in a Makefile to know if you need to explicitly add that option or leave it out :/ I think Yosys has never done that for any other options, did it? Note: with -abc9, -relut doesn't seem to do much of anything on the designs I tested it on. Which is strange, because it should, in principle, on a techmapped adder followed by a mux with a constant or one of the operands. Nevertheless it doesn't hurt to enable it there as well. I couldn't find any instance in synth_ice40 of any option being removed. But in synth_xilinx for instance, I could at least find one example ( 6c256b8cda66e2ba128d5fa3ba344fe4717711f8 ) where -arch got replaced by -family but the code still has -arch support as compatibility. But I can also find examples of non-compatible changes as well ( 36e6da53964b406ad379a60fc289aa3af9beb8a9 ). So not sure what the policy is here. But -relut is definitely a pretty popular option currently ... @smunaut It is now parsed and ignored. But I can also find examples of non-compatible changes as well ( 36e6da5 ). So not sure what the policy is here. But -relut is definitely a pretty popular option currently ... This incompatible example was a new option made on a branch and changed on that same branch before it was merged into master. Generally, I would say we preserve them as much as possible, even silently. I've knowingly broken this once I think, for an option that was wrong to begin with and was there only for a short time.... Could we please have a -no-relut option now? My design actually runs around 30% slower with -relut. Not sure if there are other designs that suffer the same fate but I have been tracking this as the cause of a regression where the design used to run at 44 MHz but now only does 33 MHz. I'm a little surprised that this went from optional to always on without the means to opt out. @janrinze Please see #1187 I'm a little surprised that this went from optional to always on without the means to opt out. This regression was caused by commit 437fec0d88b4a2ad172edf0d1a861a38845f3b1d, which changed the semantics of SB_LUT4 without considering all in-tree users of SB_LUT4, which unfortunately sometimes happens and leads to undesirable results. There is nothing wrong with -relut itself though.
2025-04-01T06:37:42.922666
2021-12-19T04:11:30
1084008813
{ "authors": [ "YousefED", "hangtwenty" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3093", "repo": "YousefED/SyncedStore", "url": "https://github.com/YousefED/SyncedStore/issues/36" }
gharchive/issue
hocuspocus.dev usage example Do you know of a usage example of using SyncedStore with hocuspocus.dev as the server? Hi and welcome @hangtwenty ! This should be similar to how other providers work, as described at https://syncedstore.org/docs/sync-providers. I haven't tried this yet, but when setting up the store, try this: import { syncedStore, getYjsValue } from "@syncedstore/core"; export const store = syncedStore({ arrayData: [] }); const doc = getYjsValue(store); const provider = new HocuspocusProvider({ url: 'ws://<IP_ADDRESS>:1234', name: 'example-document', document: doc, onAwarenessUpdate: ({ states }) => { currentStates = states }, }) Feel free to reopen if you have additional questions! Looking forward to hearing your experience with syncedstore :)
2025-04-01T06:37:42.945299
2020-06-22T13:17:40
643064215
{ "authors": [ "Alasile", "YueLiao", "ghost", "volodymyrkepsha" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3094", "repo": "YueLiao/PPDM", "url": "https://github.com/YueLiao/PPDM/issues/13" }
gharchive/issue
Training error Hello, I built PPDM as it is written in Installation. But when I launch the training I got this: What might be the problem? I'm using an AWS g4dn.xlarge machine. It may be caused by the wrong cuda version. I guess that you use cuda 10, but the DCN only supports cuda 9. I plan to rewrite the repo with cuda 10 and pytorch 1.4 soon. @YueLiao thanks for the quick response! Yep, I know about the CUDA version; for that reason I've changed the CUDA symlink from 10 to 9, like this: sudo rm -fr /usr/local/cuda ln -s /usr/local/cuda-9 /usr/local/cuda The error is caused by the mismatching cuda version when compiling DCN and running the code. You need to modify the PATH like: export PATH="$CUDA_HOME/bin:${PATH}" export CUDA_TOOLKIT_ROOT_DIR=$CUDA_HOME export LD_LIBRARY_PATH="$CUDA_HOME/extras/CUPTI/lib64:$LD_LIBRARY_PATH" export LIBRARY_PATH=$CUDA_HOME/lib64:$LIBRARY_PATH export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH export CFLAGS="-I$CUDA_HOME/include $CFLAGS" Sorry for the long reply. Unfortunately, it didn't work. Looking forward to the solution for CUDA 10. Thanks! "Sorry for the long reply. Unfortunately, it didn't work. Looking forward to the solution for CUDA 10. Thanks" Did you solve it? I encountered the same problem; I also checked the version and still get this error. The pt1 branch already supports cuda 10.
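Since the fix above hinges on matching the CUDA toolkit used to compile the DCN extension with the CUDA version PyTorch was built against, a small diagnostic sketch (not from the PPDM repo) can make the mismatch visible before recompiling:

```python
import subprocess
import torch

# Compare the CUDA version PyTorch was built with against the nvcc on PATH;
# the DCN extension must be compiled with a toolkit matching torch.version.cuda.
print("PyTorch built with CUDA:", torch.version.cuda)
print("CUDA available at runtime:", torch.cuda.is_available())
try:
    nvcc = subprocess.check_output(["nvcc", "--version"], text=True)
    print(nvcc.strip().splitlines()[-1])   # e.g. "Cuda compilation tools, release 9.0, V9.0.176"
except FileNotFoundError:
    print("nvcc not found on PATH - check the CUDA_HOME/PATH exports suggested above")
```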
2025-04-01T06:37:43.008173
2019-02-23T17:44:39
413718818
{ "authors": [ "TheDeadScythe", "Yuri6037" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3095", "repo": "Yuri6037/TSCM_Starfall", "url": "https://github.com/Yuri6037/TSCM_Starfall/issues/2" }
gharchive/issue
Crash Alert low [SF - ERROR] SF:lanai/processor/systems/ai_decoder.txt:137: attempt to call global 'CheckAutoMode' (a nil value) [starfall_processor [2081]] Server Error SF:lanai/processor/systems/ai_decoder.txt:137: attempt to call global 'CheckAutoMode' (a nil value) stack traceback: SF:lanai/processor/systems/ai_decoder.txt:137: in function 'OnChatMessage' SF:lanai/processor/systems/ai_utils.txt:22: in function 'ReadData' SF:lanai/processor/systems/protocol.txt:85: in function SF:lanai/processor/systems/protocol.txt:61 Does this happen after the core is completely initialized (that means no more yellow light) or does this happen while the core is initializing? #3 once initialization is done completely Ok then I'll have a look asap. I checked it and I can't reproduce the crash when LanAI is completely initialized... Should be fixed now, comment if it's not... #4
2025-04-01T06:37:43.109672
2024-02-02T06:16:21
2114165990
{ "authors": [ "Z1ni", "dr1055" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3096", "repo": "Z1ni/XGP-save-extractor", "url": "https://github.com/Z1ni/XGP-save-extractor/issues/115" }
gharchive/issue
Support for Persona 3 Reload Game name: Persona 3 Reload Game package name: SEGAofAmericaInc.L0cb6b3aea_s751p9cej88mt wgs.zip SteamSave.zip Duplicate of #114. Please follow that issue. Also thanks for the saves, they'll help!
2025-04-01T06:37:43.138097
2024-11-06T07:47:30
2637300131
{ "authors": [ "Kariton", "ZCube", "samip5" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3097", "repo": "ZCube/factorio-port-fixer", "url": "https://github.com/ZCube/factorio-port-fixer/issues/1" }
gharchive/issue
use of closed network connection Hey, i recently added your container to the helm chart SQLJames/factorio-server-charts while the container works as intended it sometimes crashes: {"level":"info","ts":1730878872.1065865,"caller":"cmd/local.go:115","msg":"Healthcheck server started"} ⇨ http server started on [::]:34197 {"level":"info","ts":1730878872.106785,"caller":"cmd/local.go:126","msg":"Accepting a new packet","IP":"<IP_ADDRESS>","Port":34197} {"level":"info","ts":1730878877.758352,"caller":"cmd/local.go:92","msg":"Wrote to socket","Bytes":3,"Remote":"<IP_ADDRESS>:31497"} {"level":"info","ts":1730878880.5381744,"caller":"cmd/local.go:135","msg":"Read from socket","Bytes":1,"Remote":"<IP_ADDRESS>:31497"} {"level":"info","ts":1730878880.5382175,"caller":"cmd/local.go:126","msg":"Accepting a new packet","IP":"<IP_ADDRESS>","Port":34197} {"level":"error","ts":1730878882.7584276,"caller":"cmd/local.go:103","msg":"net.ReadFromUDP() error: %sread udp <IP_ADDRESS>:60707: i/o timeout","stacktrace":"github.com/zcube/factorio-port-fixer/cmd.glob..func1.1\n\t/app/cmd/local.go:103\ngithub.com/labstack/echo/v4.(*Echo).add.func1\n\t/go/pkg/mod/github.com/labstack/echo/v4@v4.10.0/echo.go:546\ngithub.com/labstack/echo/v4.(*Echo).ServeHTTP\n\t/go/pkg/mod/github.com/labstack/echo/v4@v4.10.0/echo.go:633\nnet/http.serverHandler.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2947\nnet/http.(*conn).serve\n\t/usr/local/go/src/net/http/server.go:1991"} {"level":"info","ts":1730878887.7583582,"caller":"cmd/local.go:92","msg":"Wrote to socket","Bytes":3,"Remote":"<IP_ADDRESS>:31497"} {"level":"error","ts":1730878892.7618413,"caller":"cmd/local.go:103","msg":"net.ReadFromUDP() error: %sread udp <IP_ADDRESS>:50433: i/o timeout","stacktrace":"github.com/zcube/factorio-port-fixer/cmd.glob..func1.1\n\t/app/cmd/local.go:103\ngithub.com/labstack/echo/v4.(*Echo).add.func1\n\t/go/pkg/mod/github.com/labstack/echo/v4@v4.10.0/echo.go:546\ngithub.com/labstack/echo/v4.(*Echo).ServeHTTP\n\t/go/pkg/mod/github.com/labstack/echo/v4@v4.10.0/echo.go:633\nnet/http.serverHandler.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2947\nnet/http.(*conn).serve\n\t/usr/local/go/src/net/http/server.go:1991"} {"level":"info","ts":1730878895.554898,"caller":"cmd/local.go:135","msg":"Read from socket","Bytes":1,"Remote":"<IP_ADDRESS>:31497"} {"level":"info","ts":1730878895.5549371,"caller":"cmd/local.go:126","msg":"Accepting a new packet","IP":"<IP_ADDRESS>","Port":34197} {"level":"info","ts":1730878897.7583709,"caller":"cmd/local.go:92","msg":"Wrote to socket","Bytes":3,"Remote":"<IP_ADDRESS>:31497"} {"level":"error","ts":1730878898.8152833,"caller":"cmd/local.go:132","msg":"net.ReadFromUDP() error: read udp <IP_ADDRESS>:34197: use of closed network connection","stacktrace":"github.com/zcube/factorio-port-fixer/cmd.glob..func1.3\n\t/app/cmd/local.go:132\ngolang.org/x/sync/errgroup.(*Group).Go.func1\n\t/go/pkg/mod/golang.org/x/sync@v0.1.0/errgroup/errgroup.go:75"} {"level":"error","ts":1730878902.7591352,"caller":"cmd/local.go:103","msg":"net.ReadFromUDP() error: %sread udp <IP_ADDRESS>:55703: i/o 
timeout","stacktrace":"github.com/zcube/factorio-port-fixer/cmd.glob..func1.1\n\t/app/cmd/local.go:103\ngithub.com/labstack/echo/v4.(*Echo).add.func1\n\t/go/pkg/mod/github.com/labstack/echo/v4@v4.10.0/echo.go:546\ngithub.com/labstack/echo/v4.(*Echo).ServeHTTP\n\t/go/pkg/mod/github.com/labstack/echo/v4@v4.10.0/echo.go:633\nnet/http.serverHandler.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2947\nnet/http.(*conn).serve\n\t/usr/local/go/src/net/http/server.go:1991"} {"level":"info","ts":1730878903.0005345,"caller":"cmd/local.go:190","msg":"graceful shutting down"} {"level":"error","ts":1730878903.000601,"caller":"cmd/local.go:195","msg":"exit reason: read udp <IP_ADDRESS>:34197: use of closed network connection","stacktrace":"github.com/zcube/factorio-port-fixer/cmd.glob..func1\n\t/app/cmd/local.go:195\ngithub.com/spf13/cobra.(*Command).execute\n\t/go/pkg/mod/github.com/spf13/cobra@v1.6.1/command.go:920\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\t/go/pkg/mod/github.com/spf13/cobra@v1.6.1/command.go:1044\ngithub.com/spf13/cobra.(*Command).Execute\n\t/go/pkg/mod/github.com/spf13/cobra@v1.6.1/command.go:968\ngithub.com/zcube/factorio-port-fixer/cmd.Execute\n\t/app/cmd/root.go:31\nmain.main\n\t/app/main.go:10\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:250"} when the container crashes the clients disconnect with "server not responding" despite the fact that the game server logs do look good. It doesn't just sometimes crash, it crashes within 10 minutes which makes it not playable. Related: https://github.com/SQLJames/factorio-server-charts/issues/62 @Kariton @samip5 It's probably a health check issue. Isn't the health check failing because it's <IP_ADDRESS>? upstream https://github.com/SQLJames/factorio-server-charts/blob/main/charts/factorio-server-charts/templates/deployment.yaml#L218 my chart https://github.com/ZCube/factorio-server-charts/blob/64f817b163a947a71aa53844afe6be78482191a0/charts/factorio-server-charts/templates/deployment.yaml#L205 There were no problems with chart-based server operation in the past (I did not include a health check), and the docker-based server I recently started operating has been running without problems for more than 3 days. healthcheck: test: curl --fail pingpong:34197/health || exit 1 interval: 20s retries: 5 start_period: 20s timeout: 10s hmpf. absolutely. you cannot listen on localhost and expect livenessProbes to work on a "public" site... the port_fixer should listen to "<IP_ADDRESS>" and then work as expected. nice catch! gonna test, confirm and patch ASAP. well. the livenessProbe check seems to be the problem - not the listen IP. 
Thu, Nov 7 2024 3:08:42 pm {"level":"info","ts":1730988522.3731894,"caller":"cmd/local.go:115","msg":"Healthcheck server started"} Thu, Nov 7 2024 3:08:42 pm {"level":"info","ts":1730988522.3731928,"caller":"cmd/local.go:126","msg":"Accepting a new packet","IP":"<IP_ADDRESS>","Port":34197} Thu, Nov 7 2024 3:08:42 pm ⇨ http server started on [::]:34197 Thu, Nov 7 2024 3:08:55 pm {"level":"info","ts":1730988535.616322,"caller":"cmd/local.go:135","msg":"Read from socket","Bytes":1,"Remote":"<IP_ADDRESS>:31497"} Thu, Nov 7 2024 3:08:55 pm {"level":"info","ts":1730988535.6163716,"caller":"cmd/local.go:126","msg":"Accepting a new packet","IP":"<IP_ADDRESS>","Port":34197} Thu, Nov 7 2024 3:08:56 pm {"level":"info","ts":1730988536.461232,"caller":"cmd/local.go:92","msg":"Wrote to socket","Bytes":3,"Remote":"<IP_ADDRESS>:31497"} Thu, Nov 7 2024 3:09:01 pm {"level":"error","ts":1730988541.462053,"caller":"cmd/local.go:103","msg":"net.ReadFromUDP() error: %sread udp [::]:58875: i/o timeout","stacktrace":"github.com/zcube/factorio-port-fixer/cmd.glob..func1.1\n\t/app/cmd/local.go:103\ngithub.com/labstack/echo/v4.(*Echo).add.func1\n\t/go/pkg/mod/github.com/labstack/echo/v4@v4.10.0/echo.go:546\ngithub.com/labstack/echo/v4.(*Echo).ServeHTTP\n\t/go/pkg/mod/github.com/labstack/echo/v4@v4.10.0/echo.go:633\nnet/http.serverHandler.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2947\nnet/http.(*conn).serve\n\t/usr/local/go/src/net/http/server.go:1991"} Thu, Nov 7 2024 3:09:06 pm {"level":"info","ts":1730988546.460379,"caller":"cmd/local.go:92","msg":"Wrote to socket","Bytes":3,"Remote":"<IP_ADDRESS>:31497"} Thu, Nov 7 2024 3:09:10 pm {"level":"info","ts":1730988550.6329925,"caller":"cmd/local.go:135","msg":"Read from socket","Bytes":1,"Remote":"<IP_ADDRESS>:31497"} Thu, Nov 7 2024 3:09:10 pm {"level":"info","ts":1730988550.6330307,"caller":"cmd/local.go:126","msg":"Accepting a new packet","IP":"<IP_ADDRESS>","Port":34197} Thu, Nov 7 2024 3:09:11 pm {"level":"error","ts":1730988551.4609427,"caller":"cmd/local.go:103","msg":"net.ReadFromUDP() error: %sread udp [::]:60609: i/o timeout","stacktrace":"github.com/zcube/factorio-port-fixer/cmd.glob..func1.1\n\t/app/cmd/local.go:103\ngithub.com/labstack/echo/v4.(*Echo).add.func1\n\t/go/pkg/mod/github.com/labstack/echo/v4@v4.10.0/echo.go:546\ngithub.com/labstack/echo/v4.(*Echo).ServeHTTP\n\t/go/pkg/mod/github.com/labstack/echo/v4@v4.10.0/echo.go:633\nnet/http.serverHandler.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2947\nnet/http.(*conn).serve\n\t/usr/local/go/src/net/http/server.go:1991"} Thu, Nov 7 2024 3:09:16 pm {"level":"info","ts":1730988556.4605217,"caller":"cmd/local.go:92","msg":"Wrote to socket","Bytes":3,"Remote":"<IP_ADDRESS>:31497"} Thu, Nov 7 2024 3:09:17 pm {"level":"error","ts":1730988557.5172606,"caller":"cmd/local.go:132","msg":"net.ReadFromUDP() error: read udp [::]:34197: use of closed network connection","stacktrace":"github.com/zcube/factorio-port-fixer/cmd.glob..func1.3\n\t/app/cmd/local.go:132\ngolang.org/x/sync/errgroup.(*Group).Go.func1\n\t/go/pkg/mod/golang.org/x/sync@v0.1.0/errgroup/errgroup.go:75"} Thu, Nov 7 2024 3:09:21 pm {"level":"error","ts":1730988561.4610858,"caller":"cmd/local.go:103","msg":"net.ReadFromUDP() error: %sread udp [::]:36376: i/o 
timeout","stacktrace":"github.com/zcube/factorio-port-fixer/cmd.glob..func1.1\n\t/app/cmd/local.go:103\ngithub.com/labstack/echo/v4.(*Echo).add.func1\n\t/go/pkg/mod/github.com/labstack/echo/v4@v4.10.0/echo.go:546\ngithub.com/labstack/echo/v4.(*Echo).ServeHTTP\n\t/go/pkg/mod/github.com/labstack/echo/v4@v4.10.0/echo.go:633\nnet/http.serverHandler.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2947\nnet/http.(*conn).serve\n\t/usr/local/go/src/net/http/server.go:1991"} Thu, Nov 7 2024 3:09:21 pm {"level":"info","ts":1730988561.7004497,"caller":"cmd/local.go:190","msg":"graceful shutting down"} Thu, Nov 7 2024 3:09:21 pm {"level":"error","ts":1730988561.7005007,"caller":"cmd/local.go:195","msg":"exit reason: read udp [::]:34197: use of closed network connection","stacktrace":"github.com/zcube/factorio-port-fixer/cmd.glob..func1\n\t/app/cmd/local.go:195\ngithub.com/spf13/cobra.(*Command).execute\n\t/go/pkg/mod/github.com/spf13/cobra@v1.6.1/command.go:920\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\t/go/pkg/mod/github.com/spf13/cobra@v1.6.1/command.go:1044\ngithub.com/spf13/cobra.(*Command).Execute\n\t/go/pkg/mod/github.com/spf13/cobra@v1.6.1/command.go:968\ngithub.com/zcube/factorio-port-fixer/cmd.Execute\n\t/app/cmd/root.go:31\nmain.main\n\t/app/main.go:10\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:250"} periodSeconds: 10 initialDelaySeconds: 5 failureThreshold: 3 the container gets killed. now udp [::]:34197 looks like an ipv6 problem to me. but i dont have dualstack enabled... is ReadFromUDP actually related to http probes? Thu, Nov 7 2024 3:37:14 pm ⇨ http server started on [::]:34197 Thu, Nov 7 2024 3:37:14 pm {"level":"info","ts":1730990234.3001275,"caller":"cmd/local.go:115","msg":"Healthcheck server started"} Thu, Nov 7 2024 3:37:14 pm {"level":"info","ts":1730990234.3001416,"caller":"cmd/local.go:126","msg":"Accepting a new packet","IP":"<IP_ADDRESS>","Port":34197} Thu, Nov 7 2024 3:37:27 pm {"level":"info","ts":1730990247.8176143,"caller":"cmd/local.go:135","msg":"Read from socket","Bytes":1,"Remote":"<IP_ADDRESS>:31497"} Thu, Nov 7 2024 3:37:27 pm {"level":"info","ts":1730990247.8176613,"caller":"cmd/local.go:126","msg":"Accepting a new packet","IP":"<IP_ADDRESS>","Port":34197} Thu, Nov 7 2024 3:37:28 pm {"level":"info","ts":1730990248.2562778,"caller":"cmd/local.go:92","msg":"Wrote to socket","Bytes":3,"Remote":"<IP_ADDRESS>:31497"} Thu, Nov 7 2024 3:37:33 pm {"level":"error","ts":1730990253.256902,"caller":"cmd/local.go:103","msg":"net.ReadFromUDP() error: %sread udp [::]:55627: i/o timeout","stacktrace":"github.com/zcube/factorio-port-fixer/cmd.glob..func1.1\n\t/app/cmd/local.go:103\ngithub.com/labstack/echo/v4.(*Echo).add.func1\n\t/go/pkg/mod/github.com/labstack/echo/v4@v4.10.0/echo.go:546\ngithub.com/labstack/echo/v4.(*Echo).ServeHTTP\n\t/go/pkg/mod/github.com/labstack/echo/v4@v4.10.0/echo.go:633\nnet/http.serverHandler.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2947\nnet/http.(*conn).serve\n\t/usr/local/go/src/net/http/server.go:1991"} Thu, Nov 7 2024 3:37:38 pm {"level":"info","ts":1730990258.256299,"caller":"cmd/local.go:92","msg":"Wrote to socket","Bytes":3,"Remote":"<IP_ADDRESS>:31497"} Thu, Nov 7 2024 3:37:42 pm {"level":"info","ts":1730990262.8342316,"caller":"cmd/local.go:135","msg":"Read from socket","Bytes":1,"Remote":"<IP_ADDRESS>:31497"} Thu, Nov 7 2024 3:37:42 pm {"level":"info","ts":1730990262.8342786,"caller":"cmd/local.go:126","msg":"Accepting a new packet","IP":"<IP_ADDRESS>","Port":34197} Thu, Nov 7 2024 3:37:43 pm 
{"level":"error","ts":1730990263.2572982,"caller":"cmd/local.go:103","msg":"net.ReadFromUDP() error: %sread udp [::]:39508: i/o timeout","stacktrace":"github.com/zcube/factorio-port-fixer/cmd.glob..func1.1\n\t/app/cmd/local.go:103\ngithub.com/labstack/echo/v4.(*Echo).add.func1\n\t/go/pkg/mod/github.com/labstack/echo/v4@v4.10.0/echo.go:546\ngithub.com/labstack/echo/v4.(*Echo).ServeHTTP\n\t/go/pkg/mod/github.com/labstack/echo/v4@v4.10.0/echo.go:633\nnet/http.serverHandler.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2947\nnet/http.(*conn).serve\n\t/usr/local/go/src/net/http/server.go:1991"} Thu, Nov 7 2024 3:37:48 pm {"level":"info","ts":1730990268.256682,"caller":"cmd/local.go:92","msg":"Wrote to socket","Bytes":3,"Remote":"<IP_ADDRESS>:31497"} Thu, Nov 7 2024 3:37:53 pm {"level":"error","ts":1730990273.2570317,"caller":"cmd/local.go:103","msg":"net.ReadFromUDP() error: %sread udp [::]:44050: i/o timeout","stacktrace":"github.com/zcube/factorio-port-fixer/cmd.glob..func1.1\n\t/app/cmd/local.go:103\ngithub.com/labstack/echo/v4.(*Echo).add.func1\n\t/go/pkg/mod/github.com/labstack/echo/v4@v4.10.0/echo.go:546\ngithub.com/labstack/echo/v4.(*Echo).ServeHTTP\n\t/go/pkg/mod/github.com/labstack/echo/v4@v4.10.0/echo.go:633\nnet/http.serverHandler.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2947\nnet/http.(*conn).serve\n\t/usr/local/go/src/net/http/server.go:1991"} Thu, Nov 7 2024 3:37:53 pm {"level":"error","ts":1730990273.2741623,"caller":"cmd/local.go:132","msg":"net.ReadFromUDP() error: read udp [::]:34197: use of closed network connection","stacktrace":"github.com/zcube/factorio-port-fixer/cmd.glob..func1.3\n\t/app/cmd/local.go:132\ngolang.org/x/sync/errgroup.(*Group).Go.func1\n\t/go/pkg/mod/golang.org/x/sync@v0.1.0/errgroup/errgroup.go:75"} Thu, Nov 7 2024 3:37:53 pm {"level":"info","ts":1730990273.2743487,"caller":"cmd/local.go:190","msg":"graceful shutting down"} Thu, Nov 7 2024 3:37:53 pm {"level":"error","ts":1730990273.2743661,"caller":"cmd/local.go:195","msg":"exit reason: read udp [::]:34197: use of closed network connection","stacktrace":"github.com/zcube/factorio-port-fixer/cmd.glob..func1\n\t/app/cmd/local.go:195\ngithub.com/spf13/cobra.(*Command).execute\n\t/go/pkg/mod/github.com/spf13/cobra@v1.6.1/command.go:920\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\t/go/pkg/mod/github.com/spf13/cobra@v1.6.1/command.go:1044\ngithub.com/spf13/cobra.(*Command).Execute\n\t/go/pkg/mod/github.com/spf13/cobra@v1.6.1/command.go:968\ngithub.com/zcube/factorio-port-fixer/cmd.Execute\n\t/app/cmd/root.go:31\nmain.main\n\t/app/main.go:10\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:250"} livenessProbe: failureThreshold: 3 httpGet: path: /health port: port-fixer scheme: HTTP initialDelaySeconds: 10 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 5 ports: - containerPort: 34197 name: port-fixer protocol: TCP unless http v3 (quic), yes. http is tcp. im unable to figure out what exactly is going on. removing the probe entirely is working. greater thresholds or delays do not help. to me it looks like an issue in the /health endpoint or something related. is there a way to get debug logging going? Liveness probe failed: Get "http://<IP_ADDRESS>:34197/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers) root@factorio-server-7c8497fdc-srv9l:/# curl <IP_ADDRESS>:34197/health OKroot@factorio-server-7c8497fdc-srv9l:/# i have disabled the livenessProbe upstream. i dont understand why that is a problem. 
from within the factorio container it works with listener <IP_ADDRESS> (i think i have tested that before hardcoding it in the chart...) port-fixer: - args: - local - '--ip=<IP_ADDRESS>' - '--port=34197' - '--remotePort=31497' {"level":"info","ts":1730994447.8460875,"caller":"cmd/local.go:115","msg":"Healthcheck server started"} {"level":"info","ts":1730994447.8460903,"caller":"cmd/local.go:126","msg":"Accepting a new packet","IP":"<IP_ADDRESS>","Port":34197} ⇨ http server started on [::]:34197 probe test: root@factorio-server-5bcbf95db8-s4gft:/# curl <IP_ADDRESS>:34197/health OKroot@factorio-server-5bcbf95db8-s4gft:/# curl <IP_ADDRESS>:34197/health # pod IP OKroot@factorio-server-5bcbf95db8-s4gft:/# so listening to <IP_ADDRESS> or <IP_ADDRESS> does not make a difference here. re-tested both and neither of them worked as expected. higher timeouts / thresholds / delay did not help. but when deploying rcon-api within the same pod that probe does work. @Kariton I finally remembered. The /health API was an API for checking the health of the Factorio server, not factorio-port-fixer. would you mind adding an "correct" /healthz endpoint? would you mind adding a "correct" /healthz endpoint? use /health services: pingpong: image: ghcr.io/zcube/factorio-port-fixer:main command: /factorio-port-fixer local --ip=<IP_ADDRESS> --port=34197 --remotePort=${PORT:-34197} healthcheck: test: curl --fail <IP_ADDRESS>:34197/health || exit 1 factorio: image: factoriotools/factorio:stable environment: - PORT=${PORT:-34197} healthcheck: test: curl --fail pingpong:34197/health_for_factorio || exit 1 I am closing this as I believe it has been resolved in v1.0.4.
2025-04-01T06:37:43.161600
2015-05-08T16:23:38
74417165
{ "authors": [ "Shishi666", "bakura10", "danizord" ], "license": "bsd-3-clause", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3099", "repo": "ZF-Commons/zfc-rbac", "url": "https://github.com/ZF-Commons/zfc-rbac/issues/295" }
gharchive/issue
Problem with my CustomRoleProvider Hello, I have perhaps not understood the doc, but I have declared this in my zfc_rbac.global.php file : 'role_provider_manager' => [ 'factories' => [ 'role_db_provider' => 'ShishiUser\Factory\RoleProviderFactory', ] ], 'role_provider' => [ 'role-db-provider' => [ ], ], My factory : <?php namespace ShishiUser\Factory; use Zend\ServiceManager\FactoryInterface; use ShishiUser\Role\RoleDbProvider; class RoleProviderFactory implements FactoryInterface { /* (non-PHPdoc) * @see \Zend\ServiceManager\FactoryInterface::createService() */ public function createService(\Zend\ServiceManager\ServiceLocatorInterface $serviceLocator) { // TODO Auto-generated method stub $sm = $serviceLocator->get('ServiceManager'); $roleRepository = $sm->get('shishi-user-role-repository'); $roleDbProvider = new RoleDbProvider(); $roleDbProvider->setRoleRepository($roleRepository); return $roleDbProvider; } } ?> And the RoleProviderClass : <?php namespace ShishiUser\Role; use ZfcRbac\Role\RoleProviderInterface; use ShishiUser\Repository\RoleRepository; use ZfcRbac\Exception\RoleNotFoundException; use Rbac\Role\HierarchicalRole; use Rbac\Role\Role; class RoleDbProvider implements RoleProviderInterface{ public $roleRepository; public function getRoles(array $roleNames) { $roles = $this->roleRepository->findByLibelles($roleNames); $rbacRoles = []; if (count($roles) >= count($roleNames)) { foreach ($roles as $role){ if ($role->getChilds() !== null){ $rbacRole = new HierarchicalRole($role->getRol_libelle()); foreach ( (array) $role->getChilds() as $childRole){ $rbacRole->addChild($childRole); } }else{ $rbacRole = new Role($role->getRol_libelle()); } $permissions = ($role->getRol_permissions() !== null) ? [] : $role->getRol_permissions(); if ($role->getRol_permissions() !== null){ foreach ($permissions as $permission){ $rbacRole->addPermission($permission->getPerm_libelle()); } } $rbacRoles[] = $rbacRole; } return $rbacRoles; } // We have roles that were asked but couldn't be found in database... problem! foreach ($roles as &$role) { $role = $role->getName(); } throw new RoleNotFoundException(sprintf('Some roles were asked but could not be loaded from database: %s', implode(', ', array_diff($roleNames, $roles)))); } public function setOptions($options) { $this->options = $options; return $this; } public function setRoleRepository(RoleRepository $roleRepository) { $this->roleRepository = $roleRepository; return $this; } } ?> I am trying to retrieve roles from my database; could you explain what I am doing wrong, please? Thank you in advance, cordially. Ping @bakura10 for translation :P Shishi, please remember to write in English, I'm the only French speaker here ^^. Also remember to add syntax highlighting to your code, it's impossible to read (https://help.github.com/articles/github-flavored-markdown/#syntax-highlighting). Haha yup @danizord, I'll try to help him ;). I have edited my question Hi, I think the error comes from this: class RoleProviderFactory implements FactoryInterface { /* (non-PHPdoc) * @see \Zend\ServiceManager\FactoryInterface::createService() */ public function createService(\Zend\ServiceManager\ServiceLocatorInterface $serviceLocator) { // TODO Auto-generated method stub $sm = $serviceLocator->get('ServiceManager'); $roleRepository = $sm->get('shishi-user-role-repository'); $roleDbProvider = new RoleDbProvider(); $roleDbProvider->setRoleRepository($roleRepository); return $roleDbProvider; } } This object is constructed using a plugin manager (the role provider plugin manager).
The $serviceLocator you received in the factory is a plugin manager. If you want to retrieve the main service locator, you have to replace: $sm = $serviceLocator->get('ServiceManager'); with: $sm = $serviceLocator->getServiceLocator(); getServiceLocator is a method defined on each plugin manager that allows you to retrieve the main plugin manager. Let me know if that works! And thanks for translating into English ;). Hi, I get the same error when I replace this code : $sm = $serviceLocator->get('ServiceManager'); with $sm = $serviceLocator->getServiceLocator(); the error : Fatal error: Uncaught exception 'Zend\ServiceManager\Exception\ServiceNotFoundException' with message 'ZfcRbac\Role\RoleProviderPluginManager::get was unable to fetch or create an instance for role-db-provider' in E:\Zend studio 12 Workspace\ShishiBlog\vendor\zendframework\zendframework\library\Zend\ServiceManager\ServiceManager.php:555 Stack trace: #0 E:\Zend studio 12 Workspace\ShishiBlog\vendor\zendframework\zendframework\library\Zend\ServiceManager\AbstractPluginManager.php(116): Zend\ServiceManager\ServiceManager->get('role-db-provide...', true) #1 E:\Zend studio 12 Workspace\ShishiBlog\vendor\zf-commons\zfc-rbac\src\ZfcRbac\Factory\RoleServiceFactory.php(56): Zend\ServiceManager\AbstractPluginManager->get('role-db-provide...', Array) #2 [internal function]: ZfcRbac\Factory\RoleServiceFactory->createService(Object(Zend\ServiceManager\ServiceManager), 'zfcrbacservicer...', 'ZfcRbac\\Service...') #3 E:\Zend studio 12 Workspace\ShishiBlog\vendor\zendframework\zendframework\library\Zend\ServiceManager\ServiceManager.php( in E:\Zend studio 12 Workspace\ShishiBlog\vendor\zendframework\zendframework\library\Zend\ServiceManager\ServiceManager.php on line 555 thanks a lot for your help Ha, maybe I get it. In your "role_provider" key you called it "role-db-provider", but in your plugin manager config you called it "role_db_provider". Those are different names! I'll assume that the problem was the incorrect config key. Closing due to lack of activity.
2025-04-01T06:37:43.199735
2017-10-30T01:18:36
269456512
{ "authors": [ "Zarel", "sooham" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3100", "repo": "Zarel/Pokemon-Showdown", "url": "https://github.com/Zarel/Pokemon-Showdown/issues/4094" }
gharchive/issue
Bug: Unown has no levels in sent JSON Hi, for battle type gen7randombattle. I noticed Unown JSON does not have details attribute sent when the request action is sent. Example websocket data recv from server. |request|{"active":[{"moves":[{"move":"Mirror Coat","id":"m ...: irrorcoat","pp":32,"maxpp":32,"target":"scripted","disabled":false},{"move":"Le ...: ech Life","id":"leechlife","pp":16,"maxpp":16,"target":"normal","disabled":fals ...: e},{"move":"Liquidation","id":"liquidation","pp":16,"maxpp":16,"target":"normal ...: ","disabled":false},{"move":"Toxic","id":"toxic","pp":16,"maxpp":16,"target":"n ...: ormal","disabled":false}]}],"side":{"name":"Sooham Rafiz","id":"p1","pokemon":[ ...: {"ident":"p1: Araquanid","details":"Araquanid, L79, F","condition":"237/237","a ...: ctive":true,"stats":{"atk":156,"def":191,"spa":125,"spd":254,"spe":112},"moves" ...: :["mirrorcoat","leechlife","liquidation","toxic"],"baseAbility":"waterbubble"," ...: item":"leftovers","pokeball":"pokeball","ability":"waterbubble"},{"ident":"p1: ...: Unown","details":"Unown","condition":"258/258","active":false,"stats":{"atk":14 ...: 9,"def":153,"spa":201,"spd":153,"spe":153},"moves":["hiddenpowerpsychic60"],"ba ...: seAbility":"levitate","item":"choicespecs","pokeball":"pokeball","ability":"lev ...: itate"},{"ident":"p1: Aurorus","details":"Aurorus, L81, M","condition":"331/331 ...: ","active":false,"stats":{"atk":129,"def":163,"spa":207,"spd":196,"spe":141},"m ...: oves":["freezedry","stealthrock","blizzard","ancientpower"],"baseAbility":"snow ...: warning","item":"leftovers","pokeball":"pokeball","ability":"snowwarning"},{"id ...: ent":"p1: Tapu Fini","details":"Tapu Fini, L75","condition":"229/229","active": ...: false,"stats":{"atk":117,"def":216,"spa":186,"spd":239,"spe":171},"moves":["moo ...: nblast","surf","calmmind","substitute"],"baseAbility":"mistysurge","item":"left ...: overs","pokeball":"pokeball","ability":"mistysurge"},{"ident":"p1: Garbodor","d ...: etails":"Garbodor, L81, F","condition":"262/262","active":false,"stats":{"atk": ...: 201,"def":179,"spa":144,"spd":179,"spe":168},"moves":["toxic","gunkshot","haze" ...: ,"toxicspikes"],"baseAbility":"aftermath","item":"blacksludge","pokeball":"poke ...: ball","ability":"aftermath"},{"ident":"p1: Claydol","details":"Claydol, L81","c ...: ondition":"230/230","active":false,"stats":{"atk":160,"def":217,"spa":160,"spd" ...: :241,"spe":168},"moves":["toxic","earthquake","icebeam","rapidspin"],"baseAbili ...: ty":"levitate","item":"leftovers","pokeball":"pokeball","ability":"levitate"}]} ...: ,"rqid":2} notice the following {u'ability': u'levitate', u'active': False, u'baseAbility': u'levitate', u'condition': u'258/258', u'details': u'Unown', u'ident': u'p1: Unown', u'item': u'choicespecs', u'moves': [u'hiddenpowerpsychic60'], u'pokeball': u'pokeball', u'stats': {u'atk': 149, u'def': 153, u'spa': 201, u'spd': 153, u'spe': 153}} Usually details is of format pokemon_ident level gender_if_any i.e Araquanid, L79, F. level is left off of details if it's 100. Unown is level 100 in Random Battle. details is documented in PROTOCOL.md in the |switch| major action. https://github.com/Zarel/Pokemon-Showdown/blob/master/PROTOCOL.md#major-actions Relevant excerpt of PROTOCOL.md: DETAILS is a comma-separated list of all information about a pokemon visible on the battle screen: species, shininess, gender, and level. So it starts with SPECIES, adding , shiny if it's shiny, , M if it's male, , F if it's female, , L## if it's not level 100.
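For illustration, a client could assemble the DETAILS string described above roughly like this (a hypothetical helper that follows the order seen in the examples, not actual Pokemon-Showdown code):

```python
def build_details(species, level=100, gender=None, shiny=False):
    parts = [species]
    if level != 100:
        parts.append(f"L{level}")      # level is omitted at 100, which is why Unown shows no level
    if gender in ("M", "F"):
        parts.append(gender)
    if shiny:
        parts.append("shiny")
    return ", ".join(parts)

print(build_details("Araquanid", level=79, gender="F"))  # Araquanid, L79, F
print(build_details("Unown"))                            # Unown
```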
2025-04-01T06:37:43.206054
2019-03-08T17:41:39
418888112
{ "authors": [ "Marty-D", "scheibo" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3101", "repo": "Zarel/Pokemon-Showdown", "url": "https://github.com/Zarel/Pokemon-Showdown/issues/5276" }
gharchive/issue
Refactor Gen 1/Stadium code to eliminate the modifiedStats table Can you also see if you can eliminate Gen 1's reliance on the modifiedStats table? It probably doesn't need to be separate from storedStats. Originally posted by @Zarel in https://github.com/Zarel/Pokemon-Showdown/pull/5274#issuecomment-470981380 I'm not sure what the idea behind using modifiedStats was, but it seems like we'd still need to use more than one stat table just for Transformed Pokemon. Gen 1 Transform keeps the user's original stats somewhere, and copies the current modified stats of the target, and stores the original stats of the target for the purposes of calculating damage during a critical hit. I'm not sure what the idea behind using modifiedStats was, but it seems like we'd still need to use more than one stat table just for Transformed Pokemon. We currently have baseStoredStats (user's original stats) and storedStats (where the transformed stats would be copied to). and copies the current modified stats of the target transformInto currently copies over volatiles and boosts to handle this, but I think this is where Gen 1 is weird (thanks to the Crystal_ discovery) and where the ordering of how boosts got applied relative to status forcing stat recalculation matters? And thus we need some way of tracking that, hence the current modifiedStats table? and stores the original stats of the target for the purposes of calculating damage during a critical hit. Do we do this today :S? It seems like we'd need another stats table for that. From what you're saying, it seems we need at least 3 stats tables (user's original, target's modified + original), though I'm not sure the 3 we currently have cover everything. Yes, I think baseStoredStats, storedStats, and a third table would be necessary just for Transform. The current modifiedStats can be rolled into storedStats safely. I also think there's probably a bug with how Transform currently calculates stats but I haven't had time to look into it yet. https://www.smogon.com/forums/threads/gen-1-and-tradebacks-dev-post-bugs-here.3524844/page-14#post-8064239
2025-04-01T06:37:43.208113
2015-09-14T20:27:44
106421220
{ "authors": [ "Marty-D", "ascriptmaster" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3102", "repo": "Zarel/Pokemon-Showdown", "url": "https://github.com/Zarel/Pokemon-Showdown/pull/2147" }
gharchive/pull-request
Add -ability tag to Imposter activation The other option is to modify transformInto in such a way that it broadcasts -transform messages to use the correct [from] parameter, then add a client change to check for it. Either way will allow Imposter to be properly tracked by the client. I was going to say the other option would be better, since otherwise Imposter will be revealed even when it doesn't activate against Illusion and substitutes. Imposter doesn't activate on Illusion and substitutes? Oh, my bad.
2025-04-01T06:37:43.220717
2021-07-29T17:23:06
956053562
{ "authors": [ "dconnolly", "teor2345" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3104", "repo": "ZcashFoundation/zebra", "url": "https://github.com/ZcashFoundation/zebra/issues/2544" }
gharchive/issue
Reject checkpointed blocks that would increase note commitment trees beyond their max sizes Motivation This was an implicit consensus rule, now made explicit. Specifications Designs Related Work Zebra already returns an error in all relevant cases: the non-finalized state returns an error if a note commitment tree becomes full the checkpoint verifier binds the transaction IDs to the block header v1-v4 transactions bind all transaction data to the transaction ID v5 transactions bind note commitments to the transaction ID
2025-04-01T06:37:43.256834
2020-07-11T13:18:53
655207250
{ "authors": [ "Zefau", "cyptus" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3106", "repo": "Zefau/ioBroker.nuki-extended", "url": "https://github.com/Zefau/ioBroker.nuki-extended/issues/73" }
gharchive/issue
log entries are off by one I am using version 2.2.6 of the adapter with an nuki soft bridge adapter installation. The entries in nuki-extended.0.smartlocks.<lock>.logs are always missing the last entry. on('nuki-extended.0.smartlocks.<lock>.logs', () => { const log = JSON.parse(getState('nuki-extended.0.smartlocks.<lock>.logs').val); console.log(log[0]); }); is only logging the action before the last one. I am able to validate this by manually checking the log entries from the object after any action. How can i get the most recent action (with triggering user) triggered on the lock? I can't confirm this with my installation. The log entries are retrieved from the Web API, which means that it matches the log you can find on https://web.nuki.io/. Could you please check if the most recent entries are available or missing there as well? Furthermore, the up-to-dateness of the log depends on the refresh time you have set in the adapter settings for the Nuki Web API. What refresh frequency have you set? Thanks for your input @Zefau Refresh frequency has been to 0, but the log object has been refreshed / changed on actions to the nuki (always 1 entry off). I changed the frequency to 5 seconds, now its working fine. Where does the log changes come from while the setting is beeing set to 0? could the issue be https://github.com/Zefau/ioBroker.nuki-extended/blob/f655657a5f8689fb684a8f7c6eb2c7463cdf879a/nuki-extended.js#L386 ? on callback by the bridge for any action, the adapter is refreshing the webApi as well. But if there is any latency on the bridge -> nuki web log, the log may not have the most recent update by the bridge yet. This would explain the off-by-one error. Yes, I guess that's it I quickly added a timeout for this case. Could you install the current Github version (no version number change for now) and verify the fix? kind of working - the nuki log is really weird. I tried verifying the delay by stopping time from action to logs available in https://web.nuki.io/#/pages/activity-log (pressing "clear filter" continuously, as this fetchs the logs again). The logs are sometimes delayed by 2-3 minutes - not sure if this has something todo with nukis android bridge. is it currently possible to trigger nukiWebApi.getSmartlockLogs by script? So it is fixed with regards to the ioBroker adapter, but not fixed with regards to Nuki Bridge vs. Web API ? I assigned foreign issue to this. Let's see what answer you get in the Nuki Developer forum. Yes, it cannot be fixed within this adapter as the nuki web logs are delayed themself. The software bridge is not able to provide logs recording to the bridge api documentation, not sure about the details the hardware bridge could provide. Hopefully nuki will provide a solution from their side, otherwise this adapter should mention the log delay within the documentation and maybe provide some kind of trigger object to fetch logs from web api again. This way scripts would be able to handle the unknown delay to get details about the last action from a bridge-callback (e.g. the user which triggered the action). Also nuki notification callbacks could be a solution for this, but they are still in beta. I will keep this up2date if there is any response by nuki.
2025-04-01T06:37:43.264434
2017-10-26T01:09:40
268598064
{ "authors": [ "ZehMatt" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3107", "repo": "ZehMatt/Lambda", "url": "https://github.com/ZehMatt/Lambda/issues/71" }
gharchive/issue
Airboat sometimes corrupts the view. Seems to happen when one bumps into stuff with a bigger latency. This was fixed in Garrys Mod, if it happens again I'll reopn it.
2025-04-01T06:37:43.290348
2022-05-24T16:05:38
1246783767
{ "authors": [ "NOiR-07", "SAMRIDHISINHA" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3108", "repo": "ZeroOctave/ZeroOctave-Javascript-Projects", "url": "https://github.com/ZeroOctave/ZeroOctave-Javascript-Projects/pull/700" }
gharchive/pull-request
2048 Game 🛠️ Fixes Issue (Number) #677 I have made a 2048 game using javascript ✅ Check List (Check all the applicable boxes) [ ✅ ] My code doesn't break any part of the project (Zero Octave-Javascript-Projects). [ ✅ ] This PR does not contain plagiarized content. [ ✅ ] My Addition/Changes works properly and matches the overall repo pattern. [ ✅ ] The title of my pull request is a short description of the requested changes. 📷 Screenshots don't delete any line of code from readme just add yours @Astrodevil Please review the changes once. I have updated the README.md and cards.json files.
2025-04-01T06:37:43.330256
2024-01-10T08:07:17
2073816723
{ "authors": [ "Zexxx" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3109", "repo": "Zexxx/Smart-Floor-Collider", "url": "https://github.com/Zexxx/Smart-Floor-Collider/issues/2" }
gharchive/issue
Rotating Causes Collider to Jitter on Y-Axis on Remote Client For reasons I'm currently not able to investigate right now. Rotating the player object causes the Collider game object to experience drifting on the Y-Axis. This results from the delayed loop and some constraint behavior or incorrect understanding. The issue appears to be unrelated to my setup. Rather likely a bug with VRChat? https://github.com/Zexxx/Smart-Floor-Collider/assets/21136842/450426da-bf49-4abd-9ed2-ac1374d17b78
2025-04-01T06:37:43.505701
2022-06-13T13:07:27
1269411837
{ "authors": [ "Eijebong", "ZoeyR" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3110", "repo": "ZoeyR/windows-accesstoken", "url": "https://github.com/ZoeyR/windows-accesstoken/pull/1" }
gharchive/pull-request
Fix reading Groups token information when compiling for x86_64 The struct we're reading from the buffer is defined as such: typedef struct _TOKEN_GROUPS { DWORD GroupCount; SID_AND_ATTRIBUTES *Groups[]; } TOKEN_GROUPS, *PTOKEN_GROUPS; which would translate to this in rust: pub struct _TOKEN_GROUPS { pub GroupCount: u32, Groups: *const c_void, } The previous code used to assume that the offset of Groups was 4, which is true on i686 but not on x86_64. Output of rustc -Zprint-type-sizes on i686 print-type-size type: `_TOKEN_GROUPS`: 8 bytes, alignment: 4 bytes print-type-size field `.GroupCount`: 4 bytes print-type-size field `.Groups`: 4 bytes Output of rustc -Zprint-type-sizes on x86_64 print-type-size type: `_TOKEN_GROUPS`: 16 bytes, alignment: 8 bytes print-type-size field `.GroupCount`: 4 bytes print-type-size padding: 4 bytes print-type-size field `.Groups`: 8 bytes, alignment: 8 bytes Oh dear god I forgot I made this. I'll assume you tested this and its all good.
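For reference, the same layout difference can be reproduced with a quick ctypes sketch (illustrative only, not part of this PR): on a 64-bit build the pointer member is 8-byte aligned, so the compiler inserts 4 bytes of padding after GroupCount and Groups lands at offset 8 instead of 4.

```python
import ctypes

class TOKEN_GROUPS(ctypes.Structure):
    # simplified stand-in for the Windows struct: a DWORD followed by a pointer
    _fields_ = [
        ("GroupCount", ctypes.c_uint32),
        ("Groups", ctypes.c_void_p),
    ]

print(ctypes.sizeof(TOKEN_GROUPS))   # 8 on 32-bit builds, 16 on 64-bit builds
print(TOKEN_GROUPS.Groups.offset)    # 4 on 32-bit builds, 8 on 64-bit builds
```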
2025-04-01T06:37:43.507560
2022-10-05T05:22:56
1397248998
{ "authors": [ "Zohaib-Sathio", "pritika163" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3111", "repo": "Zohaib-Sathio/Hacktoberfest_22", "url": "https://github.com/Zohaib-Sathio/Hacktoberfest_22/pull/76" }
gharchive/pull-request
Added a java program to find the number of unique characters in a string This java program finds the number of unique characters in a string. @pritika163 kindly move program to Java/Data Structures/String problems @pritika163 Your PR has been merged successfully, Thank you for your contribution. Give repo a star if it was useful. Happy hacking.
2025-04-01T06:37:43.509793
2024-08-21T00:09:38
2476738613
{ "authors": [ "Zoinkwiz", "larsiny" ], "license": "BSD-2-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3112", "repo": "Zoinkwiz/quest-helper", "url": "https://github.com/Zoinkwiz/quest-helper/issues/1710" }
gharchive/issue
A Soul's Bane: Confusion Beast Room The quest helper says to find the correct confusion beast and highlights all of them. The correct confusion beast to attack is always the one with NPC ID 1067. Fix should be pretty simple: remove this line and update the message above it: https://github.com/Zoinkwiz/quest-helper/blob/master/src/main/java/com/questhelper/helpers/quests/asoulsbane/ASoulsBane.java#L249-L250 This is intentional, as we don't want to provide information to the player that they'd otherwise be unable to obtain themselves, as adjusted in https://github.com/Zoinkwiz/quest-helper/commit/530d2ee28fd4a6a712e2da8a1bcde2aa8646dccf.
2025-04-01T06:37:43.603957
2018-09-05T13:26:00
357228672
{ "authors": [ "Zrips", "smmmadden", "weberlepecheur" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3113", "repo": "Zrips/Jobs", "url": "https://github.com/Zrips/Jobs/issues/230" }
gharchive/issue
SkeletonStray & ZombieHusk Hi, I report those two creatures don't gain any payments from the jobs for killing them, i just installed the plugin (which means i have the default jobConfig.yml) I just add the Stray and Husk with same gain from their parents (skeleton and zombie) but when I kill them, there nothing. STRAY: income: 10.0 points: 10 experience: 15 Spigot 1.12.2 Jobs 4.8.0 Vault 1.6.1 there are many items/blocks missing from the default jobConfig.yml file. It hasn't been kept as current as the release, so you'll need to add/remove the items for each job as you like. FWIW - I've been going thru the same file the last couple days and revising it quite a bit to pull in more recent released items, converting all numeric ID's to namespaceID's and making each job be more of real life scenario. I'll be sharing with @Zrips once I'm done so he can decide to include it as a new default, revise as he deems necessary or not use it at all. :-) I know there some missing, that's why I add them in my jobs but like I said earlier stray and husk didn't work and I don't know why because polar_bear, evoker, etc... everything work expect for these two. I even try to put their ID from words_en.yml but without success. 081edff7cb733cdcf528bd1f1567ddbccb0357bb
2025-04-01T06:37:43.606986
2017-11-08T16:49:08
272269734
{ "authors": [ "Zsailer" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3114", "repo": "Zsailer/phylopandas", "url": "https://github.com/Zsailer/phylopandas/issues/7" }
gharchive/issue
phylopandas DataFrame is not really a phylopandas DataFrame Need to figure out how to make sure all pandas.DataFrame methods return phylopandas.DataFrames when called by phylopandas. Solved in #9
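A minimal sketch of the usual pandas-subclassing fix, overriding the _constructor hook so derived frames keep the subclass; this shows the general technique and is not necessarily the exact change that closed #9:

```python
import pandas as pd

class PhyloDataFrame(pd.DataFrame):
    @property
    def _constructor(self):
        # pandas calls this whenever an operation builds a new frame,
        # so slices, copies, joins, etc. come back as PhyloDataFrame
        return PhyloDataFrame

df = PhyloDataFrame({"id": ["seq1", "seq2"], "sequence": ["ATG", "GTA"]})
print(type(df.head()))  # <class '__main__.PhyloDataFrame'>
```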
2025-04-01T06:37:43.620015
2021-11-24T18:02:21
1062725803
{ "authors": [ "leandrorodrigueszup", "matheusalcantarazup" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3115", "repo": "ZupIT/horusec", "url": "https://github.com/ZupIT/horusec/issues/815" }
gharchive/issue
Be able to set working directory or use the project path as default working directory What would you like to be added: As far as I can tell, horusec assumes the current directory is the working dir. It would be useful if horusec supported a different working dir or treated the project path as such. Actual usage: $ cd `project directory` (where my horusec-config.json is) $ ./horusec start -e -p ./ If I don't do the first step, horusec does not find the horusec-config.json file. It would be very nice if it did. I could pass the working dir: $ ./horusec start -e -p ./ -W ./ Or use the project path as the working dir. (maybe it could be a breaking change) Why is this needed: Basically, I wanted to create a Gradle task that wraps the horusec binary. I couldn't make it work because of this behavior. Hi @leandrorodrigueszup thanks for the suggestion. Would using --config-file-path not help you in this case? ./horusec start -e -p ./ --config-file-path path/to/your/config-file Hi @matheusalcantarazup thanks for the reply! Yes, it works! :-) But it's still a little verbose and unnatural in my opinion. Anyway, I can move on now. Thanks! This is good! I'll leave this issue open for further discussion. Thanks very much for your suggestion.
2025-04-01T06:37:43.645672
2023-01-06T17:11:05
1522887938
{ "authors": [ "Zuzu-Typ", "avdstaaij" ], "license": "Zlib", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3116", "repo": "Zuzu-Typ/PyGLM", "url": "https://github.com/Zuzu-Typ/PyGLM/issues/196" }
gharchive/issue
Unexpected behavior when constructing vectors from wrong-dtype numpy arrays In the wiki, it is stated that the vector objects support both construction from iterables and from objects that support the buffer protocol (such as numpy arrays). It seems that the buffer protocol construction has precedence over the iterable construction. This makes sense, but it has an unfortunate side-effect. When constructing an integer vector using a numpy array of integers of the wrong size, the resulting vector is not as I would expect: >>> a = np.array([1,1,1], dtype=np.int64) >>> glm.ivec3(a) ivec3( 1, 0, 1 ) On my system, dtype=int64 is the default, so ivec3(np.array([1,1,1])) produces this "bug". This effect only seems to occur for integer and boolean vectors. If the vector had used the iterable construction method here, the result would be as expected. This effect makes combining glm and numpy quite a bit more dangerous. This "bug" is subtle and easy to miss. For safety, vectors always have to be constructed using the unpacking operator (ivec3(*np.array([1,1,1]))). Even worse: since arbitrary iterables may also support the buffer protocol, this precaution has to be taken in all functions that accept a generic iterable. My feature request: Could PyGLM be changed to only use buffer protocol construction if the types match, and use iterable construction otherwise? Hi there @avdstaaij, you seem to have found a bug. On my system, the same code runs without issues: >>> a = np.array([1,1,1], dtype=np.int64) >>> glm.ivec3(a) ivec3( 1, 1, 1 ) However, running the code on Linux produces the error you've mentioned. The bug persists with any given vector size: >>> a = numpy.array([1,1,1], dtype="int64") >>> glm.ivec3(a) ivec3( 1, 0, 1 ) >>> glm.i8vec3(a) i8vec3( 1, 0, 1 ) >>> glm.i64vec3(a) i64vec3( 1, 0, 1 ) I haven't been able to look at possible causes in the code yet. The way PyGLM converts a given object to e.g. a 32-bit int vector is by following a simple checklist: Is the object a number? Is the type of the object a PyGLM type? Does the object support the buffer protocol? Is the object a tuple or list? If the object does support the buffer protocol, PyGLM extracts its data and identifies which PyGLM type it corresponds to. Afterwards, it uses a suitable constructor for the target type. I.e. in your example, PyGLM would first convert the numpy array to a 64-bit int vector (i64vec3) and continue by converting that to a 32-bit int vector (ivec3). This way you can convert from any bit-width to any other bit-width as you would expect - unless there is a bug, as is the case here. The reason for this bug is that the data given by numpy is interpreted based on its format string. On the Linux platform I've tested on, the C datatype long appears to be 8 bytes wide, unlike on my Windows OS, where it's 4 bytes wide. Therefore, on the Windows machine, numpy reports the format (np.array((1,1,1), dtype=np.int64).data.format) as "q" (long long) and on Linux it reports as "l" (long). PyGLM always interprets "l" as 4 bytes, regardless of the actual size of long, according to this table: https://docs.python.org/3/library/struct.html#format-characters. This needs to be fixed. Looks like I was too eager assuming what the issue was :).
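Until the format-string handling is fixed, two workarounds follow from the report above (a sketch; the astype variant is an assumption based on the format-string explanation, not verified against every PyGLM build):

```python
import numpy as np
import glm

a = np.array([1, 1, 1], dtype=np.int64)

v1 = glm.ivec3(*a)                  # unpack, forcing the iterable constructor
v2 = glm.ivec3(a.astype(np.int32))  # or hand PyGLM data whose element size matches ivec3

print(v1, v2)
```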
2025-04-01T06:37:43.648830
2019-12-16T20:11:16
538627393
{ "authors": [ "SeanTolstoyevski", "Zuzu-Typ" ], "license": "Unlicense", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3117", "repo": "Zuzu-Typ/PyOpenAL", "url": "https://github.com/Zuzu-Typ/PyOpenAL/issues/12" }
gharchive/issue
No documentation Hi, I couldn't find any documents about PyALL in this repo. Where can I find documents describing PyALL in detail? Hello Sean, I'm not sure this is what you're looking for at all. First of all, "PyALL" doesn't exist. There is no such library (at least not on PyPI). This (PyOpenAL) is a library that provides bindings to OpenAL. You'll find advanced documentation on their site. Aside from that, PyOpenAL provides a few of its own methods. They're documented in the README.
2025-04-01T06:37:43.654198
2021-09-17T20:20:09
999692617
{ "authors": [ "juliexiong" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3118", "repo": "Zywave/booster", "url": "https://github.com/Zywave/booster/pull/16" }
gharchive/pull-request
feat: include systems as part of Netlify collections The about pages for Design System and Application Framework are not part of the Netlify CMS collections, so hoping this will surface them there. Aaaayyyooo it worked!
2025-04-01T06:37:43.669644
2021-11-25T16:12:58
1063761271
{ "authors": [ "BudgieInWA", "dabreegster", "enzet", "tordans" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3119", "repo": "a-b-street/osm2lanes", "url": "https://github.com/a-b-street/osm2lanes/issues/11" }
gharchive/issue
Discuss ways to merge separately mapped sidewalks and bike lanes This conversation started at https://github.com/a-b-street/abstreet/discussions/789 (search term "separate" and "multiple OSM ways") The main question is: …how will the library figure out if separate ways are geometrically located to the left or right of the main road? Preprocess data I think we should continue the search for a solution for this. AFAIK there is no way to solve this in JavaScipt/Node. But it is possible to solve it in QGIS and likely the underlying Python libraries. So it should be possible to preprocess the data in a way, that osm2lanes can then work with. I think we should check if this could be solved (in a performant way) with Osmium or osm2pgsql or Overpass, … – I don't know enough about those tools ATM. I assume, one of those tools would be able to merge the separately mapped ways to the central way. This could be done just by proximity, but I assume that would be error prone. But there are tags, that we can use, to reduce the error rate: Sidewalks highway=footway + footway=sidewalk on the separately mapped way is a strong indicator, that the way belongs to the main road sidewalk=separate/sidewalk:(left|right|both)=separate on the main road are a strong indicator, that there is a sidewalk to look for nearby Cycleways is_sidewalk=yes on the separately mapped way is a strong indicator, that the way belongs to the main road. This tag is still experimental, though, but could help a lot. We are experimenting with it to improve routing for footways and cycleways, more on this wiki page footway mapping (GER) and this wiki page on cycleways mapping (GER) The additional experimental tags is_sidepath:of=(secondary|residential|<HighwayValueOfMainRoad) and is_sidepath:of:name=<NameValueOfMainRoad> might also be useful, but are likely better automatically processed by the script. cycleway=separate/cycleway:(left|right|both)=(separate|optional_sidepath) on the main road are a strong indicator, that there is a cycleway to look for nearby (Germany wiki page) Thank you! Great issue. I've faced this problem a couple of times. We can certainly add multiple way processing, but I'm not sure how to identify them in the first place. As you mentioned, probably all the approaches are error-prone. I've attempted this before, https://github.com/a-b-street/abstreet/issues/330 tells the full story. Some of the problems are https://github.com/a-b-street/abstreet/issues/330#issuecomment-758173792. But I did make some headway on this -- https://github.com/a-b-street/abstreet/blob/master/map_model/src/make/snappy.rs for reference. This code only tries to deal with merging separate cycleways with the main road. As a first pass, it only looks at ways with separation:left|right (https://wiki.openstreetmap.org/wiki/Proposed_features/cycleway:separation). Partly this helps us know which direction to search and partly this was a way for me to slowly "opt in" some cycleways around Seattle into the experimental merging, by filling out the tag. Walking through how it works... Find the center-line and estimated width of every separate cycleway Create a quadtree (spatial partitioning structure that can quickly answer questions like "what's nearby") containing all of the left and right edge of every "regular" road segment Look for matches Delete any way that was snapped, and insert the lanes (and separators) into the main road I have a few different ideas for how to look for matches. 
The thing currently implemented steps every 5 meters along a cycleway, projects a perpendicular test line 3 meters from the left or right edge (based on the separation tags), and looks for the nearest collision with one of the main road edges. Based on problems I hit, the match also has to have the same layer (z-ordering). And I was also requiring the cyclepath and main road to have a similar angle where the perpendicular line connects them -- within 30 degrees. One of the biggest complications is when the main road or the cycleway are chopped up into different pieces. Around some intersections in Seattle where I was testing, some of the curb cuts are micro-mapped, so there will be tiny cycleway segments at a bunch of different angles. https://www.openstreetmap.org/way/835195091 is a simpler example of that. The current snapping code requires at least 80% of the cycleway to snap to some roads (maybe multiple). An example where this is tricky is the Burke Gilman trail at https://www.openstreetmap.org/node/53108420#map=19/47.66568/-122.30189. It's mostly a separate trail, but around here, it physically joins up and becomes part of the sidewalk on NE Blakeley Street: (The screenshot is without any snapping) Sometimes we might have to logically split a longer main road's way to indicate where a cyclepath runs parallel to it or not. These're just some of the problems I hit. Many might be specific to a certain area. AFAIK there is no way to solve this in JavaScipt/Node. But it is possible to solve it in QGIS and likely the underlying Python libraries. If we can get a robust approach working using any language/dependencies, it's definitely possible to get it working everywhere. If GDAL or Shapely or libraries in some language make it easy to come up with good heuristics, then we can adapt the approach elsewhere I feel like this should be beyond the scope of osm2lanes because it involves spatial reasoning. (Probably any problem involving multiple osm ways will end up needing some). There are some great insights in this thread, and I would like to add that the calculated or estimated width that osm2lanes produces would be an valuable input into any algorithm that was able to do this snapping. It might even be useful to be able to request "minimum sensible" "likely" or "maximum sensible" estimated widths from osm2lanes.
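For reference, the perpendicular-probe heuristic described above can be prototyped with Shapely on projected, metre-based coordinates. This is a hypothetical sketch to make the idea concrete, not code from A/B Street or osm2lanes; the 5 m step and 3 m probe mirror the values mentioned, and a real version would still need the side (left/right), layer and angle checks:

```python
import math
from shapely.geometry import LineString

def snap_fraction(cycleway: LineString, road_edge: LineString,
                  step: float = 5.0, probe_len: float = 3.0) -> float:
    """Walk the cycleway every `step` metres and count how often a short
    perpendicular probe reaches the road edge (one side only here)."""
    hits, samples = 0, max(int(cycleway.length // step), 1)
    for i in range(samples + 1):
        d = min(i * step, cycleway.length)
        # estimate the tangent from two nearby points, then rotate it 90 degrees
        p0 = cycleway.interpolate(max(d - 0.5, 0.0))
        p1 = cycleway.interpolate(min(d + 0.5, cycleway.length))
        dx, dy = p1.x - p0.x, p1.y - p0.y
        norm = math.hypot(dx, dy) or 1.0
        nx, ny = -dy / norm, dx / norm
        p = cycleway.interpolate(d)
        probe = LineString([(p.x, p.y), (p.x + nx * probe_len, p.y + ny * probe_len)])
        if probe.intersects(road_edge):
            hits += 1
    return hits / (samples + 1)  # e.g. require >= 0.8 before merging, as described
```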
2025-04-01T06:37:43.716650
2024-07-06T03:47:44
2393369712
{ "authors": [ "Paradonized", "c-vandendyck-kbr" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3120", "repo": "a2lix/TranslationFormBundle", "url": "https://github.com/a2lix/TranslationFormBundle/issues/394" }
gharchive/issue
No locales were configured, but expected at least the default locale I get the error: No locales were configured, but expected at least the default locale en. Perhaps you need to add it to your a2lix_translation_form.locales bundle configuration? I'm using Symfony 7 and just set up the translation bundle. Also I did the required steps: in config/packages/a2lix.yaml a2lix_translation_form: locale_provider: default locales: [en, fr, es, de] default_locale: en required_locales: [fr] templating<EMAIL_ADDRESS> and in my formType $builder->add('translations', TranslationsType::class, [ 'locales' => ['en', 'fr', 'es', 'de'], 'default_locale' => ['en'], 'required_locales' => ['fr'], 'fields' => [ 'description' => [ 'field_type' => 'textarea', 'label' => 'descript.', 'locale_options' => [ 'es' => ['label' => 'descripción'], 'fr' => ['display' => false] ] ] ], 'excluded_fields' => ['details'], 'locale_labels' => [ 'fr' => 'Français', 'en' => 'English', ], ]); in config/bundles.php A2lix\AutoFormBundle\A2lixAutoFormBundle::class => ['all' => true], A2lix\TranslationFormBundle\A2lixTranslationFormBundle::class => ['all' => true], in config/packages/translation.yaml framework: default_locale: en translator: default_path: '%kernel.project_dir%/translations' fallbacks: - en providers: I went trough the documentation and the other issues but still couldn't resolve the problem. @Paradonized I got the exact same error as you.. Also working on Symfony 7. Any update ?
2025-04-01T06:37:43.740974
2016-02-11T10:46:59
132940391
{ "authors": [ "a5hik", "anorudes", "rhclayto" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3121", "repo": "a5hik/ng-sortable", "url": "https://github.com/a5hik/ng-sortable/issues/279" }
gharchive/issue
Controller 'asSortable', required by directive 'asSortableItem', can't be found! Hey, I'm getting this error, even though my code is almost exactly the same as in the demo. View: <p ng-if="items.length > 0" as-sortable="dragControlListeners" ng-model="items" class="mappa-block"> <span class="mappa-subtitle">List:</span> <div ng-repeat="item in items" as-sortable-item> <div as-sortable-item-handle> test </div> </div> </p> Controller: app.controller('MainCtrl', function ($scope, dataStorage) { $scope.items = []; $scope.newName = ''; $scope.newUrl = ''; $scope.dragControlListeners = dragControlListeners = { accept: function (sourceItemHandleScope, destSortableScope) {return true}, itemMoved: function (event) { }, orderChanged: function(event) { } }; dataStorage.load().then(function (items) { $scope.items = items; }); ... Am I making some silly mistake? Any advice on how to debug it? Thanks in advance! why do u do this ? $scope.dragControlListeners = dragControlListeners = { how do u initialize the module dependencies? Perhaps related to this? https://github.com/a5hik/ng-sortable/issues/230
2025-04-01T06:37:43.748666
2016-09-12T10:03:24
176338229
{ "authors": [ "LDSign", "aFarkas" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3122", "repo": "aFarkas/lazysizes", "url": "https://github.com/aFarkas/lazysizes/issues/304" }
gharchive/issue
custommedia extension markup validation failed (W3C) Hi I am using the custom media extension with media="--small" a.s.o. like in the examples. Unfortunatley this markup is no valid html and so highlighted by the W3C-Validator: "Bad value --small for attribute media on element source: Expected ( or letter at start of a media query part but saw - instead." "Stray end tag source." Is this a normal behavior or do I make something wrong? How could this be avoided? Thanks, Frank You make everything right. Custom media queries might be a future feature and therefore the validator does not validate. We basically polyfill this with lazySizes. It doesn't harm your page. You can avoid this by using the data-media attribute instead: <source srcset="data:,pixel" media="(max-width: 1px)" srcset="your_srces" data-media="--small" />
2025-04-01T06:37:43.774925
2024-06-10T11:06:20
2343612249
{ "authors": [ "bntti" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3123", "repo": "aalto-grades/aalto-grades", "url": "https://github.com/aalto-grades/aalto-grades/pull/735" }
gharchive/pull-request
Improve logging Description of changes [x] Add user id to the logs [x] Separate db and normal logs [x] Log when calling A+ [x] Document log format [x] Improve logging documentation [x] Remove unnecessary log files Related issues Closes #725 Closes #247 server/logs is not gitignored Not necessary as log files won't be created. In the production environment the docker container forwards the cli logs automatically.
2025-04-01T06:37:43.789995
2019-05-27T07:00:28
448696810
{ "authors": [ "aaronpk", "frankmeeuwsen", "jackjamieson2" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3124", "repo": "aaronpk/Monocle", "url": "https://github.com/aaronpk/Monocle/issues/40" }
gharchive/issue
HTML showing in post The feed of Peter Rukavina's blog shows fine in my feedreader (image below) but in Monocle it shows the HTML content of the post. When I check the debug-screen, I see there's HTML in the JSON as well. I'm not sure if this is some bad rendering on the side of Peter's feed, forgiveness of Inoreader, sanitization in Aperture or maybe something else? I need a little more information in order to track this down: What Microsub server are you using? It looks like it's not Aperture. What feed of his are you following? I looked at his site and he has Microformats as well as an RSS feed. Looking at the debug JSON you posted, I see a couple problems with the data there, which is a problem with whatever Microsub server you're using. author should never be a string, it should always be an object with the name, url and photo properties. https://indieweb.org/Microsub-spec#card summary is a plaintext value, and cannot contain HTML, which is why you're seeing the HTML tags in Monocle. I need to know what feed of his you're following and which Microsub server you're using in order to tell exactly where the problem is here. Thanks. I thought I had Aperture enabled but later today I found out I still had Yarns activated in my WordPress environment. So that's the Microsub server I use for this feed. I am subscribed to https://ruk.ca/rss/feedburner.xml on this server. I will do some more testing with both servers and feed. I'll post my findings later. @frankmeeuwsen, This is definitely a Yarns issue then, I'll check it out and will update you when I have it fixed I just switched to Monocle to see what would happen. I am subscribed to the same feed, now Monocle show the correct parsed HTML. Thanks for the update @jackjamieson2. Do you need me to file an issue at your repo? Thanks Frank, Aaron already filed https://github.com/jackjamieson2/yarns-microsub-server/issues/75
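For illustration, a feed consumer or server could normalise the two problems called out above (string author, HTML in summary) roughly like this hypothetical sketch; it is not Yarns or Monocle code:

```python
import re

def normalize_item(item: dict) -> dict:
    author = item.get("author")
    if isinstance(author, str):
        # the spec expects a card object, not a bare name string
        item["author"] = {"type": "card", "name": author, "url": "", "photo": ""}
    summary = item.get("summary")
    if isinstance(summary, str):
        # summary must be plain text, so strip markup (crudely, for illustration)
        item["summary"] = re.sub(r"<[^>]+>", "", summary)
    return item
```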
2025-04-01T06:37:43.799425
2016-06-19T11:35:56
161068735
{ "authors": [ "RX14", "aartur" ], "license": "bsd-3-clause", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3125", "repo": "aartur/mschematool", "url": "https://github.com/aartur/mschematool/issues/6" }
gharchive/issue
Generated schema filename times are not UTC When multiple team members are in different timezones, migrations may get out of sync because the times used in the generated filenames are local. I changed the print_new command to use UTC datetime string. It's in 0.7.1 release. Thanks!
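In essence the fix means deriving the filename timestamp from UTC instead of local time, roughly as sketched below (the filename pattern here is an assumption for illustration, not copied from mschematool):

```python
from datetime import datetime, timezone

def migration_name(description: str) -> str:
    # use UTC so team members in different timezones generate consistent ordering
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d%H%M%S")
    return f"m{stamp}_{description}.sql"

print(migration_name("add_users_table"))  # e.g. m20160619113556_add_users_table.sql
```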
2025-04-01T06:37:43.802120
2023-06-24T06:07:06
1772482972
{ "authors": [ "devinluo27" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3126", "repo": "aaucsd/lemp", "url": "https://github.com/aaucsd/lemp/issues/27" }
gharchive/issue
Fail to init BITStarPlanner It seems like bit_star_planner.py is not fully finished, such as some abstract method is not implemented, using undeclared variable. There is another issue that in bit_star_planner.ipynb, This line: result_refined = BITStarPlanner(num_batch=200, stop_when_success=False).plan(env, start, goal, timeout=('time', 10)) cannot return a valid result. In addition, in the GNN planner, is the path smoother module missing?
2025-04-01T06:37:43.826066
2022-03-26T12:22:48
1181722966
{ "authors": [ "abbgrade" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3127", "repo": "abbgrade/psdocker", "url": "https://github.com/abbgrade/psdocker/issues/10" }
gharchive/issue
Docker Service.docker fails in linux build validation pipeline https://github.com/abbgrade/psdocker/actions/workflows/build-validation.yml?query=branch%3Adevelop Message: ProcessCommandException: Cannot find a process with the name "com.docker.service". Verify the process name and call the cmdlet again. linux and windows agents are effected
2025-04-01T06:37:43.828907
2024-05-02T18:19:55
2276194767
{ "authors": [ "bradegler", "sethvargo" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3128", "repo": "abcxyz/github-metrics-aggregator", "url": "https://github.com/abcxyz/github-metrics-aggregator/pull/263" }
gharchive/pull-request
Retag existing image instead of building image without additional layers Closes https://github.com/abcxyz/github-metrics-aggregator/pull/262 Sigh https://github.com/apache/beam/pull/31010 Sigh apache/beam#31010 Is it time to give up on Beam and just build some simple cloud run jobs for our pipelines? We aren't doing anything fancy and we've only ever had one customer look at them as a reference. Is it time to give up on Beam and just build some simple cloud run jobs for our pipelines? We aren't doing anything fancy and we've only ever had one customer look at them as a reference. I was ready to give up on day 1, but yes. I think we should give up on Beam and use Cloud Run jobs instead. Any performance benefit we're getting using Beam is outweighed by lost engineering cycles.
2025-04-01T06:37:43.864748
2023-04-26T08:04:09
1684489603
{ "authors": [ "gjmulder", "mozzipa" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3129", "repo": "abetlen/llama-cpp-python", "url": "https://github.com/abetlen/llama-cpp-python/issues/116" }
gharchive/issue
UnicodeDecodeError: 'utf-8' codec can't decode byte For trial to have streaming of response, below error occurs File [~/miniconda/envs/fastapi/lib/python3.10/site-packages/llama_cpp/llama.py:482](https://file+.vscode-resource.vscode-cdn.net/Users/jinwhan/Documents/Coding/Solidity/Page/cloudRun/cloudrun-fastapi/app/~/miniconda/envs/fastapi/lib/python3.10/site-packages/llama_cpp/llama.py:482), in Llama._create_completion(self, prompt, suffix, max_tokens, temperature, top_p, logprobs, echo, stop, repeat_penalty, top_k, stream) 473 self._completion_bytes.append(text[start:]) 474 ### 475 yield { 476 "id": completion_id, 477 "object": "text_completion", 478 "created": created, 479 "model": self.model_path, 480 "choices": [ 481 { --> 482 "text": text[start:].decode("utf-8"), 483 "index": 0, ... 488 } 490 if len(completion_tokens) >= max_tokens: 491 text = self.detokenize(completion_tokens) UnicodeDecodeError: 'utf-8' codec can't decode byte 0xec in position 0: unexpected end of data My code snippet was prepared as below referring Example. from llama_cpp import Llama import json model_path = /my/model/path/for/ko_vicuna_7b/ggml-model-q4_0.bin" prompt = "Tell me about Korea in english" llm = Llama(model_path=model_path, n_ctx=4096, seed=0) stream = llm( f"Q: {prompt} \nA: ", max_tokens=512, stop=["Q:", "\n"], stream=True, temperature=0.1, ) for output in stream: print(output['choices'][0]["text"], end='') Not only 0xec, but also 0xed, 0xf0 occurred for other trial cases. I cannot assure but it may be caused by language of model which is fine tuned for korean from vicuna 7b. For your reference, several letters were given but it stops suddenly with above error. May be fixed in #118
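The error happens because a multi-byte UTF-8 character (common in Korean output) can be split across streamed tokens, so decoding each chunk in isolation fails. A generic way to handle this is an incremental decoder, sketched below; this shows the technique in general and is not necessarily the exact change referenced in #118:

```python
import codecs

decoder = codecs.getincrementaldecoder("utf-8")()

def decode_chunk(raw: bytes) -> str:
    # emits only completed characters; a trailing partial sequence is
    # buffered internally until the remaining bytes arrive
    return decoder.decode(raw)

ko = "한".encode("utf-8")           # 3 bytes
print(repr(decode_chunk(ko[:2])))   # '' - incomplete, held back
print(repr(decode_chunk(ko[2:])))   # '한' - completed on the next chunk
```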
2025-04-01T06:37:43.871084
2020-12-24T08:25:40
774265251
{ "authors": [ "abhay-krishna" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3130", "repo": "abhay-krishna/eks-distro-prow-jobs", "url": "https://github.com/abhay-krishna/eks-distro-prow-jobs/pull/16" }
gharchive/pull-request
Create prowjobs-lint-presubmits.yaml Add linting presubmit job. By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice. [APPROVALNOTIFIER] This PR is APPROVED This pull-request has been approved by: abhay-krishna The full list of commands accepted by this bot can be found here. The pull request process is described here Needs approval from an approver in each of these files: OWNERS [abhay-krishna] Approvers can indicate their approval by writing /approve in a comment Approvers can cancel approval by writing /approve cancel in a comment @abhay-krishna: Updated the job-config configmap in namespace default at cluster default using the following files: key prowjobs-lint-presubmits.yaml using file jobs/aws/eks-distro-prow-jobs/prowjobs-lint-presubmits.yaml In response to this: Add linting presubmit job. By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
2025-04-01T06:37:43.876413
2024-05-24T09:52:46
2314963144
{ "authors": [ "GavinJIAW", "abhiTronix" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3131", "repo": "abhiTronix/deffcode", "url": "https://github.com/abhiTronix/deffcode/issues/47" }
gharchive/issue
[Question]: Open multiple Same Camera, ValueError: Invalid source with no decodable audio or video stream provided. Aborting! Issue guidelines [X] I've read the Issue Guidelines and wholeheartedly agree. Issue Checklist [X] I have searched open or closed issues for my problem and found nothing related or helpful. [X] I have read the Documentation and found nothing related to my problem. Describe your Question Open multiple Same Camera, ValueError: Invalid source with no decodable audio or video stream provided. Aborting! Terminal log output(Optional) ValueError: Invalid source with no decodable audio or video stream provided. Aborting! Python Code(Optional) decoder = FFdecoder("0", frame_format="bgr24",custom_ffmpeg="./ffmpeg/bin", verbose=True,**ffparams).formulate() decoder = FFdecoder("1", frame_format="bgr24",custom_ffmpeg="./ffmpeg/bin", verbose=True,**ffparams).formulate() decoder = FFdecoder("3", frame_format="bgr24",custom_ffmpeg="./ffmpeg/bin", verbose=True,**ffparams).formulate() DeFFcode Version 0.2.3 Python version 3.10 Operating System version Windows Any other Relevant Information? No response ValueError: Invalid source with no decodable audio or video stream provided. Aborting! @GavinJIAW This error means that one of the source values is invalid, meaning there's no device at index "0", "1" or "3"; check the logs to get more insight. And it is perfectly fine to open multiple cameras, as long as your system allows it.
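A small sketch of keeping one decoder object per camera and skipping indices that are not actually present (the constructor arguments mirror the snippet above, minus the undefined **ffparams; whether the library raises ValueError at this exact point is assumed from the reported traceback):

```python
from deffcode import FFdecoder

decoders = {}
for idx in ("0", "1", "3"):
    try:
        # one decoder per camera index instead of overwriting a single variable
        decoders[idx] = FFdecoder(
            idx, frame_format="bgr24", custom_ffmpeg="./ffmpeg/bin", verbose=True
        ).formulate()
    except ValueError as err:
        print(f"camera {idx} not available: {err}")
```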
2025-04-01T06:37:43.884974
2021-07-29T15:28:53
955960617
{ "authors": [ "abhiTronix", "rubar-tech" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3132", "repo": "abhiTronix/vidgear", "url": "https://github.com/abhiTronix/vidgear/issues/235" }
gharchive/issue
NetGear_Async: Add bi-directional data transfer mode Detailed Description This issue will oversee implementation of NetGear's bi-directional data transfer exclusive mode for NetGear_Async API. Context Currently, NetGear_Async is far efficent than its NetGear counterpart, but lacks in terms of flexibility with Exclusive modes. This issue will attempt to introduce first of them, NetGear's bi-directional data transfer exclusive mode for NetGear_Async API. Your Current Environment VidGear version: v0.2.2-dev Branch: development Python version: all Operating System and version: all Any Other Important Information https://abhitronix.github.io/vidgear/latest/gears/netgear/advanced/bidirectional_mode/ Where are the new codes, development branch? @rubar-tech not yet. Only after #106 resolved, be patient. @rubar-tech This was harder to pull off but I did it. Please wait till I upload related doc commits. But the bare-minimum example for bidirectional video-frames transfer will be as follows: Server End # import library from vidgear.gears.asyncio import NetGear_Async import cv2, asyncio import numpy as np options = {"bidirectional_mode": True} # initialize Server without any source server = NetGear_Async(logging=True, **options) # Create a async frame generator as custom source async def my_frame_generator(): # !!! define your own video source here !!! # Open any video stream stream = cv2.VideoCapture("foo.mp4") # loop over stream until its terminated while True: # read frames (grabbed, frame) = stream.read() # check if frame empty if not grabbed: # if True break the infinite loop break # do something with the frame to be sent here recv_data = await server.transceive_data() if not (recv_data is None): if isinstance(recv_data, np.ndarray): cv2.imshow("Reciever Frame", recv_data) key = cv2.waitKey(1) & 0xFF else: print(recv_data) target_data = "Hello, I am a Server." # yield frame yield (target_data, frame) # sleep for sometime await asyncio.sleep(0) stream.release() if __name__ == "__main__": # set event loop asyncio.set_event_loop(server.loop) # Add your custom source generator to Server configuration server.config["generator"] = my_frame_generator() # Launch the Server server.launch() try: # run your main function task until it is complete server.loop.run_until_complete(server.task) except (KeyboardInterrupt, SystemExit): # wait for interrupts pass finally: # finally close the server server.close() Client End # import libraries from vidgear.gears.asyncio import NetGear_Async import cv2, asyncio options = {"bidirectional_mode": True} # define and launch Client with `receive_mode=True` client = NetGear_Async(receive_mode=True, logging=True, **options).launch() # Create a async function where you want to show/manipulate your received frames async def main(): # !!! define your own video source here !!! 
# Open any video stream stream = cv2.VideoCapture("big_buck_bunny_720p_1mb.mp4") # loop over Client's Asynchronous Frame Generator async for (data, frame) in client.recv_generator(): if not(data is None): print(data) # {do something with received frames here} # Show output window cv2.imshow("Output Frame", frame) key = cv2.waitKey(1) & 0xFF (grabbed, target_data) = stream.read() # check if frame empty if grabbed: # if True break the infinite loop await client.transceive_data(data=target_data) # await before continuing await asyncio.sleep(0) stream.release() if __name__ == "__main__": # Set event loop to client's asyncio.set_event_loop(client.loop) try: # run your main function task until it is complete client.loop.run_until_complete(main()) except (KeyboardInterrupt, SystemExit): # wait for interrupts pass # close all output window cv2.destroyAllWindows() # safely close client client.close() @rubar-tech Btw, if not clear yet, you can install vidgear from source as follow: # clone the repository and get inside git clone https://github.com/abhiTronix/vidgear.git && cd vidgear # checkout the latest development branch git checkout development # install normally pip install . # OR install with asyncio support pip install .[asyncio] Bidirectional data transfer mode for NetGear_Async API successfully added and merged in commit: https://github.com/abhiTronix/vidgear/commit/c73428dff35aded6b598f3486e11d53477468ec1 Related Doc is available here: https://abhitronix.github.io/vidgear/v0.2.2-dev/gears/netgear_async/advanced/bidirectional_mode/
2025-04-01T06:37:43.886467
2020-07-17T10:26:41
659120123
{ "authors": [ "04RR", "abhidxt299" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3133", "repo": "abhidxt299/RoManOV_Automation", "url": "https://github.com/abhidxt299/RoManOV_Automation/issues/38" }
gharchive/issue
Apply regressions to find board-image homography This is a continuation of issue #37 . Here, you have to develop a RANSAC-like algorithm with linear and geometric regressions to find indices of vertical and horizontal lines from the Hough transform and assign each remaining image point to an integer pair representing a board point. I'll take up this issue.
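A rough sketch of how such a RANSAC-like fit could look (my own illustration under assumptions, not the repository's implementation): robustly fit "board index -> pixel coordinate" to the sorted Hough line positions in each direction, then snap every remaining image point to the nearest integer pair.

```python
import numpy as np

def fit_line_spacing(positions, trials=200, tol=3.0):
    """RANSAC-style fit of positions[i] ~= offset + spacing * i for sorted line positions."""
    positions = np.sort(np.asarray(positions, dtype=float))
    rng = np.random.default_rng(0)
    best_inliers, best_model = 0, (positions[0], 1.0)
    for _ in range(trials):
        i, j = sorted(rng.choice(len(positions), size=2, replace=False))
        spacing = (positions[j] - positions[i]) / (j - i)
        if spacing < 1e-6:
            continue
        offset = positions[i] - spacing * i
        predicted = offset + spacing * np.arange(len(positions))
        inliers = int(np.sum(np.abs(predicted - positions) < tol))
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (offset, spacing)
    return best_model  # (offset, spacing) in pixels

def assign_board_indices(points, x_model, y_model):
    """Snap pixel points (x, y) to integer (column, row) board coordinates."""
    (x_off, x_step), (y_off, y_step) = x_model, y_model
    points = np.asarray(points, dtype=float)
    cols = np.rint((points[:, 0] - x_off) / x_step).astype(int)
    rows = np.rint((points[:, 1] - y_off) / y_step).astype(int)
    return np.stack([cols, rows], axis=1)
```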
2025-04-01T06:37:43.894984
2022-10-11T11:30:37
1404475880
{ "authors": [ "abhinavminhas", "codecov-commenter" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3134", "repo": "abhinavminhas/qtest-mstest-parser", "url": "https://github.com/abhinavminhas/qtest-mstest-parser/pull/18" }
gharchive/pull-request
Updates Github actions version updates. Test refactoring. build.yml - Node version 18.x addition. Codecov Report Base: 99.02% // Head: 99.02% // No change to project coverage :thumbsup: Coverage data is based on head (d0c0a74) compared to base (a1597bc). Patch has no changes to coverable lines. Additional details and impacted files @@ Coverage Diff @@ ## main #18 +/- ## ======================================= Coverage 99.02% 99.02% ======================================= Files 1 1 Lines 103 103 ======================================= Hits 102 102 Misses 1 1 Help us with your feedback. Take ten seconds to tell us how you rate us. Have a feature suggestion? Share it here. :umbrella: View full report at Codecov. :loudspeaker: Do you have feedback about the report comment? Let us know in this issue.
2025-04-01T06:37:43.898810
2020-07-17T22:33:06
659696231
{ "authors": [ "oussamabouchikhi" ], "license": "CC0-1.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3135", "repo": "abhisheknaiidu/awesome-github-profile-readme", "url": "https://github.com/abhisheknaiidu/awesome-github-profile-readme/pull/83" }
gharchive/pull-request
Add my profile README What type of PR is this? (check all applicable) [x] 🚀 Added Name [ ] ✨ Feature [ ] ✅ Joined Community [ ] 🌟 Starred the repo [ ] 🐛 Grammatical Error [ ] 📝 Documentation Update [ ] 🚩 Other Description I added my own profile README.md Add Link of GitHub Profile Done
2025-04-01T06:37:43.899887
2022-04-17T12:56:01
1206368069
{ "authors": [ "abhishekshree", "utkarsh-tf141" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3136", "repo": "abhishekshree/spo-website", "url": "https://github.com/abhishekshree/spo-website/pull/17" }
gharchive/pull-request
Office staff update closes #15 @utkarsh-tf141 This is not the kind of link we use in emails, mailto: has to be used. Also, don't put any HTML content inside objects. Render the emails inside anchor tags inside the Description tag.
2025-04-01T06:37:43.915245
2016-02-16T20:32:11
134089418
{ "authors": [ "hartwork", "jakebuhler" ], "license": "bsd-3-clause", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3137", "repo": "abourget/gevent-socketio", "url": "https://github.com/abourget/gevent-socketio/issues/234" }
gharchive/issue
django.utils.importlib is removed in Django 1.9 The module sdjango.py imports django.utils.importlib, which has been deprecated since Django 1.7 and was removed in Django 1.9. Django's importlib had been a copy of Python 2.7's importlib, so the fix should be relatively straightforward. Should be fixed by acf095b78208edb59b5873662653e12773add3cc. Can we close this ticket?
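For readers hitting the same ImportError, the usual compatibility shim looks roughly like this (an assumption about the shape of the fix, not the contents of commit acf095b):

```python
try:
    from importlib import import_module               # standard library, Python 2.7+ / 3.x
except ImportError:                                    # only very old interpreters land here
    from django.utils.importlib import import_module  # removed in Django 1.9

# call sites stay exactly the same as before
module = import_module("json")  # any dotted module path works here
print(module.dumps({"ok": True}))
```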
2025-04-01T06:37:43.958508
2022-08-16T02:58:13
1339743996
{ "authors": [ "aglassman", "derekbrown", "drewble", "duksis", "luhagel" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3138", "repo": "absinthe-graphql/absinthe_plug", "url": "https://github.com/absinthe-graphql/absinthe_plug/pull/273" }
gharchive/pull-request
Chore: Upgrade GraphiQL to 1.11.4 So, this is the first step on a bigger journey to get the GraphiQL client(s) up to date. I might need little guidance here and there since I only started wokring with the codebase this evening, and not everything might be optimal ( looking at you, build_asset_path/2) I also have s semi-functional version of graphiql 2.0 behind a new :beta interface key, but that one needs some more work to get headers to work, and to figure out whether we still need the @absinthe/absinthe-socketlibs or whethercreateGraphiQLFetcher()from@graphiql/toolkit` can solve whatever prompted the creation of that in the first place out of the box now. If someone has more context/knows, why some decisions were made, that would be greatly appreciated! This PR the package still used the 5 year old<EMAIL_ADDRESS>which has a bunch of security issues and could benefit from an update in general (bundle size reduction from react@16 among others). This PR updates graphiql and introduces a way to have two react versions in the asset list, so graphiql-workspace(the :advanced interface) stays functional, since it break with newer react versions. Next Steps The next big step would be to get graphiql@2 working, especially with headers and subscriptions. The is quite a bit of plumbing going on atm, which I will need to figure out first, plus the new createGraphiQLFetcher function is not shipped in a browser-compatible version, so we might need another intermediary package... 😨 Once 2.0 ships and graduates beta stage, we can probably remove/alias the :advancedinterface, since graphiql provides those enhancements out of the box. Also,graphiql-workspace` hasn't seen an update in ~4 years, so I wouldn't expect and update on that front. Happy about any and all feedback ! @benwilson512 can't request a review formerly yet, would be great if you could take a look at this after your parental leave :) Thanks for your work on this @luhagel, I don't have time to help with the initial work, but can definitely try it when you think it's ready for testing as we are using the old version at the moment. Adding to the convo to say this would be a huge boost for Absinthe on my team. Due to the age and limitations of the available GraphiQL interfaces we are currently dropping in Apollo Sandbox. Now that GraphiQL is up to a stable 2.2.0 release it might be reasonable to target an update to 2.x. Bump. Curious if an upgrade will be merged in the near term? cc: @benwilson512 @luhagel Bump. Curious if an upgrade will be merged in the near term? cc: @benwilson512 @luhagel Probably needs an upgrade at this point, given that graphiql 3 is underway, I'll try and find some time to get this all cleaned up in the next few weeks @derekbrown Looking forward to this very needed refresh.
2025-04-01T06:37:43.974550
2023-12-06T11:10:28
2028303657
{ "authors": [ "alexchernyavskiy", "destructorvad", "rabelenda" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3140", "repo": "abstracta/jmeter-dotnet-dsl", "url": "https://github.com/abstracta/jmeter-dotnet-dsl/issues/3" }
gharchive/issue
I need to create POST requests with different body each one Hello I have a big model (generating using Bogus lib - it has unique email, id etc) - I want to run Post Requests to create a lot of entities. Are there any ways to implement this? I'v found you advised to use PreProcessor here - https://github.com/abstracta/jmeter-java-dsl/issues/7 - but there is no PreProcessor class in dotnet dsl. Tnanks! Hello, thank you for bringing this into attention! We can easily add the jsr223PreProcessor to the dotnet library with support for specifying code in groovy. Providing support for using c# code and dotnet libraries is not that simple though. Can you provide an example code of what you would like the DSL to support or how would your think you could express in code what you need? One alternative for generated and distinct data might also be implementing the csvDataSet element in dotnet library (easy to do as well) so you can generate in dotnet/c# code a CSV with whatever content you may like (and using whatever library you desire) and then just use the proper variables in the HTTP Post request. In any case, I would like to understand a little better what would be the ideal scenario for your use case and then provide a good approximation to it . Hello, thank you for bringing this into attention! We can easily add the jsr223PreProcessor to the dotnet library with support for specifying code in groovy. Providing support for using c# code and dotnet libraries is not that simple though. Can you provide an example code of what you would like the DSL to support or how would your think you could express in code what you need? One alternative for generated and distinct data might also be implementing the csvDataSet element in dotnet library (easy to do as well) so you can generate in dotnet/c# code a CSV with whatever content you may like (and using whatever library you desire) and then just use the proper variables in the HTTP Post request. In any case, I would like to understand a little better what would be the ideal scenario for your use case and then provide a good approximation to it . Tnx for fast reply, this is my code: ............. 
using Bogus; [Fact] public async Task LoadTest() { // Arrange var requestDto = Create_CreateCompanyRequestBogus(); var request = JsonSerializer.Serialize(requestDto); // Act var stats = TestPlan( ThreadGroup(2, 5, HttpSampler("https://tst-api.purpleunity.dev/company-and-contact/api/companies") .Post(request, new MediaTypeHeaderValue(MediaTypeNames.Application.Json)) .Header("Authorization", AuthToken) ), JtlWriter("jtls") ).Run(); stats.Overall.SampleTimePercentile99.Should().BeLessThan(TimeSpan.FromSeconds(5)); } private CreateCompanyRequest Create_CreateCompanyRequestBogus() { var faker = new Faker<CreateCompanyRequest>() .CustomInstantiator(f => new CreateCompanyRequest( IdentityId: Guid.NewGuid().ToString().OrNull(f, 0.2f), Name: f.Company.CompanyName(), VatNumber: f.Random.Replace("??#########").OrNull(f, 0.2f), Iban: f.Random.Replace("??######################").OrNull(f, 0.2f), Bic: f.Finance.Bic().OrNull(f, 0.2f), ChamberOfCommerceNumber : f.Random.Replace("??-???-########").OrNull(f, 0.2f), ExternalId: Guid.NewGuid().ToString().OrNull(f, 0.2f), IsBuyerEvaluated: f.Random.Bool().OrNull(f, 0.2f), Remarks: f.Lorem.Sentences(1).OrNull(f, 0.2f), AddressLine: f.Address.StreetAddress().OrNull(f, 0.2f), Zipcode: f.Address.ZipCode().OrNull(f, 0.2f), City: f.Address.City().OrNull(f, 0.2f), Region: f.Address.State().OrNull(f, 0.2f), CountryCode: f.Address.CountryCode().OrNull(f, 0.2f), Phone: f.Phone.PhoneNumber().OrNull(f, 0.2f), Mobile: f.Phone.PhoneNumber().OrNull(f, 0.2f), Fax: f.Phone.PhoneNumber().OrNull(f, 0.2f), Email: f.Internet.Email().OrNull(f, 0.2f), WebsiteUrl: new Uri(f.Internet.Url()), DefaultContact: null, UseTheSameAddressForPostal: f.Random.Bool().OrNull(f, 0.2f), PostalAddressLine: f.Address.StreetAddress().OrNull(f, 0.2f), PostalZipCode: f.Address.ZipCode().OrNull(f, 0.2f), PostalCity: f.Address.City().OrNull(f, 0.2f), PostalRegion: f.Address.State().OrNull(f, 0.2f), PostalCountry: f.Address.CountryCode().OrNull(f, 0.2f) ) ); return faker.Generate(); } I join the question. For each request in each iteration, a new request must be created. My code: `[Test] public void RegisterTest() { var stats = TestPlan( ThreadGroup(5, 2, HttpSampler("http://localhost/BonusWebApi/api/processing/register/") .Post(GetRegRequest(), new MediaTypeHeaderValue("application/json") ))).Run(); Assert.That(stats.Overall.SampleTimePercentile99, Is.LessThan(TimeSpan.FromSeconds(2))); } //Random random = new Random(); private SecureRandom random = new SecureRandom(); public string GetCardCode() { var getCard = random.Next(0, cardsCount - 1); logger.Debug(getCard); var cardCode = cards.Skip(getCard).Take(1).FirstOrDefault()?.CardCode; logger.Debug(cardCode); while (cardCode == null) { cardCode = cards.Skip(random.Next(0, cardsCount - 1)).Take(1).FirstOrDefault()?.CardCode; } return cardCode; } public string GetRegRequest() { var license = cashes.ToList().ElementAt(random.Next(0, cashCount - 1)); Guid accessTokenGuid = license.AccessTokenGuid; var cardCode = GetCardCode(); return JsonConvert.SerializeObject(new RegisterRequestDto { LicenseGuid = license.LicenseGuid, AccessTokenGuid = accessTokenGuid, CardCode = cardCode, CardRegisterDateTime = DateTime.Now, RegisterDetailDtos = new List { new RegisterDetailDto { PositionId="1", ProductCode="12345", Quantity=1, TotalPrice=100 } } }); }` Great, thank you! The information you two provide is very helpful and we have some ideas that we would like to implement to support these scenarios. It is great to see the community interest in this feature. 
If others have similar interests, please let us know! We just released 0.4, which includes CsvDataSet and Jsr223PreProcessor. Neither is an optimal solution for your use case, but you can use them in the meantime as workarounds or approximations while we come up with a better solution. Regards
2025-04-01T06:37:43.991241
2016-08-08T14:31:52
169937731
{ "authors": [ "lsensible" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3141", "repo": "abunsen/Paython", "url": "https://github.com/abunsen/Paython/pull/33" }
gharchive/pull-request
using PostGateway for authorize.net API as HTTP GET requests not supported using PostGateway for authorize.net API as HTTP GET requests not supported Hi thank you for merging this in ... If you're working towards a new version can you give us an idea of when it will be available via the PIP packaging system?
2025-04-01T06:37:43.995000
2019-03-16T02:35:58
421760515
{ "authors": [ "aburrell", "asreimer" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3142", "repo": "aburrell/aacgmv2", "url": "https://github.com/aburrell/aacgmv2/issues/33" }
gharchive/issue
Question: logbook dependency Quick question, why is the logbook used instead of logging? I didn't dig through the codebase much, but it doesn't look like the features of logbook are being used in a way that necessitates using it? At some point logging was deprecated and logbook was the replacement. I made the change, but then started using warnings more and kept logging for more informational output. If you have a compelling reason to change things, I have a deprecation warning that needs to be propagated from the develop to the master branch and am willing to do some additional stylistic fine-tuning. logging was deprecated? I can't find any information about this with a Google search. It's included in python 3.7 even: https://docs.python.org/3/library/logging.html Mostly just curious. The most compelling reason to use logging instead of logbook is that users don't need to install another dependency if logging is used, especially if none of the logbook functionality is being used. I wish I could point you somewhere, but it was a few years ago and I don't remember. Clearly it didn't happen. Removing dependencies is a good enough reason for me. I will put it on the to-do list. No worries. I am mostly curious because we use logging a lot and if it were deprecated I would want to be moving away from it to something else! Addressed in new branch, will be out in the next version.
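A minimal sketch of what dropping the logbook dependency could look like, assuming only basic informational output and warnings are needed (this is not the actual patch on the development branch):

```python
import logging
import warnings

logger = logging.getLogger("aacgmv2")        # hypothetical logger name
logger.addHandler(logging.NullHandler())     # let the application configure handlers

def clip_latitude(lat):
    """Warn and clip out-of-range latitudes, logging the conversion."""
    if abs(lat) > 90.0:
        warnings.warn("latitude out of range, clipping to +/-90 degrees")
        lat = max(min(lat, 90.0), -90.0)
    logger.info("converting latitude %.2f", lat)
    return lat
```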
2025-04-01T06:37:43.998060
2024-06-02T17:41:35
2329786579
{ "authors": [ "abuzarmahmood", "cmazzio" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3143", "repo": "abuzarmahmood/pytau", "url": "https://github.com/abuzarmahmood/pytau/issues/34" }
gharchive/issue
Changing number of states does not change model fitting Changing states to 3 produces model fitting with 4 states: Could you please add where you changed the number of states and what code you ran to fit the models and generate the plots? I ran fit_with_fit_handler.py and changed only the states variable to 3 Could you please attach the updated scripts you used here. Thanks. Issue selecting correct model https://github.com/abuzarmahmood/pytau/blob/d0a2d6e697579bc934ae802a26a41fd2fa059e33/pytau/how_to/scripts/fit_with_fit_handler.py#L57C1-L61C49 dframe = fit_database.fit_database wanted_exp_name = 'pytau_test' wanted_frame = dframe.loc[dframe['exp.exp_name'] == wanted_exp_name] # Pull out a single data_directory pkl_path = wanted_frame['exp.save_path'].iloc[0] Plots came out fine once the correct model was selected.
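As a sketch only (not pytau's documented API): when several fits share an experiment name, the frame can be narrowed further before grabbing a pickle path. The "model.states" column name below is a guess for illustration; check fit_database.fit_database.columns for the real field names.

```python
import pandas as pd

def select_fit(dframe: pd.DataFrame, exp_name: str, n_states: int) -> str:
    """Return the save path of the most recent fit matching the experiment and state count."""
    subset = dframe.loc[dframe["exp.exp_name"] == exp_name]
    if "model.states" in subset.columns:  # hypothetical column name
        subset = subset.loc[subset["model.states"] == n_states]
    if subset.empty:
        raise ValueError(f"No fit found for {exp_name} with {n_states} states")
    return subset["exp.save_path"].iloc[-1]
```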
2025-04-01T06:37:44.117330
2024-04-02T15:32:41
2220833811
{ "authors": [ "NCU-MC", "Xiyu-AI", "acherstyx" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3149", "repo": "acherstyx/CoCap", "url": "https://github.com/acherstyx/CoCap/issues/12" }
gharchive/issue
ERROR about cv_reader when using MSVD I converted the video according to the method you provided. I found that some errors occurred in the batch of videos(num_workers=12, 4 were correct and 8 were wrong) , the wrong videos are: ./dataset/msvd/videos_240_h264_keyint_60/Nd45qJn61Dw_0_10.avi ./dataset/msvd/videos_240_h264_keyint_60/5P6UU6m3cqk_57_75.avi ./dataset/msvd/videos_240_h264_keyint_60/PD6eQY7yCfw_32_37.avi ./dataset/msvd/videos_240_h264_keyint_60/77iDIp40m9E_159_181.avi ./dataset/msvd/videos_240_h264_keyint_60/9Wr48VFhZH8_45_50.avi ./dataset/msvd/videos_240_h264_keyint_60/HxRK-WqZ5Gk_30_50.avi ./dataset/msvd/videos_240_h264_keyint_60/UgUFP5baQ9Y_0_7.avi ./dataset/msvd/videos_240_h264_keyint_60/PqSZ89FqpiY_65_75.avi and I converted these wrong videos to mp4 but got the same error. I wonder if there is something wrong with the MSVD dataset or cv_reader (I can train normally on MSRVTT). Your help will be greatly appreciated. I converted the video according to the method you provided. I found that some errors occurred in the batch of videos(num_workers=12, 4 were correct and 8 were wrong) , the wrong videos are: ./dataset/msvd/videos_240_h264_keyint_60/Nd45qJn61Dw_0_10.avi ./dataset/msvd/videos_240_h264_keyint_60/5P6UU6m3cqk_57_75.avi ./dataset/msvd/videos_240_h264_keyint_60/PD6eQY7yCfw_32_37.avi ./dataset/msvd/videos_240_h264_keyint_60/77iDIp40m9E_159_181.avi ./dataset/msvd/videos_240_h264_keyint_60/9Wr48VFhZH8_45_50.avi ./dataset/msvd/videos_240_h264_keyint_60/HxRK-WqZ5Gk_30_50.avi ./dataset/msvd/videos_240_h264_keyint_60/UgUFP5baQ9Y_0_7.avi ./dataset/msvd/videos_240_h264_keyint_60/PqSZ89FqpiY_65_75.avi and I converted these wrong videos to mp4 but got the same error. I wonder if there is something wrong with the MSVD dataset or cv_reader (I can train normally on MSRVTT). Your help will be greatly appreciated. Hello, i met the same problem on MSVD dataset. Do you solve this problem?? How to solve this problem?? I converted the video according to the method you provided. I found that some errors occurred in the batch of videos(num_workers=12, 4 were correct and 8 were wrong) , the wrong videos are: ./dataset/msvd/videos_240_h264_keyint_60/Nd45qJn61Dw_0_10.avi ./dataset/msvd/videos_240_h264_keyint_60/5P6UU6m3cqk_57_75.avi ./dataset/msvd/videos_240_h264_keyint_60/PD6eQY7yCfw_32_37.avi ./dataset/msvd/videos_240_h264_keyint_60/77iDIp40m9E_159_181.avi ./dataset/msvd/videos_240_h264_keyint_60/9Wr48VFhZH8_45_50.avi ./dataset/msvd/videos_240_h264_keyint_60/HxRK-WqZ5Gk_30_50.avi ./dataset/msvd/videos_240_h264_keyint_60/UgUFP5baQ9Y_0_7.avi ./dataset/msvd/videos_240_h264_keyint_60/PqSZ89FqpiY_65_75.avi and I converted these wrong videos to mp4 but got the same error. I wonder if there is something wrong with the MSVD dataset or cv_reader (I can train normally on MSRVTT). Your help will be greatly appreciated. I find that it can run normally on MSRVTT. Does the MSVD dataset have some problems? 并没有得到解决,所以我只做了MSRVTT的实验o(╥﹏╥)o I converted the video according to the method you provided. 
I found that some errors occurred in the batch of videos (num_workers=12, 4 were correct and 8 were wrong), the wrong videos are: ./dataset/msvd/videos_240_h264_keyint_60/Nd45qJn61Dw_0_10.avi ./dataset/msvd/videos_240_h264_keyint_60/5P6UU6m3cqk_57_75.avi ./dataset/msvd/videos_240_h264_keyint_60/PD6eQY7yCfw_32_37.avi ./dataset/msvd/videos_240_h264_keyint_60/77iDIp40m9E_159_181.avi ./dataset/msvd/videos_240_h264_keyint_60/9Wr48VFhZH8_45_50.avi ./dataset/msvd/videos_240_h264_keyint_60/HxRK-WqZ5Gk_30_50.avi ./dataset/msvd/videos_240_h264_keyint_60/UgUFP5baQ9Y_0_7.avi ./dataset/msvd/videos_240_h264_keyint_60/PqSZ89FqpiY_65_75.avi and I converted these wrong videos to mp4 but got the same error. I wonder if there is something wrong with the MSVD dataset or cv_reader (I can train normally on MSRVTT). Your help will be greatly appreciated. I find that it can run normally on MSRVTT. Does the MSVD dataset have some problems? It has not been resolved, so I only ran the experiments on MSRVTT o(╥﹏╥)o I converted the video according to the method you provided.
2025-04-01T06:37:44.192429
2023-10-13T17:13:14
1942315323
{ "authors": [ "bun137", "sumukhacharya03" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3150", "repo": "acmpesuecc/Intelligent_Traffic_Light_System", "url": "https://github.com/acmpesuecc/Intelligent_Traffic_Light_System/pull/4" }
gharchive/pull-request
soundSensor.ino changed I have added ambulance sensor to the program. reviewing! @sumukhacharya03 could you please come meet me in room 001?
2025-04-01T06:37:44.208943
2024-11-11T22:59:29
2650594995
{ "authors": [ "StrongMonkey", "sangee2004" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3151", "repo": "acorn-io/acorn", "url": "https://github.com/acorn-io/acorn/issues/542" }
gharchive/issue
Knowledge - Ingestion time field overlaps with the "Unsupported" status. Steps to reproduce the problem: Create an agent with knowledge files that are unsupported, like .xlsx files. Notice that the ingestion time field overlaps with the "Unsupported" status when the ingestion status is reported for these files. We changed the UX a bit, so this should no longer be relevant. This issue is not relevant anymore.
2025-04-01T06:37:44.219825
2024-04-09T13:27:06
2233441836
{ "authors": [ "adku1173", "esarradj" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3152", "repo": "acoular/acoular", "url": "https://github.com/acoular/acoular/issues/204" }
gharchive/issue
Some solvers ind CMF can yield negative source strength results including: FISTA Split-Bregman LassoLars LassoLarsBIC(?) Reproduce with: import acoular from acoular import L_p, Calib, MicGeom, Environment, PowerSpectra, \ RectGrid, MaskedTimeSamples,BeamformerCMF, SteeringVector # other imports from os import path from pylab import figure, imshow, colorbar, title # files datafile = 'example_data.h5' calibfile = 'example_calib.xml' micgeofile = path.join( path.split(acoular.__file__)[0],'xml','array_56.xml') #octave band of interest cfreq = 4000 t1 = MaskedTimeSamples(name=datafile) t1.start = 0 # first sample, default t1.stop = 16000 # last valid sample = 15999 invalid = [1,7] # list of invalid channels (unwanted microphones etc.) t1.invalid_channels = invalid t1.calib = Calib(from_file=calibfile) m = MicGeom(from_file=micgeofile) m.invalid_channels = invalid g = RectGrid(x_min=-0.6, x_max=-0.0, y_min=-0.3, y_max=0.3, z=0.68, increment=0.05) env = Environment(c = 346.04) st = SteeringVector(grid=g, mics=m, env=env) f = PowerSpectra(time_data=t1, window='Hanning', overlap='50%', block_size=128, #FFT-parameters ind_low=8, ind_high=16) #to save computational effort, only # frequencies with indices 8..15 are used bcmf = BeamformerCMF(freq_data=f, steer=st, method='LassoLars') figure(1,(10,6)) smap = bcmf.synthetic(cfreq,1) print(smap.min()) For sklearn solvers there is a new positive parameter which can be set to enforce strictly non-negative results.
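A small sketch of the suggested direction, assuming the solver is built directly from scikit-learn (this is not a tested acoular patch): passing positive=True to the estimator constrains the coefficients, i.e. the source strengths, to be non-negative.

```python
from sklearn.linear_model import LassoLars
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(64, 16))           # stand-in for the transfer matrix
x_true = np.abs(rng.normal(size=16))    # non-negative "source strengths"
b = A @ x_true

model = LassoLars(alpha=1e-3, positive=True)  # positive=True is the key change
model.fit(A, b)
print("min coefficient:", model.coef_.min())  # stays >= 0 by construction
```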
2025-04-01T06:37:44.236774
2023-06-20T11:29:28
1765208208
{ "authors": [ "abhijeetnarvekar" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3153", "repo": "acrolinx/sidebar-sdk-dotnet", "url": "https://github.com/acrolinx/sidebar-sdk-dotnet/pull/26" }
gharchive/pull-request
Use Sonar GitHub Action Remove downloading the Sonar build executable and running the scan manually; instead, use the out-of-the-box action provided by SonarSource: https://github.com/marketplace/actions/sonarcloud-scan There is no Sonar action available out of the box for classic .NET projects. The default Sonar action only works on the Ubuntu image and can be used for .NET Core projects, but not for Windows-based .NET projects.
2025-04-01T06:37:44.244833
2024-05-03T12:02:24
2277537619
{ "authors": [ "dabreegster" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3154", "repo": "acteng/inspectorate_tools", "url": "https://github.com/acteng/inspectorate_tools/issues/39" }
gharchive/issue
The horse rider case A special question in Path Check that trickles down elsewhere This is mostly done. The formulas on row 475 of Lookups1 are complex. If PA25 is N/A, then it uses PA22. If PA29 is N/A, then PA26. To sanely match this behavior, we need the unit tests to be working, and vary some of these things.
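A tiny sketch of that N/A fallback rule (the variable names below are just the spreadsheet cell labels, not identifiers from the codebase):

```python
def with_fallback(primary, fallback):
    """Use the fallback answer whenever the primary answer is N/A."""
    return fallback if primary in (None, "N/A") else primary

pa22, pa25 = 3, "N/A"
pa26, pa29 = 5, 2
score_a = with_fallback(pa25, pa22)  # PA25 is N/A, so PA22 is used
score_b = with_fallback(pa29, pa26)  # PA29 has a value, so it is kept
```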
2025-04-01T06:37:44.261262
2020-12-12T11:51:02
763652841
{ "authors": [ "dhadka", "fugkco", "mmomtchev" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3155", "repo": "actions/cache", "url": "https://github.com/actions/cache/issues/485" }
gharchive/issue
"Unable to reserve cache with key" Hi, I've got a workflow where I'm using this action, however, it doesn't seem to be caching anything. I get the error: Unable to reserve cache with key Linux-xmltv-cache, another job may be creating this cache. Obviously, because of it, subsequent jobs do not find any caches. Cache usage (full workflow): - name: XMLTV cache id: cache-xmltv uses: actions/cache@v2 with: path: /config/.xmltv/cache key: ${{ runner.os }}-xmltv-cache Link to workflow Hi @fugkco, this is usually a symptom indicating the creation of the cache failed. To avoid wasting unnecessary time creating the cache if it already exists, the cache is first "reserved", then the runner creates the compressed tar with the cache contents, and finally that is uploaded to the server. If something breaks between when the cache is reserved and when the upload finishes, the cache gets stuck in the "reserved" state. You will then see the "Unable to reserve cache...another job may be creating this cache" message until that cache is automatically removed (it could take up to 24 hours but work is underway to reduce this delay). Instead of waiting that long, you can also change the cache key, for example: key: ${{ runner.os }}-xmltv-cache-v2 After making this change, take a look at the Post Cache step on the next run. It will show an error if there is any issue creating the cache. If I had to guess, the failure may have to do with the cache being rooted at /config. If you see any errors related to that path, try moving that folder to within the GitHub workspace (this is the default folder each step runs in). I realise that I saw this response, and used gave it a cache key, which worked, but I forgot to reply and close this issue. Anyway, so it worked. Thanks for the help! @dhadka what is the point of creating a new cache key? If this is what it seems, the best course of action would be to ignore the error. You start X concurrent jobs, they all produce the same artifact, then they try to save it using the same cache key. You need only one of these to succeed. Ideally, you should be able to run only one of those jobs, then make the other X-1 wait on it to get its artifact from the cache, but AFAIK, there is no easy way to do this.
2025-04-01T06:37:44.267385
2024-10-08T17:02:43
2573752004
{ "authors": [ "AnimMouse", "Jason3S", "joshmgross", "ulgens" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3156", "repo": "actions/cache", "url": "https://github.com/actions/cache/pull/1467" }
gharchive/pull-request
Restore original behavior of cache-hit output #1262 #1466 As described in #1466, #1404 broke workflows that relied on the previous behavior of cache-hit being an empty string when there was a cache miss: if: ${{ steps.cache-restore.outputs.cache-hit }} If cache-hit is an empty string, this would work as expected. When cache-hit is false, this is a non-empty string and treated as "true". Actions outputs do not support proper boolean values, and we can't guarantee how users are interpreting this string output. This PR reverts #1404 and updates the README to clarify all possible values of cache-hit: 'true' if there's an exact match 'false' if there's a cache hit with a restore key '' if there's a cache miss This is likely confusing to users, but we should maintain the existing behavior to avoid breaking existing workflows. @joshmgross Thank you 🙏 @joshmgross Thank you! This is likely confusing to users, but we should maintain the existing behavior to avoid breaking existing workflows. I can see the reasoning here but this shouldn't mean a weird behaviour should be kept just because it's breaking. IMO having it in the release notes and making the necessary change in version number should be enough, which was already done. Having this PR merged feels like a step in the wrong direction and there should be a plan to properly handle that breaking change. IMO having it in the release notes and making the necessary change in version number should be enough, which was already done. Per https://semver.org/, a breaking change needs to be a major version bump (i.e. v5 of this action). I agreed that we should fix this confusing behavior, but we're not ready to create a new major version of this action. https://github.com/actions/cache/issues/1466#issuecomment-2400480945
2025-04-01T06:37:44.269187
2022-03-13T23:28:54
1167726547
{ "authors": [ "Phantsure", "lookfirst" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3157", "repo": "actions/cache", "url": "https://github.com/actions/cache/pull/765" }
gharchive/pull-request
[docs] remove star search from yarn.lock i dunno, some recent change to cache caused the **/yarn.lock lookup to start failing for me. I just have a standard project with a yarn.lock in the top level and this fixed it. @lookfirst Changing the path of yarn file in example may not help everyone. That path is for the yarn.lock file which can be at different location for different projects. Thanks for your contribution. @lookfirst Can you try it again with the current recommendation and latest version of cache? It seems to work fine for me
2025-04-01T06:37:44.278413
2022-08-25T07:21:13
1350440134
{ "authors": [ "Javernaut", "mikhailkoliada" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3158", "repo": "actions/runner-images", "url": "https://github.com/actions/runner-images/issues/6141" }
gharchive/issue
Update Android NDK to r25b LTS Tool name Android NDK Tool license Same as before Add or update? [ ] Add [X] Update Desired version 25.1.8937393 Approximate size No response Brief description of tool No response URL for tool's homepage No response Provide a basic test case to validate the tool's functionality. No response Platforms where you need the tool [ ] Azure DevOps [X] GitHub Actions Virtual environments where you need the tool [ ] Ubuntu 18.04 [X] Ubuntu 20.04 [X] Ubuntu 22.04 [X] macOS 10.15 [X] macOS 11 [X] macOS 12 [ ] Windows Server 2019 [ ] Windows Server 2022 Can this tool be installed during the build? No response Tool installation time in runtime No response Are you willing to submit a PR? No response Will be deployed next week. Deployed
2025-04-01T06:37:44.287952
2023-06-26T08:46:41
1774255295
{ "authors": [ "erik-bershel", "greadtm", "mikhailkoliada", "sivakolisetti" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3159", "repo": "actions/runner-images", "url": "https://github.com/actions/runner-images/issues/7784" }
gharchive/issue
Timeout waiting for SSH. Description Syntax-only check passed. Everything looks okay. Show Packer Version 1.9.1 Build ubuntu2204 VM ==> azure-arm.build_vhd: Running builder ... ==> azure-arm.build_vhd: Getting tokens using client secret ==> azure-arm.build_vhd: Getting tokens using client secret azure-arm.build_vhd: Creating Azure Resource Manager (ARM) client ... ==> azure-arm.build_vhd: Warning: You are using Azure Packer Builder to create VHDs which is being deprecated, consider using Managed Images. Learn more https://www.packer.io/docs/builders/azure/arm#azure-arm-builder-specific-options ==> azure-arm.build_vhd: Getting source image id for the deployment ... ==> azure-arm.build_vhd: Creating resource group ... ==> azure-arm.build_vhd: Validating deployment template ... ==> azure-arm.build_vhd: Deploying deployment template ... ==> azure-arm.build_vhd: Getting the VM's IP address ... ==> azure-arm.build_vhd: Waiting for SSH to become available... ==> azure-arm.build_vhd: Timeout waiting for SSH. ==> azure-arm.build_vhd: ==> azure-arm.build_vhd: Deleting individual resources ... Platforms affected [ ] Azure DevOps [X] GitHub Actions - Standard Runners [ ] GitHub Actions - Larger Runners Runner images affected [ ] Ubuntu 20.04 [X] Ubuntu 22.04 [ ] macOS 11 [ ] macOS 12 [ ] macOS 13 [ ] Windows Server 2019 [ ] Windows Server 2022 Image version and build link Current hosted runner is version: '2.305.0' Is it regression? n/a Expected behavior no errors Actual behavior it's broken Repro steps see description @sivakolisetti this means there are the problem in your setup, packer can not connect to a newly created vm, it is outside of our scope @sivakolisetti as it already said, it looks like issue not on our side. Our builds work fine the same time. Please provide more info about what are you tried to do. Hello @sivakolisetti! I am forced to close the issue due to the fact that it is not repeated in our infrastructure and on test runners. If you still have any questions related to this problem, then you can ask them here - we can always reopen the issue. In case of other problems, please open a new issue. Separately, I can add that the described problem can be both transitive and related to local settings in your organization, but not with the work of the scripts themselves at the moment. @mikhailkoliada is correct our vnet has does not internet connectivity once vm created. We created separate vnet then actions running successfully . thanks for guys who reply this issue. Hi @sivakolisetti, what did you add to the created vnet to accomodate ssh from the created vm? Did you need to add an outbound or inbound ssh rule? Thanks.
2025-04-01T06:37:44.322098
2023-08-22T14:21:29
1861571943
{ "authors": [ "MostefaKamalLala", "aparnajyothi-y", "codepuncher", "dsame", "dusan-trickovic", "gzurbach", "iBotPeaches" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3160", "repo": "actions/setup-node", "url": "https://github.com/actions/setup-node/issues/835" }
gharchive/issue
Heavy cache miss count Description: A large collection of "cache miss" and only a handful of "cache hit". Like why was yocto-queue a hit but not fontawesome? So this is being reported because you get charged for using bandwidth when using Font Awesome. Sadly in our case - the basic example used here does not properly cache all items and I'm not following why. We get a hit restoring the cache blob, but it appears its quite small to what I expect. /home/runner/.npm Received 24691139 of 24691139 (100.0%), 30.4 MBs/sec Cache Size: ~24 MB (24691139 B) Cache restored successfully Cache restored from key: node-cache-Linux-npm-912ac010d33c9c8d3694535a04cbea9be930f020a6b09d60b707ae2ce2d4db8b I know this action does NOT cache node_modules, but then I struggle to understand the benefit I'm gaining here. Its clearly caching and restoring something, but since it doesn't cache the thing that costs us money then :( We then get billed for overages and Font Awesome support just tells us to follow this guide which we clearly do. So the large collection of misses was resolved by manually deleting/purging all the caches from that repository. The default cache key I assume is the lockfile hash, so why this happened - I'm not sure. Now the only thing not caching is the links from a private registry. This private registry (Font Awesome) does not appear to ever successfully cache itself. I assume since the non node_modules cache is HTTP responses and these expire after 20 minutes. So the cache is respecting the TTL and thus becoming misses on builds. I don't know if this means we should go back to rolling our own cache - (caching node_modules, skipping install if cache hit, etc) or just missing something here. Hello, @iBotPeaches ! Thanks for reporting this issue, we will take a closer look and see what can be done :) I too am experiencing this same issue with FontAwesome. It is having a large impact on us and our budgets with overage charges. We have many repositories that are facing this issue. The only difference that I can see is that we are using yarn rather than npm. @codepuncher - I dug into this a lot with FA, who honestly just kept redirecting me back to official GitHub docs on caching. You can't really use setup-node, etc with FA Pro and expect to keep your bandwidth. We mixed two forms of cache now. - name: Setup node uses: actions/setup-node@v3 with: node-version: ${{ inputs.node-version }} cache: 'npm' - uses: actions/cache@v3 id: npm-cache with: path: node_modules key: ${{ inputs.node-version }}-node-${{ hashFiles('**/package-lock.json') }} restore-keys: ${{ inputs.node-version }}-node-${{ hashFiles('**/package-lock.json') }} - run: npm ci if: steps.npm-cache.outputs.cache-hit != 'true' working-directory: ${{ inputs.directory }} shell: bash (this is in a composite action) We restore cache using regular setup-node, but then we attempt to restore a 1-1 cache based on the exact hash of the lock file. If we find a match for that, we skip npm ci - which is what would eventually hit FA Pro servers again and bleed bandwidth due to what I described above. If that cache is a miss, we already restored the npm global cache (the 1st cache) so our cold/miss timing is still pretty fast as we are just installing the FA Pro, the private packages and whatever caused the miss (new version, etc) This has kept us under the 2gb bandwidth limit for Sept. 
Hello @iBotPeaches, The problem is not reproducible. 1st run https://github.com/akv-demo/setup-node-test/actions/runs/7251833566/job/19755011576#step:4:9 Run npm ci --verbose ... npm http fetch GET 200 https://registry.npmjs.org/font-awesome/-/font-awesome-4.7.0.tgz 93ms (cache miss) npm http fetch POST 200 https://registry.npmjs.org/-/npm/v1/security/advisories/bulk 357ms 2nd run https://github.com/akv-demo/setup-node-test/actions/runs/7251833566/job/19755025500#step:4:21 Run npm ci --verbose ... npm http fetch POST 200 https://registry.npmjs.org/-/npm/v1/security/advisories/bulk 127ms The idea of the action is to cache the ~/.npm directory, which contains the downloaded packages. Caching node_modules does not seem like a good idea because its contents can get out of sync. It is impossible to find the exact reason why the ~/.npm directory does not contain the whole set of packages, but in theory it can be caused by the repository having an outdated package-lock.json. Did it help?
npm verb cli /opt/hostedtoolcache/node/18.19.0/x64/bin/node /opt/hostedtoolcache/node/18.19.0/x64/bin/npm npm info using<EMAIL_ADDRESS>npm info using node@v18.19.0 npm verb title npm ci npm verb argv "ci" "--loglevel" "verbose" npm verb logfile logs-max:10 dir:/home/runner/.npm/_logs/2023-12-19T15_53_14_549Z- npm verb logfile /home/runner/.npm/_logs/2023-12-19T15_53_14_549Z-debug-0.log npm http fetch POST 200 https://registry.npmjs.org/-/npm/v1/security/advisories/bulk 494ms npm info run<EMAIL_ADDRESS>postinstall node_modules/@fortawesome/fontawesome-common-types node attribution.js npm info run<EMAIL_ADDRESS>postinstall node_modules/@fortawesome/fontawesome-svg-core node attribution.js npm info run<EMAIL_ADDRESS>postinstall { code: 0, signal: null } npm info run<EMAIL_ADDRESS>postinstall { code: 0, signal: null } { "dependencies": { "@fortawesome/fontawesome-svg-core": "^6.5.1", "@fortawesome/pro-solid-svg-icons": "^6.5.1" } } @fortawesome:registry=https://npm.fontawesome.com //npm.fontawesome.com/:_authToken=${FONTAWESOME_NPM_AUTH_TOKEN} This worked fine. So I removed my workaround on the actual affected application and it was immediate misses again. Unfortunately the difference between the real affected application and this sample is ~98,000 different node packages. So at this point I can't replicate it in my test, but my real application continues to replicate it. I'm happy with a workaround, but I understand that if you can't replicate, you can't fix. I'll close this, and if others stumble upon this with FA Pro and have the same bandwidth overage issue, maybe they can use my sample that works and/or discover the root cause themselves. Thanks @iBotPeaches / @dsame / @codepuncher - Is this issue still occurring using actions/setup-node@v4? Occurring for me!
For more information on the caching behavior in GitHub Actions and actions/setup-node, you can refer to the following documentation: GitHub Actions Cache Documentation setup-node Action Documentation We hope this helps clarify the caching behavior. Please feel free to reach out if you have any further questions or need additional assistance! Thank you @iBotPeaches for the suggested workaround. We will give it a try tomorrow. Our workaround was to use Kits as a way to reduce the significant bandwidth usage caused by our CI/CD because of pro icon packages. Our plan was to configure the Kit to include only the specific icons used in our project, which would have allowed us to uninstall the larger packages containing all icons. Unfortunately, this idea also proved to be a dead end since Kits are not supported in React Native. As a result, we're left with either caching the entire node_modules folder as you suggested, or disabling CI/CD entirely... for the sake of icons 🤷‍♂️ We hope this caching issue can be addressed soon. Hello @iBotPeaches, Thank you for your response, and apologies for the delay. You're absolutely right that the behavior around CDN assets like Font Awesome isn't fully addressed in the current documentation. It’s also true that the caching mechanism in actions/setup-node is primarily designed for npm dependencies, not node_modules and external CDN assets. I’m glad to hear that your workaround caching node_modules alongside setup-node is working well for you. I agree that using a cold cache to handle potential misses is a solid approach, as it ensures both your npm dependencies and external assets are reliably available. However, please note that caching node_modules or external CDN assets (e.g., Font Awesome) is not implemented by default to avoid inconsistencies across environments (OS, Node.js versions). We are actively working to update the documentation to provide more clarity on caching behaviors, and we’ll definitely consider your feedback when addressing future caching use cases. If anything changes or you need further assistance, please don't hesitate to reach out. Hello @iBotPeaches, Thanks for the response! The setup-node action already keys the cache based on variables such as Node.js version, OS, and architecture. This ensures that the cache is environment-specific, helping to avoid issues with mismatched dependencies and improving cache hit rates across different Node.js versions and platforms. We are proceeding to close this issue, but we will keep it open for a documentation update in the setup-node action.
2025-04-01T06:37:44.347851
2020-02-28T14:29:17
572790713
{ "authors": [ "gregorybleiker", "miketimofeev" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3161", "repo": "actions/virtual-environments", "url": "https://github.com/actions/virtual-environments/issues/479" }
gharchive/issue
Update/Add azure cli to 2.1.0 Tool information Tool name: az cli Add or update? update Desired version: 2.1.0 Area for Triage: Virtual environments affected [ ] macOS 10.15 [ ] Ubuntu 16.04 LTS [x ] Ubuntu 18.04 LTS [ ] Windows Server 2016 R2 [x ] Windows Server 2019 Can this tool be installed during the build? Don't know Are you willing to submit a PR? No Hi @gregorybleiker! Az-Cli 2.1.0 is already rolling out. You can find it in pre-release readmes here: https://github.com/actions/virtual-environments/releases Feel free to reopen the issue if you have any concerns. Thanks! Can I use this in my pipelines somehow? What is the ETA for the rollout?
2025-04-01T06:37:44.444087
2021-08-24T14:53:46
978188779
{ "authors": [ "max-acumen" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3162", "repo": "acumenlabs/status-page", "url": "https://github.com/acumenlabs/status-page/issues/1428" }
gharchive/issue
⚠️ Processors has degraded performance In c248588, Processors ($STATUS_URL) experienced degraded performance: HTTP code: 200 Response time: 94 ms Resolved: Processors performance has improved in 411518f.
2025-04-01T06:37:44.445552
2022-10-07T22:44:09
1401724352
{ "authors": [ "danielshir" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3163", "repo": "acumenlabs/status-page", "url": "https://github.com/acumenlabs/status-page/issues/4287" }
gharchive/issue
⚠️ Processors has degraded performance In b6e451f, Processors ($STATUS_URL) experienced degraded performance: HTTP code: 200 Response time: 95 ms Resolved: Processors performance has improved in 26696f8.
2025-04-01T06:37:44.446965
2023-02-05T23:32:57
1571662857
{ "authors": [ "danielshir" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3164", "repo": "acumenlabs/status-page", "url": "https://github.com/acumenlabs/status-page/issues/4557" }
gharchive/issue
⚠️ Processors has degraded performance In 6bb5d44, Processors ($STATUS_URL) experienced degraded performance: HTTP code: 200 Response time: 77 ms Resolved: Processors performance has improved in e49e544.
2025-04-01T06:37:44.450851
2024-06-19T13:55:38
2362459015
{ "authors": [ "eltonfss", "hannahbast" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3165", "repo": "ad-freiburg/qlever-control", "url": "https://github.com/ad-freiburg/qlever-control/pull/47" }
gharchive/pull-request
Update Qleverfile.pubchem Fixing error: qlever index --system=native To enable autocompletion, run the following command, and consider adding it to your `.bashrc` or `.zshrc`: eval "$(register-python-argcomplete qlever)" && export QLEVER_ARGCOMPLETE_ENABLED=1 Command: index echo '{ "languages-internal": [], "prefixes-external": [""], "ascii-prefixes-only": false, "num-triples-per-batch": 1000000 }' > pubchem.settings.json ulimit -Sn 1048576; zcat pubchem.additional-ontologies.nt.gz nt.2024-02-03/*.nt.gz | IndexBuilderMain -F ttl -f - -i pubchem -s pubchem.settings.json --stxxl-memory 10G | tee pubchem.index-log.txt No file matching "pubchem.additional-ontologies.nt.gz" found Did you call `qlever get-data`? If you did, check GET_DATA_CMD and INPUT_FILES in the QLeverfile @eltonfss @Qup42 The problem is that these ontologies are many and have to be downloaded from various websites, which change frequently. Hence the comment at the top of the Qleverfile. As is, the file should be removed from INPUT_FILES, as @eltonfss suggests. Right now, I am trying out whether it still works. If it does, I will make the change and close this. @eltonfss There is now a brand-new Qleverfile for PubChem, which also dowloads the latest version of the ontologies, see 589d5b97ba2ffb68681423c3d844e4c249ae094a . Please try it out and let me know if it works for you.
2025-04-01T06:37:44.463241
2024-06-14T23:03:49
2354212981
{ "authors": [ "FoamyGuy", "tannewt" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3166", "repo": "adafruit/Adafruit_CircuitPython_BLE_File_Transfer", "url": "https://github.com/adafruit/Adafruit_CircuitPython_BLE_File_Transfer/issues/28" }
gharchive/issue
Infinite Loop Trying to run listdir Example When attempting to run the listdir example https://github.com/adafruit/Adafruit_CircuitPython_BLE_File_Transfer/blob/main/examples/ble_file_transfer_listdirs.py it's resulting in getting stuck in an infinite loop inside of here: https://github.com/adafruit/Adafruit_CircuitPython_BLE_File_Transfer/blob/main/adafruit_ble_file_transfer.py#L147 I added print statements and can see that the value of read is always 0. I also printed buffer and it was filled with \x00s It's unclear to me if this worked in the past and perhaps the workflow API was changed, but the library was not, or perhaps this functionality was not ever working. I tested with a Feather Sense with BLE Workflow enabled and an Itsy Bitsy nrf52840 as the client running the listdir script. I did originally notice this behavior first on the PC using Blinka_Bleio, but then moved off the PC to using two MCUs and confirmed seeing the same infinite looping behavior. Did you figure this out? I think I understand it a bit better, but wouldn't say I figured it out completely. When everything is working as expected this library does run successfully on the micro-controller and does not infinitely loop. It does still have an infinite loop when run on a PC under blinka_bleio. So this could be closed and if there is a desire to support that environment then an issue could be created over in that repo. For now I've moved to just using bleak module directly without blinka and blinka_bleio and I'm making progress that way. The times that I did see this library infinite loop on the MCU (mentioned in the original comment), I believe it was due to the "server device" (the one running BLE workflow) had gotten into a "bad state" ultimately caused by partially broken implementations while working on the PC / blinka_bleio version. I did not document specific examples that led to the bad state, but I did experience it a few times. I found that using ctrl+C / ctrl+D to restart the code.py or REPL would typically get it back into a good state. I think maybe once I ended up having to unplug / replug to let it fully reboot. I'll close this one.
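Not the library's actual code, just the shape of the guard that avoids this failure mode: bound the read loop with a timeout so a characteristic that keeps returning 0 bytes raises instead of spinning forever.

```python
import time

def read_exactly(read_into, buffer, timeout=10.0):
    """Fill `buffer` via repeated read_into() calls, bailing out if no bytes arrive in time."""
    total = 0
    deadline = time.monotonic() + timeout
    view = memoryview(buffer)
    while total < len(buffer):
        count = read_into(view[total:])
        if count:
            total += count
            deadline = time.monotonic() + timeout  # any progress resets the clock
        elif time.monotonic() > deadline:
            raise TimeoutError(f"stalled after {total} of {len(buffer)} bytes")
    return total
```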
2025-04-01T06:37:44.472244
2024-04-20T20:22:12
2254704856
{ "authors": [ "anecdata", "dhalbert", "justmobilize" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3167", "repo": "adafruit/Adafruit_CircuitPython_ConnectionManager", "url": "https://github.com/adafruit/Adafruit_CircuitPython_ConnectionManager/pull/8" }
gharchive/pull-request
Different pool, different ConnectionManager Update ConnectionManager to support multiple different pools at the same time. Testing steps: Have a device with onboard WiFi and either an ESP32SPI or a WIZNET5K. Connect both and make requests on both, and see that they come from different devices. You can also use a WIZNET5K: create its requests session first, then disconnect it and verify the other device still works. @anecdata here's another one that would be awesome if you could test (with everything else in main), since you can easily see where requests are coming from. Looks good from a testing perspective; maybe someone wants to code review. @dhalbert once this is merged, I can take care of any conflicts and open up the 3x "use new socketpool" PRs @justmobilize is this a bugfix version bump, or a minor or major version bump, in terms of behavior? @dhalbert I would consider it minor. It shouldn't break anything as it's only changing how pseudo-private data is stored.
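As a rough illustration of the testing steps above, this is how two independent sessions over two different pools might be set up. The connection-manager helper names and the WIZNET5K wiring are assumptions about typical usage, not code taken from this PR:
import wifi
import adafruit_connection_manager
import adafruit_requests

# Session backed by the onboard WiFi radio's pool
wifi_pool = adafruit_connection_manager.get_radio_socketpool(wifi.radio)
wifi_ssl = adafruit_connection_manager.get_radio_ssl_context(wifi.radio)
wifi_session = adafruit_requests.Session(wifi_pool, wifi_ssl)

# Second session backed by a different radio, e.g. a WIZNET5K (wiring omitted):
# eth = WIZNET5K(spi, cs)
# eth_pool = adafruit_connection_manager.get_radio_socketpool(eth)
# eth_ssl = adafruit_connection_manager.get_radio_ssl_context(eth)
# eth_session = adafruit_requests.Session(eth_pool, eth_ssl)

# Requests made through the two sessions should originate from different
# devices/addresses, which is exactly what the testing steps check.
print(wifi_session.get("https://wifitest.adafruit.com/testwifi/index.html").status_code)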
2025-04-01T06:37:44.550197
2020-07-09T10:15:22
653942875
{ "authors": [ "edgar-bonet", "ivankravets", "jeroen24", "joshlikespics", "ladyada" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3168", "repo": "adafruit/RTClib", "url": "https://github.com/adafruit/RTClib/issues/175" }
gharchive/issue
rtc libary in visual studio code hey, i am trying to youse the exampel code . but every time i get this error . i think dear is a libary to short in visual studio code but i dont now witch.( i already installed TinywireM). can enyone help me? this is the errors i get: /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp: In function 'void USI_TWI_Master_Initialise()': /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:64:3: error: 'PORT_USI' was not declared in this scope PORT_USI |= ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:66:11: error: 'PIN_USI_SDA' was not declared in this scope << PIN_USI_SDA); // Enable pullup on SDA, to set high as released state. ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:69:11: error: 'PIN_USI_SCL' was not declared in this scope << PIN_USI_SCL); // Enable pullup on SCL, to set high as released state. ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:71:3: error: 'DDR_USI' was not declared in this scope DDR_USI |= (1 << PIN_USI_SCL); // Enable SCL as output. ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:74:3: error: 'USIDR' was not declared in this scope USIDR = 0xFF; // Preload dataregister with "released level" data. ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:75:3: error: 'USICR' was not declared in this scope USICR = (0 << USISIE) | (0 << USIOIE) | // Disable Interrupts. ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:75:17: error: 'USISIE' was not declared in this scope USICR = (0 << USISIE) | (0 << USIOIE) | // Disable Interrupts. ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:75:33: error: 'USIOIE' was not declared in this scope USICR = (0 << USISIE) | (0 << USIOIE) | // Disable Interrupts. ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:76:17: error: 'USIWM1' was not declared in this scope (1 << USIWM1) | (0 << USIWM0) | // Set USI in Two-wire mode. ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:76:33: error: 'USIWM0' was not declared in this scope (1 << USIWM1) | (0 << USIWM0) | // Set USI in Two-wire mode. 
^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:77:17: error: 'USICS1' was not declared in this scope (1 << USICS1) | (0 << USICS0) | ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:77:33: error: 'USICS0' was not declared in this scope (1 << USICS1) | (0 << USICS0) | ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:78:17: error: 'USICLK' was not declared in this scope (1 << USICLK) | // Software stobe as counter clock source ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:79:17: error: 'USITC' was not declared in this scope (0 << USITC); ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:80:3: error: 'USISR' was not declared in this scope USISR = (1 << USISIF) | (1 << USIOIF) | (1 << USIPF) | ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:80:17: error: 'USISIF' was not declared in this scope USISR = (1 << USISIF) | (1 << USIOIF) | (1 << USIPF) | ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:80:33: error: 'USIOIF' was not declared in this scope USISR = (1 << USISIF) | (1 << USIOIF) | (1 << USIPF) | ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:80:49: error: 'USIPF' was not declared in this scope USISR = (1 << USISIF) | (1 << USIOIF) | (1 << USIPF) | ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:81:17: error: 'USIDC' was not declared in this scope (1 << USIDC) | // Clear flags, ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:82:19: error: 'USICNT0' was not declared in this scope (0x0 << USICNT0); // and reset counter. ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp: In function 'unsigned char USI_TWI_Start_Transceiver_With_Data(unsigned char*, unsigned char)': /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:161:13: error: 'USISIF' was not declared in this scope (1 << USISIF) | (1 << USIOIF) | (1 << USIPF) | ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:161:29: error: 'USIOIF' was not declared in this scope (1 << USISIF) | (1 << USIOIF) | (1 << USIPF) | ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:161:45: error: 'USIPF' was not declared in this scope he terminal process terminated with exit code: 1 Terminal will be reused by tasks, press any key to close it. Executing task: platformio run < Processing uno (platform: atmelavr; board: uno; framework: arduino) Verbose mode can be enabled via -v, --verbose option CONFIGURATION: https://docs.platformio.org/page/boards/atmelavr/uno.html PLATFORM: Atmel AVR 2.2.0 > Arduino Uno HARDWARE: ATMEGA328P 16MHz, 2KB RAM, 31.50KB Flash DEBUG: Current (simavr) On-board (simavr) PACKAGES: framework-arduino-avr 5.0.0 toolchain-atmelavr 1.50400.190710 (5.4.0) LDF: Library Dependency Finder -> http://bit.ly/configure-pio-ldf LDF Modes: Finder ~ chain, Compatibility ~ soft Found 9 compatible libraries Scanning dependencies... Dependency Graph |-- 1.10.0 | |-- 1.1.0 | |-- 1.0 |-- 1.0.12 Building in release mode Compiling .pio/build/uno/lib9ac/TinyWireM_ID797/USI_TWI_Master.cpp.o /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp: In function 'void USI_TWI_Master_Initialise()': /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:64:3: error: 'PORT_USI' was not declared in this scope PORT_USI |= ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:66:11: error: 'PIN_USI_SDA' was not declared in this scope << PIN_USI_SDA); // Enable pullup on SDA, to set high as released state. 
^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:69:11: error: 'PIN_USI_SCL' was not declared in this scope << PIN_USI_SCL); // Enable pullup on SCL, to set high as released state. ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:71:3: error: 'DDR_USI' was not declared in this scope DDR_USI |= (1 << PIN_USI_SCL); // Enable SCL as output. ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:74:3: error: 'USIDR' was not declared in this scope Compiling .pio/build/uno/liba29/Wire/utility/twi.c.o USIDR = 0xFF; // Preload dataregister with "released level" data. ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:75:3: error: 'USICR' was not declared in this scope USICR = (0 << USISIE) | (0 << USIOIE) | // Disable Interrupts. ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:75:17: error: 'USISIE' was not declared in this scope USICR = (0 << USISIE) | (0 << USIOIE) | // Disable Interrupts. ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:75:33: error: 'USIOIE' was not declared in this scope USICR = (0 << USISIE) | (0 << USIOIE) | // Disable Interrupts. ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:76:17: error: 'USIWM1' was not declared in this scope (1 << USIWM1) | (0 << USIWM0) | // Set USI in Two-wire mode. ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:76:33: error: 'USIWM0' was not declared in this scope (1 << USIWM1) | (0 << USIWM0) | // Set USI in Two-wire mode. ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:77:17: error: 'USICS1' was not declared in this scope (1 << USICS1) | (0 << USICS0) | ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:77:33: error: 'USICS0' was not declared in this scope (1 << USICS1) | (0 << USICS0) | ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:78:17: error: 'USICLK' was not declared in this scope (1 << USICLK) | // Software stobe as counter clock source ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:79:17: error: 'USITC' was not declared in this scope (0 << USITC); ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:80:3: error: 'USISR' was not declared in this scope USISR = (1 << USISIF) | (1 << USIOIF) | (1 << USIPF) | ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:80:17: error: 'USISIF' was not declared in this scope USISR = (1 << USISIF) | (1 << USIOIF) | (1 << USIPF) | ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:80:33: error: 'USIOIF' was not declared in this scope USISR = (1 << USISIF) | (1 << USIOIF) | (1 << USIPF) | ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:80:49: error: 'USIPF' was not declared in this scope USISR = (1 << USISIF) | (1 << USIOIF) | (1 << USIPF) | ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:81:17: error: 'USIDC' was not declared in this scope (1 << USIDC) | // Clear flags, ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:82:19: error: 'USICNT0' was not declared in this scope (0x0 << USICNT0); // and reset counter. 
^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp: In function 'unsigned char USI_TWI_Start_Transceiver_With_Data(unsigned char*, unsigned char)': /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:161:13: error: 'USISIF' was not declared in this scope (1 << USISIF) | (1 << USIOIF) | (1 << USIPF) | ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:161:29: error: 'USIOIF' was not declared in this scope (1 << USISIF) | (1 << USIOIF) | (1 << USIPF) | ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:161:45: error: 'USIPF' was not declared in this scope (1 << USISIF) | (1 << USIOIF) | (1 << USIPF) | ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:162:13: error: 'USIDC' was not declared in this scope (1 << USIDC) | // Prepare register value to: Clear flags, and ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:163:15: error: 'USICNT0' was not declared in this scope (0x0 << USICNT0); // set USI to shift 8 bits i.e. count 16 clock edges. ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:229:7: error: 'PORT_USI' was not declared in this scope PORT_USI &= ~(1 << PIN_USI_SCL); // Pull SCL LOW. ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:229:26: error: 'PIN_USI_SCL' was not declared in this scope PORT_USI &= ~(1 << PIN_USI_SCL); // Pull SCL LOW. ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:230:7: error: 'USIDR' was not declared in this scope USIDR = *(msg++); // Setup data. ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:234:7: error: 'DDR_USI' was not declared in this scope DDR_USI &= ~(1 << PIN_USI_SDA); // Enable SDA as input. ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:234:25: error: 'PIN_USI_SDA' was not declared in this scope DDR_USI &= ~(1 << PIN_USI_SDA); // Enable SDA as input. ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:266:7: error: 'DDR_USI' was not declared in this scope DDR_USI &= ~(1 << PIN_USI_SDA); // Enable SDA as input. ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:266:25: error: 'PIN_USI_SDA' was not declared in this scope DDR_USI &= ~(1 << PIN_USI_SDA); // Enable SDA as input. ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:272:9: error: 'USIDR' was not declared in this scope USIDR = 0xFF; // Load NACK to confirm End Of Transmission. ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:274:9: error: 'USIDR' was not declared in this scope USIDR = 0x00; // Load ACK. Set data register bit 7 (output for SDA) low. ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp: In function 'unsigned char USI_TWI_Master_Transfer(unsigned char)': /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:295:3: error: 'USISR' was not declared in this scope USISR = temp; // Set USISR according to temp. ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:297:16: error: 'USISIE' was not declared in this scope temp = (0 << USISIE) | (0 << USIOIE) | // Interrupts disabled ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:297:32: error: 'USIOIE' was not declared in this scope temp = (0 << USISIE) | (0 << USIOIE) | // Interrupts disabled ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:298:16: error: 'USIWM1' was not declared in this scope Compiling .pio/build/uno/lib864/RTClib_ID83/RTClib.cpp.o (1 << USIWM1) | (0 << USIWM0) | // Set USI in Two-wire mode. 
^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:298:32: error: 'USIWM0' was not declared in this scope (1 << USIWM1) | (0 << USIWM0) | // Set USI in Two-wire mode. ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:299:16: error: 'USICS1' was not declared in this scope (1 << USICS1) | (0 << USICS0) | ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:299:32: error: 'USICS0' was not declared in this scope (1 << USICS1) | (0 << USICS0) | ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:300:16: error: 'USICLK' was not declared in this scope (1 << USICLK) | // Software clock strobe as source. ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:301:16: error: 'USITC' was not declared in this scope (1 << USITC); // Toggle Clock Port. ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:304:5: error: 'USICR' was not declared in this scope USICR = temp; // Generate positve SCL edge. ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:305:14: error: 'PIN_USI' was not declared in this scope while (!(PIN_USI & (1 << PIN_USI_SCL))) ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:305:30: error: 'PIN_USI_SCL' was not declared in this scope while (!(PIN_USI & (1 << PIN_USI_SCL))) ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:309:28: error: 'USIOIF' was not declared in this scope } while (!(USISR & (1 << USIOIF))); // Check for transfer complete. ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:312:10: error: 'USIDR' was not declared in this scope temp = USIDR; // Read out data. ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:314:3: error: 'DDR_USI' was not declared in this scope DDR_USI |= (1 << PIN_USI_SDA); // Enable SDA as output. ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:314:20: error: 'PIN_USI_SDA' was not declared in this scope DDR_USI |= (1 << PIN_USI_SDA); // Enable SDA as output. ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp: In function 'unsigned char USI_TWI_Master_Start()': /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:324:3: error: 'PORT_USI' was not declared in this scope PORT_USI |= (1 << PIN_USI_SCL); // Release SCL. ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:324:21: error: 'PIN_USI_SCL' was not declared in this scope PORT_USI |= (1 << PIN_USI_SCL); // Release SCL. ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:330:22: error: 'PIN_USI_SDA' was not declared in this scope PORT_USI &= ~(1 << PIN_USI_SDA); // Force SDA LOW. ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:336:9: error: 'USISR' was not declared in this scope if (!(USISR & (1 << USISIF))) { ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:336:23: error: 'USISIF' was not declared in this scope if (!(USISR & (1 << USISIF))) { ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp: In function 'unsigned char USI_TWI_Master_Stop()': /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:349:3: error: 'PORT_USI' was not declared in this scope PORT_USI &= ~(1 << PIN_USI_SDA); // Pull SDA low. ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:349:22: error: 'PIN_USI_SDA' was not declared in this scope PORT_USI &= ~(1 << PIN_USI_SDA); // Pull SDA low. 
^ Compiling .pio/build/uno/lib631/SimpleDHT_ID849/SimpleDHT.cpp.o /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:350:21: error: 'PIN_USI_SCL' was not declared in this scope PORT_USI |= (1 << PIN_USI_SCL); // Release SCL. ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:351:12: error: 'PIN_USI' was not declared in this scope while (!(PIN_USI & (1 << PIN_USI_SCL))) ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:358:9: error: 'USISR' was not declared in this scope if (!(USISR & (1 << USIPF))) { ^ /home/jeroen/.platformio/lib/TinyWireM_ID797/USI_TWI_Master.cpp:358:23: error: 'USIPF' was not declared in this scope if (!(USISR & (1 << USIPF))) { ^ *** [.pio/build/uno/lib9ac/TinyWireM_ID797/USI_TWI_Master.cpp.o] Error 1 ============================================= [FAILED] Took 1.02 seconds ============================================= The terminal process terminated with exit code: 1 Terminal will be reused by tasks, press any key to close it. Why are you using TinyWireM? This is a Wire replacement for ATtiny microcontrollers that do not have a real I2C port. It instead relies on the “universal serial interface” (USI) to provide this functionality. The Uno does not have an USI, but it does have a regular I2C port: you should use the regular Wire library with it. Note that RTClib.cpp uses the TinyWireM library only when being compiled for the ATtiny85, which is clearly not your case. hey , i am not using the tinywirelibary i just addit the libary. this is the exapel code i am yousing: RTC_DS1307 rtc; char daysOfTheWeek[7][12] = {"Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday"}; void setup () { Serial.begin(9600); #ifndef ESP8266 while (!Serial); // wait for serial port to connect. Needed for native USB #endif if (! rtc.begin()) { Serial.println("Couldn't find RTC"); Serial.flush(); abort(); } if (! rtc.isrunning()) { Serial.println("RTC is NOT running, let's set the time!"); // When time needs to be set on a new device, or after a power loss, the // following line sets the RTC to the date & time this sketch was compiled rtc.adjust(DateTime(F(DATE), F(TIME))); // This line sets the RTC with an explicit date & time, for example to set // January 21, 2014 at 3am you would call: // rtc.adjust(DateTime(2014, 1, 21, 3, 0, 0)); } // When time needs to be re-set on a previously configured device, the // following line sets the RTC to the date & time this sketch was compiled // rtc.adjust(DateTime(F(DATE), F(TIME))); // This line sets the RTC with an explicit date & time, for example to set // January 21, 2014 at 3am you would call: // rtc.adjust(DateTime(2014, 1, 21, 3, 0, 0)); } What do you mean by “i just add it the libary”? You should not need to “add” TinyWireM anywhere to compile this example. If you can't prevent platform.io to try to compile it anyway, you should ask for help in a forum dedicated to platform.io. What do you mean by “i just add it the libary”? You should not need to “add” TinyWireM anywhere to compile this example. If you can't prevent platform.io to try to compile it anyway, you should ask for help in a forum dedicated to platform.io. ok thank you for the help. @edgar-bonet this is a feature in platformio where all dependancies are compiled, even when dependancies are not referenced in code for a platform, we've let @ivankravets know, for now folks have to manually remove the unused library Sorry for this issue. Please use lib_ignore option and skip libraries from a build process. 
This provided some more clarity for me on the TinyWireM issue in VS Code: https://github.com/platformio/platformio-core/issues/3543 Here is an example of the suggested fix in the platformio.ini file:
[env:featheresp32]
platform = espressif32
board = featheresp32
framework = arduino
lib_ignore = TinyWireM
2025-04-01T06:37:44.578346
2016-08-17T01:52:21
171557032
{ "authors": [ "SKSJeffrey", "TomasHurtz", "adamcooke" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3169", "repo": "adamcooke/staytus", "url": "https://github.com/adamcooke/staytus/issues/94" }
gharchive/issue
Noob question Can this be installed to sit on a website rather than a server? How would I do that? You would install this onto a server. The server will then expose an IP address:port to the public. You can access that IP address:port in your web browser. I'm afraid I can't help with this issue through GitHub issues. These issues are just for making suggestions or submitting bug reports for Staytus. I cannot help individually with installing or running the application in the wild. I'd suggest trying to post your issue on a site like Stack Overflow.
2025-04-01T06:37:44.664482
2015-10-28T09:18:57
113775310
{ "authors": [ "addyosmani" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3179", "repo": "addyosmani/app-shell", "url": "https://github.com/addyosmani/app-shell/issues/13" }
gharchive/issue
README: Update setup/development instructions Verify that the current docs are correct and improve them if we can. @gauntface Current workflow I'm using: Development: `nodemon server/app.js && gulp dev`; Production build: `gulp`. Otherwise, we drop anything to do with AppEngine that's already in there. Anything else worth adding?
2025-04-01T06:37:44.670379
2022-07-25T10:55:14
1316642301
{ "authors": [ "adelphes", "akash07k" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3180", "repo": "adelphes/android-dev-ext", "url": "https://github.com/adelphes/android-dev-ext/issues/135" }
gharchive/issue
Is development stopped on this extension? I want to know whether development has stopped on this extension. I'm asking since the last update was more than 2 years ago. Hi @akash07k - it's a good question. It's true that no new functionality has been added for a while, but I believe the debugger still works in the latest version of VSCode and I'm happy to accept PRs for any bug fixes or new features people want to add. Ok, so does the last release work well with the latest versions of VS Code? Also, will we get code autocompletion support? Yes, the current release of the Android extension still works with the latest version of VSCode. It includes: Debugging support (breakpoints, stepping, evaluate local variables, etc) View logcat Java code-completion Also, will we get code autocompletion support? Java code-completion for standard Android libraries should work. Pressing ctrl + space should bring up relevant identifiers if they don't appear automatically.
2025-04-01T06:37:44.716711
2020-03-09T16:44:50
578043232
{ "authors": [ "adferrand", "daniela-waranie", "yonathan9669" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3181", "repo": "adferrand/docker-letsencrypt-dns", "url": "https://github.com/adferrand/docker-letsencrypt-dns/issues/78" }
gharchive/issue
DNS challenge with cloudflare issue I am using your docker-compose file with latest letsencrypt-dns and cloudflare as dns provider. When starting the docker container it get the following output: letsencrypt-dns | 2020-03-09 16:18:55 circus[1] [INFO] Starting master on pid 1 letsencrypt-dns | 2020-03-09 16:18:55 circus[1] [INFO] Arbiter now waiting for commands letsencrypt-dns | 2020-03-09 16:18:55 circus[1] [INFO] crond started letsencrypt-dns | 2020-03-09 16:18:55 circus[1] [INFO] watch-domains started letsencrypt-dns | 2020-03-09 16:18:55 [17] | #### Registering Let's Encrypt account if needed #### letsencrypt-dns | 2020-03-09 16:18:55 [17] | Saving debug log to /etc/letsencrypt/logs/letsencrypt.log letsencrypt-dns | 2020-03-09 16:18:55 [17] | There is an existing account; registration of a duplicate account with this command is currently unsupported. letsencrypt-dns | 2020-03-09 16:18:55 [17] | #### Clean autorestart/autocmd jobs letsencrypt-dns | 2020-03-09 16:18:55 [17] | #### Creating missing certificates if needed (~1min for each) #### letsencrypt-dns | 2020-03-09 16:18:55 [17] | >>> Creating a certificate for domain(s): -d nextcloud.mydomain.de letsencrypt-dns | 2020-03-09 16:18:56 [17] | Saving debug log to /etc/letsencrypt/logs/letsencrypt.log letsencrypt-dns | 2020-03-09 16:18:56 [17] | Plugins selected: Authenticator manual, Installer None letsencrypt-dns | 2020-03-09 16:18:57 [17] | Obtaining a new certificate letsencrypt-dns | 2020-03-09 16:18:57 [17] | Performing the following challenges: letsencrypt-dns | 2020-03-09 16:18:57 [17] | dns-01 challenge for nextcloud.mydomain.de letsencrypt-dns | 2020-03-09 16:18:57 [17] | Running manual-auth-hook command: /var/lib/letsencrypt/hooks/authenticator.sh letsencrypt-dns | 2020-03-09 16:18:59 [17] | manual-auth-hook command "/var/lib/letsencrypt/hooks/authenticator.sh" returned error code 1 letsencrypt-dns | 2020-03-09 16:18:59 [17] | Error output from manual-auth-hook command authenticator.sh: letsencrypt-dns | 2020-03-09 16:18:59 [17] | Traceback (most recent call last): letsencrypt-dns | 2020-03-09 16:18:59 [17] | File "/usr/local/bin/lexicon", line 8, in <module> letsencrypt-dns | 2020-03-09 16:18:59 [17] | sys.exit(main()) letsencrypt-dns | 2020-03-09 16:18:59 [17] | File "/usr/local/lib/python3.8/site-packages/lexicon/cli.py", line 117, in main letsencrypt-dns | 2020-03-09 16:18:59 [17] | results = client.execute() letsencrypt-dns | 2020-03-09 16:18:59 [17] | File "/usr/local/lib/python3.8/site-packages/lexicon/client.py", line 77, in execute letsencrypt-dns | 2020-03-09 16:18:59 [17] | self.provider.authenticate() letsencrypt-dns | 2020-03-09 16:18:59 [17] | File "/usr/local/lib/python3.8/site-packages/lexicon/providers/base.py", line 69, in authenticate letsencrypt-dns | 2020-03-09 16:18:59 [17] | return self._authenticate() letsencrypt-dns | 2020-03-09 16:18:59 [17] | File "/usr/local/lib/python3.8/site-packages/lexicon/providers/cloudflare.py", line 32, in _authenticate letsencrypt-dns | 2020-03-09 16:18:59 [17] | payload = self._get('/zones', { letsencrypt-dns | 2020-03-09 16:18:59 [17] | File "/usr/local/lib/python3.8/site-packages/lexicon/providers/base.py", line 142, in _get letsencrypt-dns | 2020-03-09 16:18:59 [17] | return self._request('GET', url, query_params=query_params) letsencrypt-dns | 2020-03-09 16:18:59 [17] | File "/usr/local/lib/python3.8/site-packages/lexicon/providers/cloudflare.py", line 146, in _request letsencrypt-dns | 2020-03-09 16:18:59 [17] | response.raise_for_status() letsencrypt-dns | 
2020-03-09 16:18:59 [17] | File "/usr/local/lib/python3.8/site-packages/request letsencrypt-dns | 2020-03-09 16:18:59 [17] | s/models.py", line 941, in raise_for_status letsencrypt-dns | 2020-03-09 16:18:59 [17] | raise HTTPError(http_error_msg, response=self) letsencrypt-dns | 2020-03-09 16:18:59 [17] | requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://api.cloudflare.com/client/v4/zones?name=mydomain.de&status=active letsencrypt-dns | 2020-03-09 16:18:59 [17] | Waiting for verification... letsencrypt-dns | 2020-03-09 16:19:01 [17] | Challenge failed for domain nextcloud.mydomain.de letsencrypt-dns | 2020-03-09 16:19:01 [17] | dns-01 challenge for nextcloud.mydomain.de letsencrypt-dns | 2020-03-09 16:19:01 [17] | Cleaning up challenges I am a little bit lost as this is the first time i use your docker container and also the first time i use cloudflare. I was creating the dns zone at cloudflare very minimalistic (just a A-record for main domain, but not an A-record for nextcloud.mydomain.de, as it should not be required when doing dns challenges). As far as i understand it is not even necessary that cloudflare is my active nameserver (at least not in this phase of the process: when trying to seed the TXT record) - at validation of dns challenge it is required to have cloudflare being the active nameserver for my domain. At cloudflare this token has all possilble permissions (account + zone) across all resources (accounts and zones), so i think it is not a permission problem. I made cloudflare nameservers active for my domain (confirmed as active by cloudflare webUI) for a second run but still get the same issue (do i have to wait 24h to be sure, that letsencrypt CA is picking the right nameservers?). Please help. OMG! I filled the env variable LEXICON_PROVIDER_OPTIONS with<EMAIL_ADDRESS>--auth-token=cloudflare_api_token, but instead of a cloudflare API Token i should use the cloudflare API Key for --auth-token - naming is a bit confusing here... Now it works. Maybe we should improve the documentation with more examples (at least for all DNS providers with confusing naming). A user-friendly error message instead of a stacktrace would be cool. Yes, I agree that the error is absolutely not obvious here. For cloudflare specifically pathological here, since you have global API keys, and more regular scoped API tokens. Here currently only global API keys are available, and an upstream issue is opened to get the scoped API tokens (https://github.com/AnalogJ/lexicon/pull/460). Getting a nice error from my Docker is quite difficult however, since Lexicon is not sending an fine-grain error about what was wrong. I think it should be quite useful to raise an issue on the Lexicon GitHub project (and continue there, since I also one of the maintainers for this project). I close the issue here, since the error is "technically" solved, but do not hesitate to give further feedbacks! Hello guys, I try to use the Globar API Key as @daniela-waranie suggested but it didn't work for me either. I've been using the docker image for this but I don't know if there is any difference between native and docker run. 
API Token log dnsrobocert | Saving debug log to /etc/letsencrypt/logs/letsencrypt.log dnsrobocert | Plugins selected: Authenticator manual, Installer None dnsrobocert | Obtaining a new certificate dnsrobocert | Performing the following challenges: dnsrobocert | dns-01 challenge for api.mydomain.com dnsrobocert | Running manual-auth-hook command: /usr/bin/python3 -m dnsrobocert.core.hooks -t auth -c "/tmp/tmprcwq5zn8/dnsrobocert-runtime.yml" -l "api.mydomain.com" dnsrobocert | manual-auth-hook command "/usr/bin/python3 -m dnsrobocert.core.hooks -t auth -c "/tmp/tmprcwq5zn8/dnsrobocert-runtime.yml" -l "api.mydomain.com"" returned error code 1 dnsrobocert | Error output from manual-auth-hook command python3: dnsrobocert | 2020-04-19 23:23:51 0f6824afd7a9 __main__[73] ERROR Error while executing the `auth` hook: dnsrobocert | 2020-04-19 23:23:51 0f6824afd7a9 __main__[73] ERROR 400 Client Error: Bad Request for url: https://api.cloudflare.com/client/v4/zones?name=mydomain.com&status=active dnsrobocert | Traceback (most recent call last): dnsrobocert | File "/usr/lib/python3.8/site-packages/dnsrobocert/core/hooks.py", line 48, in main dnsrobocert | globals()[parsed_args.type](dnsrobocert_config, parsed_args.lineage) dnsrobocert | File "/usr/lib/python3.8/site-packages/dnsrobocert/core/hooks.py", line 63, in auth dnsrobocert | _txt_challenge(profile, token, domain, action="create") dnsrobocert | File "/usr/lib/python3.8/site-packages/dnsrobocert/core/hooks.py", line 174, in _txt_challenge dnsrobocert | Client(lexicon_config).execute() dnsrobocert | File "/usr/lib/python3.8/site-packages/lexicon/client.py", line 77, in execute dnsrobocert | self.provider.authenticate() dnsrobocert | File "/usr/lib/python3.8/site-packages/lexicon/providers/base.py", line 69, in authenticate dnsrobocert | return self._authenticate() dnsrobocert | File "/usr/lib/python3.8/site-packages/lexicon/providers/cloudflare.py", line 32, in _authenticate dnsrobocert | payload = self._get('/zones', { dnsrobocert | File "/usr/lib/python3.8/site-packages/lexicon/providers/base.py", line 142, in _get dnsrobocert | return self._request('GET', url, query_params=query_params) dnsrobocert | File "/usr/lib/python3.8/site-packages/lexicon/providers/cloudflare.py", line 146, in _request dnsrobocert | response.raise_for_status() dnsrobocert | File "/usr/lib/python3.8/site-packages/requests/models.py", line 941, in raise_for_status dnsrobocert | raise HTTPError(http_error_msg, response=self) dnsrobocert | requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://api.cloudflare.com/client/v4/zones?name=mydomain.com&status=active dnsrobocert | dnsrobocert | Waiting for verification... 
dnsrobocert | Challenge failed for domain api.mydomain.com dnsrobocert | dns-01 challenge for api.mydomain.com dnsrobocert | Cleaning up challenges dnsrobocert | Running manual-cleanup-hook command: /usr/bin/python3 -m dnsrobocert.core.hooks -t cleanup -c "/tmp/tmprcwq5zn8/dnsrobocert-runtime.yml" -l "api.mydomain.com" dnsrobocert | manual-cleanup-hook command "/usr/bin/python3 -m dnsrobocert.core.hooks -t cleanup -c "/tmp/tmprcwq5zn8/dnsrobocert-runtime.yml" -l "api.mydomain.com"" returned error code 1 dnsrobocert | Error output from manual-cleanup-hook command python3: dnsrobocert | 2020-04-19 23:23:56 0f6824afd7a9 __main__[75] ERROR Error while executing the `cleanup` hook: dnsrobocert | 2020-04-19 23:23:56 0f6824afd7a9 __main__[75] ERROR 400 Client Error: Bad Request for url: https://api.cloudflare.com/client/v4/zones?name=mydomain.com&status=active dnsrobocert | Traceback (most recent call last): dnsrobocert | File "/usr/lib/python3.8/site-packages/dnsrobocert/core/hooks.py", line 48, in main dnsrobocert | globals()[parsed_args.type](dnsrobocert_config, parsed_args.lineage) dnsrobocert | File "/usr/lib/python3.8/site-packages/dnsrobocert/core/hooks.py", line 125, in cleanup dnsrobocert | _txt_challenge(profile, token, domain, action="delete") dnsrobocert | File "/usr/lib/python3.8/site-packages/dnsrobocert/core/hooks.py", line 174, in _txt_challenge dnsrobocert | Client(lexicon_config).execute() dnsrobocert | File "/usr/lib/python3.8/site-packages/lexicon/client.py", line 77, in execute dnsrobocert | self.provider.authenticate() dnsrobocert | File "/usr/lib/python3.8/site-packages/lexicon/providers/base.py", line 69, in authenticate dnsrobocert | return self._authenticate() dnsrobocert | File "/usr/lib/python3.8/site-packages/lexicon/providers/cloudflare.py", line 32, in _authenticate dnsrobocert | payload = self._get('/zones', { dnsrobocert | File "/usr/lib/python3.8/site-packages/lexicon/providers/base.py", line 142, in _get dnsrobocert | return self._request('GET', url, query_params=query_params) dnsrobocert | File "/usr/lib/python3.8/site-packages/lexicon/providers/cloudflare.py", line 146, in _request dnsrobocert | response.raise_for_status() dnsrobocert | File "/usr/lib/python3.8/site-packages/requests/models.py", line 941, in raise_for_status dnsrobocert | raise HTTPError(http_error_msg, response=self) dnsrobocert | requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://api.cloudflare.com/client/v4/zones?name=mydomain.com&status=active dnsrobocert | dnsrobocert | Some challenges have failed. 
dnsrobocert | IMPORTANT NOTES: dnsrobocert | - The following errors were reported by the server: dnsrobocert | dnsrobocert | Domain: api.mydomain.com dnsrobocert | Type: dns dnsrobocert | Detail: DNS problem: NXDOMAIN looking up TXT for dnsrobocert | _acme-challenge.api.mydomain.com - check that a DNS record dnsrobocert | exists for this domain dnsrobocert | ---------- dnsrobocert | 2020-04-19 23:23:56 0f6824afd7a9 dnsrobocert.core.main[1] ERROR An error occurred while processing certificate config `{'domains': ['api.mydomain.com'], 'profile': 'maestro_profile'}`: dnsrobocert | Command '['/usr/bin/python3', '-m', 'dnsrobocert.core.certbot', 'certonly', '-n', '--config-dir', '/etc/letsencrypt', '--work-dir', '/etc/letsencrypt/workdir', '--logs-dir', '/etc/letsencrypt/logs', '--manual', '--preferred-challenges=dns', '--manual-auth-hook', '/usr/bin/python3 -m dnsrobocert.core.hooks -t auth -c "/tmp/tmprcwq5zn8/dnsrobocert-runtime.yml" -l "api.mydomain.com"', '--manual-cleanup-hook', '/usr/bin/python3 -m dnsrobocert.core.hooks -t cleanup -c "/tmp/tmprcwq5zn8/dnsrobocert-runtime.yml" -l "api.mydomain.com"', '--manual-public-ip-logging-ok', '--expand', '--deploy-hook', '/usr/bin/python3 -m dnsrobocert.core.hooks -t deploy -c "/tmp/tmprcwq5zn8/dnsrobocert-runtime.yml" -l "api.mydomain.com"', '--server', 'https://acme-staging-v02.api.letsencrypt.org/directory', '--cert-name', 'api.mydomain.com', '-d', 'api.mydomain.com']' returned non-zero exit status 1. dnsrobocert | 2020-04-19 23:23:56 0f6824afd7a9 dnsrobocert.core.main[1] INFO Revoke and delete certificates if needed Global API Key log dnsrobocert | Saving debug log to /etc/letsencrypt/logs/letsencrypt.log dnsrobocert | Plugins selected: Authenticator manual, Installer None dnsrobocert | Obtaining a new certificate dnsrobocert | Performing the following challenges: dnsrobocert | dns-01 challenge for api.mydomain.com dnsrobocert | Running manual-auth-hook command: /usr/bin/python3 -m dnsrobocert.core.hooks -t auth -c "/tmp/tmprcwq5zn8/dnsrobocert-runtime.yml" -l "api.mydomain.com" dnsrobocert | manual-auth-hook command "/usr/bin/python3 -m dnsrobocert.core.hooks -t auth -c "/tmp/tmprcwq5zn8/dnsrobocert-runtime.yml" -l "api.mydomain.com"" returned error code 1 dnsrobocert | Error output from manual-auth-hook command python3: dnsrobocert | 2020-04-19 23:30:44 0f6824afd7a9 __main__[81] ERROR Error while executing the `auth` hook: dnsrobocert | 2020-04-19 23:30:44 0f6824afd7a9 __main__[81] ERROR 403 Client Error: Forbidden for url: https://api.cloudflare.com/client/v4/zones?name=mydomain.com&status=active dnsrobocert | Traceback (most recent call last): dnsrobocert | File "/usr/lib/python3.8/site-packages/dnsrobocert/core/hooks.py", line 48, in main dnsrobocert | globals()[parsed_args.type](dnsrobocert_config, parsed_args.lineage) dnsrobocert | File "/usr/lib/python3.8/site-packages/dnsrobocert/core/hooks.py", line 63, in auth dnsrobocert | _txt_challenge(profile, token, domain, action="create") dnsrobocert | File "/usr/lib/python3.8/site-packages/dnsrobocert/core/hooks.py", line 174, in _txt_challenge dnsrobocert | Client(lexicon_config).execute() dnsrobocert | File "/usr/lib/python3.8/site-packages/lexicon/client.py", line 77, in execute dnsrobocert | self.provider.authenticate() dnsrobocert | File "/usr/lib/python3.8/site-packages/lexicon/providers/base.py", line 69, in authenticate dnsrobocert | return self._authenticate() dnsrobocert | File "/usr/lib/python3.8/site-packages/lexicon/providers/cloudflare.py", line 32, in _authenticate dnsrobocert 
| payload = self._get('/zones', { dnsrobocert | File "/usr/lib/python3.8/site-packages/lexicon/providers/base.py", line 142, in _get dnsrobocert | return self._request('GET', url, query_params=query_params) dnsrobocert | File "/usr/lib/python3.8/site-packages/lexicon/providers/cloudflare.py", line 146, in _request dnsrobocert | response.raise_for_status() dnsrobocert | File "/usr/lib/python3.8/site-packages/requests/models.py", line 941, in raise_for_status dnsrobocert | raise HTTPError(http_error_msg, response=self) dnsrobocert | requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: https://api.cloudflare.com/client/v4/zones?name=mydomain.com&status=active dnsrobocert | dnsrobocert | Waiting for verification... dnsrobocert | Challenge failed for domain api.mydomain.com dnsrobocert | dns-01 challenge for api.mydomain.com dnsrobocert | Cleaning up challenges dnsrobocert | Running manual-cleanup-hook command: /usr/bin/python3 -m dnsrobocert.core.hooks -t cleanup -c "/tmp/tmprcwq5zn8/dnsrobocert-runtime.yml" -l "api.mydomain.com" dnsrobocert | manual-cleanup-hook command "/usr/bin/python3 -m dnsrobocert.core.hooks -t cleanup -c "/tmp/tmprcwq5zn8/dnsrobocert-runtime.yml" -l "api.mydomain.com"" returned error code 1 dnsrobocert | Error output from manual-cleanup-hook command python3: dnsrobocert | 2020-04-19 23:30:50 0f6824afd7a9 __main__[83] ERROR Error while executing the `cleanup` hook: dnsrobocert | 2020-04-19 23:30:50 0f6824afd7a9 __main__[83] ERROR 403 Client Error: Forbidden for url: https://api.cloudflare.com/client/v4/zones?name=mydomain.com&status=active dnsrobocert | Traceback (most recent call last): dnsrobocert | File "/usr/lib/python3.8/site-packages/dnsrobocert/core/hooks.py", line 48, in main dnsrobocert | globals()[parsed_args.type](dnsrobocert_config, parsed_args.lineage) dnsrobocert | File "/usr/lib/python3.8/site-packages/dnsrobocert/core/hooks.py", line 125, in cleanup dnsrobocert | _txt_challenge(profile, token, domain, action="delete") dnsrobocert | File "/usr/lib/python3.8/site-packages/dnsrobocert/core/hooks.py", line 174, in _txt_challenge dnsrobocert | Client(lexicon_config).execute() dnsrobocert | File "/usr/lib/python3.8/site-packages/lexicon/client.py", line 77, in execute dnsrobocert | self.provider.authenticate() dnsrobocert | File "/usr/lib/python3.8/site-packages/lexicon/providers/base.py", line 69, in authenticate dnsrobocert | return self._authenticate() dnsrobocert | File "/usr/lib/python3.8/site-packages/lexicon/providers/cloudflare.py", line 32, in _authenticate dnsrobocert | payload = self._get('/zones', { dnsrobocert | File "/usr/lib/python3.8/site-packages/lexicon/providers/base.py", line 142, in _get dnsrobocert | return self._request('GET', url, query_params=query_params) dnsrobocert | File "/usr/lib/python3.8/site-packages/lexicon/providers/cloudflare.py", line 146, in _request dnsrobocert | response.raise_for_status() dnsrobocert | File "/usr/lib/python3.8/site-packages/requests/models.py", line 941, in raise_for_status dnsrobocert | raise HTTPError(http_error_msg, response=self) dnsrobocert | requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: https://api.cloudflare.com/client/v4/zones?name=mydomain.com&status=active dnsrobocert | dnsrobocert | Some challenges have failed. 
dnsrobocert | IMPORTANT NOTES: dnsrobocert | - The following errors were reported by the server: dnsrobocert | dnsrobocert | Domain: api.mydomain.com dnsrobocert | Type: dns dnsrobocert | Detail: DNS problem: NXDOMAIN looking up TXT for dnsrobocert | _acme-challenge.api.mydomain.com - check that a DNS record dnsrobocert | exists for this domain dnsrobocert | ---------- dnsrobocert | 2020-04-19 23:30:50 0f6824afd7a9 dnsrobocert.core.main[1] ERROR An error occurred while processing certificate config `{'domains': ['api.mydomain.com'], 'profile': 'maestro_profile'}`: dnsrobocert | Command '['/usr/bin/python3', '-m', 'dnsrobocert.core.certbot', 'certonly', '-n', '--config-dir', '/etc/letsencrypt', '--work-dir', '/etc/letsencrypt/workdir', '--logs-dir', '/etc/letsencrypt/logs', '--manual', '--preferred-challenges=dns', '--manual-auth-hook', '/usr/bin/python3 -m dnsrobocert.core.hooks -t auth -c "/tmp/tmprcwq5zn8/dnsrobocert-runtime.yml" -l "api.mydomain.com"', '--manual-cleanup-hook', '/usr/bin/python3 -m dnsrobocert.core.hooks -t cleanup -c "/tmp/tmprcwq5zn8/dnsrobocert-runtime.yml" -l "api.mydomain.com"', '--manual-public-ip-logging-ok', '--expand', '--deploy-hook', '/usr/bin/python3 -m dnsrobocert.core.hooks -t deploy -c "/tmp/tmprcwq5zn8/dnsrobocert-runtime.yml" -l "api.mydomain.com"', '--server', 'https://acme-staging-v02.api.letsencrypt.org/directory', '--cert-name', 'api.mydomain.com', '-d', 'api.mydomain.com']' returned non-zero exit status 1. dnsrobocert | 2020-04-19 23:30:50 0f6824afd7a9 dnsrobocert.core.main[1] INFO Revoke and delete certificates if needed As you may notice, errors are like follow: API Token: 400 Client Error: Bad Request for url Global API KEY: 403 Client Error: Forbidden for url I'm setting the variables directly in the config.yml as suggested in the documentation draft: false acme: email_account<EMAIL_ADDRESS> staging: true profiles: - name: maestro_profile provider: cloudflare provider_options: auth_username<EMAIL_ADDRESS> auth_token: <global_api_key|api_token> certificates: - domains: - api.mydomain.com profile: maestro_profile Do you have any suggestions I could follow? I already checked the issues on the other repositories but they are not merged yet.
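For reference, the variant that resolved the original report was passing the Cloudflare global API key (not a scoped API token) through the container's environment. A sketch of that form for the docker-letsencrypt-dns image, with placeholder values; the exact variable names beyond LEXICON_PROVIDER_OPTIONS are assumptions based on the image's documentation:
# docker-compose environment excerpt (illustrative only)
environment:
  - LEXICON_PROVIDER=cloudflare
  -<EMAIL_ADDRESS>--auth-token=<cloudflare_global_api_key>
If the 403 persists with the global key, the usual things to check are that the email matches the Cloudflare account that owns the zone and that the key was copied exactly.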
2025-04-01T06:37:44.728101
2017-05-15T13:55:22
228722996
{ "authors": [ "adieuadieu", "alekseykulikov", "bluepeter", "iamkhalidbashir", "miroljub1995" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:3182", "repo": "adieuadieu/serverless-chrome", "url": "https://github.com/adieuadieu/serverless-chrome/issues/15" }
gharchive/issue
Disable --single-process option Hey, thank you for the great work on compiling Chrome for Lambda. I'm trying to run perf audits, but it seems the --single-process option breaks performance metrics significantly. No --single-process: With --single-process: Could you explain why there's a --single-process option and how to avoid it? I've tried to run without it, but in that case http://<IP_ADDRESS>:9222/json returns an empty value and Chrome just does not work. Hi @alekseykulikov, I apologise for my brief reply. In short: I don't really know. I haven't had a chance to dig into it much myself, but, like you've discovered, headless Chrome doesn't run correctly without the --single-process flag when running within the Lambda environment, and running in single-process mode breaks or disables some reporting. For example, Chrome will log to stderr "Started multiple compositor clients (Browser, Renderer) in one process. Some metrics will be disabled." when started with --single-process; these may be the metrics tools like Lighthouse rely on for some of their reporting. My best guess is that it has something to do with the sandboxing of Chrome and its processes. It's possible that there may also be a bug in headless Chrome itself. In the Lambda environment, AWS has things pretty restricted and I suspect some combination of things Chrome is trying to do to isolate / restrict the different layers of processes relies on Linux OS features which aren't available within Lambda. For example, if you listen to stderr on the spawned Chrome process without --single-process, Chrome will log a lot of prctl(PR_SET_NO_NEW_PRIVS) failed errors. This may be keeping Chrome from starting a separate process for a browser tab. I'll raise this issue on the headless-dev group and follow up here with any news. Thank you very much @adieuadieu for the detailed reply! Yes, let's see in the headless-dev group; maybe some prevention of the prctl(PR_SET_NO_NEW_PRIVS) failed errors will allow a browser tab to start. I've also found that without the --no-sandbox option, Chrome does not start at all. I've asked about --single-process and --no-sandbox here. This is a bit late, but does anyone have any update on running Lighthouse via headless Chrome on Lambda? Try as we might, we can't get it working because of the --single-process requirement. Did someone manage to run headless shell without --single-process on Lambda? I am looking for the solution too.
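For anyone reproducing this, the flags discussed in the thread can be exercised directly against a headless Chrome/Chromium binary. A hedged example follows; the binary name and path depend on your build, and on Lambda it would be the packaged serverless-chrome binary:
# Launch attempt without --single-process (reported to fail on Lambda),
# keeping the flags mentioned above:
chrome --headless --no-sandbox --remote-debugging-port=9222 about:blank &
# Check whether a debuggable target actually came up; an empty JSON array
# here matches the "returns empty value" symptom described above.
curl http://localhost:9222/json
Adding --single-process to the launch line is what reportedly makes it start on Lambda, at the cost of the disabled metrics discussed above.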