Dataset schema (from the dataset viewer):
- id: string (length 4–10)
- text: string (length 4–2.14M)
- source: string (2 classes)
- created: timestamp[s] (2001-05-16 21:05:09 to 2025-01-01 03:38:30)
- added: string date (2025-04-01 04:05:38 to 2025-04-01 07:14:06)
- metadata: dict
324928973
CRF segmenter.train parameter question

Checklist. Please confirm the following:
- I have carefully read the documents below and found no answer: the home page docs, the wiki, and the FAQ.
- I have searched for my question with Google and the issue tracker's search and found no answer.
- I understand that this open-source community is a voluntary community of enthusiasts and bears no responsibility or obligation. I will be polite and thank everyone who helps me.

[x] I put an x in the brackets to confirm the items above.

Version
The current latest version is: v1.6.4
The version I use is: v1.6.4

My question
I want to drive CRF model training without using the command line, while changing the number of iterations and other parameters. I see that CRFTagger has several train overloads, but why can I only call the one that takes two parameters?

The code in question:

```java
// segmenter.train("D:/Hanlp/199801.txt", CWS_MODEL_PATH);
segmenter.train("D:/Hanlp/199801.txt", CWS_MODEL_PATH, false, 10, 1, 0.001, 1.0,
        Runtime.getRuntime().availableProcessors(), 20, "CRF-L2");
```

Reproducing the problem
Steps: First... Then... Next...
Trigger code / Expected output / Actual output: (template placeholders left empty)

Other information
Actually segmenter, tagger and recognizer all need train, but the example given for the recognizer feels incomplete, or maybe I just haven't found it. The segmenter example, by contrast, is complete and calls the segment function.

The deeper I read, the more I realize how little I know. My current question: if word segmentation, POS tagging and NER are all trained on the same corpus, will the resulting models be identical? If so, there would be no need to train the same corpus three times in one program; I could simply use the same path: CWS_MODEL_PATH = POS_MODEL_PATH = NER_MODEL_PATH
gharchive/issue
2018-05-21T14:06:51
2025-04-01T06:38:54.581448
{ "authors": [ "zxpeter" ], "repo": "hankcs/HanLP", "url": "https://github.com/hankcs/HanLP/issues/834", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
760152395
Fix server-side state machine for 0-RTT

The server switches to receiving 0-RTT immediately after receiving a ClientHello indicating that the client wants to use 0-RTT. This approach and its current implementation have two issues:

1. It does not match the state machine from the spec, which mandates that the server should look for 0-RTT only after sending its Finished message.
2. The current implementation doesn't allow multiple 0-RTT records.

The state machine should be reworked to switch to receiving 0-RTT after the server has sent its Finished message. Moreover, in order to accommodate an arbitrary number of 0-RTT records, the 0-RTT handler should look for either (a) a 0-RTT message or (b) an EndOfEarlyData message. This should be easy based on the present splitting into coordinate, parse, and postprocess sub-routines. In particular, there is no need for distinct handshake states for 0-RTT and EndOfEarlyData anymore.

Fixed by https://github.com/hannestschofenig/mbedtls/pull/84
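The reworked flow described above can be sketched as a tiny state machine. This is an illustrative Python sketch, not mbedtls code; the state and message names are assumptions made for the example:

```python
def early_data_handler(state, msg):
    """After the server has sent its Finished message it enters
    WAIT_EARLY_DATA, loops there for any number of 0-RTT records,
    and leaves only on EndOfEarlyData."""
    if state != "WAIT_EARLY_DATA":
        raise ValueError("handler runs only after the server's Finished")
    if msg == "0-RTT":
        return "WAIT_EARLY_DATA"       # arbitrary number of 0-RTT records
    if msg == "EndOfEarlyData":
        return "WAIT_CLIENT_FINISHED"  # resume the normal handshake
    raise ValueError("unexpected message during early data: " + msg)
```

Note there is a single early-data state handling both message kinds, which mirrors the point that distinct handshake states for 0-RTT and EndOfEarlyData are no longer needed.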
gharchive/issue
2020-12-09T09:17:24
2025-04-01T06:38:54.597725
{ "authors": [ "hanno-arm" ], "repo": "hannestschofenig/mbedtls", "url": "https://github.com/hannestschofenig/mbedtls/issues/82", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2177075357
Playback failure

Environment: DSM 7.2 on a whitebox Synology, deployed with Docker. I followed the tutorial and set HOST_NAME to 192.168.0.39 (the Xiao AI speaker's IP).

The Docker log viewer shows newest lines first; reordered chronologically and unwrapped, the log reads:

```
[03/09/24 01:02:18] INFO     匹配到指令. opkey:播放歌曲         xiaomusic.py:481
                             opvalue:play oparg:奥特曼
                    INFO     do_tts: 正在下载歌曲奥特曼         xiaomusic.py:225
                    INFO     正在下载中 奥特曼                  xiaomusic.py:495
[BiliBiliSearch] Extracting URL: bilisearch:奥特曼
[download] Downloading playlist: 奥特曼
[BiliBiliSearch] 奥特曼: Extracting results from page 1
[BiliBiliSearch] Playlist 奥特曼: Downloading 1 items of 1
[download] Downloading item 1 of 1
[BiliBili] Extracting URL: http://www.bilibili.com/video/av745273948
ERROR: Failed to parse: http://192.168.0.1:80index.asp/
[BiliBili] 745273948: Downloading webpage
[download] Finished downloading playlist: 奥特曼
[03/09/24 01:02:19] INFO     cur_music 奥特曼                   xiaomusic.py:501
                    INFO     播放 http://192.168.0.39:8090/     xiaomusic.py:503
                    INFO     已经开始播放了                     xiaomusic.py:506
                    WARNING  执行出错 [Errno 2] No such file or directory: ''  xiaomusic.py:457
Traceback (most recent call last):
  File "/app/.venv/lib/python3.10/site-packages/mutagen/_util.py", line 251, in _openfile
    fileobj = open(filename, "rb+" if writable else "rb")
FileNotFoundError: [Errno 2] No such file or directory: ''

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/app/xiaomusic/xiaomusic.py", line 455, in run_forever
    await func(arg1=oparg)
  File "/app/xiaomusic/xiaomusic.py", line 508, in play
    self.set_next_music_timeout()
  File "/app/xiaomusic/xiaomusic.py", line 410, in set_next_music_timeout
    sec = int(self.get_file_duration(filename))
  File "/app/xiaomusic/xiaomusic.py", line 402, in get_file_duration
    audio = mutagen.File(filename)
  File "/app/.venv/lib/python3.10/site-packages/mutagen/_util.py", line 162, in wrapper_func
    with _openfile(None, filething, filename, fileobj,
  File "/usr/local/lib/python3.10/contextlib.py", line 135, in __enter__
    return next(self.gen)
  File "/app/.venv/lib/python3.10/site-packages/mutagen/_util.py", line 272, in _openfile
    raise MutagenError(e)
mutagen.MutagenError: [Errno 2] No such file or directory: ''
```

(Log messages: 匹配到指令 = "matched command", 正在下载 = "downloading", 播放 = "playing", 已经开始播放了 = "playback has started", 执行出错 = "execution error".)

The speaker reacts but playback stops immediately. What causes this ERROR? I never configured that address (192.168.0.1 is my router):

ERROR: Failed to parse: http://192.168.0.1:80index.asp/

HOST_NAME needs to be set to the LAN IP of the machine running Docker.
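For reference, the secondary traceback happens because get_file_duration is reached with an empty filename after the download failed, and mutagen.File('') raises. A defensive guard along the following lines would avoid it; this is a hypothetical helper, not xiaomusic's actual code, and `probe` stands in for mutagen.File:

```python
import os

def safe_file_duration(filename, probe):
    """Return the track length in whole seconds via `probe`
    (e.g. mutagen.File), or 0 when the path is empty or missing,
    instead of raising like the traceback above."""
    if not filename or not os.path.isfile(filename):
        return 0
    audio = probe(filename)
    if audio is None:
        return 0
    return int(audio.info.length)
```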
gharchive/issue
2024-03-09T06:21:51
2025-04-01T06:38:54.627860
{ "authors": [ "hanxi", "kingszhe9664" ], "repo": "hanxi/xiaomusic", "url": "https://github.com/hanxi/xiaomusic/issues/29", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2716338645
xiaomusic in Docker on OpenWrt: cannot change the mounted music directory

```
docker run -p 8090:8090 -v /xiaomusic/music:/mnt/sda1/music -v /xiaomusic/conf:/app/conf m.daocloud.io/docker.io/hanxi/xiaomusic
```

I tried to point the music directory at the hard disk mounted under OpenWrt, but it doesn't work; nothing gets read. Using /app/music directly works, but internal storage is limited.

Your directory mapping is reversed (Docker's -v takes host_path:container_path, so the hard-disk path goes on the left).

Ah, a silly mistake on my part. Thank you!
gharchive/issue
2024-12-04T01:24:23
2025-04-01T06:38:54.630876
{ "authors": [ "hanxi", "ygqqianzi110" ], "repo": "hanxi/xiaomusic", "url": "https://github.com/hanxi/xiaomusic/issues/291", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1953918355
[Usage] The evaluation results on ScienceQA

Describe the issue
I use the released 7b-v1.5 model weights to evaluate performance. The evaluation results on the other datasets match your Model Zoo, but the ScienceQA result is:
Total: 4241, Correct: 2944, Accuracy: 69.42%, IMG-Accuracy: 67.97%
which differs from the 66.8 listed in the Model Zoo for llava-7b-v1.5. Why?

Hi, I am having the same problem, and for the LoRA weight in the model zoo I can only get:
Total: 4241, Correct: 2763, Accuracy: 65.15%, IMG-Accuracy: 61.73%
which differs a lot from the score reported in the model zoo. @haotian-liu Can you please help me check whether any of my steps are wrong? I used the following commands to reproduce your result:

```sh
CUDA_VISIBLE_DEVICES=0 python -m llava.eval.model_vqa_science \
  --model-path ./checkpoints/llava-v1.5-7b-lora \
  --model-base liuhaotian/llava-v1.5-7b \
  --question-file ./playground/data/eval/scienceqa/llava_test_CQM-A.json \
  --image-folder ./playground/data/eval/scienceqa/images/test \
  --answers-file ./playground/data/eval/scienceqa/answers/llava-v1.5-7b-lora.jsonl \
  --single-pred-prompt --temperature 0 --conv-mode vicuna_v1

python llava/eval/eval_science_qa.py \
  --base-dir ./playground/data/eval/scienceqa \
  --result-file ./playground/data/eval/scienceqa/answers/llava-v1.5-7b-lora.jsonl \
  --output-file ./playground/data/eval/scienceqa/answers/llava-v1.5-7b-lora_output.jsonl \
  --output-result ./playground/data/eval/scienceqa/answers/llava-v1.5-7b-lora_result.json
```

Thank you so much!

Hi! I reproduced the same results as you on this dataset, but my reproduced result for TextVQA is only 50.52, POPE is 76.32, and MME is 1394. Could you share how your reproduction performed on these datasets? Did you encounter similar issues, and have you found any viable solutions? Thanks!
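As a sanity check on the numbers quoted above: the reported accuracy is just correct/total as a percentage, rounded to two decimals. A tiny helper (hypothetical, not part of the LLaVA eval scripts) reproduces both figures:

```python
def accuracy_pct(correct, total):
    """Accuracy as a percentage, rounded to two decimal places."""
    return round(100.0 * correct / total, 2)
```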
gharchive/issue
2023-10-20T09:42:58
2025-04-01T06:38:54.639897
{ "authors": [ "Carol-lyh", "EchoDreamer", "nbasyl" ], "repo": "haotian-liu/LLaVA", "url": "https://github.com/haotian-liu/LLaVA/issues/632", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1886397415
Bug - Conditional updates can modify the resource body such that the conditional URL no longer applies.

Describe the bug
Conditional updates are behaving inconsistently when the supplied resource does not satisfy the conditional URL. Please refer to: https://hl7.org/fhir/R5/http.html#cond-update

To Reproduce

1. Create a new Patient, for example:

POST https://hapi.fhir.org/baseR4/Patient

```json
{
  "resourceType": "Patient",
  "identifier": [
    { "system": "http://kookaburra.text/id", "value": "kookaburra1" }
  ],
  "gender": "male",
  "birthDate": "1980-07-03"
}
```

2. Attempt to conditionally update this Patient with an incorrect identifier.value in the conditional URL:

PUT https://hapi.fhir.org/baseR4/Patient?identifier=http://kookaburra.text/id|kookaburra2

```json
{
  "resourceType": "Patient",
  "identifier": [
    { "system": "http://kookaburra.text/id", "value": "kookaburra1" }
  ],
  "gender": "male",
  "birthDate": "1980-07-03"
}
```

This fails as expected with a HAPI-0929.

3. Attempt to conditionally update this Patient with an incorrect identifier.value in the body:

PUT https://hapi.fhir.org/baseR4/Patient?identifier=http://kookaburra.text/id|kookaburra1

```json
{
  "resourceType": "Patient",
  "identifier": [
    { "system": "http://kookaburra.text/id", "value": "kookaburra2" }
  ],
  "gender": "male",
  "birthDate": "1980-07-03"
}
```

This succeeds, but it shouldn't. Note the conditional URL and resource body no longer match.

Expected behavior
Step 3 above should also result in a HAPI-0929, because the supplied resource does not satisfy the conditional URL.

Screenshots

Environment (please complete the following information):
GET https://hapi.fhir.org/baseR4/metadata

```json
"software": {
  "name": "HAPI FHIR Server",
  "version": "6.9.3-SNAPSHOT/7f15e62e20/2023-09-05"
}
```

Additional context
Meow.

A couple of things:
1. You are getting the 929 error code because, since there are no matches, you are creating version 1 of a resource, which causes the conditional-create validation to occur.
2. Conditional updates are permitted to invalidate the conditional URL post-update, as done in your step 3. This is not strictly defined in the spec, and the comment here in BaseHapiFhirDao.java indicates that we permit it.

I recommend we do what the comment suggests and add a toggle to control this behaviour for users who wish to prevent it.

Thanks, @tadgh. Will update the ticket to treat it as a feature request. Appreciate your help!

@dmuylwyk this could be lateral to the feature's main point, but could be the subject of a new bug. Regarding the conditional-create point above: because no such resource exists in the repository, the request is treated as a conditional create. However, according to the spec, a conditional create must include an If-None-Exist: [search parameters] header, which is not included here, so maybe this shouldn't be treated as a conditional create?
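The validation missing in step 3 boils down to checking that the supplied resource still satisfies the conditional URL after the update. A rough sketch of that check (illustrative logic, not HAPI FHIR's actual implementation):

```python
def satisfies_conditional_url(resource, system, value):
    """True when some identifier on the resource matches the
    ?identifier=<system>|<value> condition from the conditional-update URL."""
    return any(ident.get("system") == system and ident.get("value") == value
               for ident in resource.get("identifier", []))
```

Applied to the step-3 request, the body carrying kookaburra2 fails the kookaburra1 condition, which is exactly the case the proposed toggle would reject.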
gharchive/issue
2023-09-07T18:37:10
2025-04-01T06:38:54.654554
{ "authors": [ "dmuylwyk", "jmarchionatto", "tadgh" ], "repo": "hapifhir/hapi-fhir", "url": "https://github.com/hapifhir/hapi-fhir/issues/5290", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2328071512
Incorrect SHA in the final comparison report

PREVIOUS_SHA replaced with master branch SHA after report completion. We are experiencing an issue with Happo where the PREVIOUS_SHA is being replaced with the SHA from the master branch after the report is completed. Here are the details of our setup and the issue:

Setup:
1. We use `Happo` within `Storybook`.
2. We call `happo-ci-github-actions` with predefined environment variables:
   - HAPPO_PROJECT: Picasso/Storybook
   - HAPPO_API_KEY: ${{ env.HAPPO_API_KEY }}
   - HAPPO_API_SECRET: ${{ env.HAPPO_API_SECRET }}
   - PREVIOUS_SHA: ${{ github.event.pull_request.base.sha }} ("1252164f6ca4df5bf6f095079707afddc1b7a9f4")

Issue:
1. After starting the job, Happo provides a **proper** link to the report page: https://happo.io/a/675/jobs/1187395
2. Once the report is ready, the link changes to: https://happo.io/a/675/p/1189/compare/a9768dd9cd4a118a4ae61857124eed0fa84e0090/7ddc840fdd5f3825f44b666d67b8e3ed131c2ad8
3. The SHA a9768dd9cd4a118a4ae61857124eed0fa84e0090 corresponds to the master branch, but it should be 1252164f6ca4df5bf6f095079707afddc1b7a9f4.

Logs:
Here are some logs from the job for reference:

```
Using the following ENV variables:
PREVIOUS_SHA: 1252164f6ca4df5bf6f095079707afddc1b7a9f4
CURRENT_SHA: 7ddc840fdd5f3825f44b666d67b8e3ed131c2ad8
CHANGE_URL: https://github.com/toptal/picasso/pull/4342
INSTALL_CMD:
HAPPO_IS_ASYNC: true
HAPPO_SKIP_START_JOB:
HAPPO_GIT_COMMAND: git
HAPPO_COMMAND: node_modules/happo.io/build/cli.js
HAPPO_FALLBACK_SHAS_COUNT: 50
```

Job link: GitHub Actions job log

It appears that PREVIOUS_SHA is correctly set initially, but it is replaced by the master branch SHA in the final report. This makes the visual comparison inaccurate, as it does not reflect the intended base commit. We appreciate your assistance in resolving this issue.

Link to the PR to test: https://github.com/toptal/picasso/pull/4342

Hi @dmaklygin, sorry for the delay here. 🙏 The most likely thing that's happening here is that the sha for PREVIOUS_SHA doesn't have a Happo report generated for it. Happo will then use a fallback SHA, starting from PREVIOUS_SHA and moving backwards. It happens in the happo-ci script, here. Can you check to make sure that Happo reports are generated on the master branch and that the jobs are successful? If not, that's where we should start.

Hi @trotzig! Thank you for your response. It seems the issue is fixed now; your suggestion helped us get proper reports. I suppose we can close the issue now.
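The fallback behaviour described above (start at PREVIOUS_SHA and walk backwards through its ancestors until a commit with a Happo report is found, bounded by HAPPO_FALLBACK_SHAS_COUNT) can be sketched as follows; this is an illustration, not the actual happo-ci script:

```python
def pick_baseline_sha(shas, has_report, max_fallbacks=50):
    """`shas` is PREVIOUS_SHA followed by its ancestors, newest first;
    return the first sha that has a Happo report, else None."""
    for sha in shas[:max_fallbacks]:
        if has_report(sha):
            return sha
    return None
```

This explains the symptom: if PREVIOUS_SHA itself has no report, the first ancestor that does (here, a master commit) silently becomes the comparison baseline.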
gharchive/issue
2024-05-31T15:15:19
2025-04-01T06:38:54.679804
{ "authors": [ "dmaklygin", "trotzig" ], "repo": "happo/happo.io", "url": "https://github.com/happo/happo.io/issues/275", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1583815435
Feat: Precompiled binaries à la tailwindcss-rails

Is your feature request related to a problem? Please describe.
Yes. To avoid having to install Rust on production servers just to install the gem, and to speed up installation, it would be beneficial to ship precompiled mrml binaries the way tailwindcss-rails does.

Describe the solution you'd like
Precompile the gem extension.

Describe alternatives you've considered
N/A

Additional context
N/A

This gem does it somehow: https://github.com/IAPark/tiktoken_ruby

Hey @jonian, are you already working on that? If not, I'd be happy to take a look at how to provide precompiled binaries for mrml-ruby, if you are okay with that?

Hi @paulgoetze, no, I'm not working on it. It would be great if you could submit a PR. Thanks! I think it should be implemented using oxidize-rb/actions. An example workflow is in the polars-ruby gem.

:+1: Alright, thanks, @jonian, then I'll take a look.

@jonian ping, just to make sure you saw the pull request :) Let me know if you need anything else or if there's anything I can help with to get the precompiled gems released.

I have released a new version with the cross-compiled gems. Thank you for the help @paulgoetze! The cross-compiled gems support only Ruby 3.0, 3.1 and 3.2.
gharchive/issue
2023-02-14T09:34:48
2025-04-01T06:38:54.715388
{ "authors": [ "henrikbjorn", "jonian", "paulgoetze" ], "repo": "hardpixel/mrml-ruby", "url": "https://github.com/hardpixel/mrml-ruby/issues/1", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2096928775
add auto-detect-targets-and-variants

description
STO-6974
Wrote a preliminary draft; preview link is here: Auto-detecting the target and variant

Please check the Execution Link of the Pipeline for the Website Draft URL. This is located in the Preview Step behind the Harness VPN and also is available in #hdh_alerts. E.g. Website Draft URL: https://unique-id--harness-developer.netlify.app. Current Draft URL is: https://65b024fd37dee23676bf1360--harness-developer.netlify.app

Spun this off into a new PR: https://github.com/harness/developer-hub/pull/5138
gharchive/pull-request
2024-01-23T20:30:15
2025-04-01T06:38:54.743030
{ "authors": [ "bot-gitexp-user", "douglas-j-bothwell" ], "repo": "harness/developer-hub", "url": "https://github.com/harness/developer-hub/pull/5080", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2125795874
[PL-46956] SMP 0.13.3 patch release notes

SMP 0.13.3 patch release notes
For reviewers, a preview is available: ---will be built after draft RN are added--

What Type of PR is This?
[ ] Issue
[ ] Feature
[ ] Maintenance/Chore

If tied to an Issue, list the Issue(s) here: Issue(s)

House Keeping
Some items to keep track of. Screen shots of changes are optional but would help the maintainers review quicker.
[x] Tested Locally
[ ] Optional Screenshot.

Please check the Execution Link of the Pipeline for the Website Draft URL. This is located in the Preview Step behind the Harness VPN and also is available in #hdh_alerts. E.g. Website Draft URL: https://unique-id--harness-developer.netlify.app. Current Draft URL is: https://65c51c058dc4760074fa41d2--harness-developer.netlify.app
gharchive/pull-request
2024-02-08T18:10:56
2025-04-01T06:38:54.754293
{ "authors": [ "bot-gitexp-user", "brian-f-harness" ], "repo": "harness/developer-hub", "url": "https://github.com/harness/developer-hub/pull/5337", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1601009502
Chaos(aws-fault): Add docs for CLB and ALB az down chaos faults

Harness Developer Pull Request
Thanks for helping us make the Developer Hub better. The PR will be looked at by the maintainers.

What Type of PR is This?
[ ] Issue
[ ] Feature
[ ] Maintenance/Chore

If tied to an Issue, list the Issue(s) here: Issue(s)

House Keeping
Some items to keep track of. Screenshots of changes are optional but would help the maintainers review quicker.
[ ] Tested Locally
[ ] Optional screenshot.

Preview environment: https://hdh.pr.harness.io/pr-824
gharchive/pull-request
2023-02-27T11:34:46
2025-04-01T06:38:54.757856
{ "authors": [ "bot-gitexp-user", "uditgaurav" ], "repo": "harness/developer-hub", "url": "https://github.com/harness/developer-hub/pull/824", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2161937
Multiple classifiers in pom dependency

In case a project depends on a subproject, the dependency tag generated for the subproject is incorrect:

```xml
<dependency>
  <groupId>com.github.siasia</groupId>
  <artifactId>maven-plugin_2.9.1</artifactId>
  <version>0.11.1-0.1</version>
  <scope>compile</scope>
  <classifier>sources</classifier>
  <classifier>javadoc</classifier>
</dependency>
```

Multiple classifier tags result in a duplicate-tag error in Maven.

I'm using this as a workaround:

```scala
pomPostProcess := {
  import xml._
  Rewrite.rewriter {
    case e: Elem if e.label == "classifier" && e.child.mkString == "sources" => NodeSeq.Empty
  }
}

import xml.transform.{RewriteRule, RuleTransformer}
import xml._

object Rewrite {
  def rewriter(f: PartialFunction[Node, NodeSeq]): RuleTransformer = new RuleTransformer(rule(f))
  def rule(f: PartialFunction[Node, NodeSeq]): RewriteRule = new RewriteRule {
    override def transform(n: Node) = if (f.isDefinedAt(n)) f(n) else n
  }
}
```

I took a better look at it. The problem is that the original patch looks at all artifacts in all configurations: https://github.com/harrah/xsbt/blob/0.11/ivy/MakePom.scala#L143

sbt puts the source and doc artifacts in the "sources" and "javadocs" configurations:
https://github.com/harrah/xsbt/blob/0.11/main/Defaults.scala#L374
https://github.com/harrah/xsbt/blob/0.11/ivy/IvyInterface.scala#L431

We can ignore artifacts in those configurations using a method in DependencyDescriptor similar to getAllDependencyArtifacts, but one that only gets artifacts in the specified configurations (I'd have to look the name up). @indrajitr Do you want to do this or should I?

Just the "jar" type doesn't quite work, because the type might be "bundle", for example. The valid types are set in classpathTypes. (It is just "jar" + "bundle" right now, but it keeps it configurable in one spot.)

Good point. classpathTypes can be added to MakePomConfiguration and used in makePom. While at that, should we use this for deriving the packaging type as well (instead of an isolated use of IgnoreTypes)?

That ignore-types usage is different, though, and I think it is correct as a blacklist. It is legitimate for a user to say 'war' as the packaging, for example.

Damn, I'm glad I'm not the only one. I just ran into this same issue; I should check the list first next time. I ended up carefully publishing one version, then changing the project's dependsOn(...) to a library dependency to ensure the jars are pulled.

@softprops Try out the snapshot at your convenience -- hopefully this has been sorted out now.
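The fix being discussed amounts to filtering dependency artifacts down to classpath types before emitting `<dependency>` elements, so the source/javadoc artifacts never contribute `<classifier>` tags. An illustrative Python sketch of that filtering (the real change lives in MakePom.scala; the artifact-type strings are assumptions for the example):

```python
CLASSPATH_TYPES = {"jar", "bundle"}  # mirrors sbt's default classpathTypes

def pom_dependency_artifacts(artifacts):
    """Keep only artifacts that belong on the classpath, dropping the
    source/javadoc artifacts that produced duplicate <classifier> tags."""
    return [a for a in artifacts if a["type"] in CLASSPATH_TYPES]
```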
gharchive/issue
2011-11-07T12:05:34
2025-04-01T06:38:55.013431
{ "authors": [ "harrah", "indrajitr", "retronym", "siasia", "softprops" ], "repo": "harrah/xsbt", "url": "https://github.com/harrah/xsbt/issues/257", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
635736965
[Autopilot] rebalance approval for Portworx storage pools (15d2875e-f420-4929-85bf-9a31640514a9) rule: pool-rebalance

This is a request to approve the following automated Autopilot action.

What will get affected
- Type: StoragePool
- Name: Portworx storage pools
- Namespace:
- Owner information: Type: / Name:

What action will be taken
Rebalance actions:

1. add replica
   - Volume: 741568352796226165
   - Pool: 15d2875e-f420-4929-85bf-9a31640514a9
   - Node: 5e6af7d5-7a58-4a19-89ab-19f9136764bc
   - Replication set ID: 0
   - Start: 2020-06-09T20:32:39.060637701Z / End: (timestamp: nil Timestamp)
   - Work summary: UnbalancedProvisionedSpaceBytes 32212254720 done, 0 pending; UnbalancedVolumes 1 done, 0 pending

2. remove replica
   - Volume: 741568352796226165
   - Pool: da9771af-d325-48f4-b42d-040a5b952171
   - Node: 97f131dd-9e5b-431a-96f9-1c9a9c301478
   - Replication set ID: 0
   - Start: 2020-06-09T20:32:39.060638386Z / End: (timestamp: nil Timestamp)
   - Work summary: UnbalancedProvisionedSpaceBytes 32212254720 done, 0 pending; UnbalancedVolumes 1 done, 0 pending

3. add replica
   - Volume: 1152731952087921482
   - Pool: da8f8818-399e-4e23-92bd-5eea444c4594
   - Node: 218462c7-0543-47a8-af06-9e44d89da497
   - Replication set ID: 0
   - Start: 2020-06-09T20:32:39.182589533Z / End: (timestamp: nil Timestamp)
   - Work summary: UnbalancedProvisionedSpaceBytes 32212254720 done, 0 pending; UnbalancedVolumes 1 done, 0 pending

4. remove replica
   - Volume: 1152731952087921482
   - Pool: da9771af-d325-48f4-b42d-040a5b952171
   - Node: 97f131dd-9e5b-431a-96f9-1c9a9c301478
   - Replication set ID: 0
   - Start: 2020-06-09T20:32:39.182590778Z / End: (timestamp: nil Timestamp)
   - Work summary: UnbalancedProvisionedSpaceBytes 32212254720 done, 0 pending; UnbalancedVolumes 1 done, 0 pending

Why is the action needed
The action request was triggered based on an AutopilotRule pool-rebalance defined in your cluster.

How do I approve
Once you review the above:
- To approve, simply approve and merge this PR.
- To decline, close the PR.

Autopilot watches for the merged specs in the cluster and will proceed with the action if approved, or abandon it if declined.

[Autopilot] Closing PR as the action was found approved in the Kubernetes cluster.
gharchive/pull-request
2020-06-09T20:32:42
2025-04-01T06:38:55.028913
{ "authors": [ "harsh-px" ], "repo": "harsh-px/flux-get-started", "url": "https://github.com/harsh-px/flux-get-started/pull/405", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
426457853
Want to regard pattern variables as ordinary variables

When a record pattern is used in a match statement, I would like its pattern variables to be usable as ordinary draggable variables on the right-hand side of that pattern's arrow. Currently, with an x :: xs pattern you can drag x and xs, but record pattern variables do not seem to be draggable. In addition, I would like these pattern variables to appear in the scope sandbox as well.

Thanks for the report. I have implemented this; please check it. By the way, for now it assumes that the definition blocks are not rearranged in odd ways. For example, with

```
type a = { field: int }
match ? with
| { field: x } -> x + x
```

the behaviour when the int block is detached from the record-definition block has not been worked out yet. At the moment it simply crashes. :)

Thanks for the quick work; it looks like it is working. About rearranging the definition blocks: the problem case is detaching the int block, right? To begin with, I think it is fine for the connector of the corresponding pattern variable in the pattern block to turn into an ×. Then, if that pattern variable does not appear anywhere, we can leave things as they are. If it does appear, either:

1. the pattern variable is deleted (just as deleting rec from let rec removes the corresponding recursive calls entirely), or
2. the int block cannot be detached in the first place.

For consistency with let rec, I lean towards the former. What do you think?
gharchive/issue
2019-03-28T12:10:51
2025-04-01T06:38:55.060325
{ "authors": [ "harukamm", "kenichi-asai" ], "repo": "harukamm/ocaml-blockly", "url": "https://github.com/harukamm/ocaml-blockly/issues/22", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2332392898
[DOC] Missing old version APIs documentation

Is your doc request related to a problem? Please describe or add related issue ID.
Although we offer multiple Harvester version options in the documentation, we only have API documentation for v1.3 and dev. When we open the v1.1 Harvester documentation and click API, we are redirected to the v1.3 API documentation. In other words, there is no way to see the v1.1 API documentation.

Describe the solution you'd like
Do we need to add it back? The other issue is that our OpenAPI generator follows the harvester/harvester repo. If we fix errors in the OpenAPI generator in a new release, we can't apply those fixes to old releases without backporting, and backporting solely to fix the OpenAPI generator seems odd. Maybe we could have a separate repo just for the OpenAPI generator? This is an open question; any ideas are welcome.

@innobead I am not familiar with how exactly the API documentation is generated. What I know is from this PR, which @m-ildefons created.

I'll check this one out. I'm sure it's a fairly small error in the docusaurus config somewhere. The API docs for the old versions are there (e.g. https://docs.harvesterhci.io/v1.2/api/create-namespaced-virtual-machine-backup/), but what's missing is a link or menu that leads to them. Somehow the version-change menu doesn't work for the API docs of older versions, although it does work for v1.4/dev and v1.3. There also doesn't seem to be a landing page/category index for those versions.
gharchive/issue
2024-06-04T02:27:16
2025-04-01T06:38:55.075354
{ "authors": [ "Yu-Jack", "jillian-maroket", "m-ildefons" ], "repo": "harvester/harvester", "url": "https://github.com/harvester/harvester/issues/5948", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
78923323
Tour step 3 dialog is overstretched

When reaching the third step of the tour ("המקום שחיפשתם", "the place you searched for"), I noticed the window extends beyond its basic design:
VS:

@agamaloni

While at it, I think it would be nice to add a last step with a greeting notifying the user that the tour is over.

Maybe also add a description about opening a discussion.

Thanks @omerxx. Changed it back for a cleaner view and resized .popover. I can add more steps; right now I'm having a hard time finding, inside the code, an easy way to set the current date. Any idea?

You need to update the daterangepicker's startDate and endDate, and then notify it: https://github.com/hasadna/anyway/blob/master/static/js/app.js#L484

Hi @agamaloni, the tour step is still stretched. I tried to modify it locally, but I can't see the tour here (do you know why?). Also, why is this specific step in app.js and not in the tour?

Hi @omerxx, yes. To point at the exact spot on the map in the locate step, I exit the tour and pop up an infoWindow. I know it doesn't look exactly like the other steps; do you know how to modify a specific infoWindow? (I will figure it out.) That was my best solution for pointing at an exact spot on the map. Following that logic, step 3 is set inside setCenterWithMarker in app.js, to catch the location after a place is found ("this.locationMarker"), and it opens infowindow.open(this.map, location);
gharchive/issue
2015-05-21T09:59:57
2025-04-01T06:38:55.082241
{ "authors": [ "agamaloni", "danielhers", "omerxx", "samuelregev" ], "repo": "hasadna/anyway", "url": "https://github.com/hasadna/anyway/issues/262", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
118457070
Separate United markers by severity We have severity for our united-hatzala markers so I think we should use them. Instead of having generic blue colored markers we could separate them into colors based on severity and keep the current icon to imply this is a united report. @LironRS, @OmerSchechter, @galraij, what do you think?

Absolutely! This was the original plan. I think @LironRS is working on it already.
gharchive/issue
2015-11-23T19:44:04
2025-04-01T06:38:55.085908
{ "authors": [ "galraij", "omerxx" ], "repo": "hasadna/anyway", "url": "https://github.com/hasadna/anyway/issues/493", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2171922476
feat: Add map to line profile Added a map to the bottom of this page: Fix #542

So do you want me to do it in a new PR?

@Tamir198 it's up to you :)

I think I'd like you to merge this and do it in a new PR, just to prevent conflicts with Darkmift, who is also working on this part of the page. Sounds good?

@Tamir198 sure, thanks. Let's merge it. I gave you permissions to the repo, so feel free to merge things whenever you want (as long as there are no regressions and anyone made at least one code review).

@all-contributors please add @Tamir198 for his code :clap: :medal_sports:
gharchive/pull-request
2024-03-06T16:26:54
2025-04-01T06:38:55.088969
{ "authors": [ "NoamGaash", "Tamir198" ], "repo": "hasadna/open-bus-map-search", "url": "https://github.com/hasadna/open-bus-map-search/pull/557", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
163498459
Enabled to set a file from the preview panel. Also enabled showing the current input source in the preview.

Added ui/events.py
- EVT_INPUT_FILE_ADDED: event fired when a new video file is added.
- EVT_INPUT_INITIALIZED: event fired when the input source is updated.

Moved the text box for the video file from Options to Preview. Stopped storing the value of the text box in the config file.

Screenshots:

Thanks for the work. I'd like to do some tests on a Windows environment before merging.

Cool! :)

Fixed the issue on Windows discussed in the team Slack. 66fa538 is the change.
gharchive/pull-request
2016-07-02T02:43:52
2025-04-01T06:38:55.101725
{ "authors": [ "frozenpandaman", "hasegaw", "hiroyuki-komatsu" ], "repo": "hasegaw/IkaLog", "url": "https://github.com/hasegaw/IkaLog/pull/358", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
2731151482
Tool for converting Record Files to Block Stream Command line tool adding an extra command for converting Record Files to Block Stream. Not 100% done yet, but made ready for review so the build actions are executed to make sure the build works on the server, because it has been such a pain to get it to build locally.

Comment on Gradle setup: I will revisit the "classpath-based" setup of the "tools" project this PR introduces as part of https://github.com/hiero-ledger/hiero-gradle-conventions/issues/53
gharchive/pull-request
2024-12-10T20:43:33
2025-04-01T06:38:55.115976
{ "authors": [ "jasperpotts", "jjohannes" ], "repo": "hashgraph/hedera-block-node", "url": "https://github.com/hashgraph/hedera-block-node/pull/389", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
782082838
Add PayBito HBAR/USD pair to ERT Summary PayBito has an HBAR/USD pair that should be added to the list of exchanges https://trade.paybito.com/view-exchange/#/?pair=hbar-usd API info page seems down https://www.paybito.com/api-end-points/ APIEndPoints.pdf
gharchive/issue
2021-01-08T12:24:31
2025-04-01T06:38:55.118679
{ "authors": [ "paulmadsenhed" ], "repo": "hashgraph/hedera-exchange-rate-tool", "url": "https://github.com/hashgraph/hedera-exchange-rate-tool/issues/141", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1356239473
Health checks svc annotations

Description: This PR
- Adds support for YAML and JSON annotations on the service
- Includes readiness and liveness probes in the deployment
- Updates the configmap config.HEDERA_NETWORK logic and the coinciding values.yaml comments
- Removed top-level secrets. from values.yaml; this was never being called or used

Related issue(s): Fixes #

Notes for reviewer: Deployment templating:

```yaml
---
# Source: hedera-json-rpc-relay/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-hedera-json-rpc-relay
  labels:
    app: hedera-json-rpc-relay
    helm.sh/chart: hedera-json-rpc-relay-0.7.0-SNAPSHOT
    app.kubernetes.io/name: hedera-json-rpc-relay
    app.kubernetes.io/instance: test
    app.kubernetes.io/version: "0.7.0-SNAPSHOT"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: hedera-json-rpc-relay
      app.kubernetes.io/instance: test
  template:
    metadata:
      labels:
        app: hedera-json-rpc-relay
        app.kubernetes.io/name: hedera-json-rpc-relay
        app.kubernetes.io/instance: test
    spec:
      imagePullSecrets:
        - name: ghcr-registry-auth
      serviceAccountName: test-hedera-json-rpc-relay
      securityContext: {}
      containers:
        - name: hedera-json-rpc-relay
          image: "ghcr.io/hashgraph/hedera-json-rpc-relay:0.7.0-SNAPSHOT"
          imagePullPolicy: Always
          env:
            - name: CHAIN_ID
              value: ''
            - name: CHAIN_ID
              valueFrom:
                configMapKeyRef:
                  name: test-hedera-json-rpc-relay
                  key: CHAIN_ID
                  optional: true
            - name: HEDERA_NETWORK
              valueFrom:
                configMapKeyRef:
                  name: test-hedera-json-rpc-relay
                  key: HEDERA_NETWORK
                  optional: false
            - name: OPERATOR_ID_ETH_SENDRAWTRANSACTION
              valueFrom:
                secretKeyRef:
                  name: test-hedera-json-rpc-relay
                  key: OPERATOR_ID_ETH_SENDRAWTRANSACTION
                  optional: true
            - name: OPERATOR_KEY_ETH_SENDRAWTRANSACTION
              valueFrom:
                secretKeyRef:
                  name: test-hedera-json-rpc-relay
                  key: OPERATOR_KEY_ETH_SENDRAWTRANSACTION
                  optional: true
            - name: MIRROR_NODE_URL
              valueFrom:
                configMapKeyRef:
                  name: test-hedera-json-rpc-relay
                  key: MIRROR_NODE_URL
                  optional: false
            - name: LOCAL_NODE
              valueFrom:
                configMapKeyRef:
                  name: test-hedera-json-rpc-relay
                  key: LOCAL_NODE
                  optional: false
            - name: SERVER_PORT
              valueFrom:
                configMapKeyRef:
                  name: test-hedera-json-rpc-relay
                  key: SERVER_PORT
                  optional: false
            - name: OPERATOR_ID_MAIN
              valueFrom:
                secretKeyRef:
                  name: test-hedera-json-rpc-relay
                  key: OPERATOR_ID_MAIN
                  optional: false
            - name: OPERATOR_KEY_MAIN
              valueFrom:
                secretKeyRef:
                  name: test-hedera-json-rpc-relay
                  key: OPERATOR_KEY_MAIN
                  optional: false
          ports:
            - containerPort: 7546
              name: jsonrpcrelay
          livenessProbe:
            httpGet:
              path: /
              port: jsonrpcrelay
          readinessProbe:
            httpGet:
              path: /
              port: jsonrpcrelay
          resources: {}
```

Checklist
- [x] Documented (Code comments, README, etc.)

Codecov Report

Merging #482 (578686e) into main (4c3191c) will not change coverage. The diff coverage is n/a.

```
@@           Coverage Diff           @@
##             main     #482   +/-   ##
=======================================
  Coverage   76.38%   76.38%
=======================================
  Files          12       12
  Lines         923      923
  Branches      144      144
=======================================
  Hits          705      705
  Misses        165      165
  Partials       53       53
```
gharchive/pull-request
2022-08-30T20:01:56
2025-04-01T06:38:55.125572
{ "authors": [ "codecov-commenter", "rustyShacklefurd" ], "repo": "hashgraph/hedera-json-rpc-relay", "url": "https://github.com/hashgraph/hedera-json-rpc-relay/pull/482", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1042140475
ContractExecuteTransaction.setFunctionParameters returns void

Description The setFunctionParameters of ContractExecuteTransaction returns void, thus preventing it from being used in a builder pattern.

Steps to reproduce

```js
let response = new ContractExecuteTransaction()
    .setMaxTransactionFee(new Hbar(15))
    .setContractId(ContractId.fromSolidityAddress(to.replace("0x", "")))
    .setFunctionParameters(Buffer.from(data, "hex"))
    .execute(client);
```

fails because .setFunctionParameters returns void and .execute(client) is run against undefined.

There is a workaround, which is to create the transaction object, then call .setFunctionParameters on the object, then .execute(client) on the same object; however, this isn't developer friendly and breaks the usual builder pattern.

Additional context No response
Hedera network mainnet, testnet, previewnet
Version v2.4.0
Operating system macOS

I think you are supposed to use setFunction() instead of setFunctionParameters()?

```js
const transaction = new ContractExecuteTransaction()
    .setContractId(newContractId)
    .setGas(100_000_000)
    .setFunction("set_message", new ContractFunctionParameters()
        .addString("hello from hedera again!"))
```

setFunction returns this, which allows .execute to be chained like you want. Will double check with @danielakhterov.

@gregscullard let us know if this resolves your issue @valtyr-naut

Issue is correct. https://github.com/hashgraph/hedera-sdk-js/blob/55d93a160d29b67a5894b39c7ee62083e0d5df2d/src/contract/ContractExecuteTransaction.js#L209 Should return this
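The underlying fix is the standard fluent-setter pattern: every setter returns `this`. A minimal, hypothetical sketch (simplified class and field names, not the actual SDK source) of the broken shape versus the fixed shape:

```typescript
// Hypothetical, simplified transaction class illustrating the fluent-setter fix.
// Names here are illustrative; the real class is in ContractExecuteTransaction.js.
class ExecuteTransaction {
  private functionParameters: Uint8Array | null = null;

  // Buggy shape: an implicit `void` return breaks chaining, so
  // `tx.setFunctionParametersBroken(p).execute(...)` would fail at runtime.
  setFunctionParametersBroken(params: Uint8Array): void {
    this.functionParameters = params;
  }

  // Fixed shape: return `this` so the call can sit inside a builder chain.
  setFunctionParameters(params: Uint8Array): this {
    this.functionParameters = params;
    return this;
  }

  getFunctionParameters(): Uint8Array | null {
    return this.functionParameters;
  }
}

const tx = new ExecuteTransaction();
const chained = tx.setFunctionParameters(new Uint8Array([1, 2, 3]));
console.log(chained === tx); // true: chaining is now possible
```

With the `this` return type, subclass setters also keep the narrower subclass type, which is why fluent APIs in TypeScript usually declare `: this` rather than the class name.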
gharchive/issue
2021-11-02T10:32:10
2025-04-01T06:38:55.134049
{ "authors": [ "SimiHunjan", "danielakhterov", "gregscullard", "valtyr-naut" ], "repo": "hashgraph/hedera-sdk-js", "url": "https://github.com/hashgraph/hedera-sdk-js/issues/728", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
516386380
Module does not support well-behaved reverse lookup under systemd-resolved The current state of consul support for systemd-resolved is a bit funky in that forward resolution works well and simply, while reverse lookup of IPs will, with low probability, return .consul domains or whatever the other DNS resolvers say. e.g. https://github.com/hashicorp/consul/issues/6462. https://github.com/hashicorp/consul/pull/6731 provides a solution for ensuring that all reverse lookups of an IP address known to consul result in a .consul domain. It has the added behavior (perhaps undesirable) of reverse lookup failing on IP addresses not known to consul unless one also configures the recursors option, e.g.

```
{ ... "recursors": ["10.0.0.2"], ... }
```

I think it would be reasonable to consider adopting this configuration in the consul-cluster module and intend this issue to be a starting point for the conversation. Thanks for the module, as it has facilitated very rapid progress on my side!

Thanks for reporting!

> It has the added behavior (perhaps undesirable) of reverse lookup failing on IP addresses not known to consul

What's the default behavior?

> unless one also configures the recursors option. e.g.

What value would you plug into recursors?

The behavior of the documented solution is described in https://github.com/hashicorp/consul/issues/6462. TL;DR: most of the time reverse resolution goes through whatever the system has been configured to use. A small percentage of the time reverse resolution goes through consul. That is to say, reverse lookup is not predictable in the documented systemd-resolved solution. The PR is predictable in that 100% of the time systemd-resolved will reverse look up via the consul agent. The cost is that the consul agent will fail to reverse look up IPs not within the .consul domain. You can solve this by placing a DNS server with appropriate reverse lookup capability into recursors.
In most situations, this probably means whatever DNS server you use for forward lookups outside of the .consul domain, i.e., whatever DNS servers you got from DHCP / static config. Make sense?

Got it, thanks for the context. So, to summarize, you are proposing that for systems using systemd-resolved we update the run-consul script to be able to add the following to resolved.conf:

```
DNS=127.0.0.1
Domains=~consul ~<CIDR>.in-addr.arpa
```

Where <CIDR> is passed in via a new param to the script... As well as add the following to the consul config:

```
"recursors": ["<DNS_SERVER>"],
```

Where <DNS_SERVER> is also passed in via a new param to the script. Is that right?

I'm definitely proposing the first thing, with the caveat that it's actually the "backwards truncated CIDR" and it should be opt-in. The second thing, I'm outlining the pros/cons. You could imagine mimicking the dnsmasq behavior of using servers from /etc/resolv.conf by having run-consul automatically set recursors by parsing the output of systemd-resolve (or resolvectl on ultra-contemporary systems) or networkctl. I'm not really sure what is right these days. (sorry for delay, we were all away for a company offsite)

Roger. I think if both options are opt-in based on passed-in params, this makes sense to add. A PR is very welcome!
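Putting the proposal together, the end state on one node might look like the sketch below. All concrete values (the 10.0.0.0/16 subnet and its reversed form, the 10.0.0.2 upstream resolver) are illustrative assumptions, not defaults shipped by the module:

```
# /etc/systemd/resolved.conf (managed by run-consul, per the proposal above)
# "0.10.in-addr.arpa" is the "backwards truncated" reverse zone for 10.0.0.0/16
[Resolve]
DNS=127.0.0.1
Domains=~consul ~0.10.in-addr.arpa

# Consul agent config fragment, so reverse lookups for IPs outside .consul
# are still forwarded to a resolver that can answer them:
{
  "recursors": ["10.0.0.2"]
}
```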
gharchive/issue
2019-11-01T22:52:40
2025-04-01T06:38:55.301901
{ "authors": [ "brikis98", "tpdownes" ], "repo": "hashicorp/terraform-aws-consul", "url": "https://github.com/hashicorp/terraform-aws-consul/issues/155", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
372416391
Revert "Tempus-467, 468, 655" Reverts hashmapinc/Tempus#668

Reverting this code because of the following errors:

```
error  Unable to resolve path to module 'well-log-viewer/node_modules/d3/build/d3'  import/no-unresolved
```

This is because the module is imported from inside well-log-viewer; the import should be resolved from the application's main node_modules. Please correct this.

Thank you for your submission, we really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. You have signed the CLA already but the status is still pending? Let us recheck it.
gharchive/pull-request
2018-10-22T07:16:33
2025-04-01T06:38:55.596346
{ "authors": [ "CLAassistant", "Shobhit2884" ], "repo": "hashmapinc/Tempus", "url": "https://github.com/hashmapinc/Tempus/pull/824", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
33118443
Haddock should hide hidden fields of re-exported modules Original reporter: ezyang@

Suppose I have the following module:

```haskell
module Compiler.Hoopl
  ( module Compiler.Hoopl.Graph
  , module Compiler.Hoopl.MkGraph
  , module Compiler.Hoopl.XUtil
  , module Compiler.Hoopl.Collections
  , module Compiler.Hoopl.Checkpoint
  , module Compiler.Hoopl.Dataflow
  , module Compiler.Hoopl.Label
  , module Compiler.Hoopl.Pointed
  , module Compiler.Hoopl.Combinators
  , module Compiler.Hoopl.Fuel
  , module Compiler.Hoopl.Unique
  , module Compiler.Hoopl.Util
  , module Compiler.Hoopl.Debug
  , module Compiler.Hoopl.Show
  )
where

import Compiler.Hoopl.Checkpoint
import Compiler.Hoopl.Collections
import Compiler.Hoopl.Combinators
import Compiler.Hoopl.Dataflow hiding
  ( wrapFR, wrapFR2, wrapBR, wrapBR2 )
import Compiler.Hoopl.Debug
import Compiler.Hoopl.Fuel hiding (withFuel, getFuel, setFuel, FuelMonadT)
import Compiler.Hoopl.Graph hiding
  ( Body
  , BCat, BHead, BTail, BClosed -- OK to expose BFirst, BMiddle, BLast
  )
import Compiler.Hoopl.Graph (Body)
import Compiler.Hoopl.Label hiding (uniqueToLbl, lblToUnique)
import Compiler.Hoopl.MkGraph
import Compiler.Hoopl.Pointed
import Compiler.Hoopl.Show
import Compiler.Hoopl.Util
import Compiler.Hoopl.Unique hiding (uniqueToInt)
import Compiler.Hoopl.XUtil
```

Haddock will incorrectly generate documentation for all functions in these modules, where it should actually hide any of the functions/data constructors that were hidden in the imports.

should be resolved by #688
gharchive/issue
2014-05-08T20:00:58
2025-04-01T06:38:55.614307
{ "authors": [ "gbaz", "ghc-mirror" ], "repo": "haskell/haddock", "url": "https://github.com/haskell/haddock/issues/174", "license": "bsd-2-clause", "license_type": "permissive", "license_source": "bigquery" }
520506051
Add a “time_offset” Requested from @tomoqv

Add a “time_offset” to take into account the amount of time it takes to reach the station, i.e. if it takes 8 minutes to walk to the train, only show departures from now() + 8 minutes onwards. The thing is that I would like to conserve space in some of my Lovelace views. Showing a number of departures that are impossible to make from my house adds a few lines of information that is of little or no interest to the person looking for the next possible departure. This is a feature request, so I fully understand if it may be low priority.

Moved to https://github.com/hasl-platform/lovelace-hasl-departure-card/issues

@tomoqv I guess you want it to show the actual time left and time for the departures but hide the ones you are not able to catch. i.e. it takes 8 minutes from your place to the station and the time is 19:42. The next departures are 19:47 (5 min), 19:51 (9 min), 19:54 (12 min). The card is going to show the actual times but hide 19:47 because you don't have enough time to catch it.

Yes, precisely. That way I won't have any superfluous information. Thanks!
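The rule described in the thread is simple to state: keep a departure only if its minutes-to-departure is at least the walking offset. A small, hypothetical sketch (not the card's actual code) of that check:

```typescript
// Hypothetical helper for the time_offset behavior described above.
// `departuresMin` holds minutes until each departure; `offsetMin` is the
// walking time to the stop. Departures you cannot catch are hidden.
function filterCatchable(departuresMin: number[], offsetMin: number): number[] {
  return departuresMin.filter((mins) => mins >= offsetMin);
}

// The example from the thread: it is 19:42, 8 minutes to the station,
// departures at 19:47 (5 min), 19:51 (9 min) and 19:54 (12 min).
console.log(filterCatchable([5, 9, 12], 8)); // [ 9, 12 ]: 19:47 is hidden
```

The card would still render the real departure times for whatever survives the filter; only the uncatchable entries disappear.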
gharchive/issue
2019-11-09T19:59:35
2025-04-01T06:38:55.883595
{ "authors": [ "dimmanramone", "tomoqv" ], "repo": "hasl-platform/lovelace-hasl-departure-card", "url": "https://github.com/hasl-platform/lovelace-hasl-departure-card/issues/2", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2335113154
🛑 Git Server is down In d9f8526, Git Server (https://git.arash-hatami.ir) was down: HTTP code: 504 Response time: 15491 ms Resolved: Git Server is back up in 77c3207 after 9 minutes.
gharchive/issue
2024-06-05T07:25:11
2025-04-01T06:38:55.989241
{ "authors": [ "hatamiarash7" ], "repo": "hatamiarash7/MyWebSite_Status", "url": "https://github.com/hatamiarash7/MyWebSite_Status/issues/1430", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1437800979
🛑 Main Website is down In d5df8d1, Main Website (https://arash-hatami.ir) was down: HTTP code: 504 Response time: 16018 ms Resolved: Main Website is back up in 623291a.
gharchive/issue
2022-11-07T05:46:53
2025-04-01T06:38:55.991666
{ "authors": [ "hatamiarash7" ], "repo": "hatamiarash7/MyWebSite_Status", "url": "https://github.com/hatamiarash7/MyWebSite_Status/issues/355", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
512826054
nndata.py errors out nndata.py errors out at the following line:

```python
t, l, loc, fs = util.load_physionet_data(subject_id, num_classes, long_edge=long_edge)
```

Since t, l, loc, fs have no data, it says:

```
np.array(trials).reshape((len(trials),) + trials[0].shape + (1,))
IndexError: list index out of range
```

Though if the following changes are made

```python
my_variable = util.load_physionet_data(subject_id, num_classes, long_edge=long_edge)
print(my_variable)
```

then I can see the entire data array returned from the above function call! What should be done to get this function call to execute properly?

Did you get any idea how to solve this issue?!
gharchive/issue
2019-10-26T11:46:39
2025-04-01T06:38:55.996334
{ "authors": [ "dhananjaisrmgpc", "nene11" ], "repo": "hauke-d/cnn-eeg", "url": "https://github.com/hauke-d/cnn-eeg/issues/3", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
629010316
Bug in handling name change On the ME screen the user can change his name. If he enters an empty string, Guest is assigned automatically. So far so good. But now if he clicks the 'Guest' to change it again, the input is not editable anymore and the user cannot change the name. Tested with https://letsmeet.no/. I think it is something to do with the store.

@Astagor Can you retest it? It seems it is working now on letsmeet.no. I will close.

I see this is fixed.

Yes, this has been fixed.
gharchive/issue
2020-06-02T08:32:53
2025-04-01T06:38:56.001368
{ "authors": [ "Astagor", "misi" ], "repo": "havfo/multiparty-meeting", "url": "https://github.com/havfo/multiparty-meeting/issues/454", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
603114855
fixes #203 also test PR in preparation for #222 :partying_face: Looks good, thank you!
gharchive/pull-request
2020-04-20T10:10:01
2025-04-01T06:38:56.002450
{ "authors": [ "christian-2", "havfo", "tapionx" ], "repo": "havfo/multiparty-meeting", "url": "https://github.com/havfo/multiparty-meeting/pull/240", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1956668221
JMX - Copy actions on an operation don't work if Hawtio is accessed at a host other than localhost If Hawtio is accessed at 0.0.0.0:8080 for instance instead of localhost:8080, the Copy method name and Copy Jolokia URL actions provided by the Operations tab don't work, as follows:

It is because navigator.clipboard.writeText works only under https unless it's localhost. https://developer.mozilla.org/en-US/docs/Web/API/Clipboard

Should we find some way to get around the limitation, or keep it as is, honouring the security consideration behind navigator.clipboard.writeText?

If we keep it as is, we can disable the menus when the console is on http and not localhost.

Showing a warning notification with instructions on why it didn't work sounds like a good solution.
gharchive/issue
2023-10-23T08:50:04
2025-04-01T06:38:56.020041
{ "authors": [ "tadayosi" ], "repo": "hawtio/hawtio-next", "url": "https://github.com/hawtio/hawtio-next/issues/628", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1581079209
Docker image A Docker image would be great to make it easier to deploy. A draft for the quantized version's Dockerfile could look like

```dockerfile
FROM python:latest

COPY . .

RUN apt-get update
RUN apt-get install -y ffmpeg
RUN pip install streamlit
RUN pip install setuptools-rust
# RUN pip install git+https://github.com/openai/whisper.git
RUN pip install git+https://github.com/MiscellaneousStuff/whisper.git
ENV PATH="$HOME/.cargo/bin:$PATH"

EXPOSE 8501

VOLUME /data/.whisper_settings.json

CMD streamlit run app/01_🏠_Home.py
```

I'm trying to figure out what directory is used for the data.

Edit: EXPOSE instead of PORT
gharchive/issue
2023-02-12T01:30:22
2025-04-01T06:38:56.031391
{ "authors": [ "hayabhay", "rursache", "schklom" ], "repo": "hayabhay/whisper-ui", "url": "https://github.com/hayabhay/whisper-ui/issues/23", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1429359936
Worked amazing on Monterey but not working in Ventura :/ Please advise. Thank you so much Hi @BraveBird3291 can you please be more specific what's not working and any error messages? list or list enabled still works for me. I think list enabled is showing disabled items or disable seems to be not working (i.e. after disable adobe list enabled still shows items like com.adobe.AdobeCreativeCloud). Before upgrading to Ventura list would show launch status like enabled or disabled but after the upgrade it shows MachService or OnDemand, Onstartup, Always or just Unknown. This was tested on m1 mac. Thanks. Would it be possible do deliver me the file permissions ('ls -alh') of one of those plists and its contents? @hazcod Will do but I'm currently out travelling will send you in 2-3 days ;) @hazcod Sorry for the delay. Here is a few file permissions of plists. @user112200 Can you try running hte below and let me know what it returns? launchctl disable user/"$(id -u)"/com.adobe.AdobeCreativeCloud @user112200 And it doesn't show up in; launchctl print-disabled user/"$(id -u)" | grep adobe @hazcod Actually it does show up in the disabled. Rerun again after 13.0.1 update.
gharchive/issue
2022-10-31T07:01:22
2025-04-01T06:38:56.042056
{ "authors": [ "BraveBird3291", "hazcod", "user112200" ], "repo": "hazcod/maclaunch", "url": "https://github.com/hazcod/maclaunch/issues/23", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
42331984
worker Out Of Memory Error While running: class=com.hazelcast.stabilizer.tests.map.StringMapTest threadCount=40 keyLength=8 valueLength=131072 keyCount=100000 valueCount=100 writePercentage=0 basename=map logFrequency = 10000 performanceUpdateFrequency = 100 ``` and coordinator: ``` coordinator --monitorPerformance --duration 5m --workerVmOptions "-Xmx4g" map.properties ``` ``` WARN 20:28:30 Failure #1 192.168.2.102:5701 Worker Out Of Memory Error FATAL 20:28:50 Failed java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.util.concurrent.TimeoutException: Timeout while executing : DoneCommand{testId=''} at com.hazelcast.stabilizer.coordinator.remoting.AgentsClient.getAllFutures(AgentsClient.java:238) at com.hazelcast.stabilizer.coordinator.remoting.AgentsClient.getAllFutures(AgentsClient.java:203) at com.hazelcast.stabilizer.coordinator.remoting.AgentsClient.executeOnAllWorkers(AgentsClient.java:392) at com.hazelcast.stabilizer.coordinator.remoting.AgentsClient.waitDone(AgentsClient.java:180) at com.hazelcast.stabilizer.coordinator.TestCaseRunner.run(TestCaseRunner.java:67) at com.hazelcast.stabilizer.coordinator.Coordinator.runSequential(Coordinator.java:197) at com.hazelcast.stabilizer.coordinator.Coordinator.runTestSuite(Coordinator.java:178) at com.hazelcast.stabilizer.coordinator.Coordinator.run(Coordinator.java:88) at com.hazelcast.stabilizer.coordinator.Coordinator.main(Coordinator.java:358) Caused by: java.util.concurrent.ExecutionException: java.util.concurrent.TimeoutException: Timeout while executing : DoneCommand{testId=''} at java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.util.concurrent.FutureTask.get(FutureTask.java:206) at com.hazelcast.stabilizer.coordinator.remoting.AgentsClient.getAllFutures(AgentsClient.java:212) ... 
8 more Caused by: java.util.concurrent.TimeoutException: Timeout while executing : DoneCommand{testId=''} at com.hazelcast.stabilizer.agent.CommandFuture.get(CommandFuture.java:75) at com.hazelcast.stabilizer.agent.workerjvm.WorkerJvmManager.executeOnWorkers(WorkerJvmManager.java:214) at com.hazelcast.stabilizer.agent.workerjvm.WorkerJvmManager.executeOnAllWorkers(WorkerJvmManager.java:187) at com.hazelcast.stabilizer.agent.remoting.ClientSocketTask.execute(ClientSocketTask.java:81) at com.hazelcast.stabilizer.agent.remoting.ClientSocketTask.run(ClientSocketTask.java:46) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) at ------ End remote and begin local stack-trace ------.(Unknown Source) at com.hazelcast.stabilizer.coordinator.remoting.AgentClient.execute(AgentClient.java:41) at com.hazelcast.stabilizer.coordinator.remoting.AgentsClient$10.call(AgentsClient.java:382) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) at ------ End remote and begin local stack-trace ------.(Unknown Source) at com.hazelcast.stabilizer.coordinator.remoting.AgentsClient.getAllFutures(AgentsClient.java:233) ... 
8 more INFO 20:28:50 Aborting testsuite due to failure INFO 20:28:50 Terminating workers INFO 20:28:50 All workers have been terminated INFO 20:28:50 Starting cool down (10 sec) FATAL 20:28:58 java.util.concurrent.ExecutionException: java.util.concurrent.TimeoutException: Timeout while executing : GetOperationCountCommand{} java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.util.concurrent.TimeoutException: Timeout while executing : GetOperationCountCommand{} at com.hazelcast.stabilizer.coordinator.remoting.AgentsClient.getAllFutures(AgentsClient.java:238) at com.hazelcast.stabilizer.coordinator.remoting.AgentsClient.getAllFutures(AgentsClient.java:203) at com.hazelcast.stabilizer.coordinator.remoting.AgentsClient.executeOnAllWorkers(AgentsClient.java:392) at com.hazelcast.stabilizer.coordinator.PerformanceMonitor.checkPerformance(PerformanceMonitor.java:39) at com.hazelcast.stabilizer.coordinator.PerformanceMonitor.run(PerformanceMonitor.java:30) Caused by: java.util.concurrent.ExecutionException: java.util.concurrent.TimeoutException: Timeout while executing : GetOperationCountCommand{} at java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.util.concurrent.FutureTask.get(FutureTask.java:206) at com.hazelcast.stabilizer.coordinator.remoting.AgentsClient.getAllFutures(AgentsClient.java:212) ... 
4 more Caused by: java.util.concurrent.TimeoutException: Timeout while executing : GetOperationCountCommand{} at com.hazelcast.stabilizer.agent.CommandFuture.get(CommandFuture.java:75) at com.hazelcast.stabilizer.agent.workerjvm.WorkerJvmManager.executeOnWorkers(WorkerJvmManager.java:214) at com.hazelcast.stabilizer.agent.workerjvm.WorkerJvmManager.executeOnAllWorkers(WorkerJvmManager.java:187) at com.hazelcast.stabilizer.agent.remoting.ClientSocketTask.execute(ClientSocketTask.java:81) at com.hazelcast.stabilizer.agent.remoting.ClientSocketTask.run(ClientSocketTask.java:46) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) at ------ End remote and begin local stack-trace ------.(Unknown Source) at com.hazelcast.stabilizer.coordinator.remoting.AgentClient.execute(AgentClient.java:41) at com.hazelcast.stabilizer.coordinator.remoting.AgentsClient$10.call(AgentsClient.java:382) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) at ------ End remote and begin local stack-trace ------.(Unknown Source) at com.hazelcast.stabilizer.coordinator.remoting.AgentsClient.getAllFutures(AgentsClient.java:233) ... 4 more ``` I'm going to close this one till it pops up again. a lot has been done on dealing better with timeouts.
gharchive/issue
2014-09-09T17:37:53
2025-04-01T06:38:56.055880
{ "authors": [ "pveentjer" ], "repo": "hazelcast/hazelcast-stabilizer", "url": "https://github.com/hazelcast/hazelcast-stabilizer/issues/285", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1654531062
the code execution step Hello, I have been trying to run this code, but I keep encountering errors at the last step. Here's how I wrote the code, and I was wondering if you could help me identify where my mistake is:

```
target/release/pcsrt [OPTIONS] --centroid [LAT(float)],[LON(float)],ELEVATION[(float)] --time-range FROM [(2020-01-01T12:00:00.000Z),TO(2020-03-23T18:00:00.000Z)] --step-mins [int] --linke-turbidity-factor [SINGLE_LINKE(float)] [INPUT_FILE] [OUTPUT_FILE]
```

I would be very grateful if you could assist me. Thank you in advance.

Hi, replace the [PLACEHOLDER]s with values you need, e.g.

```
./target/release/pcsrt --centroid 57.362845,12.976645,225 --time-range 2020-01-01T12:00:00.000Z,2020-03-23T18:00:00.000Z --step-mins 60 --linke-turbidity-factor 3.5 ./input.las ./output.las
```

Hello, thank you very much for your response. I have obtained the results, but I am encountering an error while constructing the normals (please refer to the yellow-highlighted line in the attached screenshot). Additionally, my point cloud includes the attributes X Y Z R G B. Could you please let me know the possible source of this error? Thank you very much for your assistance.

It's just a warning. Normal vectors are usually not constructed in voxels that do not have sufficient points within their volume or the volume of adjacent voxels. In that case a vertical normal vector ([0;0;1]) is used instead and the voxel is treated as a horizontal plane when calculating the incidence angle of the solar rays.

thanks a lot for your help
gharchive/issue
2023-04-04T20:25:17
2025-04-01T06:38:56.093054
{ "authors": [ "ayoubOUSSM2021", "hblyp" ], "repo": "hblyp/pcsrt", "url": "https://github.com/hblyp/pcsrt/issues/1", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1833810988
Randomize table generation parameters

Shouldn't be difficult to implement, foundation is already there to allow it. I'll get to it shortly.

I recommend discussing this with @milortie. I'm not sure we want to start with fully random at first.

Very true, waiting to have the option to generate column amount too (which should be soon)

The variables to be varied in the test include the following (with upper and lower bounds):

- Different numbers of rows and columns: # columns (min, max) = (1, 15), # rows (min, max) = (1, 50) -- may result in multipage tables (that's ok).
- Level of granularity: 1, 5, 10, 15 is likely OK (or something on that order of magnitude) - start with 1000 tables in the test case with even distribution. Track results by table type.
- Presence or lack of lines around the cells - 1000 tables in different configurations, even distribution - options include: outer lines only, header lines only, some horizontal lines only, some vertical lines only, lines that don't go all the way across the row/column.
- Vary the amount of padding around the tables and within the cells and table. Default is 0.7 inch around the tables. Vary range from (0.0 to 1.5 inches) - this test only makes sense with text or other artifacts on the page around the table. Use Lorem Ipsum for that test. - 1000 tables with even distribution in increments of 0.1 inches.
- Different types of characters in the cells (such as French characters, math symbols, chemical formulas, @ symbol). In assessing performance we should look for these characters and rank how well each comes through. Start by making a list of relevant characters and then make "words" out of these characters. May not require many table tests, this is more about the OCR performance.
- Different font types, sizes, colours: use the test above (types of characters). Make a list of fonts, sizes and colours to try this with.

PR #64 includes the results from these randomized runs
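The bounds above can be turned into a sampler directly; here is a quick sketch of what the randomization could look like (illustrative Python only; the function and field names are assumptions, not code from this repository):

```python
import random

def sample_table_config(rng):
    """Draw one randomized table configuration within the bounds above."""
    return {
        "columns": rng.randint(1, 15),
        "rows": rng.randint(1, 50),            # may produce multipage tables
        "granularity": rng.choice([1, 5, 10, 15]),
        "lines": rng.choice([
            "outer-only", "header-only", "some-horizontal",
            "some-vertical", "partial",
        ]),
        # padding around the table, 0.0 to 1.5 inches in 0.1-inch steps
        "padding_in": rng.choice([i / 10 for i in range(16)]),
    }

rng = random.Random(0)
configs = [sample_table_config(rng) for _ in range(1000)]  # one test batch
```

Fixing the seed keeps the 1000-table batch reproducible across runs, which makes it easier to compare results by table type later.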
gharchive/issue
2023-08-02T19:46:30
2025-04-01T06:38:56.126230
{ "authors": [ "Louis-Boulet-HC", "pgaviganHC" ], "repo": "hc-sc-ocdo-bdpd/file-processing-tools", "url": "https://github.com/hc-sc-ocdo-bdpd/file-processing-tools/issues/36", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
240015171
fix paths in example notebooks to avoid changing directory

This is a problem due to path management issues with different OS: `os.chdir(..)` on notebook.

this was already fixed
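One portable fix for this kind of problem (a sketch with an assumed data/ layout, not the workshop's actual code) is to resolve paths from a single root instead of calling os.chdir inside each notebook:

```python
from pathlib import Path

# Resolve everything against one project root instead of whatever the
# notebook's current working directory happens to be on each OS.
PROJECT_ROOT = Path.cwd()          # in a plain script: Path(__file__).resolve().parent
DATA_DIR = PROJECT_ROOT / "data"   # assumed layout, for illustration only

def data_path(name):
    """Build an OS-independent path to a file under data/."""
    return DATA_DIR / name

print(data_path("ratings.csv"))
```

Because pathlib handles separators per platform, the same notebook then runs unchanged on Windows, macOS, and Linux without ever changing directory.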
gharchive/issue
2017-07-02T14:59:36
2025-04-01T06:38:56.131657
{ "authors": [ "hcorona" ], "repo": "hcorona/recsys-101-workshop", "url": "https://github.com/hcorona/recsys-101-workshop/issues/5", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
936746088
how test my own dataset without groundtruth?

As described in the title, the data from the real world has no GT mask, but almost every algorithm needs a gt_mask to get roc_auc_score. Have you tried to decide a threshold on the anomaly score, above which a sample is considered an anomaly, other than obtaining the optimal threshold from the GT? This problem has confused me all the time.

Unfortunately, you have to slightly modify the code to make it work with your own dataset. In the case of image-level anomaly classification, to obtain an optimal threshold you should prepare a dataset with two classes. Then you can calculate the ROC curve, and you can try several methods to get an optimal threshold (like the max of Youden's J statistic). On the other hand, it is generally hard to prepare pixel-level anomaly GT. In this case, there is no way to get a ROC curve and an optimal threshold. I don't know, but I would try checking the distribution of normal pixels' scores and setting the threshold manually.

Thanks for your reply! Yes, the pixel-level threshold is the problem. I have an idea about the 3-sigma rule of the Gaussian distribution of patches, but each patch has its own distribution. As for how to use that information, I'm working on it! Would you have any advice?

Sorry, I don't have a great idea. But, as I said, I would try to check the distribution of each pixel's values in the (normal) anomaly map.
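For the image-level case discussed above, the "max of Youden's J statistic" suggestion can be sketched in a few lines (plain Python for illustration; in practice sklearn.metrics.roc_curve gives the same TPR/FPR pairs):

```python
def youden_optimal_threshold(scores, labels):
    """Pick the score threshold maximizing Youden's J = TPR - FPR.

    scores: anomaly scores; labels: 1 = anomalous, 0 = normal.
    A sample is predicted anomalous when its score >= threshold.
    """
    pos = sum(labels)
    neg = len(labels) - pos
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        j = (tp / pos if pos else 0.0) - (fp / neg if neg else 0.0)
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j
```

With scores [0.1, 0.2, 0.8, 0.9] and labels [0, 0, 1, 1] this returns threshold 0.8 with J = 1.0, i.e. a perfect split on that small labeled set.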
gharchive/issue
2021-07-05T06:39:56
2025-04-01T06:38:56.142557
{ "authors": [ "hcw-00", "letmejoin" ], "repo": "hcw-00/PatchCore_anomaly_detection", "url": "https://github.com/hcw-00/PatchCore_anomaly_detection/issues/7", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2616440348
🛑 stagin-env is down

In 6e9e8d0, stagin-env (https://staging.lessonpal.com) was down:

- HTTP code: 0
- Response time: 0 ms

Resolved: stagin-env is back up in 1de958f after 14 minutes.
gharchive/issue
2024-10-27T09:13:27
2025-04-01T06:38:56.149641
{ "authors": [ "heIsThePirate" ], "repo": "heIsThePirate/upptime-lessonpal", "url": "https://github.com/heIsThePirate/upptime-lessonpal/issues/1047", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2551556661
🛑 stagin-env is down

In 0f6ca11, stagin-env (https://staging.lessonpal.com) was down:

- HTTP code: 0
- Response time: 0 ms

Resolved: stagin-env is back up in 735196f after 6 minutes.
gharchive/issue
2024-09-26T21:25:05
2025-04-01T06:38:56.152128
{ "authors": [ "heIsThePirate" ], "repo": "heIsThePirate/upptime-lessonpal", "url": "https://github.com/heIsThePirate/upptime-lessonpal/issues/835", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
610461299
2020.5.1 update

update from origin on 5.1, 2020 (auto)
gharchive/pull-request
2020-04-30T23:42:57
2025-04-01T06:38:56.162304
{ "authors": [ "heartdiamond" ], "repo": "heartdiamond/InternetArchitect", "url": "https://github.com/heartdiamond/InternetArchitect/pull/1", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
491327997
Constants applicable to all renderers

Hi! As you may have noticed, I'm currently writing an ncurses-based renderer for iced in my spare time, and am using my own fork of iced for that purpose. The only modification I did was to change some defaults. As it happens, the current constants are assuming pixel units when pancurses of course works with lines and columns. This makes the current defaults on some Widgets (e.g. Checkbox, Slider, and Buttons) completely wrong.

Even if not considering my use case, which I agree is fairly uncommon, I am not sure what these constants were there for in the first place, because they assume some sort of mandatory smaller screen resolution. What do you think would be the correct way of not enforcing those defaults upon every renderer implementor?

This is one of the main reasons the current release is alpha. I have shared my thoughts about this in #6. It should be simple to fix, we just need to choose a solution.

Also, I think we can split layouting and events into its own crate. Thus, I am rethinking some of the design a bit and there may be related changes soon.
gharchive/issue
2019-09-09T21:09:21
2025-04-01T06:38:56.203464
{ "authors": [ "AlisCode", "hecrj" ], "repo": "hecrj/iced", "url": "https://github.com/hecrj/iced/issues/12", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
371428944
[AuthLDAP] User '' logging in

This is not a real issue, but a request for help. I'm running a FreeIPA server on CentOS 7, have a few WordPress websites on another CentOS 7 server as well, and I'm looking to establish SSO for my WordPress users across those websites and some other apps. In order to do that I'm trying to configure AuthLDAP properly.

This is my auth_gssapi.conf:

```
<Location "/wp-login.php">
    AuthType GSSAPI
    AuthName "GSSAPI Single Sign On Login"
    GssapiCredStore keytab:/usr/local/apache/conf/http.keytab
    GssapiBasicAuthMech krb5
    Require valid-user
</Location>
```

As an AuthLDAP URI I've tried this and many, many other combinations:

```
ldap://admin_user_name:admin_password@ipa-server.example.com:389/dc=ipa-server,dc=example,dc=com
```

When I visit my WordPress website at /wp-admin/, apache's error log gives this:

```
[Thu Oct 18 09:49:41.265710 2018] [:error] [pid 1297:tid 140477911234304] [client my_ip_address] [AuthLDAP] User '' logging in
[Thu Oct 18 09:49:41.265820 2018] [:error] [pid 1297:tid 140477911234304] [client my_ip_address] [AuthLDAP] Username not supplied: return false
```

I did telnet and ldapsearch tests from my WordPress websites' server to the FreeIPA server; these are the results:

```
[root@server ~]# telnet IPA-SERVER_DOMAIN_IP_ADDRESS 389
Trying IPA-SERVER_DOMAIN_IP_ADDRESS...
Connected to IPA-SERVER_DOMAIN_IP_ADDRESS.
Escape character is '^]'.
Connection closed by foreign host.
```

ldapsearch gives a response with a bunch of text, and I won't copy all of it, just the end:

```
......
# search result
search: 2
result: 0 Success

# numResponses: 115
# numEntries: 114
[root@server ~]#
```

so it looks like the servers are able to communicate.

I'm trying to SSO to WordPress from my Windows 7 machine, and I have MIT Kerberos Ticket Manager installed and successfully obtained tickets:

so, FreeIPA, my Mozilla browser (for the Kerberos negotiation process) and everything else is configured as it should be.

I've been investigating and trying the last few days to make this work but nothing has helped. I hope that I've given enough information. Can you please tell me what I am missing?

Hey Adam. For me it looks like you are mixing two different things up. The one is the single sign on with Kerberos and the other one is the LDAP authentication. The AuthLDAP plugin is only designed to allow people to log into a WordPress site using their LDAP credentials. But those credentials need to be put into the WordPress login form.

But as far as I see it you want something different. You want the user to be logged into WordPress automatically using the Kerberos token. That's something completely different, as the user does not need to provide any credentials at all. So you would need a plugin that checks the Kerberos token against the Kerberos server and then decides whether the user may enter WordPress or not. That's not the scope of AuthLDAP (as LDAP is not involved at all in that process).

In a quick search I only found one plugin that mentions Kerberos and that is for ActiveDirectory (https://wordpress.org/plugins/next-active-directory-integration/). So it looks like you either need to write it yourself or need a different approach :-( Sorry that I can't help you there.

Andreas, thank you for your quick answer. Yes, you're right, I've been mixing these things, because I've never been in touch with LDAP/Kerberos before, so it's a little bit tricky for me to understand properly how things work. But your answer has helped me to get a clearer picture. I think that I will follow LDAP as there is a plugin already that you've created. Thanks!
gharchive/issue
2018-10-18T09:01:47
2025-04-01T06:38:56.245018
{ "authors": [ "Adamcina", "heiglandreas" ], "repo": "heiglandreas/authLdap", "url": "https://github.com/heiglandreas/authLdap/issues/160", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1643860255
vpn mode

could we use vpn mode with this project? i wanna use it on an ios application but with VPN mode (full tunnel)

https://github.com/daemooon/Tun2SocksKit
https://github.com/daemooon/Mango
gharchive/issue
2023-03-28T12:36:24
2025-04-01T06:38:56.247117
{ "authors": [ "TonDevv", "heiher" ], "repo": "heiher/hev-socks5-tunnel", "url": "https://github.com/heiher/hev-socks5-tunnel/issues/32", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1145079894
[Removal]:

Hotspot Name: Overt Arctic Hamster
Hotspot b58 Address: 11aHCWsePUUpTEAfisrPNqB5zsYD77uHd5DhfzEMzfJAhbV9gy8
Discord Handle: No response
Hotspot Manufacturer: PantherX
Removal Reason: I bought a used hotspot
Modifications: No
Extra forwarders: No
Extra antennas: No
Additional Information: No response

We have taken into account this removal request in the next iteration of the denylist. It may take some time before it's processed.

This is an automated comment due to the volume of addition and removal requests. A future tool is in the works to provide more insight into the analysis approach made by the Helium team and community members that maintain this list.
gharchive/issue
2022-02-20T17:13:04
2025-04-01T06:38:56.281254
{ "authors": [ "abhay", "lucaschen520" ], "repo": "helium/denylist", "url": "https://github.com/helium/denylist/issues/1360", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1136006487
[Removal]: Bent Lemonade Viper

Hotspot Name: bent-lemonade-viper
Hotspot b58 Address: 112JqL42UbqMDnWDyHJFBXNyoGC671mXTUbdZFDNo3guAtTHh
Discord Handle: Merikon#6484
Hotspot Manufacturer: PantherX
Removal Reason: The location and the actual place used to not coincide with each other, in order to reach larger coverage. That's probably why my hotspots appear on the denylist. However, the problem has been fixed now and the hotspots appear where they should be.
Modifications: No
Extra forwarders: No
Extra antennas: No
Additional Information:

Hi Helium management team, thanks for all the effort you have done to build an effective and honest network. I'm an officer of a local WLAN & IoT service firm. Many of the residents in my town got involved in the Helium network and community by deploying hotspots at their homes last September. Most of them are using indoor hotspots while some of them deploy extra antennas (maybe hand-made). We were impressed by this genius idea of the people's network. From my perspective as an IoT industry participant, this network definitely lowers the total IoT infrastructure cost and must grow in a rapid way. Some of the residents in the coverage region also tried the RAK WisNode Sensor and they really work well.

However, in order to reach larger coverage and increase the reward scale, some of the hotspots were located a small distance away from where they should be. We realized this attempt led to the inclusion in the denylist, so we corrected this mistake by moving hotspots and relocating them on the Helium map. So far all of the hotspots appear at the right place (no more than 300 m deviation).

Also, I found that there were some hotspots located near our town but they really shouldn't be there. These virtual hotspots are included in the denylist as well. I wonder if those gaming hotspots that never interact with us put all of us onto the denylist? If so, please update your algorithm in order to avoid this situation again.

Thanks

We have taken into account this removal request in the next iteration of the denylist. It may take some time before it's processed.

This is an automated comment due to the volume of addition and removal requests. A future tool is in the works to provide more insight into the analysis approach made by the Helium team and community members that maintain this list.
gharchive/issue
2022-02-13T15:24:08
2025-04-01T06:38:56.286629
{ "authors": [ "abhay", "merikon2" ], "repo": "helium/denylist", "url": "https://github.com/helium/denylist/issues/767", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
925682467
Add totals to lists (e.g. "Witnesses (14)")

fixes #457

preview:

I made it as a component, so the following configurations are possible to reuse anywhere in Explorer:

- title:
- description: (could include links like: `<a href='https://docs.helium.com'>Click here to learn more</a>`)
- title and description (hidden description): hides description by default if a title and description are both given, so it doesn't take up too much vertical space
- title and description (expanded description):

could we also add a count to the "Hotspots in Hex" tab?

Love it. Do it.

@danielcolinjames commented on this pull request, in components/InfoBox/HotspotDetails/NearbyHotspotsPane.js:

```diff
@@ -23,7 +23,12 @@ const NearbyHotspotsPane = ({ hotspot }) => {
           'overflow-y-hidden': loading,
         })}
-      <NearbyHotspotsList hotspots={hotspots || []} isLoading={loading} />
+      <NearbyHotspotsList
+        hotspots={hotspots || []}
+        isLoading={loading}
+        title={`Nearby Hotspots (${nearbyHotspots?.length})`}
+        // description="[Nearby Hotspots description text]"
```

oh yeah, was gonna ask Coco if she wanted to put something there. @cokes518 thoughts?
There are many reasons a nearby Hotspot may not be a valid witness. </div> <div className="pt-1.5"> Learn more{' '} <a className="text-navy-400" href="https://docs.helium.com/troubleshooting/understanding-witnesses/" rel="noopener noreferrer" target="_blank" > here </a> . </div> </> } nearby: title="Nearby Hotspots" description={`Hotspots on the Helium network that are within an appropriate physical distance to witness ${animalHash( hotspot.address, )}'s beacons, or to have their beacons witnessed by it.`}
gharchive/pull-request
2021-06-20T21:09:18
2025-04-01T06:38:56.299549
{ "authors": [ "allenan", "cokes518", "danielcolinjames" ], "repo": "helium/explorer", "url": "https://github.com/helium/explorer/pull/485", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
934005004
Unify all text in the system interface to size 12 The text on the menu bar and button is too large, but the file name and window text on the desktop or Folder are too small Size 12 is the most comfortable size visually. With the consistent text size, the screen will not look very chaotic. Duplicate of #230
gharchive/issue
2021-06-30T18:45:23
2025-04-01T06:38:56.351390
{ "authors": [ "louies0623", "probonopd" ], "repo": "helloSystem/ISO", "url": "https://github.com/helloSystem/ISO/issues/229", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
1101264999
feat: multilingualization of context menu Objectives related issue: #27 Implementation Using vue-i18n, the contextmenu can be displayed in English. You can switch the language by passing the locale to this module in props. well done, but vue-i18n is not tailored for use in component libraries, so maybe try another way. @hellowuxin well done, but vue-i18n is not tailored for use in component libraries, so maybe try another way. Why is vue-i18n not suitable for component libraries? Please explain in detail. Do you want to apply i18n resources asynchronously with lazy load? If so, I will try to support lazy load by referring to the following. https://github.com/intlify/vue-i18n-next/tree/master/examples/lazy-loading @hellowuxin well done, but vue-i18n is not tailored for use in component libraries, so maybe try another way. Why is vue-i18n not suitable for component libraries? Please explain in detail. Do you want to apply i18n resources asynchronously with lazy load? If so, I will try to support lazy load by referring to the following. https://github.com/intlify/vue-i18n-next/tree/master/examples/lazy-loading see https://github.com/kazupon/vue-i18n/issues/746 use i18n instead thanks!
gharchive/pull-request
2022-01-13T06:30:09
2025-04-01T06:38:56.376401
{ "authors": [ "hellowuxin", "kiibo382" ], "repo": "hellowuxin/vue3-mindmap", "url": "https://github.com/hellowuxin/vue3-mindmap/pull/28", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
464975497
[stable/redis] exporter to 1.0.3 Signed-off-by: Naseem naseemkullah@gmail.com What this PR does / why we need it: Which issue this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close that issue when PR gets merged) fixes # Special notes for your reviewer: Checklist [Place an '[x]' (no spaces) in all applicable fields. Please remove unrelated fields.] [x] DCO signed [x] Chart Version bumped [x] Variables are documented in the README.md [x] Title of the PR starts with chart name (e.g. [stable/chart]) There are some changes, mostly in metrics name, should we advertise this? /assign /ok-to-test Good idea. I've added some notes. Feel free to edit them to your liking! Thanks for the PR /lgtm My pleasure.
gharchive/pull-request
2019-07-07T17:40:27
2025-04-01T06:38:56.381673
{ "authors": [ "desaintmartin", "javsalgar", "naseemkullah" ], "repo": "helm/charts", "url": "https://github.com/helm/charts/pull/15302", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
480126235
Error: forwarding ports: error upgrading connection: unable to upgrade connection: pod does not exist root@kube-master:~# helm version Client: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"} Error: forwarding ports: error upgrading connection: unable to upgrade connection: pod does not exist root@kube-master:~# kubectl version Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.2", GitCommit:"f6278300bebbb750328ac16ee6dd3aa7d3549568", GitTreeState:"clean", BuildDate:"2019-08-05T09:23:26Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.2", GitCommit:"f6278300bebbb750328ac16ee6dd3aa7d3549568", GitTreeState:"clean", BuildDate:"2019-08-05T09:15:22Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"} root@kube-master:~# kubectl get pods -n kube-system NAME READY STATUS RESTARTS AGE coredns-5c98db65d4-8kzsc 1/1 Running 0 19m coredns-5c98db65d4-j4khc 1/1 Running 0 19m etcd-kube-master 1/1 Running 0 19m kube-apiserver-kube-master 1/1 Running 0 19m kube-controller-manager-kube-master 1/1 Running 0 19m kube-flannel-ds-amd64-6bd44 1/1 Running 0 19m kube-flannel-ds-amd64-fdr42 1/1 Running 0 16m kube-flannel-ds-amd64-s6d4r 1/1 Running 0 19m kube-proxy-fslxf 1/1 Running 0 16m kube-proxy-tmtdm 1/1 Running 0 19m kube-proxy-xtz9x 1/1 Running 0 19m kube-scheduler-kube-master 1/1 Running 0 19m tiller-deploy-6b9c575bfc-rzfcs 1/1 Running 0 5m56s Helm is configured using following commands kubectl -n kube-system create serviceaccount tiller kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller helm init --service-account=tiller --history-max 200 same error [root@centosvm01 ~]# helm version Client: &version.Version{SemVer:"v2.15.1", GitCommit:"cf1de4f8ba70eded310918a8af3a96bfe8e7683b", GitTreeState:"clean"} Error: forwarding ports: error upgrading connection: 
unable to upgrade connection: pod does not exist [root@centosvm01 ~]# kubectl version --short Client Version: v1.16.2 Server Version: v1.16.2 Were you abnle to resolve this issue?
gharchive/issue
2019-08-13T12:21:56
2025-04-01T06:38:56.384889
{ "authors": [ "bacongobbler", "ghost", "willzhang" ], "repo": "helm/helm", "url": "https://github.com/helm/helm/issues/6216", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1880790967
feat(providers): first implementation of a 1password connect provider This PR adds a provider for 1password connect (https://developer.1password.com/docs/connect). From the README: For this provider to work you require a working and accessible 1Password connect server. The following env vars have to be configured: OP_CONNECT_HOST OP_CONNET_TOKEN 1Password is organized in vaults and items. An item can have multiple fields with or without a section. Labels can be set on fields and sections. Vaults, items, sections and labels can be accessed by ID or by label/name (and IDs and labels can be mixed and matched in one URL). If a section does not have a label the field is only accessible via the section ID. This does not hold true for some default fields which may have no section at all (e.g.username and password for a Login item). Caution: vals-expressions are parsed as URIs. For the 1Password connect provider the host component of the URI identifies the vault (by ID or name). Therefore vaults containing certain characters not allowed in the host component (e.g. whitespaces, see RFC-3986 for details) can only be accessed by ID. Examples: ref+onepasswordconnect://VAULT_ID/ITEM_ID#/[SECTION_ID.]FIELD_ID ref+onepasswordconnect://VAULT_LABEL/ITEM_LABEL#/[SECTION_LABEL.]FIELD_LABEL ref+onepasswordconnect://VAULT_LABEL/ITEM_ID#/[SECTION_LABEL.]FIELD_ID If merged, the PR resolves #54 - but there is one topic we should discuss first. Using URIs without a fragment is most probably useless in the current implementation. To support any combination of labels and IDs in the URI, the string map is populated with every possible combination of label and ID for its keys which does not lead to meaningful data. Currently I have no proper solution for that. Maybe 1password is not suited for this use case? (then we should simply drop the stringprovider ability) Or we should introduce PARAMS/query components to control the behavior of the stringprovider? 
(like preferLabels to use labels for the map whenever possible sacrificing the ability to match keys by ID?) Any ideas are appreciated. And since I'm not really fluent in go I'm also open to feedback regarding codestyle, stability etc :-) 1password is organized in vaults -> items -> [sections ->] fieldsSee also description of the vals-URI for this provider in the READMEOn 21 Sep 2023 15:07, yxxhero @.***> wrote: @yxxhero commented on this pull request. In pkg/providers/onepasswordconnect/onepasswordconnect.go: +} +// New creates a new 1Password Connect provider +func New(cfg api.StaticConfig) *provider { p := &provider{} return p +} +// Get secret string from 1Password Connect +func (p *provider) GetString(key string) (string, error) { var err error splits := strings.Split(key, "/") if len(splits) < 2 { return "", fmt.Errorf("invalid URI: %v", errors.New("vault or item missing")) vault? —Reply to this email directly, view it on GitHub, or unsubscribe.You are receiving this because you authored the thread.Message ID: @.***> @juckerf BTW. could you add some tests for this feature? As the first implementation of a 1password connect provider. it's good for me. thanks for the merge!
gharchive/pull-request
2023-09-04T19:40:44
2025-04-01T06:38:56.395966
{ "authors": [ "juckerf", "yxxhero" ], "repo": "helmfile/vals", "url": "https://github.com/helmfile/vals/pull/164", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1696831114
Adds Python (Django) Web App with PostgreSQL via ACA Adds "Python (Django) Web App with PostgreSQL via Azure Container Apps" PR sent to wrong fork
gharchive/pull-request
2023-05-04T23:41:24
2025-04-01T06:38:56.407157
{ "authors": [ "kjaymiller" ], "repo": "hemarina/awesome-azd", "url": "https://github.com/hemarina/awesome-azd/pull/1", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1081325979
Create release workflow [x] publish pre-compiled binary [x] publish container image [x] publish to crates.io [x] add cargo install command to the README.md [ ] publish to https://github.com/hendrikmaus/homebrew-tap The custom caching fails as the publisher action doesn't seem to use the expected paths: https://github.com/hendrikmaus/actions-digest/runs/4558169870?check_suite_focus=true Fixed in https://github.com/hendrikmaus/actions-digest/pull/12 Get the release workflow from https://github.com/hendrikmaus/rust-workflows #15 Use https://github.com/googleapis/release-please for a pull-request based approach.
gharchive/issue
2021-12-15T17:58:19
2025-04-01T06:38:56.423384
{ "authors": [ "hendrikmaus" ], "repo": "hendrikmaus/actions-digest", "url": "https://github.com/hendrikmaus/actions-digest/issues/8", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
117776888
MIP version number in QC sample info? Is it a good idea to include the version of MIP along with other programs? Sure why not! Done
gharchive/issue
2015-11-19T09:44:34
2025-04-01T06:38:56.425423
{ "authors": [ "henrikstranneheim", "robinandeer" ], "repo": "henrikstranneheim/MIP", "url": "https://github.com/henrikstranneheim/MIP/issues/84", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
573174403
How it works for 2D? Hi, How it works for 2D? I read your source for 3D, but don't understand how i can detect x/y movement to my player do left/right and jump. Can you help me with a simple code here? Thanks. Exactly! The plugin creator has not even taken the effort to document HOW TO USE the damn thing @prsolucoes Hi! Sorry taking so long to answer. I actually made this addon thinking only in a 3D solution for movement + camera, so haven't thought on making other kinds of examples. It should be pretty straightforward tho; you'd only need to add an overlay on top of the addon with the 2D buttons you want to detect. When I get time I'll try updating it. @Hansel-Dsilva This is an open-source solution I provided for free with no warranties whatsoever. If you don't want to use it because it lacks documentation, it's fine by me, but your comment is pretty much you feeling entitled of something you don't have, so my recommendation for you is to be more polite next time, or else you risk simply being ignored by any sane developer. Ok sorry, my bad. So how does one go about utilizing your plugin? @Hansel-Dsilva Np. The plugin only does one thing, that is to get a user input and give back some 'interpreted' values (e.g. how far and in which direction the left analog stick is moving) based on the two halves of the device screen. The project example is just like that: it reads on the plugin Node what are the values for the left analog stick (left half of the screen) to move the character and the values of each swipe on the right half of the screen to move the camera. It's basically up to you how to interpret those values, but it is hopefully straightforward if you adapt the project this addon comes with to your necessities. I'm currently stacked with work and other projects I need to finish, but I'll try creating a simple 2D example by the end of today (GMT-3, mind you). 
@Hansel-Dsilva @prsolucoes I just created a branch called "2d_example" with a fully functional but ugly 2D example. The idea is to use the left analog stick do control the character, and use the button on the right side to shoot a projectile. It's worth noting that, when using this plugin, you can't use 'normal' buttons; that's an unfortunate limitation caused by how Godot emulates clicks on touch-based devices: when emulating mouse-clicks, Godot won't be able to get multitouch gestures (which is the core of this addon). Since this option is disabled, you'd have to create your own kind of button getting a "gui_input" event, or using Godot's TouchScreenButton.
gharchive/issue
2020-02-29T04:57:28
2025-04-01T06:38:56.430308
{ "authors": [ "Hansel-Dsilva", "henriquelalves", "prsolucoes" ], "repo": "henriquelalves/GodotCharacterInputController", "url": "https://github.com/henriquelalves/GodotCharacterInputController/issues/2", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1639403436
Removing extrapolation in getinterp? Originally we had extrapolations evaluated to NaN for handling out-of-domain cases: https://github.com/henry2004y/TestParticle.jl/blob/66a8cbec9d976c06c4ab69c751183ea591ee8752/src/TestParticle.jl#L105 Now this is also handled by the isoutofdomain callback function provided by OrdinaryDiffEq.jl. We should test the gain of removing the extrapolations. With more boundary options like periodic, we need to keep the extrapolation methods.
gharchive/issue
2023-03-24T13:32:56
2025-04-01T06:38:56.432331
{ "authors": [ "henry2004y" ], "repo": "henry2004y/TestParticle.jl", "url": "https://github.com/henry2004y/TestParticle.jl/issues/68", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
429938021
Upgrade Envoy to v1.9.1 Updates #983 by updating the example deployment files to use Envoy v1.9.1 Signed-off-by: Steve Sloka slokas@vmware.com You can stay with the alpine image if you prefer. These deployments are just guides. On 8 Apr 2019, at 17:44, so0k notifications@github.com wrote: @so0k commented on this pull request. In deployment/ds-hostnet-split/03-envoy.yaml: @@ -33,7 +33,7 @@ spec: - node0 command: - envoy image: docker.io/envoyproxy/envoy-alpine:v1.9.0 image: docker.io/envoyproxy/envoy:v1.9.1 any reason why you're switching from alpine to ubuntu based envoy image for the hostnet deployment?
gharchive/pull-request
2019-04-05T21:15:37
2025-04-01T06:38:56.461279
{ "authors": [ "davecheney", "stevesloka" ], "repo": "heptio/contour", "url": "https://github.com/heptio/contour/pull/984", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
248858137
Don't mount /tmp as host volume in example pod We're encouraging using kubectl cp in the meantime; using host mounts was a temporary measure. This fixes an issue where the documented kubectl cp command is bringing in tarballs from previous runs. /lgtm oh I don't have permissions to stamp 🤷‍♀️ oh wait nvm, there just isn't a bot to recognize /lgtm I guess b/c we don't have turnkey validation in place I'll say ok-to-self merge on test verification.
gharchive/pull-request
2017-08-08T21:54:46
2025-04-01T06:38:56.463359
{ "authors": [ "abiogenesis-now", "kensimon", "timothysc" ], "repo": "heptio/sonobuoy", "url": "https://github.com/heptio/sonobuoy/pull/41", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
810478214
Is the project dead? Hello, I'm looking to use this project on some k8s but this project seems dead for about a year. Is there any people from Heptio Labs taking care of this project? I see a the current moment 11 PR opened during a year without a single answer from owners, and 30 issues. Please says if the project is dead, or will be some housekeeping. @timothysc @jbeda We are using it and it works well for us in porting the logs to s3. The description of the heptiolabs org (https://github.com/heptiolabs) now reads: "Former home of experimental projects from Heptio" I wonder if these projects had to be abandoned when Heptio was bought by VMware. Let me do the Devil's Advocate. 20 days - no official answers, no activities on the project. I agree this project is probably on hold, if not abandoned. Would love to hear from VMWare/former Heptio folks if they plan to pick it up or donate the project to the community, or if we should just fork it/reimplement it. This does indeed seem dead. I need the functionality that eventrouter provides for another project I'm working on. I'm going to be working on this fulltime for the near future. Please feel free to contact me if you need support/help. I've decided to base my efforts on Kubewatch and not EventRouter. Kubewatch is a little more maintained, but it also seems somewhat dormant since the VMWare-Bitnami acquisition. Kubewatch doesn't yet stream all the events that EventRouter does, so I'll start by adding all event types to the Kubewatch output. Feel free to let me know which other EventRouter features you need in Kubewatch. VMWare folks - I would love to merge my changes to Kubewatch/Eventrouter upstream. If not, I suppose I'll fork it, but that isn't my preference. @MatteoJoliveau @Elettronik what are your needs beyond what Kubewatch/EventRouter currently provide? How are you interested in using them? @aantn that seems like great news! Thank you for being willing to keep development going at least for now. 
I'll try to see if we can help as well. For us it's mainly getting Kubernetes events into something Grafana can display. Using EventRouter we were going to have Loki and Promtail scrape up the logs from stdout/stderr and then have them streamed to Grafana as JSON lines. This way developers could select the event stream they wanted using the same query language they already use for selecting application log streams. e.g. { app_kubernetes_io_name="my-app", stream="events" }(or something along those lines). But really any solution that allows us to collect and store events for later querying will do. Got it. I'm working on a more generic platform for running Python code as a result of Kubernetes changes/Prometheus alerts and automating common responses. It will be open sourced eventually, but for now it is still in private beta. One easy use-case, for example, is to add annotations to grafana so that you can see exactly when a new version of an application was updated and can quickly eyeball the difference in the performance before and after. For your use case, you're only interested in forwarding actual Kubernetes Events, right? In other words, if a pod is created/modified/dies, you don't need to forward that, but if there is a CrashLoopBackOff then you do need to forward it. Is that correct? Correct. The reason being that, especially now with Operators, k8s Event objects are a very useful tool to track changes and issues in k8s resources. Currently we have logs, metrics and traces centrally accessible in Grafana, but if a developer wants to debug a crash loop they have to manually run kubectl describe. Having them in Loki would allow for quick querying and alerting over them using common tools they are already used to Great, I'll update you when I have something you can use for that purpose. Looking forward to it, thanks @aantn! Maybe this is a bit OT, but... Is it possible to add metrics to the app? 
For example, first thing that pops into my mind: adding metrics for probe failures. Asking this because it would be nicer to have graphs around metrics in grafana than around logs. But maybe this is out of scope :D Maybe this is a bit OT, but... Is it possible to add metrics to the app? For example, first thing that pops into my mind: adding metrics for probe failures. Asking this because it would be nicer to have graphs around metrics in grafana than around logs. But maybe this is out of scope :D Yeah, this is the type of thing I'm working on. Can you help me understand exactly what you would like to achieve? I think you can already get probe failure metrics from (kubelet into prometheus)[https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/prober/prober_manager.go]. Do you want to enrich grafana with extra info? To run some remediation or enrichment steps? This while conversation is a little off topic, I guess. I think the eventrouter community could benefit from another open source project that is the spiritual successor of eventrouter, but if the discussion here bothers someone then let me know and I'll take it elsewhere. Maybe this is a bit OT, but... Is it possible to add metrics to the app? For example, first thing that pops into my mind: adding metrics for probe failures. Asking this because it would be nicer to have graphs around metrics in grafana than around logs. But maybe this is out of scope :D Yeah, this is the type of thing I'm working on. Can you help me understand exactly what you would like to achieve? I think you can already get probe failure metrics from kubelet into prometheus. Do you want to enrich grafana with extra info? To run some remediation or enrichment steps? I was not aware of such metrics! Thank you very much :D I'm close to releasing my open source project which supports many of the features that people want added to eventrouter. 
Can everyone on this thread let me know what they're actually using eventrouter for today or what they want to use it for? Are you sending events to slack for online notifications? Are you just logging changes to ELK so that you can troubleshoot when something goes wrong? What use cases does event router solve for you? @aantn Good to hear! I'm using eventrouter only for exporting k8s events to loki (and grafana), using promtail which tails the eventrouter container logs. So the only feature I'd be interested in is having all k8s events in the (eventrouter|your project) container logs. @varac do you have any interest in turning k8s events into grafana annotations? e.g. adding a dotted line to grafana whenever a deployment is updated and the image tags change? I'm using this to easily correlate upgrades w/ changes in CPU usage. Something like this: @aantn That looks great, sure that would be a good feature. Cool, I've already implemented that. I have a little more work before I release this, but it's coming along nicely. Let me know if there are more integrations you can think of which would be useful. I'm currently implementing two-way Slack integration. The typical usecase is something like this: There is a prometheus plert (e.g. low disk space on a persistent volume) The system sends a mesage to Slack with details and a recommended remediation (e.g. cleanup some logs and increase the volume size if it still is low on space) You click a button in Slack approving that remediation. The system receives your approval and executes some remediation commands I would like to use it to stream kubernetes events into a kafka topic so my system can make decisions based on some of these events (e.g. a job is finished, so do something). @cordoor we can do that already. can you reach out to me privately to discuss in more detail (either aantny@gmail.com or on linkedin here: https://www.linkedin.com/in/natanyellin/) hello @aantn I'm interested in your project.. 
Currently what my usecase involves is to stream changes occurring to a specific set of kubernetes objects such as pods that are part of a replicaset in a particular namespace into a kafka topic.. If you've a prototype or something in the works can you please point us to that so that we can start using it and communicate feedback and hopefully submit patches ourselves @aakarshg sure, send me an email and I'll send you the beta version. We are migrating to https://github.com/openshift/eventrouter @aantn - ditto on being interested in a maintained replacement. The grafana integration for annotations sounds great. @zswanson sorry about the delay! We've finally released the first version! https://docs.robusta.dev/master/ @aakarshg @cordoor @varac @antoniocascais @MatteoJoliveau might be relevant for you guys too If anyone wants to discuss, we're on Slack and happy to add features for anything you need. (Or just open a github issue) we've also reimplemented this as a Grafana Agent integration here - you can run a standalone Agent that only runs this integration as a drop-in eventrouter replacement. for now it supports shipping events directly to a Loki-compatible sink and solves the "duplicate events" bug that occurs when you restart eventrouter. from there you can create dashboard annotations and "metrics from logs" directly in Grafana. please file an issue in the Agent repo and ping me directly if you encounter any bugs or have feature suggestions! you may also want to checkout @joe-elliott's diff logger as well!
gharchive/issue
2021-02-17T19:39:24
2025-04-01T06:38:56.486233
{ "authors": [ "Elettronik", "Ghazgkull", "MatteoJoliveau", "aakarshg", "aantn", "alok87", "antoniocascais", "cordoor", "hjet", "sl4dy", "varac", "zswanson" ], "repo": "heptiolabs/eventrouter", "url": "https://github.com/heptiolabs/eventrouter/issues/126", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
663167839
Parameterized operators This PR adds binary64 and binary32 operators under the define-language! macro to replace the previous real / FPCore operators and constants. Herbie now parameterizes operators and constants (e.g. sin -> sin.f64 for binary64) to be representation specific (see uwplse/herbie#319). The list of operators is now annoyingly long. This should be fixed soon. Perhaps we should make the existing operators also work? This way you could use new-Egg-Herbie with old-Herbie. Oh duh. Backwards compatibility is a good idea. Fixed
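The representation-specific renaming described above (e.g. sin becoming sin.f64 for binary64) is easy to sketch. The following is a hypothetical Python illustration of the idea only, not egg-herbie's actual Racket code: operators get suffixed by representation, and bare names fall back to a default so older, unparameterized callers keep working (the backwards-compatibility point raised in the thread).

```python
# Hypothetical sketch of representation-parameterized operator names.
# The operator/representation sets here are illustrative, not egg-herbie's.

BASE_OPERATORS = {"sin", "cos", "exp", "+", "*"}
REPRESENTATIONS = {"f32", "f64"}

def parameterize(op, repr_name="f64"):
    """Map a bare operator name to its representation-specific form."""
    if repr_name not in REPRESENTATIONS:
        raise ValueError("unknown representation: %s" % repr_name)
    if op not in BASE_OPERATORS:
        raise ValueError("unknown operator: %s" % op)
    return "%s.%s" % (op, repr_name)

def resolve(name, default="f64"):
    """Split a possibly-suffixed operator into (base, representation).

    Bare names resolve against the default representation, which is one
    way to keep backwards compatibility with unsuffixed operator names.
    """
    base, dot, suffix = name.partition(".")
    if dot and suffix in REPRESENTATIONS:
        return (base, suffix)
    return (name, default)
```

With this scheme, resolve("sin") and resolve("sin.f64") both land on the binary64 operator, which is the compatibility behavior the thread asks for.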
gharchive/pull-request
2020-07-21T16:58:16
2025-04-01T06:38:56.489592
{ "authors": [ "bksaiki", "pavpanchekha" ], "repo": "herbie-fp/egg-herbie", "url": "https://github.com/herbie-fp/egg-herbie/pull/9", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
596551700
Add commit checker job and scripts A separate job must run only at the PR stage. The job verifies the commit message and fails if the requirements are not met. Requirements were added to the file scripts/misc/commit_message_recom.txt. Relates-To: OLPEDGE-1573 Signed-off-by: Yaroslav Stefinko ext-yaroslav.stefinko@here.com Codecov Report Merging #274 into master will not change coverage. The diff coverage is n/a.

@@ Coverage Diff @@
##           master    #274   +/- ##
====================================
  Coverage    90.3%   90.3%
====================================
  Files          57      57
  Lines        1622    1622
  Branches      194     194
====================================
  Hits         1464    1464
  Misses         89      89
  Partials       69      69

Continue to review the full report at Codecov. Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data. Powered by Codecov. Last update e34e81c...84d7df4.
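A commit-message check of the kind this PR adds can be sketched in a few lines. The concrete rules below (subject length, blank line after the subject, Relates-To and Signed-off-by trailers) are assumptions modeled on this PR's own commit message; the real requirements live in scripts/misc/commit_message_recom.txt and may differ.

```python
# Illustrative commit-message checker in the spirit of the job described
# above. The specific rules are assumptions, not the repo's actual ones.
import re

def check_commit_message(message):
    """Return a list of violations; an empty list means the message passes."""
    lines = message.splitlines()
    if not lines or not lines[0].strip():
        return ["empty commit message"]
    problems = []
    if len(lines[0]) > 72:
        problems.append("subject longer than 72 characters")
    if len(lines) > 1 and lines[1].strip():
        problems.append("missing blank line after subject")
    if not re.search(r"^Relates-To: \S+", message, re.M):
        problems.append("missing Relates-To trailer")
    if not re.search(r"^Signed-off-by: .+ <.+@.+>", message, re.M):
        problems.append("missing Signed-off-by trailer")
    return problems
```

A CI job would read the PR's commit messages, run each through a check like this, and exit non-zero when the returned list is non-empty.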
gharchive/pull-request
2020-04-08T12:47:42
2025-04-01T06:38:56.516098
{ "authors": [ "codecov-io", "ystefinko" ], "repo": "heremaps/here-olp-sdk-typescript", "url": "https://github.com/heremaps/here-olp-sdk-typescript/pull/274", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2447472081
Help me backend eon Help me backend eon start This issue is pretty common with JavaScript and TypeScript: you need to execute npm i (which installs all required packages) and npm run build (which compiles the TypeScript code to JavaScript) before running npm start (which is the command started by start.bat). npm i has to be executed just once, and npm run build should be executed after any change in the code.
gharchive/issue
2024-08-05T02:36:25
2025-04-01T06:38:56.518006
{ "authors": [ "4lxprime", "NuZiuki240Hz" ], "repo": "hereswhisper/EonBackend", "url": "https://github.com/hereswhisper/EonBackend/issues/3", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2450761438
logic update for QoS MOC mo.cfg Added logic to process N5 QoS requests. Used some variables to store session data in an HTABLE to use them later in the signaling (AppSessionID / RTP port / RTCP port / user, etc.). I think I need to modify the routing logic further: the N5 trigger should start with the INVITE and then PATCH/UPDATE it, as this will prevent creating new sessions on every response. The next thing will also be to check whether it is the same fork, but that is not so important in this basic setup, as we don't have forking here. Changed it now to trigger first on the initial INVITE and PATCH on the SDP answer. Still facing an issue where in the PFCP I see no update to the addresses in the request and the old ones from the initial request are still there.
gharchive/pull-request
2024-08-06T12:27:31
2025-04-01T06:38:56.519780
{ "authors": [ "NUCLEAR-WAR" ], "repo": "herlesupreeth/docker_open5gs", "url": "https://github.com/herlesupreeth/docker_open5gs/pull/350", "license": "BSD-2-Clause", "license_type": "permissive", "license_source": "github-api" }
55390172
Consider adapting rubysl-timeout Long ago, I rewrote timeout.rb so that it didn't require a 1-to-1 relationship between a thread and timeout thread. This meant that no matter the number of threads using timeouts, there was only one timeout thread. Given that rack-timeout uses that same 1-to-1 relationship, it is in danger of easily ballooning the number of threads used. Here is the code for my timeout.rb you could adapt: https://github.com/rubysl/rubysl-timeout/blob/2.0/lib/rubysl/timeout/timeout.rb You'd need remove the Rubinius::Channel stuff by using a ConditionVariable, but other than that, it should be easy to use. This is not a bad idea, but I wonder how much it's a problem in practice. A thread only lives as long as request. A lot of ruby web apps are single-threaded, and even the threaded ones don't tend to use many threads. Is creating and destroying threads expensive? @kch Creating threads is fairly expensive as far as operations go, yes. Another example is that people use rack-timeout with puma and I've seen people configure puma to use 2000 threads. Using rack-timeout in that case will result in an additional 2000 threads as well as constantly tearing down and creating new ones. Since you only want the ability call Thread.raise after a time, using the single, persistent timer Thread has significant upsides. Fair enough. We'll do this. Might take me a while to get to it though. @evanphx you define @mutex and @requests but don't seem to use them? @kch Yeah, that's obviously dead code. But you'll have to adapt this to run on MRI anyway, that uses a Rubinius::Channel which you don't have to coordinate between the threads. @evanphx right. was just pointing those out to you. @evanphx in case you're curious, see #82 Implemented in #82; will be in beta2, tracked via #78.
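The single-timer-thread design discussed above can be sketched with a condition variable in place of the Rubinius::Channel. The following is a hedged Python analogue of the idea only (rack-timeout is Ruby, and this is not its actual implementation): one persistent worker thread serves timeouts for any number of request threads, instead of spawning a watcher thread per request.

```python
# Sketch of a single persistent timer thread driven by a condition
# variable; many client threads register timeouts, one thread fires them.
import heapq
import threading
import time

class TimeoutScheduler:
    def __init__(self):
        self._cond = threading.Condition()
        self._heap = []   # entries are (deadline, sequence_no, callback)
        self._seq = 0     # tie-breaker so callbacks are never compared
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def call_later(self, delay, callback):
        """Register callback to fire roughly delay seconds from now."""
        with self._cond:
            heapq.heappush(self._heap,
                           (time.monotonic() + delay, self._seq, callback))
            self._seq += 1
            self._cond.notify()   # wake the timer thread to re-check

    def _run(self):
        while True:
            with self._cond:
                if not self._heap:
                    self._cond.wait()          # nothing scheduled: sleep
                    continue
                deadline, _, callback = self._heap[0]
                now = time.monotonic()
                if now < deadline:
                    # Sleep only until the earliest deadline; an earlier
                    # entry arriving meanwhile notifies and wakes us.
                    self._cond.wait(deadline - now)
                    continue
                heapq.heappop(self._heap)
            callback()                         # run outside the lock
```

In a rack-timeout-style adaptation, the callback would be something like a Thread#raise on the request thread; the key property is that registering a timeout never creates a new thread.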
gharchive/issue
2015-01-25T00:20:27
2025-04-01T06:38:56.544236
{ "authors": [ "evanphx", "kch" ], "repo": "heroku/rack-timeout", "url": "https://github.com/heroku/rack-timeout/issues/61", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1615962054
🛑 Software Center - Test 1 is down In d19b08d, Software Center - Test 1 ($SOFTWARECENTER_TEST_1) was down: HTTP code: 0 Response time: 0 ms Resolved: Software Center - Test 1 is back up in ce3585d.
gharchive/issue
2023-03-08T21:21:51
2025-04-01T06:38:56.550298
{ "authors": [ "herrphon" ], "repo": "herrphon/upptime", "url": "https://github.com/herrphon/upptime/issues/12364", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1658948594
🛑 Software Center - Test 1 is down In f81dbcb, Software Center - Test 1 ($SOFTWARECENTER_TEST_1) was down: HTTP code: 0 Response time: 0 ms Resolved: Software Center - Test 1 is back up in 68e5ce8.
gharchive/issue
2023-04-07T15:51:15
2025-04-01T06:38:56.552636
{ "authors": [ "herrphon" ], "repo": "herrphon/upptime", "url": "https://github.com/herrphon/upptime/issues/13591", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1730903911
🛑 Auth-Bridge - Test 1 is down In 41ef1e2, Auth-Bridge - Test 1 ($AUTH_BRIDGE_TEST_1) was down: HTTP code: 0 Response time: 0 ms Resolved: Auth-Bridge - Test 1 is back up in ee33488.
gharchive/issue
2023-05-29T14:56:54
2025-04-01T06:38:56.554872
{ "authors": [ "herrphon" ], "repo": "herrphon/upptime", "url": "https://github.com/herrphon/upptime/issues/16122", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1800452628
🛑 Auth-Bridge - Test 1 is down In 86424a5, Auth-Bridge - Test 1 ($AUTH_BRIDGE_TEST_1) was down: HTTP code: 0 Response time: 0 ms Resolved: Auth-Bridge - Test 1 is back up in 4787951.
gharchive/issue
2023-07-12T08:26:17
2025-04-01T06:38:56.557098
{ "authors": [ "herrphon" ], "repo": "herrphon/upptime", "url": "https://github.com/herrphon/upptime/issues/18195", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1900336651
🛑 Auth-Bridge - Test 1 is down In d8255bc, Auth-Bridge - Test 1 ($AUTH_BRIDGE_TEST_1) was down: HTTP code: 0 Response time: 0 ms Resolved: Auth-Bridge - Test 1 is back up in 24a0dec after 10 minutes.
gharchive/issue
2023-09-18T07:31:22
2025-04-01T06:38:56.559458
{ "authors": [ "herrphon" ], "repo": "herrphon/upptime", "url": "https://github.com/herrphon/upptime/issues/21650", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1947262794
🛑 Auth-Bridge - Test 1 is down In 91914ab, Auth-Bridge - Test 1 ($AUTH_BRIDGE_TEST_1) was down: HTTP code: 0 Response time: 0 ms Resolved: Auth-Bridge - Test 1 is back up in 7ef6d71 after 34 minutes.
gharchive/issue
2023-10-17T12:09:13
2025-04-01T06:38:56.561675
{ "authors": [ "herrphon" ], "repo": "herrphon/upptime", "url": "https://github.com/herrphon/upptime/issues/23133", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2016155799
🛑 Software Center - Test 1 is down In 215d7ab, Software Center - Test 1 ($SOFTWARECENTER_TEST_1) was down: HTTP code: 0 Response time: 0 ms Resolved: Software Center - Test 1 is back up in 03d6dfe after 54 minutes.
gharchive/issue
2023-11-29T09:22:43
2025-04-01T06:38:56.564129
{ "authors": [ "herrphon" ], "repo": "herrphon/upptime", "url": "https://github.com/herrphon/upptime/issues/25237", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2063804628
🛑 Software Center - Test 1 is down In 17f6daf, Software Center - Test 1 ($SOFTWARECENTER_TEST_1) was down: HTTP code: 0 Response time: 0 ms Resolved: Software Center - Test 1 is back up in c2ae459 after 21 minutes.
gharchive/issue
2024-01-03T11:13:17
2025-04-01T06:38:56.566359
{ "authors": [ "herrphon" ], "repo": "herrphon/upptime", "url": "https://github.com/herrphon/upptime/issues/26971", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2074878489
🛑 Software Center - Test 1 is down In 725a4bf, Software Center - Test 1 ($SOFTWARECENTER_TEST_1) was down: HTTP code: 0 Response time: 0 ms Resolved: Software Center - Test 1 is back up in 418a0eb after 10 minutes.
gharchive/issue
2024-01-10T17:48:22
2025-04-01T06:38:56.568586
{ "authors": [ "herrphon" ], "repo": "herrphon/upptime", "url": "https://github.com/herrphon/upptime/issues/27329", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2094883320
🛑 Auth-Bridge - Test 1 is down In 0bc9e59, Auth-Bridge - Test 1 ($AUTH_BRIDGE_TEST_1) was down: HTTP code: 0 Response time: 0 ms Resolved: Auth-Bridge - Test 1 is back up in 9193177 after 10 minutes.
gharchive/issue
2024-01-22T22:28:37
2025-04-01T06:38:56.570849
{ "authors": [ "herrphon" ], "repo": "herrphon/upptime", "url": "https://github.com/herrphon/upptime/issues/27897", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2312533699
🛑 Software Center - Test 1 is down In c156800, Software Center - Test 1 ($SOFTWARECENTER_TEST_1) was down: HTTP code: 0 Response time: 0 ms Resolved: Software Center - Test 1 is back up in c976bec after 23 minutes.
gharchive/issue
2024-05-23T10:24:38
2025-04-01T06:38:56.573070
{ "authors": [ "herrphon" ], "repo": "herrphon/upptime", "url": "https://github.com/herrphon/upptime/issues/33994", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2323657044
🛑 Auth-Bridge - Test 1 is down In ad2324b, Auth-Bridge - Test 1 ($AUTH_BRIDGE_TEST_1) was down: HTTP code: 0 Response time: 0 ms Resolved: Auth-Bridge - Test 1 is back up in 835020e after 18 minutes.
gharchive/issue
2024-05-29T15:50:30
2025-04-01T06:38:56.575530
{ "authors": [ "herrphon" ], "repo": "herrphon/upptime", "url": "https://github.com/herrphon/upptime/issues/34276", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2324908220
🛑 Software Center - Test 1 is down In b55f215, Software Center - Test 1 ($SOFTWARECENTER_TEST_1) was down: HTTP code: 0 Response time: 0 ms Resolved: Software Center - Test 1 is back up in 3043448 after 22 minutes.
gharchive/issue
2024-05-30T07:24:03
2025-04-01T06:38:56.577971
{ "authors": [ "herrphon" ], "repo": "herrphon/upptime", "url": "https://github.com/herrphon/upptime/issues/34304", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2342544643
🛑 Auth-Bridge - Test 1 is down In d76e8fc, Auth-Bridge - Test 1 ($AUTH_BRIDGE_TEST_1) was down: HTTP code: 0 Response time: 0 ms Resolved: Auth-Bridge - Test 1 is back up in b80c98f after 26 minutes.
gharchive/issue
2024-06-09T23:15:21
2025-04-01T06:38:56.580254
{ "authors": [ "herrphon" ], "repo": "herrphon/upptime", "url": "https://github.com/herrphon/upptime/issues/34808", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1265631578
🛑 Software Center - Test 1 is down In 8b4a7f3, Software Center - Test 1 ($SOFTWARECENTER_TEST_1) was down: HTTP code: 0 Response time: 0 ms Resolved: Software Center - Test 1 is back up in 003f2f5.
gharchive/issue
2022-06-09T05:58:02
2025-04-01T06:38:56.582385
{ "authors": [ "herrphon" ], "repo": "herrphon/upptime", "url": "https://github.com/herrphon/upptime/issues/4435", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1270324971
🛑 Auth-Bridge - Test 1 is down In 798ca8e, Auth-Bridge - Test 1 ($AUTH_BRIDGE_TEST_1) was down: HTTP code: 0 Response time: 0 ms Resolved: Auth-Bridge - Test 1 is back up in c145a53.
gharchive/issue
2022-06-14T06:37:50
2025-04-01T06:38:56.584821
{ "authors": [ "herrphon" ], "repo": "herrphon/upptime", "url": "https://github.com/herrphon/upptime/issues/4635", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1303591689
🛑 Auth-Bridge - Test 1 is down In 55090e5, Auth-Bridge - Test 1 ($AUTH_BRIDGE_TEST_1) was down: HTTP code: 0 Response time: 0 ms Resolved: Auth-Bridge - Test 1 is back up in 438c973.
gharchive/issue
2022-07-13T15:18:44
2025-04-01T06:38:56.587245
{ "authors": [ "herrphon" ], "repo": "herrphon/upptime", "url": "https://github.com/herrphon/upptime/issues/5639", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1305822852
🛑 Software Center - Test 1 is down In 1cd1011, Software Center - Test 1 ($SOFTWARECENTER_TEST_1) was down: HTTP code: 0 Response time: 0 ms Resolved: Software Center - Test 1 is back up in 8f6958d.
gharchive/issue
2022-07-15T09:48:01
2025-04-01T06:38:56.589491
{ "authors": [ "herrphon" ], "repo": "herrphon/upptime", "url": "https://github.com/herrphon/upptime/issues/5703", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1376727358
🛑 Software Center - Test 1 is down In c372421, Software Center - Test 1 ($SOFTWARECENTER_TEST_1) was down: HTTP code: 0 Response time: 0 ms Resolved: Software Center - Test 1 is back up in c461b62.
gharchive/issue
2022-09-17T09:59:56
2025-04-01T06:38:56.591693
{ "authors": [ "herrphon" ], "repo": "herrphon/upptime", "url": "https://github.com/herrphon/upptime/issues/7664", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1384757203
🛑 Software Center - Test 1 is down In 8d52f8b, Software Center - Test 1 ($SOFTWARECENTER_TEST_1) was down: HTTP code: 0 Response time: 0 ms Resolved: Software Center - Test 1 is back up in b2e065d.
gharchive/issue
2022-09-24T17:40:00
2025-04-01T06:38:56.593911
{ "authors": [ "herrphon" ], "repo": "herrphon/upptime", "url": "https://github.com/herrphon/upptime/issues/7820", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1429712841
🛑 Software Center - Test 1 is down In 2e81a3d, Software Center - Test 1 ($SOFTWARECENTER_TEST_1) was down: HTTP code: 0 Response time: 0 ms Resolved: Software Center - Test 1 is back up in d9b1c98.
gharchive/issue
2022-10-31T11:45:48
2025-04-01T06:38:56.596157
{ "authors": [ "herrphon" ], "repo": "herrphon/upptime", "url": "https://github.com/herrphon/upptime/issues/8585", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2736545210
client-fetch 0.5.3 breaks TanStack Query plugin Description After updating @hey-api/client-fetch to 0.5.3, requests using the TanStack Query plugin do not get sent at all. Possibly caused by https://github.com/hey-api/openapi-ts/commit/646064d1aecea988d2b4df73bd24b2ee83394ae0 To reproduce (using the StackBlitz example below): click "Generate random pet" -> no request is sent change version to 0.5.2 -> request is sent Reproducible example or configuration https://stackblitz.com/edit/hey-api-client-fetch-plugin-tanstack-react-quer-babtkanp?file=package.json OpenAPI specification (optional) 3.1.0 System information (optional) all systems Sorry for that! Fixed in the latest
gharchive/issue
2024-12-12T17:51:45
2025-04-01T06:38:56.724070
{ "authors": [ "TruffleClock", "mrlubos" ], "repo": "hey-api/openapi-ts", "url": "https://github.com/hey-api/openapi-ts/issues/1423", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1154904222
Add and configure jest It would be cool to be able to write tests against PRs for this project and I noticed jest isn't set up yet! Happy to take some time to look into this one if somebody doesn't get to it before me 👍🏼 Yep unit testing is definitely something I've been meaning to get to now that there's logic around the config / css generation. Jest is a good option, and I'm partial to uvu as well, but not that fussed about what we go with; they're all much of a muchness. Sounds good – I'm not too sold on Jest anyway, it's a bit of a pain to set up with ESM/TypeScript. Perhaps Vitest is an option too? 👀
gharchive/issue
2022-03-01T05:57:14
2025-04-01T06:38:56.726388
{ "authors": [ "madeleineostoja", "shannonrothe" ], "repo": "heybokeh/pollen", "url": "https://github.com/heybokeh/pollen/issues/67", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
162775768
19 - Hybrid method, using InetMgr IIS module and based on 10 & 16 MS fixes, works from Windows 7 up to 10rs1 14372. It does not work after that build ((( Very informative post, thanks. And it is fixed in 14376, forget about it.
gharchive/issue
2016-06-28T20:12:36
2025-04-01T06:38:56.727536
{ "authors": [ "elnat", "hfiref0x" ], "repo": "hfiref0x/UACME", "url": "https://github.com/hfiref0x/UACME/issues/6", "license": "bsd-2-clause", "license_type": "permissive", "license_source": "bigquery" }
131395124
Part 2, A pairwise correlation example Hi, Harry! How is your quantitative feeling? =) I have a question about the content, as a young quant. On page 37 you mention the csv file, but you have not created it, so where do you get the information? You have described the basic operations (assignment, vectors, lists, matrices and data structures), but why did you omit the issue of exporting summary data in the form described on page 38? I need help! What is this? extract_prices <- function(filtered_symbols, file_path = rfortraders / Chapter_02 / prices.csv) { all_prices <- read.csv(file = file_path, header = TRUE, stringsAsFactors = FALSE) rownames(all_prices) <- all_prices$Date all_prices$Date <- NULL valid_columns <- colnames(all_prices) %in% filtered_symbols return(all_prices[, valid_columns]) } extract_prices result extract_prices function(filtered_symbols, file_path = rfortraders / Chapter_02 / prices.csv) { all_prices <- read.csv(file = file_path, header = TRUE, stringsAsFactors = FALSE) rownames(all_prices) <- all_prices$Date all_prices$Date <- NULL valid_columns <- colnames(all_prices) %in% filtered_symbols return(all_prices[, valid_columns]) } I found something: urlfile <- 'https://raw.githubusercontent.com/hgeorgako/rfortraders/master/Chapter_02/prices.csv' dsin <- read.csv ( urlfile, getOption("max.print")) The problem is the 1000-row limit: [ reached getOption("max.print") -- omitted 856 rows ] Hopefully you already got this to work. The file you need is here: https://github.com/hgeorgako/rfortraders/blob/master/Chapter_02/prices.csv The fastest way to get this is to download the file to your local computer and then use read.csv(file = "path on your local machine") to read in the data. Another useful package that I use all the time to read in .csv files is "readr". Once you install it, you can say: readr::read_csv(filepath) Oh Harry, I'm really glad you answered my questions! I thought that you rarely come here, and I was sad.
I figured out how to load data into R from the site, but why the limit of 1000 lines? I will try your method! Thanks for the answer!
gharchive/issue
2016-02-04T16:09:20
2025-04-01T06:38:56.735240
{ "authors": [ "Djafar1985", "hgeorgako" ], "repo": "hgeorgako/rfortraders", "url": "https://github.com/hgeorgako/rfortraders/issues/3", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
64536575
pyvisa 1.6.1 can not set stopbits Hi, I have a problem with pyvisa 1.6.1 and the RS232 settings. My device requires two stop bits, but the Python syntax device.stop_bits = 2 returns an error: "ValueError: 2 is an invalid value for attribute VI_ATTR_ASRL_STOP_BITS, should be a <enum 'StopBits'>".
You should use the constants. In the current dev version:
from pint import constants
device.stop_bits = constants.VI_ASRL_STOP_TWO
or
from pint import constants
device.stop_bits = constants.StopBits.two
In any case, we should improve the error message.
I installed the pint 0.6 version. Now the syntax "from pint import constants" returns an error "ImportError: cannot import name constants". Did I install the wrong pint version?
Sorry, it should have said from visa import constants or (in older pyvisa versions) from pyvisa import constants.
Still does not work:
File "C:/Workspaces/spyder/test/helloagilent.py", line 2, in
from visa import constants
ImportError: cannot import name constants
As mentioned before, from visa only works in more recent versions of pyvisa. I am closing this. Feel free to reopen.
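The ValueError above comes from the attribute being typed as an enum rather than a plain integer. A minimal stand-alone sketch of that behavior, using a stand-in enum and setter rather than pyvisa's actual classes:

```python
from enum import IntEnum

class StopBits(IntEnum):
    """Stand-in for pyvisa's StopBits enum; the real member values may differ."""
    one = 10
    one_and_a_half = 15
    two = 20

def set_stop_bits(value):
    """Mimic an enum-checked attribute setter: a raw int is rejected."""
    if not isinstance(value, StopBits):
        raise ValueError(
            f"{value} is an invalid value for attribute VI_ATTR_ASRL_STOP_BITS, "
            f"should be a {StopBits!r}"
        )
    return value

# set_stop_bits(2) raises ValueError, because 2 is an int, not a StopBits member.
# The enum member is accepted:
accepted = set_stop_bits(StopBits.two)
```

This is why device.stop_bits = 2 fails while device.stop_bits = constants.StopBits.two works.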
gharchive/issue
2015-03-26T14:14:26
2025-04-01T06:38:56.743152
{ "authors": [ "StachowP", "hgrecco" ], "repo": "hgrecco/pyvisa", "url": "https://github.com/hgrecco/pyvisa/issues/127", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
504239177
Fix FutureWarning for missing label Passing list-likes to .loc or [] with any missing label will raise KeyError in the future; you can use .reindex() as an alternative. See the documentation here: https://pandas.pydata.org/pandas-docs/stable/indexing.html#deprecate-loc-reindex-listlike
return self._getitem_tuple(key)
Fixed in 6a8ecfb.
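The replacement the warning suggests can be sketched on an ordinary labeled Series (an illustration only, not this project's actual code): .loc with a missing label raises, while .reindex keeps the requested labels and fills missing ones with NaN.

```python
import pandas as pd

s = pd.Series([1, 2, 3], index=["a", "b", "c"])

# s.loc[["a", "z"]] raises KeyError in modern pandas because "z" is missing.
# .reindex returns a row for every requested label, NaN where it was absent:
subset = s.reindex(["a", "z"])
```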
gharchive/issue
2019-10-08T19:41:29
2025-04-01T06:38:56.745124
{ "authors": [ "jekwatt" ], "repo": "hgsc-project-managers/pm-utils", "url": "https://github.com/hgsc-project-managers/pm-utils/issues/13", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
175452315
No issues displayed on vb.net Is the plugin intended to work with VB.NET? I tried two projects, one C# and one VB.NET. Only the first seems to work.
It only supports C#, I'm afraid. I'll update the readme to point this out.
gharchive/issue
2016-09-07T09:15:22
2025-04-01T06:38:56.749757
{ "authors": [ "Matchile", "citizenmatt" ], "repo": "hhariri/CleanCode", "url": "https://github.com/hhariri/CleanCode/issues/13", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
94619953
1.13 I'm still working on changes for the other addons, but I figured that it would be good to start pushing as I have them ready so I can have early feedback. This PR upgrades the addons to ember-cli 1.13.1, but keeps ember 0.12.0 and ember-data 1.0.0-beta.18. The changes pave the way towards 2.0.
core: default: PASS, ember-1.13.0: PASS, ember-ember-data-1.13.0: FAIL, ember-release: PASS, ember-beta: FAIL, ember-canary: FAIL
storefront: default: PASS, ember-1.13.0: PASS, ember-ember-data-1.13.0: FAIL, ember-release: PASS, ember-beta: FAIL, ember-canary: FAIL
auth: default: PASS, ember-1.13.0: FAIL, ember-ember-data-1.13.0: FAIL, ember-release: FAIL, ember-beta: FAIL, ember-canary: FAIL
checkouts: default: PASS, ember-1.13.0: PASS, ember-ember-data-1.13.0: FAIL, ember-release: PASS, ember-beta: FAIL, ember-canary: FAIL
Will ember-1.11.0 be supported?
I don't see a strong reason for that.
Yo @givanse you're the fucking MAN Comments coming shortly
Only auth has failing tests for 1.13.0. It eludes me at the moment and right now I need to get other stuff done :/
I'll have a look at Auth! Just haven't got around to bumping the version of simple auth yet - it was in beta when 0.0.1 was released
@givanse - I should have time to work on this this week - can I have push for your fork? I'll do all my work on this branch
@hhff Yes, added you.
thanks @givanse !
@hhff Did you update the backend that is used when the tests are run? (http://testing.spree-ember.com)
@givanse - I haven't - it's been the same for a while. Does it need a change?
Some tests that were ok before are failing now. Simple things like: was expecting 'Bags', actual result 'Mugs'. Maybe a sorting change in the taxons? I fixed (edited for Mugs) and four more errors appeared. Maybe the seed data was edited? I didn't spend much time looking into this, decided to wait for a reply. If nothing has changed, I'll check what is going on.
Yeah, I haven't changed it at all - but I'm guessing it's just an ordering thing - so if the tests explicitly "clicked" the Mugs label rather than the first item, we should be cool.
Thank you so so much for working on this @givanse u are the mannnnn
gharchive/pull-request
2015-07-13T01:56:40
2025-04-01T06:38:56.759854
{ "authors": [ "givanse", "hhff" ], "repo": "hhff/spree-ember", "url": "https://github.com/hhff/spree-ember/pull/89", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1643636695
Exporting SQL statements via Archery renders NULL as the string '(null)', which is a problem
Steps to reproduce:
1. The business team runs a query via SQL query, then exports the result as SQL statements; the NULL keyword comes out as '(null)'.
2. The developers then insert that data into a string column, and the stored value is (null) rather than NULL; the expectation is the bare NULL keyword.
Unexpected result: the queried data shows (null), the exported INSERT statements contain it as a string, and inserting that data into a string column stores the literal (null).
Log text: No response
Version: 1.7.2
Deployment method: manual deployment
Any other information that could help locate the problem, e.g. database version? No response
Can't you just replace it manually? See:
https://github.com/hhyo/Archery/issues/401
https://github.com/hhyo/Archery/issues/488
https://github.com/hhyo/Archery/commit/9042b9d722e1723dfe997f02377fbf8f260eb50f
> Can't you just replace it manually? See:
> [Question] NULL values displayed as - in online queries; some field values are themselves -. #401
> [Bug] V1.3.7 and 1.3.8 have a display issue where NULL is not shown correctly in queries and in SQL file exports #488
> 9042b9d
Manual replacement works, but the main issue is that after exporting the SQL you can't thoroughly check it; it takes time and is easy to forget. Doing this verification by hand is too risky, so we hope the export produces a correct NULL in the first place.
Please look at the two issues I listed; there are different needs here. Some people want it displayed as empty, but that could be confused with an empty string; others want it displayed as null, but that could be confused with the string null. In the end our solution was (null), rendered in gray.
Besides, Archery itself is not a data export tool; it is simply a query tool, and you really shouldn't rely on the data export it provides.
Concrete proposals for this are welcome, and so are PRs.
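The behavior being requested, emitting a bare NULL keyword instead of a quoted string when building INSERT statements, can be sketched like this (an illustration only, not Archery's actual export code; all names are made up):

```python
def sql_literal(value):
    """Render one Python value as a SQL literal for an INSERT statement.

    None becomes the bare NULL keyword; strings are single-quoted with
    embedded quotes doubled, so the literal string "(null)" remains
    distinguishable from a real NULL.
    """
    if value is None:
        return "NULL"
    if isinstance(value, (int, float)):
        return str(value)
    escaped = str(value).replace("'", "''")
    return f"'{escaped}'"

def render_insert(table, columns, row):
    """Build one INSERT statement from a row of Python values."""
    values = ", ".join(sql_literal(v) for v in row)
    cols = ", ".join(columns)
    return f"INSERT INTO {table} ({cols}) VALUES ({values});"

stmt = render_insert("t", ["a", "b"], ["(null)", None])
# stmt == "INSERT INTO t (a, b) VALUES ('(null)', NULL);"
```

The key point is that the NULL decision is made from the value's type (None), not from its display string, so a column whose displayed value happens to be "(null)" is still exported as a quoted string.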
gharchive/issue
2023-03-28T10:18:09
2025-04-01T06:38:56.814989
{ "authors": [ "LeoQuote", "bardyang" ], "repo": "hhyo/Archery", "url": "https://github.com/hhyo/Archery/issues/2096", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
718016472
Add an application link when the user has no query permission
Please rebase onto master; master has been updated.
How do I do that? Delete my repository first, then fork a fresh copy, make my changes again, and open a new PR?
gharchive/pull-request
2020-10-09T09:58:56
2025-04-01T06:38:56.816548
{ "authors": [ "LeoQuote", "dongqianzheng" ], "repo": "hhyo/Archery", "url": "https://github.com/hhyo/Archery/pull/904", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1646944513
Installation via npx does not work anymore
npx degit zynth17/vitailse my-vitailse-app
leads to
could not download https://github.com/zynth17/vitailse/archive/8005a59cc665b2cdbc21506a598080cd35cebcf4.tar.gz
Started working again after a while; not sure what the issue was.
gharchive/issue
2023-03-30T05:30:52
2025-04-01T06:38:56.817663
{ "authors": [ "supa-freak" ], "repo": "hi-reeve/vitailse", "url": "https://github.com/hi-reeve/vitailse/issues/250", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
684084702
Please consider splitting out tree formatting from the command-line utility I'd love to have a library crate for formatting trees in the style of tree. Would you consider splitting out the tree rendering from ruut into a library crate, without the dependencies like structopt and serde_json that would only be used for the command-line tool?
This has finally been completed in a5902d62812b4c671ef0e368415dae09b8dddd00. See https://github.com/hibachrach/render_as_tree. Thanks for filing this!
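The kind of rendering the issue asks to expose as a library, tree-style branch drawing for a nested structure, fits in a few lines. This is a Python sketch of the output format only; it is unrelated to the Rust crate's actual API:

```python
def render_tree(name, children):
    """Render a root name plus (label, children) pairs with tree-style branches."""
    lines = [name]

    def walk(nodes, prefix):
        for i, (label, kids) in enumerate(nodes):
            last = i == len(nodes) - 1
            branch = "└── " if last else "├── "
            lines.append(prefix + branch + label)
            # Children of a non-last node keep a vertical rule in their prefix.
            walk(kids, prefix + ("    " if last else "│   "))

    walk(children, "")
    return "\n".join(lines)

tree = render_tree("root", [("a", [("a1", [])]), ("b", [])])
```

For the input above, tree renders root with a branch to a (which has one child a1) and a final branch to b, matching the familiar output of the Unix tree command.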
gharchive/issue
2020-08-22T23:29:18
2025-04-01T06:38:56.819419
{ "authors": [ "hibachrach", "joshtriplett" ], "repo": "hibachrach/ruut", "url": "https://github.com/hibachrach/ruut/issues/8", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }