| added (string, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | created (timestamp[us], 2001-10-09 16:19:16 to 2025-01-01 03:51:31) | id (string, length 4 to 10) | metadata (dict) | source (string, 2 classes) | text (string, length 0 to 1.61M) |
|---|---|---|---|---|---|
2025-04-01T06:38:54.470692
| 2023-01-31T23:08:01
|
1565130694
|
{
"authors": [
"hamboneZA"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6635",
"repo": "hamboneZA/caffeine",
"url": "https://github.com/hamboneZA/caffeine/issues/5297"
}
|
gharchive/issue
|
⚠️ ClamAV has degraded performance
In 5a7d21c, ClamAV (https://spamassassin.apache.org/) experienced degraded performance:
HTTP code: 200
Response time: 88 ms
Resolved: ClamAV performance has improved in f84d4f1.
|
2025-04-01T06:38:54.473110
| 2023-02-06T00:33:52
|
1571690881
|
{
"authors": [
"hamboneZA"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6636",
"repo": "hamboneZA/caffeine",
"url": "https://github.com/hamboneZA/caffeine/issues/5412"
}
|
gharchive/issue
|
⚠️ ClamAV has degraded performance
In 9c51126, ClamAV (https://spamassassin.apache.org/) experienced degraded performance:
HTTP code: 200
Response time: 179 ms
Resolved: ClamAV performance has improved in f3f99d9.
|
2025-04-01T06:38:54.475658
| 2023-03-04T11:20:23
|
1609731382
|
{
"authors": [
"hamboneZA"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6637",
"repo": "hamboneZA/caffeine",
"url": "https://github.com/hamboneZA/caffeine/issues/5943"
}
|
gharchive/issue
|
⚠️ ClamAV has degraded performance
In 906858c, ClamAV (https://spamassassin.apache.org/) experienced degraded performance:
HTTP code: 200
Response time: 68 ms
Resolved: ClamAV performance has improved in 64537ab.
|
2025-04-01T06:38:54.478022
| 2023-05-27T16:53:40
|
1728834052
|
{
"authors": [
"hamboneZA"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6638",
"repo": "hamboneZA/caffeine",
"url": "https://github.com/hamboneZA/caffeine/issues/7943"
}
|
gharchive/issue
|
⚠️ ClamAV has degraded performance
In 617146e, ClamAV (https://spamassassin.apache.org/) experienced degraded performance:
HTTP code: 200
Response time: 64 ms
Resolved: ClamAV performance has improved in 8999f2e.
|
2025-04-01T06:38:54.480375
| 2022-05-14T21:39:56
|
1236126713
|
{
"authors": [
"hamboneZA"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6639",
"repo": "hamboneZA/caffeine",
"url": "https://github.com/hamboneZA/caffeine/issues/875"
}
|
gharchive/issue
|
⚠️ ClamAV has degraded performance
In 4d07316, ClamAV (https://spamassassin.apache.org/) experienced degraded performance:
HTTP code: 200
Response time: 68 ms
Resolved: ClamAV performance has improved in 123ec36.
|
2025-04-01T06:38:54.482694
| 2023-08-09T03:46:43
|
1842407529
|
{
"authors": [
"hamboneZA"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6640",
"repo": "hamboneZA/caffeine",
"url": "https://github.com/hamboneZA/caffeine/issues/9809"
}
|
gharchive/issue
|
⚠️ ClamAV has degraded performance
In fd872d4, ClamAV (https://spamassassin.apache.org/) experienced degraded performance:
HTTP code: 200
Response time: 480 ms
Resolved: ClamAV performance has improved in 2b4f5d6.
|
2025-04-01T06:38:54.487780
| 2019-10-19T17:19:51
|
509482780
|
{
"authors": [
"cannikin",
"zwl1619"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6641",
"repo": "hammerframework/example-blog",
"url": "https://github.com/hammerframework/example-blog/issues/2"
}
|
gharchive/issue
|
Could you create an example that has authentication and authorization?
Don't use Auth0.
Yup, it’s coming soon! I’m going to use Netlify Identity.
Looking forward to it.
|
2025-04-01T06:38:54.489125
| 2019-08-19T14:06:31
|
482333247
|
{
"authors": [
"Niko-Kk",
"squadette"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6642",
"repo": "hammerjs/hammer.js",
"url": "https://github.com/hammerjs/hammer.js/pull/1228"
}
|
gharchive/pull-request
|
Updated to not error out on undefined window and document
Allows hammer to be used alongside webworkers. See rollup.config.js for full changes - rest is just rebuild of files.
This has been applied in proposed 2.1.0: https://github.com/squadette/hammer.js/issues/1
Thank you,
|
2025-04-01T06:38:54.499878
| 2016-09-22T19:02:30
|
178693837
|
{
"authors": [
"coveralls",
"ihodes",
"julia326",
"timodonnell"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6643",
"repo": "hammerlab/vaxrank",
"url": "https://github.com/hammerlab/vaxrank/pull/18"
}
|
gharchive/pull-request
|
Adding vaxrank HTML/PDF output, with some associated changes
refactored report generation to create a dictionary of values which then gets fed into ASCII/HTML/whatever template
added an HTML template and an ASCII template that's very close to previous output (only difference should be whitespace)
added report outputs to .gitignore
updated some requirements files
Fixes #3 - also see PDF for what this looks like
vaccine-peptides-report.pdf
If you rebase this on master, does Travis still fail? It's related to the recent biotypes changes @iskandr just made.
Just a random thought in case it turns out to be related to the tempfile
issue: if you are using tempfile.NamedTemporaryFile and haven't closed the
file handle after writing to it, you'll want to call .flush() on the handle
before passing its name to another tool. Otherwise the output may not have
made it into the file yet due to buffering.
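For illustration, here is a minimal Python sketch of the flush pattern described above; the file contents and the external command (wkhtmltopdf, which pdfkit wraps) are assumptions for the example, not the actual vaxrank code:

import subprocess
import tempfile

# Keep the handle open while another tool reads the file by name;
# flush() pushes buffered data to disk before the name is handed off.
with tempfile.NamedTemporaryFile(mode="w", suffix=".html", delete=False) as f:
    f.write("<html>report body</html>")
    f.flush()
    # Without the flush() above, the external tool may see a partial or empty file.
    subprocess.run(["wkhtmltopdf", f.name, "report.pdf"], check=True)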
On Fri, Sep 23, 2016 at 11:51 AM, Julia Kodysh<EMAIL_ADDRESS>wrote:
@julia326 commented on this pull request.
In vaxrank/report.py https://github.com/hammerlab/vaxrank/pull/18:
variants,
bam_path,
html_report_path,
pdf_report_path=None):
template = JINJA_ENVIRONMENT.get_template('templates/template.html')
template_data = compute_template_data(
ranked_variants_with_vaccine_peptides,
mhc_alleles,
variants,
bam_path)
with open(html_report_path, "w") as f:
f.write(template.render(template_data))
+def make_pdf_report(
template_data,
pdf_report_path):
path = "%s.html" % uuid.uuid4()
I tried to make it work with tempfile first, but pdfkit didn't seem to
like reading from a temp HTML file. I'll play around with it some more,
probably a weird permissions thing
@timodonnell that was indeed an issue, thank you!
LGTM! Merge away 👏 First PR = major need for the pipeline.
Coverage decreased (-6.8%) to 45.594% when pulling 453dbfadb9a98565323c394717b3c5ff5865aa3d on vaxrank-html into e2a1d9458f68997b3ad17d755a7b04ddab422622 on master.
|
2025-04-01T06:38:54.530956
| 2021-11-27T07:41:33
|
1064930248
|
{
"authors": [
"1725917163",
"hanchaoleng"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6644",
"repo": "hanchaoleng/ShapeConv",
"url": "https://github.com/hanchaoleng/ShapeConv/issues/3"
}
|
gharchive/issue
|
Thank you very much for your work. I am very interested in your work. I hope you can help explain the meaning of this sentence in the code. The code is in the script 'test_runner.py' at line 56, as follows:
pred_rgb[label == 255] = np.array((0, 0, 0))
Thank you for your attention. pred_rgb[label == 255] = np.array((0, 0, 0)) is used to visualize the unlabeled pixels; note that unlabeled pixels are not involved in the calculation of the final result.
Thank you very much for taking some time out of your busy schedule to answer my questions
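For context, a minimal numpy sketch of the masking idiom discussed above; the array shapes are assumptions for illustration, not taken from test_runner.py:

import numpy as np

pred_rgb = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.uint8)  # toy prediction visualization
label = np.zeros((4, 4), dtype=np.uint8)
label[0, 0] = 255  # mark one pixel as unlabeled

# The boolean mask selects the unlabeled pixels; broadcasting paints them black.
pred_rgb[label == 255] = np.array((0, 0, 0))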
|
2025-04-01T06:38:54.597725
| 2020-12-09T09:17:24
|
760152395
|
{
"authors": [
"hanno-arm"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6645",
"repo": "hannestschofenig/mbedtls",
"url": "https://github.com/hannestschofenig/mbedtls/issues/82"
}
|
gharchive/issue
|
Fix server-side state machine for 0-RTT
The server switches to receiving 0-RTT immediately after receiving a ClientHello indicating that the client wants to use 0-RTT.
This approach and its current implementation have two issues:
It does not match the state machine from the Spec, which mandates that the Server should look for 0-RTT only after sending its Finished message.
The current implementation doesn't allow multiple 0-RTT records.
The state machine should be reworked to switch to receiving 0-RTT after the server has sent its Finished message. Moreover, in order to accommodate an arbitrary number of 0-RTT records, the 0-RTT handler should look for either (a) a 0-RTT message or (b) an EndOfEarlyData message. This should be easy based on the present splitting into coordinate, parse, and postprocess sub-routines.
In particular, there is no need for distinct handshake states for 0-RTT and EndOfEarlyData anymore.
Fixed by https://github.com/hannestschofenig/mbedtls/pull/84
|
2025-04-01T06:38:54.639897
| 2023-10-20T09:42:58
|
1953918355
|
{
"authors": [
"Carol-lyh",
"EchoDreamer",
"nbasyl"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6646",
"repo": "haotian-liu/LLaVA",
"url": "https://github.com/haotian-liu/LLaVA/issues/632"
}
|
gharchive/issue
|
[Usage] The evaluation results on scienceQA
Describe the issue
I used the model weights of 7b-v1.5 you released to evaluate the performance. The evaluation results on other datasets are the same as your MODEL-ZOO, but the scienceQA result is:
Total: 4241, Correct: 2944, Accuracy: 69.42%, IMG-Accuracy: 67.97%
which is different from the 66.8 in MODEL-ZOO for llava-7b-v1.5. Why?
Hi, I am also having the same problem, and for the LoRA weight in the model zoo, I can only get:
Total: 4241, Correct: 2763, Accuracy: 65.15%, IMG-Accuracy: 61.73%
which differs a lot from the score reported in the model zoo.
@haotian-liu Can you please help me check if any of my steps are wrong, I used the following command for reproducing your result:
CUDA_VISIBLE_DEVICES=0 python -m llava.eval.model_vqa_science \
    --model-path ./checkpoints/llava-v1.5-7b-lora \
    --model-base liuhaotian/llava-v1.5-7b \
    --question-file ./playground/data/eval/scienceqa/llava_test_CQM-A.json \
    --image-folder ./playground/data/eval/scienceqa/images/test \
    --answers-file ./playground/data/eval/scienceqa/answers/llava-v1.5-7b-lora.jsonl \
    --single-pred-prompt \
    --temperature 0 \
    --conv-mode vicuna_v1

python llava/eval/eval_science_qa.py \
    --base-dir ./playground/data/eval/scienceqa \
    --result-file ./playground/data/eval/scienceqa/answers/llava-v1.5-7b-lora.jsonl \
    --output-file ./playground/data/eval/scienceqa/answers/llava-v1.5-7b-lora_output.jsonl \
    --output-result ./playground/data/eval/scienceqa/answers/llava-v1.5-7b-lora_result.json
Thank you so much!
Hi! I reproduced the same results as you on this dataset, but the reproduction results for TextQA are only 50.52, POPE is 76.32, and MME is 1394. Could you share how your reproduction results performed on these datasets? Did you encounter similar issues, and have you found any viable solutions?
Thanks!
|
2025-04-01T06:38:54.654554
| 2023-09-07T18:37:10
|
1886397415
|
{
"authors": [
"dmuylwyk",
"jmarchionatto",
"tadgh"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6647",
"repo": "hapifhir/hapi-fhir",
"url": "https://github.com/hapifhir/hapi-fhir/issues/5290"
}
|
gharchive/issue
|
Bug - Conditional updates can modify resource body such that conditional URL no longer applies.
Describe the bug
Conditional updates are behaving inconsistently with respect to the supplied resource not satisfying the conditional URL.
Please refer to:
https://hl7.org/fhir/R5/http.html#cond-update
To Reproduce
Create a new Patient, for example:
POST https://hapi.fhir.org/baseR4/Patient
{
"resourceType": "Patient",
"identifier":
[
{
"system": "http://kookaburra.text/id",
"value": "kookaburra1"
}
],
"gender": "male",
"birthDate": "1980-07-03"
}
Attempt to conditionally update this Patient with an incorrect identifier.value in the conditional URL:
PUT https://hapi.fhir.org/baseR4/Patient?identifier=http://kookaburra.text/id|kookaburra2
{
"resourceType": "Patient",
"identifier":
[
{
"system": "http://kookaburra.text/id",
"value": "kookaburra1"
}
],
"gender": "male",
"birthDate": "1980-07-03"
}
This fails as expected with a HAPI-0929.
Attempt to conditionally update this Patient with an incorrect identifier.value in the body:
PUT https://hapi.fhir.org/baseR4/Patient?identifier=http://kookaburra.text/id|kookaburra1
{
"resourceType": "Patient",
"identifier":
[
{
"system": "http://kookaburra.text/id",
"value": "kookaburra2"
}
],
"gender": "male",
"birthDate": "1980-07-03"
}
This succeeds but it shouldn't. Note the conditional URL and resource body no longer match.
Expected behavior
Step 3 above should also result in a HAPI-0929 because the supplied resource does not satisfy the conditional URL.
Screenshots
Environment (please complete the following information):
GET https://hapi.fhir.org/baseR4/metadata
"software": {
"name": "HAPI FHIR Server",
"version": "6.9.3-SNAPSHOT/7f15e62e20/2023-09-05"
},
Additional context
Meow.
A couple of things:
You are getting the 929 error code because, since there are no matches, you are creating version 1 of a resource, which triggers the conditional create validation.
Conditional Updates are permitted to invalidate the conditional URL post-update, as done in your Step #3. This is not strictly defined in the spec, and the comment in BaseHapiFhirDao.java here indicates that we permit it.
I recommend we do what the comment suggests, and add a toggle to control this behaviour for users who wish to prevent this.
Thanks, @tadgh. Will update the ticket to treat as a feature request. Appreciate your help!
@dmuylwyk this could be lateral to the feature main point but could be the subject of a new bug.
In 2. you mentioned that, because no such resource exists in the repository, this is treated as a conditional create. However, according to the spec, a conditional create must include an If-None-Exist: [search parameters] header, which is not included here, so maybe this shouldn't be treated as a conditional create?
|
2025-04-01T06:38:54.679804
| 2024-05-31T15:15:19
|
2328071512
|
{
"authors": [
"dmaklygin",
"trotzig"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6648",
"repo": "happo/happo.io",
"url": "https://github.com/happo/happo.io/issues/275"
}
|
gharchive/issue
|
Incorrect SHA in the final comparison report
PREVIOUS_SHA replaced with master branch SHA after report completion.
We are experiencing an issue with Happo where the PREVIOUS_SHA is being replaced with the SHA from the master branch after the report is completed. Here are the details of our setup and the issue:
Setup:
1. We use `Happo` within `Storybook`.
2. We call `happo-ci-github-actions` with predefined environment variables:
• HAPPO_PROJECT: Picasso/Storybook
• HAPPO_API_KEY: ${{ env.HAPPO_API_KEY }}
• HAPPO_API_SECRET: ${{ env.HAPPO_API_SECRET }}
• PREVIOUS_SHA: ${{ github.event.pull_request.base.sha }} ("1252164f6ca4df5bf6f095079707afddc1b7a9f4")
Issue:
1. After starting the job, Happo provides a **proper** link to the report page: https://happo.io/a/675/jobs/1187395
2. Once the report is ready, the link changes to: https://happo.io/a/675/p/1189/compare/a9768dd9cd4a118a4ae61857124eed0fa84e0090/7ddc840fdd5f3825f44b666d67b8e3ed131c2ad8
3. The SHA a9768dd9cd4a118a4ae61857124eed0fa84e0090 corresponds to the master branch, but it should be 1252164f6ca4df5bf6f095079707afddc1b7a9f4.
Logs:
Here are some logs from the job for reference:
Using the following ENV variables:
PREVIOUS_SHA: 1252164f6ca4df5bf6f095079707afddc1b7a9f4
CURRENT_SHA: 7ddc840fdd5f3825f44b666d67b8e3ed131c2ad8
CHANGE_URL: https://github.com/toptal/picasso/pull/4342
INSTALL_CMD:
HAPPO_IS_ASYNC: true
HAPPO_SKIP_START_JOB:
HAPPO_GIT_COMMAND: git
HAPPO_COMMAND: node_modules/happo.io/build/cli.js
HAPPO_FALLBACK_SHAS_COUNT: 50
Job link: GitHub Actions job log
It appears that PREVIOUS_SHA is correctly set initially, but it is replaced by the master branch SHA in the final report. This causes the visual comparison to be inaccurate as it does not reflect the intended base commit.
We appreciate your assistance in resolving this issue.
Link to the PR to test: https://github.com/toptal/picasso/pull/4342
Hi @dmaklygin, sorry for the delay here. 🙏
The most likely thing that's happening here is that the sha for PREVIOUS_SHA doesn't have a Happo report generated for it. Happo will then use a fallback SHA, starting from PREVIOUS_SHA and moving backwards. It happens in the happo-ci script, here.
Can you check to make sure that Happo reports are generated on the master branch and that the jobs are successful? If not, that's where we should start.
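As a rough illustration of that fallback behaviour, the sketch below walks backwards through the ancestors of PREVIOUS_SHA until one satisfies a report check; has_report is a hypothetical predicate and this is not the actual happo-ci script, though the 50 mirrors the HAPPO_FALLBACK_SHAS_COUNT seen in the logs above:

import subprocess

def fallback_sha(previous_sha, max_count=50, has_report=lambda sha: False):
    # List up to max_count ancestor commits, newest first, starting at previous_sha.
    shas = subprocess.run(
        ["git", "rev-list", f"--max-count={max_count}", previous_sha],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    for sha in shas:
        if has_report(sha):  # first ancestor with a Happo report wins
            return sha
    return previous_sha  # nothing found; keep the original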
Hi, @trotzig !
Thank you for your response. It seems the issue is fixed now; your suggestion helped us get proper reports.
I suppose we can close the issue now.
|
2025-04-01T06:38:54.715388
| 2023-02-14T09:34:48
|
1583815435
|
{
"authors": [
"henrikbjorn",
"jonian",
"paulgoetze"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6649",
"repo": "hardpixel/mrml-ruby",
"url": "https://github.com/hardpixel/mrml-ruby/issues/1"
}
|
gharchive/issue
|
Feat: Precompiled binaries ala tailwindcss-rails
Is your feature request related to a problem? Please describe.
Yes. To avoid having to install Rust on production servers when installing the gem, and to increase install speed, it would be beneficial to have precompiled mrml binaries, as tailwindcss-rails does.
Describe the solution you'd like
Precompile the gem extension
Describe alternatives you've considered
N/A
Additional context
N/A
This gem does it somehow https://github.com/IAPark/tiktoken_ruby
Hey @jonian, are you already working on that? Else I'd be happy to take a look at how to provide precompiled binaries for mrml-ruby, if you are okay with that?
Hi @paulgoetze, no I'm not working on it. It would be great if you can submit a PR. Thanks!
I think it should be implemented using oxidize-rb/actions. An example workflow in the polars-ruby gem.
:+1: Alright, thanks, @jonian, then I'll take a look.
@jonian ping, just to make sure you saw the pull request :)
Let me know if you need anything else or if there's anything I can help with to get the pre-compiled gems released.
I have released a new version with the cross-compiled gems. Thank you for the help @paulgoetze!
The cross-compiled gems support only ruby 3.0, 3.1 and 3.2.
|
2025-04-01T06:38:54.743030
| 2024-01-23T20:30:15
|
2096928775
|
{
"authors": [
"bot-gitexp-user",
"douglas-j-bothwell"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6650",
"repo": "harness/developer-hub",
"url": "https://github.com/harness/developer-hub/pull/5080"
}
|
gharchive/pull-request
|
add auto-detect-targets-and-variants description STO-6974
Wrote a preliminary draft, preview link is here: Auto-detecting the target and variant
Please check the Execution Link of the Pipeline for the Website Draft URL. This is located in the Preview Step behind the Harness VPN and also is available in #hdh_alerts. E.g Website Draft URL: https://unique-id--harness-developer.netlify.app. Current Draft URL is: https://65b024fd37dee23676bf1360--harness-developer.netlify.app
Please check the Execution Link of the Pipeline for the Website Draft URL. This is located in the Preview Step behind the Harness VPN and also is available in #hdh_alerts. E.g Website Draft URL: https://unique-id--harness-developer.netlify.app. Current Draft URL is: https://65b1406b0afe751472c7898f--harness-developer.netlify.app
Please check the Execution Link of the Pipeline for the Website Draft URL. This is located in the Preview Step behind the Harness VPN and also is available in #hdh_alerts. E.g Website Draft URL: https://unique-id--harness-developer.netlify.app. Current Draft URL is: https://65b15108abd1782631ed7dbb--harness-developer.netlify.app
Please check the Execution Link of the Pipeline for the Website Draft URL. This is located in the Preview Step behind the Harness VPN and also is available in #hdh_alerts. E.g Website Draft URL: https://unique-id--harness-developer.netlify.app. Current Draft URL is: https://65b1a20679469b0a5c2dbcc7--harness-developer.netlify.app
Spun this off into a new PR: https://github.com/harness/developer-hub/pull/5138
|
2025-04-01T06:38:54.754293
| 2024-02-08T18:10:56
|
2125795874
|
{
"authors": [
"bot-gitexp-user",
"brian-f-harness"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6651",
"repo": "harness/developer-hub",
"url": "https://github.com/harness/developer-hub/pull/5337"
}
|
gharchive/pull-request
|
[PL-46956] SMP 0.13.3 patch release notes
SMP 0.13.3 patch release notes
For reviewers, a preview is available:
---will be built after draft RN are added--
What Type of PR is This?
[ ] Issue
[ ] Feature
[ ] Maintenance/Chore
If tied to an Issue, list the Issue(s) here:
Issue(s)
House Keeping
Some items to keep track of. Screen shots of changes are optional but would help the maintainers review quicker.
[x] Tested Locally
[ ] Optional Screenshot.
Please check the Execution Link of the Pipeline for the Website Draft URL. This is located in the Preview Step behind the Harness VPN and also is available in #hdh_alerts. E.g Website Draft URL: https://unique-id--harness-developer.netlify.app. Current Draft URL is: https://65c51c058dc4760074fa41d2--harness-developer.netlify.app
Please check the Execution Link of the Pipeline for the Website Draft URL. This is located in the Preview Step behind the Harness VPN and also is available in #hdh_alerts. E.g Website Draft URL: https://unique-id--harness-developer.netlify.app. Current Draft URL is: https://65c5257d47885c008e5c4b75--harness-developer.netlify.app
Please check the Execution Link of the Pipeline for the Website Draft URL. This is located in the Preview Step behind the Harness VPN and also is available in #hdh_alerts. E.g Website Draft URL: https://unique-id--harness-developer.netlify.app. Current Draft URL is: https://65c53599d060d20cfbd05cf4--harness-developer.netlify.app
Please check the Execution Link of the Pipeline for the Website Draft URL. This is located in the Preview Step behind the Harness VPN and also is available in #hdh_alerts. E.g Website Draft URL: https://unique-id--harness-developer.netlify.app. Current Draft URL is: https://65c5363b37b4810ad309088f--harness-developer.netlify.app
Please check the Execution Link of the Pipeline for the Website Draft URL. This is located in the Preview Step behind the Harness VPN and also is available in #hdh_alerts. E.g Website Draft URL: https://unique-id--harness-developer.netlify.app. Current Draft URL is: https://65c536b71eeaf00dfb9f10e1--harness-developer.netlify.app
Please check the Execution Link of the Pipeline for the Website Draft URL. This is located in the Preview Step behind the Harness VPN and also is available in #hdh_alerts. E.g Website Draft URL: https://unique-id--harness-developer.netlify.app. Current Draft URL is: https://65c54f47c4c9ae1f08ab3c55--harness-developer.netlify.app
Please check the Execution Link of the Pipeline for the Website Draft URL. This is located in the Preview Step behind the Harness VPN and also is available in #hdh_alerts. E.g Website Draft URL: https://unique-id--harness-developer.netlify.app. Current Draft URL is: https://65c54f6847885c1dad5c4e78--harness-developer.netlify.app
|
2025-04-01T06:38:54.757856
| 2023-02-27T11:34:46
|
1601009502
|
{
"authors": [
"bot-gitexp-user",
"uditgaurav"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6652",
"repo": "harness/developer-hub",
"url": "https://github.com/harness/developer-hub/pull/824"
}
|
gharchive/pull-request
|
Chaos(aws-fault): Add docs for CLB and ALB az down chaos faults
Harness Developer Pull Request
Thanks for helping us make the Developer Hub better. The PR will be looked at
by the maintainers.
What Type of PR is This?
[ ] Issue
[ ] Feature
[ ] Maintenance/Chore
If tied to an Issue, list the Issue(s) here:
Issue(s)
House Keeping
Some items to keep track of. Screen shots of changes are optional but would help the maintainers review quicker.
[ ] Tested Locally
[ ] Optional Screenshot.
Preview environment: https://hdh.pr.harness.io/pr-824
|
2025-04-01T06:38:55.013431
| 2011-11-07T12:05:34
|
2161937
|
{
"authors": [
"harrah",
"indrajitr",
"retronym",
"siasia",
"softprops"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6653",
"repo": "harrah/xsbt",
"url": "https://github.com/harrah/xsbt/issues/257"
}
|
gharchive/issue
|
Multiple classifiers in pom dependency
If a project depends on a subproject, the dependency tag for the subproject is incorrect:
<dependency>
<groupId>com.github.siasia</groupId>
<artifactId>maven-plugin_2.9.1</artifactId>
<version>0.11.1-0.1</version>
<scope>compile</scope>
<classifier>sources</classifier>
<classifier>javadoc</classifier>
</dependency>
Multiple classifier tags result in a duplicate tag error in Maven.
I'm using this as a workaround.
pomPostProcess := {
import xml._
Rewrite.rewriter {
case e: Elem if e.label == "classifier" && e.child.mkString == "sources" => NodeSeq.Empty
}
}
import xml.transform.{RewriteRule, RuleTransformer}
import xml._
object Rewrite {
def rewriter(f: PartialFunction[Node, NodeSeq]): RuleTransformer = new RuleTransformer(rule(f))
def rule(f: PartialFunction[Node, NodeSeq]): RewriteRule = new RewriteRule {
override def transform(n: Node) = if (f.isDefinedAt(n)) f(n) else n
}
}
I took a better look at it. The problem is that the original patch looks at all artifacts in all configurations:
https://github.com/harrah/xsbt/blob/0.11/ivy/MakePom.scala#L143
sbt puts the source and doc artifacts in the "sources" and "javadocs" configurations:
https://github.com/harrah/xsbt/blob/0.11/main/Defaults.scala#L374
https://github.com/harrah/xsbt/blob/0.11/ivy/IvyInterface.scala#L431
We can ignore artifacts in those configurations using a method in DependencyDescriptor similar to getAllDependencyArtifacts but that only gets artifacts in the specified configurations (I'd have to look the name up).
@indrajitr Do you want to do this or should I?
Just the "jar" type doesn't quite work because the type might be "bundle", for example. The valid types are set in classpathTypes. (It is just "jar" + "bundle" right now, but it keeps it configurable in one spot.)
Good point. classpathTypes can be added to MakePomConfiguration and used in makePom.
While we're at it, should we use this for deriving the packaging type as well (instead of an isolated use of IgnoreTypes)?
That ignore types usage is different, though, and I think it is correct as a blacklist. It is legitimate for a user to say 'war' as the packaging, for example.
Damn, I'm glad I'm not the only one. I just ran into this same issue. I should check the list first next time. I ended up carefully publishing one version, then changing the dependency from dependsOn(...) to a library dependency to ensure the jars are pulled.
@softprops Try out the snapshot at your convenience -- hopefully this has been sorted out now.
|
2025-04-01T06:38:55.028913
| 2020-06-09T20:32:42
|
635736965
|
{
"authors": [
"harsh-px"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6654",
"repo": "harsh-px/flux-get-started",
"url": "https://github.com/harsh-px/flux-get-started/pull/405"
}
|
gharchive/pull-request
|
[Autopilot] rebalance approval for Portworx storage pools (15d2875e-f420-4929-85bf-9a31640514a9) rule: pool-rebalance
This is a request to approve the following automated autopilot action
What will get affected
Type: StoragePool
Name: Portworx storage pools
Namespace:
Owner information:
Type:
Name:
What action will be taken
Work summary:
Rebalance actions:
add replica
Volume:<PHONE_NUMBER>96226165
Pool: 15d2875e-f420-4929-85bf-9a31640514a9
Node: 5e6af7d5-7a58-4a19-89ab-19f9136764bc
Replication set ID: 0
Start: 2020-06-09T20:32:39.060637701Z End: (timestamp: nil Timestamp)
Work summary:
-> UnbalancedProvisionedSpaceBytes<PHONE_NUMBER>0 done, 0 pending
-> UnbalancedVolumes 1 done, 0 pending
remove replica
Volume:<PHONE_NUMBER>96226165
Pool: da9771af-d325-48f4-b42d-040a5b952171
Node: 97f131dd-9e5b-431a-96f9-1c9a9c301478
Replication set ID: 0
Start: 2020-06-09T20:32:39.060638386Z End: (timestamp: nil Timestamp)
Work summary:
-> UnbalancedProvisionedSpaceBytes<PHONE_NUMBER>0 done, 0 pending
-> UnbalancedVolumes 1 done, 0 pending
add replica
Volume:<PHONE_NUMBER>087921482
Pool: da8f8818-399e-4e23-92bd-5eea444c4594
Node: 218462c7-0543-47a8-af06-9e44d89da497
Replication set ID: 0
Start: 2020-06-09T20:32:39.182589533Z End: (timestamp: nil Timestamp)
Work summary:
-> UnbalancedProvisionedSpaceBytes<PHONE_NUMBER>0 done, 0 pending
-> UnbalancedVolumes 1 done, 0 pending
remove replica
Volume:<PHONE_NUMBER>087921482
Pool: da9771af-d325-48f4-b42d-040a5b952171
Node: 97f131dd-9e5b-431a-96f9-1c9a9c301478
Replication set ID: 0
Start: 2020-06-09T20:32:39.182590778Z End: (timestamp: nil Timestamp)
Work summary:
-> UnbalancedProvisionedSpaceBytes<PHONE_NUMBER>0 done, 0 pending
-> UnbalancedVolumes 1 done, 0 pending
Why is the action needed.
The action request was triggered based on an AutopilotRule pool-rebalance defined in your cluster.
How do I approve
Once you review the above,
To approve, simply approve and merge this PR
To decline, close the PR
Autopilot will be watching for the merged specs in the cluster and will proceed with the action if approved and decline the action if not.
[Autopilot] Closing PR as the action was found approved in the Kubernetes cluster.
|
2025-04-01T06:38:55.075354
| 2024-06-04T02:27:16
|
2332392898
|
{
"authors": [
"Yu-Jack",
"jillian-maroket",
"m-ildefons"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6655",
"repo": "harvester/harvester",
"url": "https://github.com/harvester/harvester/issues/5948"
}
|
gharchive/issue
|
[DOC] Missing old version APIs documentation
Is your doc request related to a problem? Please describe or add related issue ID.
Although we have multiple Harvester version options in the documentation, we only have the v1.3 and dev API documentation. When we click the v1.1 Harvester documentation and then click API, it is redirected to the v1.3 API documentation. In other words, I can't see the v1.1 API documentation.
Describe the solution you'd like
Do we need to add it back?
The other thing is that our openapi generator follows our harvester/harvester repo. If we fix some errors in the openapi generator in a new release version, we can't apply them to an old release version unless we backport. But it seems weird to backport only those fixes just to fix the openapi generator. Maybe we could have another repo only for the openapi generator?
It's an open question here; any ideas are welcome.
@innobead I am not familiar with how exactly the API documentation is generated. What I know is from this PR, which @m-ildefons created.
I'll check this one out. I'm sure it's a fairly small error in the docusaurus config somewhere.
The API docs for the old versions are there (e.g. https://docs.harvesterhci.io/v1.2/api/create-namespaced-virtual-machine-backup/) but what's missing is a link or a menu that leads to them. Somehow the version-change menu doesn't work for the API docs of older versions, but it does work for v1.4/dev and v1.3. There also doesn't seem to be a landing page/category index for those versions.
|
2025-04-01T06:38:55.082241
| 2015-05-21T09:59:57
|
78923323
|
{
"authors": [
"agamaloni",
"danielhers",
"omerxx",
"samuelregev"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6656",
"repo": "hasadna/anyway",
"url": "https://github.com/hasadna/anyway/issues/262"
}
|
gharchive/issue
|
Tour step 3 dialog is over streched
Getting to the third step of the tour, "המקום שחיפשתם" ("the place you searched for"), I noticed the window extends beyond its basic design:
VS:
@agamaloni
While at it I think it would be nice adding a last step with a greeting notifying that the tour is over
Maybe also add a description about opening a discussion
Thanks @omerxx. I changed it back for a cleaner view and resized .popover.
We can add more steps.
Now I'm having a hard time finding, inside the code, an easy way to set the current date.
Any idea?
You need to update the daterangepicker's startDate and endDate, and then notify it: https://github.com/hasadna/anyway/blob/master/static/js/app.js#L484
Hi @agamaloni
The tour step is still stretched. I tried to modify it locally but I can't see the tour here (do you know why?).
Plus, why is this specific step in app.js and not in the tour?
Hi @omerxx ,
Yes.
For the locate step over the map, I go out of the tour and pop up an infoWindow; I know it doesn't look exactly like the other steps. Do you know how to modify a specific infoWindow? (I will figure it out.)
That was my best solution for pointing at an exact spot over the map.
Following that logic, step 3 is set inside setCenterWithMarker in app.js, to catch the location after finding a place ("this.locationMarker") and open infowindow.open(this.map, location);
|
2025-04-01T06:38:55.085908
| 2015-11-23T19:44:04
|
118457070
|
{
"authors": [
"galraij",
"omerxx"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6657",
"repo": "hasadna/anyway",
"url": "https://github.com/hasadna/anyway/issues/493"
}
|
gharchive/issue
|
Separate United markers by severity
We have severity for our united-hatzala markers, so I think we should use it.
Instead of having generic blue-colored markers, we could separate them into colors based on severity and keep the current icon to imply this is a united report.
@LironRS, @OmerSchechter, @galraij, what do you think?
Absolutely! this was the original plan. I think @LironRS is working on it already.
|
2025-04-01T06:38:55.088969
| 2024-03-06T16:26:54
|
2171922476
|
{
"authors": [
"NoamGaash",
"Tamir198"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6658",
"repo": "hasadna/open-bus-map-search",
"url": "https://github.com/hasadna/open-bus-map-search/pull/557"
}
|
gharchive/pull-request
|
feat: Add map to line profile
Added a map to the bottom of this page:
Fix #542
So do you want me to do it in a new pr?
@Tamir198 it's up to you :)
I think I'd like you to merge this and do it in a new PR, just to prevent conflicts with the Darkmift user, who is also working on this part of the page.
Sounds good?
@Tamir198 sure, thanks. Let's merge it.
I gave you permissions to the repo, so feel free to merge things whenever you want (as long as there are no regressions and someone has done at least one code review)
@all-contributors please add @Tamir198 for his code :clap: :medal_sports:
|
2025-04-01T06:38:55.101725
| 2016-07-02T02:43:52
|
163498459
|
{
"authors": [
"frozenpandaman",
"hasegaw",
"hiroyuki-komatsu"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6659",
"repo": "hasegaw/IkaLog",
"url": "https://github.com/hasegaw/IkaLog/pull/358"
}
|
gharchive/pull-request
|
Enabled setting a file from the preview panel.
Also enabled showing the current input source in the preview.
Added ui/events.py
EVT_INPUT_FILE_ADDED: Event fired when a new video file is added.
EVT_INPUT_INITIALIZED: Event fired when the input source is updated.
Moved the text box for the video file from Options to Preview.
Stopped storing the value of the text box in the config file.
Screenshots:
Thanks for the work. I'd like to do some tests in a Windows environment before merging.
Cool! :)
Fixed the issue on Windows discussed in the team Slack.
66fa538 is the change.
|
2025-04-01T06:38:55.115976
| 2024-12-10T20:43:33
|
2731151482
|
{
"authors": [
"jasperpotts",
"jjohannes"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6660",
"repo": "hashgraph/hedera-block-node",
"url": "https://github.com/hashgraph/hedera-block-node/pull/389"
}
|
gharchive/pull-request
|
Tool for converting Record Files to Block Stream
Command line tool adding extra command for converting Record Files to Block Stream
Not 100% done yet, but marked ready for review so the build actions are executed, to make sure the build works on the server, because it has been such a pain to get it to build locally.
Comment on Gradle setup:
I will revisit the "classpath-based" setup of the "tools" project this PR introduces as part of https://github.com/hiero-ledger/hiero-gradle-conventions/issues/53
|
2025-04-01T06:38:55.118679
| 2021-01-08T12:24:31
|
782082838
|
{
"authors": [
"paulmadsenhed"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6661",
"repo": "hashgraph/hedera-exchange-rate-tool",
"url": "https://github.com/hashgraph/hedera-exchange-rate-tool/issues/141"
}
|
gharchive/issue
|
Add PayBito HBAR/USD pair to ERT
Summary
PayBito has an HBAR/USD pair that should be added to the list of exchanges
https://trade.paybito.com/view-exchange/#/?pair=hbar-usd
API info page seems down
https://www.paybito.com/api-end-points/
APIEndPoints.pdf
|
2025-04-01T06:38:55.125572
| 2022-08-30T20:01:56
|
1356239473
|
{
"authors": [
"codecov-commenter",
"rustyShacklefurd"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6662",
"repo": "hashgraph/hedera-json-rpc-relay",
"url": "https://github.com/hashgraph/hedera-json-rpc-relay/pull/482"
}
|
gharchive/pull-request
|
Health checks svc annotations
Description:
This PR
Adds support for YAML and JSON annotations on the service
Includes readiness and liveness probes to the deployment
Updates to the configmap config.HEDERA_NETWORK logic and coinciding values.yaml comments
Removed top-level secrets. from values.yaml; this was never being called or used
Related issue(s):
Fixes #
Notes for reviewer:
Deployment templating:
---
# Source: hedera-json-rpc-relay/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: test-hedera-json-rpc-relay
labels:
app: hedera-json-rpc-relay
helm.sh/chart: hedera-json-rpc-relay-0.7.0-SNAPSHOT
app.kubernetes.io/name: hedera-json-rpc-relay
app.kubernetes.io/instance: test
app.kubernetes.io/version: "0.7.0-SNAPSHOT"
app.kubernetes.io/managed-by: Helm
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: hedera-json-rpc-relay
app.kubernetes.io/instance: test
template:
metadata:
labels:
app: hedera-json-rpc-relay
app.kubernetes.io/name: hedera-json-rpc-relay
app.kubernetes.io/instance: test
spec:
imagePullSecrets:
- name: ghcr-registry-auth
serviceAccountName: test-hedera-json-rpc-relay
securityContext:
{}
containers:
- name: hedera-json-rpc-relay
image: "ghcr.io/hashgraph/hedera-json-rpc-relay:0.7.0-SNAPSHOT"
imagePullPolicy: Always
env:
- name: CHAIN_ID
value: ''
- name: CHAIN_ID
valueFrom:
configMapKeyRef:
name: test-hedera-json-rpc-relay
key: CHAIN_ID
optional: true
- name: HEDERA_NETWORK
valueFrom:
configMapKeyRef:
name: test-hedera-json-rpc-relay
key: HEDERA_NETWORK
optional: false
- name: OPERATOR_ID_ETH_SENDRAWTRANSACTION
valueFrom:
secretKeyRef:
name: test-hedera-json-rpc-relay
key: OPERATOR_ID_ETH_SENDRAWTRANSACTION
optional: true
- name: OPERATOR_KEY_ETH_SENDRAWTRANSACTION
valueFrom:
secretKeyRef:
name: test-hedera-json-rpc-relay
key: OPERATOR_KEY_ETH_SENDRAWTRANSACTION
optional: true
- name: MIRROR_NODE_URL
valueFrom:
configMapKeyRef:
name: test-hedera-json-rpc-relay
key: MIRROR_NODE_URL
optional: false
- name: LOCAL_NODE
valueFrom:
configMapKeyRef:
name: test-hedera-json-rpc-relay
key: LOCAL_NODE
optional: false
- name: SERVER_PORT
valueFrom:
configMapKeyRef:
name: test-hedera-json-rpc-relay
key: SERVER_PORT
optional: false
- name: OPERATOR_ID_MAIN
valueFrom:
secretKeyRef:
name: test-hedera-json-rpc-relay
key: OPERATOR_ID_MAIN
optional: false
- name: OPERATOR_KEY_MAIN
valueFrom:
secretKeyRef:
name: test-hedera-json-rpc-relay
key: OPERATOR_KEY_MAIN
optional: false
ports:
- containerPort: 7546
name: jsonrpcrelay
livenessProbe:
httpGet:
path: /
port: jsonrpcrelay
readinessProbe:
httpGet:
path: /
port: jsonrpcrelay
resources: {}
Checklist
[x] Documented (Code comments, README, etc.)
Codecov Report
Merging #482 (578686e) into main (4c3191c) will not change coverage.
The diff coverage is n/a.
@@ Coverage Diff @@
## main #482 +/- ##
=======================================
Coverage 76.38% 76.38%
=======================================
Files 12 12
Lines 923 923
Branches 144 144
=======================================
Hits 705 705
Misses 165 165
Partials 53 53
|
2025-04-01T06:38:55.134049
| 2021-11-02T10:32:10
|
1042140475
|
{
"authors": [
"SimiHunjan",
"danielakhterov",
"gregscullard",
"valtyr-naut"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6663",
"repo": "hashgraph/hedera-sdk-js",
"url": "https://github.com/hashgraph/hedera-sdk-js/issues/728"
}
|
gharchive/issue
|
ContractExecuteTransaction.setFunctionParameters returns void
Description
The setFunctionParameters of ContractExecuteTransaction returns void, thus preventing it from being used in a builder pattern.
Steps to reproduce
let response = new ContractExecuteTransaction()
.setMaxTransactionFee(new Hbar(15))
.setContractId(ContractId.fromSolidityAddress(to.replace("0x", "")))
.setFunctionParameters(Buffer.from(data, "hex"))
.execute(client);
fails because .setFunctionParameters returns void and .execute(client) is run against undefined
There is a workaround which is to create the transaction object, then .setFunctionParameters on the object, then .execute(client) on the same object, however this isn't developer friendly and breaks the usual builder pattern.
Additional context
No response
Hedera network
mainnet, testnet, previewnet
Version
v2.4.0
Operating system
macOS
I think you are supposed to use setFunction() instead of setFunctionParameters()?
const transaction = new ContractExecuteTransaction()
.setContractId(newContractId)
.setGas(100_000_000)
.setFunction("set_message", new ContractFunctionParameters()
.addString("hello from hedera again!"))
setFunction returns this which allows .execute to be chained like you want. Will double check with @danielakhterov
@gregscullard let us know if this resolves your issue
@valtyr-naut Issue is correct. https://github.com/hashgraph/hedera-sdk-js/blob/55d93a160d29b67a5894b39c7ee62083e0d5df2d/src/contract/ContractExecuteTransaction.js#L209 Should return this
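To make the chaining failure concrete, here is a toy Python analogue of the builder pattern; this is illustrative only, not the hedera-sdk-js API:

class ToyTransaction:
    def __init__(self):
        self._params = None

    def set_function_parameters(self, params):
        self._params = params
        return self  # returning None here (the JS bug: a missing "return this") breaks chaining

    def execute(self, client):
        return f"executed with {self._params!r} via {client!r}"

# Chaining only works because each setter returns the builder itself:
print(ToyTransaction().set_function_parameters(b"\x01\x02").execute("client"))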
|
2025-04-01T06:38:55.301901
| 2019-11-01T22:52:40
|
516386380
|
{
"authors": [
"brikis98",
"tpdownes"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6664",
"repo": "hashicorp/terraform-aws-consul",
"url": "https://github.com/hashicorp/terraform-aws-consul/issues/155"
}
|
gharchive/issue
|
Module does not support well-behaved reverse lookup under systemd-resolved
The current state of consul support for systemd-resolved is a bit funky, in that forward resolution works well and simply, while reverse lookup of IPs will, a small percentage of the time, return .consul domains or whatever the other DNS resolvers say. e.g. https://github.com/hashicorp/consul/issues/6462.
https://github.com/hashicorp/consul/pull/6731 provides a solution for ensuring that all reverse lookups of an IP address known to consul results in a .consul domain.
It has the added behavior (perhaps undesirable) of reverse lookup failing on IP addresses not known to consul unless one also configures the recursors option. e.g.
{
...
"recursors": ["<IP_ADDRESS>"],
...
}
I think it would be reasonable to consider adopting this configuration in the consul-cluster module and intend this issue to be a starting point for the conversation.
Thanks for the module as it has facilitated very rapid progress on my side!
Thanks for reporting!
It has the added behavior (perhaps undesirable) of reverse lookup failing on IP addresses not known to consul
What's the default behavior?
unless one also configures the recursors option. e.g.
What value would you plug into recursors?
The behavior of the documented solution is described in https://github.com/hashicorp/consul/issues/6462.
TL;DR: most of the time reverse resolution goes through whatever the system has been configured to use. A small percentage of the time reverse resolution goes through consul. That is to say, reverse lookup is not predictable in the documented systemd-resolved solution.
The PR is predictable in that 100% of the time systemd-resolved will reverse lookup via the consul agent. The cost is that the consul agent will fail to reverse lookup for IPs not within the .consul domain.
You can solve this by placing a DNS server with appropriate reverse lookup capability into recursors. In most situations, this probably means whatever DNS server you use for forward lookups outside of the .consul domain. i.e., whatever DNS servers you got from DHCP / static config.
Make sense?
Got it, thanks for the context. So, to summarize, you are proposing that for systems using systemd-resolved we update the run-consul script to be able to add the following to resolved.conf:
DNS=<IP_ADDRESS>
Domains=~consul ~<CIDR>.in-addr.arpa
Where <CIDR> is passed in via a new param to the script... As well as add the following to the consul config:
"recursors": ["<DNS_SERVER>"],
Where <DNS_SERVER> is also passed in via a new param to the script.
Is that right?
I'm definitely proposing the first thing, with the caveat that it's actually the "backwards truncated CIDR" and it should be opt-in.
For the second thing, I'm outlining the pros/cons. You could imagine mimicking the dnsmasq behavior of using servers from /etc/resolv.conf by having run-consul automatically set recursors by parsing the output of systemd-resolve (or resolvectl on ultra-contemporary systems) or networkctl. I'm not really sure what is right these days.
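As a sketch of that idea, the snippet below derives candidate recursors by parsing the system resolver configuration; the file path and the overall approach are assumptions, not the actual run-consul implementation:

import re

def system_nameservers(path="/etc/resolv.conf"):
    # Collect nameserver IPs that could be plugged into consul's "recursors" list.
    servers = []
    with open(path) as f:
        for line in f:
            m = re.match(r"\s*nameserver\s+(\S+)", line)
            if m:
                servers.append(m.group(1))
    return servers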
(sorry for delay, we were all away for a company offsite)
Roger. I think if both options are opt-in based on passed-in params, this makes sense to add. A PR is very welcome!
|
2025-04-01T06:38:55.596346
| 2018-10-22T07:16:33
|
372416391
|
{
"authors": [
"CLAassistant",
"Shobhit2884"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6665",
"repo": "hashmapinc/Tempus",
"url": "https://github.com/hashmapinc/Tempus/pull/824"
}
|
gharchive/pull-request
|
Revert "Tempus-467, 468, 655"
Reverts hashmapinc/Tempus#668
Reverting this code because of the following error:
error Unable to resolve path to module 'well-log-viewer/node_modules/d3/build/d3' import/no-unresolved
This is because the module is imported from inside well-log-viewer; the import should use the application's main node_modules. Please correct this.
Thank you for your submission, we really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. You have signed the CLA already but the status is still pending? Let us recheck it.
|
2025-04-01T06:38:55.883595
| 2019-11-09T19:59:35
|
520506051
|
{
"authors": [
"dimmanramone",
"tomoqv"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6666",
"repo": "hasl-platform/lovelace-hasl-departure-card",
"url": "https://github.com/hasl-platform/lovelace-hasl-departure-card/issues/2"
}
|
gharchive/issue
|
Add a “time_offset”
Requested from @tomoqv
Add a “time_offset” to take into account the amount of time it takes to reach the station, i.e. if it takes 8 minutes to walk to the train, only show departures from now() + 8 minutes onwards.
The thing is, that I would like to conserve space in some of my Lovelace views. Showing a number of departures that are impossible to make from my house adds a few lines of information that is of little or no interest to the person looking for the next possible departure. This is a feature request, so I fully understand if it may be low priority.
Moved to https://github.com/hasl-platform/lovelace-hasl-departure-card/issues
@tomoqv I guess you want it to show the actual time left and the departure times, but hide the ones you are not able to catch. I.e., it takes 8 minutes from your place to the station and the time is 19:42.
The next departures are 19:47 (5 min), 19:51 (9 min), 19:54 (12 min). The card is going to show the actual times but hide 19:47, because you don't have enough time to catch it.
Yes, precisely. That way I won't have any superfluous information.
Thanks!
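A minimal Python sketch of the requested time_offset filter, using the example times from this thread; the data structure is hypothetical, not the card's actual code:

from datetime import datetime, timedelta

def catchable(departures, now, walk_minutes):
    # Hide departures that leave before we can reach the station.
    cutoff = now + timedelta(minutes=walk_minutes)
    return [d for d in departures if d >= cutoff]

now = datetime(2019, 11, 9, 19, 42)
departures = [datetime(2019, 11, 9, 19, 47),
              datetime(2019, 11, 9, 19, 51),
              datetime(2019, 11, 9, 19, 54)]
# With an 8-minute walk, 19:47 is hidden; 19:51 and 19:54 remain.
print(catchable(departures, now, walk_minutes=8))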
|
2025-04-01T06:38:55.996334
| 2019-10-26T11:46:39
|
512826054
|
{
"authors": [
"dhananjaisrmgpc",
"nene11"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6669",
"repo": "hauke-d/cnn-eeg",
"url": "https://github.com/hauke-d/cnn-eeg/issues/3"
}
|
gharchive/issue
|
nndata.py errors out
nndata.py errors out at the following line:
t, l, loc, fs = util.load_physionet_data(subject_id, num_classes, long_edge=long_edge)
Since t, l, loc, fs have no data, it then fails at:
np.array(trials).reshape((len(trials),) + trials[0].shape + (1,)), np.array(labels)
IndexError: list index out of range
Though if the following changes are made:
my_variable = util.load_physionet_data(subject_id, num_classes, long_edge=long_edge)
print(my_variable)
then I can see the entire data array returned from the above function call!
What should be done to get this function call to execute properly?
Did you get any idea how to solve this issue?
|
2025-04-01T06:38:56.001368
| 2020-06-02T08:32:53
|
629010316
|
{
"authors": [
"Astagor",
"misi"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6670",
"repo": "havfo/multiparty-meeting",
"url": "https://github.com/havfo/multiparty-meeting/issues/454"
}
|
gharchive/issue
|
Bug in handling name change
On the ME screen the user can change his name. If he enters an empty string, 'Guest' is assigned automatically. So far so good. But now, if he clicks the 'Guest' to change it again, the input is not editable anymore and the user cannot change the name. Tested with https://letsmeet.no/.
I think it is something to do with the store.
@Astagor Can you retest it?
It seems it is working now on letsmeet.no
I will close, as I see this is fixed.
Yes, this has been fixed.
|
2025-04-01T06:38:56.020041
| 2023-10-23T08:50:04
|
1956668221
|
{
"authors": [
"tadayosi"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6671",
"repo": "hawtio/hawtio-next",
"url": "https://github.com/hawtio/hawtio-next/issues/628"
}
|
gharchive/issue
|
JMX - Copy actions on an operation don't work if Hawtio is accessed at a host other than localhost
If Hawtio is accessed at <IP_ADDRESS>:8080 for instance instead of localhost:8080, Copy method name and Copy Jolokia URL actions provided by the Operations tab don't work as follows:
It is because navigator.clipboard.writeText works only under https unless it's localhost.
https://developer.mozilla.org/en-US/docs/Web/API/Clipboard
Should we find some way to get around the limitation, or keep it as is, honouring the security consideration behind navigator.clipboard.writeText? If we keep it as is, we can disable the menus when the console is on http and not localhost.
Showing a warning notification with instruction on why it didn't work sounds like a good solution.
|
2025-04-01T06:38:56.031391
| 2023-02-12T01:30:22
|
1581079209
|
{
"authors": [
"hayabhay",
"rursache",
"schklom"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6672",
"repo": "hayabhay/whisper-ui",
"url": "https://github.com/hayabhay/whisper-ui/issues/23"
}
|
gharchive/issue
|
Docker image
A Docker image would be great to make it easier to deploy.
A draft for the quantized version's Dockerfile could look like
FROM python:latest
COPY . .
RUN apt-get update
RUN apt-get install -y ffmpeg
RUN pip install streamlit
RUN pip install setuptools-rust
# RUN pip install git+https://github.com/openai/whisper.git
RUN pip install git+https://github.com/MiscellaneousStuff/whisper.git
ENV PATH="$HOME/.cargo/bin:$PATH"
EXPOSE 8501
VOLUME /data/.whisper_settings.json
CMD streamlit run app/01_🏠_Home.py
I'm trying to figure out what directory is used for the data.
Edit: EXPOSE instead of PORT
Good call. Will add a docker config.
Data folder gets created at the project root. You can change them as you see fit in the config file.
A published image would also be nice. You could set it up with GitHub Actions.
|
2025-04-01T06:38:56.042056
| 2022-10-31T07:01:22
|
1429359936
|
{
"authors": [
"BraveBird3291",
"hazcod",
"user112200"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6673",
"repo": "hazcod/maclaunch",
"url": "https://github.com/hazcod/maclaunch/issues/23"
}
|
gharchive/issue
|
Worked amazing on Monterey but not working in Ventura :/
Please advise. Thank you so much
Hi @BraveBird3291, can you please be more specific about what's not working, and include any error messages?
list or list enabled still works for me.
I think list enabled is showing disabled items, or disable seems not to be working (i.e. after disabling Adobe, list enabled still shows items like com.adobe.AdobeCreativeCloud). Before upgrading to Ventura, list would show a launch status like enabled or disabled, but after the upgrade it shows MachService or OnDemand, OnStartup, Always, or just Unknown. This was tested on an M1 Mac.
Thanks.
Would it be possible to deliver me the file permissions ('ls -alh') of one of those plists and its contents?
@hazcod
Will do, but I'm currently out travelling; I will send them to you in 2-3 days ;)
@hazcod Sorry for the delay. Here is a few file permissions of plists.
@user112200 Can you try running the below and let me know what it returns?
launchctl disable user/"$(id -u)"/com.adobe.AdobeCreativeCloud
@user112200 And it doesn't show up in;
launchctl print-disabled user/"$(id -u)" | grep adobe
@hazcod Actually it does show up in the disabled.
Rerun again after 13.0.1 update.
|
2025-04-01T06:38:56.093054
| 2023-04-04T20:25:17
|
1654531062
|
{
"authors": [
"ayoubOUSSM2021",
"hblyp"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6674",
"repo": "hblyp/pcsrt",
"url": "https://github.com/hblyp/pcsrt/issues/1"
}
|
gharchive/issue
|
the code execution step
Hello,
I have been trying to run this code, but I keep encountering errors at the last step. Here's how I wrote the code, and I was wondering if you could help me identify where my mistake is:
target/release/pcsrt [OPTIONS] --centroid [LAT(float)],[LON(float)],ELEVATION[(float)] --time-range FROM [(2020-01-01T12:00:00.000Z),TO(2020-03-23T18:00:00.000Z)] --step-mins [int] --linke-turbidity-factor [SINGLE_LINKE(float)] [INPUT_FILE] [OUTPUT_FILE]
I would be very grateful if you could assist me. Thank you in advance.
Hi,
replace the [PLACEHOLDER]s with values you need, e.g. ./target/release/pcsrt --centroid 57.362845,12.976645,225 --time-range 2020-01-01T12:00:00.000Z,2020-03-23T18:00:00.000Z --step-mins 60 --linke-turbidity-factor 3.5 ./input.las ./output.las.
Hello, thank you very much for your response. I have obtained the results, but I am encountering an error while constructing the normals (please refer to the yellow-highlighted line in the attached screenshot). Additionally, my point cloud includes the attributes X Y Z R G B. Could you please let me know the possible source of this error? Thank you very much for your assistance.
It's just a warning. Normal vectors are usually not constructed in voxels that do not have sufficient points within their volume or the volume of adjacent voxels. In that case a vertical normal vector ([0;0;1]) is used instead, and the voxel is treated as a horizontal plane when calculating the incidence angle of the solar rays.
thanks a lot for your help
|
2025-04-01T06:38:56.126230
| 2023-08-02T19:46:30
|
1833810988
|
{
"authors": [
"Louis-Boulet-HC",
"pgaviganHC"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6675",
"repo": "hc-sc-ocdo-bdpd/file-processing-tools",
"url": "https://github.com/hc-sc-ocdo-bdpd/file-processing-tools/issues/36"
}
|
gharchive/issue
|
Randomize table generation parameters
Shouldn't be difficult to implement, foundation is already there to allow it.
I'll get to it shortly.
I recommend discussing this with @milortie. I'm not sure we want to start with fully random at first.
Very true; waiting on the option to generate the column count too (which should be soon).
The variables to be varied in the test include the following, with upper and lower bounds (a parameter-sampling sketch follows the list):
Different numbers of rows and columns: # column (min,max) = (1,15), # rows (min,max) = (1,50) -- may result in multipage tables (that's ok). Level of granularity: 1, 5, 10, 15 is likely OK (or something on that order of magnitude) - start with 1000 tables in the test case with even distribution. Track results by table type.
Presence or lack of lines around the cells - 1000 tables in different configurations, even distribution - options include: outer lines only, header lines only, some horizontal lines only, some vertical lines only, lines that don't go all the way across the row/column.
Vary the amount of padding around the tables and within the cells and table. Default is 0.7 inch around the tables. Vary the range from 0.0 to 1.5 inches - this test only makes sense with text or other artifacts on the page around the table. Use Lorem Ipsum for that test. - 1000 tables with even distribution in increments of 0.1 inches.
Different types of characters in the cells (such as French characters, math symbols, chemical formulas, the @ symbol). In assessing performance we should look for these characters and rank how well each comes through. Start by making a list of relevant characters and then make "words" out of these characters. May not require many table tests; this is more about the OCR performance.
Different font types, sizes, colours: use the test above (types of characters). Make a list of fonts, sizes and colours to try this with.
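A minimal TypeScript sketch of sampling one configuration within the bounds above (the line-style names and helper function are illustrative, not the project's actual API):
function randomInt(min: number, max: number): number {
  return min + Math.floor(Math.random() * (max - min + 1));
}

const LINE_STYLES = ['outer-only', 'header-only', 'some-horizontal', 'some-vertical', 'partial'];

function sampleTableConfig() {
  return {
    rows: randomInt(1, 50),               // rows bound: (1, 50)
    columns: randomInt(1, 15),            // columns bound: (1, 15)
    paddingInches: randomInt(0, 15) / 10, // 0.0 to 1.5 in 0.1-inch increments
    lineStyle: LINE_STYLES[randomInt(0, LINE_STYLES.length - 1)],
  };
}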
PR #64 includes the results from these randomized runs
|
2025-04-01T06:38:56.131657
| 2017-07-02T14:59:36
|
240015171
|
{
"authors": [
"hcorona"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6676",
"repo": "hcorona/recsys-101-workshop",
"url": "https://github.com/hcorona/recsys-101-workshop/issues/5"
}
|
gharchive/issue
|
fix paths in example notebooks to avoid changing directory
This is a problem due to path-management differences across operating systems.
os.chdir(..) on notebook
this was already fixed
|
2025-04-01T06:38:56.142557
| 2021-07-05T06:39:56
|
936746088
|
{
"authors": [
"hcw-00",
"letmejoin"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6677",
"repo": "hcw-00/PatchCore_anomaly_detection",
"url": "https://github.com/hcw-00/PatchCore_anomaly_detection/issues/7"
}
|
gharchive/issue
|
how test my own dataset without groundtrouth?
As described in the title, real-world data comes without a GT mask, but almost every algorithm needs a gt_mask to compute roc_auc_score. Have you tried deciding on an anomaly-score threshold, above which a sample is considered anomalous, other than obtaining the optimal threshold from the GT? This problem has confused me all along.
Unfortunately, you have to slightly modify the code to make it work with your own dataset.
In the case of image-level anomaly classification, to obtain an optimal threshold you should prepare a dataset with two classes. Then you can calculate the ROC curve and try several methods to get the optimal threshold (like the max of Youden's J statistic).
On the other hand, it's generally hard to prepare pixel-level anomaly GT. In that case, there is no way to get a ROC curve and an optimal threshold. I'm not sure, but I would try checking the distribution of normal pixels' scores and setting the threshold manually.
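For illustration, a self-contained TypeScript sketch of the Youden's J approach mentioned above (assuming binary labels where 1 means anomalous, higher scores mean more anomalous, and both classes are present; this is not part of the repository):
function optimalThreshold(labels: number[], scores: number[]): number {
  // Youden's J = TPR - FPR; pick the candidate threshold that maximizes it.
  const positives = labels.filter((l) => l === 1).length;
  const negatives = labels.length - positives;
  let bestJ = -Infinity;
  let bestT = 0;
  for (const t of new Set(scores)) {
    let tp = 0;
    let fp = 0;
    for (let i = 0; i < scores.length; i++) {
      if (scores[i] >= t) {
        if (labels[i] === 1) tp++; else fp++;
      }
    }
    const j = tp / positives - fp / negatives;
    if (j > bestJ) { bestJ = j; bestT = t; }
  }
  return bestT;
}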
Thanks for your reply! Yes, the pixel-level threshold is the problem. I have an idea about using 3-sigma of the Gaussian distribution of patch scores, but each patch has its own distribution. I'm still working out how to use that information! Would you have any advice?
Sorry, I don't have a great idea. But, as I said, I would try checking the distribution of the pixel values of (normal) anomaly maps.
|
2025-04-01T06:38:56.203464
| 2019-09-09T21:09:21
|
491327997
|
{
"authors": [
"AlisCode",
"hecrj"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6680",
"repo": "hecrj/iced",
"url": "https://github.com/hecrj/iced/issues/12"
}
|
gharchive/issue
|
Constants applicable to all renderers
Hi !
As you may have noticed, I'm currently writing an ncurses-based renderer for iced in my spare time, and am using my own fork of iced for that purpose.
The only modification I made was to change some defaults. As it happens, the current constants assume pixel units, while pancurses of course works with lines and columns. This makes the current defaults on some widgets (e.g. Checkbox, Slider, and Buttons) completely wrong.
Even setting aside my use case, which I agree is fairly uncommon, I am not sure what these constants were there for in the first place, because they assume some sort of mandatory smaller screen resolution.
What do you think would be the correct way of not enforcing those defaults upon every renderer implementor?
This is one of the main reasons the current release is alpha.
I have shared my thoughts about this in #6. It should be simple to fix, we just need to choose a solution.
Also, I think we can split layouting and events into its own crate. Thus, I am rethinking some of the design a bit and there may be related changes soon.
|
2025-04-01T06:38:56.245018
| 2018-10-18T09:01:47
|
371428944
|
{
"authors": [
"Adamcina",
"heiglandreas"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6681",
"repo": "heiglandreas/authLdap",
"url": "https://github.com/heiglandreas/authLdap/issues/160"
}
|
gharchive/issue
|
[AuthLDAP] User '' logging in
This is not a real issue, but a request for help.
I'm running a FreeIPA server on CentOS 7, have a few WordPress websites on another CentOS 7 server, and I'm looking to establish SSO for my WordPress users across those websites and some other apps. In order to do that, I'm trying to configure AuthLDAP properly.
This is my auth_gssapi.conf:
<Location "/wp-login.php">
AuthType GSSAPI
AuthName "GSSAPI Single Sign On Login"
GssapiCredStore keytab:/usr/local/apache/conf/http.keytab
GssapiBasicAuthMech krb5
Require valid-user
</Location>
As the AuthLDAP URI I've tried this, and many other combinations:
ldap://admin_user_name:admin_password@ipa-server.example.com:389/dc=ipa-server,dc=example,dc=com
When i visit my WordPress website at /wp-admin/ apache's error log gives this:
[Thu Oct 18 09:49:41.265710 2018] [:error] [pid 1297:tid<PHONE_NUMBER>34304] [client my_ip_address] [AuthLDAP] User '' logging in
[Thu Oct 18 09:49:41.265820 2018] [:error] [pid 1297:tid<PHONE_NUMBER>34304] [client my_ip_address] [AuthLDAP] Username not supplied: return false
I've done telnet and ldapsearch tests from my WordPress server to the FreeIPA server; these are the results:
[root@server ~]# telnet IPA-SERVER_DOMAIN_IP_ADDRESS 389
Trying IPA-SERVER_DOMAIN_IP_ADDRESS...
Connected to IPA-SERVER_DOMAIN_IP_ADDRESS.
Escape character is '^]'.
Connection closed by foreign host.
ldapsearch responds with a bunch of text; I won't copy all of it, just the end:
......
# search result
search: 2
result: 0 Success
# numResponses: 115
# numEntries: 114
[root@server ~]#
So it looks like the servers are able to communicate.
I'm trying to SSO into WordPress from my Windows 7 machine; I have MIT Kerberos Ticket Manager installed and have successfully obtained tickets:
So FreeIPA, my Mozilla browser (for the Kerberos negotiation process) and everything else is configured as it should be.
I've been investigating and trying for the last few days to make this work, but nothing has helped. I hope I've given enough information. Can you please tell me what I am missing?
Hey Adam.
One thing is single sign-on with Kerberos, and the other is LDAP authentication. The AuthLDAP plugin is only designed to allow people to log into a WordPress site using their LDAP credentials. But those credentials need to be entered into the WordPress login form.
But as far as I see it you want something different. You want the user to be logged into WordPress automatically using the Kerberos token. That's something completely different as the user does not need to provide any credentials at all. So you would need a plugin that checks the Kerberos-Token against the Kerberos-Server and then decides whether the user may enter WordPress or not. That's not the scope of AuthLDAP (as LDAP is not involved at all in that process)
In a quick search I only found one plugin that mentions Kerberos, and that one is for Active Directory (https://wordpress.org/plugins/next-active-directory-integration/). So it looks like you either need to write it yourself or need a different approach :-(
Sorry that I can't help you there.
Andreas, thank you for your quick answer. Yes, you're right, I've been mixing these things up, because I've never been in touch with LDAP/Kerberos before, so it's a little tricky for me to understand properly how things work. But your answer has helped me get a clearer picture.
I think I will go with LDAP, as there is already the plugin you've created.
Thanks!
|
2025-04-01T06:38:56.247117
| 2023-03-28T12:36:24
|
1643860255
|
{
"authors": [
"TonDevv",
"heiher"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6682",
"repo": "heiher/hev-socks5-tunnel",
"url": "https://github.com/heiher/hev-socks5-tunnel/issues/32"
}
|
gharchive/issue
|
vpn mode
Could we use VPN mode with this project?
I want to use it in an iOS application, but with VPN mode (full tunnel).
https://github.com/daemooon/Tun2SocksKit
https://github.com/daemooon/Mango
|
2025-04-01T06:38:56.281254
| 2022-02-20T17:13:04
|
1145079894
|
{
"authors": [
"abhay",
"lucaschen520"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6683",
"repo": "helium/denylist",
"url": "https://github.com/helium/denylist/issues/1360"
}
|
gharchive/issue
|
[Removal]:
Hotspot Name
Overt Arctic Hamster
Hotspot b58 Address
11aHCWsePUUpTEAfisrPNqB5zsYD77uHd5DhfzEMzfJAhbV9gy8
Discord Handle
No response
Hotspot Manufacturer
PantherX
Removal Reason
I bought a used hotspot
Modifications
No
Extra forwarders
No
Extra antennas
No
Additional Information
No response
We have taken into account this removal request in the next iteration of the denylist. It may take some time before it's processed. This is an automated comment due to the volume of addition and removal requests. A future tool is in the works to provide more insight into the analysis approach made by the Helium team and community members that maintain this list.
|
2025-04-01T06:38:56.286629
| 2022-02-13T15:24:08
|
1136006487
|
{
"authors": [
"abhay",
"merikon2"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6684",
"repo": "helium/denylist",
"url": "https://github.com/helium/denylist/issues/767"
}
|
gharchive/issue
|
[Removal]: Bent Lemonade Viper
Hotspot Name
bent-lemonade-viper
Hotspot b58 Address
112JqL42UbqMDnWDyHJFBXNyoGC671mXTUbdZFDNo3guAtTHh
Discord Handle
Merikon#6484
Hotspot Manufacturer
PantherX
Removal Reason
The asserted location and the actual place used to not coincide, in order to reach larger coverage. That's probably why my hotspots appear on the denylist. However, the problem has now been fixed and the hotspots appear where they should be.
Modifications
No
Extra forwarders
No
Extra antennas
No
Additional Information
Hi Helium management team, thanks for all the effort you have put into building an effective and honest network. I'm an officer of a local WLAN & IoT service firm. Many residents in my town got involved in the Helium network and community by deploying hotspots at their homes last September. Most of them use indoor hotspots, while some deploy extra antennas (possibly hand-made). We were impressed by the genius idea of the people's network. From my perspective as an IoT industry participant, this network definitely lowers the total IoT infrastructure cost and is bound to grow rapidly. Some residents in the coverage region also tried the RAK WisNode Sensor, and it works really well.
However, in order to reach larger coverage and increase the reward scale, some of the hotspots were asserted a small distance away from where they should be. We realized this attempt led to inclusion on the denylist, so we corrected the mistake by moving the hotspots and relocating them on the Helium map. By now all of the hotspots appear at the right place (no more than 300 m deviation).
Also, I found that there are some hotspots located near our town that really shouldn't be there. These virtual hotspots are included in the denylist too. I wonder if those gaming hotspots, which never interact with us, got all of us onto the denylist? If so, please update your algorithm to avoid this situation in the future. Thanks
We have taken into account this removal request in the next iteration of the denylist. It may take some time before it's processed. This is an automated comment due to the volume of addition and removal requests. A future tool is in the works to provide more insight into the analysis approach made by the Helium team and community members that maintain this list.
|
2025-04-01T06:38:56.299549
| 2021-06-20T21:09:18
|
925682467
|
{
"authors": [
"allenan",
"cokes518",
"danielcolinjames"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6685",
"repo": "helium/explorer",
"url": "https://github.com/helium/explorer/pull/485"
}
|
gharchive/pull-request
|
Add totals to lists (e.g. "Witnesses (14)")
fixes #457
preview:
I made it as a component, so the following configurations are possible to reuse anywhere in Explorer:
title:
description:
(could include links like: <a href='https://docs.helium.com'>Click here to learn more</a>)
title and description (hidden description):
hides description by default if a title and description are both given, so it doesn't take up too much vertical space
title and description (expanded description):
could we also add a count to the "Hotspots in Hex" tab?
Love it. Do it.
@danielcolinjames commented on this pull request.
In components/InfoBox/HotspotDetails/NearbyHotspotsPane.jshttps://github.com/helium/explorer/pull/485#discussion_r655754532:
@@ -23,7 +23,12 @@ const NearbyHotspotsPane = ({ hotspot }) => {
'overflow-y-hidden': loading,
})}
<NearbyHotspotsList hotspots={hotspots || []} isLoading={loading} />
<NearbyHotspotsList
hotspots={hotspots || []}
isLoading={loading}
title={`Nearby Hotspots (${nearbyHotspots?.length})`}
// description="[Nearby Hotspots description text]"
oh yeah was gonna ask Coco if she wanted to put something there
@cokes518https://github.com/cokes518 thoughts?
@cokes518 (or bones) what do you think about this text for Nearby and Witnesses? feel free to wordsmith. here's the text:
witnesses:
title="Witnesses"
description={
<>
<div>
Hotspots on the Helium network that have successfully witnessed
beacons sent by {animalHash(hotspot.address)}. There are many
reasons a nearby Hotspot may not be a valid witness.
</div>
<div className="pt-1.5">
Learn more{' '}
<a
className="text-navy-400"
href="https://docs.helium.com/troubleshooting/understanding-witnesses/"
rel="noopener noreferrer"
target="_blank"
>
here
</a>
.
</div>
</>
}
nearby:
title="Nearby Hotspots"
description={`Hotspots on the Helium network that are within an appropriate physical distance to witness ${animalHash(
hotspot.address,
)}'s beacons, or to have their beacons witnessed by it.`}
|
2025-04-01T06:38:56.351390
| 2021-06-30T18:45:23
|
934005004
|
{
"authors": [
"louies0623",
"probonopd"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6686",
"repo": "helloSystem/ISO",
"url": "https://github.com/helloSystem/ISO/issues/229"
}
|
gharchive/issue
|
Unify all text in the system interface to size 12
The text on the menu bar and buttons is too large, but the file names and window text on the desktop or in Folder are too small.
Size 12 is the most comfortable size visually. With a consistent text size, the screen will not look chaotic.
Duplicate of #230
|
2025-04-01T06:38:56.376401
| 2022-01-13T06:30:09
|
1101264999
|
{
"authors": [
"hellowuxin",
"kiibo382"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6687",
"repo": "hellowuxin/vue3-mindmap",
"url": "https://github.com/hellowuxin/vue3-mindmap/pull/28"
}
|
gharchive/pull-request
|
feat: multilingualization of context menu
Objectives
related issue: #27
Implementation
Using vue-i18n, the context menu can be displayed in English.
You can switch the language by passing the locale to this module in props.
well done, but vue-i18n is not tailored for use in component libraries, so maybe try another way.
@hellowuxin
well done, but vue-i18n is not tailored for use in component libraries, so maybe try another way.
Why is vue-i18n not suitable for component libraries? Please explain in detail.
If so, I will try to support lazy load by referring to the following.
https://github.com/intlify/vue-i18n-next/tree/master/examples/lazy-loading
@hellowuxin
well done, but vue-i18n is not tailored for use in component libraries, so maybe try another way.
Why is vue-i18n not suitable for component libraries? Please explain in detail.
If so, I will try to support lazy load by referring to the following. https://github.com/intlify/vue-i18n-next/tree/master/examples/lazy-loading
see https://github.com/kazupon/vue-i18n/issues/746
use i18n instead
thanks!
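For context, a minimal sketch of the library-friendly direction discussed here: accept a locale prop and resolve strings from a local messages map instead of a global vue-i18n instance (the keys and strings below are made up, not this module's actual API):
const messages: Record<string, Record<string, string>> = {
  en: { addChild: 'Add child node', remove: 'Delete node' },
  zh: { addChild: '添加子节点', remove: '删除节点' },
};

// Resolve a context-menu label for the locale passed in via props,
// falling back to English, then to the raw key.
function t(locale: string, key: string): string {
  return messages[locale]?.[key] ?? messages['en'][key] ?? key;
}

// e.g. t(props.locale, 'addChild')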
|
2025-04-01T06:38:56.381673
| 2019-07-07T17:40:27
|
464975497
|
{
"authors": [
"desaintmartin",
"javsalgar",
"naseemkullah"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6688",
"repo": "helm/charts",
"url": "https://github.com/helm/charts/pull/15302"
}
|
gharchive/pull-request
|
[stable/redis] exporter to 1.0.3
Signed-off-by: Naseem<EMAIL_ADDRESS>
Checklist
[x] DCO signed
[x] Chart Version bumped
[x] Variables are documented in the README.md
[x] Title of the PR starts with chart name (e.g. [stable/chart])
There are some changes, mostly in metric names; should we advertise this?
/assign
/ok-to-test
Good idea. I've added some notes. Feel free to edit them to your liking!
Thanks for the PR
/lgtm
My pleasure.
|
2025-04-01T06:38:56.384889
| 2019-08-13T12:21:56
|
480126235
|
{
"authors": [
"bacongobbler",
"ghost",
"willzhang"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6689",
"repo": "helm/helm",
"url": "https://github.com/helm/helm/issues/6216"
}
|
gharchive/issue
|
Error: forwarding ports: error upgrading connection: unable to upgrade connection: pod does not exist
root@kube-master:~# helm version
Client: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Error: forwarding ports: error upgrading connection: unable to upgrade connection: pod does not exist
root@kube-master:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.2", GitCommit:"f6278300bebbb750328ac16ee6dd3aa7d3549568", GitTreeState:"clean", BuildDate:"2019-08-05T09:23:26Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.2", GitCommit:"f6278300bebbb750328ac16ee6dd3aa7d3549568", GitTreeState:"clean", BuildDate:"2019-08-05T09:15:22Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
root@kube-master:~# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-5c98db65d4-8kzsc 1/1 Running 0 19m
coredns-5c98db65d4-j4khc 1/1 Running 0 19m
etcd-kube-master 1/1 Running 0 19m
kube-apiserver-kube-master 1/1 Running 0 19m
kube-controller-manager-kube-master 1/1 Running 0 19m
kube-flannel-ds-amd64-6bd44 1/1 Running 0 19m
kube-flannel-ds-amd64-fdr42 1/1 Running 0 16m
kube-flannel-ds-amd64-s6d4r 1/1 Running 0 19m
kube-proxy-fslxf 1/1 Running 0 16m
kube-proxy-tmtdm 1/1 Running 0 19m
kube-proxy-xtz9x 1/1 Running 0 19m
kube-scheduler-kube-master 1/1 Running 0 19m
tiller-deploy-6b9c575bfc-rzfcs 1/1 Running 0 5m56s
Helm is configured using following commands
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account=tiller --history-max 200
same error
[root@centosvm01 ~]# helm version
Client: &version.Version{SemVer:"v2.15.1", GitCommit:"cf1de4f8ba70eded310918a8af3a96bfe8e7683b", GitTreeState:"clean"}
Error: forwarding ports: error upgrading connection: unable to upgrade connection: pod does not exist
[root@centosvm01 ~]# kubectl version --short
Client Version: v1.16.2
Server Version: v1.16.2
Were you able to resolve this issue?
|
2025-04-01T06:38:56.395966
| 2023-09-04T19:40:44
|
1880790967
|
{
"authors": [
"juckerf",
"yxxhero"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6690",
"repo": "helmfile/vals",
"url": "https://github.com/helmfile/vals/pull/164"
}
|
gharchive/pull-request
|
feat(providers): first implementation of a 1password connect provider
This PR adds a provider for 1password connect (https://developer.1password.com/docs/connect).
From the README:
For this provider to work you require a working and accessible 1Password connect server.
The following env vars have to be configured:
OP_CONNECT_HOST
OP_CONNECT_TOKEN
1Password is organized in vaults and items.
An item can have multiple fields with or without a section. Labels can be set on fields and sections.
Vaults, items, sections and labels can be accessed by ID or by label/name (and IDs and labels can be mixed and matched in one URL).
If a section does not have a label, the field is only accessible via the section ID. This does not hold true for some default fields, which may have no section at all (e.g. username and password for a Login item).
Caution: vals-expressions are parsed as URIs. For the 1Password connect provider the host component of the URI identifies the vault (by ID or name). Therefore vaults containing certain characters not allowed in the host component (e.g. whitespaces, see RFC-3986 for details) can only be accessed by ID.
Examples:
ref+onepasswordconnect://VAULT_ID/ITEM_ID#/[SECTION_ID.]FIELD_ID
ref+onepasswordconnect://VAULT_LABEL/ITEM_LABEL#/[SECTION_LABEL.]FIELD_LABEL
ref+onepasswordconnect://VAULT_LABEL/ITEM_ID#/[SECTION_LABEL.]FIELD_ID
If merged, the PR resolves #54 - but there is one topic we should discuss first.
Using URIs without a fragment is most probably useless in the current implementation. To support any combination of labels and IDs in the URI, the string map is populated with every possible combination of label and ID for its keys which does not lead to meaningful data.
Currently I have no proper solution for that. Maybe 1password is not suited for this use case? (then we should simply drop the stringprovider ability) Or we should introduce PARAMS/query components to control the behavior of the stringprovider? (like preferLabels to use labels for the map whenever possible sacrificing the ability to match keys by ID?)
Any ideas are appreciated. And since I'm not really fluent in go I'm also open to feedback regarding codestyle, stability etc :-)
1Password is organized in vaults -> items -> [sections ->] fields. See also the description of the vals-URI for this provider in the README.
@yxxhero commented on this pull request.
In pkg/providers/onepasswordconnect/onepasswordconnect.go:
+}
+// New creates a new 1Password Connect provider
+func New(cfg api.StaticConfig) *provider {
p := &provider{}
return p
+}
+// Get secret string from 1Password Connect
+func (p *provider) GetString(key string) (string, error) {
var err error
splits := strings.Split(key, "/")
if len(splits) < 2 {
return "", fmt.Errorf("invalid URI: %v", errors.New("vault or item missing"))
vault?
@juckerf BTW, could you add some tests for this feature? As the first implementation of a 1Password Connect provider, it looks good to me.
thanks for the merge!
|
2025-04-01T06:38:56.423384
| 2021-12-15T17:58:19
|
1081325979
|
{
"authors": [
"hendrikmaus"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6691",
"repo": "hendrikmaus/actions-digest",
"url": "https://github.com/hendrikmaus/actions-digest/issues/8"
}
|
gharchive/issue
|
Create release workflow
[x] publish pre-compiled binary
[x] publish container image
[x] publish to crates.io
[x] add cargo install command to the README.md
[ ] publish to https://github.com/hendrikmaus/homebrew-tap
The custom caching fails as the publisher action doesn't seem to use the expected paths:
https://github.com/hendrikmaus/actions-digest/runs/4558169870?check_suite_focus=true
Fixed in https://github.com/hendrikmaus/actions-digest/pull/12
Get the release workflow from https://github.com/hendrikmaus/rust-workflows
#15
Use https://github.com/googleapis/release-please for a pull-request based approach.
|
2025-04-01T06:38:56.425423
| 2015-11-19T09:44:34
|
117776888
|
{
"authors": [
"henrikstranneheim",
"robinandeer"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6692",
"repo": "henrikstranneheim/MIP",
"url": "https://github.com/henrikstranneheim/MIP/issues/84"
}
|
gharchive/issue
|
MIP version number in QC sample info?
Is it a good idea to include the version of MIP along with other programs?
Sure why not!
Done
|
2025-04-01T06:38:56.430308
| 2020-02-29T04:57:28
|
573174403
|
{
"authors": [
"Hansel-Dsilva",
"henriquelalves",
"prsolucoes"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6693",
"repo": "henriquelalves/GodotCharacterInputController",
"url": "https://github.com/henriquelalves/GodotCharacterInputController/issues/2"
}
|
gharchive/issue
|
How does it work for 2D?
Hi,
How does it work for 2D?
I read your source for 3D, but I don't understand how I can detect x/y movement so my player can go left/right and jump.
Can you help me with some simple code here?
Thanks.
Exactly! Sorry for taking so long to answer. I actually made this addon thinking only of a 3D solution for movement + camera, so I haven't thought about making other kinds of examples. It should be pretty straightforward though; you'd only need to add an overlay on top of the addon with the 2D buttons you want to detect. When I get time I'll try updating it.
@Hansel-Dsilva This is an open-source solution I provided for free with no warranties whatsoever. If you don't want to use it because it lacks documentation, it's fine by me, but your comment is pretty much you feeling entitled of something you don't have, so my recommendation for you is to be more polite next time, or else you risk simply being ignored by any sane developer.
Ok sorry, my bad. So how does one go about utilizing your plugin?
@Hansel-Dsilva Np. The plugin only does one thing: it takes user input and gives back some 'interpreted' values (e.g. how far and in which direction the left analog stick is moving) based on the two halves of the device screen. The project example works just like that: it reads from the plugin node the values for the left analog stick (left half of the screen) to move the character, and the values of each swipe on the right half of the screen to move the camera. It's basically up to you how to interpret those values, but it is hopefully straightforward if you adapt the project this addon comes with to your needs.
I'm currently swamped with work and other projects I need to finish, but I'll try creating a simple 2D example by the end of today (GMT-3, mind you).
The idea is to use the left analog stick to control the character, and use the button on the right side to shoot a projectile. It's worth noting that, when using this plugin, you can't use 'normal' buttons; that's an unfortunate limitation caused by how Godot emulates clicks on touch-based devices: when emulating mouse clicks, Godot won't be able to get multitouch gestures (which are the core of this addon). Since this option is disabled, you'd have to create your own kind of button by handling a "gui_input" event, or by using Godot's TouchScreenButton.
|
2025-04-01T06:38:56.432331
| 2023-03-24T13:32:56
|
1639403436
|
{
"authors": [
"henry2004y"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6694",
"repo": "henry2004y/TestParticle.jl",
"url": "https://github.com/henry2004y/TestParticle.jl/issues/68"
}
|
gharchive/issue
|
Removing extrapolation in getinterp?
Originally we had extrapolations evaluated to NaN for handling out-of-domain cases:
https://github.com/henry2004y/TestParticle.jl/blob/66a8cbec9d976c06c4ab69c751183ea591ee8752/src/TestParticle.jl#L105
Now this is also handled by the isoutofdomain callback function provided by OrdinaryDiffEq.jl. We should test the gain from removing the extrapolations.
With more boundary options like periodic, we need to keep the extrapolation methods.
|
2025-04-01T06:38:56.461279
| 2019-04-05T21:15:37
|
429938021
|
{
"authors": [
"davecheney",
"stevesloka"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6695",
"repo": "heptio/contour",
"url": "https://github.com/heptio/contour/pull/984"
}
|
gharchive/pull-request
|
Upgrade Envoy to v1.9.1
Updates #983 by updating the example deployment files to use Envoy v1.9.1
Signed-off-by: Steve Sloka<EMAIL_ADDRESS>
You can stay with the alpine image if you prefer. These deployments are just guides.
On 8 Apr 2019, at 17:44, so0k<EMAIL_ADDRESS>wrote:
@so0k commented on this pull request.
In deployment/ds-hostnet-split/03-envoy.yaml:
@@ -33,7 +33,7 @@ spec:
- node0
command:
- envoy
image: docker.io/envoyproxy/envoy-alpine:v1.9.0
image: docker.io/envoyproxy/envoy:v1.9.1
any reason why you're switching from alpine to ubuntu based envoy image for the hostnet deployment?
|
2025-04-01T06:38:56.463359
| 2017-08-08T21:54:46
|
248858137
|
{
"authors": [
"abiogenesis-now",
"kensimon",
"timothysc"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6696",
"repo": "heptio/sonobuoy",
"url": "https://github.com/heptio/sonobuoy/pull/41"
}
|
gharchive/pull-request
|
Don't mount /tmp as host volume in example pod
We're encouraging using kubectl cp in the meantime, using host mounts
was a temporary measure.
This fixes an issue where the documented kubectl cp command is
bringing in tarballs from previous runs.
/lgtm
oh I don't have permissions to stamp 🤷♀️
oh wait nvm, there just isn't a bot to recognize /lgtm I guess
b/c we don't have turnkey validation in place I'll say ok-to-self merge on test verification.
|
2025-04-01T06:38:56.486233
| 2021-02-17T19:39:24
|
810478214
|
{
"authors": [
"Elettronik",
"Ghazgkull",
"MatteoJoliveau",
"aakarshg",
"aantn",
"alok87",
"antoniocascais",
"cordoor",
"hjet",
"sl4dy",
"varac",
"zswanson"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6697",
"repo": "heptiolabs/eventrouter",
"url": "https://github.com/heptiolabs/eventrouter/issues/126"
}
|
gharchive/issue
|
Is the project dead?
Hello, I'm looking to use this project on some k8s clusters, but it seems to have been dead for about a year.
Are there any people from Heptio Labs taking care of this project?
At the current moment I see 11 PRs opened over the course of a year without a single answer from the owners, and 30 issues.
Please say whether the project is dead, or whether there will be some housekeeping.
@timothysc @jbeda
We are using it and it works well for us in porting the logs to s3.
The description of the heptiolabs org (https://github.com/heptiolabs) now reads:
"Former home of experimental projects from Heptio"
I wonder if these projects had to be abandoned when Heptio was bought by VMware.
Let me play Devil's Advocate.
20 days - no official answers, no activity on the project.
I agree this project is probably on hold, if not abandoned. Would love to hear from VMWare/former Heptio folks if they plan to pick it up or donate the project to the community, or if we should just fork it/reimplement it.
This does indeed seem dead. I need the functionality that eventrouter provides for another project I'm working on. I'm going to be working on this fulltime for the near future. Please feel free to contact me if you need support/help.
I've decided to base my efforts on Kubewatch and not EventRouter. Kubewatch is a little more maintained, but it also seems somewhat dormant since the VMWare-Bitnami acquisition. Kubewatch doesn't yet stream all the events that EventRouter does, so I'll start by adding all event types to the Kubewatch output. Feel free to let me know which other EventRouter features you need in Kubewatch.
VMWare folks - I would love to merge my changes to Kubewatch/Eventrouter upstream. If not, I suppose I'll fork it, but that isn't my preference.
@MatteoJoliveau @Elettronik what are your needs beyond what Kubewatch/EventRouter currently provide? How are you interested in using them?
@aantn that seems like great news! Thank you for being willing to keep development going at least for now. I'll try to see if we can help as well.
For us it's mainly getting Kubernetes events into something Grafana can display. Using EventRouter we were going to have Loki and Promtail scrape up the logs from stdout/stderr and then have them streamed to Grafana as JSON lines. This way developers could select the event stream they wanted using the same query language they already use for selecting application log streams. e.g. { app_kubernetes_io_name="my-app", stream="events" }(or something along those lines).
But really any solution that allows us to collect and store events for later querying will do.
Got it. I'm working on a more generic platform for running Python code as a result of Kubernetes changes/Prometheus alerts and automating common responses. It will be open sourced eventually, but for now it is still in private beta. One easy use-case, for example, is to add annotations to grafana so that you can see exactly when a new version of an application was updated and can quickly eyeball the difference in the performance before and after.
For your use case, you're only interested in forwarding actual Kubernetes Events, right? In other words, if a pod is created/modified/dies, you don't need to forward that, but if there is a CrashLoopBackOff then you do need to forward it. Is that correct?
Correct. The reason being that, especially now with Operators, k8s Event objects are a very useful tool to track changes and issues in k8s resources. Currently we have logs, metrics and traces centrally accessible in Grafana, but if a developer wants to debug a crash loop they have to manually run kubectl describe. Having them in Loki would allow for quick querying and alerting over them using common tools they are already used to
Great, I'll update you when I have something you can use for that purpose.
Looking forward to it, thanks @aantn!
Maybe this is a bit OT, but... Is it possible to add metrics to the app? For example, first thing that pops into my mind: adding metrics for probe failures.
Asking this because it would be nicer to have graphs around metrics in grafana than around logs.
But maybe this is out of scope :D
Yeah, this is the type of thing I'm working on. Can you help me understand exactly what you would like to achieve? I think you can already get probe failure metrics from kubelet into Prometheus (https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/prober/prober_manager.go). Do you want to enrich Grafana with extra info? To run some remediation or enrichment steps?
This whole conversation is a little off topic, I guess. I think the eventrouter community could benefit from another open source project that is the spiritual successor of eventrouter, but if the discussion here bothers someone then let me know and I'll take it elsewhere.
I was not aware of such metrics! Thank you very much :D
I'm close to releasing my open source project which supports many of the features that people want added to eventrouter. Can everyone on this thread let me know what they're actually using eventrouter for today or what they want to use it for? Are you sending events to slack for online notifications? Are you just logging changes to ELK so that you can troubleshoot when something goes wrong? What use cases does event router solve for you?
@aantn Good to hear! I'm using eventrouter only for exporting k8s events to loki (and grafana), using promtail which tails the eventrouter container logs. So the only feature I'd be interested in is having all k8s events in the (eventrouter|your project) container logs.
@varac do you have any interest in turning k8s events into grafana annotations? e.g. adding a dotted line to grafana whenever a deployment is updated and the image tags change? I'm using this to easily correlate upgrades w/ changes in CPU usage.
Something like this:
@aantn That looks great, sure that would be a good feature.
Cool, I've already implemented that. I have a little more work before I release this, but it's coming along nicely. Let me know if there are more integrations you can think of which would be useful. I'm currently implementing two-way Slack integration. The typical usecase is something like this:
There is a Prometheus alert (e.g. low disk space on a persistent volume)
The system sends a message to Slack with details and a recommended remediation (e.g. clean up some logs and increase the volume size if it still is low on space)
You click a button in Slack approving that remediation.
The system receives your approval and executes some remediation commands
I would like to use it to stream kubernetes events into a kafka topic so my system can make decisions based on some of these events (e.g. a job is finished, so do something).
@cordoor we can do that already. can you reach out to me privately to discuss in more detail (either<EMAIL_ADDRESS>or on linkedin here: https://www.linkedin.com/in/natanyellin/)
hello @aantn, I'm interested in your project. Currently my use case involves streaming changes occurring to a specific set of Kubernetes objects, such as pods that are part of a ReplicaSet in a particular namespace, into a Kafka topic.
If you've got a prototype or something in the works, can you please point us to it so that we can start using it, give feedback, and hopefully submit patches ourselves?
@aakarshg sure, send me an email and I'll send you the beta version.
We are migrating to https://github.com/openshift/eventrouter
@aantn - ditto on being interested in a maintained replacement. The grafana integration for annotations sounds great.
@zswanson sorry about the delay! We've finally released the first version! https://docs.robusta.dev/master/
@aakarshg @cordoor @varac @antoniocascais @MatteoJoliveau might be relevant for you guys too
If anyone wants to discuss, we're on Slack and happy to add features for anything you need. (Or just open a github issue)
we've also reimplemented this as a Grafana Agent integration here - you can run a standalone Agent that only runs this integration as a drop-in eventrouter replacement.
for now it supports shipping events directly to a Loki-compatible sink and solves the "duplicate events" bug that occurs when you restart eventrouter. from there you can create dashboard annotations and "metrics from logs" directly in Grafana.
please file an issue in the Agent repo and ping me directly if you encounter any bugs or have feature suggestions!
you may also want to checkout @joe-elliott's diff logger as well!
|
2025-04-01T06:38:56.489592
| 2020-07-21T16:58:16
|
663167839
|
{
"authors": [
"bksaiki",
"pavpanchekha"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6698",
"repo": "herbie-fp/egg-herbie",
"url": "https://github.com/herbie-fp/egg-herbie/pull/9"
}
|
gharchive/pull-request
|
Parameterized operators
This PR adds binary64 and binary32 operators under the define-language! macro to replace the previous real / FPCore operators and constants. Herbie now parameterizes operators and constants (e.g. sin -> sin.f64 for binary64) to be representation specific (see uwplse/herbie#319). The list of operators is now annoyingly long. This should be fixed soon.
Perhaps we should make the existing operators also work? This way you could use new-Egg-Herbie with old-Herbie.
Oh duh. Backwards compatibility is a good idea.
Fixed
|
2025-04-01T06:38:56.516098
| 2020-04-08T12:47:42
|
596551700
|
{
"authors": [
"codecov-io",
"ystefinko"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6699",
"repo": "heremaps/here-olp-sdk-typescript",
"url": "https://github.com/heremaps/here-olp-sdk-typescript/pull/274"
}
|
gharchive/pull-request
|
Add commit checker job and scripts
A separate job must run only at the PR stage.
The job verifies the commit message and fails if
the requirements are not met.
Requirements are listed in the file:
scripts/misc/commit_message_recom.txt
Relates-To: OLPEDGE-1573
Signed-off-by: Yaroslav Stefinko<EMAIL_ADDRESS>
Codecov Report
Merging #274 into master will not change coverage.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #274 +/- ##
======================================
Coverage 90.3% 90.3%
======================================
Files 57 57
Lines 1622 1622
Branches 194 194
======================================
Hits 1464 1464
Misses 89 89
Partials 69 69
Continue to review full report at Codecov.
Powered by Codecov. Last update e34e81c...84d7df4. Read the comment docs.
|
2025-04-01T06:38:56.518006
| 2024-08-05T02:36:25
|
2447472081
|
{
"authors": [
"4lxprime",
"NuZiuki240Hz"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6700",
"repo": "hereswhisper/EonBackend",
"url": "https://github.com/hereswhisper/EonBackend/issues/3"
}
|
gharchive/issue
|
Help me backend eon
Help me get the Eon backend to start.
This issue is pretty common with JavaScript and TypeScript: you need to execute npm i (which installs all required packages) and npm run build (which compiles the TypeScript code to JavaScript) before running npm start (the command started by start.bat). npm i has to be executed just once, and npm run build should be executed after any change to the code.
|
2025-04-01T06:38:56.519780
| 2024-08-06T12:27:31
|
2450761438
|
{
"authors": [
"NUCLEAR-WAR"
],
"license": "BSD-2-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6701",
"repo": "herlesupreeth/docker_open5gs",
"url": "https://github.com/herlesupreeth/docker_open5gs/pull/350"
}
|
gharchive/pull-request
|
logic update for QoS MOC mo.cfg
Added logic to process N5 QoS requests.
Used some variables to store session data in an HTABLE for use later in the signaling (AppSessionID / RTP port / RTCP port / user, etc.).
I think I need to modify the routing logic further: the N5 trigger should start with the INVITE and then PATCH/UPDATE it, as this will prevent creating new sessions on every response.
The next thing will also be to check whether it's the same fork, but that's not so important in this basic setup, as we don't have forking here.
Changed it now to trigger first on the initial INVITE and PATCH on the SDP answer.
Still facing an issue where in the PFCP I see no update to the addresses in the request, and the old ones from the initial request are still there.
|
2025-04-01T06:38:56.544236
| 2015-01-25T00:20:27
|
55390172
|
{
"authors": [
"evanphx",
"kch"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6702",
"repo": "heroku/rack-timeout",
"url": "https://github.com/heroku/rack-timeout/issues/61"
}
|
gharchive/issue
|
Consider adapting rubysl-timeout
Long ago, I rewrote timeout.rb so that it didn't require a 1-to-1 relationship between a thread and timeout thread. This meant that no matter the number of threads using timeouts, there was only one timeout thread. Given that rack-timeout uses that same 1-to-1 relationship, it is in danger of easily ballooning the number of threads used.
Here is the code for my timeout.rb you could adapt: https://github.com/rubysl/rubysl-timeout/blob/2.0/lib/rubysl/timeout/timeout.rb
You'd need remove the Rubinius::Channel stuff by using a ConditionVariable, but other than that, it should be easy to use.
This is not a bad idea, but I wonder how much it's a problem in practice.
A thread only lives as long as a request. A lot of Ruby web apps are single-threaded, and even the threaded ones don't tend to use many threads.
Is creating and destroying threads expensive?
@kch Creating threads is fairly expensive as far as operations go, yes. Another example: people use rack-timeout with puma, and I've seen people configure puma to use 2000 threads. Using rack-timeout in that case will result in an additional 2000 threads, as well as constantly tearing down and creating new ones.
Since you only want the ability to call Thread.raise after a time, using a single, persistent timer thread has significant upsides.
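To make the design point concrete, a language-neutral sketch (written in TypeScript purely for illustration; the real change would be in Ruby, as in rubysl-timeout): one shared timer services every registration instead of one watcher thread per request:
type Entry = { deadline: number; onExpire: () => void };
const entries = new Set<Entry>();

// Single shared polling loop, standing in for the one persistent timer thread.
setInterval(() => {
  const now = Date.now();
  for (const e of entries) {
    if (now >= e.deadline) {
      entries.delete(e);
      e.onExpire(); // plays the role of Thread#raise on the request thread
    }
  }
}, 100);

function registerTimeout(ms: number, onExpire: () => void): () => void {
  const entry: Entry = { deadline: Date.now() + ms, onExpire };
  entries.add(entry);
  return () => { entries.delete(entry); }; // cancel when the request finishes
}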
Fair enough. We'll do this. Might take me a while to get to it though.
@evanphx you define @mutex and @requests but don't seem to use them?
@kch Yeah, that's obviously dead code. But you'll have to adapt this to run on MRI anyway; it uses a Rubinius::Channel, which you don't have, to coordinate between the threads.
@evanphx right. was just pointing those out to you.
@evanphx in case you're curious, see #82
Implemented in #82; will be in beta2, tracked via #78.
|
2025-04-01T06:38:56.724070
| 2024-12-12T17:51:45
|
2736545210
|
{
"authors": [
"TruffleClock",
"mrlubos"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6724",
"repo": "hey-api/openapi-ts",
"url": "https://github.com/hey-api/openapi-ts/issues/1423"
}
|
gharchive/issue
|
client-fetch 0.5.3 breaks TanStack Query plugin
Description
After updating @hey-api/client-fetch to 0.5.3, requests using the TanStack Query plugin do not get sent at all.
Possibly caused by https://github.com/hey-api/openapi-ts/commit/646064d1aecea988d2b4df73bd24b2ee83394ae0
To reproduce (using the StackBlitz example below):
click "Generate random pet" -> no request is sent
change version to 0.5.2 -> request is sent
Reproducible example or configuration
https://stackblitz.com/edit/hey-api-client-fetch-plugin-tanstack-react-quer-babtkanp?file=package.json
OpenAPI specification (optional)
3.1.0
System information (optional)
all systems
Sorry about that! Fixed in the latest release.
|
2025-04-01T06:38:56.726388
| 2022-03-01T05:57:14
|
1154904222
|
{
"authors": [
"madeleineostoja",
"shannonrothe"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6725",
"repo": "heybokeh/pollen",
"url": "https://github.com/heybokeh/pollen/issues/67"
}
|
gharchive/issue
|
Add and configure jest
It would be cool to be able to write tests against PRs for this project, and I noticed Jest isn't set up yet! Happy to take some time to look into this one if somebody doesn't get to it before me 👍🏼
Yep, unit testing is definitely something I've been meaning to get to now that there's logic around the config / CSS generation. Jest is a good option, and I'm partial to uvu as well, but I'm not that fussed about what we go with; they're all much of a muchness.
Sounds good – I'm not too sold on Jest anyway, it's a bit of a pain to setup with ESM/TypeScript. Perhaps Vitest is an option too? 👀
|
2025-04-01T06:38:56.727536
| 2016-06-28T20:12:36
|
162775768
|
{
"authors": [
"elnat",
"hfiref0x"
],
"license": "bsd-2-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6726",
"repo": "hfiref0x/UACME",
"url": "https://github.com/hfiref0x/UACME/issues/6"
}
|
gharchive/issue
|
19 - Hybrid method, using InetMgr IIS module and based on 10 & 16 MS fixes, works from Windows 7 up to 10rs1 14372.
It doesn't work after building (((
Very informative post, thanks.
And it is fixed in 14376, forget about it.
|
2025-04-01T06:38:56.735240
| 2016-02-04T16:09:20
|
131395124
|
{
"authors": [
"Djafar1985",
"hgeorgako"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6727",
"repo": "hgeorgako/rfortraders",
"url": "https://github.com/hgeorgako/rfortraders/issues/3"
}
|
gharchive/issue
|
Part 2, A pairwise correlation example
Hi, Harry! How's your quantitative feeling? =)
I have a question about the content, as a young quant.
On page 37 you mention the csv file, but you have not created it; where do you get the information? You have described the basic operations: assignment, vectors, lists, matrices and data structures, but why did you omit the issue of exporting summary data in the form described on page 38?
I need help!
What is this?
extract_prices <- function(filtered_symbols, file_path = "rfortraders/Chapter_02/prices.csv") {
all_prices <- read.csv(file = file_path, header = TRUE, stringsAsFactors = FALSE)
rownames(all_prices) <- all_prices$Date
all_prices$Date <- NULL
valid_columns <- colnames(all_prices) %in% filtered_symbols
return(all_prices[, valid_columns])
}
I found something:
urlfile <- 'https://raw.githubusercontent.com/hgeorgako/rfortraders/master/Chapter_02/prices.csv'
dsin <- read.csv(urlfile)
The problem is the 1000-row print limit:
[ reached getOption("max.print") -- omitted 856 rows ]
Hopefully you already got this to work. The file you need is here: https://github.com/hgeorgako/rfortraders/blob/master/Chapter_02/prices.csv
The fastest way to get this is to download the file to your local computer and then use read.csv(file = "path on your local machine") to read in the data. Another useful package that I use all the time to read in .csv files is "readr".
I thought that you rarely came here, and I had lost hope.
I figured out how to load data into R from the site, but why the limit of 1000 lines?
I will try your method!
Thanks for the answer!
|
2025-04-01T06:38:56.743152
| 2015-03-26T14:14:26
|
64536575
|
{
"authors": [
"StachowP",
"hgrecco"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6728",
"repo": "hgrecco/pyvisa",
"url": "https://github.com/hgrecco/pyvisa/issues/127"
}
|
gharchive/issue
|
pyvisa 1.6.1 can not set stopbits
Hi,
i have a problem with pyvisa 1.6.1 and the RS232 setting.
My device requires two stop bits. But the Python syntax "device.stop_bits=2" returns an error "ValueError: 2 is an invalid value for attribute VI_ATTR_ASRL_STOP_BITS, should be a <enum 'StopBits'>".
You should use the constants. In the current dev version,
from pint import constants
device.stop_bits = constants.VI_ASRL_STOP_TWO
or
from pint import constants
device.stop_bits = constants.StopBits.two
In any case, we should improve the error message.
I installed the pint 0.6 version. Now the syntax "from pint import constants" returns an error "ImportError: cannot import name constants". Did I install the wrong pint version?
Sorry. it should have said
from visa import constants
or (in older pyvisa versions)
from pyvisa import constants
Still does not work.
File "C:/Workspaces/spyder/test/helloagilent.py", line 2, in
from visa import constants
ImportError: cannot import name constants
As mentioned before, from visa only works in more recent versions of pyvisa
I am closing this. Feel free to reopen.
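For reference, a minimal sketch of the working pattern from this thread, assuming a recent pyvisa version; the serial resource name is a made-up example:
import pyvisa
from pyvisa import constants

rm = pyvisa.ResourceManager()
device = rm.open_resource("ASRL1::INSTR")  # hypothetical serial resource name

# VI_ATTR_ASRL_STOP_BITS expects the StopBits enum, not a bare integer,
# which is why device.stop_bits = 2 raised the ValueError above.
device.stop_bits = constants.StopBits.two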
|
2025-04-01T06:38:56.745124
| 2019-10-08T19:41:29
|
504239177
|
{
"authors": [
"jekwatt"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6729",
"repo": "hgsc-project-managers/pm-utils",
"url": "https://github.com/hgsc-project-managers/pm-utils/issues/13"
}
|
gharchive/issue
|
Fix FutureWarning for missing label
Passing list-likes to .loc or [] with any missing label will raise
KeyError in the future, you can use .reindex() as an alternative.
See the documentation here:
https://pandas.pydata.org/pandas-docs/stable/indexing.html#deprecate-loc-reindex-listlike
return self._getitem_tuple(key)
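For context, a minimal sketch of the deprecated pattern and the suggested .reindex() alternative; the frame and labels are made up for illustration:
import pandas as pd

df = pd.DataFrame({"value": [1, 2]}, index=["a", "b"])

# Deprecated: list-like .loc with a missing label ("x") triggers the
# FutureWarning and will eventually raise KeyError.
# subset = df.loc[["a", "x"]]

# Alternative: .reindex() keeps the requested labels and fills the
# missing ones with NaN instead of raising.
subset = df.reindex(["a", "x"])
print(subset)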
Fixed in 6a8ecfb.
|
2025-04-01T06:38:56.749757
| 2016-09-07T09:15:22
|
175452315
|
{
"authors": [
"Matchile",
"citizenmatt"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6730",
"repo": "hhariri/CleanCode",
"url": "https://github.com/hhariri/CleanCode/issues/13"
}
|
gharchive/issue
|
No issues displayed on vb.net
Is the plugin intended to work with vb.net?
I tried two projects, C# and VB.NET. Only the first seems to work.
It only supports C#, I'm afraid. I'll update the readme to point this out.
|
2025-04-01T06:38:56.759854
| 2015-07-13T01:56:40
|
94619953
|
{
"authors": [
"givanse",
"hhff"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6731",
"repo": "hhff/spree-ember",
"url": "https://github.com/hhff/spree-ember/pull/89"
}
|
gharchive/pull-request
|
1.13
I'm still working on changes for the other addons, but I figured that it would be good to start pushing as I have them ready so I can have early feedback.
This PR upgrades the addons to ember-cli 1.13.1, but keeps ember 0.12.0 and ember-data 1.0.0-beta.18. I don't see a strong reason for that.
It eludes me at the moment and right now I need to get other stuff done :/
I'll have a look at Auth!
Just haven't got around to bumping the version of simple auth yet - it was in beta when 0.0.1 was released
@givanse - I should have time to work on this this week - can I have push access to your fork? I'll do all my work on this branch
@hhff Yes, added you.
thanks @givanse !
@hhff Did you update the backend that is used when the tests are run? (http://testing.spree-ember.com)
@givanse - I haven't - it's been the same for a while. Does it need a change?
Some tests that were ok before are failing now. Simple things like:
was expecting 'Bags'
actual result 'Mugs'
Maybe a sorting change in the taxons? I fixed it (edited for Mugs) and four more errors appeared. Maybe the seed data was edited?
I didn't spend much time looking into this, decided to wait for a reply. If nothing has changed, I'll check what is going on.
yeah i haven't changed it at all - but I'm guessing it's just an ordering thing - so if the tests explicitly "clicked" the mugs label rather than the first item, we should be cool
thank you so so much for working on this @givanse u are the mannnnn
|
2025-04-01T06:38:56.817663
| 2023-03-30T05:30:52
|
1646944513
|
{
"authors": [
"supa-freak"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6732",
"repo": "hi-reeve/vitailse",
"url": "https://github.com/hi-reeve/vitailse/issues/250"
}
|
gharchive/issue
|
Installation via npx does not work anymore
npx degit zynth17/vitailse my-vitailse-app
leads to
could not download https://github.com/zynth17/vitailse/archive/8005a59cc665b2cdbc21506a598080cd35cebcf4.tar.gz
Started working again after a while, not sure what the issue was.
|
2025-04-01T06:38:56.819419
| 2020-08-22T23:29:18
|
684084702
|
{
"authors": [
"hibachrach",
"joshtriplett"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6733",
"repo": "hibachrach/ruut",
"url": "https://github.com/hibachrach/ruut/issues/8"
}
|
gharchive/issue
|
Please consider splitting out tree formatting from the command-line utility
I'd love to have a library crate for formatting trees in the style of tree. Would you consider splitting out the tree rendering from ruut into a library crate, without the dependencies like structopt and serde_json that would only be used for the command-line tool?
This has finally been completed in a5902d62812b4c671ef0e368415dae09b8dddd00. See https://github.com/hibachrach/render_as_tree. Thanks for filing this!
|
2025-04-01T06:38:56.820455
| 2019-07-25T09:10:12
|
472752030
|
{
"authors": [
"Sanne"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6734",
"repo": "hibernate/hibernate-orm",
"url": "https://github.com/hibernate/hibernate-orm/pull/2954"
}
|
gharchive/pull-request
|
HHH-13511 Remove interning of aliases in org.hibernate.loader.DefaultEntityAliases
Being discussed with @dreab8 and @gsmet
fixed and merged
|
2025-04-01T06:38:56.822903
| 2022-01-25T02:03:37
|
1113337365
|
{
"authors": [
"TheCodePope",
"beikov"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6735",
"repo": "hibernate/hibernate-orm",
"url": "https://github.com/hibernate/hibernate-orm/pull/4700"
}
|
gharchive/pull-request
|
Fix race condition that allowed Component.getType() to return null
This fix prevents an NPE in org.hibernate.mapping.SimpleValue.isValid() and likely elsewhere
Thanks for the fix!
|
2025-04-01T06:38:56.823967
| 2022-11-15T09:34:24
|
1449462878
|
{
"authors": [
"marko-bekhta",
"yrodiere"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6736",
"repo": "hibernate/hibernate-search",
"url": "https://github.com/hibernate/hibernate-search/pull/3308"
}
|
gharchive/pull-request
|
HSEARCH-4722 Fix postgresql container start timing out on CI
https://hibernate.atlassian.net/browse/HSEARCH-4722
Backported to 6.1 and 6.0.
|
2025-04-01T06:38:56.828614
| 2023-03-08T13:17:38
|
1615258687
|
{
"authors": [
"marko-bekhta",
"yrodiere"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6737",
"repo": "hibernate/hibernate-search",
"url": "https://github.com/hibernate/hibernate-search/pull/3437"
}
|
gharchive/pull-request
|
HSEARCH-2366 Add an Elasticsearch mapping export tool
https://hibernate.atlassian.net/browse/HSEARCH-2366
Should we add an options parameter to the export method, even if it'll be empty for now (or contain a setting to export mappings, which is what it does now), so that we won't break the API in the future? Or, as it's incubating, should we not bother with it now and just break the API later? 😄
Then there was also a mention of a CLI ... is that something we need, or was that just a thought?
[ERROR] --[ Constraint Violation ]-----------------------------------------
[ERROR] Constraint: hsearch:TypesExtendingTypeFromAnotherModuleMustHaveModuleSpecificKeywordInName
[ERROR] Severity: MAJOR
[ERROR] Number of rows: 2
[ERROR] Main (non-test) types extending/implementing a Hibernate Search type from another module must have a module-specific keyword in their name,
[ERROR] either at the very start or just after "Abstract".
[ERROR] This allows to more easily understand which module a given type comes from.
[ERROR] Exceptions are allowed when:
[ERROR] - the misnamed type is an anonymous type or a generated type;
[ERROR] - or the misnamed type is deprecated;
[ERROR] - or the implemented type is deprecated;
[ERROR] - or the implementing type is an inner/nested type;
[ERROR] and its surrounding type does use the module-specific keyword;
[ERROR] - or the implemented type is in a util module,
[ERROR] in which case the implemented interface may just be a detail.
[ERROR]
[ERROR] misnamedTypeArtifact.moduleSpecificKeyword=Pojo, misnamedType=org.hibernate.search.mapper.pojo.schema.management.impl.FileSearchSchemaCollector, externalParentType=org.hibernate.search.engine.backend.schema.management.IndexSchemaCollector
[ERROR] misnamedTypeArtifact.moduleSpecificKeyword=Pojo, misnamedType=org.hibernate.search.mapper.pojo.schema.management.SearchSchemaCollector, externalParentType=org.hibernate.search.engine.backend.schema.management.IndexSchemaCollector
[ERROR] -------------------------------------------------------------------
[ERROR]
I cannot say that I like the idea of naming these PojoSearchSchemaCollector and PojoFileSearchSchemaCollector / PojoSearchSchemaCollectorToFiles ... 🥲
I cannot say that I like the idea of naming these PojoSearchSchemaCollector and PojoFileSearchSchemaCollector / PojoSearchSchemaCollectorToFiles ...
The one about FileSearchSchemaCollector will disappear once we solve the problem for SearchSchemaCollector, I think.
Two solutions:
Add an exception to the JQAssistant rule...
Use composition instead of inheritance. I.e. make IndexSchemaCollector SPI, don't extend it in SearchSchemaCollector, declare an exportExpectedSchema method in SearchSchemaCollector, and in org.hibernate.search.mapper.pojo.schema.management.impl.PojoScopeSchemaManagerImpl#exportExpectedSchema(org.hibernate.search.mapper.pojo.schema.management.SearchSchemaCollector) create an instance of IndexSchemaCollector that delegates to the SearchSchemaCollector.
Your choice!
|
2025-04-01T06:38:56.832040
| 2020-05-13T10:42:17
|
617341658
|
{
"authors": [
"Hibernate-CI",
"gsmet",
"jGauravGupta"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6738",
"repo": "hibernate/hibernate-validator",
"url": "https://github.com/hibernate/hibernate-validator/pull/1095"
}
|
gharchive/pull-request
|
HV-1775 Rename CDI extension service file for new jakarta package
Signed-off-by: Gaurav Gupta<EMAIL_ADDRESS>
Can one of the admins add this person to the trusted builders? (reply with: "add to whitelist" or "ok to test")
ok to test
@jGauravGupta thanks! Unfortunately, I wasn't able to test the CDI extension back at the time because I had no CDI implementation available.
I amended your commit to include a JIRA issue number.
Do you have more fixes coming or should I release an Alpha2?
OK, I was able to restore the CDI extension tests as Weld has made progress on their side.
Following your PR, all the tests pass so I'm releasing 7.0.0.Alpha2. It should be synced to Central sometime tomorrow.
@jGauravGupta Alpha2 is now available on Central.
Thanks @gsmet
|
2025-04-01T06:38:56.881281
| 2023-06-16T00:03:50
|
1759693018
|
{
"authors": [
"mbkumar"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6745",
"repo": "hiddenSymmetries/simsopt",
"url": "https://github.com/hiddenSymmetries/simsopt/pull/326"
}
|
gharchive/pull-request
|
Descriptor protocol
This PR introduces the descriptor protocol for constrained class fields, so one (mostly) doesn't have to test for the constraints on the arguments in the init method.
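For illustration, a minimal sketch of the general idea, not the PR's actual code; the class and field names below are hypothetical:
class PositiveFloat:
    # Descriptor that enforces a positivity constraint on assignment.
    def __set_name__(self, owner, name):
        self._name = "_" + name

    def __get__(self, obj, objtype=None):
        return getattr(obj, self._name)

    def __set__(self, obj, value):
        if value <= 0:
            raise ValueError(f"{self._name[1:]} must be positive")
        setattr(obj, self._name, float(value))

class Coil:  # hypothetical class
    current = PositiveFloat()  # the constraint lives here, not in __init__

    def __init__(self, current):
        self.current = current  # triggers PositiveFloat.__set__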
@landreman
This is ready for review
|
2025-04-01T06:38:56.884064
| 2018-09-19T19:59:36
|
361904747
|
{
"authors": [
"LouDou",
"hideakitai"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6746",
"repo": "hideakitai/MCP4728",
"url": "https://github.com/hideakitai/MCP4728/issues/1"
}
|
gharchive/issue
|
Function missing return statement
Hi,
First, thanks for the lib! It has helped me immensely get the DAC working with zero effort :)
However, there is just one issue reported by the compiler, which would be nice if it were fixed:
MCP4728.h: In member function 'uint8_t MCP4728::analogWrite(MCP4728::DAC_CH, uint16_t, bool)':
MCP4728.h:44:5: warning: no return statement in function returning non-void [-Wreturn-type]
The warning is pretty self-explanatory, literally just the return keyword is missing in the method implementation: https://github.com/hideakitai/MCP4728/blob/master/MCP4728.h#L43
Thanks,
Doug.
Hi, thank you for using and reporting issue!
I've fixed it in b2462e0e867b733ec51de99e3ff040c22e86780d, so check it out and let me know if you have another problem :) Thanks!
|
2025-04-01T06:38:56.977358
| 2021-10-01T17:14:52
|
1013585151
|
{
"authors": [
"Jonxslays",
"davfsa",
"norinorin"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6749",
"repo": "hikari-py/hikari",
"url": "https://github.com/hikari-py/hikari/pull/809"
}
|
gharchive/pull-request
|
Implement GuildJoinEvent
Summary
Separate the guild join event from the guild available event since, apparently, new guilds won't have the unavailable field present.
Checklist
[x] I have run nox and all the pipelines have passed.
[ ] I have made unittests according to the code I have added/modified/deleted.
Will the fragment go under feature or breaking?
likely breaking.
One test case missing, but LGTM
Is it related to guild available event?
Amazingly GitHub didn't post my comment with it. Thanks GitHub.
The test case is to check what event gets deserialized based on whether unavailable is there or not
Happens I guess. I see, I'll get into it today.
Added more test cases on 41ce43c4260cd566c6ef94fdf70839787508ddca. It's mostly copy-pasted because the only difference was the event type. Should I parametrize this?
|
2025-04-01T06:38:56.990323
| 2015-02-18T16:36:37
|
58091926
|
{
"authors": [
"MichaelTurk",
"dingo-d",
"hilios",
"vilius-g"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6750",
"repo": "hilios/jQuery.countdown",
"url": "https://github.com/hilios/jQuery.countdown/issues/100"
}
|
gharchive/issue
|
Still having odd display issue
After implementing the CSS fix from issue #72, we're still seeing odd display flickers and artifacts. To make sure it was not just our instance, I pulled up the countdown homepage and was able to duplicate the problem there (see attached).
Sometimes it is a relatively small line like this:
Sometimes it is significantly larger:
Anyone have thoughts as to what would be causing that? We're seeing it on a number of displays, cards, and OSes, so it doesn't seem to be specific to the hardware or the OS. It does, however, seem to be more common in Chrome than any other browser.
Thank you for your feedback, but this problem seems to be caused by hardware acceleration; I don't know if there is a way around it via CSS/HTML.
I never saw this bug on my computer though, and I use Chrome as my main browser. I think we need to investigate the real cause of this issue.
This happens on Chrome and Opera if I recall correctly. On FF there is no such bug, so it's definitely a rendering issue.
Is it specific to the "flip" clock implementation? I have used the clock
without the flip animation and never had issues. So I'm wondering if it's
something in that specific use case.
For me it happened only with the flip clock. From what I can tell other people only have the issue with the flip clock as well.
Guys, it's an issue with the CSS of the flip clock. I really don't have much experience with 3D animations and hardware acceleration at the CSS level. Can you guys guess where we should begin to solve this problem?
The issue seems to go away when the perspective value for .main-example .time is under 480px. When I set it to 479px everything is displayed perfectly, but when I increase it to 480px the artifacts come back. This is very weird but seems to work for me.
For me it goes away when I put perspective to 0px;
The problem is not the script itself, but the machine's hardware
acceleration! The output will depend on your computer.
Yeah, I think it also depends on the browser, because if I recall correctly, on firefox the issue didn't exist. And ff is known to handle css3 transformations better than chrome (or webkit based browsers).
Another thing I noticed that at transform: rotateX(90deg) the card is still visible. It seems that the transformation origin is not aligned properly.
|
2025-04-01T06:38:56.999871
| 2021-06-20T12:03:27
|
925583910
|
{
"authors": [
"JingZhang918",
"Miffyli"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6751",
"repo": "hill-a/stable-baselines",
"url": "https://github.com/hill-a/stable-baselines/issues/1124"
}
|
gharchive/issue
|
Installation Error: Stable_baselines
Hey guys! Basically I was just trying to install stable_baselines on Mac m1
Here is the code:
pip install stable-baselines
stable_baselines_error.txt
I got nothing. Please help me!!!!!
Looks like either Numpy or OpenCV could not be installed. I would recommend checking their communities for help on this.
PS: stable-baselines3 is more actively maintained, so you might want to try if it works :)
Thank You for your advice. I did try stable-baselines3 then I got:
stable_baselines3_error.txt
Then back to stable_baselines, I thought I should fix the opencv problem first, so I try to install OpenCV, then I got:
opencv_error.txt
I think there is a link.
Yeah indeed these are numpy and OpenCV errors. Looks like the older versions of these libraries do not work.
Try installing stable-baselines3 with only pip install stable-baselines3 (without the [extra]). That skips OpenCV installation but it is only used for some rendering purposes (visualizing envs etc).
pip_install_stable-baselines3.txt
I used pip install stable-baselines3 still got error. This sucks :(
I just realized we had the same problem here: https://github.com/DLR-RM/stable-baselines3/issues/360
Try using conda to install libraries instead of pip.
Last login: Mon Jun 21 10:24:42 on ttys001
(base) zhangjing@zhangjingdeMacBook-Pro ~ % conda activate tf_m1
(tf_m1) zhangjing@zhangjingdeMacBook-Pro ~ % conda install -c conda-forge stable-baselines3 -y
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: |
Found conflicts! Looking for incompatible packages.
This can take several minutes. Press CTRL-C to abort.
failed
UnsatisfiableError: The following specifications were found to be incompatible with each other:
Output in format: Requested package -> Available versions
(tf_m1) zhangjing@zhangjingdeMacBook-Pro ~ % conda install stable-baselines
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
PackagesNotFoundError: The following packages are not available from current channels:
- stable-baselines
Current channels:
- https://repo.anaconda.com/pkgs/main/osx-arm64
- https://repo.anaconda.com/pkgs/main/noarch
- https://repo.anaconda.com/pkgs/r/osx-arm64
- https://repo.anaconda.com/pkgs/r/noarch
- https://conda.anaconda.org/conda-forge/osx-arm64
- https://conda.anaconda.org/conda-forge/noarch
To search for alternate channels that may provide the conda package you're
looking for, navigate to
https://anaconda.org
and use the search bar at the top of the page.
(tf_m1) zhangjing@zhangjingdeMacBook-Pro ~ % conda install stable-baselines3
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: -
Found conflicts! Looking for incompatible packages.
This can take several minutes. Press CTRL-C to abort.
failed
UnsatisfiableError: The following specifications were found to be incompatible with each other:
Output in format: Requested package -> Available versions
Still nothing 0.0
stable-baselines is not in the conda repositories... Please look at the link I gave you.
I am not able to give further technical support on this, as the issue lies in the dependencies of stable-baselines, not stable-baselines itself, so I am closing this issue as "no tech support".
|
2025-04-01T06:38:57.008964
| 2021-11-30T13:23:54
|
1067233829
|
{
"authors": [
"himashi92",
"xiaoiker"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6752",
"repo": "himashi92/VT-UNet",
"url": "https://github.com/himashi92/VT-UNet/issues/1"
}
|
gharchive/issue
|
Visualization part
Hi,
Great work and thanks for sharing the code. Could I ask how you visualized the results? Could you share the scripts?
Best,
Wei
Hi Wei,
Once you have trained the model, you can use the test.py script to generate predictions, and these predictions will be saved as MRI volumes. You can use an MRI/CT viewing application to view those predictions. For example, I use the Mango software: https://en.wikipedia.org/wiki/Mango_(software) . There is an option to take snapshots of the slices that you want to save as images.
Best Regards,
Himashi
|
2025-04-01T06:38:57.044364
| 2016-11-03T14:07:33
|
187069380
|
{
"authors": [
"hirohisa",
"lalmachado"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6753",
"repo": "hirohisa/ImageLoaderSwift",
"url": "https://github.com/hirohisa/ImageLoaderSwift/issues/94"
}
|
gharchive/issue
|
I am getting this error, only on devices: Crashed: swift.imageloader.queues.io
UIImageView+ImageLoader.swift line 29
specialized UIImageView.(url.getter).(closure #1)
Thank you for the issue.
I fixed it in #95.
I released 0.12.0.
|
2025-04-01T06:38:57.048630
| 2023-10-02T15:22:51
|
1922135625
|
{
"authors": [
"MicaiahReid",
"csgui",
"losatnick"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6754",
"repo": "hirosystems/docs",
"url": "https://github.com/hirosystems/docs/issues/486"
}
|
gharchive/issue
|
Broken link -- Page not found --using clarity values --> "response value utilization"
Link to "response value utilization much simpler" sends to a 'page not found'
Hi @losatnick !
The mentioned link seems to be working properly. Can we close this issue?
The link still isn't working for me - I've attached a screenshot of the section I'm referencing
Are you following this link? https://docs.hiro.so/get-started/stacks-blockchain-api#using-clarity-values
Ah yeah - the second link is the one I was viewing when I discovered the broken link.
All the docs I'm reviewing I'm getting to from hiro.so --> docs
Fixed by https://github.com/hirosystems/stacks-blockchain-api/pull/1726
|
2025-04-01T06:38:57.090378
| 2020-04-02T21:16:23
|
592920619
|
{
"authors": [
"hishamco",
"shahabganji"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6755",
"repo": "hishamco/My.Extensions.Localization.Json",
"url": "https://github.com/hishamco/My.Extensions.Localization.Json/issues/43"
}
|
gharchive/issue
|
When registering the service, one should provide an absolute path, otherwise an exception is thrown
When registering the service via services.AddJsonLocalization, if the path is not an absolute one, the following exception is thrown when the files are resolved by IStringLocalizer:
// this will cause an error when _localizer["Hello"] is called later on
services.AddJsonLocalization(options => options.ResourcesPath = "Resources");
it should be changed to
var directory = Path.Combine(Directory.GetCurrentDirectory(), "Resources");
services.AddJsonLocalization(options => options.ResourcesPath = directory);
The latter will work perfectly.
I think that extra line can be moved to the library itself.
System.ArgumentException: The path must be absolute. (Parameter 'root')
at Microsoft.Extensions.FileProviders.PhysicalFileProvider..ctor(String root, ExclusionFilters filters)
at Microsoft.Extensions.FileProviders.PhysicalFileProvider..ctor(String root)
at Microsoft.Extensions.Configuration.FileConfigurationExtensions.SetBasePath(IConfigurationBuilder builder, String basePath)
at My.Extensions.Localization.Json.JsonStringLocalizer.<>c__DisplayClass17_0.<BuildResourcesCache>b__0(String _)
at System.Collections.Concurrent.ConcurrentDictionary`2.GetOrAdd(TKey key, Func`2 valueFactory)
at My.Extensions.Localization.Json.JsonStringLocalizer.BuildResourcesCache(String culture)
at My.Extensions.Localization.Json.JsonStringLocalizer.GetStringSafely(String name)
at My.Extensions.Localization.Json.JsonStringLocalizer.get_Item(String name)
at My.Extensions.Localization.Json.Internal.StringLocalizer.get_Item(String name)
HEADERS
=======
Accept-Language: fr-FR
Thanks for reporting this, but did you try to run the sample? It should work as expected.
Dear @hishamco, thanks for the prompt reply, yes I ran the sample with a controller trying to use IStringLocalizer
@shahabganji let me try your scenario to make sure that I can reproduce the bug; if it turns out to be a real bug, you could send a PR if you want ..
Thanks
@hishamco The PR created, hope it helps
I need to check the sample with your changes first before I review the PR
Thanks
@shahabganji I think there's some confusion here: you're using IStringLocalizer instead of IStringLocalizer<T>, which may cause strange behavior.
Please refer to my newly added sample here.
Hope it clarifies which localizer you should prefer, as well as the resource name conventions.
Merged into dev
@hishamco
I still get the same error, even with IStringLocalizer<HomeController>; besides, using IStringLocalizer<T> only loads messages from the T.{culture}.json file, right? However, using the non-generic IStringLocalizer will load messages that are not tied to a specific class/controller, like some globally shared messages.
I added the following to resolve the problem in Startup -> ConfigureServices
services.AddJsonLocalization(
options => options.ResourcesPath = Path.Combine(
Directory.GetCurrentDirectory(), "Resources"));
Without that, it is like this:
Note that the response is 500 Internal Server Error.
And this is the log:
Microsoft.AspNetCore.Server.Kestrel: Error: Connection id "0HLUNLKCTNMP6", Request id "0HLUNLKCTNMP6:00000001": An unhandled exception was thrown by the application.
System.ArgumentException: The path must be absolute. (Parameter 'root')
at Microsoft.Extensions.FileProviders.PhysicalFileProvider..ctor(String root, ExclusionFilters filters)
at Microsoft.Extensions.FileProviders.PhysicalFileProvider..ctor(String root)
at Microsoft.Extensions.Configuration.FileConfigurationExtensions.SetBasePath(IConfigurationBuilder builder, String basePath)
at My.Extensions.Localization.Json.JsonStringLocalizer.<>c__DisplayClass16_0.<BuildResourcesCache>b__0(String _) in /Users/shahab/dev/playground/My.Extensions.Localization.Json/src/My.Extensions.Localization.Json/JsonStringLocalizer.cs:line 173
at System.Collections.Concurrent.ConcurrentDictionary`2.GetOrAdd(TKey key, Func`2 valueFactory)
at My.Extensions.Localization.Json.JsonStringLocalizer.BuildResourcesCache(String culture) in /Users/shahab/dev/playground/My.Extensions.Localization.Json/src/My.Extensions.Localization.Json/JsonStringLocalizer.cs:line 162
at My.Extensions.Localization.Json.JsonStringLocalizer.GetStringSafely(String name) in /Users/shahab/dev/playground/My.Extensions.Localization.Json/src/My.Extensions.Localization.Json/JsonStringLocalizer.cs:line 101
at My.Extensions.Localization.Json.JsonStringLocalizer.get_Item(String name) in /Users/shahab/dev/playground/My.Extensions.Localization.Json/src/My.Extensions.Localization.Json/JsonStringLocalizer.cs:line 42
at Microsoft.Extensions.Localization.StringLocalizer`1.get_Item(String name)
at LocalizationSample.Mvc.Controllers.HomeController.Index() in /Users/shahab/dev/playground/My.Extensions.Localization.Json/samples/LocalizationSample.Mvc/Controllers/HomeController.cs:line 19
at lambda_method(Closure , Object , Object[] )
at Microsoft.Extensions.Internal.ObjectMethodExecutor.Execute(Object target, Object[] parameters)
at Microsoft.AspNetCore.Mvc.Infrastructure.ActionMethodExecutor.SyncObjectResultExecutor.Execute(IActionResultTypeMapper mapper, ObjectMethodExecutor executor, Object controller, Object[] arguments)
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.InvokeActionMethodAsync()
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.Next(State& next, Scope& scope, Object& state, Boolean& isCompleted)
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.InvokeNextActionFilterAsync()
--- End of stack trace from previous location where exception was thrown ---
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.Rethrow(ActionExecutedContextSealed context)
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.Next(State& next, Scope& scope, Object& state, Boolean& isCompleted)
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.InvokeInnerFilterAsync()
--- End of stack trace from previous location where exception was thrown ---
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeFilterPipelineAsync>g__Awaited|19_0(ResourceInvoker invoker, Task lastTask, State next, Scope scope, Object state, Boolean isCompleted)
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeAsync>g__Awaited|17_0(ResourceInvoker invoker, Task task, IDisposable scope)
at Microsoft.AspNetCore.Routing.EndpointMiddleware.<Invoke>g__AwaitRequestTask|6_0(Endpoint endpoint, Task requestTask, ILogger logger)
at Microsoft.AspNetCore.Localization.RequestLocalizationMiddleware.Invoke(HttpContext context)
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.HttpProtocol.ProcessRequests[TContext](IHttpApplication`1 application)
That means the error occurs when you are using IStringLocalizer alone or beside IStringLocalizer<T>, right?
I just checked out the project and ran the new sample as a dotnet project. I have the same error no matter which type of usage; IStringLocalizer<T> alone also produces the error, and using both together also produces the error.
my current controller :
using System;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Localization;
namespace LocalizationSample.Mvc.Controllers
{
[Route("")]
public class HomeController : Controller
{
private readonly IStringLocalizer<HomeController> _localizer;
public HomeController(IStringLocalizer<HomeController> localizer)
{
_localizer = localizer ?? throw new ArgumentNullException(nameof(localizer));
}
public string Index()
{
var result = _localizer["Hello"].Value;
return result;
}
}
}
and Startup.cs
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
namespace LocalizationSample.Mvc
{
public class Startup
{
public void ConfigureServices(IServiceCollection services)
{
services.AddControllers();
services.AddJsonLocalization(options => options.ResourcesPath = "Resources");
}
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
app.UseRequestLocalization("en-US", "fr-FR");
if (env.IsDevelopment())
{
app.UseDeveloperExceptionPage();
}
app.UseRouting();
app.UseEndpoints(endpoints => endpoints.MapControllers());
}
}
}
I have two resource files, HomeController.fr-FR.json and fr-FR.json
I changed the line to services.AddJsonLocalization( options => options.ResourcesPath = Path.Combine(Directory.GetCurrentDirectory(), "Resources")); and it works like a charm.
Also changing Line 174 to
.SetBasePath(Path.Combine(Directory.GetCurrentDirectory() ,_resourcesPath))
solved the issue.
I did a few updates after I added support for JsonHtmlLocalizer; I will try to use IStringLocalizer alongside IStringLocalizer to reproduce the error.
You mean IStringLocalizer alongside IStringLocalizer<T>? But in that case IStringLocalizer<T> also throws an exception. Anyway, thanks for the follow-ups.
Seems I got confused; could you please try the latest sample LocalizationSample.Mvc and let me know what changes are required to raise the error.
I exactly did that. please check here and here
I just checked out the project and ran the new sample as a dotnet project. I have the same error no matter which type of usage; IStringLocalizer alone also produces the error, and using both together also produces the error.
Seems you didn't fetch the latest bits ;) I just merged new commits. Anyhow, I don't have such a T.{culture}.json file; furthermore, no error occurs on IStringLocalizer<HomeController> at all.
This is how the French version of the Privacy page looks at my end.
By T I meant HomeController. I pulled the latest version; however, the problem still exists.
URL: https://localhost:8001/Home/Privacy/?culture-fr-FR&ui-culture=fr-FR
error the same as before:
Microsoft.AspNetCore.Server.Kestrel: Error: Connection id "0HLUNMASR1PM7", Request id "0HLUNMASR1PM7:00000001": An unhandled exception was thrown by the application.
System.ArgumentException: The path must be absolute. (Parameter 'root')
at Microsoft.Extensions.FileProviders.PhysicalFileProvider..ctor(String root, ExclusionFilters filters)
at Microsoft.Extensions.FileProviders.PhysicalFileProvider..ctor(String root)
at Microsoft.Extensions.Configuration.FileConfigurationExtensions.SetBasePath(IConfigurationBuilder builder, String basePath)
at My.Extensions.Localization.Json.JsonStringLocalizer.<>c__DisplayClass16_0.<BuildResourcesCache>b__0(String _) in /Users/shahab/dev/playground/My.Extensions.Localization.Json/src/My.Extensions.Localization.Json/JsonStringLocalizer.cs:line 173
at System.Collections.Concurrent.ConcurrentDictionary`2.GetOrAdd(TKey key, Func`2 valueFactory)
at My.Extensions.Localization.Json.JsonStringLocalizer.BuildResourcesCache(String culture) in /Users/shahab/dev/playground/My.Extensions.Localization.Json/src/My.Extensions.Localization.Json/JsonStringLocalizer.cs:line 162
at My.Extensions.Localization.Json.JsonStringLocalizer.GetStringSafely(String name) in /Users/shahab/dev/playground/My.Extensions.Localization.Json/src/My.Extensions.Localization.Json/JsonStringLocalizer.cs:line 101
at My.Extensions.Localization.Json.JsonStringLocalizer.get_Item(String name) in /Users/shahab/dev/playground/My.Extensions.Localization.Json/src/My.Extensions.Localization.Json/JsonStringLocalizer.cs:line 42
at Microsoft.Extensions.Localization.StringLocalizer`1.get_Item(String name)
at LocalizationSample.Mvc.Controllers.HomeController.Privacy() in /Users/shahab/dev/playground/My.Extensions.Localization.Json/samples/LocalizationSample.Mvc/Controllers/HomeController.cs:line 22
at lambda_method(Closure , Object , Object[] )
at Microsoft.Extensions.Internal.ObjectMethodExecutor.Execute(Object target, Object[] parameters)
at Microsoft.AspNetCore.Mvc.Infrastructure.ActionMethodExecutor.SyncActionResultExecutor.Execute(IActionResultTypeMapper mapper, ObjectMethodExecutor executor, Object controller, Object[] arguments)
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.InvokeActionMethodAsync()
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.Next(State& next, Scope& scope, Object& state, Boolean& isCompleted)
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.InvokeNextActionFilterAsync()
--- End of stack trace from previous location where exception was thrown ---
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.Rethrow(ActionExecutedContextSealed context)
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.Next(State& next, Scope& scope, Object& state, Boolean& isCompleted)
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.InvokeInnerFilterAsync()
--- End of stack trace from previous location where exception was thrown ---
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeNextResourceFilter>g__Awaited|24_0(ResourceInvoker invoker, Task lastTask, State next, Scope scope, Object state, Boolean isCompleted)
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.Rethrow(ResourceExecutedContextSealed context)
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.Next(State& next, Scope& scope, Object& state, Boolean& isCompleted)
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.InvokeFilterPipelineAsync()
--- End of stack trace from previous location where exception was thrown ---
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeAsync>g__Awaited|17_0(ResourceInvoker invoker, Task task, IDisposable scope)
at Microsoft.AspNetCore.Routing.EndpointMiddleware.<Invoke>g__AwaitRequestTask|6_0(Endpoint endpoint, Task requestTask, ILogger logger)
at Microsoft.AspNetCore.Localization.RequestLocalizationMiddleware.Invoke(HttpContext context)
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.HttpProtocol.ProcessRequests[TContext](IHttpApplication`1 application)
I did the same thing to solve the problem and it worked.
I don't know whether the OS matters or not, mine is Mac OS X
I see, I didn't know you were running on Mac OS, so I need to create a functional test so that the Azure Pipeline fails if there's an issue.
I will try to create it ASAP.
Thanks again
Also I will try your change to check if it works for me too on Windows.
If you wish to create a functional test that runs the LocalizationSample.Mvc in your PR, that would be very good to make sure that your changes work on every OS.
No problem, I'll put some time into that.
I requested a few changes on your PR; please fix them and let me know if you need help creating a functional test.
Sure, just give me some time till the evening; I'm currently in the middle of some important stuff, is that okay?
Take your time .. I will try to fix and improve something else in the meantime.
Dear @hishamco, check PR #44. I've added a functional test for one of the samples, LocalizationSample; do we need another one for LocalizationSample.Mvc?
I will have a look ...
Thanks
@hishamco Please also update the nuget feed. thanks 🙏
Do you mean upload a NuGet package?
Yes.
I need to add some new features, then I will release 2.1.0; hope this will not take more than a few days.
|
2025-04-01T06:38:57.108963
| 2024-02-02T07:10:15
|
2114262547
|
{
"authors": [
"MaksymOsovitnii",
"awacode21",
"did",
"its-lewin",
"lud-hu",
"positiveprogrammer"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6756",
"repo": "histoire-dev/histoire",
"url": "https://github.com/histoire-dev/histoire/issues/668"
}
|
gharchive/issue
|
NuxtIcon not rendered in Histoire
Describe the bug
After having a lot of trouble trying to get Storybook running in my nuxt project, I figured I'd give histoire a try.
But unfortunately I'm running into a lot of problems as well.
One of them is that <NuxtIcon /> is just not rendered at all, though it works fine in the nuxt app.
Reproduction
https://github.com/lud-hu/nuxt-histoire-reproduction
System Info
System:
OS: macOS 14.0
CPU: (8) arm64 Apple M1
Memory: 68.38 MB / 16.00 GB
Shell: 5.9 - /bin/zsh
Binaries:
Node: 18.19.0 - /opt/homebrew/opt/node@18/bin/node
Yarn: 1.22.19 - /opt/homebrew/bin/yarn
npm: 10.2.3 - /opt/homebrew/opt/node@18/bin/npm
pnpm: 8.15.1 - /opt/homebrew/bin/pnpm
Browsers:
Chrome: 121.0.6167.139
Edge: 121.0.2277.98
Safari: 17.0
npmPackages:
@histoire/plugin-nuxt: ^0.17.9 => 0.17.9
@histoire/plugin-vue: ^0.17.9 => 0.17.9
histoire: ^0.17.9 => 0.17.9
Used Package Manager
yarn
Validations
[X] Follow our Code of Conduct
[X] Read the Contributing Guidelines.
[X] Read the docs.
[X] Check that there isn't already an issue that reports the same bug to avoid creating a duplicate.
[X] Check that this is a concrete bug. For Q&A open a GitHub Discussion.
[X] The provided reproduction is a minimal reproducible example of the bug.
Same here. I found two issues.
Problem 1: NuxtIcon is an async component, so wrap NuxtIcon with Suspense.
Problem 2: NuxtIcon is accessing the vueApp.component that is stubbed out.
My workaround was to add a stub of the vueApp.component in the useNuxtApp composable stub of histoire. Or add guard code for vueApp.component access on the NuxtIcon side.
If we could override the useNuxtApp stub, that would be nice for these problems.
Thanks for the fix, @Akryum!
Do you publish new versions of histoire to npmjs regularly? The 0.17.11 that contains this fix is marked as a release here at github but not yet published to npmjs. Thx!
It's published in "@histoire/plugin-vue": "^0.17.11".
My workaround was to add a stub of the vueApp.component in the useNuxtApp composable stub of histoire.
@positiveprogrammer Can you elaborate on how you did that?
You can find the stub of the composables in histoire/plugin-nuxt/runtime/composables.mjs.
You could change the stub like the one below; just skip the exception:
export const useNuxtApp = () => ({ runWithContext: async (fn) => await fn(), vueApp: { components: [] } })
Hi all, thanks for the great lib.
But how was the described issue fixed? How can NuxtIcon be used in stories? @positiveprogrammer as I understood it, you found the solution, correct?
@MaksymOsovitnii I found 2 issues; the first one was already fixed by the Histoire team, and the second issue is a little bit ambiguous, since NuxtIcon tries to access vueApp to add NuxtIcon as a global component, but Histoire stubs the useNuxtApp composable, so NuxtIcon throws an exception.
You can take a look at nuxt-icon/dist/runtime/Icon.Vue
When you use nuxt-icon in your story, the highlighted code will throw an exception. So I proposed a workaround: manually add vueApp to the composable stub in histoire.
@positiveprogrammer , thank you! Now it's clear to me.
Might be related: https://github.com/histoire-dev/histoire/issues/746
a potential hacky solution 👉 https://github.com/histoire-dev/histoire/discussions/773
|
2025-04-01T06:38:57.148513
| 2023-04-21T04:52:54
|
1677801343
|
{
"authors": [
"hiyouga",
"michaeloo0"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6757",
"repo": "hiyouga/ChatGLM-Efficient-Tuning",
"url": "https://github.com/hiyouga/ChatGLM-Efficient-Tuning/issues/18"
}
|
gharchive/issue
|
Dataset doesn't exist.
FileNotFoundError: Couldn't find a dataset script at
/content/JosephusCheung/GuanacoDataset/GuanacoDataset.py or any data file in the
same directory. Couldn't find 'JosephusCheung/GuanacoDataset' on the Hugging
Face Hub either: FileNotFoundError: Dataset 'JosephusCheung/GuanacoDataset'
doesn't exist on the Hub. If the repo is private or gated, make sure to log in
with `huggingface-cli login`.
Please check your internet connection, or log in with huggingface-cli login.
We strongly recommend logging in with your HuggingFace account since this dataset requires confirmation before using it.
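For reference, a minimal sketch of the login-then-load flow, assuming the datasets and huggingface_hub packages are installed:
from huggingface_hub import login
from datasets import load_dataset

login()  # paste a HuggingFace access token when prompted

# This works only after the account has accepted the dataset's terms.
ds = load_dataset("JosephusCheung/GuanacoDataset")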
|
2025-04-01T06:38:57.172115
| 2022-09-22T01:06:11
|
1381678217
|
{
"authors": [
"dplochcoder",
"flibber-hk"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6760",
"repo": "hk-modding/modlinks",
"url": "https://github.com/hk-modding/modlinks/pull/648"
}
|
gharchive/pull-request
|
Update purenail mods
DarknessRandomizer: Fix a bunch of logic, UI overhaul for 4th lantern shard
MoreDoors: Inaugural beta release
SpoilerViewerMod: Fix bugs, integrate with MoreDoors
Did you want to bump the modlinks version number for Darkness?
Yes, thanks.
|
2025-04-01T06:38:57.180824
| 2021-01-09T22:32:44
|
782697631
|
{
"authors": [
"Dobob1022",
"hkalexling"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6761",
"repo": "hkalexling/Mango",
"url": "https://github.com/hkalexling/Mango/issues/140"
}
|
gharchive/issue
|
[Feature Request]Scanned manga problem.
Is your feature request related to a problem? Please describe.
A clear and concise description of what the problem is. E.g. I'm always frustrated when [...]
I have a lot of scanned manga. They have 2 pages in one single image, so each needs to be split into 2 pages.
Describe the solution you'd like
A clear and concise description of what you want to happen.
Check the image ratio and split. I think it's the best option.
Describe a small use-case for this feature request
How would you imagine this to be used? What would be the advantage of this for the users of the application?
Scanned manga have this issue, as in my case.
Additional context
Add any other context or screenshots about the feature request here.
Hi, thanks for the feature requests!
Check the image ratio and split. I think it's the best option.
This might not work in some cases. Sometimes you get a few wide pages in a chapter (see the screenshot below), and if we do the ratio check and split, the wide pages would be wrongly split. I think the easiest solution for you would be to use the paged reader.
As you can see, this manga has a perfect ratio. How can I split it?
What I was saying is that I agree it would be a useful feature, but I can't think of a user-friendly way to do this. If we just split all wide pages, some pages would be incorrectly split (as in my screenshot). We could add a double-page reading mode, where we split all pages regardless of their width/height ratios, but then the server would have to do the heavy image processing on the fly. I need to think about it and see what we can do, but of course, any suggestions are welcome.
In your screenshot, you are using the continuous mode. Perhaps you can give the paged mode a try as a workaround. The double-page images should look better in the paged mode.
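For illustration, a minimal sketch of the ratio-check-and-split idea being discussed, written with Pillow in Python; Mango itself is written in Crystal, so this is not its actual implementation:
from PIL import Image

def split_if_wide(path):
    # Split a scanned double-page image into two pages if it is wider than tall.
    img = Image.open(path)
    w, h = img.size
    if w <= h:  # regular single page: leave untouched
        return [img]
    right = img.crop((w // 2, 0, w, h))  # right half first: manga reads right-to-left
    left = img.crop((0, 0, w // 2, h))
    return [right, left]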
Thanks for replying to my submission. I really hope the 2-page mode gets added to Mango. I will wait for the upgrade! TNX!
|
2025-04-01T06:38:57.194909
| 2022-05-15T07:53:35
|
1236215469
|
{
"authors": [
"Leeingnyo",
"hkalexling",
"tr7zw"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6762",
"repo": "hkalexling/Mango",
"url": "https://github.com/hkalexling/Mango/pull/305"
}
|
gharchive/pull-request
|
Support unzipped entry
Resolve #215
What I did
Change the Entry class to abstract class,
Implement DirectoryEntry, which treats unzipped images in a directory as an entry
the rule is described here
Scan DirectoryEntry when Title#new and Title#examine are called.
Fix caching policy after some tests
Result and test
Here is a test library environment.
.:
info.json nested-folder/ 'THE iDOLM@STER Your Mess@ge 3rd stage.zip' 'THE iDOLM@STER Your Mess@ge 4th stage.zip' unzipped-files/
./nested-folder:
10.jpg 12.jpg 14.jpg 16.jpg 18.jpg 1.jpg 21.jpg 23.jpg 25.jpg 2.jpg 4.jpg 6.jpg 8.jpg info.json 'THE iDOLM@STER Your Mess@ge 1st stage.zip'
11.jpg 13.jpg 15.jpg 17.jpg 19.jpg 20.jpg 22.jpg 24.jpg 26.jpg 3.jpg 5.jpg 7.jpg 9.jpg nested-unziped-file/ 'THE iDOLM@STER Your Mess@ge 2nd stage.zip'
./nested-folder/nested-unziped-file:
10.jpg 11.jpg 12.jpg 13.jpg 14.jpg 15.jpg 16.jpg 17.jpg 18.jpg 19.jpg 1.jpg 20.jpg 21.jpg 22.jpg 23.jpg 24.jpg 25.jpg 26.jpg 2.jpg 3.jpg 4.jpg 5.jpg 6.jpg 7.jpg 8.jpg 9.jpg
./unzipped-files:
10.jpg 11.jpg 12.jpg 13.jpg 14.jpg 15.jpg 16.jpg 17.jpg 18.jpg 19.jpg 1.jpg 20.jpg 21.jpg 22.jpg 23.jpg 24.jpg 25.jpg 26.jpg 2.jpg 3.jpg 4.jpg 5.jpg 6.jpg 7.jpg 8.jpg 9.jpg
In tree view,
root (unzipped, have 4 entries, one title)
├ nested-folder (would be an entry and a title, have 3 entries)
│ ├ 1.jpg ~ 26.jpg
│ ├ nested-unziped-file (would be an entry)
│ │ └ 1.jpg ~ 26.jpg
│ ├ THE iDOLM@STER Your Mess@ge 1st stage.zip
│ └ THE iDOLM@STER Your Mess@ge 2nd stage.zip
├ unzipped-files (would be an entry)
│ └ 1.jpg ~ 26.jpg
├ THE iDOLM@STER Your Mess@ge 3rd stage.zip
└ THE iDOLM@STER Your Mess@ge 4th stage.zip
Result
The directories appeared as entries and titles
What I tested
remove an image file from directory
rename an image file in a directory
add new folder to scan
About class name
ZippedEntry -> ArchiveEntry
DirectoryEntry -> ?
I can't come up with good names...
I found that there's a bug after renaming a directory entry and scanning.
@tr7zw Hi!
You mean that the nested-folder in the sample would be a problem, and it shouldn't be treated as an entry because it has other titles and zipped entries even though it has image files, right?
I think the responsibility of configuring a library is up to the library owners. If they don't want this, they can remove those files.
Huh I guess never mind? I still had in mind that if you have a folder "Manga" and in that folder a folder "One Piece" with the chapters inside, I think in the past it treated the "Manga" folder as something containing Chapters. But apparently this was already fixed, now it shows Titles/Entries.
Still a really neat PR, since using directories as chapters allows some more flexibility, and with the new abstract Entry class it could allow some nice new features (other compressed files than .zip, dynamically loading chapters from urls/network shares).
Thanks @Leeingnyo! I fixed the linter warnings and took a quick look, and it looks good overall! I think it's a nice application of abstract classes.
Re class names, yeah I think ArchiveEntry is more appropriate than ZippedEntry because we also support RAR/CBR files. I think DirectoryEntry looks fine. We could also use DirEntry to make it shorter but I have no strong feelings.
Also I think it makes more sense to break the classes into individual files, e.g., entry.cr, zipped_entry.cr, and directory_entry.cr. What do you think?
I will do a full review and some testings later this week :+1:
Hey @Leeingnyo sorry for the delay. I made quite some changes to the PR. Thanks!
@hkalexling sorry, I pushed a version with errors... wait a moment
@hkalexling it looks great. I like the method you used to recover entry instances :)
Since @page was added to Entry, it caused an error when recovering an old library.yml.zip file. But I don't mind this. Fine!
Thanks @Leeingnyo! Sorry, the comments above were just for my own reference and I accidentally published them ;-P
Since @page was added to Entry, it caused an error when recovering an old library.yml.zip file.
Could you elaborate a bit on this? I tried the following steps but didn't see the error.
Remove library.yml.gz
Build and run from the master branch to regenerate the library.yml.gz file
Build and run from this branch and there's no error
Oh, that's the difference. I built it from the dev branch, but I had the same error when I tried from the master branch.
[ERROR] 2022/06/05 15:02:04 | Missing YAML attribute: path at line 14, column 7
Because my Ubuntu snap upgrades Crystal implicitly, I'm using Crystal 1.4.1... this could be the cause. :p
Ah sorry my bad. It does happen on master on Crystal 1.0.0 as well, but it's a minor issue and will only happen once so I think we are fine.
Thanks <3
|
2025-04-01T06:38:57.200100
| 2023-05-14T19:26:17
|
1709048128
|
{
"authors": [
"hkgnp",
"isosphere",
"stan-voo"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6763",
"repo": "hkgnp/logseq-kanban-plugin",
"url": "https://github.com/hkgnp/logseq-kanban-plugin/issues/31"
}
|
gharchive/issue
|
[Feature] Interactive Kanban - Drag, Drop
The ability to drag and drop cards is a big deal for me; IMO it's the whole reason to Kanban!
Our upstream dependency react-kanban supports interactive boards[^1], but we'll have to add code that will modify the LogSeq graph according to actions taken by the user.
[^1]: though it should be acknowledged that the package is abandoned
I am attempting to implement this feature on my own, but I'd be happy for any tips or discussion here. I'll open a draft PR if I have any kind of progress so that other interested parties can collaborate on the feature.
I'm using the react-kanban demo as a guide.
I've made an initial attempt that matches the react-kanban demo but the event handler doesn't get triggered on drag. I've deleted the default logseq event handlers via the developer tools as a test but it had no effect.
The https://github.com/cannibalox/logtools project, when installed, enables showing a block as a set of kanban columns using CSS trickery. It has the added benefit of not requiring additional graph editing. It doesn't look as good, but it works.
I'd prefer to use this plugin but I think I'd need some help getting the modifications working. The current state of my attempt is here: https://github.com/isosphere/logseq-kanban-plugin/tree/drag-drop
second this
I suppose the plugin should have a disclaimer that you won't be able to move cards, which is mostly the point of having a kanban board in the first place.
For this function, you may like to use the logseq-plugin-kanban-board.
|
2025-04-01T06:38:57.244869
| 2022-07-20T09:18:18
|
1310725853
|
{
"authors": [
"tristone13th"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6764",
"repo": "hkupty/iron.nvim",
"url": "https://github.com/hkupty/iron.nvim/issues/277"
}
|
gharchive/issue
|
Improve compatibility when sending code in Python
For example, the following function definition will break as an input to the REPL window:
```
def generate_messages(since, until):
    def daterange(start_date, end_date):
        for n in range(int((end_date - start_date).days)):
            yield start_date + timedelta(n)

    start_date = date(since)
    end_date = date(until)
    for single_date in daterange(start_date, end_date):
        print(single_date.strftime("%Y-%m-%d"))
```
because the IPython kernel will regard the empty line as the end of the function definition, thus causing an error. But simply deleting the empty line is not a good idea either, because the following code:
```
class NoInputFileError(Exception):
    pass

class InputFileError(Exception):
    pass
```
will break if you remove the line between class definitions.
So is there a way to improve compatibility when sending codes to the ipython kernel?
That may be an IPython issue, so I'm closing this.
|
2025-04-01T06:38:57.256583
| 2020-10-20T13:48:58
|
725609992
|
{
"authors": [
"hlissner",
"regadas",
"systemctl603"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6765",
"repo": "hlissner/doom-emacs",
"url": "https://github.com/hlissner/doom-emacs/issues/4121"
}
|
gharchive/issue
|
Scala lsp format on save
Hello!
What did you expect to happen?
For scala +lsp, I would expect format +onsave to use the LSP formatting capability.
What actually happened?
It's using the CLI scalafmt tool.
If I do SPC c f it works as expected.
Additional details:
https://github.com/regadas/doom.d/
Thanks for this awesome project!
I just noticed #4120. It might be related.
Yeah, that PR was meant to solve this issue.
Also, this may be a duplicate of #3626
#4120 was merged. Please update and let me know if that has resolved your issue.
Hi @hlissner that fixed it! Thanks
@systemctl603 Thanks as well for the PR!
|
2025-04-01T06:38:57.260210
| 2020-11-13T18:11:29
|
742671461
|
{
"authors": [
"acristoffers"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6766",
"repo": "hlissner/doom-emacs",
"url": "https://github.com/hlissner/doom-emacs/issues/4265"
}
|
gharchive/issue
|
Conflicting bindings
There are two elements with the same keybindings:
modules/editor/evil/config.el has
(map! :textobj "c" #'evilnc-inner-comment #'evilnc-outer-commenter)
and .local/straight/repos/evil-tex/evil-tex.el has
(define-key outer-map "c" 'evil-tex-a-command) and similar to inner.
So when working in TeX files, which one is active is random. Sometimes visual-inner-c selects inside commands and sometimes inside comments (it stays the same for the whole session; which one is bound is decided when the buffer loads, but non-deterministically, so sometimes I get one and sometimes the other).
Since the TeX one comes from a plugin, I would recommend changing the module one (comment) to a capital C.
System information:
```
SYSTEM type darwin
config x86_64-apple-darwin18.7.0
shell /usr/local/bin/fish
uname Darwin 19.6.0 Darwin Kernel Version 19.6.0: Thu Oct 29 22:56:45 PDT 2020; root:xnu-6<IP_ADDRESS>~1/RELEASE_X86_64 x86_64
path (/usr/local/opt/openjdk/bin ~/.gem/ruby/2.6.0/bin ~/.npm-global/bin ~/.emacs.d/bin /usr/local/opt/qt/bin ~/.gocode/bin ~/Documents/MATLAB/mosek/tools/platform/osx64x86/bin ~/.cargo/bin ~/bin/flutter/bin ~/.config/yarn/global/node_modules/.bin ~/Library/Android/sdk/tools ~/Library/Android/sdk/platform-tools ~/bin/netbunch ~/bin /Library/TeX/texbin /Library/Frameworks/Mono.framework/Versions/Current/Commands /Applications/VMware Fusion.app/Contents/Public /Library/Apple/usr/bin /opt/X11/bin /usr/local/bin /usr/bin /usr/local/sbin /usr/sbin /sbin /bin /Applications/Emacs.app/Contents/MacOS/libexec /Applications/Emacs.app/Contents/MacOS/bin)
EMACS dir ~/.emacs.d/
version 27.1
build Aug 16, 2020
buildopts --with-mac --enable-mac-app=/Users/travis/build/railwaycat/homebrew-emacsmacport/build-scripts/emacs-source/tmproot --prefix=/Users/travis/build/railwaycat/homebrew-emacsmacport/build-scripts/emacs-source/tmproot --enable-mac-self-contained --with-modules
features NOTIFY KQUEUE ACL GNUTLS LIBXML2 ZLIB TOOLKIT_SCROLL_BARS MODULES THREADS JSON PDUMPER LCMS2 GMP
traits (batch server-running envvar-file)
DOOM dir ~/.doom.d/
version 2.0.9
build HEAD -> develop f7293fb67 2020-11-11 20:33:27 -0500
elc-files 0
modules (:completion company (ivy +icons +prescient) :ui doom doom-dashboard doom-quit fill-column hl-todo modeline neotree ophints (popup +defaults) tabs treemacs unicode vc-gutter vi-tilde-fringe window-select workspaces :editor (evil +everywhere) file-templates fold (format +onsave) multiple-cursors snippets word-wrap :emacs (dired +icons) electric (ibuffer +icons) undo vc :checkers syntax grammar :tools ein (eval +overlay) gist lookup lsp magit make pdf rgb tmux :os macos :lang (cc +lsp) (clojure +lsp) common-lisp (csharp +lsp +unity) data (dart +flutter +lsp) (elixir +lsp) (elm +lsp) emacs-lisp (erlang +lsp) (fsharp +lsp) (go +lsp) (haskell +dante +lsp) (json +lsp) (java +meghanada) (javascript +lsp) (julia +lsp) (kotlin +lsp) (latex +lsp +latexmk) (lua +lsp +moonscript) (markdown +grip) nim nix ocaml (org +dragndrop +journal +hugo +pretty) (python +lsp +cython +pyright) qt (ruby +lsp +rails) (rust +lsp) (scala +lsp) (sh +fish +lsp) (swift +lsp) web (yaml +lsp) :app calendar :config (default +bindings +smartparens))
packages ((tide) (vue-mode) (zig-mode) (alert) (lsp-julia :recipe (:host github :repo non-jedi/lsp-julia)) (julia-formatter :recipe (:host github :repo ki-chi/julia-formatter :files (julia-formatter.el scripts/server.el))))
unpin (n/a)
elpa (n/a)
```
Their code is already bound to mode keymaps. This has been bothering me for some time, but being random, it is quite hard to pin down what triggers it. I'll keep an eye open and see if I can find any pattern or anything that may help. You can close the issue if you want; I'll reopen it or file a new one if I find out what is wrong.
|
2025-04-01T06:38:57.313349
| 2023-08-29T15:11:51
|
1871887439
|
{
"authors": [
"amol-anand"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6767",
"repo": "hlxsites/accenture-newsroom",
"url": "https://github.com/hlxsites/accenture-newsroom/issues/20"
}
|
gharchive/issue
|
Sub-Header
Article Pages: mobile and desktop screenshots
Search Results Page: mobile and desktop screenshots
Originally posted by @amol-anand in https://github.com/hlxsites/accenture-newsroom/issues/2#issuecomment-1671255194
Closing because it's a duplicate.
|
2025-04-01T06:38:57.314595
| 2023-09-29T08:03:56
|
1918791887
|
{
"authors": [
"andreibogdan"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6768",
"repo": "hlxsites/bitdefender",
"url": "https://github.com/hlxsites/bitdefender/issues/372"
}
|
gharchive/issue
|
Family Pack
https://main--bitdefender--hlxsites.hlx.live/solutions/family-pack#overview
In the Compare Bitdefender Products section, the TS button redirects to the cart instead of to the PP (as on IS).
Authoring issue. Added the correct link to the TS button.
|
2025-04-01T06:38:57.315930
| 2023-07-19T11:28:35
|
1811747434
|
{
"authors": [
"krizel4",
"solaris007"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:6769",
"repo": "hlxsites/mammotome",
"url": "https://github.com/hlxsites/mammotome/issues/499"
}
|
gharchive/issue
|
Firefox issue
In Firefox, the nav fails to display.
It seems the CSS :has() pseudo-class is not supported in FF: https://caniuse.com/css-has
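One possible workaround (a sketch only; the .nav-drop selector and active class below are placeholders, not names from this project) would be to feature-detect :has() and emulate the rule from JS when it's missing:
```
// Sketch: CSS.supports with the selector() syntax returns false in browsers
// that cannot parse :has(), which is exactly the Firefox case here.
const supportsHas = CSS.supports('selector(:has(*))');

if (!supportsHas) {
  // Placeholder selector/class: toggle whatever state the :has()-based
  // rule would have expressed, so the nav still opens in Firefox.
  document.querySelectorAll<HTMLElement>('.nav-drop').forEach((drop) => {
    drop.addEventListener('click', () => {
      drop.classList.toggle('active');
    });
  });
}
```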