| added (string, dates 2025-04-01 04:05:38 – 2025-04-01 07:14:06) | created (timestamp[us], dates 2001-10-09 16:19:16 – 2025-01-01 03:51:31) | id (string, length 4–10) | metadata (dict) | source (string, 2 classes) | text (string, length 0–1.61M) |
|---|---|---|---|---|---|
2025-04-01T06:40:09.326006
| 2019-07-31T21:45:59
|
475371886
|
{
"authors": [
"Oddant1",
"nbokulich",
"thermokarst"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9937",
"repo": "qiime2/q2-sample-classifier",
"url": "https://github.com/qiime2/q2-sample-classifier/issues/167"
}
|
gharchive/issue
|
clean up unit tests
Improvement Description
The unit tests for this plugin are overgrown. It is time to get organized.
Current Behavior
At the time of writing, the unit tests are 1546 lines long. Some cleanup of the code may also improve test runtime.
Proposed Behavior
split up test_classifier.py into multiple thematic test modules. E.g., types/formats should be tested separately. Visualizations and utilities could also be tested separately.
split up or combine some test classes, based on shared test data
clean up the test data to simplify tests where possible. E.g., the full-sized datasets could be replaced by toy datasets for more tests
a. TestHeatmap should use toy data, the tests take close to 2 minutes to run for a fairly trivial set of tests.
Note: @Oddant1 mostly addressed this in #188
The remaining task is to transition other tests to a toy dataset (e.g., the tests that loop through all estimators could use a toy dataset)... but for now we can wait and see if we still need test runtime shortened. If not, we can close this issue.
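For illustration, a toy dataset along those lines could be generated with scikit-learn and shared across the estimator-loop tests. This is only a sketch under the assumption that the plugin's tests consume a pandas feature table plus a metadata column; the helper name and shapes below are hypothetical, not taken from q2-sample-classifier:
# Hypothetical toy-dataset helper for the estimator-loop tests.
# q2-sample-classifier wraps scikit-learn, so make_classification is a
# natural fit, but the function name and shapes here are assumptions.
import pandas as pd
from sklearn.datasets import make_classification

def make_toy_dataset(n_samples=40, n_features=8, seed=0):
    X, y = make_classification(n_samples=n_samples, n_features=n_features,
                               n_informative=4, random_state=seed)
    table = pd.DataFrame(X, index=[f"sample{i}" for i in range(n_samples)])
    metadata = pd.Series(y, index=table.index, name="target")
    return table, metadata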
Just wanted to add here, in regards to "if test runtime is shortened we can close this issue": test runtime was cut from ~145 seconds to ~70-75 seconds on my machine.
Are we interested in shortening the test runtime here any further with a toy dataset? I don't really know how to build a toy dataset for this, but test runtime is down from ~145 seconds to ~70-75 seconds.
hey @Oddant1 — I have heard no more complaints from @thermokarst about these tests holding up busywork... I am happy to close this issue if @thermokarst is happy with the current test runtime, since I agree at this point we can't expect much more than shaving off maybe 20 more seconds...
No complaints here, ATM. Just wait, I'm sure I'll come up with somethin...
|
2025-04-01T06:40:09.327536
| 2017-05-10T16:33:25
|
227739912
|
{
"authors": [
"Oddant1",
"jairideout"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9938",
"repo": "qiime2/qiime2",
"url": "https://github.com/qiime2/qiime2/issues/259"
}
|
gharchive/issue
|
API: check whether path is valid artifact or visualization
Improvement Description
This has come up on the forum and in q2cli -- it'd be handy to have an API to determine whether a file path is a valid QIIME 2 artifact or visualization. .peek() works but a more explicit API would make discovering/accessing this functionality easier.
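For clarity, a minimal sketch of such a helper built on the .peek() call mentioned above; treating any exception from peek as "not a valid result" is an assumption, not the proposed API:
# Illustrative sketch only, not the API proposed in this issue.
# Assumes qiime2.Artifact.peek(path) raises for paths that are not valid
# QIIME 2 results, per the .peek() behaviour mentioned above.
from qiime2 import Artifact

def is_qiime2_result(path: str) -> bool:
    try:
        Artifact.peek(path)   # reads result metadata without loading the archive
        return True
    except Exception:         # any failure is treated as "not a valid result"
        return False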
I think this is probably staying in qiime tools peek
|
2025-04-01T06:40:09.342144
| 2022-11-11T10:56:38
|
1445302811
|
{
"authors": [
"CLAassistant",
"SanyaNanda",
"coveralls",
"oscar-wallis"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9939",
"repo": "qiskit-community/qiskit-machine-learning",
"url": "https://github.com/qiskit-community/qiskit-machine-learning/pull/519"
}
|
gharchive/pull-request
|
Improved content for Tutorial 8 (QAMP fall'22)
Summary
Enhancing the documentation of Tutorial "08_quantum_kernel_trainer" as a mentee for QAMP 2022.
Mentor: @ElePT
Mentee: @SanyaNanda
Details and comments
Restructured the notebook (Overview, Introduction, Objective, Tutorial, accuracy table and what was learned)
Imports placed where required
Added more explanations and inline comments for better understanding of the code
Hyperlinks to API references wherever required
Pull Request Test Coverage Report for Build<PHONE_NUMBER>
0 of 0 changed or added relevant lines in 0 files are covered.
No unchanged relevant lines lost coverage.
Overall coverage remained the same at 88.934%
Totals
Change from base Build<PHONE_NUMBER>:
0.0%
Covered Lines:
3544
Relevant Lines:
3985
💛 - Coveralls
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. You have signed the CLA already but the status is still pending? Let us recheck it.
Closing this PR whilst working on updating tutorials. For more info - see [here](https://github.com/qiskit-community/qiskit-machine-learning/pull/491#issuecomment-1979205893)
|
2025-04-01T06:40:09.451634
| 2018-03-16T15:07:34
|
305964041
|
{
"authors": [
"qooob",
"reidbiztech"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9940",
"repo": "qooob/authentic-theme",
"url": "https://github.com/qooob/authentic-theme/issues/1057"
}
|
gharchive/issue
|
File Manager: download broken in Firefox with no error in console
See: https://www.virtualmin.com/node/56365
I can't find the issue. I suspect changes in the JavaScript are to blame, but it seems only the minified versions of the JavaScript are present here.
Is the original source for the file manager javascript for recent versions available somewhere so I can examine the files to try and find the issue?
It's been already fixed.
Cool! Thanks for the update!
|
2025-04-01T06:40:09.454352
| 2015-05-29T02:24:19
|
82219030
|
{
"authors": [
"ignaworn",
"qooob"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9941",
"repo": "qooob/authentic-theme",
"url": "https://github.com/qooob/authentic-theme/issues/178"
}
|
gharchive/issue
|
Can't navigate within Webmin Servers
Hi,
After the last update (13.00 and 13.02) I could not navigate to other Webmin servers. I use the "Login via Webmin" method.
So... I connect from my main server to another server and I get doubled sidebars.
If I press anything on the side panel it sends me back to the main server information page.
Thanks
I got it! Will fix it in 13.03 very soon. After update please report here if it's working or not! Thanks for reporting!
|
2025-04-01T06:40:09.496635
| 2024-11-29T09:00:04
|
2704432435
|
{
"authors": [
"JacobHast",
"nulinspiratie"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9942",
"repo": "qua-platform/quam",
"url": "https://github.com/qua-platform/quam/issues/85"
}
|
gharchive/issue
|
Setting referenced values
When using quam, we found that we often need to change values that are references, and doing so generally starts a "treasure hunt", following references through our machine until the "bare" value is found. In practice, this can become a bit tedious - would it be possible to add a method to QuamComponent, e.g. .set_at_reference(attr: str, new_value), which follows the references for you and sets the value at the correct place?
And for added user-clarity, maybe this function could even search the machine from the "bare" value, to see if other components reference this value, and correspondingly print out exactly which components were affected by the change?
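A rough sketch of what the proposed setter could look like; the "#/..." reference format matches quam's string references, but the helper calls used below are assumptions, not the actual quam internals:
# Hypothetical sketch of the proposed set_at_reference(attr, new_value).
# follow_reference() and get_unreferenced_value() stand in for whatever quam
# uses internally to resolve a "#/..." reference to its target attribute.
def set_at_reference(component, attr: str, new_value):
    value = component.get_unreferenced_value(attr)             # assumption: raw value before resolution
    while isinstance(value, str) and value.startswith("#/"):
        component, attr = follow_reference(component, value)   # hypothetical resolver
        value = component.get_unreferenced_value(attr)
    setattr(component, attr, new_value)                         # reached the "bare" value: set it here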
I like it! I created a PR introducing this functionality.
The second part about searching the machine for other attributes that are a reference to the updated value is a tricky one as it's not always apparent when something is referencing a given attribute. I'll leave this as a follow-up feature.
Amazing, thanks!
|
2025-04-01T06:40:09.498086
| 2022-03-19T06:57:25
|
1174194454
|
{
"authors": [
"justanhduc",
"quackduck"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9943",
"repo": "quackduck/devzat",
"url": "https://github.com/quackduck/devzat/issues/92"
}
|
gharchive/issue
|
Language support
OMG this is amazing. Thanks for sharing this. I would like to ask if it is possible to support other languages? E.g., when I use Vietnamese, the font is messed up. Is there a solution for this? Thanks!
Yup, I’m looking into what made Unicode break
awesome thanks!
|
2025-04-01T06:40:09.525872
| 2017-12-19T00:42:35
|
283068553
|
{
"authors": [
"jaycode",
"mmargenot",
"twiecki"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9950",
"repo": "quantopian/pyfolio",
"url": "https://github.com/quantopian/pyfolio/issues/498"
}
|
gharchive/issue
|
Bayesian Tear Sheet fails in Research
Calling a Bayesian Tear Sheet fails with AssertionError: axis must be between 0 and 1, input was columns. I'm pretty sure it's due to https://github.com/quantopian/pyfolio/blob/master/pyfolio/bayesian.py#L68.
I got the same error.
Closed by https://github.com/quantopian/pyfolio/pull/508.
|
2025-04-01T06:40:09.529172
| 2020-10-21T08:07:17
|
726243866
|
{
"authors": [
"ncrubin",
"obriente"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9951",
"repo": "quantumlib/OpenFermion",
"url": "https://github.com/quantumlib/OpenFermion/pull/673"
}
|
gharchive/pull-request
|
Added functions to init file
Finally fixed my commit history!
I've gone through and either exposed or made explicitly hidden all (I think) functions in the repo. Some of the names were a bit vague (e.g. 'two_body'), LMK if you think something should be exposed that isn't or vice versa.
@obriente Even after resolving the merge conflict it still seems like there is an issue with importing. Maybe cut this down to something less than 21 files, or just bump this back to when it was passing tests. Then we can add the extra files in a following PR.
@googlebot I fixed it.
@googlebot I signed it.
@googlebot I consent.
@obriente bump. I went in and cleaned up some of the imports that were broken from the move of a method. Let me know if everything else looks in place.
Thanks for sorting that out, looks like it must have been a pain to fix.
Everything looks good to me, I guess we just need one extra review now.
No need for extra review. thanks for kicking this off and sorting out the extra imports that users were not finding. This is an impactful PR.
Ah ok - I can't merge right now without an extra review, maybe you have more privileges than I do here. But everything LGTM
|
2025-04-01T06:40:09.541633
| 2022-12-29T16:56:59
|
1513900664
|
{
"authors": [
"juangon",
"metacosm"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9952",
"repo": "quarkiverse/quarkus-operator-sdk",
"url": "https://github.com/quarkiverse/quarkus-operator-sdk/issues/465"
}
|
gharchive/issue
|
java.lang.IllegalArgumentException: There are multiple EventSources registered for type when upgrading from 5.0.0.Beta2 to 5.0.0
In the same project that worked fine in 5.0.0.Beta2, now I have the error:
java.lang.IllegalArgumentException: There are multiple EventSources registered for type io.strimzi.api.kafka.model.KafkaTopic
I have multiple dependent KafkaTopics and I use a ResourceDiscriminator. I am not using useEventSourceWithName because I want to avoid implementing the prepareEventSources method in the reconciler, as the java-operator-sdk example does. This approach worked fine in 5.0.0.Beta2 but it doesn't work now.
Any ideas? Thanks!
Hi @juangon,
Can you provide a more complete stack trace, please? Thanks in advance.
Here you have @metacosm . Thanks!
2022-12-29 23:13:54,248 ERROR [io.jav.ope.pro.eve.ReconciliationDispatcher] (ReconcilerExecutor-teis-app-controller-240) Error during event processing ExecutionScope{ resource id: ResourceID{name='teis-app-proactivanet', namespace='teis-proactivanet-soriana-demo'}, version: 795330345} failed.: io.javaoperatorsdk.operator.AggregatedOperatorException: Exception(s) during workflow execution. Details:
com.teis.operator.teisbackend.TeisBackendDependent -> java.lang.NullPointerException
at io.javaoperatorsdk.operator.processing.dependent.workflow.WorkflowResult.throwAggregateExceptionIfErrorsPresent(WorkflowResult.java:40)
at io.javaoperatorsdk.operator.processing.dependent.workflow.WorkflowReconcileResult.throwAggregateExceptionIfErrorsPresent(WorkflowReconcileResult.java:9)
at io.javaoperatorsdk.operator.processing.dependent.workflow.DefaultWorkflow.reconcile(DefaultWorkflow.java:92)
at io.javaoperatorsdk.operator.processing.Controller$1.execute(Controller.java:140)
at io.javaoperatorsdk.operator.processing.Controller$1.execute(Controller.java:103)
at io.javaoperatorsdk.operator.api.monitoring.Metrics.timeControllerExecution(Metrics.java:206)
at io.javaoperatorsdk.operator.processing.Controller.reconcile(Controller.java:102)
at io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.reconcileExecution(ReconciliationDispatcher.java:141)
at io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleReconcile(ReconciliationDispatcher.java:121)
at io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleDispatch(ReconciliationDispatcher.java:91)
at io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleExecution(ReconciliationDispatcher.java:64)
at io.javaoperatorsdk.operator.processing.event.EventProcessor$ReconcilerExecutor.run(EventProcessor.java:415)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
2022-12-29 23:13:54,333 WARN [io.fab.kub.cli.dsl.int.VersionUsageUtils] (InformerWrapper [kafkas.kafka.strimzi.io/v1beta2] 255) The client is using resource type 'kafkas' with unstable version 'v1beta2'
2022-12-29 23:13:54,592 INFO [io.jav.ope.pro.Controller] (ForkJoinPool.commonPool-worker-3) 'teis-backend-controller' controller started, pending event sources initialization
2022-12-29 23:13:54,598 INFO [io.jav.ope.pro.dep.AbstractDependentResource] (pool-8-thread-3) Updating 'teis-backend-secrets' Secret for primary ResourceID{name='teis-backend-teis-proactivanet-soriana-demo', namespace='teis-proactivanet-soriana-demo'}
2022-12-29 23:13:54,618 INFO [io.jav.ope.pro.Controller] (ForkJoinPool.commonPool-worker-13) 'teis-kafka-controller' controller started, pending event sources initialization
2022-12-29 23:13:54,617 ERROR [io.jav.ope.pro.eve.ReconciliationDispatcher] (ReconcilerExecutor-teis-backend-controller-270) Error during event processing ExecutionScope{ resource id: ResourceID{name='teis-backend-teis-proactivanet-soriana-demo', namespace='teis-proactivanet-soriana-demo'}, version: 795330341} failed.: io.javaoperatorsdk.operator.AggregatedOperatorException: Exception(s) during workflow execution. Details:
com.teis.operator.teisbackend.kafka.KafkaBackendTicketPredictedDependent -> java.lang.IllegalArgumentException: There are multiple EventSources registered for type io.strimzi.api.kafka.model.KafkaTopic, you need to provide a name to specify which EventSource you want to query. Known names: kafka-backend-topic-ticket-predicted,kafka-backend-topic-ticket-import-errors,kafka-backend-topic-ticket-added
- com.teis.operator.teisbackend.kafka.KafkaBackendTicketImportErrorsDependent -> java.lang.IllegalArgumentException: There are multiple EventSources registered for type io.strimzi.api.kafka.model.KafkaTopic, you need to provide a name to specify which EventSource you want to query. Known names: kafka-backend-topic-ticket-predicted,kafka-backend-topic-ticket-import-errors,kafka-backend-topic-ticket-added
- com.teis.operator.teisbackend.kafka.KafkaBackendTicketAddedDependent -> java.lang.IllegalArgumentException: There are multiple EventSources registered for type io.strimzi.api.kafka.model.KafkaTopic, you need to provide a name to specify which EventSource you want to query. Known names: kafka-backend-topic-ticket-predicted,kafka-backend-topic-ticket-import-errors,kafka-backend-topic-ticket-added
at io.javaoperatorsdk.operator.processing.dependent.workflow.WorkflowResult.throwAggregateExceptionIfErrorsPresent(WorkflowResult.java:40)
at io.javaoperatorsdk.operator.processing.dependent.workflow.WorkflowReconcileResult.throwAggregateExceptionIfErrorsPresent(WorkflowReconcileResult.java:9)
at io.javaoperatorsdk.operator.processing.dependent.workflow.DefaultWorkflow.reconcile(DefaultWorkflow.java:92)
at io.javaoperatorsdk.operator.processing.Controller$1.execute(Controller.java:140)
at io.javaoperatorsdk.operator.processing.Controller$1.execute(Controller.java:103)
at io.javaoperatorsdk.operator.api.monitoring.Metrics.timeControllerExecution(Metrics.java:206)
at io.javaoperatorsdk.operator.processing.Controller.reconcile(Controller.java:102)
at io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.reconcileExecution(ReconciliationDispatcher.java:141)
at io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleReconcile(ReconciliationDispatcher.java:121)
at io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleDispatch(ReconciliationDispatcher.java:91)
at io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleExecution(ReconciliationDispatcher.java:64)
at io.javaoperatorsdk.operator.processing.event.EventProcessor$ReconcilerExecutor.run(EventProcessor.java:415)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Hi @juangon,
Can you provide a more complete stack trace, please? Thanks in advance. Could you also explain the reasoning behind not using useEventSourceWithName? I tend to think that ResourceDiscriminator should be removed but it seems your use case is proving me wrong so would like to hear more about it… smile
It's fine for me to use useEventSourceWithName, as long as I don't have to reimplement prepareEventSources, because my reconcilers have more than one type of DependentResource and I wouldn't like to prepare all of those event sources by hand, since I have all the annotations in place.
You shouldn't have to re-implement prepareEventSources, I think simply providing an explicit name for your dependents should be enough to fix the issue. You probably don't need to use useEventSourceWithName either. Would you mind sharing your controller's configuration along with the dependents definition (the class signature ought to be enough)?
Yes, I have a name for those dependents (@DependentResource name); in fact those names are showing in the logs I've sent. Any other idea? Thanks very much!
Here you have the reconciler config @metacosm :
@ControllerConfiguration(name = TeisBackendReconciler.TEIS_BACKEND_CONTROLLER,
dependents = {
@Dependent(type = KafkaBackendTicketImportErrorsDependent.class, name = "kafka-backend-topic-ticket-import-errors", readyPostcondition = KafkaBackendTicketImportErrorsDependent.class),
@Dependent(type = KafkaBackendTicketAddedDependent.class, name = "kafka-backend-topic-ticket-added", readyPostcondition = KafkaBackendTicketAddedDependent.class),
@Dependent(type = KafkaBackendTicketPredictedDependent.class, name = "kafka-backend-topic-ticket-predicted", readyPostcondition = KafkaBackendTicketPredictedDependent.class),
@Dependent(type = TeisBackendMariaDBDependent.class, name = "teis-backend-mariadb", readyPostcondition = TeisBackendMariaDBDependent.class),
@Dependent(type = TeisBackendConfigMapDependent.class, name="teis-backend-config"),
@Dependent(type = TeisBackendSecretDependent.class, name="teis-backend-secret"),
@Dependent(type = TeisBackendDeploymentDependent.class,
dependsOn = {
"kafka-backend-topic-ticket-import-errors",
"kafka-backend-topic-ticket-added",
"kafka-backend-topic-ticket-predicted",
"teis-backend-mariadb",
"teis-backend-config",
"teis-backend-secret"}),
@Dependent(type = TeisBackendServiceDependent.class)
})
I'm investigating the root cause, there seems to be something funny going on. In the meantime, could you try the following configuration and let me know how it goes, please:
@ControllerConfiguration(name = TeisBackendReconciler.TEIS_BACKEND_CONTROLLER,
dependents = {
@Dependent(type = KafkaBackendTicketImportErrorsDependent.class, name = "kafka-backend-topic-ticket-import-errors", readyPostcondition = KafkaBackendTicketImportErrorsDependent.class, useEventSourceWithName="kafka-backend-topic-ticket-predicted"),
@Dependent(type = KafkaBackendTicketAddedDependent.class, name = "kafka-backend-topic-ticket-added", readyPostcondition = KafkaBackendTicketAddedDependent.class, useEventSourceWithName="kafka-backend-topic-ticket-predicted"),
@Dependent(type = KafkaBackendTicketPredictedDependent.class, name = "kafka-backend-topic-ticket-predicted", readyPostcondition = KafkaBackendTicketPredictedDependent.class),
@Dependent(type = TeisBackendMariaDBDependent.class, name = "teis-backend-mariadb", readyPostcondition = TeisBackendMariaDBDependent.class),
@Dependent(type = TeisBackendConfigMapDependent.class, name="teis-backend-config"),
@Dependent(type = TeisBackendSecretDependent.class, name="teis-backend-secret"),
@Dependent(type = TeisBackendDeploymentDependent.class,
dependsOn = {
"kafka-backend-topic-ticket-import-errors",
"kafka-backend-topic-ticket-added",
"kafka-backend-topic-ticket-predicted",
"teis-backend-mariadb",
"teis-backend-config",
"teis-backend-secret"}),
@Dependent(type = TeisBackendServiceDependent.class)
})
Ok, so with your snippet @metacosm it seems that one of the two exceptions went away. The NullPointer is still there though:
com.teis.operator.teisbackend.TeisBackendDependent -> java.lang.NullPointerException
at io.javaoperatorsdk.operator.processing.dependent.workflow.WorkflowResult.throwAggregateExceptionIfErrorsPresent(WorkflowResult.java:40)
at io.javaoperatorsdk.operator.processing.dependent.workflow.WorkflowReconcileResult.throwAggregateExceptionIfErrorsPresent(WorkflowReconcileResult.java:9)
at io.javaoperatorsdk.operator.processing.dependent.workflow.DefaultWorkflow.reconcile(DefaultWorkflow.java:92)
at io.javaoperatorsdk.operator.processing.Controller$1.execute(Controller.java:140)
at io.javaoperatorsdk.operator.processing.Controller$1.execute(Controller.java:103)
at io.javaoperatorsdk.operator.api.monitoring.Metrics.timeControllerExecution(Metrics.java:206)
at io.javaoperatorsdk.operator.processing.Controller.reconcile(Controller.java:102)
at io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.reconcileExecution(ReconciliationDispatcher.java:141)
at io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleReconcile(ReconciliationDispatcher.java:121)
at io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleDispatch(ReconciliationDispatcher.java:91)
at io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleExecution(ReconciliationDispatcher.java:64)
at io.javaoperatorsdk.operator.processing.event.EventProcessor$ReconcilerExecutor.run(EventProcessor.java:415)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Any idea about that? Thanks very much!
Just sent to your email @metacosm . Thanks very much for your help!
|
2025-04-01T06:40:09.660985
| 2019-10-23T07:57:40
|
511145780
|
{
"authors": [
"akomakom",
"efwe",
"jhouserizer"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9953",
"repo": "quartz-scheduler/quartz",
"url": "https://github.com/quartz-scheduler/quartz/pull/515"
}
|
gharchive/pull-request
|
use varbinary(max) instead of image data-type - Fixes #157
As discussed before this is the minimal approach.
Hello there,
is there anything I can do better? What do you think about this minimal approach?
Keep up the good work,
~fw
Hello! Thank you very much for your contribution and interest in helping improve the Quartz community.
After a period of dormancy, the Quartz project is back under steady maintenance by multiple volunteers, who are working to once again handle contributions such as yours.
We notice that your contribution was made without use of the DCO feature (the sign-off feature on commits via the -s option). Can you please update your PR with commits that use the -s option, agreeing to assign copyright ownership and other terms as described at the contributor agreement referenced here: https://github.com/quartz-scheduler/contributing/blob/main/CONTRIBUTING.md
You can easily add signoff to your previous commits by running (on your PR branch):
git commit --amend --signoff --no-edit
git push -f
Or (for multiple commits):
# change 5 to the number of commits
git rebase HEAD~5 --signoff
git push -f
Hello @akomakom ,
I signed-off my commit.
Good to see the project back from hibernation :)
~fw
Thanks!
|
2025-04-01T06:40:09.694445
| 2023-10-01T09:23:03
|
1920662635
|
{
"authors": [
"quaxalber"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9954",
"repo": "quaxalber/bluetooth_2_usb",
"url": "https://github.com/quaxalber/bluetooth_2_usb/issues/15"
}
|
gharchive/issue
|
Fix PyPi package deployment
With the initial release the automatic package deployment to PyPi failed with
ERROR Source /home/runner/work/bluetooth_2_usb/bluetooth_2_usb does not appear to be a Python project: no pyproject.toml or setup.py
Investigate and fix.
https://packaging.python.org/en/latest/tutorials/packaging-projects/
|
2025-04-01T06:40:09.713286
| 2020-10-27T18:17:14
|
730699347
|
{
"authors": [
"VenkataKarthikP",
"ranganathhr"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9955",
"repo": "qubole/qds-sdk-java",
"url": "https://github.com/qubole/qds-sdk-java/issues/121"
}
|
gharchive/issue
|
Maven artifacts missing
Maven artifacts are missing for qds-sdk-java release 1.3.0 ( https://mvnrepository.com/artifact/com.qubole.qds-sdk-java/qds-sdk-java)
@VenkataKarthikP It's now available in the Maven central repo https://mvnrepository.com/artifact/com.qubole.qds-sdk-java/qds-sdk-java/1.3.0
|
2025-04-01T06:40:09.716612
| 2019-05-31T10:23:21
|
450727439
|
{
"authors": [
"qubvel",
"tinalegre"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9956",
"repo": "qubvel/classification_models",
"url": "https://github.com/qubvel/classification_models/issues/28"
}
|
gharchive/issue
|
Resnet models
It looks like your own ResNet implementation is based on ResNet version 2 (preact / BN->ReLU->Conv). Could you please also consider adding ResNet version 1 (code + pre-trained weights)? I'm unfortunately limited on GPU resources and therefore can't do a full train on ImageNet, etc. I know that keras-applications has both V1/V2, but I find that your toolbox is better organized.
Hi @tinalegre
I did not train any of the models, I just converted or transferred weights from other frameworks/repos.
If you find such pretrained models I will consider that option. Models from keras_applications are inside this repo.
Hi @qubvel thank you. I was wondering why on your ResNet model, just after the last residual block and before the top classification layer, we need to have the additional BN+Relu layers (named respectively bn1 and relu1)?
Once again:
The ResNet models were not created by me. I just rewrote the architecture from the MXNet model zoo as-is and converted the weights.
So the answer to your question is: "Because they have been created and trained with such an architecture by someone (I actually don't know the authors of this implementation)"
|
2025-04-01T06:40:09.718101
| 2020-03-03T08:25:29
|
574496314
|
{
"authors": [
"VladislavAD",
"johanneskpp"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9957",
"repo": "qubvel/segmentation_models",
"url": "https://github.com/qubvel/segmentation_models/issues/307"
}
|
gharchive/issue
|
Problems with imagesize of 1024x1024
I am using the U-Net model. When I tried to set the input size to 1024x1024, the first 20 epochs had a validation IoU score of 1e-11. I had no problems with other sizes like 512x512, 256x256 and 768x768. I already tried another backbone. So why does this input size not work?
If it worked with other sizes it should work with this one too, provided the data is the same. Do you train multiclass segmentation? Have you checked that the data is correct and that labels match the input images? It took me ~300 epochs to get the first validation scores for my task, so 20 epochs looks a bit few for results. On the other hand, if you use pretrained weights it should converge faster, and I advise you to look for problems in the input data.
|
2025-04-01T06:40:09.720332
| 2024-08-03T18:30:19
|
2446541152
|
{
"authors": [
"Afluttera",
"queer"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9958",
"repo": "queer/boxxy",
"url": "https://github.com/queer/boxxy/issues/234"
}
|
gharchive/issue
|
Root owned files/directories are owned by "nobody" when running under boxxy
> ls -la /etc/environment
-rw-r--r-- 1 root root 97 Apr 11 04:47 /etc/environment
> boxxy ls -la /etc/environment
INFO boxxy::config > loading rules from /home/aflutter/.config/boxxy/boxxy.yaml
INFO boxxy::config > loaded 0 total rule(s)
INFO boxxy::enclosure > boxed "ls" ♥
-rw-r--r-- 1 nobody nobody 97 Apr 11 04:47 /etc/environment
I'm running into this problem because ssh checks to make sure a file is owned by root, and errors out for security reasons if not (Bad owner or permissions on /etc/ssh/ssh_config.d/20-systemd-ssh-proxy.conf).
Somehow, this is only a recent issue. I've used boxxy with ssh regularly until around June when it broke. I checked a few older versions of boxxy, with the oldest being 5.1, and they all had the issue, though.
This is related to #6.
#6 is the tracking issue for this problem.
|
2025-04-01T06:40:09.760258
| 2024-04-28T10:26:38
|
2267476606
|
{
"authors": [
"marten-seemann"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9960",
"repo": "quic-go/masque-go",
"url": "https://github.com/quic-go/masque-go/issues/3"
}
|
gharchive/issue
|
connected and unconnected sockets
Currently the proxy uses connected sockets. This is a good idea performance-wise, but comes with the obvious limitation on the number of proxied connections.
We should have an API that allows an application more fine-grained control. One option would be a callback on the Server:
type Server struct {
// ... existing stuff
// PacketConnForRemote is called when establishing a new proxied connection.
// It is possible to return the same net.PacketConn (an unconnected UDP socket) for multiple distinct remote address.
// However, the same net.PacketConn cannot be used for the same remote address.
PacketConnForRemote(*net.UDPAddr) net.PacketConn
}
The problem here is that the same net.PacketConn can't be used for the same remote address: We need to know which QUIC connection to put a packet on. It's also not clear how timeouts should work: If one proxied connection is closed, it should be possible to reuse the same net.PacketConn at some point, but probably not immediately, since UDP packets might still be in flight between the remote and the proxy.
There are multiple ways to slice and dice it. One option that comes to mind is using one (unconnected) socket per client. This might make sense in a setting where the client is using the proxy to proxy all of its traffic.
However, it also breaks as soon as the client requests to connect to the same IP (not domain!) multiple times. This will be pretty common, given the current centralization of the edge infrastructure in the hands of a few giant CDN providers.
#43 opens up a path to using unconnected UDP sockets: When the application wishes to proceed with a masque.Request, it can either:
call Proxy.ProxyConnectedSocket(w http.ResponseWriter, _ *Request, conn *net.UDPConn), passing us a fresh connected UDP socket
call Proxy.ProxyUnconnectedSocket(w http.ResponseWriter, _ *Request, conn *net.UDPConn, target *net.UDPAddr), reusing a UDP socket
For ProxyUnconnectedSocket, we can then perform the necessary checks (are we already proxying another connection to the same net.UDPAddr as target?), and reject the proxying attempt with a masque.ErrAlreadyProxyingToThisTarget (better naming tbd). The application can then decide to either switch to a connected socket, attempt again using another unconnected socket, or even create a new unconnected socket.
|
2025-04-01T06:40:09.767539
| 2024-05-19T13:55:20
|
2304624277
|
{
"authors": [
"marten-seemann",
"phuslu"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9961",
"repo": "quic-go/quic-go",
"url": "https://github.com/quic-go/quic-go/issues/4526"
}
|
gharchive/issue
|
http3: response body does not implement http3.HTTPStreamer
Seems that this is broken by https://github.com/quic-go/quic-go/pull/4469
It makes my http3 dialer stop working
streamer, ok := resp.Body.(http3.HTTPStreamer)
if !ok {
return nil, errors.New("proxy: read from " + d.Host + " error: resp body not implemented http3.HTTPStreamer")
}
Any suggestion?
This was called out as a breaking change in the release notes of v0.43. It is now the http.ResponseWriter that implements the interface.
Thanks. In my understanding the http.ResponseWriter can only be used on the server side. (Please correct me if I'm wrong)
How do I unwrap the stream on the client side (*http.Response)? My existing code is here
You'll need to take the path via the SingleDestinationRoundTripper then.
Last question: shall/do we have a RoundTripOpt to let this underlying stream stay open?
https://github.com/quic-go/quic-go/blob/v0.44.0/http3/client.go#L299
Finally I gave up on this approach; I turned to another way with more compatibility but lower performance in https://github.com/phuslu/liner/commit/1054f3b89798ab13dc677914b388a92b0be8147c
If the stream unwrap of *http.Response in quic-go/http3 comes back in the future, please let me know. Thanks again.
Please take a look at the WebTransport dialer: https://github.com/quic-go/webtransport-go/blob/master/client.go
I believe it does exactly what you need.
Understood, but currently I'd like to keep the same logic on the server side, like https://github.com/phuslu/liner/blob/master/handler_http_forward.go#L278-L298
There’s no change to the server side. This is purely a client side API change.
Thanks for your patience; finally I have a modified webtransport-go dialer at https://github.com/phuslu/liner/blob/master/dialer_http3.go. Thanks again!
|
2025-04-01T06:40:09.797153
| 2022-08-10T16:53:19
|
1334907888
|
{
"authors": [
"codecov-commenter",
"coveralls",
"meling"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9962",
"repo": "quickfeed/quickfeed",
"url": "https://github.com/quickfeed/quickfeed/pull/714"
}
|
gharchive/pull-request
|
Remove access token cache accessible via qf.Course type
This PR removes the access token cache associated with the qf.Course type.
Pull Request Test Coverage Report for Build<PHONE_NUMBER>
3 of 6 (50.0%) changed or added relevant lines in 3 files are covered.
No unchanged relevant lines lost coverage.
Overall coverage remained the same at 25.213%
Changes Missing Coverage
Covered Lines
Changed/Added Lines
%
database/gormdb.go
2
3
66.67%
qf/course.go
0
2
0.0%
Totals
Change from base Build<PHONE_NUMBER>:
0.0%
Covered Lines:
2663
Relevant Lines:
10562
💛 - Coveralls
Codecov Report
Merging #714 (27340c7) into master (6938ffe) will not change coverage.
The diff coverage is 50.00%.
@@ Coverage Diff @@
## master #714 +/- ##
=======================================
Coverage 22.69% 22.69%
=======================================
Files 79 79
Lines 10175 10175
=======================================
Hits 2309 2309
Misses 7552 7552
Partials 314 314
Flag
Coverage Δ
unittests
22.69% <50.00%> (ø)
Flags with carried forward coverage won't be shown. Click here to find out more.
Impacted Files
Coverage Δ
qf/course.go
0.00% <0.00%> (ø)
database/gormdb.go
64.40% <66.66%> (ø)
database/gormdb_course.go
50.34% <100.00%> (ø)
:mega: We’re building smart automated test selection to slash your CI/CD build times. Learn more
|
2025-04-01T06:40:09.799148
| 2022-05-16T14:02:50
|
1237210253
|
{
"authors": [
"fmassot"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9963",
"repo": "quickwit-oss/quickwit",
"url": "https://github.com/quickwit-oss/quickwit/issues/1468"
}
|
gharchive/issue
|
Add some key metrics on the ingestion
Now that we have an indexing server, we can expose some nice prometheus metrics.
Possibly interesting indexing metrics:
ingested_bytes
num_docs_in / num_docs_out
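Purely as an illustration of the shape of these counters (Quickwit's indexer is Rust; this sketch uses the Python prometheus_client and the metric names listed above):
# Illustration only: counter-style metrics matching the names above,
# using prometheus_client; not Quickwit's actual Rust implementation.
from prometheus_client import Counter

ingested_bytes = Counter("ingested_bytes", "Bytes received by the ingest API")
num_docs_in = Counter("num_docs_in", "Documents received for indexing")
num_docs_out = Counter("num_docs_out", "Documents written out by the indexer")

def record_batch(raw_docs):
    ingested_bytes.inc(sum(len(d) for d in raw_docs))
    num_docs_in.inc(len(raw_docs))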
|
2025-04-01T06:40:09.800609
| 2023-09-08T09:34:09
|
1887307268
|
{
"authors": [
"fmassot"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9964",
"repo": "quickwit-oss/quickwit",
"url": "https://github.com/quickwit-oss/quickwit/issues/3816"
}
|
gharchive/issue
|
Use index UID instead of index ID for the queue_id
We missed changing that when we introduced the index UID.
See https://github.com/quickwit-oss/quickwit/blob/ce856974177a26fc96bfabdb7768b0bcae2cbdea/quickwit/quickwit-indexing/src/source/ingest_api_source.rs#L82
duplicate of #3559
|
2025-04-01T06:40:09.814264
| 2022-06-19T00:16:44
|
1275915543
|
{
"authors": [
"codecov-commenter",
"fulmicoton",
"saroh"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9965",
"repo": "quickwit-oss/tantivy",
"url": "https://github.com/quickwit-oss/tantivy/pull/1393"
}
|
gharchive/pull-request
|
add support for matching distance in phrase query
The first commit still allows ~ in words, which might be a bit flimsy.
The second one exposes distance for all leaves. Do we need to explicitly fail for leaves that do not handle slop/distance and implement it for those that do? (Term maybe, I haven't checked)
Requires a doc update also :)
Not really sure about the name; slop is hard to understand, distance is maybe a little too vague.
closes #1390
Codecov Report
Merging #1393 (1ec8212) into main (83d0c13) will increase coverage by 0.00%.
The diff coverage is 100.00%.
:exclamation: Current head 1ec8212 differs from pull request most recent head c83bbb7. Consider uploading reports for the commit c83bbb7 to get more accurate results
@@ Coverage Diff @@
## main #1393 +/- ##
=======================================
Coverage 94.29% 94.30%
=======================================
Files 236 236
Lines 43418 43472 +54
=======================================
+ Hits 40942 40996 +54
Misses 2476 2476
Impacted Files
Coverage Δ
query-grammar/src/query_grammar.rs
99.67% <100.00%> (+0.01%)
:arrow_up:
query-grammar/src/user_input_ast.rs
97.87% <100.00%> (+0.09%)
:arrow_up:
src/query/phrase_query/phrase_query.rs
91.54% <100.00%> (+0.37%)
:arrow_up:
src/query/query_parser/logical_ast.rs
88.37% <100.00%> (+1.19%)
:arrow_up:
src/query/query_parser/query_parser.rs
94.95% <100.00%> (+0.07%)
:arrow_up:
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 83d0c13...c83bbb7. Read the comment docs.
@fulmicoton @PSeitz can we define the slop as a u8, since that is the type chosen for the fuzzy query distance, and 255 seems to be a good enough max distance for any Query impl for the time being?
Right now let's not handle fuzzy query in the grammar, it would break quickwit.
For the naming, I'd stick to slop in the grammar for the moment.
I've got this PR https://github.com/saroh/tantivy/pull/1/files which I can push in here or open later. Adds support for "foo"~1
|
2025-04-01T06:40:09.839495
| 2018-05-28T16:48:58
|
327078088
|
{
"authors": [
"Hiffi",
"brian-armstrong"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9966",
"repo": "quiet/libcorrect",
"url": "https://github.com/quiet/libcorrect/issues/20"
}
|
gharchive/issue
|
Bug in correct_reed_solomon_decode()
I have a small problem with this function. If there was no error during transmission, the last parameter is still empty. To fix this problem, I have to do something like this:
err = correct_reed_solomon_decode(rs->encoder, rs->encoded, rs->block_length, rs->recvmsg);
if (rs->recvmsg[0] == 0) memcpy(rs->recvmsg, rs->encoded, rs->block_length);
I think this behavior is quite confusing, especially since the parameter is modified if there was an error!
That's why I'm really confused. I couldn't locate why this is happening.
I use this as a helper:
#ifndef _correct_cpp_h
#define _correct_cpp_h
extern "C" {
#include <correct.h>
}
struct rs_struct{
size_t const_block_length;
size_t const_message_length;
size_t block_length;
size_t message_length;
size_t min_distance;
char *msg;
uint8_t *encoded;
correct_reed_solomon *encoder;
int *indices;
uint8_t *corrupted_encoded;
uint8_t *erasure_locations;
unsigned char *recvmsg;
rs_struct(size_t block_length, size_t min_distance) {
this->const_block_length = this->block_length = block_length;
this->const_message_length = this->min_distance = min_distance;
this->message_length = block_length - min_distance;
this->msg = (char*) calloc(message_length, sizeof(char));
this->encoded = (uint8_t*) malloc(block_length * sizeof(uint8_t));
this->encoder = correct_reed_solomon_create(correct_rs_primitive_polynomial_ccsds, 1, 1, min_distance);
this->indices = (int*) malloc(block_length * sizeof(int));
this->corrupted_encoded = (uint8_t*) malloc(block_length * sizeof(uint8_t));
this->erasure_locations = (uint8_t*) malloc(min_distance * sizeof(uint8_t));
this->recvmsg = (unsigned char*) malloc(sizeof(unsigned char) * block_length);
}
~rs_struct() {}
void reset() {
this->block_length = this->const_block_length;
this->min_distance = this->const_message_length;
this->message_length = block_length - min_distance;
memset(this->msg, 0, this->message_length);
memset(this->encoded, 0, this->block_length);
memset(this->indices, 0, this->block_length);
memset(this->corrupted_encoded, 0, this->block_length);
memset(this->erasure_locations, 0, this->min_distance);
memset(this->recvmsg, 0, this->block_length);
}
};
#endif
And then use it like that:
...
int main(int argc, char *argv[]) {
rs_struct *rs = new rs_struct(255, 32);
...
while(1) {
if (mySwitch.available()) {
rs->encoded = (uint8_t*)mySwitch.getReceivedValue();
cout<<"Empfangen: "<<rs->encoded<<"\n";
int err = correct_reed_solomon_decode(rs->encoder, rs->encoded, rs->block_length, rs->recvmsg);
if (rs->recvmsg[0] == 0)
memcpy(rs->recvmsg, rs->encoded, rs->block_length);
cout<<"Decodiert: "<<rs->recvmsg<<"(Message Length: "<<err<<" | String Length: "<<strlen((char*)rs->recvmsg)<<"\n";
...
rs->reset();
mySwitch.resetAvailable();
}
}
exit(0);
}
Overall I have to say that it works really nicely for me. Thanks for that 👍
Thanks, I hope it works well for you.
The one thing I can think of here is that it won't copy the received message when there are too many errors. Is err -1 when you're seeing an empty recvMsg? I didn't think it'd be useful to copy back a corrupted message, so it doesn't do that, but maybe it should.
Code:
memcpy(rs->encoded, (uint8_t*)mySwitch.getReceivedValue(), rs->block_length);
cout<<"Received: "<<rs->encoded<<"\n";
int err = correct_reed_solomon_decode(rs->encoder, rs->encoded, rs->block_length, rs->recvmsg);
cout<<"Decoded: "<<rs->recvmsg<<"(Message Length: "<<err<<" | String Length: "<<strlen((char*)rs->recvmsg)<<")\n";
Output:
Received: test
Decoded: (Message Length: 223 | String Length: 0)
Now comes the crazy part...
Code:
if ( recvfrom_inet_dgram_socket(sfd,rs->encoded,rs->block_length, src_host,sizeof(src_host),src_service,sizeof(src_service),0,LIBSOCKET_NUMERIC) < 0 ){
perror(0);
exit(1);
}
cout<<"Connection from "<<src_host<<" port "<<src_service<<": "<<rs->encoded<<"\n";
int err = correct_reed_solomon_decode(rs->encoder, rs->encoded, rs->block_length, rs->recvmsg);
cout<<"Decoded: "<<rs->recvmsg<<"(Message Length: "<<err<<" | String Length: "<<strlen((char*)rs->recvmsg)<<")\n";
Output:
Connection from <IP_ADDRESS> port 55723: 1234
Decoded: 1234(Message Length: 223 | String Length: 4)
The only difference is that getReceivedValue() returns a char*, while recvfrom_inet_dgram_socket() takes a void* parameter.
Unfortunately I still don't have quite enough code here to see what's going on. Can you come up with a short example that demonstrates this behavior and then provide me with the whole thing? I've reviewed the code in the decoder and can't find a likely explanation for how it could return a positive value but not write to the received message pointer.
Okay, it took me some hours, but I think I found the bug.
Code:
#include <stdlib.h>
#include <stdio.h>
#include <iostream>
#include <unistd.h>
#include <string.h>
#include "correct_cpp.h"
using namespace std;
int main() {
rs_struct *rs = new rs_struct(255, 32);
char buf[500] = { 72, 97, 108, 108, 111, 32, 87, 101, 108, 116, 10, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 217, 1, 214,
5, 237, 210, 42, 9, 139, 10, 177, 149, 18, 112, 11, 151, 27, 202, 75, 66, 116, 2, 121,
145, 199, 123, 108, 53, 222, 90, 92, 121};
char buf2[500] = { 72, 97, 108, 108, 111, 32, 87, 101, 108, 116, 10, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0};
memcpy(rs->encoded, buf2, rs->block_length);
int err = correct_reed_solomon_decode(rs->encoder, rs->encoded, rs->block_length, rs->recvmsg);
cout<<"Decoded: "<<rs->recvmsg<<"(Message Length: "<<err<<" | String Length: "<<strlen((char*)rs->recvmsg)<<"\n";
memcpy(rs->encoded, buf, rs->block_length);
err = correct_reed_solomon_decode(rs->encoder, rs->encoded, rs->block_length, rs->recvmsg);
cout<<"Decoded: "<<rs->recvmsg<<"(Message Length: "<<err<<" | String Length: "<<strlen((char*)rs->recvmsg)<<"\n";
delete rs;
return 0;
}
Output:
Decoded: (Message Length: 223 | String Length: 0
Decoded: Hallo Welt
(Message Length: 223 | String Length: 11
If the message is cut off at the end, the bytes for the redundancy are missing (i.e., zeros). I would assume that the decoder returns -1, because it cannot decode anything. But unfortunately it returns the message length.
This is a pretty fascinating bug report, but ultimately I've decided this is actually the correct behavior.
The first oddity to notice here is that a message that's all 0s has a parity section that's also all 0s (this may not be true for other primitive polynomials/FCR/root gaps, but is true for CCSDS/1/1).
The second issue is that your message is less than 16 characters long. For a block with 32 roots, up to 16 of the 255 bytes can be corrupted entirely and the message can still be recovered. By removing the last 32 bytes of the message and replacing them with 0, it would seem that decode should indeed error out. Instead, it succeeds but gives you back a payload of all 0s. That's because the block you gave to decode is actually less than 16 bytes away from the valid, all-0s block, and with the bytes repaired, it seems to be a successful decode from Reed-Solomon's point of view.
Unfortunately, with enough bytes replaced, this can happen. I have tried to make a note of this in the comments in correct.h:
In most cases, if the block is too corrupted, this function
will return -1 and not perform decoding. It is possible but
unlikely that the payload written to msg will contain
errors when this function returns a positive value.
It seems you hit one of these unfortunate but possible cases. If you want to reduce the likelihood that this can happen, you may want to add a CRC32 checksum to your message. Reed-Solomon can reject some message failures but not as well a good checksum can.
|
2025-04-01T06:40:09.868250
| 2018-03-29T07:49:27
|
309652911
|
{
"authors": [
"ChaelKruip",
"antw"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9967",
"repo": "quintel/etmodel",
"url": "https://github.com/quintel/etmodel/issues/2750"
}
|
gharchive/issue
|
Profitability chart for dashboard item pop-up looks very ugly
Are you able to reproduce this consistently? Although I don't see how this could affect anything, does it persist after providing a translation for the hydrogen turbine? Does it happen with a default scenario?
If it's still broken could you provide details of the branches, scenario, and browser being used?
It looks normal to me locally and on beta:
On beta this still doesn't look ideal:
|
2025-04-01T06:40:09.871392
| 2021-08-31T13:43:34
|
983880195
|
{
"authors": [
"MartLubben",
"mabijkerk",
"redekok"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9968",
"repo": "quintel/etmodel",
"url": "https://github.com/quintel/etmodel/issues/3817"
}
|
gharchive/issue
|
CCUS for waste incineration should be added to the investment table
The investment table now only includes the waste incinerator and waste CHP without CCS:
We are maybe removing this investment table: https://github.com/quintel/etmodel/issues/3781
Let's remove it in the September 2021 deploy. Don't you think, @redekok Roos?
Do you know when the investment table will be removed @MartLubben? It's just a minor effort to update it so I wouldn't mind fixing that before the deploy.
What are your thoughts on this @mabijkerk?
As I understood from Mart it is not yet fully clear what will happen with the investment table. It might be decided that it will stay in the model. If @MartLubben agrees and you have time to pick this up @redekok then that would be great!
Unfortunately, I don't have any time to pick this up anytime soon anymore. Perhaps @Charlottevm or @MartLubben could help out here?
|
2025-04-01T06:40:09.873082
| 2023-02-21T10:51:50
|
1593255383
|
{
"authors": [
"noracato"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9969",
"repo": "quintel/etmodel",
"url": "https://github.com/quintel/etmodel/issues/4085"
}
|
gharchive/issue
|
Revert SavedScenario to older version through the API
Add the ability to revert to an older version of a SavedScenario.
Possible solution: when making a PUT request with a scenario_id present, and this id is present in the history, we revert to this scenario. This will erase all of the 'future' after said scenario without any possibility of getting it back. To ensure users don't revert by accident, we could add an extra parameter revert_to instead, or as a complement.
References quintel/etengine#1320
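For illustration, a hypothetical client-side call using the suggested revert_to parameter; the endpoint path, payload shape and auth header are assumptions, not the actual ETEngine/ETModel API:
# Hypothetical sketch of reverting a SavedScenario via a PUT request with
# the proposed revert_to parameter. Paths and field names are assumptions.
import requests

def revert_saved_scenario(base_url, saved_scenario_id, old_scenario_id, token):
    resp = requests.put(
        f"{base_url}/api/v1/saved_scenarios/{saved_scenario_id}",
        json={"scenario_id": old_scenario_id, "revert_to": True},
        headers={"Authorization": f"Bearer {token}"},
    )
    resp.raise_for_status()
    return resp.json()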
I think this one is still on the wishlist!
|
2025-04-01T06:40:09.874879
| 2023-11-24T23:43:28
|
2010326445
|
{
"authors": [
"quintindunn"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9970",
"repo": "quintindunn/lapsepy",
"url": "https://github.com/quintindunn/lapsepy/issues/74"
}
|
gharchive/issue
|
[Future-Proofing] - Better replication of API calls
When writing the library I started by replicating all of the headers sent in the requests. When experimenting I found out that all I needed to send was the authorization header, so for simplicity that's all I included in the library. With this, the developers could easily distinguish requests from the actual app and from Lapsepy just by checking something as simple as the user agent. The device ID is also sent, so I'm not sure how spoofing that will work, though users could probably use their own unique ID with few modifications to my LapseRefreshTokenSniffer project.
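To make the contrast concrete, a sketch of the two request styles; every header value except authorization is a placeholder, not captured from the real Lapse app:
# Illustration of "authorization-only" vs. fuller header replication.
# Header names and values other than authorization are placeholders.
import requests

AUTH_ONLY = {"authorization": "Bearer <token>"}

FULLER_REPLICATION = {
    "authorization": "Bearer <token>",
    "user-agent": "<value copied from the real app>",  # placeholder
    "x-device-id": "<per-device unique id>",           # hypothetical header name
}

requests.post("https://example.invalid/graphql", json={}, headers=FULLER_REPLICATION)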
Still needs better replication on other endpoints, but since most of it is done in sync-service, this is enough for now.
|
2025-04-01T06:40:09.884755
| 2022-02-20T23:02:51
|
1145160317
|
{
"authors": [
"jacob-keller"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9971",
"repo": "quisquous/cactbot",
"url": "https://github.com/quisquous/cactbot/issues/4152"
}
|
gharchive/issue
|
puppets bunker superior sharp turn incorrect callout
Description
During the Puppets bunker's split boss with the three superior flight units, in the 2nd half of the fight the callout for the "Formation: Sharp Turn" ability was incorrect. It said to move outside when the safe spot was actually on the inside. At that time 1 of the three flight units was already destroyed.
It's possible that the skill IDs we used are simply wrong for inside/outside, or perhaps something changes once one of the bosses dies. I've attached the log file which has the incorrect prediction. The 2nd use of Formation Sharp Turn should have said "inside" (I didn't capture screen footage of this unfortunately).
The first outside callout was correct, and I know I've had correct callouts for inside vs outside in the past. I suspect that the root cause is the death of one of the three units.
Additional information
puppets-bunker-inside-outside.tar.gz
we might want to revert this to just a warning about an upcoming sharp turn for now.
Perhaps heading is enough. I'd have to check. I think that it was caused by one of the three units being dead, but I'm not precisely sure how that impacted it yet.
|
2025-04-01T06:40:09.899174
| 2022-02-14T08:09:12
|
1136926499
|
{
"authors": [
"Ashbajawed",
"labusch"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9972",
"repo": "qurator-spk/sbb_ned",
"url": "https://github.com/qurator-spk/sbb_ned/issues/11"
}
|
gharchive/issue
|
loading forever
The NED model is running in a forever loop; it's not giving any output, just stuck between the "loading embeddings" and "done" prompts.
Without a more detailed error message, it is not possible to solve your problem.
It is likely that some sub-process has terminated due to some problem and the error log is not propagated from the sub-process.
In order to repeat the computation in single process mode, set all the _PROCESSES configuration variables to 0.
Example:
https://github.com/qurator-spk/sbb_ned/blob/master/qurator/sbb_ned/webapp/de-config-debug.json
Then, you should observe some error message that can help to finally solve the underlying problem.
I made the said changes in the config file, and now I'm getting this error:
Could you provide your config file?
I'm using this file
https://github.com/qurator-spk/sbb_ned/blob/master/qurator/sbb_ned/webapp/en-config.json
There were some entries missing in that file. I added the missing entries.
Could you update the file and retry?
|
2025-04-01T06:40:09.900802
| 2015-06-07T02:04:48
|
85837889
|
{
"authors": [
"andrerpena",
"chollier",
"ipy",
"jneto"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9973",
"repo": "quri/react-bootstrap-datetimepicker",
"url": "https://github.com/quri/react-bootstrap-datetimepicker/issues/58"
}
|
gharchive/issue
|
set locale of moment.js
It would be great to expose a prop to set the locale
I never thought about that, this is a good point
@chollier, what is the status of supporting locale? More specifically, I'm interested in being able to translate things like 'June 12' and the weekday names. This is a great project by the way. Thanks for that.
@andrerpena , I made a fork of the project and translated it to pt-br, but it is hardcoded for now.
|
2025-04-01T06:40:09.909330
| 2024-05-31T22:54:51
|
2328684634
|
{
"authors": [
"aarontrowbridge",
"albertomercurio",
"ytdHuang"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9974",
"repo": "qutip/QuantumToolbox.jl",
"url": "https://github.com/qutip/QuantumToolbox.jl/pull/152"
}
|
gharchive/pull-request
|
functionality for permuting tensor product order of Qobj
This pr addresses issue #95. It creates a function permute which permutes the subsystem order of composite Ket, Bra, and Operator objects.
A new file src/qobj/tensor_functions.jl was created to store tensor-related methods.
I believe I addressed all of the comments with this PR; the implementation is totally different. Thanks to @albertomercurio for pointing out a much cleaner way to address this.
Great! You have to include the missing docstrings in the documentation. Could you also format the documents you changed?
I will clean up this PR today, I also remembered I need to add tests for error handling.
Great, everything seems fine, except for documentation and format checking. Can you add the function to the documentation and format all the changed files?
yes! I totally omitted those two issues.
BTW, can you remove the JuliaFormatter dependency? We don't need that. You can format the code by just using JuliaFormatter from another environment (or just calling Format Document from vscode)
@aarontrowbridge
I have added some comments above.
@ytdHuang thanks! I saw, but I came down with something yesterday and have been rather under the weather. I will try to resolve everything today
@aarontrowbridge
Thank you for addressing all the comments.
I saw you moved back the tensor and kron functions. But it seems that you didn't delete the qobj/tensor_functions.jl file.
|
2025-04-01T06:40:09.916056
| 2021-08-17T06:51:00
|
972359179
|
{
"authors": [
"MrRobot2211",
"coveralls"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9975",
"repo": "qutip/qutip-cupy",
"url": "https://github.com/qutip/qutip-cupy/pull/42"
}
|
gharchive/pull-request
|
Add info to readme on how to use
This adds basic information to initialize a Qobj from a CuPyDense array
Pull Request Test Coverage Report for Build<PHONE_NUMBER>
2 of 3 (66.67%) changed or added relevant lines in 1 file are covered.
No unchanged relevant lines lost coverage.
Overall coverage decreased (-0.2%) to 87.821%
| Changes Missing Coverage | Covered Lines | Changed/Added Lines | % |
|---|---|---|---|
| src/qutip_cupy/__init__.py | 2 | 3 | 66.67% |

Totals:
- Change from base Build<PHONE_NUMBER>: -0.2%
- Covered Lines: 274
- Relevant Lines: 312
💛 - Coveralls
|
2025-04-01T06:40:09.964290
| 2024-08-12T14:39:27
|
2461189481
|
{
"authors": [
"drk-mtr",
"maiieul"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9984",
"repo": "qwikifiers/qwik-ui",
"url": "https://github.com/qwikifiers/qwik-ui/issues/924"
}
|
gharchive/issue
|
[📖]In the instructions for installing the styled kit, highlight that the "Make it yours" button is on the current page
Suggestion
On this page: https://qwikui.com/docs/styled/install/
It states Click on "make it yours" in order to customise the theme.
It wasn't immediately obvious to me that this is a button in the top header.
For people who are a bit slow like me, is it worth explicitly stating Click on "make it yours" in the header section of this page or similar?
Agreed, up for a PR? 😁
|
2025-04-01T06:40:09.965963
| 2024-06-29T05:09:36
|
2381495544
|
{
"authors": [
"qwrtln",
"tomaas-zeman"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9985",
"repo": "qwrtln/Homm3BG-mission-book",
"url": "https://github.com/qwrtln/Homm3BG-mission-book/pull/30"
}
|
gharchive/pull-request
|
Fix header space
This fix works now but it's not perfect. Depending on the content and wrapping of multicols, the gap can still slightly vary. How can I fix it more reliably?
It's fine. In the rewritten rule book, we just used vspaces (sometimes conditionally) to fix those.
|
2025-04-01T06:40:09.986110
| 2020-06-08T18:07:25
|
634824825
|
{
"authors": [
"GegznaV",
"cbeleites"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9986",
"repo": "r-hyperspec/hySpc.dplyr",
"url": "https://github.com/r-hyperspec/hySpc.dplyr/issues/20"
}
|
gharchive/issue
|
Unit test that fails: expect_error(filter(chondro, spc > 250))
https://github.com/r-hyperspec/hySpc.dplyr/blob/6ae23ea0883f0984e304341647ea8cfe8a798c1a/R/filter.R#L78
I suggest accepting #18 before fixing this issue.
fixed by 131c7dbe
|
2025-04-01T06:40:10.118639
| 2023-05-23T15:40:04
|
1722333121
|
{
"authors": [
"blester125",
"craffel"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9987",
"repo": "r-three/git-theta",
"url": "https://github.com/r-three/git-theta/issues/218"
}
|
gharchive/issue
|
Parameter groups that are more than just tensors?
Should we make sure that our logic is able to handle parameter groups that are more than a single tensor? For example, if I defined a "parameter group" to be both the weight and the bias in a feed-forward layer (because I want to get a bit of a boost from using a larger file, or because I don't want them to be changed independently), should our code be able to hash, serialize, save, etc., this collection of tensors as a single group?
We already have something similar in how updates with multiple values are serialized. This shouldn't be too difficult to implement with something like jax's tree_map abstraction.
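As a rough illustration (not git-theta's actual API; the names here are made up), treating a parameter group as a pytree and mapping the per-tensor logic over its leaves could look like:

```python
import hashlib

import jax
import jax.numpy as jnp
import numpy as np

# Hypothetical "parameter group": a feed-forward layer's weight and bias together.
param_group = {
    "weight": jnp.ones((4, 4)),
    "bias": jnp.zeros((4,)),
}

def hash_leaf(x) -> str:
    # Hash a single tensor's raw bytes.
    return hashlib.sha256(np.asarray(x).tobytes()).hexdigest()

# tree_map applies the per-tensor logic to every leaf of the group, so the same
# code path works for single tensors and for dicts (or nests) of tensors.
leaf_hashes = jax.tree_util.tree_map(hash_leaf, param_group)

# One stable hash for the whole group, derived from the sorted leaf hashes.
group_hash = hashlib.sha256(
    "".join(sorted(jax.tree_util.tree_leaves(leaf_hashes))).encode()
).hexdigest()
print(group_hash)
```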
I really like the abstraction of "parameter group is a single tensor". Apart from convenience in terms of grouping things in other semantically-meaningful ways, is there any other benefit?
I didn't have a clear use-case; I was just thinking about how one reviewer asked how the groups were identified. That is part of the checkpoint plugin, so technically it is currently user-overrideable.
Also, some of the update classes process dicts instead of single tensors (e.g. the two matrices in LoRA, or the index and value in sparse updates), which requires special code to deal with. If all code could handle that transparently, it could simplify things.
I was mostly posting to see if someone else had a good use-case for it lol.
|
2025-04-01T06:40:10.126672
| 2018-07-07T03:31:47
|
339113480
|
{
"authors": [
"r0oth3x49",
"tofanelli"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9988",
"repo": "r0oth3x49/udemy-dl",
"url": "https://github.com/r0oth3x49/udemy-dl/issues/250"
}
|
gharchive/issue
|
Script doesn't download lectures with / in their names, or any lectures after them
Hey @r0oth3x49
I just noticed something: there are a few courses where a lecture name contains a /. The script can't download the lecture with the /, nor any lecture after it, even if I use the '--lecture-start' flag to get the lecture after it.
It returns this error
Traceback (most recent call last):
File "udemy-dl.py", line 1441, in main()
File "udemy-dl.py", line 1437, in main udemy.course_download(path=options.output, quality=options.quality, unsafe=options.unsafe)
File "udemy-dl.py", line 510, in course_download lecture.dump(filepath=filepath, unsafe=unsafe)
File "F:\Udemy\udemy-dl\udemy_shared.py", line 253, in dump with open(filename, 'wb') as f:
FileNotFoundError: [Errno 2] No such file or directory:
Cheers =)
@tofanelli please do follow the Issue Reporting guideline; I cannot fix the issue unless I understand what the root cause is, what the URL is, etc...
@r0oth3x49 do you have an email address where I can send you more data?
@tofanelli yeah. you can email me
|
2025-04-01T06:40:10.180232
| 2016-02-28T07:12:10
|
137019378
|
{
"authors": [
"ChandraAddala",
"shintasmith",
"usnavi"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9990",
"repo": "rackerlabs/blueflood",
"url": "https://github.com/rackerlabs/blueflood/pull/634"
}
|
gharchive/pull-request
|
BFF code cleanup.
In order for me to make changes to talk to the new API, I felt the existing code/setup needed some cleanup. In the process I made the following changes.
Created vagrant configuration so that we can run grafana with blueflood finder code pointing to local blueflood run.
Added logging capability. I can't deal with print statements. Also figured out a way to enable these logs in staging/prod servers.
Fixed the current enum bug that exists in prod while populating drop down for enum metrics.
Organized dependencies(setup.py, test_requirements.txt).
Wrote brand new tests to capture the behavior of finder for various scenarios.
I learned a little bit about the grafana server setup that happens with the heat template. I have documented all that info. I would appreciate it if you could review the wiki as well, along with this.
[Grafana server setup with blueflood finder](https://one.rackspace.com/display/cloudmetrics/Blueflood+Finder+grafana+server+internal+details)
Other than clarifying my vagrant question, lgtm.
me too... looks good other than the logging clarifications.
|
2025-04-01T06:40:10.189133
| 2015-08-08T20:11:15
|
99831354
|
{
"authors": [
"davezuko",
"ryanflorence",
"zoilorys"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9991",
"repo": "rackt/react-router",
"url": "https://github.com/rackt/react-router/issues/1682"
}
|
gharchive/issue
|
1.0.0-beta3 Nested route with params(and without) issue
I use route setup like this
<Route component={CoreLayout}>
<Route name='home' path='/' component={HomeView} />
<Route name='user-info' path='user/:id' component={Profile} />
<Route name='about' path='about' component={AboutView} />
<Route name='dashboard' path='dashboard' component={DashboardView} />
<Redirect from='/admin' to='dashboard' />
</Route>
Everything works fine except the 'user/:id' path, and in fact any route like 'path/path' (when there is a slash in the path string). The server always returns 404.
I tried without params, just something like path='/user/profile', and still got no result.
Tried nesting them like
<Route name='user-info' path='user' component={UsersView}>
<Route path='profile' component={Profile} />
</Route>
Where UsersView was just a container that rendered {this.props.children}.
Does anyone have some tips about what might be the problem? Or maybe I'm missing the point?
This issue stemmed from use with my react-redux-starter-kit, and I believe it was just caused by the webpack dev server not handling the routes correctly, and has been fixed here: https://github.com/davezuko/react-redux-starter-kit/commit/e29692c33871e9339834308fcdc9af6ded88a203.
Hopefully that clarifies things for the react-router authors, as it's probably just a non-issue.
Sounds like it's not us, let me know if otherwise.
|
2025-04-01T06:40:10.238726
| 2020-10-17T01:27:50
|
723611731
|
{
"authors": [
"keflavich"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9992",
"repo": "radio-astro-tools/spectral-cube",
"url": "https://github.com/radio-astro-tools/spectral-cube/issues/674"
}
|
gharchive/issue
|
Dump to zarr fails partway through
I've tried dumping to zarr several times with repeatable failures, e.g.:
In [9]: cube = SpectralCube.read('G327.29_B6_spw3_12M_h2co303.image/', format='casa_image')
WARNING: StokesWarning: Cube is a Stokes cube, returning spectral cube for I component [spectral_cube.io.core]
In [10]: cube
Out[10]:
DaskVaryingResolutionSpectralCube with shape=(147, 784, 1080) and unit=Jy / beam and chunk size (7, 56, 72):
n_x: 1080 type_x: RA---SIN unit_x: deg range: 238.242362 deg: 238.325221 deg
n_y: 784 type_y: DEC--SIN unit_y: deg range: -54.636348 deg: -54.601548 deg
n_s: 147 type_s: FREQ unit_s: Hz range:<PHONE_NUMBER>24.006 Hz:216367735320.595 Hz
In [11]: cube = cube.rechunk(save_to_tmp_dir=True)
Illegal instruction (core dumped)
and
Beginning field W43-MM2 band 6 config 12M line sio spw 1 suffix .image
WARNING: StokesWarning: Cube is a Stokes cube, returning spectral cube for I component [spectral_cube.io.core]
Saving to tmpdir
[####### ] | 19% Completed | 1.3sIllegal instruction (core dumped)
This issue may be solved by changes to default chunk sizing. I'm closing this for now but it might be a real issue.
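If anyone wants to experiment, one possible workaround sketch is to pass an explicit chunk specification instead of relying on the defaults (this assumes rechunk forwards a dask-style chunks argument; treat it as untested):

```python
from spectral_cube import SpectralCube

cube = SpectralCube.read('G327.29_B6_spw3_12M_h2co303.image/', format='casa_image')

# Assumption: rechunk accepts an explicit dask-style `chunks` spec; fewer, larger
# chunks may avoid whatever the default sizing triggers. Untested sketch.
cube = cube.rechunk(chunks=(-1, 'auto', 'auto'), save_to_tmp_dir=True)
```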
|
2025-04-01T06:40:10.248295
| 2024-06-25T13:36:01
|
2372725119
|
{
"authors": [
"Kevin2"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9993",
"repo": "radionets-project/pyvisgen",
"url": "https://github.com/radionets-project/pyvisgen/pull/31"
}
|
gharchive/pull-request
|
Avoid samp obs
Use of observation class to pass sampling options to fits writer
avoid astropy versions >6.1.0
|
2025-04-01T06:40:10.254011
| 2024-10-22T23:34:18
|
2606751709
|
{
"authors": [
"rynowak",
"sk593",
"sylvainsf",
"willtsai"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9994",
"repo": "radius-project/community",
"url": "https://github.com/radius-project/community/issues/62"
}
|
gharchive/issue
|
REQUEST: New membership for sk593
GitHub Username
@sk593
Requirements
[x] I have reviewed the community membership guidelines
[x] I have enabled 2FA on my GitHub account, see https://github.com/settings/security
[x] I have subscribed to the Radius community Discord server
[x] I am contributing (any of the following apply: issues, discussions, PRs, reviews) to 1 or more Radius repositories
List of contributions to the Radius project
/radius: APPROVER
Authored ~120 PRs
Opened ~50 issues
Key areas of work: Recipes, Bicep extensibility, Portable Resources, CLI, Terraform
/bicep-types-aws: MAINTAINER
Refactored a lot of the repository code and workflows: https://github.com/radius-project/bicep-types-aws/pull/43
I have also opened issues for errors in our workflows and PRs that I would like to fix, but would need maintainer-level status to do: https://github.com/radius-project/bicep-types-aws/issues/61
AB#13500
+1
+1
+1 to both
|
2025-04-01T06:40:10.260030
| 2023-08-11T20:42:15
|
1847405034
|
{
"authors": [
"jd-carroll",
"magicspon",
"needim"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9996",
"repo": "radix-ui/themes",
"url": "https://github.com/radix-ui/themes/issues/15"
}
|
gharchive/issue
|
global scope
https://github.com/radix-ui/themes/blob/39f5a438db0d8496134d4bc35d71899fee1b2562/packages/radix-ui-themes/src/styles/tokens.css#L8C16-L8C16
These tokens should be in :root namespace.
Not sure I follow, is there a reason they need to be in the root namespace?
If they are placed in the root namespace then nesting of theme scaling wouldn't be possible.
Take a look at this example which shows the ability to do nested scaling:
https://codesandbox.io/s/intelligent-framework-2xqjp8?file=/src/App.tsx
Note: The sizes are a little difficult (they look similar), but if you right click and view the actual font-size of the headings, you'll see the difference
agreed.
I'm a bit confused... Shouldn't I be able to set --scaling: 2 anywhere in the DOM and have all nested values use this scaling value?
:root {
--scaling: 1;
--spacing-5: calc(2rem * var(--scaling))
}
.my-node {
--scaling: 2
}
Why is the my-node scaling not applying to child elements... Am I missing something?
|
2025-04-01T06:40:10.325417
| 2020-11-17T14:58:19
|
744811523
|
{
"authors": [
"derBeukatt",
"rafaelsetragni"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9997",
"repo": "rafaelsetragni/awesome_notifications",
"url": "https://github.com/rafaelsetragni/awesome_notifications/issues/39"
}
|
gharchive/issue
|
Cannot build on iOS
As the title states, I cannot build my app using your library on iOS.
I do not use the lib on iOS at all, I just need it for Android and cannot exclude it from the iOS build.
Environment:
MacOS 10.15.7
Flutter 1.22.4
XCode 12.1
lib version 0.0.5+2
It throws the following error:
awesome_notifications-0.0.5+2/ios/Classes/lib/SwiftAwesomeNotificationsPlugin.swift:104:17: error: parameter of 'messaging(_:didReceiveRegistrationToken:)' has different optionality than required by protocol 'MessagingDelegate'
public func messaging(_ messaging: Messaging, didReceiveRegistrationToken fcmToken: String) {
^
?
FirebaseMessaging.MessagingDelegate:2:19: note: requirement 'messaging(_:didReceiveRegistrationToken:)' declared here
optional func messaging(_ messaging: Messaging, didReceiveRegistrationToken fcmToken: String?)
Suggesting the parameter must be optional. I made it optional and unwrapped inside your swift plugin and the build succeeded. Maybe you can give more insight.
Hey there.
Is this just a specific problem for me, or does anyone else run into this?
I was unable to reproduce your error. What steps did you take to create your iOS app?
Try modifying the messaging method below in your local code:
https://github.com/rafaelsetragni/awesome_notifications/blob/b3e4da96f0302fef43f4c801d819dad044021332/ios/Classes/lib/SwiftAwesomeNotificationsPlugin.swift#L97
Hi. Thanks for your answer.
I was just normally building my app via "flutter build ios" and the error occurred.
I already did modify the following code:
https://github.com/rafaelsetragni/awesome_notifications/blob/b3e4da96f0302fef43f4c801d819dad044021332/ios/Classes/lib/SwiftAwesomeNotificationsPlugin.swift#L104
public func messaging(_ messaging: Messaging, didReceiveRegistrationToken fcmToken: String) {
print("Firebase registration token: \(fcmToken)")
let dataDict:[String: String] = ["token": fcmToken]
NotificationCenter.default.post(name: Notification.Name("FCMToken"), object: nil, userInfo: dataDict)
// TODO: If necessary send token to application server.
// Note: This callback is fired at each app startup and whenever a new token is generated.
}
to the following
public func messaging(_ messaging: Messaging, didReceiveRegistrationToken fcmToken: String?) {
if let unwrapped = fcmToken {
print("Firebase registration token: \(unwrapped)")
let dataDict:[String: String] = ["token": unwrapped]
NotificationCenter.default.post(name: Notification.Name("FCMToken"), object: nil, userInfo: dataDict)
// TODO: If necessary send token to application server.
// Note: This callback is fired at each app startup and whenever a new token is generated.
}
}
But I just don't know why I had to do this.
You are right. My messaging method is deprecated; that's why you couldn't compile the final version in your app. I'm going to update it.
Do you use any other Firebase services in your app? What versions are they?
I use FirebaseMessaging 7.1.0. That is the version installed by cocoapods.
Better! Do you want to do a fork and send me this fix? Your name will be included as a contributor to this project.
Does this mean you experience the same problem after updating to the above version?
If so, I'll gladly provide the fix. I just don't want to break it for somebody else.
I did not experience this issue. For me, when I applied your changes, Xcode complained about the override operation not being possible due to the messaging methods being different.
But send me your fork. This way I can merge your source, see every change that you made, and figure out what's going on.
In fact, I merged your changes into my local files, and the Firebase messaging method that you sent always complained about the override operation not being possible. So I could not reproduce your error.
But I think I'm figuring out what's going on.
This plugin is not necessary to send push notifications using awesome_notifications. All that you need is inside the plugin, and you only need to strictly follow the steps inside the Using Firebase Services (Optional) topic. It is not necessary to use any other plugin or implement any extra Kotlin or Java script.
Maybe the firebase_messaging version is conflicting with my awesome_notifications Firebase library, because firebase_messaging is using another library version that overrides mine.
Bingo! That was exactly what happened!
After updating my libraries with pod update, I got your error!
In the newer Firebase version, the Firebase team made a lot of changes, including changing the messaging method without keeping a deprecated version. So, all the sources below 7.1.0 will break. They also changed a lot of things in the Android SDK, such as deprecating the entire FirebaseInstanceId.
You got the newer Firebase version after installing a Firebase plugin or updating your pod files. That's why you have those issues.
All the other awesome_notifications developers are going to face the same thing after updating the Firebase package. So your change is totally legit.
The problem that I'm facing is how to keep both methods working at the same time so as not to break older projects.
Can you cancel the current pull request to the master branch and send the same pull request to the update-firebase-sdk branch instead? There are a lot of changes to do on the Android side as well.
Yeah that is exactly what I was hoping was not happening.
But now that we know the problem it is maybe fixable.
If you need anything else from me, don't hesitate to ask. Thanks for your effort :)
I've merged your pull request and included support for older versions, as in the code below. This way, both library versions are compatible, with the core code being reusable:
// For Firebase Messaging versions older than 7.0
// https://github.com/rafaelsetragni/awesome_notifications/issues/39
public func messaging(_ messaging: Messaging, didReceiveRegistrationToken fcmToken: String) {
didReceiveRegistrationToken(messaging, fcmToken: fcmToken)
}
public func messaging(_ messaging: Messaging, didReceiveRegistrationToken fcmToken: String?) {
if let unwrapped = fcmToken {
didReceiveRegistrationToken(messaging, fcmToken: unwrapped)
}
}
private func didReceiveRegistrationToken(_ messaging: Messaging, fcmToken: String){
print("Firebase registration token: \(fcmToken)")
let dataDict:[String: String] = ["token": fcmToken]
NotificationCenter.default.post(name: Notification.Name("FCMToken"), object: nil, userInfo: dataDict)
}
|
2025-04-01T06:40:10.332235
| 2022-04-27T01:58:32
|
1216678173
|
{
"authors": [
"Divish1032",
"TimeLord2010",
"nurhazbiy"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9998",
"repo": "rafaelsetragni/awesome_notifications",
"url": "https://github.com/rafaelsetragni/awesome_notifications/issues/480"
}
|
gharchive/issue
|
Error with extra step in 0.7.0-beta.2
I'm trying to compile my app on iOS but I was getting a MissingImplementationException in AwesomeNotification.initialize.
I then noticed that this plugin has a beta version and that, in it, an extra step is required for the iOS platform. However, I keep getting errors.
If I copy the following lines:
SwiftAwesomeNotificationsPlugin.setPluginRegistrantCallback { registry in
SwiftAwesomeNotificationsPlugin.register(
with: registry.registrar(forPlugin: "io.flutter.plugins.awesomenotifications.AwesomeNotificationsPlugin")!)
FLTSharedPreferencesPlugin.register(
with: registry.registrar(forPlugin: "io.flutter.plugins.sharedpreferences.SharedPreferencesPlugin")!)
}
To AppDelegate.swift, then I get:
.../ios/Runner/AppDelegate.swift:11:5: error: cannot find 'SwiftAwesomeNotificationsPlugin' in scope
But according to the plugin guide: "And you can check how to correctly call each plugin by opening the file GeneratedPluginRegistrant.m". In my GeneratedPluginRegistrant.m, I cannot find SwiftAwesomeNotificationsPlugin. But I do find AwesomeNotificationsPlugin.
So I rewrote the code to:
AwesomeNotificationsPlugin.setPluginRegistrantCallback { registry in
AwesomeNotificationsPlugin.register(
with: registry.registrar(forPlugin: "io.flutter.plugins.awesomenotifications.AwesomeNotificationsPlugin")!)
FLTSharedPreferencesPlugin.register(
with: registry.registrar(forPlugin: "io.flutter.plugins.sharedpreferences.SharedPreferencesPlugin")!)
}
But I just get the same error, but updated:
.../ios/Runner/AppDelegate.swift:11:5: error: cannot find 'AwesomeNotificationsPlugin' in scope
Am I missing an import?
Fix
import shared_preferences_ios
import flutter_background_service_ios
Did the above fix solve your problem? @TimeLord2010
It did. But I had to make some changes after:
import UIKit
import Flutter
import awesome_notifications
import shared_preferences_ios
import flutter_background_service_ios
@UIApplicationMain
@objc class AppDelegate: FlutterAppDelegate {
override func application(
_ application: UIApplication,
didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?
) -> Bool {
GeneratedPluginRegistrant.register(with: self)
FlutterBackgroundServicePlugin.setPluginRegistrantCallback { registry in
GeneratedPluginRegistrant.register(with: registry)
}
SwiftAwesomeNotificationsPlugin.setPluginRegistrantCallback { registry in
SwiftAwesomeNotificationsPlugin.register(
with: registry.registrar(forPlugin: "io.flutter.plugins.awesomenotifications.AwesomeNotificationsPlugin")!)
FLTSharedPreferencesPlugin.register(
with: registry.registrar(forPlugin: "io.flutter.plugins.sharedpreferences.SharedPreferencesPlugin")!)
}
return super.application(application, didFinishLaunchingWithOptions: launchOptions)
}
}
Hi, suddenly out of nowhere, I can't build my apps on iOS anymore; I suspect it's because of this step. Here are the error logs:
Swift Compiler Error (Xcode): Cannot find 'FLTSharedPreferencesPlugin' in scope
/Users/hazbiy/Data/Kerja/Freelance/Project/Risearrow/Flutter/ios/Runner/AppDelegate.swift:17:12
Encountered error while building for device.
I think it's because i remove shared preferences on my main package because i moved it on my secondary packages, and when i check on pubscpec.lock, i can't find any line stating that shared_preferences_ios is installed (meanwhile all other shared_preferences is installed, even for web and linux). does it mean i need to manually include it in my apps? i thought it would be included by the awesome notification package
|
2025-04-01T06:40:10.396965
| 2020-05-20T08:41:46
|
621575314
|
{
"authors": [
"nephix",
"taleldayekh"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9999",
"repo": "raiden-network/light-client",
"url": "https://github.com/raiden-network/light-client/pull/1550"
}
|
gharchive/pull-request
|
Transaction list fixes
Thank you for submitting this pull request :)
Fixes #1510
Short description
Definition of Done
[x] Steps to manually test the change have been documented
[x] Acceptance criteria are met
[x] Change has been manually tested by the reviewer (dApp)
Steps to manually test the change (dApp)
https://github.com/raiden-network/light-client/issues/1510
I thought that's how it was supposed to look? So all borders should be round?
I thought that's how it was supposed to look? So all borders should be round?
I think you might be right. The icon Sash gave me actually looks like that. I just viewed it in Illustrator.
Oh ok, it was a bit confusing to fix it because I didn't have access to the design
Thanks for the review 🙏
|
2025-04-01T06:40:10.398737
| 2019-02-25T08:32:56
|
413987241
|
{
"authors": [
"pirapira"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10000",
"repo": "raiden-network/raiden-contracts",
"url": "https://github.com/raiden-network/raiden-contracts/issues/603"
}
|
gharchive/issue
|
Typically the corrupt file contains only a part of the measurements
{
"SecretRegistry.registerSecret": 45757,
"TokenNetwork.closeChannel": 111236,
"TokenNetwork.openChannel": 97555,
"TokenNetwork.setTotalDeposit": 44509,
"TokenNetwork.settleChannel": 123338,
"TokenNetwork.unlock 1 locks": 32128,
"TokenNetwork.unlock 6 locks": 66029,
"TokenNetwork.updateNonClosingBalanceProof": 93752
}
This issue keeps track of, at least, detecting this in the CI and blocking merges with incomplete gas measurements.
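A minimal sketch of what such a CI check could look like (the file name and expected-key list here are assumptions for illustration, not the repository's actual tooling; the full expected set would cover every measured call):

```python
import json
import sys

# Assumed set of keys every complete measurement file must contain.
EXPECTED_KEYS = {
    "SecretRegistry.registerSecret",
    "TokenNetwork.closeChannel",
    "TokenNetwork.openChannel",
    "TokenNetwork.setTotalDeposit",
    "TokenNetwork.settleChannel",
    "TokenNetwork.unlock 1 locks",
    "TokenNetwork.unlock 6 locks",
    "TokenNetwork.updateNonClosingBalanceProof",
    # ...plus any other measured calls
}

# Assumed file name for the gas measurement output.
with open("gas.json") as f:
    measurements = json.load(f)

missing = EXPECTED_KEYS - measurements.keys()
if missing:
    print(f"Incomplete gas measurements, missing: {sorted(missing)}")
    sys.exit(1)
```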
This was fixed in 12961bea391e74aacd9f1193bfd08d7a74d349e2
|
2025-04-01T06:40:10.410393
| 2019-02-03T18:50:42
|
406117837
|
{
"authors": [
"XiCynx",
"raiguard"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10001",
"repo": "raiguard/ModernGadgets",
"url": "https://github.com/raiguard/ModernGadgets/issues/130"
}
|
gharchive/issue
|
HDD Temperature Not Functioning
In the below screenshots you can see that the correct values are selected and that it is displaying temperature values. However, in Rainmeter the temperatures are not updating at all and seem to be stuck. The D: Storage is showing the temp for my C: drive, and updating the values in SMV does not seem to do anything either.
Modern Gadgets Version: 1.4.1
HWInfo64 Version: 6.000-3620
https://i.imgur.com/YjHD1Pv.png
https://i.imgur.com/vwkFtTc.png
https://i.imgur.com/GDEQ63O.png
From the screenshots you sent me, it appears that you configured the values for the A: and B: disks, rather than C: and D:. Go back to the SMV and configure the temperatures for Disks C: and D:, and everything should work.
Yup, that seemed to work! My bad, I figured drives A and B were just going in order, but it's the letter assigned to them. Makes sense now that I'm thinking about it.
|
2025-04-01T06:40:10.452838
| 2015-08-06T06:10:17
|
99365522
|
{
"authors": [
"dhh",
"javan",
"leckylao"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10002",
"repo": "rails/actioncable",
"url": "https://github.com/rails/actioncable/issues/55"
}
|
gharchive/issue
|
Heartbeat improvement
Hi @dhh,
On the server side, there's a heartbeat sent from the server to all clients every 3 seconds, and I think this is a bit excessive; it may cause performance issues when dealing with finance or heavy load on the WebSocket.
On the client side, connection_monitor is sending the subscribe message every 4-8 seconds; this would increase the server load, especially when all of the clients send the subscribe message at the same time.
The heartbeat and connection_monitor essentially reintroduce the asynchronous notification polling that WebSockets try to eliminate. I reckon it could be improved by using the socket.onclose event and socket.readyState for reconnection instead of checking for a stale connection.
socket.onclose = function() {
  if (socket.readyState === 3) {
    consumer.connection.reopen();
  }
};
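For reference, a rough sketch of what the heartbeat-based staleness check amounts to (written in Python purely for illustration; this is not Action Cable's actual implementation):

```python
import time

HEARTBEAT_INTERVAL = 3.0                   # server sends a beat every 3 seconds
STALE_THRESHOLD = 2 * HEARTBEAT_INTERVAL   # reconnect after two missed beats

last_beat = time.monotonic()

def on_heartbeat():
    # Called whenever a heartbeat frame arrives from the server.
    global last_beat
    last_beat = time.monotonic()

def connection_is_stale():
    # Detects silently dropped connections that never fire a close event.
    return time.monotonic() - last_beat > STALE_THRESHOLD
```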
connection_monitor is sending the subscribe message every 4-8 seconds
Because your users change pages every 4-8 seconds?
We've found that those callbacks were not reliable across all situations and browsers. The only reliable method we've found has been using a heartbeat.
How is a heartbeat every 3s per connection a performance problem in your case?
Hi @javan,
not sure whether it's a bug or not. But if you
Clone the actioncable-exmaple and https://github.com/leckylao/em-websocket-example
Change actioncable-example/app/assets/javascripts/channels/index.coffee to point to ws://localhost:3001
Run actioncable-example/ rails s as client and run em-websocket-example/ ruby app.rb as server
Then you will see "Received: {"command":"subscribe","identifier":"{"channel":"CommentsChannel"}"}" every 4-8 seconds, which I think it's due to connection_monitor that reopening the connection.
Thanks @dhh for the explanation.
How is a heartbeat every 3s per connection a performance problem in your case?
Some metrics I found:
https://mrotaru.wordpress.com/2013/06/20/12-million-concurrent-connections-with-migratorydata-websocket-server/
In this benchmark scenario, MigratoryData scales up to 12 million concurrent users from a single Dell PowerEdge R610 server while pushing up to 1.015 Gbps live data (each user receives a 512-byte message every minute).
Result in 12m, Average latency 268 milliseconds, Maximum latency 2024 milliseconds, Network Utilization 1.015 Gbps.
http://colobu.com/2015/05/22/implement-C1000K-servers-by-spray-netty-undertow-and-node-js/ Sorry, it's in Chinese; opening it in Chrome should translate it.
Netty Server - Sending a message to all 1.2 million websockets once per minute, with the message content being the current server time. Using single-threaded sending, the server took about 15 seconds in total to finish sending to all 1.2 million connections.
Spray Server - Sending a message to all 1.2 million websockets once per minute, with the message content being the current server time. CPU usage was high, sending was fast, and bandwidth reached 46M. A full broadcast took about 8 seconds.
Undertow - Sending a message to all 1.2 million websockets once per minute, with the message content being the current server time. A full broadcast took about 15 seconds.
A heartbeat would create a similar load on the server and bandwidth, which I think would cause a performance issue with large numbers of concurrent connections.
I think you're going to be disappointed if you actually try to do any Ruby work in these channels and expect that level of performance :). The heartbeat is just a single piece of text data that requires nothing to generate. Actual apps will be doing actual things in these channels that are far slower than what a heartbeat overhead will be.
In any case, I haven't found it to be optional. If you can't reliably tell when the connection has been cut, then the app might be fast, but it won't work well.
I think you're going to be disappointed if you actually try to do any Ruby
work in these channels and expect that level of performance :)
Nah, those are just some metrics I found that can better show the impact. Another thought: I don't think they implement something like a heartbeat, but they can still achieve stability at that level :blush:
The heartbeat is just a single piece of text data that requires nothing to
generate. Actual apps will be doing actual things in these channels that
are far slower than what a heartbeat overhead will be.
In most cases WebSockets are used for notifications, which only happen when an event occurs. In that case, I think the heartbeat generates more traffic than the actual jobs.
In any case, I haven't found it to be optional. If you can't reliably tell
when the connection has been cut, then the app might be fast, but it won't
work well.
In this case, how about moving heartbeat and connection_monitor into an optional configuration, so that they aren't enabled by default? If people face trouble with their connections, then they can enable them.
Optimizing for correctness out of the box over peak performance seems like a better trade-off to me. If someone actually hits a point in a real application where heartbeat traffic proves to be an issue, we can consider offering a way to turn off heartbeats. Thanks for your consideration and I look forward to hearing about your Action Cable deployment!
We've found that those callbacks were not reliable across all situations and browsers
In order to further understand the issue, could you provide an example/demo or explain more about the issue please? How is it not reliable? Thank you.
http://caniuse.com/#search=websocket shows WebSocket now has 87.23% support, so maybe the issue should be (and has been) resolved natively.
This was in testing across a variety of browsers, restore situations, and mobile scenarios. Don't have a test suite available for reproduction. Would be nice to have, but it's unlikely that we're going to invest any time into this, given that we have a working solution. All this testing was done in the last 3-4 months, though. Feel free to work on such a suite if you'd like to advance this.
|
2025-04-01T06:40:10.580985
| 2015-08-18T06:46:44
|
101583888
|
{
"authors": [
"sedx"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10003",
"repo": "rails/turbolinks",
"url": "https://github.com/rails/turbolinks/issues/594"
}
|
gharchive/issue
|
Double request
Hi!
I have a strange problem: when I navigate by clicking links between some pages, Turbolinks processes a double request (FF, Safari, Chrome). It occurs only on two pages.
When I tried to debug it I found strange behavior: the processResponse() method returns a valid doc internally, but the doc variable inside the fetchReplacement() method was assigned 'undefined'.
When I remove //=require for all my scripts, this problem still exists.
When I remove //=require turbolinks, the problem is solved.
Please fix this problem. I have no clue why processResponse() returns 'undefined' when doc is valid inside this method.
Solved.
It was caused by these two pages having different templates with different assets.
When I add data-no-turbolink to the link, the problem is gone.
|
2025-04-01T06:40:10.607467
| 2024-02-22T01:14:33
|
2147983175
|
{
"authors": [
"DanielSinclair",
"KosmosKey",
"kingnight153"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10004",
"repo": "rainbow-me/rainbowkit",
"url": "https://github.com/rainbow-me/rainbowkit/issues/1791"
}
|
gharchive/issue
|
[bug] wallet_addEthereumChain or useSwitchChain on Rainbow Wallet not working
Is there an existing issue for this?
[X] I have searched the existing issues
RainbowKit Version
2.0.0
wagmi Version
2.5.6
Current Behavior
I already added a custom chain to getDefaultConfig.
When I used switchChain from wagmi's useSwitchChain with Rainbow wallet to switch to this chain, an error occurred like this:
When I used switchChain with other wallets like MetaMask, it worked, but Rainbow didn't.
When I tried using wallet_addEthereumChain to add this chain to Rainbow wallet, even though I approved the addition of the chain, the error occurred. I checked the Rainbow wallet networks and this custom chain is already added, but the error showed:
Expected Behavior
When I use switchChain from wagmi's useSwitchChain with Rainbow wallet to switch to a custom chain, it should add the custom chain to Rainbow wallet and switch to it.
If I use wallet_addEthereumChain, Rainbow wallet should add the chain and not show any error.
Steps To Reproduce
No response
Link to Minimal Reproducible Example (CodeSandbox, StackBlitz, etc.)
No response
Anything else?
No response
@kingnight153 Could you please tell us what chain you're switching to?
Also would be super helpful if you can show your RainbowKit and Wagmi configuration setup 🙏
Here is the chain info: https://chainlist.org/?search=carbon
Yeah, so that's an unsupported network in Rainbow wallet. We are working on improving custom network support, so hopefully this should be fixed soon. What you can do for now is add the network manually and try to switch to the chain.
@kingnight153 Thanks for reporting. Bumping this to our browser-extension repo to further investigate
|
2025-04-01T06:40:10.620552
| 2014-06-10T15:12:16
|
35391960
|
{
"authors": [
"rajgoel"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10006",
"repo": "rajgoel/reveal.js",
"url": "https://github.com/rajgoel/reveal.js/issues/2"
}
|
gharchive/issue
|
Canvas upon player control
The slideshow-recorder.js plugin puts a new canvas on top of the player control of reveal.js. The plugin has to be updated if reveal.js changes the way the player control is drawn.
Redesigned the plugin to use the default audio player and no auto-sliding (commit 8b5fa5dc12).
|
2025-04-01T06:40:10.739161
| 2016-09-25T06:57:08
|
179072212
|
{
"authors": [
"davidchambers",
"portons"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10008",
"repo": "ramda/ramda",
"url": "https://github.com/ramda/ramda/issues/1918"
}
|
gharchive/issue
|
without method doesn't use equals like it says in the documentation
I'm on version 0.21.
While trying to use the without function to remove exact strings from a string array, some items are being removed (mistakenly?) due to the use of the flip + contains methods.
Example:
R.without("ab", ["a","ab"])
// output: []
I would expect the output to be ["a"] because it's a completely different item (although it is contained in the "ab" string).
You can run this code here
You have a type error:
without('ab', ['a', 'ab']);
// ! TypeError: Invalid value
//
// without :: Array a -> Array a -> Array a
// ^^^^^^^
// 1
//
// 1) "ab" :: String
//
// The value at position 1 is not a member of ‘Array a’.
Correct usage:
without(['ab'], ['a', 'ab']);
// => ['a']
See #1912.
|
2025-04-01T06:40:10.743359
| 2015-10-06T14:02:51
|
110014370
|
{
"authors": [
"branneman",
"davidchambers",
"raine"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10009",
"repo": "ramda/ramda",
"url": "https://github.com/ramda/ramda/pull/1428"
}
|
gharchive/pull-request
|
#1417: Added dispatchables to docstrings
See #1417
Right now it is undocumented behaviour for some functions that they actually dispatch to a method. This PR adds a sentence to the docstrings that this dispatching is happening.
:deciduous_tree:
Can a native English speaker explain why not:
Dispatches to the takeWhile method of the second argument, if present
Can a native English speaker explain why not:
Dispatches to the takeWhile method of the second argument, if present
I like your version better, @raine! What do you think, @branneman?
Updated!
One of my commits snuck into your branch. Could you remove it?
Yeah, the squashing went wrong somehow. Gimme a minute.
Ugh. Think it's ok now. Is it?
Looks good! Merging.
|
2025-04-01T06:40:10.745114
| 2024-09-06T08:49:24
|
2509878655
|
{
"authors": [
"CrossEye",
"nhannt201",
"yurkimus"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10010",
"repo": "ramda/ramda",
"url": "https://github.com/ramda/ramda/pull/3487"
}
|
gharchive/pull-request
|
feat: add lowerFirst string
Converts the first character of a string to lowercase.
IMHO: It looks too specific to add this to the library.
IMHO: It looks too specific to add this to the library.
Agreed. I think this comment on another recent PR is relevant:
https://github.com/ramda/ramda/pull/3488#issuecomment-2335287734
|
2025-04-01T06:40:10.797083
| 2021-11-29T08:46:44
|
1065760620
|
{
"authors": [
"mudler"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10011",
"repo": "rancher-sandbox/cOS-toolkit",
"url": "https://github.com/rancher-sandbox/cOS-toolkit/pull/900"
}
|
gharchive/pull-request
|
Add packages
Adding packages while I tried to get the OdroidC2 working locally. Sadly I don't get eth0 after boot (although I can see u-boot trying to get an IP), so I cannot test upgrades, resets and the full cOS feature set. I'm suspecting the kernel, but it requires quite some time to experiment. This change adds firmware packages, firmware and dtbs to the example image.
Another approach would be to test with the old u-boot and kernel, but it would be suboptimal to release with those (for the record, here is a spec that builds the Odroid kernel: https://github.com/rancher-sandbox/cOS-toolkit/commit/c31ebbd1ac151883b49a74fbf90955c38841eb3f).
Also, this PR adds packages to the toolchain image in order to run the arm image script successfully. For reference, that's where I was experimenting with : https://github.com/mudler/cos-embedded-images/
This shouldn't affect CI, and OdroidC2 images are built only on master... merging!
|
2025-04-01T06:40:10.798239
| 2018-07-20T20:23:04
|
343236358
|
{
"authors": [
"alena1108",
"kinarashah"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10012",
"repo": "rancher/cattle",
"url": "https://github.com/rancher/cattle/pull/3226"
}
|
gharchive/pull-request
|
default service_event for ipsec to up if global flag is true
https://github.com/rancher/rancher/issues/14668
LGTM. TF is unrelated
|
2025-04-01T06:40:10.801721
| 2015-10-23T20:37:56
|
113095032
|
{
"authors": [
"alena1108",
"ibuildthecloud"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10013",
"repo": "rancher/cattle",
"url": "https://github.com/rancher/cattle/pull/984"
}
|
gharchive/pull-request
|
Bug fixes
Cleanup instance from planner on its removal
When an instance with missing dependencies gets removed by the deployment planner, we should remove it from the planner's list of instances. Otherwise it would participate in further deployments, and further down the line we would hit the timeout waiting for this instance to reach a valid state (Running/Stopped, etc.).
Don't schedule a service config.update on a service in the updating-active/activating state. The reconcile process invoked by an update/activate operation can result in some instances being stopped/destroyed, and the stop/destroy processes have config.upgrade hooks resulting in subsequent reconcile requests.
At the end of every activate/update process, we double-check that the reconcile has finished successfully, and if not, the process gets rescheduled.
These two fixes are supposed to fix the bugs related to service reconcile slowness, like:
https://github.com/rancher/rancher/issues/2044
Validate service selector on instance.restore
https://github.com/rancher/rancher/issues/2338
I feel like you need better PR titles. More uplifting like "Vast improvement in functional correctness"
@ibuildthecloud :) will improve the naming in the future PRs
|
2025-04-01T06:40:10.810532
| 2022-12-19T17:55:57
|
1503340465
|
{
"authors": [
"jameson-mcghee",
"rak-phillip"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10014",
"repo": "rancher/dashboard",
"url": "https://github.com/rancher/dashboard/issues/7762"
}
|
gharchive/issue
|
Unable to Deactivate multiple Cluster Drivers and Node Drivers
Setup
Rancher version: v2.7-head (5df9701)
Browser type & version: Firefox
Describe the bug
Nothing happens when attempting to deactivate Cluster Drivers or Node Drivers.
To Reproduce
The issue applies to both Cluster Drivers and Node Drivers
Navigate to Cluster Management => Drivers => Cluster/Node Drivers
Select all active drivers
Click Deactivate
Result
Nothing happens, no error in console.
Expected Result
Confirmation dialog is shown, positive selection results in Cluster/Node drivers becoming deactivated.
Additional context
This looks to be an issue with the confirmation dialog.
This only happens when attempting to delete multiple cluster/node drivers.
Holding ctrl while clicking delete to skip the confirmation dialog will perform the action as expected.
@rak-phillip, can you update this ticket to include both Cluster Drivers and Node Drivers so that they are both fixed and tested?
@jameson-mcghee I updated to call out both cluster & node drivers
During testing on v2.7-head (Commit ID: e54432e) I was able to verify that the confirmation prompt is now being generated when attempting to Deactivate multiple Cluster Drivers or Node Drivers, and users are able to successfully Deactivate multiple Cluster Drivers or Node Drivers. Therefore I am closing this ticket as Done.
v2.7-head (Commit ID: e54432e):
Cluster Drivers:
Node Drivers:
|
2025-04-01T06:40:10.829094
| 2022-10-12T09:09:01
|
1405852152
|
{
"authors": [
"aalves08",
"codecov-commenter"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10015",
"repo": "rancher/dashboard",
"url": "https://github.com/rancher/dashboard/pull/7168"
}
|
gharchive/pull-request
|
Remaining shell items for Elemental in 2.7.0
Adds the remaining shell changes, which are completely isolated from any reference to Elemental, in 2.7.0
Codecov Report
Base: 36.86% // Head: 36.87% // Increases project coverage by +0.00% :tada:
Coverage data is based on head (ef6a419) compared to base (316889a).
Patch has no changes to coverable lines.
Additional details and impacted files
@@ Coverage Diff @@
## master #7168 +/- ##
=======================================
Coverage 36.86% 36.87%
=======================================
Files 986 986
Lines 17633 17633
Branches 4541 4541
=======================================
+ Hits 6501 6502 +1
+ Misses 11132 11131 -1
| Flag | Coverage Δ |
|---|---|
| e2e | 47.41% <ø> (+<0.01%) :arrow_up: |
| merged | 36.87% <ø> (+<0.01%) :arrow_up: |
| unit | 5.21% <ø> (ø) |
Flags with carried forward coverage won't be shown. Click here to find out more.
| Impacted Files | Coverage Δ |
|---|---|
| shell/components/GrowlManager.vue | 0.00% <ø> (ø) |
| shell/components/Import.vue | 0.00% <ø> (ø) |
| shell/components/form/MatchExpressions.vue | 100.00% <ø> (ø) |
| shell/detail/provisioning.cattle.io.cluster.vue | 0.00% <ø> (ø) |
| ...dit/provisioning.cattle.io.cluster/MachinePool.vue | 0.00% <ø> (ø) |
| shell/plugins/steve/steve-class.js | 33.33% <0.00%> (-16.67%) :arrow_down: |
| shell/utils/socket.js | 48.52% <0.00%> (-12.75%) :arrow_down: |
| shell/store/growl.js | 23.33% <0.00%> (-3.34%) :arrow_down: |
| shell/plugins/steve/subscribe.js | 65.93% <0.00%> (-1.93%) :arrow_down: |
| shell/models/provisioning.cattle.io.cluster.js | 63.72% <0.00%> (-0.32%) :arrow_down: |

... and 9 more
Help us with your feedback. Take ten seconds to tell us how you rate us. Have a feature suggestion? Share it here.
:umbrella: View full report at Codecov.
:loudspeaker: Do you have feedback about the report comment? Let us know in this issue.
|
2025-04-01T06:40:10.859365
| 2018-03-15T04:57:17
|
305415677
|
{
"authors": [
"loganhz",
"vincent99",
"walkafwalka",
"zionwu"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10016",
"repo": "rancher/rancher",
"url": "https://github.com/rancher/rancher/issues/12116"
}
|
gharchive/issue
|
Missing fields when create a storageClass
rancher/server:master 03/14
allowVolumeExpansion and parameters of storageClass are not populated in rancher API.
UI sent the allowVolumeExpansion and parameters to backend for creating a storageClass.
But rancher server doesn't populate them.
The name is the ID so it's not going to be updateable. Disable the field in edit
parameters is present in latest master.
I can't see the allowVolumeExpansion field in the UI, and @loganhz found that the code related to allowVolumeExpansion is commented out. @vincent99 do we still need this field?
As far as I remember volume expansion is alpha and off by default and there's no obvious way to tell if it's on to show the option, so I commented it out.
Version - 2.0 master 3/29
Verified fixed
Are there plans to add this option?
|
2025-04-01T06:40:10.861817
| 2018-04-03T22:35:30
|
311022076
|
{
"authors": [
"adingilloRancher",
"tfiduccia"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10017",
"repo": "rancher/rancher",
"url": "https://github.com/rancher/rancher/issues/12478"
}
|
gharchive/issue
|
Unable to add Ingress when removing second rule
Rancher versions: 2.0
Steps to Reproduce:
Add Ingress
Fill out all fields for first rule (Request Host, Path, Target, Port)
Add second rule.
Remove second rule.
Click Save.
Other information:
For some reason this only occurs if there is any information in the path field. If that value is empty, I am unable to save new rule after performing the listed steps.
Results:
Unable to save new rule. Error message states port is still required.
Version - 2.0 master 4/11
Verified fixed
|
2025-04-01T06:40:10.871280
| 2018-09-03T03:29:05
|
356347307
|
{
"authors": [
"PeterZhai00",
"loganhz",
"sebastiansirch"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10018",
"repo": "rancher/rancher",
"url": "https://github.com/rancher/rancher/issues/15374"
}
|
gharchive/issue
|
The port that I exposed was also accessible at the beginning
Rancher versions:
rancher/rancher:v2.0.8
rancher/rancher-agent:v2.0.8
Docker version:
Client:
Version: 17.03.2-ce
API version: 1.27
Go version: go1.7.5
Git commit: f5ec1e2
Built: Tue Jun 27 02:21:36 2017
OS/Arch: linux/amd64
Server:
Version: 17.03.2-ce
API version: 1.27 (minimum version 1.12)
Go version: go1.7.5
Git commit: f5ec1e2
Built: Tue Jun 27 02:21:36 2017
OS/Arch: linux/amd64
Experimental: false
Operating system and kernel: (cat /etc/os-release, uname -r preferred)
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
3.10.0-514.el7.x86_64
Type/provider of hosts: (VirtualBox/Bare-metal/AWS/GCE/DO)
VirtualBox
Setup details: (single node rancher vs. HA rancher, internal DB vs. external DB)
single node rancher
Environment Template: (Cattle/Kubernetes/Swarm/Mesos)
Kubernetes
Steps to Reproduce:
The port that I exposed was also accessible at the beginning. All other namespaces are normal.
Might be the same as https://github.com/rancher/rancher/issues/15372 ?
When you can access your port after you have deleted the network policy in your namespace, then it is the same issue, I guess.
Then how can we solve this problem? Our situation is the same.
Can you try it in 2.1.1, please?
If you can reproduce it, can you provide the reproduce steps and the logs, please?
|
2025-04-01T06:40:10.880348
| 2018-12-11T17:26:12
|
389871912
|
{
"authors": [
"jama707",
"janeczku"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10019",
"repo": "rancher/rancher",
"url": "https://github.com/rancher/rancher/issues/17049"
}
|
gharchive/issue
|
Ensure Rancher system pods are not killed by kubelet eviction manager
What kind of request is this (question/bug/enhancement/feature request):
Enhancement
Steps to reproduce (least amount of steps as possible):
Starve the free space in the root filesystem on a node that runs system-level pods (e.g. ingress-controller, pipeline jenkins or minio, prometheus, etc) by allocating disk space until kubelet's hard eviction threshold is crossed (by default nodefs.available<10%).
E.g. Create a large file with $ dd bs=1M if=/dev/zero of=/bigfile.tmp count=...
Result:
System-level pods are evicted/killed even before user pods (e.g. nginx-ingress-controller, Pipeline's Minio or Jenkins) resulting in service disruption (ingress) or data loss (pipeline build logs).
Warning Evicted Container nginx-ingress-controller was using 252Ki, which exceeds its request of 0.
The node was low on resource: ephemeral-storage.
Killing container with id docker://nginx-ingress-controller:Need to kill Pod nginx-ingress-controller-r7q7s.156f4e7f716145fe
Reason: Kubelet might evict a pod from its node when the node’s ephemeral storage is exhausted. Since these system-level pods have no resource requests/limits specified for ephemeral storage, they are targeted first for eviction.
Expected Result:
System-level pods should never be killed/evicted by kubelet when resources are starved on a node or at least they should be selected for eviction with lower priority than user pods.
The kubelet ranks Pods for eviction first by whether or not their usage of the starved resource exceeds requests, then by Priority, and then by the consumption of the starved compute resource relative to the Pods’ scheduling requests.
(...)
Guaranteed pods and Burstable pods whose usage is beneath requests are evicted last. Guaranteed Pods are guaranteed only when requests and limits are specified for all the containers and they are equal. Such pods are guaranteed to never be evicted because of another Pod’s resource consumption.
source
In general, it is strongly recommended that DaemonSet not create BestEffort Pods to avoid being identified as a candidate Pod for eviction. Instead DaemonSet should ideally launch Guaranteed Pods.
source
Solutions that should be explored to address this:
Configure the pods of system-level services such that they are assigned a QoS class of Guaranteed and are thus "guaranteed to never be evicted because of another Pod’s resource consumption." This would imho require specifying the following requests/limits for the pods (a minimal sketch follows this list):
spec.containers[].resources.limits.memory
spec.containers[].resources.requests.memory
spec.containers[].resources.limits.ephemeral-storage
spec.containers[].resources.requests.ephemeral-storage
Mark these pods as critical so they are protected from eviction: https://kubernetes.io/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/
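For illustration only, a minimal sketch of the requests == limits idea using the Python kubernetes client; the container name, image tag, and resource values below are placeholders rather than the actual chart values:
from kubernetes import client

# Hypothetical values. When every container sets requests equal to limits for
# cpu and memory, the pod gets the 'Guaranteed' QoS class and is evicted last;
# the ephemeral-storage request additionally protects it in the disk-pressure
# eviction ranking described above.
resources = client.V1ResourceRequirements(
    requests={"cpu": "100m", "memory": "200Mi", "ephemeral-storage": "512Mi"},
    limits={"cpu": "100m", "memory": "200Mi", "ephemeral-storage": "512Mi"},
)
container = client.V1Container(
    name="nginx-ingress-controller",            # placeholder name
    image="example/nginx-ingress:placeholder",  # placeholder image
    resources=resources,
)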
Environment information
Rancher version 2.1.3
Kubernetes version v1.11.3
Is there any update on this? I think I'm having the same issue.
I'm getting such error for the cattle-system pods
The node was low on resource: ephemeral-storage. Container nginx-ingress-controller was using 56Ki, which exceeds its request of 0.
The node was low on resource: ephemeral-storage. Container agent was using 60Ki, which exceeds its request of 0.
|
2025-04-01T06:40:10.883524
| 2021-01-13T06:37:01
|
784833308
|
{
"authors": [
"Eric-TAS",
"ansilh"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10020",
"repo": "rancher/rancher",
"url": "https://github.com/rancher/rancher/issues/30814"
}
|
gharchive/issue
|
[RFE] vSphere node driver - Need option to set 'numCoresPerSocket' in node template
What kind of request is this:
Feature request
Description
As of now, we can set the number of CPUs while defining the node template
But in some cases, the Cores Per Socket needs to be tuned (details are in the below blog post)
https://blogs.vmware.com/performance/2017/03/virtual-machine-vcpu-and-vnuma-rightsizing-rules-of-thumb.html
The go library provided by VMware has the option to change this parameter.
govc vm.change -vm master-01 -e cpuid.coresPerSocket=1
gz#14317
Hello,
I also have the same need.
It is strange that few people are interested in this subject, which would allow better use of virtualization.
Does someone have a tip?
|
2025-04-01T06:40:10.891680
| 2021-10-17T09:09:34
|
1028264490
|
{
"authors": [
"all4innov"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10021",
"repo": "rancher/rancher",
"url": "https://github.com/rancher/rancher/issues/35151"
}
|
gharchive/issue
|
Import cluster command - Bug when more than first apply
Rancher Server Setup
Rancher version: 2.6.1
Installation option (Docker install/Helm Chart): Helm Chart
If Helm Chart, Kubernetes Cluster and version (RKE1, RKE2, k3s, EKS, etc): EKS 1.20
Information about the Cluster
Kubernetes version: EKS version 1.21
Cluster Type (Local/Downstream): downstream
If downstream, what type of cluster? (Custom/Imported or specify provider for Hosted/Infrastructure Provider): imported
NB It's a simple Generic cluster import and not managed type EKS import
Describe the bug
The cluster import command cannot be applied more than once, so it is not idempotent as a Kubernetes deployment should be.
To Reproduce
Step 1) EKS Cluster creation OK
Step 2) Rancher Cluster creation using Generic Cluster Type OK
Step 3) Execute the import command : OK
kubectl apply -f https://rancher-instance.com/v3/import/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxj.yaml
and the imported cluster is well configured in Rancher
Step 4) Re-execute the import command: NOK
kubectl apply -f https://rancher-instance.com/v3/import/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxj.yaml
clusterrole.rbac.authorization.k8s.io/proxy-clusterrole-kubeapiserver unchanged
clusterrolebinding.rbac.authorization.k8s.io/proxy-role-binding-kubernetes-master unchanged
namespace/cattle-system unchanged
serviceaccount/cattle unchanged
clusterrolebinding.rbac.authorization.k8s.io/cattle-admin-binding unchanged
secret/cattle-credentials-2735ae9 unchanged
clusterrole.rbac.authorization.k8s.io/cattle-admin unchanged
deployment.apps/cattle-cluster-agent configured
service/cattle-cluster-agent unchanged
If the import command is applied a second time, the imported cluster stays active, but the cluster can no longer be explored within Rancher:
As seen in the previous command output, the cattle-cluster-agent is reconfigured, and perhaps this is what is causing the problem. Either way, the import command is not idempotent.
Result
Impossible to explore the cluster anymore
Expected Result
The cluster should stay accessible via Rancher
Still valid
Still valid
|
2025-04-01T06:40:10.895929
| 2024-11-27T20:27:14
|
2699716183
|
{
"authors": [
"deepakpunia-suse",
"jbiers"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10022",
"repo": "rancher/rancher",
"url": "https://github.com/rancher/rancher/issues/48199"
}
|
gharchive/issue
|
[BUG] rancher-monitoring ignoreNamespaceSelectors has wrong default value
The value of prometheus.prometheusSpec.ignoreNamespaceSelectors was set to "true" by accident during an old rebase. This diverges from upstream's default behavior seemingly for no special reason so it must be changed back to "false".
Additional context
SURE-9374
The bug related to the accidental setting of prometheus.prometheusSpec.ignoreNamespaceSelectors to true has been verified as resolved. After conducting tests on an RKE2 cluster, I can confirm that the value has been correctly reverted to false, aligning with upstream defaults.
Details:
Rancher Version: v2.10-head
Cluster Version: v1.31.3-rc1+rke2r1
Monitoring Chart Version: 105.1.1-rc.1+up61.3.2
The attached screenshot verifies the corrected configuration. The cluster and monitoring behavior are now consistent with the expected standards, ensuring no deviations from the upstream specifications.
|
2025-04-01T06:40:10.898899
| 2016-08-09T22:39:59
|
170288189
|
{
"authors": [
"cloudnautique",
"deniseschannon"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10023",
"repo": "rancher/rancher",
"url": "https://github.com/rancher/rancher/issues/5677"
}
|
gharchive/issue
|
Enhancement Support CPU Quota and CPU period from compose/UI
Cattle doesn't support cpu_quota and cpu_period from compose. We need to add these to our launch configs.
It looks like libcompose is already able to support CPUQuota, but it doesn't make it to the host.
Closing in favor of the PRs that will allow us to support the missing cattle fields:
Cattle Changes: https://github.com/rancher/rancher/issues/4708
Rancher-compose: https://github.com/rancher/rancher/issues/6280
Exporting yml: https://github.com/rancher/rancher/issues/6281
UI: https://github.com/rancher/rancher/issues/6282
|
2025-04-01T06:40:10.900187
| 2017-06-05T21:13:19
|
233712608
|
{
"authors": [
"LLParse",
"loganhz"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10024",
"repo": "rancher/rancher",
"url": "https://github.com/rancher/rancher/issues/8968"
}
|
gharchive/issue
|
Port etcd-operator to Rancher
Port CoreOS etcd-operator to Rancher orchestration platform. This will enable fully autonomous etcd clusters that self-heal, even in disaster situations such as majority member failure.
With the release of Rancher 2.0, development on v1.6 is only limited to critical bug fixes and security patches.
|
2025-04-01T06:40:10.903884
| 2017-08-23T18:59:19
|
252383077
|
{
"authors": [
"tfiduccia"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10025",
"repo": "rancher/rancher",
"url": "https://github.com/rancher/rancher/issues/9744"
}
|
gharchive/issue
|
EC2 tag UI issues
Rancher versions: v1.6.8-rc5
Results:
[x] Rename Tags (EC2) to EC2 Tags
[x] Add more space between Tags section and IAM Profile sections
[x] Make the Tags section full width like Labels is below it
[x] Should not allow comma in the EC2 Tag sections (no commas in Tag or Value sections)
Version - v1.6.8-rc6
Verified fixed
|
2025-04-01T06:40:10.905835
| 2019-05-24T07:30:33
|
448020176
|
{
"authors": [
"loganhz",
"thxCode"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10026",
"repo": "rancher/rancher",
"url": "https://github.com/rancher/rancher/pull/20441"
}
|
gharchive/pull-request
|
Support self-signed server CA chain on Windows
Problem:
Previously, only a CA chain containing a single CA was handled
Solution:
Split multiple CAs from $SSL_CERT_DIR\serverca, then import one by one
Issue:
https://github.com/rancher/rancher/issues/20436
LGTM
|
2025-04-01T06:40:10.908811
| 2021-09-22T20:18:42
|
1004738095
|
{
"authors": [
"cmurphy"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10027",
"repo": "rancher/rancher",
"url": "https://github.com/rancher/rancher/pull/34861"
}
|
gharchive/pull-request
|
Make impersonation service account more resilient
Currently, the impersonation path checks for the existence of a matching
impersonation clusterrole on the downstream cluster and then assumes
there is an impersonation service account to go with it. If for some
reason there was an interruption in between creating the clusterrole and
creating the service account, the service account won't exist and
further requests will be perpetually stuck. This change ensures that
won't happen by checking for a Not Found error when retrieving the
service account, and continuing to create the needed resources in such
an event. This also fixes an unclear error message to clarify that the
missing resource is the service account, not the secret, and to
distinguish it from another similar error message.
https://github.com/rancher/rancher/issues/34824
The function checks the role first, if the role does not exist then this whole section is skipped https://github.com/rancher/rancher/pull/34861/files#diff-fe913e39db39d26629ffb0d1e83c60678ce415c81af03283120afeec3233f011R57-R63
|
2025-04-01T06:40:10.910371
| 2018-09-13T22:30:56
|
360090180
|
{
"authors": [
"tfiduccia"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10028",
"repo": "rancher/rio",
"url": "https://github.com/rancher/rio/issues/50"
}
|
gharchive/issue
|
large numbers in yaml output are showing up in scientific notation
Steps:
rio run -n tstk1/tservice1 --memory-limit 150m nginx
rio inspect --format yaml tstk1/tservice1
Results: Notice that all large numbers are in scientific notation (for instance, memoryLimitBytes is 1.572864e+08) instead of regular numbers.
no longer valid
|
2025-04-01T06:40:10.929926
| 2015-12-29T20:33:01
|
124264760
|
{
"authors": [
"ibuildthecloud",
"vincent99",
"westlywright"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10029",
"repo": "rancher/ui",
"url": "https://github.com/rancher/ui/pull/419"
}
|
gharchive/pull-request
|
Fixing various style bugs
Added a new container-flex and col-flex to replace the weird multi-stat container on hosts, vms, and container details pages. Flex works much better as it gives us a full-height column without a lot of hackery.
@westlywright Can you do the same on the service view? Or whatever view this is http://localhost:8000/apps/1e16/services/1s121/containers
@ibuildthecloud taken care of
I'm not sure I see how flexbox is an improvement...
vs old:
|
2025-04-01T06:40:10.931841
| 2015-03-25T22:47:47
|
64390045
|
{
"authors": [
"sangeethah",
"tfiduccia",
"vincent99"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10030",
"repo": "rancherio/rancher",
"url": "https://github.com/rancherio/rancher/issues/307"
}
|
gharchive/issue
|
[UI][Private Registry] When adding credential , there is no email format validation done for the email field.
Version - V013.0
There should be some basic email format validation done when entering an email during credential creation in the "Add Registry" and "Add Credential" pages.
The field is called "email" but the registry can do whatever it wants with it and Docker does not require to actually be an email.
It's marked required in the API so you have to enter something in it now, but I don't want to get into does-it-look-like-a-valid-email when that isn't even a requirement from Docker.
v0.17.0
Verified fixed
|
2025-04-01T06:40:10.935273
| 2015-04-16T18:40:07
|
68985005
|
{
"authors": [
"sangeethah",
"tfiduccia"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10031",
"repo": "rancherio/rancher",
"url": "https://github.com/rancherio/rancher/issues/527"
}
|
gharchive/issue
|
Deleting and restoring a service's container loses association with service
v0.16.0
Steps:
Create a Environment
Add a service
Go to Container page
Delete and restore service container
Go back to Service page
Results:
Service container no longer shows up. If I delete and restore from service page, it stays. If I delete from service page, but restore from container, it's removed.
Expected:
Should stay until purged.
Even when we attempt to delete and restore from the service page, the container is removed from the service. Notice that when you refresh, the service does not have this container associated with it anymore.
Removing the UI tag from this bug, since it reproduces from both the container view and the service view.
v0.18.1
Verified fixed
|
2025-04-01T06:40:10.952524
| 2017-08-22T13:15:44
|
251953080
|
{
"authors": [
"peerbanks",
"randallreedjr"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10032",
"repo": "randallreedjr/roth_ira",
"url": "https://github.com/randallreedjr/roth_ira/pull/17"
}
|
gharchive/pull-request
|
Update Dependencies via Dependable
Update Dependencies via Dependable
Dependency updates
Gem
Previous Version
New Version
Source
Diff
byebug
9.0.6
9.1.0
deivid-rodriguez/byebug
view
Hi randalldrjr/roth I need to contact to discuss something important for
you about your platform
please contact to me at<EMAIL_ADDRESS>
--
PeerBanks Corporation
382 NE 191ST ST, Suite 88441, Miami, FL 33179
|
2025-04-01T06:40:10.982099
| 2017-08-26T15:59:23
|
253103594
|
{
"authors": [
"dweiss",
"msokolov"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10033",
"repo": "randomizedtesting/randomizedtesting",
"url": "https://github.com/randomizedtesting/randomizedtesting/issues/252"
}
|
gharchive/issue
|
Run tests in multiple threads
I've been using a multithreaded test runner I wrote for a while, and I was trying to see if there was any way to use this with, or maybe contribute it to this library. My runner runs each test case in its own thread; I get very nice speedups with it, and it can also be used to uncover some race conditions by allowing test cases to access shared state in a test class. I was poking around in here and found this comment in RandomizedRunner.runSuite:
// NOTE: this effectively means we can't run concurrent randomized runners.
final UncaughtExceptionHandler previous = Thread.getDefaultUncaughtExceptionHandler();
Does this mean it doesn't (and can't) run tests in multiple threads? I'm not sure where the restriction takes effect: is it global, per-suite, per-class or per-case?
Hi Michael. The reason we don't do multi-threaded parallel test execution is very simple: the JVM doesn't allow full sandboxing of tests and our primary goal is full reproducibility from a single seed. Think of the globals that can be conflicting -- thread management (as you quoted), security policies, system properties, class loading order (and static initializers). For very simple tests that don't interact with the environment parallel execution is great, but anything beyond that is a problem.
A half-baked solution to this is implemented in the ANT runner for randomized testing -- it splits the set of test suites into multiple JVMs (load-balancing tests execution between them). This requires multiple JVMs to start (overhead), but if you have many test suites, the concurrency is quite all right.
In short: it is possible to run tests in parallel within a single JVM, but there are deep-rooted potential more problems with it, so it won't be implemented as part of this project.
I understand the concern about repeatability. It's certainly true that you cannot guarantee a failure will be reproducible in the face of multi-threading in the general case. However I think that's also true of tests which themselves spawn multiple threads, so in some sense this is an impossibly high standard.
In addition to the perf gain for simpler tests, I've found the multithreaded test runner to be useful in cases where I explicitly want to test some supposedly thread-safe class by running tasks that call its methods from multiple threads. In such a case I'm not sure how one could ever guarantee reproducibility though?
Anyway, I still think this could be useful as a selectable option, say by annotating the test class in cases where the test writer explicitly want to enable it.
Tests that fork multiple threads are non-reproducible if they don't have a state. But still -- if something fails you at least get a chance to repeat the same test execution (with the same randomization seed), even if you re-run it multiple times just to see whether the failure is intermittent or permanent for a given seed. Randomized runner tries to ensure every thread forked from the same test has the same initial randomization seed, so it's not really dumb -- it does its best.
bq. I've found the multithreaded test runner to be useful in cases where I explicitly want to test some supposedly thread-safe class by running tasks that call its methods from multiple threads.
I think a better design here would be to create a single test, then fork a number (randomized!) of threads and pound your business logic from those threads (but still within a single test!).
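As a language-agnostic illustration of that pattern (sketched here in Python rather than JUnit, with a hypothetical shared_counter standing in for the business logic under test):
import random
import threading

def test_concurrent_access(shared_counter, seed=42):
    rng = random.Random(seed)               # one seed -> reproducible thread count and workload
    n_threads = rng.randint(2, 16)
    errors = []

    def worker():
        try:
            for _ in range(1000):
                shared_counter.increment()   # hypothetical call into the code under test
        except Exception as exc:
            errors.append(exc)               # surface worker failures to the main thread

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    assert not errors
    assert shared_counter.value == n_threads * 1000
The whole run is still driven by a single randomization seed, so a failing thread count and workload can be replayed.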
Believe me I've been down the route of trying to come up with a clean multithreaded separation of tests within a single JVM, but it quickly hits a showstopper -- be it system property-dependent initialization of something (which many third-party libraries use), a race condition on a singleton somewhere, configured once... you name it.
bq. Anyway, I still think this could be useful as a selectable option, say by annotating the test class in cases where the test writer explicitly want to enable it.
Useful -- maybe. But misleading -- for sure. I bet people would abuse this and then be stuck at how to reproduce a particular problem. If you're looking for difficult-to-reproduce issues even in the current setup, look at Apache Solr tests -- they are notorious for failing on one machine, but not on another...
The better you can isolate a single test run, the easier it is to debug. If you're unhappy about it -- please go ahead and fork the project or create your own runner! I bet somebody wrote something like this already (scan JUnit's mailing list).
Yeah, OK. I do have my own runner already. It's not very complicated to write. I'm working in a project that is borrowing LuceneTestCase so trying to find a way to adapt to that. Perhaps we'll just use this runner in some classes and LuceneTestCase in others. Anyway, thanks for the thoughtful response.
|
2025-04-01T06:40:11.037197
| 2016-12-01T17:05:02
|
192905200
|
{
"authors": [
"DigitalSketch",
"raoulvdberge"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10034",
"repo": "raoulvdberge/refinedstorage",
"url": "https://github.com/raoulvdberge/refinedstorage/issues/698"
}
|
gharchive/issue
|
Search for tooltip in the grid (maybe even NBT)
I was trying to find a location to put a feature request, but haven't been able to find a place. Hope this is OK to put here.
I think it would be nice to be able to search NBT data, so when you're looking for say, an enchanted book, you could search for the enchant you want, and it will find them in the system :)
Can't you just do #efficiency to find for example efficiency enchanted tools?
Thanks!
|
2025-04-01T06:40:11.040740
| 2016-12-22T04:38:47
|
197093057
|
{
"authors": [
"Haddadmj",
"way2muchnoise"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10035",
"repo": "raoulvdberge/refinedstorage",
"url": "https://github.com/raoulvdberge/refinedstorage/issues/776"
}
|
gharchive/issue
|
Autocrafting .. craft with oredict
Issue description:
When I request to craft 64 sticky pistons... I have the ore dictionary enabled for the pattern.
What happens:
It crafts only one sticky piston and continues with the crafting without giving the result.
What you expected to happen:
I expect to have 64 sticky pistons in my RS system... but what happened is I have only one.
Steps to reproduce:
Make the pattern for it with oredict
Request it from the RS System
Voilà ... only one sticky piston is crafted
...
Version (Make sure you are on the latest version before reporting):
Minecraft: 1.10.2
Forge:Latest
Refined Storage:Latest
Does this issue occur on a server? [yes/no]
If a (crash)log is relevant for this issue, link it here:
[pastebin/gist/etc link here]
related to #766 , which has been fixed in dev.
|
2025-04-01T06:40:11.050764
| 2023-06-07T13:00:11
|
1745848292
|
{
"authors": [
"reJELIN",
"sashitt2"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10036",
"repo": "raphael-group/constrained-Dollo",
"url": "https://github.com/raphael-group/constrained-Dollo/issues/1"
}
|
gharchive/issue
|
Multiple question / MissionBio Tapestri Data
Hello,
Thank you for this tool !
I'm really interested, but I'm struggling to understand how you obtain the variant read counts matrix. My guess is that you paired the variant VAFs with the total read counts of their amplicons? If so, won't it be biased by non-uniformity of the read coverage across the amplicon? If it has been computed with another method, would it be possible to share it?
You also seem to recommend adding the clustering (with or without the mutation matrix?). What is the mutation matrix in Tapestri? Are you referring to the genotype matrix (where, for each cell/barcode, 0 is wildtype, 1 means one allele is alternate, 2 means both alleles are alternate, and 3 is a missing genotype)?
Also, I'm not sure I understand what the command argument -k is and what it is doing; "maximum number of losses for an SNV" is a bit vague.
Also, would it be possible to provide a conda yaml environment or even a Singularity/Docker image? It would greatly simplify the use of your tool!
I'm aware it is a lot of questions, so thank you in advance for your time and kindness
Hello,
Thank you for your interest in using this tool and sorry for the late response.
Variant read count matrix -- yes, you are correct. This matrix contains the number of variant reads for each mutation in each cell. You can generate this matrix by multiplying the VAF with the total read counts (reference reads + variant reads) for each mutation (a small sketch is given after this list).
Mutation matrix -- I have defined the mutation matrix in the paper (I updated the README to contain a link to the paper). Mutation matrix has entry of 0 if the mutation is abset, 1 if it is present and -1 if there is no information about the mutation in that cell.
-k argument -- Yes, this is the number of losses for an SNV in the phylogeny. More information can be found in the paper I have linked in the README (https://www.biorxiv.org/content/10.1101/2<IP_ADDRESS>2408v1.abstract)
conda yaml -- Thank you for the suggestion. I will work on uploading a yaml file to use this tool in the next few days. The package requirements for ConDoR are very simple. We only need numpy, pandas, networkx and gurobipy.
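A minimal sketch of the first point, assuming the VAF and total-depth matrices have been exported as cells x mutations CSV files (the file names and the percent-vs-fraction handling are assumptions, not part of ConDoR):
import pandas as pd

# Hypothetical exports; both matrices are cells x mutations.
vaf = pd.read_csv("vaf.csv", index_col=0)            # variant allele frequencies
total = pd.read_csv("total_reads.csv", index_col=0)  # reference + variant read depth

scale = 100.0 if vaf.values.max() > 1.0 else 1.0     # handle percent vs fraction VAFs
variant_reads = (vaf / scale * total).round().astype(int)
variant_reads.to_csv("variant_read_counts.csv")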
|
2025-04-01T06:40:11.051989
| 2024-01-07T23:59:34
|
2069357777
|
{
"authors": [
"dipta007",
"raphaelheinz"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10037",
"repo": "raphaelheinz/LeetHub-3.0",
"url": "https://github.com/raphaelheinz/LeetHub-3.0/pull/17"
}
|
gharchive/pull-request
|
Loader v2
The loader was not showing on V2.
Also, I changed the behavior to show it from the start of submission, rather than later, to give real-time feedback.
Thanks @dipta007 for your contribution! Great work! I will build a new version.
|
2025-04-01T06:40:11.061598
| 2022-05-06T06:35:18
|
1227475410
|
{
"authors": [
"Rauert",
"brunchboy"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10038",
"repo": "rapi-doc/RapiDoc",
"url": "https://github.com/rapi-doc/RapiDoc/issues/747"
}
|
gharchive/issue
|
encoding.explode:true not working for schema properties
Hi Team,
I have several api endpoints that allow for a parameter to be specified multiple times. See example of one below.
{
"openapi": "3.0.1",
"info": {
"title": "TEST",
"version": "1.0.0"
},
"servers": [{
"url": "http://localhost:8080/api/",
"description": "TEST"
}],
"paths": {
"/deleteImage": {
"post": {
"tags": ["Images"],
"summary": "Delete Image",
"description": "The delete image service deletes the specified image. Multiple imageName parameters can be specified to delete multiple images.",
"operationId": "deleteImage",
"requestBody": {
"content": {
"application/x-www-form-urlencoded": {
"schema": {
"$ref": "#/components/schemas/DeleteImageRequest"
},
"encoding": {
"imageName": {
"explode": true
}
}
},
"multipart/form-data": {
"schema": {
"$ref": "#/components/schemas/DeleteImageRequest"
},
"encoding": {
"imageName": {
"explode": true
}
}
}
}
},
"responses": {
"200": {
"description": "Request succeeded"
}
}
}
}
},
"components": {
"schemas": {
"DeleteImageRequest": {
"required": ["accessKey", "imageName"],
"type": "object",
"properties": {
"accessKey": {
"type": "string",
"description": "Your unique access key that you can find on the Account page in the Cloud Console."
},
"imageName": {
"minItems": 1,
"type": "array",
"items": {
"type": "string",
"description": "The name of the image. This parameter can be specified multiple times to delete multiple files.",
"example": "/MyImageFolder/MyImage.png"
}
}
},
"description": "Delete Image Request Model"
}
}
}
}
In RapiDoc the explode:true property seems to be ignored and the imageName parameter is interpreted as a single parameter with comma-delimited values and an erroneous leading delimiter.
Multipart:
------WebKitFormBoundaryXXVgV2Xnct1AW3tT
Content-Disposition: form-data; name="accessKey"
XXX
------WebKitFormBoundaryXXVgV2Xnct1AW3tT
Content-Disposition: form-data; name="imageName"
,abc.png,def.jpg
------WebKitFormBoundaryXXVgV2Xnct1AW3tT--
Form Data:
accessKey=XXX&imageName=,abc.png,def.jpg
The example api above works correctly in Swagger UI though.
I also note that Parameters with explode:true seem to work fine in RapiDoc.
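For comparison, this is the exploded form encoding the reporter expects; a quick sketch using Python's requests library with placeholder values (requests repeats the key once per list element):
import requests

# Hypothetical endpoint and values; with explode:true each array element
# should become its own imageName field rather than a comma-joined string.
resp = requests.post(
    "http://localhost:8080/api/deleteImage",
    data={
        "accessKey": "XXX",
        "imageName": ["/MyImageFolder/a.png", "/MyImageFolder/b.png"],
    },
)
# Body sent on the wire:
#   accessKey=XXX&imageName=%2FMyImageFolder%2Fa.png&imageName=%2FMyImageFolder%2Fb.png
print(resp.status_code)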
We have run into this problem as well. And the way I read the OpenAPI documentation it is supposed to explode arrays by default, without even requiring an explode:true setting:
Form fields can contain primitives values, arrays and objects. By default, arrays are serialized as array_name=value1&array_name=value2 and objects as prop1=value1&prop2=value2, but you can use other serialization strategies as defined by the OpenAPI 3.0 Specification.
But we are happy to supply the setting, we just need it to be supported.
|
2025-04-01T06:40:11.063485
| 2016-08-23T15:53:15
|
172737452
|
{
"authors": [
"athompson-r7",
"dgreene-r7"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10039",
"repo": "rapid7/elasticsearch-drain",
"url": "https://github.com/rapid7/elasticsearch-drain/issues/20"
}
|
gharchive/issue
|
bytes_stored is broken when --nodes are passed
When you pass --nodes, the search for bytes_stored is broken.
/Users/athompson/code/personal/elasticsearch-drain/lib/elasticsearch/drain/cli.rb:80:in `block in remove_nodes': undefined method `bytes_stored' for nil:NilClass (NoMethodError)
from /Users/athompson/code/personal/elasticsearch-drain/lib/elasticsearch/drain/cli.rb:78:in `each'
from /Users/athompson/code/personal/elasticsearch-drain/lib/elasticsearch/drain/cli.rb:78:in `remove_nodes'
from /Users/athompson/code/personal/elasticsearch-drain/lib/elasticsearch/drain/cli.rb:33:in `asg'
from /Users/athompson/.rbenv/versions/2.2.4/lib/ruby/gems/2.2.0/gems/thor-0.19.1/lib/thor/command.rb:27:in `run'
from /Users/athompson/.rbenv/versions/2.2.4/lib/ruby/gems/2.2.0/gems/thor-0.19.1/lib/thor/invocation.rb:126:in `invoke_command'
from /Users/athompson/.rbenv/versions/2.2.4/lib/ruby/gems/2.2.0/gems/thor-0.19.1/lib/thor.rb:359:in `dispatch'
from /Users/athompson/.rbenv/versions/2.2.4/lib/ruby/gems/2.2.0/gems/thor-0.19.1/lib/thor/base.rb:440:in `start'
from bin/drain:3:in `<main>'
This feature was introduced in #18
This is fixed by #23.
|
2025-04-01T06:40:11.176463
| 2023-11-13T20:41:42
|
1991449293
|
{
"authors": [
"pentschev"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10040",
"repo": "rapidsai/ucxx",
"url": "https://github.com/rapidsai/ucxx/pull/124"
}
|
gharchive/pull-request
|
Fix skbuild arguments in build.sh
The previous method was incorrect and did not work as expected; for example, specifying -g would not result in a Cython debug build.
To be entirely honest, I'm not always sure about those build tools either; this time I borrowed from cuDF. In any case, I've tested this to be working currently, so I'm going to go ahead and merge it as is. Thanks @wence- !
/merge
|
2025-04-01T06:40:11.178227
| 2015-09-19T15:33:28
|
107339635
|
{
"authors": [
"blochbergermax",
"raptorswing"
],
"license": "bsd-2-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10041",
"repo": "raptorswing/Qt-Socks-Server",
"url": "https://github.com/raptorswing/Qt-Socks-Server/pull/11"
}
|
gharchive/pull-request
|
Fix parsing of IPv4 address in SOCKS4A
According to the protocol a domain is used instead of an IPv4 address if the address is 0.0.0.x with x non-zero.
Since (0x00000000 & 0x000000ff) == 0x00000000 the IP address <IP_ADDRESS> was considered to be a domain as well, so we need to check if not only the first 3 octets are zero but if the last octet is non-zero as well.
I'm not sure if any client sets <IP_ADDRESS> as IP address. I didn't see a real life issue with this, but nonetheless let's try to be true to the specification.
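Sketched in Python for clarity, the corrected check amounts to the following (standalone illustration, not the patch itself):
def is_socks4a_domain_marker(addr_bytes: bytes) -> bool:
    """SOCKS4A: a DSTIP of 0.0.0.x with x != 0 means 'a domain name follows'."""
    assert len(addr_bytes) == 4
    first_three_zero = addr_bytes[0] == addr_bytes[1] == addr_bytes[2] == 0
    last_octet_nonzero = addr_bytes[3] != 0
    return first_three_zero and last_octet_nonzero

assert not is_socks4a_domain_marker(bytes([0, 0, 0, 0]))  # literal 0.0.0.0, not a domain marker
assert is_socks4a_domain_marker(bytes([0, 0, 0, 5]))      # hostname follows the user-id field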
Good find - thanks.
|
2025-04-01T06:40:11.184288
| 2020-12-12T17:05:23
|
764058063
|
{
"authors": [
"scottolsonjr"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10042",
"repo": "rarelysimple/RarelySimple.AvatarScriptLink",
"url": "https://github.com/rarelysimple/RarelySimple.AvatarScriptLink/issues/4"
}
|
gharchive/issue
|
zInvokeClient+208 when no MI Row selected and Registry set to send FormObject
Describe the bug
When the registry setting, 'Include FormObject for sections without a current row selected' is set to 'Y' multiple iteration FormObjects are sent in the ScriptLink payload with no CurrentRow. After sending the ScriptLink request, myAvatar shows the error 'zInvokeClient+208' when processing the response. - Reported by Compass Health.
To Reproduce
Steps to reproduce the behavior:
Set the Registry Setting 'Include FormObject for sections without a current row selected' to 'Y'
Open a myAvatar form with existing multiple iteration content and a ScriptLink call configured.
Trigger ScriptLink call without selecting a row in the multiple iteration table.
Error 'zInvokeClient+208' appears to user and shows in logging.
Expected behavior
Either no error should occur, or a message should be provided by the ScriptLink API. The 'zInvokeClient+208' error should not occur.
Screenshots
Example Code
In this example, no modifications are made to the incoming OptionObject2015, so no error message should be displayed.
public class ParentGuardian : ICommand
{
private OptionObject2015 _optionObject2015;
private string _parameter;
public ParentGuardian(OptionObject2015 optionObject2015, string parameter)
{
_optionObject2015 = optionObject2015;
_parameter = parameter;
}
public OptionObject2015 Execute()
{
return _optionObject2015.ToReturnOptionObject();
}
}
This issue is caused by a missing null check when cloning the FormObject. This will be corrected in the next release.
|
2025-04-01T06:40:11.248112
| 2022-05-04T09:29:48
|
1225129444
|
{
"authors": [
"lurch",
"smurfix"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10043",
"repo": "raspberrypi/pico-sdk",
"url": "https://github.com/raspberrypi/pico-sdk/issues/804"
}
|
gharchive/issue
|
Feature request: pioasm: multiple programs
Let's say I want to use the PIO for something UART-like. Now obviously this requires a send program and a receive program. I want to pack both into one PIO device because they're both small enough.
My problem is that there seems to be no clean way to do that: pioasm doesn't tell me how long the first program is, thus I have no way to set the .offset for the second program correctly.
Have a look at some of the examples in https://github.com/raspberrypi/pico-examples/tree/master/pio/
https://github.com/raspberrypi/pico-examples/tree/master/pio/ir_nec/nec_transmit_library is an example that uses multiple PIO programs (and there might be others if you dig around? :man_shrugging: )
And I've not tried it myself, but you might want to look at https://github.com/harrywalsh/pico-hw_and_pio-uart-gridge/tree/HW_and_pio_uarts
|
2025-04-01T06:40:11.252032
| 2021-02-01T18:40:04
|
798611072
|
{
"authors": [
"cleverca22",
"kilograham",
"lurch"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10044",
"repo": "raspberrypi/pico-sdk",
"url": "https://github.com/raspberrypi/pico-sdk/pull/70"
}
|
gharchive/pull-request
|
enforce that resulting binaries dont have arm opcodes, fixes #50
the original fault is that the arm-none-gcc on my host was built with multi-lib disabled
as a result, it could generate thumb binaries at compile time, but the crtbegin.o and the newlib libc.a had been pre-built in arm32 mode only
pico-sdk would silently link the arm32 variants, and then the code would just fault out at runtime, and only SWD could reveal why
this pr will cause the build to fail if such arm32 opcodes wind up in the binary, forcing you to fix the tooling before you can flash the pico
if we're going to do anything here, we should check in elf2uf2 (which should be able to tell the difference... it already checks a lot of things)...
@kilograham should i just throw in some c code and have cmake compile it the same way it does the other host utils?
ah, elf2uf2 does sound like a good place to add the check
yup add something here https://github.com/raspberrypi/pico-sdk/blob/master/tools/elf2uf2/main.cpp#L92
@kilograham is that a new git repo?, or do you just mean a new PR against the elf2uf2 file?
I suspect the latter. (as you can see above, elf2uf2 is maintained in this repo)
yes in this repo, but this PR seems a bit obsolete
|
2025-04-01T06:40:11.270209
| 2024-03-07T17:29:03
|
2174378585
|
{
"authors": [
"openoms",
"rootzoll"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10046",
"repo": "raspiblitz/raspiblitz",
"url": "https://github.com/raspiblitz/raspiblitz/issues/4455"
}
|
gharchive/issue
|
CLN update to v24.02
Testing the RECKLESS CLN UPDATE option on Raspiblitz v0.11.0 rc3 + the backup plugin patch in #4446
Got a warning from poetry, but otherwise looking good:
Cloning into 'lightning'...
remote: Enumerating objects: 139424, done.
remote: Counting objects: 100% (18553/18553), done.
remote: Compressing objects: 100% (1446/1446), done.
remote: Total 139424 (delta 17579), reused 17165 (delta 17101), pack-reused 120871
Receiving objects: 100% (139424/139424), 97.20 MiB | 1.23 MiB/s, done.
Resolving deltas: 100% (91450/91450), done.
Updating files: 100% (27334/27334), done.
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
Warning: The current project could not be installed: No file/folder found for package cln-meta-project
If you do not want to install the current project use --no-root.
If you want to use Poetry only for dependency management but not for packaging, you can set the operating mode to "non-package" in your pyproject.toml file.
In a future version of Poetry this warning will become an error!
2024-03-07 17:17:20,872 [DEBUG] Primitive[path=Getinfo.id, required=True, type=pubkey]
2024-03-07 17:17:20,873 [DEBUG] Primitive[path=Getinfo.alias, required=True, type=string]
Should be ok for v0.11.0
Then could ship the next raspiblitz release with the CLN v24.02 from start.
At the moment in dev CLN version is ...
https://github.com/raspiblitz/raspiblitz/blob/4687506d12f958bbba03b05bcb54b0a4ec432596/home.admin/config.scripts/cl.install.sh#L5
.. closing issue.
|
2025-04-01T06:40:11.277045
| 2022-05-29T05:16:02
|
1251790854
|
{
"authors": [
"kwikslvr",
"zynos"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10047",
"repo": "raspibolt/raspibolt",
"url": "https://github.com/raspibolt/raspibolt/issues/1032"
}
|
gharchive/issue
|
Bitcoin Core install issues
Describe the issue
I'm at the installation step of Bitcoin client, have downloaded the latest Bitcoin Core binary along with the hash and signatures. Reference checksum was ok, and signature check said there were good signatures, but with warnings. I went ahead and extracted and installed Bitcoin Core anyway, but it fails the version check. Not sure if I should try to proceed with Steps 2a and following.
Location in guide
https://raspibolt.org/guide/bitcoin/bitcoin-client.html
Screenshots
Here's are some of the signature warnings:
Here's the error when I tried to check the version after installing:
I tried to change to the bitcoin-23.0/bin/ directory where I can clearly see the file "bitcoind" listed (as shown in the screenshot) but it came back with the same error "no such file or directory"
Environment
Hardware platform: Raspberry Pi 4
Operating system: Raspberry Pi OS 64bit Lite
Version: Debian GNU/Linux 11 (bullseye)
Any suggestions for how to resolve the issues I reported? Can I safely ignore the warnings on the signatures and assume the bitcoin core client I downloaded is safe to use? And why doesn't the version check work? If it's a simple noob error, please advise - I don't know linux well enough to troubleshoot it myself.
I installed btc core 23.0 yesterday on my raspi and it worked.
Why did you use the arm-linux version and not the suggested version from the guide?
$ tar -xvf bitcoin-23.0-aarch64-linux-gnu.tar.gz
I dont know if that is a problem but maybe you can try that version instead
@zynos I was going by the instruction in the guide to download the latest version in case there had been an update, as it says here:
But looking at it again, I see the latest version is in fact 23.0, so I'll retry it like you did. Thanks for the suggestion!
@zynos I went ahead and took your suggestion and just downloaded the version listed in the guide, and when I checked the signatures, got similar warnings as last time.
But I went ahead and installed the bitcoin core client and this time it worked as expected, so will close this issue. Still not sure if those warnings should be heeded though - makes this noob a bit uncertain. If they're not important, I feel like the guide should mention that they can be safely ignored.
|
2025-04-01T06:40:11.287928
| 2017-03-03T16:25:14
|
211735146
|
{
"authors": [
"cklmercer",
"ratiw"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10048",
"repo": "ratiw/vuetable-2-tutorial-bootstrap",
"url": "https://github.com/ratiw/vuetable-2-tutorial-bootstrap/pull/5"
}
|
gharchive/pull-request
|
Update vue-events version
I fixed a bug in vue-events that was introduced in Vue 2.2.x, just updating this project's dependency.
https://github.com/cklmercer/vue-events/issues/8
@cklmercer Thanks, really appreciated. :)
|
2025-04-01T06:40:11.289653
| 2018-07-21T23:55:16
|
343364338
|
{
"authors": [
"RomanDavlyatshin",
"ratiw"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10049",
"repo": "ratiw/vuetable-2",
"url": "https://github.com/ratiw/vuetable-2/issues/501"
}
|
gharchive/issue
|
Documentation links to lessons are outdated
That page and I believe all other pages referencing lessons contain outdated links, as lessons moved from blob/master to wiki.
Please, fix.
Sorry for that. Those pages will soon be replaced when v2.0 is released (currently in beta). The document for the v2.0-beta is available here.
|
2025-04-01T06:40:11.321595
| 2024-02-28T15:54:38
|
2159285471
|
{
"authors": [
"rawhat"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10050",
"repo": "rawhat/glisten",
"url": "https://github.com/rawhat/glisten/issues/15"
}
|
gharchive/issue
|
sendfile is not supported on SSL
Right now I just FFI out to file:sendfile for both TCP and SSL sockets. However, sendfile does not work with SSL.
I'll need to scope this per-transport, and probably have some default implementation for SSL. Currently, ThousandIsland just does a file:pread and ssl:send... is it worth trying to do something a little less memory intensive? Not sure how likely this situation is. Maybe just add the basic one and see what people think.
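For reference, the chunked fallback amounts to something like this Python sketch (Gleam/Erlang specifics aside): read the file in bounded chunks and push each one through the TLS socket, since sendfile(2) bypasses the TLS layer entirely:
import ssl

CHUNK = 64 * 1024  # bounded buffer instead of loading the whole file

def send_file_over_tls(tls_sock: ssl.SSLSocket, path: str) -> None:
    with open(path, "rb") as f:
        while True:
            chunk = f.read(CHUNK)
            if not chunk:
                break
            tls_sock.sendall(chunk)  # encrypted in userspace; no zero-copy possible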
Whoops, this should be on mist.
|
2025-04-01T06:40:11.325273
| 2024-12-14T21:34:17
|
2740153122
|
{
"authors": [
"MortalHappiness",
"kevin85421"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10051",
"repo": "ray-project/kuberay",
"url": "https://github.com/ray-project/kuberay/issues/2654"
}
|
gharchive/issue
|
[CI] TestAutoscalingRayService is flaky
Search before asking
[X] I searched the issues and found no similar issues.
KubeRay Component
ci
What happened + What you expected to happen
https://buildkite.com/ray-project/ray-ecosystem-ci-kuberay-ci/builds/5672#0193c1f9-ac0c-4a19-88eb-aee9aa146f14/3769
Reproduction script
https://buildkite.com/ray-project/ray-ecosystem-ci-kuberay-ci/builds/5672#0193c1f9-ac0c-4a19-88eb-aee9aa146f14/3769
Anything else
No response
Are you willing to submit a PR?
[ ] Yes I am willing to submit a PR!
Let's temporarily not address this issue, as I ran it many times on Buildkite and my local machine but couldn't reproduce the error. This is the PR where I tried running it 10 times: https://github.com/ray-project/kuberay/pull/2682
|
2025-04-01T06:40:11.327451
| 2022-08-22T00:42:45
|
1345648155
|
{
"authors": [
"DmitriGekhtman"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10052",
"repo": "ray-project/kuberay",
"url": "https://github.com/ray-project/kuberay/issues/500"
}
|
gharchive/issue
|
[Feature] Add CI tests for Helm charts
Search before asking
[X] I had searched in the issues and found no similar feature requirement.
Description
Test Helm configuration in the CI.
The Helm charts are very easy to break.
We could take a look at testing strategies used by similar projects; no need to innovate.
This is actually a duplicate of https://github.com/ray-project/kuberay/issues/184.
|
2025-04-01T06:40:11.334058
| 2022-07-27T22:27:17
|
1320184953
|
{
"authors": [
"DmitriGekhtman",
"Jeffwan"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10053",
"repo": "ray-project/kuberay",
"url": "https://github.com/ray-project/kuberay/pull/423"
}
|
gharchive/pull-request
|
[Autoscaler] Match autoscaler image to Ray head image for Ray >= 2.0.0
Why are these changes needed?
This PR updates the logic that selects a default autoscaler image.
For Ray versions at least 2.0.0, use the same image for the autoscaler as for the Ray container.
This eliminates the possibility of autoscaler/Ray incompatibility and reduces docker pull time.
For earlier Ray versions, use rayproject/ray:2.0.0 to guarantee up-to-date autoscaler functionality.
As of the Ray 2.0.0 branch cut earlier today, an image tagged rayproject/ray:2.0.0 exists on Dockerhub.
Until the official Ray release in two weeks, this image is unofficial and its actual contents are a moving target -- but I think we can live with the inconsistency in the short term. (I'm open to other opinions on this choice.)
Related issue number
Closes https://github.com/ray-project/kuberay/issues/360
Checks
[ ] I've made sure the tests are passing.
Testing Strategy
[ ] Unit tests
[ ] Manual tests
[ ] This PR is not tested :(
so rayproject/ray:2.0.0 will be overridden once the official image is out?
/cc @akanso to take a look
so rayproject/ray:2.0.0 will be overridden once the official image is out?
rayproject/ray:2.0.0 is updated each time we push to the Ray 2.0.0 release branch
The "official image is out" after the last commit is made to the release branch and we announce the release.
Maybe not the best way of doing things but that's the way it is at the moment.
#424 is merged to help improve test stability. You can rebase the change to see if nightly version pass
Let's merge this one and it's time to cut rc.0 release. If other reviewers have further feedback, feel free to leave it here.
|
2025-04-01T06:40:11.486017
| 2021-01-06T01:07:20
|
779822281
|
{
"authors": [
"amogkam",
"krfricke"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10054",
"repo": "ray-project/xgboost_ray",
"url": "https://github.com/ray-project/xgboost_ray/issues/41"
}
|
gharchive/issue
|
xgboost_ray fails on Python 3.8 on macOS
In Python 3.8 on macOS, multiprocessing uses a spawn strategy instead of a fork strategy for process creation. This change is no longer compatible with our subclassed RabitTracker that uses a process internally rather than a thread and causes xgboost_ray training to fail.
Works fine using the xgboost built in RabitTracker
Python 3.8.6 (default, Nov 20 2020, 18:29:40)
[Clang 12.0.0 (clang-1<IP_ADDRESS>)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import ray
>>> from ray.services import get_node_ip_address
>>> import xgboost
>>> from xgboost import RabitTracker
>>> node_ip = get_node_ip_address()
>>> rabit_tracker = RabitTracker(hostIP=node_ip, nslave=2)
>>> rabit_tracker.start(nslave=2)
Fails using xgboost_ray internal _RabitTracker
Python 3.8.6 (default, Nov 20 2020, 18:29:40)
[Clang 12.0.0 (clang-1<IP_ADDRESS>)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import ray
>>> from ray.services import get_node_ip_address
>>> from xgboost_ray.main import _RabitTracker
>>> node_ip = get_node_ip_address()
>>> rabit_tracker = _RabitTracker(hostIP=node_ip, nslave=2)
>>> rabit_tracker.start(nslave=2)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/amog/dev/xgboost_ray/xgboost_ray/main.py", line 106, in start
self.thread.start()
File<EMAIL_ADDRESS>line 121, in start
self._popen = self._Popen(self)
File<EMAIL_ADDRESS>line 224, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File<EMAIL_ADDRESS>line 284, in _Popen
return Popen(process_obj)
File<EMAIL_ADDRESS>line 32, in __init__
super().__init__(process_obj)
File<EMAIL_ADDRESS>line 19, in __init__
self._launch(process_obj)
File<EMAIL_ADDRESS>line 47, in _launch
reduction.dump(process_obj, fp)
File<EMAIL_ADDRESS>line 60, in dump
ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object '_RabitTracker.start.<locals>.run'
Once we fix this we should include 3.8 and macOS tests in the CI and make another release.
cc @krfricke
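A standalone sketch of the underlying Python limitation (illustrative only, not xgboost_ray code): under the spawn start method the Process target must be picklable, and a function defined locally inside a method is not:
import multiprocessing
import threading

class Tracker:
    def start(self):
        def run():                        # local function, defined inside start()
            print("tracker running")
        # Under the 'fork' start method this works because the child inherits
        # the parent's memory. Under 'spawn' (the default on macOS since
        # Python 3.8) the target is pickled first, and local functions can't be:
        #   AttributeError: Can't pickle local object 'Tracker.start.<locals>.run'
        multiprocessing.Process(target=run).start()

# A thread needs no pickling at all, which is why the thread-based RabitTracker
# that ships with xgboost keeps working on Python 3.8 / macOS:
threading.Thread(target=lambda: print("ok")).start()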
Current master with ray 1.2.0 works fine for me:
> python
Python 3.8.6 (default, Apr 14 2021, 14:07:57)
[Clang 12.0.0 (clang-1<IP_ADDRESS>)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import ray
>>> from ray.services import get_node_ip_address
>>> from xgboost_ray.main import _RabitTracker
>>> node_ip = get_node_ip_address()
>>> rabit_tracker = _RabitTracker(hostIP=node_ip, nslave=2)
>>> rabit_tracker.start(nslave=2)
>>> del rabit_tracker
>>>
> python ../../xgboost_ray/examples/simple.py
File descriptor limit 2560 is too low for production servers and may result in connection errors. At least 8192 is recommended. --- Fix with 'ulimit -n 8192'
2021-04-14 14:11:37,731 INFO services.py:1264 -- View the Ray dashboard at http://<IP_ADDRESS>:8265
2021-04-14 14:11:39,656 INFO main.py:817 -- [RayXGBoost] Created 4 new actors (4 total actors). Waiting until actors are ready for training.
2021-04-14 14:11:40,582 INFO main.py:860 -- [RayXGBoost] Starting XGBoost training.
(pid=89787) [14:11:40] task [xgboost.ray]:4598953872 got new rank 3
(pid=89777) [14:11:40] task [xgboost.ray]:4556349584 got new rank 2
(pid=89790) [14:11:40] task [xgboost.ray]:4554084240 got new rank 0
(pid=89782) [14:11:40] task [xgboost.ray]:4535186576 got new rank 1
2021-04-14 14:11:41,440 INFO main.py:1304 -- [RayXGBoost] Finished XGBoost training on training data with total N=143 in 1.84 seconds (0.85 pure XGBoost training time).
Final validation error: 0.0210
> python --version
Python 3.8.6
> uname -a
Darwin Kais-MacBook-Pro.local 19.6.0 Darwin Kernel Version 19.6.0: Tue Jan 12 22:13:05 PST 2021; root:xnu-6153.141.16~1/RELEASE_X86_64 x86_64
@amogkam could you check if it works for you, too, and close the issue if it does?
Ah seems this was fixed in #42 and we only left this issue open. Closing this for now then.
|
2025-04-01T06:40:11.492739
| 2024-02-22T16:09:38
|
2149419903
|
{
"authors": [
"alwinsamson",
"sebdanielsson"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10055",
"repo": "raycast/extensions",
"url": "https://github.com/raycast/extensions/issues/10886"
}
|
gharchive/issue
|
[URL Unshortener] Unable to Expand URL from https://is.gd
Extension
https://www.raycast.com/SebDanielsson/url-unshortener
Description
URL Unshortener shows an error when expanding a URL from https://is.gd.
System: Raycast 1.68.1 on macOS 14.3.1
Example URL: https://is.gd/putPEo
Steps To Reproduce
Tried different URLs from https://is.gd; same result.
Current Behaviour
No response
Expected Behaviour
No response
I just tried unshortening https://is.gd/putPEo with Raycast 1.68.1 running on macOS 14.3.1 and didn't get any error messages. Do you still have this issue?
It works. I'm also wondering if it's possible to expand/clean a URL like the one below; I use www.expandurl.net to get the canonical URL.
https://redirect.viglink.com/?key=0ab71bd42c2ca312a536dac167978a13&u=https%3A%2F%2Fwww.amazon.com%2FApple-MU8F2AM-A-Pencil-Generation%2Fdp%2FB07K1WWBJK%2F%3Ftag%3Dtoysb-20&type=ap&loc=https%3A%2F%2F9to5toys.com%2F2024%2F02%2F17%2Fapple-pencil-2-drops-to-to-79-at-amazon-its-second-best-price-reg-129%2F&ref=https%3A%2F%2F9to5toys.com%2F
Sorry for the late reply!
I'm not sure how to implement that functionality, as sometimes the parameters are used to redirect the user to the final page. Preferably, I would like to avoid depending on an online service. PRs are welcome, of course 😉
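To sketch one possible approach (in Python for brevity; the extension itself is TypeScript, and none of the names below come from its code): for tracking redirects like the viglink link above, the final target is often embedded verbatim in a query parameter, so it can be recovered locally without calling an online service. URLs without an embedded target could fall back to following the HTTP redirects directly.
from typing import Optional
from urllib.parse import urlparse, parse_qs

def extract_embedded_target(url: str) -> Optional[str]:
    # parse_qs percent-decodes values, so "https%3A%2F%2F..." comes back as "https://..."
    query = parse_qs(urlparse(url).query)
    for values in query.values():
        for value in values:
            if value.startswith(("http://", "https://")):
                return value
    return None

# extract_embedded_target("https://redirect.viglink.com/?key=...&u=https%3A%2F%2Fwww.amazon.com%2F...")
# would return "https://www.amazon.com/..."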
|
2025-04-01T06:40:11.504130
| 2022-03-27T18:05:35
|
1182613661
|
{
"authors": [
"AuHau",
"tonka3000"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:10056",
"repo": "raycast/extensions",
"url": "https://github.com/raycast/extensions/issues/1225"
}
|
gharchive/issue
|
[Extension Bug] Home Assistant extension can't connect to web-socket
Extension – Home Assistant
Author: @tonka3000
Description
Extension can't connect to Home Assistant Websocket API.
I have tried manually connecting to the WebSocket API using the Websocket Extension and managed to connect successfully, whereas this extension does not.
I wanted to see logs of what is going on, but because of https://github.com/raycast/extensions/issues/1224 I did not manage to.
Maybe it is incompatible with the latest Home Assistant version? I have noticed that you are not running the latest home-assistant-js-websocket package.
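For anyone else debugging this, a hedged sketch of how the manual check can be done (in Python with the third-party websockets package; the URL and token below are placeholders, not values from this report): Home Assistant's WebSocket API first sends an auth_required message and then expects an auth message carrying a long-lived access token.
import asyncio
import json
import websockets  # third-party; pip install websockets

async def check_ha_websocket(url: str, token: str) -> None:
    async with websockets.connect(url) as ws:
        first = json.loads(await ws.recv())
        print(first["type"])  # expected: "auth_required"
        await ws.send(json.dumps({"type": "auth", "access_token": token}))
        reply = json.loads(await ws.recv())
        print(reply["type"])  # "auth_ok" on success, "auth_invalid" otherwise

asyncio.run(check_ha_websocket("wss://myhainstance.org/api/websocket", "<long-lived access token>"))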
Raycast version
Version: 1.31.0
Home Assistant info
version: core-2022.3.6
installation_type: Home Assistant Core
dev: false
hassio: false
docker: false
user: homeassistant
virtualenv: true
python_version: 3.9.2
os_name: Linux
os_version: 5.4.182
arch: armv7l
timezone: Europe/Prague
Home Assistant Cloud
logged_in: false
can_reach_cert_server: ok
can_reach_cloud_auth: ok
can_reach_cloud: ok
Lovelace
dashboards: 1
resources: 0
mode: auto-gen
Hey @AuHau ,
I also have the latest version of HA (2022.3.7) and it works for me.
Do you have a trailing slash in your URL, e.g. https://myhainstance.org/? If yes, remove it (https://myhainstance.org) and try again.
If this does not work, maybe you need to upgrade to 2022.3.7.
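A hedged sketch of that normalisation idea (again in Python; the extension is TypeScript and this is not its actual code): strip stray whitespace and a trailing slash from the configured base URL before deriving the WebSocket endpoint, so both spellings of the URL end up at the same address.
def websocket_url(base_url: str) -> str:
    base = base_url.strip().rstrip("/")  # tolerate "https://host/" and stray spaces
    if base.startswith("https://"):
        return "wss://" + base[len("https://"):] + "/api/websocket"
    if base.startswith("http://"):
        return "ws://" + base[len("http://"):] + "/api/websocket"
    raise ValueError(f"Unsupported scheme in {base_url!r}")

# websocket_url("https://myhainstance.org/") and websocket_url("https://myhainstance.org")
# both give "wss://myhainstance.org/api/websocket"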
Hmmm, nothing trailing.
I have no idea what has changed, but today it works as expected 😅 Most probably it is a problem with my Home Assistant deployment, so I'm closing this. Thanks for the ideas!
@AuHau Great to hear that it works now. Let me know if you're missing anything from Home Assistant in the extension.
|