id (string, length 4–10) | text (string, length 4–2.14M) | source (string, 2 classes) | created (timestamp[s], 2001-05-16 21:05:09 to 2025-01-01 03:38:30) | added (string date, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | metadata (dict)
|---|---|---|---|---|---|
1042338344 | [android] Use attribute subscriptions in sensors screen
Problem
Android CHIPTool's sensors screen periodically polls a monitored sensor instead of using the attribute subscription.
Change overview
Use the attribute subscription to refresh the sensor data.
Add a new method for cancelling all active subscriptions for a given device. Previously, there was only a method for cancelling a specific subscription, but controller applications currently have no way of knowing the subscription ID.
Update the IP address of a device when the sensors screen is entered or when a device ID is changed.
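The per-device cancellation described above could be sketched as a small registry keyed by device ID, so controller applications never need to know individual subscription IDs. This is an illustrative sketch only, not the actual CHIP SDK API; all names here are hypothetical.

```python
# Hypothetical sketch of per-device subscription tracking. Names are
# illustrative and not the actual CHIP SDK API.
from collections import defaultdict


class SubscriptionRegistry:
    def __init__(self):
        # device id -> set of active subscription ids
        self._subs = defaultdict(set)

    def register(self, device_id, subscription_id):
        self._subs[device_id].add(subscription_id)

    def cancel(self, device_id, subscription_id):
        # Existing style of API: caller must know the subscription id.
        self._subs[device_id].discard(subscription_id)

    def cancel_all(self, device_id):
        # New style of API: cancel every active subscription for a device
        # without the caller knowing the individual subscription ids.
        return sorted(self._subs.pop(device_id, set()))
```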
Testing
Tested manually using OTBR and a multi-sensor nRF Connect sample.
@bzbarsky-apple PTAL
I filed https://github.com/project-chip/connectedhomeip/issues/11431 on the ReadClient bits I think we should still do.
| gharchive/pull-request | 2021-11-02T13:21:48 | 2025-04-01T06:45:28.180401 | {
"authors": [
"Damian-Nordic",
"bzbarsky-apple"
],
"repo": "project-chip/connectedhomeip",
"url": "https://github.com/project-chip/connectedhomeip/pull/11318",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1124684864 | [OTA] Issue#13839 - Adding logging to OTA Provider
Problem
Logging is missing in ApplyUpdateRequest and NotifyUpdateApplied from the OTA Provider app.
Change overview
Add logging to ApplyUpdateRequest and NotifyUpdateApplied.
Testing
Manually tested OTA Provider app and ensured additional logs in ApplyUpdateRequest and NotifyUpdateApplied are observed in the terminal.
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. You have signed the CLA already but the status is still pending? Let us recheck it.
| gharchive/pull-request | 2022-02-04T22:47:14 | 2025-04-01T06:45:28.183547 | {
"authors": [
"CLAassistant",
"isiu-apple"
],
"repo": "project-chip/connectedhomeip",
"url": "https://github.com/project-chip/connectedhomeip/pull/14810",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1176652298 | [ESP32] Set-regulatory-config command is working on ESP32
Problem
What is being fixed?
The set-regulatory-config command now works on ESP32.
#15240 fixed this TODO.
Change overview
Set-regulatory-config command is working on ESP32.
Testing
Manually tested set-regulatory-config command.
Thanks!
| gharchive/pull-request | 2022-03-22T11:33:13 | 2025-04-01T06:45:28.185762 | {
"authors": [
"cecille",
"jadhavrohit924"
],
"repo": "project-chip/connectedhomeip",
"url": "https://github.com/project-chip/connectedhomeip/pull/16520",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1565337454 | [Android] Remove extra sub-interface layer from DeviceAttestationDelegate interface
Currently, there are two sub-interfaces (DeviceAttestationCompletionCallback and DeviceAttestationFailureCallback) within the DeviceAttestationDelegate interface, and only one of them can be implemented.
The prototypes of DeviceAttestationCompletionCallback and DeviceAttestationFailureCallback are similar; DeviceAttestationCompletionCallback is always called if implemented, while DeviceAttestationFailureCallback is only triggered when attestation fails.
This design is really confusing. Since DeviceAttestationCompletionCallback already contains an error code parameter that can be used to indicate attestation failure, we can simplify the DeviceAttestationDelegate interface by removing the sub-interface layer.
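A minimal sketch of the simplification being proposed: a single completion callback whose error code distinguishes success from attestation failure. Class and method names here are illustrative, not the actual Java API in the SDK.

```python
# Illustrative sketch, not the actual Java DeviceAttestationDelegate API:
# one completion callback whose error code parameter indicates failure,
# replacing the separate completion/failure sub-interfaces.
class DeviceAttestationDelegate:
    def on_attestation_completed(self, device_id, error_code):
        raise NotImplementedError


class LoggingDelegate(DeviceAttestationDelegate):
    def __init__(self):
        self.events = []

    def on_attestation_completed(self, device_id, error_code):
        # error_code == 0 means success; anything else is a failure, so no
        # second "failure" callback interface is needed.
        status = "ok" if error_code == 0 else f"failed ({error_code})"
        self.events.append((device_id, status))
```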
Hi Yufeng,
DeviceAttestationCompletionCallback is to align with the iOS implementation, if there is a better implementation that can be provided, I personally do not recommend deleting it
iOS implementation:
https://github.com/project-chip/connectedhomeip/blob/master/src/darwin/Framework/CHIP/MTRDeviceAttestationDelegate.h
https://github.com/project-chip/connectedhomeip/blob/master/src/darwin/Framework/CHIP/MTRDeviceAttestationDelegateBridge.mm
The detailed Java API implementation does not have to align with iOS due to language-specific characteristics; the current design, which wraps two interfaces within another interface, is redundant and did not pass review from our internal Java team.
The detailed Java API implementation does not have to align with iOS due to language-specific characteristics; per the feedback from our internal Java team, the current design, which wraps two interfaces within another interface, is redundant and confusing.
| gharchive/pull-request | 2023-02-01T02:56:06 | 2025-04-01T06:45:28.192178 | {
"authors": [
"panliming-tuya",
"yufengwangca"
],
"repo": "project-chip/connectedhomeip",
"url": "https://github.com/project-chip/connectedhomeip/pull/24771",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
708447818 | Use C++11 when compiling Objective C++ files
Problem
We're not ending up in C++11 mode when compiling .mm files, so the Mac build breaks.
Summary of Changes
Use the right build flags.
fixes https://github.com/project-chip/connectedhomeip/issues/2812
@hawk248 @saurabhst @jelderton @BroderickCarlin ?
| gharchive/pull-request | 2020-09-24T20:17:51 | 2025-04-01T06:45:28.194091 | {
"authors": [
"bzbarsky-apple",
"rwalker-apple"
],
"repo": "project-chip/connectedhomeip",
"url": "https://github.com/project-chip/connectedhomeip/pull/2813",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2094909289 | Add Electrical power measurement to energy management app
This PR adds the EPM (Electrical Power Measurement) cluster into examples/energy-management-app.
This PR is based off #31518 and should not be reviewed until that has merged.
It is also based off #31622 which adds test event triggers to fake energy readings.
[x] Add Delegate into EnergyManagementApp
[x] Add helper code to demonstrate how to use the cluster
[x] Add TE trigger to send power readings
Misc changes:
[x] Added protection to ensure SafeAttributePersistenceProvider in EVSE cluster was initialised (relates to #31591 seen on ESP32)
Fixes #31095
https://github.com/project-chip/connectedhomeip/issues/31095
[x] Added TC_EEM_2.1, 2.2, 2.3, 2.4, 2.5 - all passing
[x] Added TC_EPM_2.1, 2.2 - all passing
More like a generic comment: why do we have the feature attribute as part of the API? Why not use the usual attribute storage for it? I found this to be confusing since this is the first time I saw such approach and I don't see benefit in adding a variation to how this is mostly done (unless there is one which I don't know of).
A lot of clusters now have the feature attribute as part of the API (instead of using ZAP) - e.g. Modes and other clusters seem to be going down this route, where less dependency on ZAP is needed. In this case the only attribute that you do edit in ZAP seems to be the cluster revision.
It does seem that there is a pattern emerging compared to a couple of years ago where most clusters used ember framework, that clusters in 1.2 and 1.3 are starting to move away from that. I've just followed the more recent pattern of delegates and separate cluster server code here.
| gharchive/pull-request | 2024-01-22T22:48:58 | 2025-04-01T06:45:28.199102 | {
"authors": [
"fessehaeve",
"jamesharrow"
],
"repo": "project-chip/connectedhomeip",
"url": "https://github.com/project-chip/connectedhomeip/pull/31616",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2726646372 | Draft: Adds UPDATE to the RayCluster Mutating Webhook so apply can be used
Issue link
What changes have been made
Adds UPDATE to the list of operations the RayCluster Mutating Webhook operates on.
This allows oc apply -f or oc replace to work and gives the ability to update existing running RayClusters without deleting them.
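The change described above amounts to adding "UPDATE" to the webhook rule's `operations` list. The sketch below uses the standard `admissionregistration.k8s.io/v1` schema, but the webhook, service, and resource names are illustrative, not the operator's actual configuration.

```yaml
# Sketch only: names are illustrative, but `rules[].operations` is the
# standard MutatingWebhookConfiguration field being extended.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: raycluster-mutating-webhook      # illustrative name
webhooks:
  - name: raycluster.example.com         # illustrative name
    rules:
      - apiGroups: ["ray.io"]
        apiVersions: ["v1"]
        resources: ["rayclusters"]
        operations: ["CREATE", "UPDATE"] # UPDATE added so apply/replace are mutated too
    admissionReviewVersions: ["v1"]
    sideEffects: None
    clientConfig:
      service:
        name: webhook-service            # illustrative
        namespace: webhook-ns
        path: /mutate
```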
Verification steps
Redeploy the mutating webhook and create, update and delete rayclusters and re-create them.
Checks
[ ] I've made sure the tests are passing.
Testing Strategy
[ ] Unit tests
[x] Manual tests
[ ] Testing is not required for this change
Just a note - it may be good to convert the PR to draft itself - that will avoid accidental merge.
| gharchive/pull-request | 2024-12-09T10:37:34 | 2025-04-01T06:45:28.202987 | {
"authors": [
"akram",
"sutaakar"
],
"repo": "project-codeflare/codeflare-operator",
"url": "https://github.com/project-codeflare/codeflare-operator/pull/638",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1941968221 | Uncaught SyntaxError: Unexpected identifier 'name' custom.navy
Describe the bug
Whenever I try to run any main.js with a .navy file, I get this error:
Uncaught SyntaxError: Unexpected identifier 'name'
I am literally just trying to run custom overlays and it isn't working for me: https://nightvision.dev/guide/intro/10-basic-examples.html#_7-custom-overlays
Even if I take the line with name out, it hits me with another unexpected token of { ....
Everything works fine if I am not using a navy file ... but as soon as I try to use one, I get all these unexpected token errors.
Do I have to install something special to run navy files or something?
Does anyone have any idea why this is happening?
Reproduction
if you want you can check out my code here ... everything should be working https://github.com/QuantFreedom1022/QuantFreedom/tree/neo/nightvision/overlay_test
Steps to reproduce
No response
Javascript Framework
no-framework (vanilla-js)
Logs
No response
Validations
[X] Read the docs.
[X] Check that there isn't already an issue that reports the same bug to avoid creating a duplicate.
[X] Make sure this is a NightVision issue and not a framework-specific issue.
[X] The provided reproduction is a minimal reproducible example of the bug.
ok so i finally got it to work ... can you guys please include in the example that you have to make sure you do
npm install vite
npm install @sveltejs/vite-plugin-svelte
then you have to add
import { defineConfig } from 'vite'
import { svelte } from '@sveltejs/vite-plugin-svelte'
import viteRawPlugin from "./vite/vite-raw-plugin.js";
// https://vitejs.dev/config/
export default defineConfig({
plugins: [
svelte({
emitCss: false,
}),
viteRawPlugin({
fileRegex: /\.navy$/,
})
]
})
as a vite.config.js file
then you have to create a folder called vite in your main folder and inside that vite folder you have to have a vite-raw-plugin.js file that says
export default function viteRawPlugin (options) {
return {
name: 'vite-raw-plugin',
transform (code, id) {
if (options.fileRegex.test(id)) {
const json = JSON.stringify(code)
.replace(/\u2028/g, '\\u2028')
.replace(/\u2029/g, '\\u2029')
return {
code: `export default ${json}`
}
}
}
}
}
I mean, wow ... there is so much missing from that example ... hopefully you guys are able to update it.
But all I am trying to do is draw a circle above one of the candles in my data, and I have no idea how to do that because there aren't any examples of how to draw objects in the examples section ... if you guys could please add an example of how to draw a simple circle at the open of a candle, and maybe three candles later put a square 10 dollar-points above the high of the candle, that would be great ... all you would need is like 10 candles to do this.
Right now I am trying to do that with no success.
https://github.com/QuantFreedom1022/QuantFreedom/tree/neo/nightvision/overlay_test ... you can refer to this to see what I am trying to do.
Good to hear you solved the first problem. This library requires a good level of understanding of js / canvas api. I can suggest you to go through the examples here https://github.com/project-nv/night-vision-os/tree/main/overlays.
Yeah, I understand that ... but leaving out such important info when others have had problems with this same situation 2 years ago should show that the examples need to be updated? I would highly suggest just adding those very few lines ... you can even copy-paste what I put up as an example.
You can see someone else with the same type of issue 1 year ago: https://github.com/project-nv/night-vision/issues/26
Maybe it could also be included in the overlay or navy.js section of the docs as the first thing someone must do before using it, because I am sure there are others out there who just don't report the issue.
Added the line to docs
| gharchive/issue | 2023-10-13T13:27:48 | 2025-04-01T06:45:28.241538 | {
"authors": [
"C451",
"QuantFreedom1022"
],
"repo": "project-nv/night-vision",
"url": "https://github.com/project-nv/night-vision/issues/89",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
839020092 | Add encoder/decoder #672
fixes #672
I try to make circuits in APL-style whenever I can, composing blocks with >=> composition whenever possible to make the code look a bit like a "circuit schematic".
point-free
I got a little frustrated trying to remember which haskell sigils are/aren't around, so I ended up just writing ocaml-like coq. :) I'll take another look at what we have defined.
So I was explicating something like forkN (2^n) >=> map (vecConstEq n k)
with a suitable definition for forkN and map being the monadic map.
Oooops! I forgot an argument to mapV.
You might also want to give it a more specific name than mapV. Up to you!
Sorry for the change, but please can you define a general mapV first, and then make a specialized one for the case where you map over the vector of nats?
Definition mapV {n : nat} {a b : SignalType} (f : signal a -> cava (signal b)) (v : signal (Vec a n)) : cava (signal (Vec b n)).
Sorry for the change, but please can you define a general mapV first, and then make a specialized one for the case where you map over the vector of nats? Please can you add this to Combinators.v
Definition mapV {n : nat} {a b : SignalType}
(f : signal a -> cava (signal b))
(v : signal (Vec a n))
: cava (signal (Vec b n)) :=
v' <- unpackV v ;;
r <- mapT f v' ;;
packV r.
Isn't this a duplicate of Vec.map?
Yes, I realized that later! I was not expecting our map to be called map!
| gharchive/pull-request | 2021-03-23T18:49:00 | 2025-04-01T06:45:28.248069 | {
"authors": [
"atondwal",
"jadephilipoom",
"satnam6502"
],
"repo": "project-oak/silveroak",
"url": "https://github.com/project-oak/silveroak/pull/685",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1389611467 | Support storing image logos as manifest annotations
This commit allows storing base64-encoded logos under image manifest annotations, using the com.zot.logo key. Logos have type limitations (only jpeg, png and gif are allowed) and a maximum allowed size (200x200 px).
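The annotation value described above could be produced along these lines: base64-encode the logo bytes and reject unsupported media types. The com.zot.logo key comes from the PR itself, but everything else here (the magic-byte check, function names) is an illustrative sketch, not zot's actual implementation, and the 200x200 pixel check is omitted since it needs an image decoder.

```python
# Sketch of building the com.zot.logo annotation value. The type check via
# magic bytes is illustrative, not zot's actual validation logic.
import base64

ALLOWED_MAGIC = {
    b"\x89PNG\r\n\x1a\n": "image/png",
    b"\xff\xd8\xff": "image/jpeg",
    b"GIF8": "image/gif",
}


def encode_logo(data: bytes) -> str:
    # Only jpeg, png and gif logos are allowed.
    if not any(data.startswith(magic) for magic in ALLOWED_MAGIC):
        raise ValueError("only jpeg, png and gif logos are allowed")
    return base64.b64encode(data).decode("ascii")


def logo_annotation(data: bytes) -> dict:
    # Annotation key taken from the PR description.
    return {"com.zot.logo": encode_logo(data)}
```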
What type of PR is this?
feature
Which issue does this PR fix:
closes https://github.com/project-zot/zot/issues/806
What does this PR do / Why do we need it:
If an issue # is not available please add repro steps and logs showing the issue:
Testing done on this change:
Automation added to e2e:
Will this break upgrades or downgrades?
Does this PR introduce any user-facing change?:
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
We should really be using #936 for this.
I agree.
The GraphQL API would be the same, and return the base64-encoded image?
I think the image blob itself should not be compressed, right? That would add an overhead?
Will redesign this and implement as part of: https://github.com/project-zot/zot/pull/1018
| gharchive/pull-request | 2022-09-28T16:04:56 | 2025-04-01T06:45:28.259825 | {
"authors": [
"alexstan12",
"andaaron",
"rchincha"
],
"repo": "project-zot/zot",
"url": "https://github.com/project-zot/zot/pull/833",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2052920758 | Automate even more downloads
Incorporated two more systems into an automated setup. The first is making the GC compiler download an automated process, removing the need for a dedicated readme step & .gitkeep. The second is replacing the prior devkitPPC implementation with encounter's gc-wii-binutils, automatically pulling from whatever the latest release is. Because the packaging is a simple zip folder, this removes the need for the zstandard module, and consequently removes the need for a requirements.txt & extra readme step. While the latter fully replaces implementation for ninja, make requires powerpc-eabi-cpp which doesn't seem to be built atm. Also the build.yml was tweaked to accept a powerpc argument via /opt/devkitpro/devkitPPC/bin, similar to the compilers argument, as the devkitpro dependencies still exist for that test environment
TODO: Still need to figure out how to make ninja and make call to the python download scripts automatically, similar to how dtk is currently downloaded. Figuring out how to do so will remove the need for a configure.py section in the readme
configure.py is run before any ninja build anyway; ninja just runs the most recent configure.py output
| gharchive/pull-request | 2023-12-21T18:54:20 | 2025-04-01T06:45:28.263769 | {
"authors": [
"EpochFlame",
"Repiteo"
],
"repo": "projectPiki/pikmin2",
"url": "https://github.com/projectPiki/pikmin2/pull/191",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
} |
337024168 | Add unused function param lint check
Signed-off-by: TomSweeneyRedHat tsweeney@redhat.com
Adding check for unused function parameters to the Buildah lint checks.
LGTM
Failure appears to be a network hiccup. Will kick the bot.
bot, retest this please.
LGTM
@rh-atomic-bot r+
| gharchive/pull-request | 2018-06-29T14:38:51 | 2025-04-01T06:45:28.284013 | {
"authors": [
"TomSweeneyRedHat",
"nalind",
"rhatdan"
],
"repo": "projectatomic/buildah",
"url": "https://github.com/projectatomic/buildah/pull/837",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1649959225 | bpf counters fv flake fix
Description
Related issues/PRs
Todos
[ ] Tests
[ ] Documentation
[ ] Release note
Release Note
TBD
Reminder for the reviewer
Make sure that this PR has the correct labels and milestone set.
Every PR needs one docs-* label.
docs-pr-required: This change requires a change to the documentation that has not been completed yet.
docs-completed: This change has all necessary documentation completed.
docs-not-required: This change has no user-facing impact and requires no docs.
Every PR needs one release-note-* label.
release-note-required: This PR has user-facing changes. Most PRs should have this label.
release-note-not-required: This PR has no user-facing changes.
Other optional labels:
cherry-pick-candidate: This PR should be cherry-picked to an earlier release. For bug fixes only.
needs-operator-pr: This PR is related to install and requires a corresponding change to the operator.
/merge-when-ready
OK, I will merge the pull request when it's ready, leave the commits as is when I merge it, and leave the branch after I've merged it.
| gharchive/pull-request | 2023-03-31T20:07:02 | 2025-04-01T06:45:28.302821 | {
"authors": [
"marvin-tigera",
"tomastigera"
],
"repo": "projectcalico/calico",
"url": "https://github.com/projectcalico/calico/pull/7521",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2605093387 | Prometheus iptables rules metric only counts programmed rules.
Description
Related issues/PRs
Todos
[ ] Tests
[ ] Documentation
[ ] Release note
Release Note
The `felix_iptables_rules` Prometheus metric now only counts rules within referenced Iptables chains, no longer counts candidate rules.
Reminder for the reviewer
Make sure that this PR has the correct labels and milestone set.
Every PR needs one docs-* label.
docs-pr-required: This change requires a change to the documentation that has not been completed yet.
docs-completed: This change has all necessary documentation completed.
docs-not-required: This change has no user-facing impact and requires no docs.
Every PR needs one release-note-* label.
release-note-required: This PR has user-facing changes. Most PRs should have this label.
release-note-not-required: This PR has no user-facing changes.
Other optional labels:
cherry-pick-candidate: This PR should be cherry-picked to an earlier release. For bug fixes only.
needs-operator-pr: This PR is related to install and requires a corresponding change to the operator.
@fasaxc done away with the "settled" business now, instead checking for a particular IPTables chain to determine when the dataplane got programmed at start-of-day, and then using "Eventually" calls to wait until additional changes occur.
| gharchive/pull-request | 2024-10-22T10:57:23 | 2025-04-01T06:45:28.308585 | {
"authors": [
"aaaaaaaalex"
],
"repo": "projectcalico/calico",
"url": "https://github.com/projectcalico/calico/pull/9374",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2670334958 | Enhance bpf interface autodetection
Description
Related issues/PRs
Todos
[ ] Tests
[ ] Documentation
[ ] Release note
Release Note
TBD
Reminder for the reviewer
Make sure that this PR has the correct labels and milestone set.
Every PR needs one docs-* label.
docs-pr-required: This change requires a change to the documentation that has not been completed yet.
docs-completed: This change has all necessary documentation completed.
docs-not-required: This change has no user-facing impact and requires no docs.
Every PR needs one release-note-* label.
release-note-required: This PR has user-facing changes. Most PRs should have this label.
release-note-not-required: This PR has no user-facing changes.
Other optional labels:
cherry-pick-candidate: This PR should be cherry-picked to an earlier release. For bug fixes only.
needs-operator-pr: This PR is related to install and requires a corresponding change to the operator.
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you all sign our Contributor License Agreement before we can accept your contribution. 1 out of 2 committers have signed the CLA: :white_check_mark: sridhartigera, :x: Ubuntu. Ubuntu seems not to be a GitHub user; you need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account. You have signed the CLA already but the status is still pending? Let us recheck it.
| gharchive/pull-request | 2024-11-18T23:47:57 | 2025-04-01T06:45:28.316369 | {
"authors": [
"CLAassistant",
"sridhartigera"
],
"repo": "projectcalico/calico",
"url": "https://github.com/projectcalico/calico/pull/9498",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
330719655 | more descriptive error for field validation
Description
Change validation message from:
ETCD_ENDPOINTS=http://127.0.0.1:2382 calicoctl apply -f wep
Failed to execute command: error with the following fields:
- pod = 'wep-TOM'
- node = 'nodeTOM'
to:
Failed to execute command: error with the following fields:
- pod = 'wep-TOM' (Reason: failed to validate Field: Pod because of Tag: name )
- node = 'nodeTOM' (Reason: failed to validate Field: Node because of Tag: name )
Because of how field validation occurs, errors during validation of field types (name, containerID, selector, labels, etc.) were reported without a useful message. This PR adds more detail about which specific field and tag failed validation.
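The shape of the improved error message can be sketched as follows. This is an illustrative Python model, not libcalico-go's actual Go validator; only the message format and the example field values come from the PR description.

```python
# Illustrative sketch (not libcalico-go's validator) of attaching the failing
# Field and Tag to each validation error, matching the new message format.
import re

# Simplified RFC-1123-style name rule used for the "name" tag.
NAME_RE = re.compile(r"^[a-z0-9]([-a-z0-9]*[a-z0-9])?$")


def validate_name(field, value):
    if NAME_RE.match(value):
        return None
    # Message format taken from the PR description.
    return (f"- {field.lower()} = '{value}' "
            f"(Reason: failed to validate Field: {field} because of Tag: name )")


def validate(fields):
    # Collect one descriptive error per invalid field.
    return [e for f, v in fields.items() if (e := validate_name(f, v))]
```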
fixes https://github.com/projectcalico/libcalico-go/issues/854
Release Note
None required
Signed-off-by: derek mcquay derek@tigera.io
@dmmcquay - the change looks good. Can you add a validation test for one or two of these errors?
| gharchive/pull-request | 2018-06-08T16:13:33 | 2025-04-01T06:45:28.325117 | {
"authors": [
"bcreane",
"dmmcquay"
],
"repo": "projectcalico/libcalico-go",
"url": "https://github.com/projectcalico/libcalico-go/pull/884",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1949174804 | capsule-controller-manager is CrashLoopBackOff
Bug description
capsule-controller-manager is CrashLoopBackOff via helm install.
Reproduced Steps
helm repo add clastix https://clastix.github.io/charts
helm install capsule clastix/capsule -n fusionx --create-namespace
Expected behavior
capsule-controller-manager start successfully.
Logs
{"level":"info","ts":"2023-10-18T08:19:35.028Z","logger":"setup","msg":"Capsule Version v0.3.3 64513b8"}
{"level":"info","ts":"2023-10-18T08:19:35.028Z","logger":"setup","msg":"Build from: https://github.com/clastix/capsule"}
{"level":"info","ts":"2023-10-18T08:19:35.028Z","logger":"setup","msg":"Build date: 2023-06-27T17:12:25"}
{"level":"info","ts":"2023-10-18T08:19:35.028Z","logger":"setup","msg":"Go Version: go1.19.10"}
{"level":"info","ts":"2023-10-18T08:19:35.028Z","logger":"setup","msg":"Go OS/Arch: linux/amd64"}
{"level":"info","ts":"2023-10-18T08:19:35.030Z","logger":"controller-runtime.metrics","msg":"Metrics server is starting to listen","addr":":8080"}
{"level":"info","ts":"2023-10-18T08:19:35.133Z","logger":"controllers.TLS","msg":"Generating new TLS certificate"}
{"level":"error","ts":"2023-10-18T08:20:31.331Z","logger":"controllers.TLS","msg":"cannot update Capsule TLS","error":"client rate limiter Wait returned an error: context canceled","stacktrace":"github.com/clastix/capsule/controllers/tls.Reconciler.ReconcileCertificates\n\t/workspace/controllers/tls/manager.go:110\nmain.main\n\t/workspace/main.go:183\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:250"}
{"level":"error","ts":"2023-10-18T08:20:31.332Z","logger":"setup","msg":"unable to reconcile Capsule TLS secret","error":"client rate limiter Wait returned an error: context canceled","stacktrace":"main.main\n\t/workspace/main.go:184\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:250"}
Additional context
Capsule version: 0.3.3
Helm Chart version: 0.4.6
Kubernetes version: v1.28.1+rke2r1
Thanks for opening this, @zhenlohuang!
May I ask you to try increasing the deployment resources, especially CPU?
The first startup is responsible in creating the certificates, and it could take some time, causing then a loopbackoff.
We doubled the CPU and memory; however, it doesn't work. The crash still happens.
Is there any workaround to disable the cert creation or increase the timeout?
You can offload certificate creation to cert-manager, if you have it installed.
https://github.com/projectcapsule/capsule/blob/9a21b408dde54c9d6522a0651135f76701dac263/charts/capsule/values.yaml#L9-L10
https://github.com/projectcapsule/capsule/blob/9a21b408dde54c9d6522a0651135f76701dac263/charts/capsule/values.yaml#L54-L55
https://github.com/projectcapsule/capsule/blob/9a21b408dde54c9d6522a0651135f76701dac263/charts/capsule/values.yaml#L158-L160
Let me know if this works for you, and maybe I could ask you to contribute to the documentation explaining how to install Capsule with cert-manager: WDYT?
@zhenlohuang please, can you provide some feedback about your issue? It would be helpful for the community and members who could potentially face the same issue!
@zhenlohuang We will close this issue for now, reopen if you need further assistance.
| gharchive/issue | 2023-10-18T08:30:16 | 2025-04-01T06:45:28.339063 | {
"authors": [
"oliverbaehler",
"prometherion",
"zhenlohuang"
],
"repo": "projectcapsule/capsule",
"url": "https://github.com/projectcapsule/capsule/issues/829",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1530177921 | Update Weekly External Release manifest
Update weekly manifest
Improper Commit Message
A proper Tracked-on value is not present in the commit message; make sure Tracked-on: jira-ticket is present.
| gharchive/pull-request | 2023-01-12T06:26:30 | 2025-04-01T06:45:28.340701 | {
"authors": [
"sysopenci"
],
"repo": "projectceladon/manifest",
"url": "https://github.com/projectceladon/manifest/pull/313",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1752134689 | crawl error
error info
nt/1.1\r\nAccept-Encoding: gzip\r\n\r\n"},"error":"[hybrid:RUNTIME] context deadline exceeded \u003c- could not get dom\n"}
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x80 pc=0x101cc37]
goroutine 40658 [running]:
github.com/projectdiscovery/katana/pkg/engine/hybrid.(*Crawler).navigateRequest(0xc0007185d0, 0xc00c10ecd0, 0xc007e2efa0)
/home/runner/work/katana/katana/pkg/engine/hybrid/crawl.go:157 +0xc37
github.com/projectdiscovery/katana/pkg/engine/common.(*Shared).Do.func1()
/home/runner/work/katana/katana/pkg/engine/common/base.go:226 +0xeb
created by github.com/projectdiscovery/katana/pkg/engine/common.(*Shared).Do
/home/runner/work/katana/katana/pkg/engine/common/base.go:216 +0x2b
./katana -list scan.txt -headless -no-sandbox -jc -aff -iqp -d 9 -kf robotstxt,sitemapxml -nc -json -o at.json
Could you show me the command you were running when the error occurred?
Oh wait, is it this?
./katana -list scan.txt -headless -no-sandbox -jc -aff -iqp -d 9 -kf robotstxt,sitemapxml -nc -json -o at.json
Duplicate of https://github.com/projectdiscovery/katana/issues/380 and fixed in the latest release - https://github.com/projectdiscovery/katana/releases/tag/v1.0.2
| gharchive/issue | 2023-06-12T07:54:52 | 2025-04-01T06:45:28.357101 | {
"authors": [
"MetzinAround",
"defaul0t",
"ehsandeep"
],
"repo": "projectdiscovery/katana",
"url": "https://github.com/projectdiscovery/katana/issues/472",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
59165309 | jetty restart disconnecting from data
Using jettywrapper 1.8.2 and jetty 7.0.0. Perhaps it's the start.jar, but when I 'rake jetty:stop' then 'rake jetty:start', the solr data appears to be gone: it no longer shows up in
SolrAdmin://solr/#/development/query
but appears not to have changed on the file system at jetty/solr/development-core/data/index/
It's as if there is some in-memory piece that is lost and it's unable to re-associate with existing data on restart. Fedora data on disk appears to be unchanged as well, but ingest attempts now fail too. Any thoughts on this? Might there be some process ID association that persists with the indices?
@rkhet this relates to solr autocommit. At this point the default solr config does not include auto commit. See this PR in sufia for the lines to add: https://github.com/projecthydra/sufia/pull/890/files (and or maybe just copy the entire file?). I'm not sure this is an issue with jetty as there is the expectation that you will configure jetty for your specific needs. That said, this might be a sane default, @awead, @jcoyne what do you think?
If you're using hydra-jetty with Hydra, the expectation is that you will run "rake jetty:config" before "rake jetty:start". This isn't in https://github.com/projecthydra/hydra/wiki/Lesson%3A-install-hydra-jetty, but it should be.
This is fixed in the latest version. See #11
I added the rake jetty:config step to the Install Hydra-Jetty instructions.
Thanks all.. the solrconfig.xml addition fixed my dangling data.
| gharchive/issue | 2015-02-26T23:54:30 | 2025-04-01T06:45:28.380655 | {
"authors": [
"acozine",
"cam156",
"jcoyne",
"rkhet"
],
"repo": "projecthydra/hydra-jetty",
"url": "https://github.com/projecthydra/hydra-jetty/issues/34",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1388004940 | Missing ModularVisualization in rainfall example?
Describe the bug
Cloned the repo and run the coverage tests:
$ pytest --cov=mesa_geo tests/
> from mesa_geo.visualization.ModularVisualization import ModularServer
E ModuleNotFoundError: No module named 'mesa_geo.visualization.ModularVisualization'
examples/rainfall/rainfall/server.py:5: ModuleNotFoundError
Expected behavior
Running the test suite should not give any errors
Indeed I don't see the module anymore in the visualization folder of mesa-geo
Maybe I'm getting confused from https://github.com/projectmesa/mesa-geo/issues/76
Or I simply messed up the dev installation?
Hmm this is interesting. But I do not have this issue on my side.
Could you try to use a clean virtual environment from scratch, and install dependencies and Mesa-Geo via pip install -e ".[dev]"?
Hey, thanks for the quick answer. I'm testing it now on my home pc (ubuntu)
This is what I tried;
git clone https://github.com/projectmesa/mesa-geo.git
cd mesa-geo/
mamba create -n testmg python
mamba activate testmg
pip install -e ".[dev]"
pytest --cov=mesa_geo tests/
[...]
import mesa
> from mesa_geo.visualization.ModularVisualization import ModularServer
E ModuleNotFoundError: No module named 'mesa_geo.visualization.ModularVisualization'
examples/geo_schelling/server.py:2: ModuleNotFoundError
Here the full output of the pytest errors and the mamba package list:
pytest_errors.txt
mamba_list.txt
Thanks for your help!
I see. Looks like Mesa-Geo is not properly installed for some reason.
We do not have a mesa_geo.visualization.ModularVisualization module in our code base; it is actually copied over from Mesa during setup. See: https://github.com/projectmesa/mesa-geo/blob/a3f01ad47a454246c2da05d0d6c7ccc27bcd963b/setup.py#L43
How about pip install -e . or install from PyPI directly, i.e., pip install mesa-geo? I am not very sure why this is happening.
okay so this is interesting: it seems that (at least on my machine) there is a subtle difference in the way setuptools treats command classes between install and install_editable with pip.
I start from a fresh clone (so the ModularVisualization.py file is not there) then:
If I install with pip install . the build command is properly called and the file is copied. Everything work and this is the expected behaviour
This is also the case, if I manually run python setup.py install/develop
Now, if I install with pip install -e . the develop command is not called and I don't get the visualization/ModularVisualization.py. Tests then obviously fail.
This is also the case for pip install -e .[dev]
So the question is: why does --editable call neither the build nor the develop subcommands?
Actually, pip install . calls the custom build command, as we can see from this log ("INSIDE BUILD" is my debug text)
Building wheels for collected packages: Mesa-Geo
Created temporary directory: /tmp/pip-wheel-uy4v8atq
Destination directory: /tmp/pip-wheel-uy4v8atq
Running command Building wheel for Mesa-Geo (pyproject.toml)
/tmp/pip-build-env-6f4grtxt/overlay/lib/python3.10/site-packages/setuptools/config/setupcfg.py:508: SetuptoolsDeprecationWarning: The license_file parameter is deprecated, use license_files instead.
warnings.warn(msg, warning_class)
running bdist_wheel
running build
INSIDE BUILD
Downloading the leaflet.js dependency from the internet...
Downloading the leaflet.css dependency from the internet...
running build_py
On the other hand, pip install --editable . calls editable_wheel and not develop
Building wheels for collected packages: Mesa-Geo
Created temporary directory: /tmp/pip-wheel-0upylsq5
Destination directory: /tmp/pip-wheel-0upylsq5
Running command Building editable for Mesa-Geo (pyproject.toml)
/tmp/pip-build-env-atgt3u8t/overlay/lib/python3.10/site-packages/setuptools/config/setupcfg.py:508: SetuptoolsDeprecationWarning: The license_file parameter is deprecated, use license_files instead.
warnings.warn(msg, warning_class)
running editable_wheel
creating /tmp/pip-wheel-0upylsq5/tmpjw6p7snr/Mesa_Geo.egg-info
writing /tmp/pip-wheel-0upylsq5/tmpjw6p7snr/Mesa_Geo.egg-info/PKG-INFO
writing dependency_links to /tmp/pip-wheel-0upylsq5/tmpjw6p7snr/Mesa_Geo.egg-info/dependency_links.txt
writing requirements to /tmp/pip-wheel-0upylsq5/tmpjw6p7snr/Mesa_Geo.egg-info/requires.txt
writing top-level names to /tmp/pip-wheel-0upylsq5/tmpjw6p7snr/Mesa_Geo.egg-info/top_level.txt
writing manifest file '/tmp/pip-wheel-0upylsq5/tmpjw6p7snr/Mesa_Geo.egg-info/SOURCES.txt'
reading manifest file '/tmp/pip-wheel-0upylsq5/tmpjw6p7snr/Mesa_Geo.egg-info/SOURCES.txt'
adding license file 'LICENSE'
writing manifest file '/tmp/pip-wheel-0upylsq5/tmpjw6p7snr/Mesa_Geo.egg-info/SOURCES.txt'
creating '/tmp/pip-wheel-0upylsq5/tmpjw6p7snr/Mesa_Geo-0.3.0.dist-info'
creating /tmp/pip-wheel-0upylsq5/tmpjw6p7snr/Mesa_Geo-0.3.0.dist-info/WHEEL
/tmp/pip-build-env-atgt3u8t/overlay/lib/python3.10/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
running build_py
running egg_info
Thus no file is copied...
As proof, if I change setup.py accordingly:
--- a/setup.py
+++ b/setup.py
@@ -12,6 +12,7 @@ from distutils.command.build import build
from setuptools import setup
from setuptools.command.develop import develop
+from setuptools.command.editable_wheel import editable_wheel
def get_version_from_package() -> str:
@@ -22,10 +23,22 @@ def get_version_from_package() -> str:
return version
+class EdiWheelCommand(editable_wheel):
+ """Installation for development mode."""
+
+ def run(self):
+ print("INSIDE EDIWHEEL")
+ get_mesa_viz_files()
+ get_frontend_dep()
+ editable_wheel.run(self)
if __name__ == "__main__":
+
setup(
name="Mesa-Geo",
version=get_version_from_package(),
cmdclass={
+ "editable_wheel": EdiWheelCommand,
"develop": DevelopCommand,
:
Building wheels for collected packages: Mesa-Geo
Created temporary directory: /tmp/pip-wheel-556vurov
Destination directory: /tmp/pip-wheel-556vurov
Running command Building editable for Mesa-Geo (pyproject.toml)
/tmp/pip-build-env-ysf_0t7s/overlay/lib/python3.10/site-packages/setuptools/config/setupcfg.py:508: SetuptoolsDeprecationWarning: The license_file parameter is deprecated, use license_files instead.
warnings.warn(msg, warning_class)
running editable_wheel
INSIDE EDIWHEEL
And the ModularVisualization.py file is present again!
We should replace the get_mesa_viz_files mechanism. This pattern seems to be non-standard in Python package setups.
Yes, if you can confirm this, we could:
Leave it like this (nobody apart from me seems to have had the same problem)
Extend the build_py command instead of build and develop, as this one seems to always be called
Maybe consider using the recent sub_command extension from setuptools to add additional build steps
And an additional question: is there any reason why build is imported from distutils and the rest from setuptools?
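For reference, option 2 (extending build_py) might look roughly like the sketch below. This is untested against Mesa-Geo's actual setup, and the helper names are simply the ones mentioned in this thread:

```python
# Sketch only: hook the extra file-fetching into build_py, which runs for
# both regular and editable installs.
try:
    from setuptools.command.build_py import build_py
except ImportError:  # keep the sketch importable even without setuptools
    class build_py:
        def run(self):
            pass

def get_mesa_viz_files():
    # Placeholder for the real helper that copies Mesa's viz files.
    print("copying Mesa visualization files...")

def get_frontend_dep():
    # Placeholder for the real helper that downloads leaflet.js/.css.
    print("downloading leaflet.js and leaflet.css...")

class BuildPyCommand(build_py):
    """Fetch external files before the normal build_py step."""

    def run(self):
        get_mesa_viz_files()
        get_frontend_dep()
        super().run()

# In setup(): cmdclass={"build_py": BuildPyCommand}
```

Because build_py is invoked by install, develop, and editable_wheel alike, the files would be fetched in all three install modes.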
@mrceresa If get_mesa_viz_files() is not called as expected, how about get_frontend_dep()? Do the Leaflet files (js and css) get downloaded?
If not then we may still need to address this issue for the Leaflet dependency.
I got the same problem when I installed mesa-geo from git. I reinstalled the package using 'pip install mesa-geo' and it works.
@zlfdodo Sorry about that. Could you share which operating system and python version were used?
@wang-boyu Thanks for the prompt reply. I am using Windows and conda Python 3.10.6. The mesa-geo package only works under this Python version on my computer. When I use any other Python version in Anaconda and run any model's run.py file in the examples folder, it returns "ImportError: DLL load failed while importing _version."
| gharchive/issue | 2022-09-27T16:02:10 | 2025-04-01T06:45:28.412775 | {
"authors": [
"mrceresa",
"rht",
"wang-boyu",
"zlfdodo"
],
"repo": "projectmesa/mesa-geo",
"url": "https://github.com/projectmesa/mesa-geo/issues/105",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1439718693 | PostgreSQL - Run the init non root
Context
The current PostgreSQL init requires obviously special permissions to set the volume permissions. Check if this can be prevented by using Delegating volume permission and ownership change to CSI driver.
Alternatives
Keep it less secure as is
The documentation still states that the volumePermission init container would be required. However, the TLS key already has the correct permission of 600 without the volumePermission init container.
| gharchive/issue | 2022-11-08T08:26:13 | 2025-04-01T06:45:28.421807 | {
"authors": [
"megian"
],
"repo": "projectsyn/component-keycloak",
"url": "https://github.com/projectsyn/component-keycloak/issues/178",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
376596673 | Don't tag the build docker container as belonging to 'docker'
Based on other container tags I've seen, as well as https://docs.docker.com/engine/reference/commandline/build/#tag-an-image--t , it looks like we probably want to use a tag prefix that describes our organization, rather than adding our container to the docker organization. This changes the yarn command for building Docker images to remedy that.
Side note: Not sure what's up with the "push" Travis build, but the "PR" one seems to be passing
| gharchive/pull-request | 2018-11-01T22:33:04 | 2025-04-01T06:45:28.423784 | {
"authors": [
"okeefm"
],
"repo": "projecttacoma/cqm-execution-service",
"url": "https://github.com/projecttacoma/cqm-execution-service/pull/13",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
428946051 | Stops pollution of Measure model population_sets
Stops the pollution of the measure model population_sets when using cqm-measures passed in.
Updates cqm-models to a version that exposes PopulationSet model
Iterates over temporary list of population sets instead of modifying measure list.
Adds test
Pull requests into cqm-execution require the following. Submitter and reviewer should :white_check_mark: when done. For items that are not-applicable, note it's not-applicable ("N/A") and :white_check_mark:.
Submitter:
[x] This pull request describes why these changes were made.
[ ] Internal ticket for this PR:
[ ] Internal ticket links to this PR
[x] Code diff has been done and been reviewed
[x] Tests are included and test edge cases
[x] Tests have been run locally and pass
Reviewer 1:
Name:
[ ] Code is maintainable and reusable, reuses existing code and infrastructure where appropriate, and accomplishes the task’s purpose
[ ] The tests appropriately test the new code, including edge cases
[ ] You have tried to break the code
Reviewer 2:
Name:
[ ] Code is maintainable and reusable, reuses existing code and infrastructure where appropriate, and accomplishes the task’s purpose
[ ] The tests appropriately test the new code, including edge cases
[ ] You have tried to break the code
Codecov Report
Merging #36 into master will increase coverage by 0.16%.
The diff coverage is 100%.
@@ Coverage Diff @@
## master #36 +/- ##
==========================================
+ Coverage 84.8% 84.97% +0.16%
==========================================
Files 6 6
Lines 678 679 +1
Branches 202 202
==========================================
+ Hits 575 577 +2
+ Misses 103 102 -1
Impacted Files                       Coverage Δ
lib/models/calculator.js             94.25% <100%> (+1.14%) :arrow_up:
lib/helpers/calculator_helpers.js    80.21% <100%> (+0.1%) :arrow_up:
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update ad4b6fe...36ed842. Read the comment docs.
| gharchive/pull-request | 2019-04-03T19:53:33 | 2025-04-01T06:45:28.436083 | {
"authors": [
"codecov-io",
"hossenlopp"
],
"repo": "projecttacoma/cqm-execution",
"url": "https://github.com/projecttacoma/cqm-execution/pull/36",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1522495069 | homework13 commit
Given a list containing integers, develop a program that computes the sum of the elements of the list.
Initial list: [1, 2, 3, 4, 5, 6, 7, 8, 9]
Sum of the list elements = 45
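A minimal Python solution sketch for the exercise (the function name is my own choice):

```python
def sum_of_elements(numbers):
    """Compute the sum of the integers in the list."""
    total = 0
    for n in numbers:
        total += n
    return total

source = [1, 2, 3, 4, 5, 6, 7, 8, 9]
print(sum_of_elements(source))  # → 45
```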
| gharchive/pull-request | 2023-01-06T12:17:42 | 2025-04-01T06:45:28.495642 | {
"authors": [
"prokudavlad"
],
"repo": "prokudavlad/Python-homeworks",
"url": "https://github.com/prokudavlad/Python-homeworks/pull/14",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2029938777 | fix: boolean flags when enabling collectors
When enabling boolean flags such as collector.systemd.enable-start-time-metrics, the flag must not specify any value.
Good:
--collector.systemd.enable-start-time-metrics
Bad:
--collector.systemd.enable-start-time-metrics=true
--collector.systemd.enable-start-time-metrics=True
--collector.systemd.enable-start-time-metrics=
To enable boolean flags, use the following method:
roles:
- role: prometheus.prometheus.node_exporter
vars:
node_exporter_enabled_collectors:
- systemd:
enable-start-time-metrics:
Would you please add some molecule tests for this?
ping @oneoneonepig
| gharchive/pull-request | 2023-12-07T05:32:23 | 2025-04-01T06:45:28.502070 | {
"authors": [
"SuperQ",
"gardar",
"oneoneonepig"
],
"repo": "prometheus-community/ansible",
"url": "https://github.com/prometheus-community/ansible/pull/257",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
947117745 | Index lifecycle management execution metrics
I already suggested this in #306, but I closed it before the repo was transferred to prometheus-community. It might be worth making another pull request, because I believe I'm not the only one who'll find these metrics useful.
Basically, in my daily routine I'd like to monitor ILM execution stats: how many indexes are covered by ILM policies, how many errors I've got, etc. This is a simple representation of my goal; I have used it since 7.3.2, I'm now on 7.11.smth and it still works. I haven't seen any changes to ILM recently, so I assume it's compatible with any 7.* and maybe even earlier.
Example of metrics available:
elasticsearch_ilm_index_status{action="rollover",index="foo_2",phase="hot",step="check-rollover-ready"} 1
elasticsearch_ilm_index_status{action="shrink",index="foo_3",phase="warm",step="shrunk-shards-allocated"} 1
elasticsearch_ilm_index_status{action="complete",index="foo_4",phase="warm",step="complete"} 1
elasticsearch_ilm_index_status{action="complete",index="foo_5",phase="hot",step="complete"} 1
elasticsearch_ilm_index_status{action="complete",index="foo_6",phase="new",step="complete"} 1
elasticsearch_ilm_index_status{action="",index="foo_7",phase="",step=""} 0
The numeric value represents whether a given index is covered by an ILM policy at all (in the example above, index foo_7 has no policy attached while the others have one). Everything else in the labels is just the _all/_ilm/explain API result.
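To make the series shape concrete, here is a hypothetical Python renderer for one exposition-format sample line (not the exporter's actual code):

```python
def render_ilm_metric(index, phase, action, step, managed):
    """Render one elasticsearch_ilm_index_status sample in exposition format.

    `managed` is True when the index has an ILM policy attached; an
    unmanaged index renders with empty action/phase/step labels and a 0.
    """
    labels = f'action="{action}",index="{index}",phase="{phase}",step="{step}"'
    return f"elasticsearch_ilm_index_status{{{labels}}} {1 if managed else 0}"

print(render_ilm_metric("foo_2", "hot", "rollover", "check-rollover-ready", True))
# → elasticsearch_ilm_index_status{action="rollover",index="foo_2",phase="hot",step="check-rollover-ready"} 1
```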
Is this not missing labels such as cluster ?
@tgrondier yes, it's actually missing them. I have the cluster label added by default in my Prometheus environment, so I missed its absence in the exporter. Gonna fix soon.
Hi 👋
Any news on this PR?
@mokrinsky will you finish the work to prepare it for merging?
Also interested if this is going to be picked up & finished off
We've pulled this change into our fork and it does do the job. I think there's room for improvement when it comes to using these metrics for alerting, specifically around actions that can be retried:
The metric does not tell you whether an action can be retried, and if so how many retries have been attempted
When an action is retrying, the Error metric disappears while the action is retried
Both of these factors make it a bit more difficult to alert on. For our case we'd want to alert on:
A failed action that is not retriable
A failed action that is retriable but has failed n number of retries
Not sure exactly what the metrics would look like for this. It's difficult as the ILM explain API itself hides the error state when the action is retrying.
| gharchive/pull-request | 2021-07-18T21:08:50 | 2025-04-01T06:45:28.508040 | {
"authors": [
"Evesy",
"mokrinsky",
"paulojmdias",
"tgrondier"
],
"repo": "prometheus-community/elasticsearch_exporter",
"url": "https://github.com/prometheus-community/elasticsearch_exporter/pull/457",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
179982862 | Invalid config undetected.
Alertmanager is starting fine without any errors about the group_interval in the global section.
./alertmanager -version
alertmanager, version 0.4.2 (branch: master, revision: 9a5ab2fa63dd7951f4f202b0846d4f4d8e9615b0)
build user: root@2811d2f42616
build date: 20160902-15:33:13
go version: go1.6.3
global:
resolve_timeout: 2m
group_interval: 1m # this is invalid config
hipchat_auth_token: 'xxxx'
# The root route on which each incoming alert enters.
route:
group_by: ['cluster']
group_interval: 1m # this is valid config
receiver: team-hipchat
routes:
- match_re:
severity: hipchat
receiver: team-hipchat
receivers:
- name: 'team-hipchat'
hipchat_configs:
- room_id: 123456
send_resolved: true
message_format: html
notify: true
This appears to be fixed in 0.6.0.
I couldn't reproduce issue with the provided config
| gharchive/issue | 2016-09-29T08:26:37 | 2025-04-01T06:45:28.511372 | {
"authors": [
"Conorbro",
"itatabitovski"
],
"repo": "prometheus/alertmanager",
"url": "https://github.com/prometheus/alertmanager/issues/519",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
244368099 | alertmanager crashes when receiving an alert.
CentOS release 6.8 2.6.32-642.4.2.el6.x86_64
prometheus 1.5.2
alertmanager 0.7.1
prometheus config:
global:
external_labels:
env: "test"
rule_files:
- "/opt/prometheus/prometheus-1.5.2.linux-amd64/configs/alerts.yml"
scrape_configs:
- job_name: nodes
static_configs:
- targets:
- *******************************
- *******************************
- *******************************
- *******************************
labels:
exporters: node
alerts:
ALERT low_disk_space
IF (node_filesystem_avail * 100)/node_filesystem_size <= 20
FOR 5m
LABELS { severity = "warning" }
ANNOTATIONS {
summary = "Less than 20% left on disk {{$labels.mountpoint}} on {{$labels.instance}}"
}
alertmanager config
global:
smtp_smarthost: 'smtp.domain.ru:25'
smtp_from: 'alertmanager@domain.ru'
route:
group_by: ['alertname', 'service', 'severity']
group_wait: 30s
group_interval: 5m
repeat_interval: 3h
receiver: omni_adm
routes:
inhibit_rules:
- source_match:
severity: 'critical'
target_match:
severity: 'warning'
equal: ['alertname', 'service']
receivers:
- name: 'adm'
email_configs:
- to: 'admin.domain.ru'
log
time="2017-07-20T11:14:57+03:00" level=info msg="Listening on :9084" source="main.go:308"
time="2017-07-20T11:15:37+03:00" level=debug msg="Received alert" alert=low_disk_space[0426069][active] component=dispatcher source="dispatch.go:183"
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x78 pc=0x8c762f]
goroutine 117 [running]:
main.meshWait.func1(0x18)
/go/src/github.com/prometheus/alertmanager/cmd/alertmanager/main.go:349 +0x3f
main.main.func6(0x45d964b800, 0xc4201aa960)
/go/src/github.com/prometheus/alertmanager/cmd/alertmanager/main.go:229 +0x49
github.com/prometheus/alertmanager/dispatch.(*aggrGroup).run(0xc4200f6e10, 0xc4201b1710)
/go/src/github.com/prometheus/alertmanager/dispatch/dispatch.go:347 +0x1e0
created by github.com/prometheus/alertmanager/dispatch.(*Dispatcher).processAlert
/go/src/github.com/prometheus/alertmanager/dispatch/dispatch.go:264 +0x32c
Are you running in HA mode, or with a single alertmanager instance?
single
alertmanager -config.file=${conffile} -web.listen-address=${web_listen_address} -mesh.listen-address= -storage.path=${storage_path}
I can reproduce this at current head. It only happens when I set -mesh.listen-address= (empty value), like in the command line example given by @pytker. The reason is that the mesh router r in line https://github.com/prometheus/alertmanager/blob/c4c0875ba32e6976fd151967e1e14c2d5cae44cc/cmd/alertmanager/main.go#L349 is nil.
Fix is out for review in https://github.com/prometheus/alertmanager/pull/919
| gharchive/issue | 2017-07-20T13:46:10 | 2025-04-01T06:45:28.516630 | {
"authors": [
"juliusv",
"pytker",
"stuartnelson3"
],
"repo": "prometheus/alertmanager",
"url": "https://github.com/prometheus/alertmanager/issues/914",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1870451554 | amtool: command to test receivers
This PR is an extension of @OktarianTB's 9-month old PR https://github.com/prometheus/alertmanager/pull/3139 that's updated to work against the latest code in main, to hopefully increase its likelihood of being merged.
Credit for this work goes to @OktarianTB - I believe I've marked the commit authorship correctly, please let me know if I made any mistakes here.
This PR aims to make progress towards #2845 by adding a new command to amtool to send test notifications to every receiver in an alertmanager config file. The command takes in an alertmanager config file and optionally an alert file to mock values for labels and annotations. This is the MVP for the command, potentially a few other options can be added later if deemed of use. This largely re-uses the equivalent implementation in grafana/grafana.
I have only tested this with Slack so far.
Thanks for addressing the issues directly - I've lost most of the context since making that PR last year but hopefully it comes in useful 😄
We'll put it to good use; thanks very much @OktarianTB.
I opened https://github.com/prometheus/alertmanager/pull/3553 to reduce some of the code duplication here.
| gharchive/pull-request | 2023-08-28T20:25:25 | 2025-04-01T06:45:28.520556 | {
"authors": [
"OktarianTB",
"alexweav",
"gotjosh"
],
"repo": "prometheus/alertmanager",
"url": "https://github.com/prometheus/alertmanager/pull/3491",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
734223991 | Exception occurred during processing of request from ('10.12.220.146', 52792)
I am not sure if this is happening because of the Prometheus Python client source code. I have been looking into this error but could not figure out what is going on. Can anyone please tell me what is happening here?
I deployed a custom exporter as a container in a Kubernetes cluster and exposed the metrics on an HTTP server.
Here is my code: https://github.com/eminaktas/get-of-metrics/blob/master/get-of-metrics.py
Exception occurred during processing of request from ('10.12.220.146', 52792)
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/socketserver.py", line 650, in process_request_thread
    self.finish_request(request, client_address)
  File "/usr/local/lib/python3.9/socketserver.py", line 360, in finish_request
    self.RequestHandlerClass(request, client_address, self)
  File "/usr/local/lib/python3.9/socketserver.py", line 720, in __init__
    self.handle()
  File "/usr/local/lib/python3.9/http/server.py", line 427, in handle
    self.handle_one_request()
  File "/usr/local/lib/python3.9/http/server.py", line 415, in handle_one_request
    method()
  File "/usr/local/lib/python3.9/site-packages/prometheus_client/exposition.py", line 152, in do_GET
    output = encoder(registry)
  File "/usr/local/lib/python3.9/site-packages/prometheus_client/openmetrics/exposition.py", line 14, in generate_latest
    for metric in registry.collect():
  File "/usr/local/lib/python3.9/site-packages/prometheus_client/registry.py", line 75, in collect
    for metric in collector.collect():
  File "/get-of-metrics/./get-of-metrics-v2.py", line 70, in collect
    matches = finditer(regex, data)
  File "/usr/local/lib/python3.9/re.py", line 248, in finditer
    return _compile(pattern, flags).finditer(string)
TypeError: expected string or bytes-like object
Thank you
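For what it's worth, the final frame points at collect() passing data to finditer, and re.finditer raises exactly this TypeError when handed None (e.g. a fetch that returned nothing). A quick demonstration with a guard sketch (regex and collect here are illustrative, not the exporter's real code):

```python
import re

regex = r"\d+"

def collect(data):
    # Guard against a fetch that returned None before handing the
    # text to the regex engine.
    if not isinstance(data, str):
        return []
    return [m.group() for m in re.finditer(regex, data)]

try:
    re.finditer(regex, None)  # what the exporter effectively did
except TypeError as err:
    print(type(err).__name__)  # → TypeError

print(collect(None))        # → []
print(collect("a1 b22"))    # → ['1', '22']
```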
That is an error entirely in your code, nothing to do with this library.
It makes more sense to ask questions like this on the prometheus-users mailing list rather than in a GitHub issue. On the mailing list, more people are available to potentially respond to your question, and the whole community can benefit from the answers provided.
| gharchive/issue | 2020-11-02T06:27:48 | 2025-04-01T06:45:28.559300 | {
"authors": [
"brian-brazil",
"eminaktas"
],
"repo": "prometheus/client_python",
"url": "https://github.com/prometheus/client_python/issues/590",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1000232737 | Incomplete arp entries are returned with MAC address 30:30:3a:...
Hello,
to reproduce the bug you have to ping a wide LAN range, which will create a lot of incomplete entries; after executing procfs.GatherARPEntries(), incomplete entries are returned like:
{192.168.0.111 30:30:3a:30:30:3a:30:30:3a:30:30:3a:30:30:3a:30:30 eth0}
{192.168.0.46 30:30:3a:30:30:3a:30:30:3a:30:30:3a:30:30:3a:30:30 eth0}
while arp -n reports:
192.168.0.111 (incomplete) eth0
192.168.0.46 (incomplete) eth0
This is not critical, as all incomplete entries may be filtered by MAC address; still, it might be better to return such entries with an empty MAC address.
The function gathers data from /proc/net/arp, can you include a sample output from that file? Normally, I would expect incomplete entries to have an address of 00:00:00:00:00:00.
Please check my commit for sample data. Also I've found that MAC addresses were actually being parsed incorrectly; I fixed that with net.ParseMAC(), and incomplete entries become 00:00:00:00:00:00 as you described.
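A quick Python check shows where the 30:30:3a... values come from: 0x30 is ASCII '0' and 0x3a is ASCII ':', so the garbled MAC is the hex encoding of the literal text "00:00:00:00:00:00", i.e. the raw field was hex-dumped instead of parsed:

```python
# Hex-encoding the literal text of an all-zero MAC reproduces the
# garbled value from the report byte for byte.
raw_field = "00:00:00:00:00:00"        # what /proc/net/arp shows
garbled = raw_field.encode().hex(":")  # bytes.hex(sep) needs Python 3.8+
print(garbled)
# → 30:30:3a:30:30:3a:30:30:3a:30:30:3a:30:30:3a:30:30
```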
| gharchive/issue | 2021-09-19T07:15:31 | 2025-04-01T06:45:28.577813 | {
"authors": [
"SuperQ",
"und3f"
],
"repo": "prometheus/procfs",
"url": "https://github.com/prometheus/procfs/issues/413",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
546817716 | Export metric for WAL write errors
When prometheus is unable to write to the WAL, it logs a warning, but there is no metric exported to allow alerting on this condition. This means that an operator might not know there's a corrupted/truncated WAL until the next compaction cycle runs (assuming they have an alert on prometheus_tsdb_wal_truncations_failed_total).
Example log entries:
level=warn ts=2020-01-08T11:57:49.195Z caller=scrape.go:945 component="scrape manager" scrape_pool=border_routers target=<redacted>/metrics msg="appending scrape report failed" err="write to WAL: log samples: write data/wal/00000000: no space left on device"
level=warn ts=2020-01-08T11:57:49.195Z caller=manager.go:584 component="rule manager" group=alert.rules msg="rule sample appending failed" err="write to WAL: log samples: write data/wal/00000000: no space left on device"
What is the value of prometheus_tsdb_wal_corruptions_total ?
Shouldn't /-/healthy and /-/ready reflect these failures?
Shouldn't /-/healthy and /-/ready reflect these failures?
This is discussed in https://github.com/prometheus/prometheus/issues/3807
@roidelapluie Thank you
Looks like this is fixed by https://github.com/prometheus/prometheus/pull/6647
| gharchive/issue | 2020-01-08T11:59:24 | 2025-04-01T06:45:28.581797 | {
"authors": [
"alippai",
"codesome",
"kormat",
"roidelapluie"
],
"repo": "prometheus/prometheus",
"url": "https://github.com/prometheus/prometheus/issues/6577",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
849517156 | Use a common TS PrometheusClient between codemirror-promql and prometheus
I just noticed that in https://github.com/prometheus/prometheus/blob/main/web/ui/react-app/src/pages/graph/PanelList.tsx#L134 there is a way to retrieve the list of the metric.
Do you think it would be a good idea to actually use the PrometheusClient that has been implemented here: https://github.com/prometheus-community/codemirror-promql/blob/master/src/lang-promql/client/prometheus.ts#L83
I hope you will find it a bit cleaner and it will share the way to contact Prometheus in TS.
I don't know if it makes sense to put this client in a single library to provide a TS HTTP Prometheus Client.
By the way, I wasn't sure if it is the correct way to ask for this kind of "refactoring". So please let me know, if I was wrong to do it like that :).
I guess the answer should come from you @juliusv ^^ (following the code ownership)
@Nexucis We use the useFetch() hook on every page to fetch info from the API (not just data-related info), which I think is a fine and React-ish way of doing things, and doesn't need features such as client-side caching (since it's always just a single request upon page / component load). The Prometheus client in cm-promql is focused around metadata querying only, and currently I don't see a good reason to extend and integrate it into the rest of Prometheus for the other purposes yet.
But yeah, I was thinking earlier that it could be nice if Prometheus (or any other user) could pass in the list of metrics that it already has to the codemirror-promql layer upon initialization, so it doesn't have to be fetched twice.
mmm ok. So you are more thinking about injecting a list of data into the promClient of cm-promql ?
currently I don't see a good reason to extend and integrate it into the rest of Prometheus for the other purposes yet.
Well, we could think about creating a TS lib that would contain a complete Prometheus client covering the whole API, like has been done in Golang and other languages.
| gharchive/issue | 2021-04-02T22:50:19 | 2025-04-01T06:45:28.586669 | {
"authors": [
"Nexucis",
"juliusv"
],
"repo": "prometheus/prometheus",
"url": "https://github.com/prometheus/prometheus/issues/8686",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
395600105 | Move the build badge to the badge list
Move the build badge to the badge list.
Thanks!
| gharchive/pull-request | 2019-01-03T14:50:47 | 2025-04-01T06:45:28.587876 | {
"authors": [
"brian-brazil",
"zhulongcheng"
],
"repo": "prometheus/prometheus",
"url": "https://github.com/prometheus/prometheus/pull/5060",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
513299800 | Update aggregation operator docs
Update the aggregation operator documentation.
Include before expression style syntax as valid.
Update examples to show before style.
Signed-off-by: Ben Kochie superq@gmail.com
Updated, PTAL.
:+1:
| gharchive/pull-request | 2019-10-28T13:22:29 | 2025-04-01T06:45:28.589619 | {
"authors": [
"SuperQ",
"juliusv"
],
"repo": "prometheus/prometheus",
"url": "https://github.com/prometheus/prometheus/pull/6240",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
549235220 | Remove old checkpoint dir if it still exists
Signed-off-by: Julien Pivotto roidelapluie@inuits.eu
Fix #6619
Thanks!
| gharchive/pull-request | 2020-01-13T23:09:48 | 2025-04-01T06:45:28.590941 | {
"authors": [
"brian-brazil",
"roidelapluie"
],
"repo": "prometheus/prometheus",
"url": "https://github.com/prometheus/prometheus/pull/6621",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1050490551 | Fix a typo in docs/configuration/configuration.md
Signed-off-by: Hu Shuai hus.fnst@fujitsu.com
Thanks!
| gharchive/pull-request | 2021-11-11T01:52:32 | 2025-04-01T06:45:28.592229 | {
"authors": [
"LeviHarrison",
"hs0210"
],
"repo": "prometheus/prometheus",
"url": "https://github.com/prometheus/prometheus/pull/9717",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
308315847 | Avoid bind-mounting to allow building with a remote docker engine
Fixes #94
This works when running against a remote Docker engine (for example, in CircleCI), but this is slower, since it adds the step of recursively copying the current working directory into the build container, rather than being bind-mounted in.
Signed-off-by: Dave Henderson David.Henderson@qlik.com
LGTM. This also has the benefit of creating the .build output directory owned by the current user instead of root.
Should we make this a config option?
| gharchive/pull-request | 2018-03-25T02:14:39 | 2025-04-01T06:45:28.594093 | {
"authors": [
"SuperQ",
"hairyhenderson",
"pgier"
],
"repo": "prometheus/promu",
"url": "https://github.com/prometheus/promu/pull/95",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
430700856 | Update deps
Upgrade to go 1.12 && apply debian security fix && upgrade kapprover to 0.6.0 && run 'dep ensure'
Appears to no longer be relevant. Closing.
| gharchive/pull-request | 2019-04-08T23:26:58 | 2025-04-01T06:45:28.606132 | {
"authors": [
"christianjoun",
"johngmyers"
],
"repo": "proofpoint/certificate-init-container",
"url": "https://github.com/proofpoint/certificate-init-container/pull/24",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1617528353 | Fair/unbiased versions for crps
So what I did following #7 is threefold:
--> modified the 'vectorized' _crps_ensemble to get the unbiased version. The computational cost is not modified, and the 'fair' keyword has been added as an option, defaulting to True
--> added a 'fair' gufunc to use numba's power, with the sorting algorithm corresponding to the Taillardat et al., 2016 and Zamo et al., 2017 version of CRPS ('PWT')
--> created a toy wrapper class to be able to pass the 'fair' option to numba. Dunno if it's the most pythonic way to do it, any better idea welcome.
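For context on the fair/biased distinction, both estimators can be computed directly from the CRPS definition; here is a minimal pure-Python sketch (function name is mine, independent of the faster sorting-based implementations in this PR):

```python
def crps_ensemble_direct(obs, forecasts, fair=True):
    # CRPS(F, y) = E|X - y| - 0.5 * E|X - X'|
    # The fair (unbiased) estimator divides the pairwise term by
    # M * (M - 1) instead of M**2, correcting for finite ensemble size M.
    m = len(forecasts)
    term1 = sum(abs(x - obs) for x in forecasts) / m
    pairwise = sum(abs(a - b) for a in forecasts for b in forecasts)
    denom = m * (m - 1) if fair else m * m
    return term1 - 0.5 * pairwise / denom
```

With obs = 0 and the 2-member ensemble [0.0, 1.0], the biased estimator gives 0.25 while the fair one gives 0.0, illustrating how large the correction is for small M.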
Just to give you a glimpse over previous results, a piece of code to test on gaussian ensembles
```python
import numpy as np
import matplotlib.pyplot as plt
import properscoring as ps

tests = np.zeros((256,10,4))
for t in range(256) :
    biased = []
    unbiased = []
    integrals = []
    integrals_b = []
    for N in range(2, 50, 5) :
        forecasts = np.random.randn(N)
        res_biased = ps.crps_ensemble(0, forecasts, fair = False, framework='vectorized')
        res_unbiased = ps.crps_ensemble(0, forecasts, fair = True, framework='vectorized')
        res_numba = ps.crps_ensemble(0, forecasts, fair = True)
        res_numba_b = ps.crps_ensemble(0, forecasts, fair = False)
        biased.append(res_biased)
        unbiased.append(res_unbiased)
        integrals.append(res_numba)
        integrals_b.append(res_numba_b)
    tests[t,:,0] = np.array(biased)
    tests[t,:,1] = np.array(unbiased)
    tests[t,:,2] = np.array(integrals)
    tests[t,:,3] = np.array(integrals_b)

plt.plot(range(2,50,5), tests.mean(axis = 0)[:,0], 'kd', linewidth = 2.5, label ='Biased')
plt.plot(range(2,50,5), tests.mean(axis = 0)[:,3] , 'b--', linewidth = 2.5, label ='With gufunc Biased')
plt.plot(range(2,50,5), tests.mean(axis = 0)[:,1] , 'r--', linewidth = 2.5, label ='Unbiased')
plt.plot(range(2,50,5), tests.mean(axis = 0)[:,2] , 'bd', linewidth = 2.5, label ='With gufunc Unbiased')
plt.title('Ensemble CRPS with Gaussian, 256 random drawings' , fontsize = 15)
plt.grid()
plt.xlabel('Number of samples', fontsize = 15)
plt.yscale('log')
plt.legend()
plt.show()

spread = np.abs(tests[:,:,0]-tests[:,:,1])
spread_numba = np.abs(tests[:,:,2]-tests[:,:,3])
plt.plot(range(2,50,5), spread.mean(axis=0), 'k--', linewidth = 2.5, label = r'$\vert Biased-Unbiased\vert$, Numpy')
plt.fill_between(range(2,50,5), spread.mean(axis=0) + spread.std(axis = 0), spread.mean(axis=0) - spread.std(axis=0), alpha = 0.5)
plt.plot(range(2,50,5), spread_numba.mean(axis=0), 'r--', linewidth = 2.5, label = r'$\vert Biased-Unbiased\vert$, Numba')
plt.fill_between(range(2,50,5), spread_numba.mean(axis=0) + spread_numba.std(axis = 0), spread_numba.mean(axis=0) - spread_numba.std(axis=0), alpha=0.5, color = 'grey')
plt.legend()
plt.title('Biased vs Fair discrepancy, 256 random drawings', fontsize = 15)
plt.xlabel('Number of samples', fontsize = 15)
plt.yscale('log')
plt.grid()
plt.show()
```
So basically this produces what's expected, and I didn't notice any harm to performance (Numba being at least an order of magnitude faster than Numpy)
And that's all for me :) let me know about comments
| gharchive/pull-request | 2023-03-09T16:00:05 | 2025-04-01T06:45:28.614174 | {
"authors": [
"flyIchtus"
],
"repo": "properscoring/properscoring",
"url": "https://github.com/properscoring/properscoring/pull/14",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
608314363 | Check keys (based on #2)
Clean up field key iteration, no need to enumerate
Check remaining arguments to catch unexpected inputs
@hwwhww Merging this instead; there's no need for a set difference to catch the unexpected arguments, one length check will do.
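The shape of that check — iterate the declared field keys directly and let a single length comparison flag anything unexpected — can be sketched like this (names are mine, not remerkleable's API):

```python
def collect_field_values(field_names, kwargs):
    # Iterate the declared field keys directly; no enumerate needed.
    values = {name: kwargs[name] for name in field_names if name in kwargs}
    # One length check catches any unexpected keyword arguments,
    # without computing an explicit set difference up front.
    if len(values) != len(kwargs):
        unexpected = [k for k in kwargs if k not in set(field_names)]
        raise AttributeError(f"unexpected arguments: {unexpected}")
    return values
```

The set difference is only computed on the error path, where cost no longer matters.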
| gharchive/pull-request | 2020-04-28T13:21:59 | 2025-04-01T06:45:28.683016 | {
"authors": [
"protolambda"
],
"repo": "protolambda/remerkleable",
"url": "https://github.com/protolambda/remerkleable/pull/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
33451909 | change identification method to work with jQ2.2
needs testing with 1.x
fixed #26
thanks. this has been fixed using a different approach: https://github.com/protonet/jquery.inview/commit/be2769f8fc9b22587df2bc293b8dd94ef26e0888
| gharchive/pull-request | 2014-05-14T00:26:48 | 2025-04-01T06:45:28.695110 | {
"authors": [
"enapupe",
"tiff"
],
"repo": "protonet/jquery.inview",
"url": "https://github.com/protonet/jquery.inview/pull/27",
"license": "WTFPL",
"license_type": "permissive",
"license_source": "github-api"
} |
814687368 | #30 - Github action for binary release workflow
Description
relates to: #30
Before we can merge this PR, please make sure that all the following items have been
checked off. If any of the checklist items are not applicable, please leave them but
write a little note why.
[x] Targeted PR against correct branch (see CONTRIBUTING.md)
[x] Linked to Github issue with discussion and accepted design OR link to spec that describes this work.
[ ] Wrote unit and integration tests
Not applicable: change was adding github actions.
[ ] Updated relevant documentation (docs/) or specification (x/<module>/spec/)
Not applicable: no go code was changed.
[ ] Added relevant godoc comments.
Not applicable: no go code was changed.
[x] Added a relevant changelog entry to the Unreleased section in CHANGELOG.md
[x] Re-reviewed Files changed in the Github PR explorer
[x] Review Codecov Report in the comment section below once CI passes
Might be nice to have our change logs linked up by running a script over them
import fileinput
import re
# This script goes through the provided file, and replaces any " \#<number>",
# with the valid mark down formatted link to it. e.g.
# " [\#number](https://github.com/cosmos/cosmos-sdk/issues/<number>)
# Note that if the number is for a PR, github will auto-redirect you when you click the link.
# It is safe to run the script multiple times in succession.
#
# Example:
#
# $ python ./scripts/linkify_changelog.py CHANGELOG.md
for line in fileinput.input(inplace=1):
    line = re.sub(r"\s\\#([0-9]+)", r" [\\#\1](https://github.com/provenance-io/provenance/issues/\1)", line.rstrip())
    print(line)
@iramiller i think we are good to merge this now.
@iramiller i created a new issue to clean up the changelog links
| gharchive/pull-request | 2021-02-23T17:44:40 | 2025-04-01T06:45:28.724017 | {
"authors": [
"iramiller",
"mtps"
],
"repo": "provenance-io/provenance",
"url": "https://github.com/provenance-io/provenance/pull/92",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1172286184 | [Bug]: check extra764 doesn't handle bucket region properly
What happened?
When running the checks for S3 it picks up the global region instead of the region the bucket is created in
How to reproduce it
Steps to reproduce the behavior:
What command are you running? Check 7.64 [extra764] Check if S3 buckets have secure transport policy enabled
Running in Control tower multi-account environment, running as administrator in specific account
Error - Picks up us-east-1 but our region is not this region
Expected behavior
I would expect it to return the correct region as it has done for the following S3 checks :
[extra73] Ensure there are no S3 buckets open to Everyone or Any AWS user
[extra734] Check if S3 buckets have default encryption (SSE) enabled or use a bucket policy to enforce it
Screenshots or Logs
From where are you running Prowler?
Resource: Workstation
OS: Mac OS Monteray 12.2.1 (21D62)
AWS-CLI Version: aws-cli/2.4.13 Python/3.8.8 Darwin/21.3.0
Prowler Version: Prowler 2.7.0-24January2022
Shell and version:
Others:
Additional context
I picked it up by going to the finding and then going to the S3 instance, only to find it in the correct region and not the US EAST 1 region.
Hi @RoxyR44 , good catch! We have open a Pull Request on behalf of this issue: https://github.com/prowler-cloud/prowler/pull/1077
Please, test it to see if that works for you 😄
This is now fixed from PR https://github.com/prowler-cloud/prowler/pull/1077. Available in master branch now. It will be part of the next fix release in a week. Thanks!
| gharchive/issue | 2022-03-17T11:57:44 | 2025-04-01T06:45:28.736743 | {
"authors": [
"RoxyR44",
"sergargar",
"toniblyx"
],
"repo": "prowler-cloud/prowler",
"url": "https://github.com/prowler-cloud/prowler/issues/1076",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
Can ss global proxying and proxyee-down be used together?
Problem description (required)
On a campus network using ss + IPv6 for global proxying (i.e., all traffic is required to go through the remote ss-server), how should this be configured?
Version (required)
3.4
Operating system (required)
macOS
Related screenshots
null
Related logs
null
You could enable proxyee-down's secondary proxy... point the secondary proxy at 127.0.0.1:1080.
The problem, though, will probably be that accessing Baidu Cloud from a foreign IP is difficult.
You could enable proxyee-down's secondary proxy... point the secondary proxy at 127.0.0.1:1080.
The problem, though, will probably be that accessing Baidu Cloud from a foreign IP is difficult.
My current solution is to use a plugin to export the real download URL and then download it in proxyee-down. One issue is that the request headers have to be added; it works this way, but the headers first have to be grabbed from the browser and then entered into proxyee-down manually, line by line. Is there a good way to avoid entering the request headers?
You could enable proxyee-down's secondary proxy... point the secondary proxy at 127.0.0.1:1080.
The problem, though, will probably be that accessing Baidu Cloud from a foreign IP is difficult.
I checked, and only the Cookie is actually required in the request headers, which is acceptable, just a bit more complicated. Small files I just download directly; large files
| gharchive/issue | 2018-11-19T15:03:08 | 2025-04-01T06:45:28.755663 | {
"authors": [
"Sumacat",
"Zhangxuri"
],
"repo": "proxyee-down-org/proxyee-down",
"url": "https://github.com/proxyee-down-org/proxyee-down/issues/1099",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2279769264 | Upgrade Binutils to 2.31.1
Description
This PR makes 2 things:
Upgrade binutils to 2.31.1. The process to achieve this was to isolate the DVP-specific changes from the original patches, then go version by version applying the patches and verifying that everything still works.
Add MacOS ARM support.
To check if it was working fine I have been using the @Ziemas repo https://github.com/Ziemas/xtc/tree/ps2sdk
I couldn't upgrade to a newer version; even though I was able to resolve the conflicts and get it to build, it produced runtime errors when used.
Maybe I will try to resume this in the future.
Cheers!
It works fine for my PS2DVD example. There is some issue with the old dvp-as where using a global label as a branch target causes some odd relocation errors. I don't have a reproducible sample, and I'm far too busy to try and make one to see if it's fixed here.
Anyways, it works good. Nice job o7
The EE toolchain does not have the DVP relocation patches, which probably explains that? IIRC that's something you'd run into with VU0 code though.
| gharchive/pull-request | 2024-05-05T22:38:58 | 2025-04-01T06:45:28.807404 | {
"authors": [
"Ziemas",
"fjtrujy"
],
"repo": "ps2dev/ps2toolchain-dvp",
"url": "https://github.com/ps2dev/ps2toolchain-dvp/pull/16",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1111609845 | Can't reconnect to ESP32 after reboot: Connection refused
Tried this today with a ESP-A1S module (ES8388) and it works just fine :-)
Only thing:
I can connect to it from my Mac Mini and sue it as default audio output, but after reboot or power up, the connection is lost and after a while this message is displayed:
"Connection refused"
Only soliution is to power-cycle the ESP32 and it is selected as default audio output again...
Looks that the ESP32 keeps the previous connection active and denies further connection attempts...so only a reset helps...
Is there a callback mechanism to detect if audio stopped playing for a while?
cheers
richard
https://github.com/pschatzmann/ESP32-A2DP/wiki/Auto-Reconnect
It might be that the ESP32 is still trying to reconnect but it fails because the Mac Mini has been restarted.
If you set the log level to verbose you might be able to follow what is happening.
Hmm..regardless what I try..I don't get any ESP_LOG output inside Arduino IDE...
And with a lower reconnect time also no difference:
a2dp_sink.set_auto_reconnect(true, false, 20);
It's in Arduino Tools -> Core Debug Level: e.g. Verbose
Then it must be not the official Arduino IDE as there is no such menu under "Tools":
That's strange: I usually work with the generic ESP32 Dev Module - I suggest to give that a try...
Ah okay.. seems not all ESP32 boards enable this and other menu points (o;
So this is the output when I do a restart:
22:07:01.211 -> D][BluetoothA2DPSink.cpp:1124] ccall_app_a2d_callback(): ccall_app_a2d_callback
22:07:01.211 -> [D][BluetoothA2DPSink.cpp:904] app_a2d_callback(): app_a2d_callback
22:07:01.211 -> [D][BluetoothA2DPSink.cpp:908] app_a2d_callback(): app_a2d_callback ESP_A2D_CONNECTION_STATE_EVT
22:07:01.211 -> [D][BluetoothA2DPSink.cpp:310] app_work_dispatch(): app_work_dispatch event 0x0, param len 16
22:07:01.244 -> [D][BluetoothA2DPSink.cpp:342] app_send_msg(): app_send_msg
22:07:01.244 -> [D][]lueuettht2A2Sininkpcpp65130ppctall_aandrer(t: alp_tcs()h ndlll_asp 0_c, 0allba[k]
22:07:01.244 -> BDlu[tluthAotPSiDPS.ck.c368] 9p] aas_rcand_er(lbaakp_taapphan_le_c APbaSIG
22:07:01.244 -> [O][BDIStATth sDPS 1k
22:07:01.244 -> [D]:5lu] apt_r2_ctinkl.bapk(33]app_rcoct_dislback d()AaRp_Co_C_diECTtcN_STATE[BTuetoo[BluDtoitk.cDPS1nk.]pcca10] v_pdl_rk_devpa)chcca app_v_rk__a2datct evD][ lue,ootramDPen 12c
22:07:01.277 -> pD][6] etvotdl_a2PSenk():p:3_2]dlpp_d_ent mst 0
22:07:01.277 -> [D]_seuetmsgh
22:07:01.277 -> 2DPSink.cpp:540] av_hdl_a2d_evt(): av_hdl_a2d_evt ESP_A2D_CONNECTION_STATE_EVT
22:07:01.277 -> [D][BluetoothA2DPSink.cpp:641] handle_connection_state(): handle_connection_state evt 0
22:07:01.277 -> [I][BluetoothA2DPSink.cpp:646] handle_connection_state(): partner address: 14:98:77:6d:8c:fc
22:07:01.319 -> [I][BluetoothA2DPSink.cpp:654] handle_connection_state(): A2DP connection state: Disconnected, [14:98:77:6d:8c:fc]
22:07:01.319 -> [I][BluetoothA2DPSink.cpp:657] handle_connection_state(): ESP_A2D_CONNECTION_STATE_DISCONNECTED
22:07:01.319 -> [I][BluetoothA2DPSink.cpp:668] handle_connection_state(): i2s_stop
22:07:01.319 -> [I][BluetoothA2DPSink.cpp:676] handle_connection_state(): Connection try number: 0
22:07:01.319 -> [D][BluetoothA2DPCommon.cpp:177] connect_to_last_device(): connect_to_last_device
22:07:01.343 -> [D][BluetoothA2DPSink.cpp:365] app_tDs]_Bandtoo()A appitnsk_hpn:ler,] cgalx_a 0_0
d[I]llbuct()t Ac2DPSank._p2:d6c]llpb_ckask_h[nBlur(o:oahA_tasknkandpe904APa_p_a_dORKllISPkTCH apg_a1
22:07:01.343 -> [ca[Blbacoo
22:07:01.343 -> hADD][Blu.cpp:3h32Dapp_w.rkp:i0p]tappd(): applwack(): aapchedd
22:07:01.376 -> cDa[BlaekooShA_A2DinC.NNE:TIO9]SccTl__av
22:07:01.376 -> d[D]vBluetoo): 2Dallnkvchpl_avrc_eapp
22:07:01.376 -> [Dr[B_uitoottch(P:inp._wo:k6di aathh eavnc_0vt,(p: av_hdlnavr6
evt ][Bl e
22:07:01.376 -> o[thABlPetonthcpp:S4nk.app_7en] ms_hdl_avrcsent()s A
22:07:01.376 -> RC conn_state evt: state 0, [14:98:77:6d:8c:fc]
22:07:01.376 -> [D][BluetoothA2DPSink.cpp:365] app_task_handler(): app_task_handler, sig 0x1, 0x0
22:07:01.376 -> [I][BluetoothA2DPSink.cpp:368] app_task_handler(): app_task_handler, APP_SIG_WORK_DISPATCH si⸮: 1
22:07:01.409 -> [D][BluetoothA2DPSink.cpp:333] app_work_dispatched(): app_work_dispatched
22:07:01.409 -> [D][BluetoothA2DPSink.cpp:1156] ccall_av_hdl_a2d_evt(): ccall_av_hdl_a2d_evt
22:07:01.409 -> [D][BluetoothA2DPSink.cpp:536] av_hdl_a2d_evt(): av_hdl_a2d_evt evt 0
22:07:01.409 -> [D][BluetoothA2DPSink.cpp:540] av_hdl_a2d_evt(): av_hdl_a2d_evt ESP_A2D_CONNECTION_STATE_EVT
22:07:01.409 -> [D][BluetoothA2DPSink.cpp:641] handle_connection_state(): handle_connection_state evt 0
22:07:01.442 -> [I][BluetoothA2DPSink.cpp:646] handle_connection_state(): partner address: 14:98:77:6d:8c:fc
22:07:01.442 -> [I][BluetoothA2DPSink.cpp:654] handle_connection_state(): A2DP connection state: Connecting, [14:98:77:6d:8c:fc]
22:07:01.442 -> [I][BluetoothA2DPSink.cpp:726] handle_connection_state(): ESP_A2D_CONNECTION_STATE_CONNECTING
22:07:06.441 -> [D][BluetoothA2DPSink.cpp:1124] ccall_app_a2d_callback(): ccall_app_a2d_callback
22:07:06.474 -> [D][BluetoothA2DPSink.cpp:904] app_a2d_callback(): app_a2d_callback
22:07:06.474 -> [D][BluetoothA2DPSink.cpp:908] app_a2d_callback(): app_a2d_callback ESP_A2D_CONNECTION_STATE_EVT
22:07:06.474 -> [D][BluetoothA2DPSink.cpp:310] app_work_dispatch(): app_work_dispatch event 0x0, param len 16
22:07:06.474 -> [D][BluetoothA2DPSink.cpp:342] app_send_msg(): app_send_msg
22:07:06.507 -> [D][BluetoothA2DPSink.cpp:365] app_task_handler(): app_task_handler, sig 0x1, 0x0
22:07:06.507 -> [I][BluetoothA2DPSink.cpp:368] app_task_handler(): app_task_handler, APP_SIG_WORK_DISPATCH sig: 1
22:07:06.507 -> [D][BluetoothA2DPSink.cpp:333] app_work_dispatched(): app_work_dispatched
22:07:06.507 -> [D][BluetoothA2DPSink.cpp:1156] ccall_av_hdl_a2d_evt(): ccall_av_hdl_a2d_evt
22:07:06.507 -> [D][BluetoothA2DPSink.cpp:536] av_hdl_a2d_evt(): av_hdl_a2d_evt evt 0
22:07:06.540 -> [D][BluetoothA2DPSink.cpp:540] av_hdl_a2d_evt(): av_hdl_a2d_evt ESP_A2D_CONNECTION_STATE_EVT
22:07:06.540 -> [D][BluetoothA2DPSink.cpp:641] handle_connection_state(): handle_connection_state evt 0
22:07:06.540 -> [I][BluetoothA2DPSink.cpp:646] handle_connection_state(): partner address: 14:98:77:6d:8c:fc
22:07:06.540 -> [I][BluetoothA2DPSink.cpp:654] handle_connection_state(): A2DP connection state: Disconnected, [14:98:77:6d:8c:fc]
22:07:06.573 -> [I][BluetoothA2DPSink.cpp:657] handle_connection_state(): ESP_A2D_CONNECTION_STATE_DISCONNECTED
22:07:06.573 -> [I][BluetoothA2DPSink.cpp:668] handle_connection_state(): i2s_stop
22:07:06.573 -> [I][BluetoothA2DPCommon.cpp:255] set_scan_mode_connectable(): set_scan_mode_connectable true
Ah this looks better when doing a restart:
a2dp_sink.set_auto_reconnect(true, true, 20);
Not sure what the number of tries means in terms of tries vs. seconds....
So I assume the number of tries has to be very high so it reconnects when the Mac is shutdown over night...
| gharchive/issue | 2022-01-22T16:10:10 | 2025-04-01T06:45:28.859249 | {
"authors": [
"pschatzmann",
"richardklingler"
],
"repo": "pschatzmann/ESP32-A2DP",
"url": "https://github.com/pschatzmann/ESP32-A2DP/issues/160",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
415578494 | unable to open some psd files
Hello!
I'm using the latest version of psd_tools to batch-process some psd files.
The process breaks for some specific psd files, apparently random:
File "/Users/davide/Desktop/cruncher-internal/src/cruncher.py", line 48, in exportExerciseJson
    psd = PSDImage.open(filename)
File "/usr/local/lib/python3.7/site-packages/psd_tools/api/psd_image.py", line 101, in open
    self = cls(PSD.read(f))
File "/usr/local/lib/python3.7/site-packages/psd_tools/psd/__init__.py", line 72, in read
    LayerAndMaskInformation.read(fp, encoding, header.version),
File "/usr/local/lib/python3.7/site-packages/psd_tools/psd/layer_and_mask.py", line 57, in read
    return cls._read_body(f, encoding, version)
File "/usr/local/lib/python3.7/site-packages/psd_tools/psd/layer_and_mask.py", line 64, in _read_body
    global_layer_mask_info = GlobalLayerMaskInfo.read(fp)
File "/usr/local/lib/python3.7/site-packages/psd_tools/psd/layer_and_mask.py", line 914, in read
    data = read_length_block(fp)
File "/usr/local/lib/python3.7/site-packages/psd_tools/utils.py", line 77, in read_length_block
    length = read_fmt(fmt, fp)[0]
File "/usr/local/lib/python3.7/site-packages/psd_tools/utils.py", line 36, in read_fmt
    len(data), fmt_size
AssertionError: read=3, expected=4
This is the error log
Maybe I'm missing something?
@mylastgg Thanks for reporting a bug. Can you share any sample file?
Sorry I am not sure I can distribute the files for copyright reasons. I'll ask my colleagues.
In the meantime, I've noticed that this only happens for some files after I've resized them using imagemagick like this:
'convert ' + filename + ' -matte +distort SRT "0,0 .25 0 0,0" ' + filename
so maybe I am not using imagemagick correctly to resize the psd.
Side question: is psd_tools able to resize a psd file?
@mylastgg psd-tools mainly tests files created by Photoshop. ImageMagick might be generating some incompatible binary but I'm not completely sure. Currently resizing is not supported.
@kyamagu thanks for the update! I think the original files had grouped layers and image magick was corrupting them in the process. I think we can close this issue
| gharchive/issue | 2019-02-28T11:32:20 | 2025-04-01T06:45:28.872391 | {
"authors": [
"kyamagu",
"mylastgg"
],
"repo": "psd-tools/psd-tools",
"url": "https://github.com/psd-tools/psd-tools/issues/97",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1251809691 | pattern scan with regex
background
toy-arms' pattern scanning has been significantly slow, so I replaced it with an approach that uses regex.
progress
external replacement is done so far.
internal is waiting to be done
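For background on the technique itself (independent of the Rust internals in this PR), a byte signature with wildcards can be compiled into a bytes regex so the regex engine does the scanning — a small Python sketch with hypothetical helper names:

```python
import re

def signature_to_regex(signature):
    # Convert an IDA-style byte signature such as "48 8B ?? 05" into a
    # compiled bytes regex: "??" wildcards become ".", and concrete
    # bytes are escaped so regex metacharacters stay literal.
    parts = []
    for tok in signature.split():
        if tok == "??":
            parts.append(b".")
        else:
            parts.append(re.escape(bytes([int(tok, 16)])))
    # re.DOTALL makes "." match any byte value, including 0x0A.
    return re.compile(b"".join(parts), re.DOTALL)

def pattern_scan(memory, signature):
    # Let the regex engine do the scan; return the offset, or -1 if absent.
    m = signature_to_regex(signature).search(memory)
    return m.start() if m else -1
```

The regex engine's optimized search loop is typically much faster than a naive byte-by-byte comparison, which is the same idea the PR applies in Rust.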
Did you test if that fixes the problem with get_module_handle?
https://github.com/pseuxidal/toy-arms/blob/484aef39541fb2bae0ea750cd4ee5e48e601024f/src/internal/utils.rs#L65-L68
@codecheck01 I'm sorry for replying late.
I've been concentrating other things last few weeks.
Thanks for the information, I'll be working on it.
@pseuxide I should tell you too, that I am not using winapi-rs, but windows-rs instead.
But still your get_module_handle function is 100% wrong. Otherwise the current implementation of pattern scanning works great, still waiting to see the better one with regex (I'm using internal).
I of course used to try windows-rs out before. If I'm not mistaken, it has a drawback around types: you can't easily type-pun its exported types, like HMODULE, into something like usize. (I don't remember clearly though)
But, yeah you are right you should use windows-rs cuz it's official. I'll shift to it when the major version is released.
Let me know if u get to know why my get_module_handle is malfunctioning
I of course used to try windows-rs out before. If I'm not mistaken, it has a drawback around types: you can't easily type-pun its exported types, like HMODULE, into something like usize. (I don't remember clearly though) But, yeah you are right you should use windows-rs cuz it's official. I'll shift to it when the major version is released.
Let me know if u get to know why my get_module_handle is malfunctioning
@pseuxide
Here is my get_module_handle (it does not crash/fail):
pub fn get_module_handle(module_name: &str) -> Result<HINSTANCE, Error> {
    let hinstance = unsafe { GetModuleHandleA(PCSTR(module_name.as_ptr())) }?;
    if hinstance.is_invalid() {
        Err(unsafe { GetLastError().into() })
    } else {
        Ok(hinstance)
    }
}
ughh, I have been full-sent on my actual job, but I finally somewhat completed the internal part. I'll make sure the examples work fine next.
I found that I've created so many bugs in this crate lol. I need to fix them too. Forgive me, Rust is overwhelming to me.
I'll make crates for external and internal respectively, cuz the current file structure threw me off.
@pseuxide, Nice, I just got the notification. Are there any bugs, is it safe to use in production?
Alright, give me a day and a half, u see another pending PR which I need to complete. sorry for making u wait...i was too lazy to write Rust
I've merged the PR, developed and tested at least the x86 arch, and confirm it's OK.
@codecheck01
I've merged the PR, developed and tested at least the x86 arch, and confirm it's OK.
But as others said, this lib's read is not that rich. I implemented memory-protection circumvention this time, but if you want to read more than a single memory page, you should perform VirtualProtect or ZwProtectVirtualMemory, or whatever you choose, to change its access rights.
| gharchive/pull-request | 2022-05-29T07:15:13 | 2025-04-01T06:45:28.884028 | {
"authors": [
"codecheck01",
"pseuxide"
],
"repo": "pseuxide/toy-arms",
"url": "https://github.com/pseuxide/toy-arms/pull/9",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
524852026 | Allow custom mappings
This makes it possible to disable the default mappings, for example:
let g:smoothie_use_default_mappings = v:false
nmap <C-j> <Plug>(SmoothieDownwards)
nmap <C-k> <Plug>(SmoothieUpwards)
Also, the commit adds so that existing mappings will not be overwritten by the plugin.
Was about to send this PR myself, glad I checked, thanks @segeljakt!
I've adjusted it slightly to use g:smoothie_no_default_mappings flag instead, which seems a bit more idiomatic to me. Thanks a lot, @segeljakt!
| gharchive/pull-request | 2019-11-19T08:32:08 | 2025-04-01T06:45:29.014741 | {
"authors": [
"expelledboy",
"psliwka",
"segeljakt"
],
"repo": "psliwka/vim-smoothie",
"url": "https://github.com/psliwka/vim-smoothie/pull/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
723202133 | Update k8s process so we don't have to invoke /vault/secrets/config
When logging into a k8s Rails instance via kubectl, we need to run source /vault/secrets/config in order to get console access. This should be done automatically without having to run it.
WDLL
I can run:
$ kubectl exec -it [pod] /bin/bash
Then from inside the pod:
$ bundle exec rails c
And it "just works" ;)
@whereismyjetpack I wasn't sure where this was supposed to go, so I'm just leaving it here so I don't forget about it.
there is a wrapper script that may help. try this
k exec -it scholarsphere-qa-646bb8459d-zrnm2 /app/bin/vaultshell
Works for me. I'll leave this open until one of us can update the wiki.
Did you want this dropped in the gh wiki?
yes, but I'm now noticing that we didn't have that information in there to begin with. If you think it's a good place for it, then the wiki would be the easiest. But, since it's not really a Scholarsphere-specific thing, maybe it ought to go someplace else?
we had this stuff in our dlt sites page, and i fear that's where things go to get lost. I can put ss specific stuff in the gh wiki, since that's probably where a developer is going to go first
https://github.com/psu-stewardship/scholarsphere-4/wiki/kubernetes we can add to this as we see fit!
| gharchive/issue | 2020-10-16T13:09:17 | 2025-04-01T06:45:29.034208 | {
"authors": [
"awead",
"whereismyjetpack"
],
"repo": "psu-stewardship/scholarsphere-4",
"url": "https://github.com/psu-stewardship/scholarsphere-4/issues/590",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1725902107 | Draft for user-facing docs
This PR introduces some user guide content, divided in two sections, DataLad- and Browser-based access.
Feel free to add to this PR, or on top of this PR.
I guess that's it from me for today - this largely has a feeling of "this article is a stub" from Wikipedia, but I think there's already some meaningful information and a reasonable structure to build upon.
This is only the "User guide" section, "ICF Personnel guide" and "Developer guide" sections are still empty.
I like this current content, well done.
I was thinking about adding more content about the catalog, but I'm not sure that much more than the current
(optional) a catalog directory containing a catalog of the study visits (which can be used to view e.g. available modalities and DICOM series for each visit)
is actually needed. This gives a good summary and once people click on the catalog the rest is pretty self-explanatory.
Have you thought about what the content of the developer and ICF personnel sections would be? I was thinking the former could contain explanations of how to generate metadata from tarballs and how to reproducibly build the catalog and datalad datasets by using the available scripts.
Have you thought about what the content of the developer and ICF personnel sections would be? I was thinking the former could contain:
an overview of the complete workflow
an explanations of how to generate metadata from tarballs and how to reproducibly build the catalog and datalad datasets by using the available scripts.
perhaps also a rundown of the CI setup
In my mind that would totally be the scope of developer docs. How deep we want to go and whether this is the place is really up to us.
And the ICF personnel - probably just a rundown of the scripts, what they do, what are the inputs, and a hint that they are meant to be used in automation.
Probably in separate PRs though?
Treating Stephan's comment as a positive review, I think we can merge this for starters, so that the readthedocs page is less empty, and improve later.
| gharchive/pull-request | 2023-05-25T14:06:15 | 2025-04-01T06:45:29.039376 | {
"authors": [
"jsheunis",
"mslw"
],
"repo": "psychoinformatics-de/inm-icf-utilities",
"url": "https://github.com/psychoinformatics-de/inm-icf-utilities/pull/14",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1683283117 | Clean logging statements
This should also be considered when addressing #255, but is an independent topic.
Stumbled upon code like this (example from get):
if error_path_absent:
    log.error("The following paths do not exist:")
    log.error("\n".join(error_path_absent))
    log.error("\n Exiting.")
    sys.exit(1)
While this looks fine on terminal when everything is (pretty much) default, log.error is not an elaborate print statement. Instead, it generates log "records" (objects) that may be acted upon by potentially a whole bunch of different filters, handlers and formatters. Code like above would produce three distinct records instead of one record with a multi-line message. Thus, it's kinda sabotaging potential future filters, handlers, formatters.
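A single record carrying the whole message keeps potential filters, handlers, and formatters operating on one logical event — a minimal sketch of the alternative:

```python
import logging

log = logging.getLogger("onyo")

def report_missing(error_path_absent):
    # One call -> one LogRecord with a multi-line message, instead of
    # three separate records for what is logically a single error.
    log.error("The following paths do not exist:\n%s\nExiting.",
              "\n".join(error_path_absent))
```

Downstream code that filters, counts, or reroutes records then sees exactly one error event per failure.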
Leaving a record of a somewhat related issue:
Logging of exceptions is currently happening in the form of
def some_command():
try:
subroutine()
except SomeError:
do_stuff()
sys.exit()
finalize_stuff()
def do_stuff():
...
if something_is_wrong:
log.error("tell the user that the command stopped and didn't proceed to finalize stuff")
raise SomeError
This is fundamentally flawed. do_stuff must not incorporate knowledge (rather: assumptions) about what the caller is or isn't doing because of the exception. The entire point of raising an exception is to leave it to the caller what to do with it. This kind of messaging needs to happen where the exception is treated, not where it is raised.
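A sketch of the pattern argued for here — the raise site stays silent about the caller's behaviour, and the caller does the messaging when it decides what to do (function names mirror the pseudocode above; they are not Onyo's actual API):

```python
import logging

log = logging.getLogger("onyo")

class SomeError(Exception):
    """Raised by do_stuff(); carries no assumptions about the caller."""

def do_stuff(something_is_wrong: bool) -> None:
    if something_is_wrong:
        # Only raise. Whether the command exits, retries, or ignores
        # this is entirely the caller's decision.
        raise SomeError("something is wrong")

def some_command(something_is_wrong: bool = False) -> str:
    try:
        do_stuff(something_is_wrong)
    except SomeError as e:
        # Messaging happens where the exception is treated.
        log.error("command stopped, not finalizing stuff: %s", e)
        return "aborted"
    return "finalized"
```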
| gharchive/issue | 2023-04-25T14:26:17 | 2025-04-01T06:45:29.052824 | {
"authors": [
"bpoldrack"
],
"repo": "psyinfra/onyo",
"url": "https://github.com/psyinfra/onyo/issues/338",
"license": "ISC",
"license_type": "permissive",
"license_source": "github-api"
} |
810112602 | combine --port and --host-address
The ip:port syntax is common, and allows this to be exposed as a single option.
In the Prometheus community, the flag is usually called --web.listen-address.
For example:
--web.listen-address=":9290"
The --port and --host-address options should be removed, since --web.listen-address covers both use cases.
--web.listen-address should also be added to exporter.py (or argparsing removed from that file entirely).
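A sketch of how the combined flag could be parsed with argparse (the flag name and default come from this issue; the split logic is an assumption, not the exporter's current code, and does not handle bare IPv6 addresses):

```python
import argparse

def split_listen_address(value: str, default_host: str = "0.0.0.0") -> tuple[str, int]:
    # ":9290" -> ("0.0.0.0", 9290); "127.0.0.1:9290" -> ("127.0.0.1", 9290)
    host, _, port = value.rpartition(":")
    return (host or default_host, int(port))

parser = argparse.ArgumentParser()
parser.add_argument(
    "--web.listen-address",
    dest="listen_address",
    default=":9290",
    help="[host]:port to bind the metrics endpoint to",
)
```

With this in place, `--port` and `--host-address` can simply be dropped, since both pieces of information travel in one value.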
| gharchive/issue | 2021-02-17T11:47:22 | 2025-04-01T06:45:29.055058 | {
"authors": [
"aqw"
],
"repo": "psyinfra/ups-prometheus-exporter",
"url": "https://github.com/psyinfra/ups-prometheus-exporter/issues/4",
"license": "ISC",
"license_type": "permissive",
"license_source": "github-api"
} |
fix: attach the correct torrent file name to download tasks sent to Synology DownloadStation
Start
This is a fairly old issue that has remained unresolved. Related issue: https://github.com/pt-plugins/PT-Plugin-Plus/issues/618
Looking at the existing flow, the method used to fetch the torrent file when creating a download task does not return the parsed torrent object itself (the structured data obtained after parsing; hereafter "torrent"), only the content (the torrent's file blob; hereafter "content").
So I started from the backend. The method that fetches the torrent is in src/background/controller.ts#881. It does have a parameter for parsing the torrent, but if that parameter is set to true it does not return the torrent's content (you can't have it both ways, apparently).
I therefore decided to extend this method. Since it is fairly low-level, the goal was to keep the impact of the change as small as possible. While analyzing the method I found that when parseTorrent is false, what gets resolved is the raw content with no object wrapping at all; appending the torrent to that would change the return type, which would have too large an impact and uncontrollable risk.
When parseTorrent is true, however, the resolved data is wrapped in an object, so adding a field is easy — so that's the path I took.
Judging from the call chain, this parameter is only used by the getTorrent method in src/options/views/search/KeepUpload.vue#465. From the context this appears to be the cross-seeding search feature? I've never used cross-seeding, so I can't test the change there, but since it only adds a new key to the returned object, it should not be a breaking change.
End
Newly created download tasks now carry the real filename, and DownloadStation displays it correctly.
BTW
The project feels a bit dated; many of the implementations and methods are, well... it has genuinely become somewhat hard to maintain.
It's hard to grasp the purpose of every module and abstraction from a global perspective, so I'll leave it to you to assess the risk of this PR. I'm not sure whether doing all this just for a filename is really elegant enough, or even truly necessary.
Mainly because I don't have a Synology, and I'm not that skilled.
Mainly because I don't have a Synology, and I'm not that skilled.
If Synology-related issues come up later, feel free to @ me. I can't promise to fix them right away, or even that I'll solve 100% of them, but most likely I'll let them pile up and then fix them all in one go, like unclogging a toilet.
| gharchive/pull-request | 2023-02-14T08:29:24 | 2025-04-01T06:45:29.137454 | {
"authors": [
"sdjnmxd",
"ted423"
],
"repo": "pt-plugins/PT-Plugin-Plus",
"url": "https://github.com/pt-plugins/PT-Plugin-Plus/pull/1341",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2062047273 | 🛑 Roundcube is down
In 1488140, Roundcube (https://webmail.thexyzserver.com) was down:
HTTP code: 502
Response time: 500 ms
Resolved: Roundcube is back up in 8da98dd after 11 minutes.
| gharchive/issue | 2024-01-02T07:40:20 | 2025-04-01T06:45:29.153638 | {
"authors": [
"ptoone"
],
"repo": "ptoone/Thexyz-Network-Status",
"url": "https://github.com/ptoone/Thexyz-Network-Status/issues/125",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1028408560 | Experimental: ANT+ / BLE support for speed
Tries to address #75 by:
Changes:
Adds wheel simulation to generate speed data
Assumes a tire with a 2.096-meter circumference (Garmin default), which corresponds to 700x23C
Adds speed support to bot bike for testing
Bikes that already support speed (Flywheel, IC4) pass through speed
Added speed calculation for Peloton based on https://ihaque.org/posts/2020/12/25/pelomon-part-ib-computing-speed/
Updated existing BLE GATT CSC + BLE GATT CPS server for speed (via wheel rotation + timestamp)
Created ANT+ SPD server for speed (via wheel rotation + timestamp)
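The wheel simulation in the list above boils down to converting a speed into wheel revolutions over time. A sketch of the arithmetic (in Python for illustration — the PR itself is JavaScript), using the 2.096 m Garmin default circumference:

```python
WHEEL_CIRCUMFERENCE_M = 2.096  # Garmin default, roughly a 700x23C tire

def wheel_rev_period_s(speed_kmh: float) -> float:
    """Seconds per wheel revolution at the given speed."""
    speed_ms = speed_kmh / 3.6  # km/h -> m/s
    return WHEEL_CIRCUMFERENCE_M / speed_ms

def revolutions(speed_kmh: float, duration_s: float) -> float:
    """Wheel revolutions accumulated over duration_s at constant speed."""
    return duration_s / wheel_rev_period_s(speed_kmh)
```

As a sanity check against the debug log further down: at 17 km/h this gives one wheel rotation roughly every 0.44 s, which matches the spacing of the "wheel rotation" events there.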
Conscious decisions:
ANT+ supports a SPD+CAD profile, which is very similar to the SPD profile. But as the PWR profile already provides cadence, the SPD profile seemed sufficient.
Issues:
Unable to get ANT+ PWR and SPD working at the same time
It appears that the required different broadcast intervals for PWR and SPD lead to a collision of messages at some point
Unable to test speed on BLE GATT CSC due to lack of supported devices
Unable to test the Peloton and Flywheel bikes
To try this out:
Start with current Gymnasticon Pi image
Place the device into R/W mode:
overctl -rw
Set up the build environment:
sudo apt-get update --allow-releaseinfo-change
sudo apt-get install git
cd
git clone https://github.com/ptx2/gymnasticon.git
cd gymnasticon
sudo service gymnasticon stop
sudo setcap cap_net_raw+eip $(eval readlink -f $(which node))
Checkout experimental branch and build it
git fetch origin pull/89/head:antble-speed
git checkout antble-speed
npm install
npm run build
npm link
gymnasticon
Provide output if things are not working
@ptx2 Would be great if you could have a look at the BLE GATT CSC part.
@chriselsen - Hi Chris, I have the upgraded Keiser computer which shows speed - if it's any use I'm happy to test speed at various watts this evening to build a power/speed graph - I'm assuming it likely works similarly to the Peloton calcs, as the bike has no idea of the user's weight etc??…
@nealjane The speed formula for Peloton is based on this article. If you can come up with a similar formula for the Keiser bike, I can definitely implement that.
You probably want to get started by graphing displayed speed at various power levels to see if you spot a pattern/formula.
@chriselsen, OK, got it working with fresh eyes this morning!
The new Keiser computer (with the Bluetooth converter now built in) clearly has more power, and I think it's interfering with Gymnasticon (I had to move Gymnasticon closer to the bike since I fitted it, and I still get intermittent drops in power over ANT), so I may just replace it with my older computer this morning - just for avoidance of doubt.
CaS appears under ANT sensors on the watch; adding it shows power, speed and cadence in indoor bike mode on my Fenix. It tends to lose connection (but I'm putting this down to the new Keiser computer and will see what happens when I swap computers this morning).
Connecting to the Bluetooth on Gymnasticon - Zwift/RGT - power drops to zero intermittently on my iPhone (both apps have the power buffer set to zero).
These issues could be connected to the new Bluetooth Keiser computer (although I've never had the Bluetooth power dropping to zero previously).
NODE_DEBUG=gymnasticon:bike* gymnasticon --bike keiser
[2021-10-21T10:53:52.176Z] connecting to bike...
Keiser M3 bike version: 6.38 (Stats timeout: 2 sec.)
[2021-10-21T10:54:14.526Z] bike connected d8:b0:b5:6a:ce:3b
[2021-10-21T10:54:14.614Z] ANT+ stick opened
[2021-10-21T10:54:14.805Z] received stats from bike [power=20W cadence=65rpm speed=0km/h]
[2021-10-21T10:54:14.829Z] pedal stroke [timestamp=1634813654810 revolutions=1 cadence=65rpm power=20W]
[2021-10-21T10:54:15.129Z] received stats from bike [power=20W cadence=65rpm speed=0km/h]
[2021-10-21T10:54:15.450Z] received stats from bike [power=20W cadence=66rpm speed=0km/h]
[2021-10-21T10:54:15.720Z] pedal stroke [timestamp=1634813655719.0908 revolutions=2 cadence=66rpm power=20W]
[2021-10-21T10:54:15.772Z] received stats from bike [power=20W cadence=66rpm speed=0km/h]
[2021-10-21T10:54:16.099Z] received stats from bike [power=20W cadence=66rpm speed=0km/h]
[2021-10-21T10:54:16.427Z] received stats from bike [power=20W cadence=67rpm speed=0km/h]
[2021-10-21T10:54:16.615Z] pedal stroke [timestamp=1634813656614.6133 revolutions=3 cadence=67rpm power=20W]
[2021-10-21T10:54:16.753Z] received stats from bike [power=20W cadence=67rpm speed=0km/h]
[2021-10-21T10:54:17.079Z] received stats from bike [power=20W cadence=68rpm speed=0km/h]
[2021-10-21T10:54:17.497Z] pedal stroke [timestamp=1634813657496.9663 revolutions=4 cadence=68rpm power=20W]
[2021-10-21T10:54:17.725Z] received stats from bike [power=21W cadence=68rpm speed=0km/h]
[2021-10-21T10:54:18.043Z] received stats from bike [power=21W cadence=68rpm speed=0km/h]
[2021-10-21T10:54:18.370Z] received stats from bike [power=21W cadence=68rpm speed=0km/h]
[2021-10-21T10:54:18.380Z] pedal stroke [timestamp=1634813658379.3193 revolutions=5 cadence=68rpm power=21W]
[2021-10-21T10:54:19.014Z] received stats from bike [power=21W cadence=68rpm speed=0km/h]
[2021-10-21T10:54:19.262Z] pedal stroke [timestamp=1634813659261.6724 revolutions=6 cadence=68rpm power=21W]
[2021-10-21T10:54:19.339Z] received stats from bike [power=21W cadence=68rpm speed=0km/h]
[2021-10-21T10:54:19.664Z] received stats from bike [power=21W cadence=68rpm speed=0km/h]
[2021-10-21T10:54:20.145Z] pedal stroke [timestamp=1634813660144.0254 revolutions=7 cadence=68rpm power=21W]
[2021-10-21T10:54:20.314Z] received stats from bike [power=21W cadence=69rpm speed=0km/h]
[2021-10-21T10:54:20.636Z] received stats from bike [power=22W cadence=69rpm speed=0km/h]
[2021-10-21T10:54:20.965Z] received stats from bike [power=22W cadence=69rpm speed=0km/h]
[2021-10-21T10:54:21.014Z] pedal stroke [timestamp=1634813661013.5906 revolutions=8 cadence=69rpm power=22W]
[2021-10-21T10:54:21.299Z] received stats from bike [power=22W cadence=69rpm speed=0km/h]
[2021-10-21T10:54:21.615Z] received stats from bike [power=22W cadence=69rpm speed=0km/h]
[2021-10-21T10:54:21.884Z] pedal stroke [timestamp=1634813661883.1558 revolutions=9 cadence=69rpm power=22W]
[2021-10-21T10:54:21.942Z] received stats from bike [power=22W cadence=69rpm speed=0km/h]
[2021-10-21T10:54:22.595Z] received stats from bike [power=22W cadence=69rpm speed=0km/h]
[2021-10-21T10:54:22.753Z] pedal stroke [timestamp=1634813662752.721 revolutions=10 cadence=69rpm power=22W]
[2021-10-21T10:54:23.566Z] received stats from bike [power=22W cadence=70rpm speed=0km/h]
[2021-10-21T10:54:23.612Z] pedal stroke [timestamp=1634813663609.8638 revolutions=11 cadence=70rpm power=22W]
[2021-10-21T10:54:23.893Z] received stats from bike [power=22W cadence=70rpm speed=0km/h]
[2021-10-21T10:54:24.216Z] received stats from bike [power=22W cadence=70rpm speed=0km/h]
[2021-10-21T10:54:24.468Z] pedal stroke [timestamp=1634813664467.0066 revolutions=12 cadence=70rpm power=22W]
[2021-10-21T10:54:24.859Z] received stats from bike [power=25W cadence=76rpm speed=0km/h]
[2021-10-21T10:54:25.257Z] pedal stroke [timestamp=1634813665256.4802 revolutions=13 cadence=76rpm power=25W]
[2021-10-21T10:54:25.504Z] received stats from bike [power=25W cadence=76rpm speed=0km/h]
[2021-10-21T10:54:25.826Z] received stats from bike [power=25W cadence=81rpm speed=0km/h]
[2021-10-21T10:54:26.000Z] pedal stroke [timestamp=1634813665997.221 revolutions=14 cadence=81rpm power=25W]
[2021-10-21T10:54:26.150Z] received stats from bike [power=25W cadence=81rpm speed=0km/h]
[2021-10-21T10:54:26.738Z] pedal stroke [timestamp=1634813666737.9617 revolutions=15 cadence=81rpm power=25W]
[2021-10-21T10:54:26.793Z] received stats from bike [power=25W cadence=82rpm speed=0km/h]
[2021-10-21T10:54:27.447Z] received stats from bike [power=25W cadence=84rpm speed=0km/h]
[2021-10-21T10:54:27.454Z] pedal stroke [timestamp=1634813667452.2473 revolutions=16 cadence=84rpm power=25W]
[2021-10-21T10:54:27.774Z] received stats from bike [power=53W cadence=84rpm speed=15km/h]
[2021-10-21T10:54:27.786Z] wheel rotation [timestamp=1634813667778 revolutions=1 speed=15km/h power=53W]
[2021-10-21T10:54:28.101Z] received stats from bike [power=53W cadence=84rpm speed=15km/h]
[2021-10-21T10:54:28.168Z] pedal stroke [timestamp=1634813668166.533 revolutions=17 cadence=84rpm power=53W]
[2021-10-21T10:54:28.282Z] wheel rotation [timestamp=1634813668281.04 revolutions=2 speed=15km/h power=53W]
[2021-10-21T10:54:28.426Z] received stats from bike [power=60W cadence=83rpm speed=14km/h]
[2021-10-21T10:54:28.821Z] wheel rotation [timestamp=1634813668820.0115 revolutions=3 speed=14km/h power=60W]
[2021-10-21T10:54:28.890Z] pedal stroke [timestamp=1634813668889.4246 revolutions=18 cadence=83rpm power=60W]
[2021-10-21T10:54:29.072Z] received stats from bike [power=63W cadence=83rpm speed=17km/h]
[2021-10-21T10:54:29.264Z] wheel rotation [timestamp=1634813669263.8704 revolutions=4 speed=17km/h power=63W]
[2021-10-21T10:54:29.396Z] received stats from bike [power=63W cadence=83rpm speed=17km/h]
[2021-10-21T10:54:29.613Z] pedal stroke [timestamp=1634813669612.3162 revolutions=19 cadence=83rpm power=63W]
[2021-10-21T10:54:29.708Z] wheel rotation [timestamp=1634813669707.7292 revolutions=5 speed=17km/h power=63W]
[2021-10-21T10:54:30.044Z] received stats from bike [power=64W cadence=84rpm speed=17km/h]
[2021-10-21T10:54:30.155Z] wheel rotation [timestamp=1634813670151.5881 revolutions=6 speed=17km/h power=64W]
[2021-10-21T10:54:30.327Z] pedal stroke [timestamp=1634813670326.6018 revolutions=20 cadence=84rpm power=64W]
[2021-10-21T10:54:30.361Z] received stats from bike [power=64W cadence=84rpm speed=17km/h]
[2021-10-21T10:54:30.596Z] wheel rotation [timestamp=1634813670595.447 revolutions=7 speed=17km/h power=64W]
[2021-10-21T10:54:31.011Z] received stats from bike [power=64W cadence=84rpm speed=17km/h]
[2021-10-21T10:54:31.041Z] wheel rotation [timestamp=1634813671039.306 revolutions=8 speed=17km/h power=64W]
[2021-10-21T10:54:31.047Z] pedal stroke [timestamp=1634813671040.8875 revolutions=21 cadence=84rpm power=64W]
[2021-10-21T10:54:31.335Z] received stats from bike [power=64W cadence=85rpm speed=17km/h]
[2021-10-21T10:54:31.484Z] wheel rotation [timestamp=1634813671483.1648 revolutions=9 speed=17km/h power=64W]
[2021-10-21T10:54:31.660Z] received stats from bike [power=65W cadence=85rpm speed=17km/h]
[2021-10-21T10:54:31.750Z] pedal stroke [timestamp=1634813671746.7698 revolutions=22 cadence=85rpm power=65W]
[2021-10-21T10:54:31.929Z] wheel rotation [timestamp=1634813671927.0237 revolutions=10 speed=17km/h power=65W]
[2021-10-21T10:54:31.988Z] received stats from bike [power=65W cadence=85rpm speed=17km/h]
[2021-10-21T10:54:32.311Z] received stats from bike [power=65W cadence=85rpm speed=17km/h]
[2021-10-21T10:54:32.371Z] wheel rotation [timestamp=1634813672370.8826 revolutions=11 speed=17km/h power=65W]
[2021-10-21T10:54:32.453Z] pedal stroke [timestamp=1634813672452.652 revolutions=23 cadence=85rpm power=65W]
[2021-10-21T10:54:32.815Z] wheel rotation [timestamp=1634813672814.7415 revolutions=12 speed=17km/h power=65W]
[2021-10-21T10:54:32.962Z] received stats from bike [power=71W cadence=84rpm speed=16km/h]
[2021-10-21T10:54:33.168Z] pedal stroke [timestamp=1634813673166.9377 revolutions=24 cadence=84rpm power=71W]
[2021-10-21T10:54:33.285Z] received stats from bike [power=104W cadence=84rpm speed=20km/h]
[2021-10-21T10:54:33.292Z] wheel rotation [timestamp=1634813673288 revolutions=13 speed=20km/h power=104W]
[2021-10-21T10:54:33.614Z] received stats from bike [power=113W cadence=85rpm speed=23km/h]
[2021-10-21T10:54:33.625Z] wheel rotation [timestamp=1634813673620 revolutions=14 speed=23km/h power=113W]
[2021-10-21T10:54:33.872Z] pedal stroke [timestamp=1634813673872.82 revolutions=25 cadence=85rpm power=113W]
[2021-10-21T10:54:33.935Z] received stats from bike [power=113W cadence=85rpm speed=23km/h]
[2021-10-21T10:54:33.949Z] wheel rotation [timestamp=1634813673948.0696 revolutions=15 speed=23km/h power=113W]
[2021-10-21T10:54:34.277Z] wheel rotation [timestamp=1634813674276.1392 revolutions=16 speed=23km/h power=113W]
[2021-10-21T10:54:34.580Z] pedal stroke [timestamp=1634813674578.7024 revolutions=26 cadence=85rpm power=113W]
[2021-10-21T10:54:34.591Z] received stats from bike [power=113W cadence=88rpm speed=23km/h]
[2021-10-21T10:54:34.605Z] wheel rotation [timestamp=1634813674604.2087 revolutions=17 speed=23km/h power=113W]
[2021-10-21T10:54:34.911Z] received stats from bike [power=113W cadence=88rpm speed=23km/h]
[2021-10-21T10:54:34.933Z] wheel rotation [timestamp=1634813674932.2783 revolutions=18 speed=23km/h power=113W]
[2021-10-21T10:54:35.235Z] received stats from bike [power=113W cadence=88rpm speed=23km/h]
[2021-10-21T10:54:35.261Z] pedal stroke [timestamp=1634813675260.5205 revolutions=27 cadence=88rpm power=113W]
[2021-10-21T10:54:35.266Z] wheel rotation [timestamp=1634813675260.348 revolutions=19 speed=23km/h power=113W]
[2021-10-21T10:54:35.590Z] wheel rotation [timestamp=1634813675588.4175 revolutions=20 speed=23km/h power=113W]
[2021-10-21T10:54:35.881Z] received stats from bike [power=121W cadence=89rpm speed=22km/h]
[2021-10-21T10:54:35.932Z] wheel rotation [timestamp=1634813675931.3994 revolutions=21 speed=22km/h power=121W]
[2021-10-21T10:54:35.940Z] pedal stroke [timestamp=1634813675934.6777 revolutions=28 cadence=89rpm power=121W]
[2021-10-21T10:54:36.205Z] received stats from bike [power=121W cadence=88rpm speed=22km/h]
[2021-10-21T10:54:36.275Z] wheel rotation [timestamp=1634813676274.3813 revolutions=22 speed=22km/h power=121W]
[2021-10-21T10:54:36.531Z] received stats from bike [power=121W cadence=88rpm speed=22km/h]
[2021-10-21T10:54:36.617Z] pedal stroke [timestamp=1634813676616.4958 revolutions=29 cadence=88rpm power=121W]
[2021-10-21T10:54:36.626Z] wheel rotation [timestamp=1634813676617.3633 revolutions=23 speed=22km/h power=121W]
[2021-10-21T10:54:36.962Z] wheel rotation [timestamp=1634813676960.3452 revolutions=24 speed=22km/h power=121W]
[2021-10-21T10:54:37.178Z] received stats from bike [power=121W cadence=89rpm speed=22km/h]
[2021-10-21T10:54:37.291Z] pedal stroke [timestamp=1634813677290.653 revolutions=30 cadence=89rpm power=121W]
[2021-10-21T10:54:37.305Z] wheel rotation [timestamp=1634813677303.3271 revolutions=25 speed=22km/h power=121W]
[2021-10-21T10:54:37.498Z] received stats from bike [power=121W cadence=88rpm speed=22km/h]
[2021-10-21T10:54:37.647Z] wheel rotation [timestamp=1634813677646.309 revolutions=26 speed=22km/h power=121W]
[2021-10-21T10:54:37.824Z] received stats from bike [power=121W cadence=88rpm speed=22km/h]
[2021-10-21T10:54:37.973Z] pedal stroke [timestamp=1634813677972.4712 revolutions=31 cadence=88rpm power=121W]
[2021-10-21T10:54:37.996Z] wheel rotation [timestamp=1634813677989.291 revolutions=27 speed=22km/h power=121W]
[2021-10-21T10:54:38.150Z] received stats from bike [power=121W cadence=89rpm speed=22km/h]
[2021-10-21T10:54:38.333Z] wheel rotation [timestamp=1634813678332.273 revolutions=28 speed=22km/h power=121W]
[2021-10-21T10:54:38.470Z] received stats from bike [power=123W cadence=89rpm speed=22km/h]
[2021-10-21T10:54:38.648Z] pedal stroke [timestamp=1634813678646.6284 revolutions=32 cadence=89rpm power=123W]
[2021-10-21T10:54:38.676Z] wheel rotation [timestamp=1634813678675.255 revolutions=29 speed=22km/h power=123W]
[2021-10-21T10:54:38.791Z] received stats from bike [power=167W cadence=89rpm speed=26km/h]
[2021-10-21T10:54:38.966Z] wheel rotation [timestamp=1634813678965.4702 revolutions=30 speed=26km/h power=167W]
[2021-10-21T10:54:39.256Z] wheel rotation [timestamp=1634813679255.6855 revolutions=31 speed=26km/h power=167W]
[2021-10-21T10:54:39.321Z] pedal stroke [timestamp=1634813679320.7856 revolutions=33 cadence=89rpm power=167W]
[2021-10-21T10:54:39.439Z] received stats from bike [power=206W cadence=90rpm speed=31km/h]
[2021-10-21T10:54:39.500Z] wheel rotation [timestamp=1634813679499.092 revolutions=32 speed=31km/h power=206W]
[2021-10-21T10:54:39.743Z] wheel rotation [timestamp=1634813679742.4985 revolutions=33 speed=31km/h power=206W]
[2021-10-21T10:54:39.765Z] received stats from bike [power=203W cadence=89rpm speed=31km/h]
[2021-10-21T10:54:39.987Z] wheel rotation [timestamp=1634813679985.905 revolutions=34 speed=31km/h power=203W]
[2021-10-21T10:54:39.995Z] pedal stroke [timestamp=1634813679994.9429 revolutions=34 cadence=89rpm power=203W]
[2021-10-21T10:54:40.085Z] received stats from bike [power=203W cadence=89rpm speed=31km/h]
[2021-10-21T10:54:40.230Z] wheel rotation [timestamp=1634813680229.3115 revolutions=35 speed=31km/h power=203W]
[2021-10-21T10:54:40.475Z] wheel rotation [timestamp=1634813680472.718 revolutions=36 speed=31km/h power=203W]
[2021-10-21T10:54:40.699Z] pedal stroke [timestamp=1634813680669.1 revolutions=35 cadence=89rpm power=203W]
[2021-10-21T10:54:40.721Z] wheel rotation [timestamp=1634813680716.1245 revolutions=37 speed=31km/h power=203W]
[2021-10-21T10:54:40.754Z] received stats from bike [power=213W cadence=93rpm speed=30km/h]
[2021-10-21T10:54:40.968Z] wheel rotation [timestamp=1634813680967.6445 revolutions=38 speed=30km/h power=213W]
[2021-10-21T10:54:41.049Z] received stats from bike [power=210W cadence=91rpm speed=30km/h]
[2021-10-21T10:54:41.220Z] wheel rotation [timestamp=1634813681219.1646 revolutions=39 speed=30km/h power=210W]
[2021-10-21T10:54:41.329Z] pedal stroke [timestamp=1634813681328.4407 revolutions=36 cadence=91rpm power=210W]
[2021-10-21T10:54:41.371Z] received stats from bike [power=210W cadence=91rpm speed=30km/h]
[2021-10-21T10:54:41.470Z] wheel rotation [timestamp=1634813681470.6846 revolutions=40 speed=30km/h power=210W]
[2021-10-21T10:54:41.700Z] received stats from bike [power=206W cadence=90rpm speed=31km/h]
[2021-10-21T10:54:41.716Z] wheel rotation [timestamp=1634813681714.091 revolutions=41 speed=31km/h power=206W]
[2021-10-21T10:54:41.958Z] wheel rotation [timestamp=1634813681957.4976 revolutions=42 speed=31km/h power=206W]
[2021-10-21T10:54:41.998Z] pedal stroke [timestamp=1634813681995.1074 revolutions=37 cadence=90rpm power=206W]
[2021-10-21T10:54:42.029Z] received stats from bike [power=206W cadence=90rpm speed=31km/h]
[2021-10-21T10:54:42.201Z] wheel rotation [timestamp=1634813682200.904 revolutions=43 speed=31km/h power=206W]
[2021-10-21T10:54:42.350Z] received stats from bike [power=206W cadence=90rpm speed=31km/h]
[2021-10-21T10:54:42.445Z] wheel rotation [timestamp=1634813682444.3105 revolutions=44 speed=31km/h power=206W]
[2021-10-21T10:54:42.662Z] pedal stroke [timestamp=1634813682661.7742 revolutions=38 cadence=90rpm power=206W]
[2021-10-21T10:54:42.688Z] wheel rotation [timestamp=1634813682687.717 revolutions=45 speed=31km/h power=206W]
[2021-10-21T10:54:42.932Z] wheel rotation [timestamp=1634813682931.1235 revolutions=46 speed=31km/h power=206W]
[2021-10-21T10:54:42.996Z] received stats from bike [power=206W cadence=92rpm speed=31km/h]
[2021-10-21T10:54:43.175Z] wheel rotation [timestamp=1634813683174.53 revolutions=47 speed=31km/h power=206W]
[2021-10-21T10:54:43.314Z] pedal stroke [timestamp=1634813683313.948 revolutions=39 cadence=92rpm power=206W]
[2021-10-21T10:54:43.418Z] wheel rotation [timestamp=1634813683417.9365 revolutions=48 speed=31km/h power=206W]
[2021-10-21T10:54:43.641Z] received stats from bike [power=215W cadence=93rpm speed=30km/h]
[2021-10-21T10:54:43.670Z] wheel rotation [timestamp=1634813683669.4565 revolutions=49 speed=30km/h power=215W]
[2021-10-21T10:54:43.921Z] wheel rotation [timestamp=1634813683920.9766 revolutions=50 speed=30km/h power=215W]
[2021-10-21T10:54:43.959Z] received stats from bike [power=215W cadence=93rpm speed=30km/h]
[2021-10-21T10:54:43.966Z] pedal stroke [timestamp=1634813683963 revolutions=40 cadence=93rpm power=215W]
[2021-10-21T10:54:44.173Z] wheel rotation [timestamp=1634813684172.4966 revolutions=51 speed=30km/h power=215W]
[2021-10-21T10:54:44.288Z] received stats from bike [power=247W cadence=92rpm speed=32km/h]
[2021-10-21T10:54:44.409Z] wheel rotation [timestamp=1634813684408.2966 revolutions=52 speed=32km/h power=247W]
[2021-10-21T10:54:44.613Z] received stats from bike [power=285W cadence=92rpm speed=37km/h]
[2021-10-21T10:54:44.620Z] pedal stroke [timestamp=1634813684616 revolutions=41 cadence=92rpm power=285W]
[2021-10-21T10:54:44.626Z] wheel rotation [timestamp=1634813684617 revolutions=53 speed=37km/h power=285W]
[2021-10-21T10:54:44.821Z] wheel rotation [timestamp=1634813684820.935 revolutions=54 speed=37km/h power=285W]
[2021-10-21T10:54:44.941Z] received stats from bike [power=316W cadence=90rpm speed=39km/h]
[2021-10-21T10:54:45.015Z] wheel rotation [timestamp=1634813685014.4119 revolutions=55 speed=39km/h power=316W]
[2021-10-21T10:54:45.209Z] wheel rotation [timestamp=1634813685207.8887 revolutions=56 speed=39km/h power=316W]
[2021-10-21T10:54:45.283Z] pedal stroke [timestamp=1634813685282.6667 revolutions=42 cadence=90rpm power=316W]
[2021-10-21T10:54:45.403Z] wheel rotation [timestamp=1634813685401.3655 revolutions=57 speed=39km/h power=316W]
[2021-10-21T10:54:45.594Z] received stats from bike [power=321W cadence=88rpm speed=39km/h]
[2021-10-21T10:54:45.602Z] wheel rotation [timestamp=1634813685598 revolutions=58 speed=39km/h power=321W]
[2021-10-21T10:54:45.792Z] wheel rotation [timestamp=1634813685791.4768 revolutions=59 speed=39km/h power=321W]
[2021-10-21T10:54:45.921Z] received stats from bike [power=321W cadence=88rpm speed=39km/h]
[2021-10-21T10:54:45.968Z] pedal stroke [timestamp=1634813685964.4849 revolutions=43 cadence=88rpm power=321W]
[2021-10-21T10:54:45.987Z] wheel rotation [timestamp=1634813685984.9536 revolutions=60 speed=39km/h power=321W]
[2021-10-21T10:54:46.180Z] wheel rotation [timestamp=1634813686178.4304 revolutions=61 speed=39km/h power=321W]
[2021-10-21T10:54:46.239Z] received stats from bike [power=332W cadence=90rpm speed=38km/h]
[2021-10-21T10:54:46.377Z] wheel rotation [timestamp=1634813686376.9988 revolutions=62 speed=38km/h power=332W]
[2021-10-21T10:54:46.558Z] received stats from bike [power=332W cadence=90rpm speed=38km/h]
[2021-10-21T10:54:46.576Z] wheel rotation [timestamp=1634813686575.5671 revolutions=63 speed=38km/h power=332W]
[2021-10-21T10:54:46.631Z] pedal stroke [timestamp=1634813686631.1516 revolutions=44 cadence=90rpm power=332W]
[2021-10-21T10:54:46.775Z] wheel rotation [timestamp=1634813686774.1355 revolutions=64 speed=38km/h power=332W]
[2021-10-21T10:54:46.898Z] received stats from bike [power=345W cadence=93rpm speed=41km/h]
[2021-10-21T10:54:46.959Z] wheel rotation [timestamp=1634813686958.1746 revolutions=65 speed=41km/h power=345W]
[2021-10-21T10:54:47.143Z] wheel rotation [timestamp=1634813687142.2136 revolutions=66 speed=41km/h power=345W]
[2021-10-21T10:54:47.202Z] received stats from bike [power=345W cadence=93rpm speed=41km/h]
[2021-10-21T10:54:47.277Z] pedal stroke [timestamp=1634813687276.313 revolutions=45 cadence=93rpm power=345W]
[2021-10-21T10:54:47.326Z] wheel rotation [timestamp=1634813687326.2527 revolutions=67 speed=41km/h power=345W]
[2021-10-21T10:54:47.523Z] wheel rotation [timestamp=1634813687510.2917 revolutions=68 speed=41km/h power=345W]
[2021-10-21T10:54:47.707Z] wheel rotation [timestamp=1634813687694.3308 revolutions=69 speed=41km/h power=345W]
[2021-10-21T10:54:47.880Z] wheel rotation [timestamp=1634813687878.3699 revolutions=70 speed=41km/h power=345W]
[2021-10-21T10:54:47.922Z] pedal stroke [timestamp=1634813687921.4744 revolutions=46 cadence=93rpm power=345W]
[2021-10-21T10:54:48.062Z] wheel rotation [timestamp=1634813688062.409 revolutions=71 speed=41km/h power=345W]
[2021-10-21T10:54:48.169Z] received stats from bike [power=335W cadence=91rpm speed=38km/h]
[2021-10-21T10:54:48.261Z] wheel rotation [timestamp=1634813688260.9773 revolutions=72 speed=38km/h power=335W]
[2021-10-21T10:54:48.460Z] wheel rotation [timestamp=1634813688459.5457 revolutions=73 speed=38km/h power=335W]
[2021-10-21T10:54:48.492Z] received stats from bike [power=335W cadence=91rpm speed=38km/h]
[2021-10-21T10:54:48.581Z] pedal stroke [timestamp=1634813688580.815 revolutions=47 cadence=91rpm power=335W]
[2021-10-21T10:54:48.659Z] wheel rotation [timestamp=1634813688658.114 revolutions=74 speed=38km/h power=335W]
[2021-10-21T10:54:48.811Z] received stats from bike [power=499W cadence=91rpm speed=49km/h]
[2021-10-21T10:54:48.819Z] wheel rotation [timestamp=1634813688815 revolutions=75 speed=49km/h power=499W]
[2021-10-21T10:54:48.968Z] wheel rotation [timestamp=1634813688968.992 revolutions=76 speed=49km/h power=499W]
[2021-10-21T10:54:49.123Z] wheel rotation [timestamp=1634813689122.984 revolutions=77 speed=49km/h power=499W]
[2021-10-21T10:54:49.138Z] received stats from bike [power=540W cadence=91rpm speed=51km/h]
[2021-10-21T10:54:49.241Z] pedal stroke [timestamp=1634813689240.1555 revolutions=48 cadence=91rpm power=540W]
[2021-10-21T10:54:49.271Z] wheel rotation [timestamp=1634813689270.9368 revolutions=78 speed=51km/h power=540W]
[2021-10-21T10:54:49.419Z] wheel rotation [timestamp=1634813689418.8896 revolutions=79 speed=51km/h power=540W]
[2021-10-21T10:54:49.567Z] wheel rotation [timestamp=1634813689566.8425 revolutions=80 speed=51km/h power=540W]
[2021-10-21T10:54:49.715Z] wheel rotation [timestamp=1634813689714.7954 revolutions=81 speed=51km/h power=540W]
[2021-10-21T10:54:49.783Z] received stats from bike [power=462W cadence=78rpm speed=47km/h]
[2021-10-21T10:54:49.876Z] wheel rotation [timestamp=1634813689875.34 revolutions=82 speed=47km/h power=462W]
[2021-10-21T10:54:50.010Z] pedal stroke [timestamp=1634813690009.3862 revolutions=49 cadence=78rpm power=462W]
[2021-10-21T10:54:50.036Z] wheel rotation [timestamp=1634813690035.8848 revolutions=83 speed=47km/h power=462W]
[2021-10-21T10:54:50.196Z] wheel rotation [timestamp=1634813690196.4294 revolutions=84 speed=47km/h power=462W]
[2021-10-21T10:54:50.357Z] wheel rotation [timestamp=1634813690356.974 revolutions=85 speed=47km/h power=462W]
[2021-10-21T10:54:50.430Z] received stats from bike [power=462W cadence=78rpm speed=47km/h]
[2021-10-21T10:54:50.518Z] wheel rotation [timestamp=1634813690517.5188 revolutions=86 speed=47km/h power=462W]
[2021-10-21T10:54:50.679Z] wheel rotation [timestamp=1634813690678.0635 revolutions=87 speed=47km/h power=462W]
[2021-10-21T10:54:50.779Z] pedal stroke [timestamp=1634813690778.617 revolutions=50 cadence=78rpm power=462W]
[2021-10-21T10:54:50.839Z] wheel rotation [timestamp=1634813690838.6082 revolutions=88 speed=47km/h power=462W]
[2021-10-21T10:54:51.000Z] wheel rotation [timestamp=1634813690999.1528 revolutions=89 speed=47km/h power=462W]
[2021-10-21T10:54:51.079Z] received stats from bike [power=462W cadence=80rpm speed=47km/h]
[2021-10-21T10:54:51.160Z] wheel rotation [timestamp=1634813691159.6975 revolutions=90 speed=47km/h power=462W]
[2021-10-21T10:54:51.321Z] wheel rotation [timestamp=1634813691320.2422 revolutions=91 speed=47km/h power=462W]
[2021-10-21T10:54:51.400Z] received stats from bike [power=462W cadence=80rpm speed=47km/h]
[2021-10-21T10:54:51.481Z] wheel rotation [timestamp=1634813691480.7869 revolutions=92 speed=47km/h power=462W]
[2021-10-21T10:54:51.529Z] pedal stroke [timestamp=1634813691528.617 revolutions=51 cadence=80rpm power=462W]
[2021-10-21T10:54:51.642Z] wheel rotation [timestamp=1634813691641.3315 revolutions=93 speed=47km/h power=462W]
[2021-10-21T10:54:51.720Z] received stats from bike [power=476W cadence=80rpm speed=46km/h]
[2021-10-21T10:54:51.807Z] wheel rotation [timestamp=1634813691805.3662 revolutions=94 speed=46km/h power=476W]
[2021-10-21T10:54:51.971Z] wheel rotation [timestamp=1634813691969.401 revolutions=95 speed=46km/h power=476W]
[2021-10-21T10:54:52.041Z] received stats from bike [power=482W cadence=80rpm speed=49km/h]
[2021-10-21T10:54:52.124Z] wheel rotation [timestamp=1634813692123.3928 revolutions=96 speed=49km/h power=482W]
[2021-10-21T10:54:52.278Z] wheel rotation [timestamp=1634813692277.3848 revolutions=97 speed=49km/h power=482W]
[2021-10-21T10:54:52.284Z] pedal stroke [timestamp=1634813692278.617 revolutions=52 cadence=80rpm power=482W]
[2021-10-21T10:54:52.364Z] received stats from bike [power=482W cadence=80rpm speed=49km/h]
[2021-10-21T10:54:52.432Z] wheel rotation [timestamp=1634813692431.3767 revolutions=98 speed=49km/h power=482W]
[2021-10-21T10:54:52.586Z] wheel rotation [timestamp=1634813692585.3687 revolutions=99 speed=49km/h power=482W]
[2021-10-21T10:54:52.683Z] received stats from bike [power=472W cadence=79rpm speed=46km/h]
[2021-10-21T10:54:52.750Z] wheel rotation [timestamp=1634813692749.4033 revolutions=100 speed=46km/h power=472W]
[2021-10-21T10:54:52.914Z] wheel rotation [timestamp=1634813692913.438 revolutions=101 speed=46km/h power=472W]
[2021-10-21T10:54:53.039Z] pedal stroke [timestamp=1634813693038.1106 revolutions=53 cadence=79rpm power=472W]
[2021-10-21T10:54:53.078Z] wheel rotation [timestamp=1634813693077.4727 revolutions=102 speed=46km/h power=472W]
[2021-10-21T10:54:53.243Z] wheel rotation [timestamp=1634813693241.5073 revolutions=103 speed=46km/h power=472W]
[2021-10-21T10:54:53.406Z] wheel rotation [timestamp=1634813693405.542 revolutions=104 speed=46km/h power=472W]
[2021-10-21T10:54:53.570Z] wheel rotation [timestamp=1634813693569.5767 revolutions=105 speed=46km/h power=472W]
[2021-10-21T10:54:53.665Z] received stats from bike [power=437W cadence=74rpm speed=44km/h]
[2021-10-21T10:54:53.742Z] wheel rotation [timestamp=1634813693741.0676 revolutions=106 speed=44km/h power=437W]
[2021-10-21T10:54:53.850Z] pedal stroke [timestamp=1634813693848.9214 revolutions=54 cadence=74rpm power=437W]
[2021-10-21T10:54:53.913Z] wheel rotation [timestamp=1634813693912.5586 revolutions=107 speed=44km/h power=437W]
[2021-10-21T10:54:53.993Z] received stats from bike [power=437W cadence=74rpm speed=44km/h]
[2021-10-21T10:54:54.085Z] wheel rotation [timestamp=1634813694084.0496 revolutions=108 speed=44km/h power=437W]
[2021-10-21T10:54:54.256Z] wheel rotation [timestamp=1634813694255.5405 revolutions=109 speed=44km/h power=437W]
[2021-10-21T10:54:54.429Z] wheel rotation [timestamp=1634813694427.0315 revolutions=110 speed=44km/h power=437W]
[2021-10-21T10:54:54.599Z] wheel rotation [timestamp=1634813694598.5225 revolutions=111 speed=44km/h power=437W]
[2021-10-21T10:54:54.635Z] received stats from bike [power=391W cadence=68rpm speed=42km/h]
[2021-10-21T10:54:54.732Z] pedal stroke [timestamp=1634813694731.2744 revolutions=55 cadence=68rpm power=391W]
[2021-10-21T10:54:54.779Z] wheel rotation [timestamp=1634813694778.1797 revolutions=112 speed=42km/h power=391W]
[2021-10-21T10:54:54.962Z] wheel rotation [timestamp=1634813694957.837 revolutions=113 speed=42km/h power=391W]
[2021-10-21T10:54:54.969Z] received stats from bike [power=391W cadence=68rpm speed=42km/h]
[2021-10-21T10:54:55.138Z] wheel rotation [timestamp=1634813695137.4941 revolutions=114 speed=42km/h power=391W]
[2021-10-21T10:54:55.291Z] received stats from bike [power=346W cadence=62rpm speed=41km/h]
[2021-10-21T10:54:55.322Z] wheel rotation [timestamp=1634813695321.5332 revolutions=115 speed=41km/h power=346W]
[2021-10-21T10:54:55.506Z] wheel rotation [timestamp=1634813695505.5723 revolutions=116 speed=41km/h power=346W]
[2021-10-21T10:54:55.691Z] wheel rotation [timestamp=1634813695689.6113 revolutions=117 speed=41km/h power=346W]
[2021-10-21T10:54:55.701Z] pedal stroke [timestamp=1634813695699.0164 revolutions=56 cadence=62rpm power=346W]
[2021-10-21T10:54:55.874Z] wheel rotation [timestamp=1634813695873.6504 revolutions=118 speed=41km/h power=346W]
[2021-10-21T10:54:56.058Z] wheel rotation [timestamp=1634813696057.6895 revolutions=119 speed=41km/h power=346W]
[2021-10-21T10:54:56.243Z] wheel rotation [timestamp=1634813696241.7285 revolutions=120 speed=41km/h power=346W]
[2021-10-21T10:54:56.262Z] received stats from bike [power=346W cadence=62rpm speed=41km/h]
[2021-10-21T10:54:56.426Z] wheel rotation [timestamp=1634813696425.7676 revolutions=121 speed=41km/h power=346W]
[2021-10-21T10:54:56.591Z] received stats from bike [power=31W cadence=62rpm speed=11km/h]
[2021-10-21T10:54:56.668Z] pedal stroke [timestamp=1634813696666.7583 revolutions=57 cadence=62rpm power=31W]
[2021-10-21T10:54:56.911Z] received stats from bike [power=13W cadence=62rpm speed=0km/h]
[2021-10-21T10:54:57.232Z] received stats from bike [power=13W cadence=62rpm speed=0km/h]
[2021-10-21T10:54:57.561Z] received stats from bike [power=13W cadence=62rpm speed=0km/h]
[2021-10-21T10:54:57.636Z] pedal stroke [timestamp=1634813697634.5002 revolutions=58 cadence=62rpm power=13W]
[2021-10-21T10:54:58.214Z] received stats from bike [power=13W cadence=62rpm speed=0km/h]
[2021-10-21T10:54:58.536Z] received stats from bike [power=13W cadence=0rpm speed=0km/h]
[2021-10-21T10:54:58.863Z] received stats from bike [power=13W cadence=0rpm speed=0km/h]
[2021-10-21T10:54:59.191Z] received stats from bike [power=13W cadence=0rpm speed=0km/h]
[2021-10-21T10:54:59.512Z] received stats from bike [power=0W cadence=0rpm speed=1km/h]
[2021-10-21T10:55:00.162Z] received stats from bike [power=0W cadence=0rpm speed=1km/h]
[2021-10-21T10:55:00.814Z] received stats from bike [power=0W cadence=0rpm speed=1km/h]
^C[2021-10-21T10:55:01.542Z] stopping ANT+ server
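A quick cross-check of the wheel-rotation lines above: speed follows directly from the gap between consecutive revolution timestamps once you assume a wheel circumference (the ~2.1 m value below is an assumption, not necessarily the constant gymnasticon uses):

```javascript
// Derive speed from two consecutive wheel-rotation events in the log above.
// WHEEL_CIRCUMFERENCE_M is an assumed ~700x25c road-wheel value; the constant
// gymnasticon actually uses may differ.
const WHEEL_CIRCUMFERENCE_M = 2.096;

function speedKmh(prevTimestampMs, currTimestampMs) {
  const dtSeconds = (currTimestampMs - prevTimestampMs) / 1000;
  return (WHEEL_CIRCUMFERENCE_M / dtSeconds) * 3.6; // m/s -> km/h
}

// Revolutions 90 and 91 from the log, ~160 ms apart:
console.log(speedKmh(1634813691159.6975, 1634813691320.2422).toFixed(1)); // 47.0
```

The result matches the 47 km/h the log reports for those events, which suggests the timestamps and rotation counts are internally consistent.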
At 100 watts (which should be ~16m/h) speed comes through at 10m/h on my Garmin watch (is it thinking it’s km/h and then converting to m/h perhaps?)
Yes, the speed values that you were seeing were basically in mph, but displayed as km/h. Now with the fix internally the Power->Speed function will output the speed into km/h and create the ANT+ and BLE timestamps and rotation counts accordingly. Your watch should then show the results correctly in whether you use km/h or mph.
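The mix-up described above is easy to reproduce: a value computed in mph but transmitted in a field the watch interprets as km/h gets converted a second time. The 16 mph at 100 W figure comes from the comment above; the rest is illustration:

```javascript
// Double conversion behind the ~16 mph -> ~10 mph symptom described above.
const KMH_PER_MPH = 1.609344;

const computedMph = 16;               // what the bridge actually calculated
const transmittedAsKmh = computedMph; // bug: sent unconverted into a km/h field
const watchShowsMph = transmittedAsKmh / KMH_PER_MPH; // watch set to mph converts again

console.log(watchShowsMph.toFixed(1)); // 9.9
```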
Is there an easier way to update? - I keep redoing it from the start with a fresh SD card, as otherwise it won't write over the existing ‘ant BLE-….’ when I try to pull the updated ‘ant BLE-….’
You only need to repeat these steps while in the "gymnasticon" folder:
git pull
npm run build
gymnasticon
‘Git pull’ didn’t work on its own…
Updated the full remote and branch information for git pull in the answer just above. Try that one.
What's also odd is that my Garmin watch recognizes the BLE CSC service, but then ends up only displaying cadence and no speed.
This is what I'm seeing too. I can see speed fine when paired over BLE with the Kinetic Fit app, just not on my watch (FR745). I don't have an ANT stick so was hoping to get BLE to work.
Works perfectly fine with Garmin. I can now see the IC4/IC7 bike speed, power, cadence and record it to Garmin without needed additional sensors.
Thanks so much.
I just found this update a few days ago and implemented everything. I have a Keiser M3i with the older computer that required the M Series Connector and a new Garmin Fenix 6 Pro. When I first set this up, I still had my Connector on for my first few rides. I saw the same dropouts that @nealjane was reporting with the new Keiser computer. I disconnected the M Series Connector and then everything has been running fine for me with very few dropouts during my rides. I'm very new to Garmin (long time Polar user) and don't know too much about ANT+. Is there anything that can be done to have it play nicely with Bluetooth? I tried two different lower cost ANT+ USB Sticks (CHILEAF and LIVLOV) because I wasn't willing to spend $50 on the Garmin version. Is it just the ANT+ signal that is a problem in a Bluetooth environment?
Hi Cris, CHILEAF and LIVLOV are basically the same ANT+ dongle; it's the brand I've mentioned to a couple of people on here and in the M3i FB group (they just look different from the poor-performance Anself dongles and clones which people have had issues with over the last few years).
Performance-wise I've had no issues with those vs the Garmin one. ANT+ is low power, so ideally you need it with line of sight and within a couple of metres of the bike or, more importantly, your watch.
As for the new computer (not sure which one you have?) - as the Bluetooth converter is now built in (e.g. Zwift compatible) - there's definite interference from the new computer's Bluetooth: when I tested it I saw fairly consistent dropouts in the power signal. I've reverted back to the older computer, as ANT+ is more important to me than anything else for connecting to my Fenix 6.
Newer computers = clearly not always better!!
Hi Neal, thanks for the clarification. I'm glad to hear the chileaf/livlov are good to use; I really didn't want to spend more on the Garmin one. I have the Bluetooth converter; I have since pulled it out of usage (I wasn't getting the Gymnasticon cadence on the Peloton app, so was still using it; realized I had mistakenly used Gymnasticon as a sensor on my Garmin watch, which I fixed and now can see cadence on the Peloton app from Gymnasticon). I am still getting some interference when my wife is in the gym at the same time, which I attribute to other Bluetooth signals present at the same time. I'll try getting the ANT+ stick closer to my watch and see if that helps. I had it tucked away in our workout area. Thanks for the suggestions!
Typical power over ant+ Gymnasticon connected to the newer mid 2021/2022 Keiser ‘M connect’ computer from last autumn (prior to speed implementation),
this was using power from Gymnasticon and speed from the new Keiser computer.
As Gymnasticon now provides speed - it’s whether it’s the speed connection over Bluetooth from new Keiser computer that causes interference?? (I don’t think I tested it without Keiser speed connected). Def Looks like I need to swap over to my newer computers again to see if problem persists for newer Keiser M3i computer users. (I’m off work as it’s half term so will try this week)
I've had a lot of success with the Ant+ stick closer to my watch. I do still have occasional disconnects, where the Garmin watch loses the sensor and then reconnects, so only missing a very small amount of data. Not a huge deal to me.
If I wanted to put gymnasticon in debug mode where it creates a log file, how would I do that? I've tried reading everywhere here but can't find it.
@crissag -Change this in keiser.js file which will reduce the drops to 0 over ant+ (try 2 seconds instead) - const KEISER_STATS_TIMEOUT_NEW = 1.0; // New Bike: If no stats received within 1 sec, reset power and cadence to 0
@nealjane My keiser.js file had this set to 2; it must have been updated in this branch.
I also remembered how much I hated the vi editor.
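For context, the constant discussed a few comments up gates a staleness check: if no stats frame arrives within the timeout, power and cadence are reported as zero. A simplified sketch of that behavior (not the actual keiser.js code):

```javascript
// Simplified stats-timeout behavior: with no fresh bike stats within
// KEISER_STATS_TIMEOUT seconds, report zero power/cadence. Raising the
// timeout (e.g. 1.0 -> 2.0 s) makes brief radio gaps invisible.
const KEISER_STATS_TIMEOUT = 2.0; // seconds

function makeStatsTracker() {
  let last = { power: 0, cadence: 0, receivedAtMs: -Infinity };
  return {
    onStats(power, cadence, nowMs) {
      last = { power, cadence, receivedAtMs: nowMs };
    },
    current(nowMs) {
      const stale = (nowMs - last.receivedAtMs) / 1000 > KEISER_STATS_TIMEOUT;
      return stale ? { power: 0, cadence: 0 }
                   : { power: last.power, cadence: last.cadence };
    },
  };
}

const tracker = makeStatsTracker();
tracker.onStats(437, 74, 100);
console.log(tracker.current(600).power);  // 437 (0.5 s since last stats)
console.log(tracker.current(2700).power); // 0   (2.6 s gap, considered stale)
```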
Hello, long suffering metric-less Garmin and Peloton user here who just wanted to thank you for all the work and report that this branch is working great for me. I'm using a FR745 and am using this ANT+ adapter on a Raspberry Pi 4.
Here's a nice example of the metrics reported to the watch aligning with what the bike recorded itself:
I'm using all the cabling as described in #12 but do not have the Bike TX connected to the tablet -- when I did that the metrics were correct on my watch but continually dropped out / maxed out on the bike like this:
@nealjane Thanks for the pointer to this branch. @chriselsen I followed the instructions and got my Pi pushing cadence stats to my Garmin Edge 25 from my IC4 bike. I didn't check which profile gymnasticon selected but for some reason speed was not published (heart rate came from a separate sensor paired directly to the Edge). I'll re-run with debug mode on and see if speed shows up, but otherwise this worked great. Thank you!
I’m not sure that it works over Bluetooth - ant+ speed should be working though. 👍🏻
@nealjane Did setting this to 3s solve your issues with the new Keiser computer as well? I had thought about setting this higher, but the dropouts were less of an issue for me when I moved the pi closer to my watch.
Turns out I forgot to select "Indoor Mode" on the Edge once the sensors were connected. Speed shows up now along with the other stats. Thanks all!
With most recent RPi image I am getting command not found for overctl. Any idea why?
I skipped that part and tested the rest with Garmin + IC4. Works like charm! GJ
Answering my own question with another question. Is it possible that the changes from #58 are not present in the latest img as linked from the README.md ? At least that's where I got my image and it does not have overctl.
That command didn’t work for me either, but current version is write protected etc - I just use raspiconfig to write/lock the image when needed/which works.
ptx2 hasn't been around in almost a year now - so I don't think there will be any further updates to the current Gymnasticon without him - Chris Elsen's speed additions for ANT+ are here though.
I just use raspiconfig to write/lock the image when needed/which works.
I see, thanks for clarifying. Do you have any info on how to do that? Are you using the raspiconfig command and do it through the graphical interface? I do not seem to find anything related in that tree of available options...
Okay, thanks for clarifying! So, in order to do changes you disable the overlay FS with raspiconfig and when you are done you enable it again, right? It seems disabled in my case by default right now... Not sure if I restart the RPi now if all my changes will be gone! XD
Just wanted to say THANK YOU and also want to update @chriselsen and all others contributers that I've tried this on Flywheel bike and it is reporting speed to my Garmin 520 over ANT+ (can't check BLE as I don't have a head unit that can read BLE signal) so you can cross one of the issues in your list.
Not sure how accurate the speed value is, but seems high for the power I am generating. Below are some data points.
60 watts = 21.2 km/h
69 watts = 22.7 km/h
73 watts = 23.3 km/h
91 watts = 25 km/h
119 watts = 29.7 km/h
129 watts = 30.9 km/h
I am using the "power-scale":0.9 parameter in the config file to reduce the power reported to my head unit, as I found out my spin bike seems to report about 10% more watts than my friend's high-end TACX smart trainer when I did a quick test ride side by side. Even if I don't use the "power-scale" parameter, the speed seems high. I know this isn't your doing as you are just reporting whatever the bike is spitting out, but one suggestion would be to add "speed-scale" and "speed-offset" parameters for us to play with to fine-tune the values to fix these types of issues.
Thank you once again!
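The suggested speed-scale / speed-offset knobs don't exist in gymnasticon at the time of writing (only power-scale does), but by analogy they would amount to a one-line adjustment before the value is transmitted. A sketch:

```javascript
// Hypothetical speed-scale / speed-offset tuning, analogous to the existing
// power-scale config option mentioned above. These speed options are a
// proposal here, not something gymnasticon currently supports.
function adjustSpeed(rawKmh, { speedScale = 1.0, speedOffset = 0.0 } = {}) {
  return Math.max(0, rawKmh * speedScale + speedOffset);
}

console.log(adjustSpeed(29.7, { speedScale: 0.9 }).toFixed(2)); // 26.73
```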
The Flywheel bike should be providing the speed via Bluetooth to Gymnasticon. Therefore Gymnasticon only passes through to ANT+ whatever it receives.
At least from experience with the IC4 bike, the speed shown by these bikes doesn't factor in resistance. It's purely some formula based on generated power. Therefore the speed displayed is what you would be riding on a flat surface - where resistance is close to 0.
The speed that you see within e.g. Zwift factors in the slope of the hill that you are climbing or going down. Therefore that speed will not match what your trainer shows you, unless you ride on a long flat stretch in Zwift.
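A speed-from-power formula of the flat-surface kind described above can be sketched like this; the physical coefficients are illustrative assumptions, not what the IC4 or Flywheel firmware actually uses:

```javascript
// Illustrative flat-road model: P = Crr*m*g*v + 0.5*rho*CdA*v^3 (watts, v in m/s).
// Every coefficient below is an assumption for the sketch.
const MASS_KG = 80, G = 9.81, CRR = 0.004, RHO = 1.225, CDA = 0.32;

function powerAtSpeed(v) {
  return CRR * MASS_KG * G * v + 0.5 * RHO * CDA * v ** 3;
}

// Invert numerically (bisection) to get km/h for a given power on the flat.
function speedForPower(watts) {
  let lo = 0, hi = 30; // m/s search range
  for (let i = 0; i < 60; i++) {
    const mid = (lo + hi) / 2;
    if (powerAtSpeed(mid) < watts) lo = mid; else hi = mid;
  }
  return lo * 3.6;
}

console.log(speedForPower(100).toFixed(1)); // mid-20s km/h with these coefficients
```

With these (assumed) numbers, 100 W lands in the mid-20s km/h, which is in the same ballpark as the Flywheel figures listed a few comments up.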
Thanks for the response. I noticed gymnasticon sends the data as Spd/Cad sensor rather than just Speed sensor. If I search under Speed Sensor for Garmin 520, it can't find it, but it will find it under Spd/Cad sensor. Not sure if this has any overlap with Power sensor as this also provides Cadence.
Also I did notice a few power drops to zero and speed going to ~4 mph intermittently, but I am not sure what is causing this. I don't recall having this issue before installing this branch (at least for power, as I didn't have speed data before). The ANT+ dongle is right below the Garmin head unit so I don't think it is a distance issue from the ANT+ dongle. I'll keep monitoring as I ride and report back any issues with the Flywheel bike. Any idea what may be causing this would be appreciated, and if you need anything from me, let me know.
Below graph shows overlay of speed vs. power (speed is the area graph and power is the dark line graph). As you can see, power drops to zero 3x times, but speed drops to ~4 mph at many instances.
| gharchive/pull-request | 2021-10-17T18:57:03 | 2025-04-01T06:45:30.041160 | {
"authors": [
"adanemayer",
"chriselsen",
"crissag",
"gurase",
"nealjane",
"rmaster78",
"romansemko",
"screetch82",
"toma"
],
"repo": "ptx2/gymnasticon",
"url": "https://github.com/ptx2/gymnasticon/pull/89",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
686618429 | After setting up public_activity missing render
Hey there!
After trying to set up this gem I got an issue with a missing partial.
`
ActionView::MissingTemplate in Clients#show
Showing /Users/jorge/Projects/cliente/app/views/clients/show.html.erb where line #137 raised:
Missing partial public_activity/client/_create with {:locale=>[:en, :es], :formats=>[:html], :variants=>[], :handlers=>[:raw, :erb, :html, :builder, :ruby, :jbuilder]}. Searched in:
"/Users/jorge/Projects/cliente/app/views"
"/Users/jorge/.rvm/gems/ruby-2.7.1/gems/pay-2.1.3/app/views"
"/Users/jorge/.rvm/gems/ruby-2.7.1/gems/devise-i18n-1.9.1/app/views"
"/Users/jorge/.rvm/gems/ruby-2.7.1/gems/devise-4.7.2/app/views"
"/Users/jorge/.rvm/gems/ruby-2.7.1/gems/administrate-field-active_storage-0.3.5/app/views"
"/Users/jorge/.rvm/gems/ruby-2.7.1/bundler/gems/administrate-239ec5bb8c06/app/views"
"/Users/jorge/.rvm/gems/ruby-2.7.1/gems/kaminari-core-1.2.1/app/views"
"/Users/jorge/Projects/cliente/lib/jumpstart/app/views"
"/Users/jorge/.rvm/gems/ruby-2.7.1/gems/letter_opener_web-1.4.0/app/views"
"/Users/jorge/.rvm/gems/ruby-2.7.1/gems/actiontext-6.0.3.2/app/views"
"/Users/jorge/.rvm/gems/ruby-2.7.1/gems/actionmailbox-6.0.3.2/app/views"
`
How can I solve this?
@JorgeDDW This is an old issue. I assume you've either resolved the issue or moved on‽ In any case, each activity currently needs its own partial. The error message tells you what that partial should be: public_activity/client/_create.html.erb.
There’s an open issue about providing a generic partial: https://github.com/public-activity/public_activity/issues/124.
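Until a generic fallback exists, the missing partial from the error above can be created by hand. A minimal sketch; the attribute names on activity.owner/activity.trackable are assumptions about the app's models:

```erb
<%# app/views/public_activity/client/_create.html.erb %>
<%# `activity` is passed in by the render call. The .name attributes below
    are assumptions; adapt them to the actual User/Client models. %>
<div class="activity">
  <%= activity.owner ? link_to(activity.owner.name, activity.owner) : "Someone" %>
  created client
  <%= link_to(activity.trackable.name, activity.trackable) if activity.trackable %>
  <small><%= time_ago_in_words(activity.created_at) %> ago</small>
</div>
```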
| gharchive/issue | 2020-08-26T21:10:44 | 2025-04-01T06:45:30.054338 | {
"authors": [
"JorgeDDW",
"ur5us"
],
"repo": "public-activity/public_activity",
"url": "https://github.com/public-activity/public_activity/issues/353",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
647413075 | Timeline info sometimes contains negative values
It has been observed that the timeline production sometimes results in negative values for timing components, e.g.:
{
:unload=>0, :redirect=>0, :dns=>0, :connect=>0, :request=>-615356,
:response=>0, :dom_loading=>0, :dom_interactive=>0,
:dom_content_loaded_event=>0, :dom_complete=>0, :load_event=>0
}
We should investigate the root cause, and suppress negative outputs.
Debug code inserted suggests that the browser is reporting back e.g. "responseStart"=>0, "responseEnd"=>0 in the performance timeline.
Digging further, I believe this might be because the requests in question are still in progress when ndr_browser_timing's JS is querying the browser's API.
The PerformanceEntry API doesn't appear to have a way of exposing if an entry is still active.
I think for our purposes, we could consider an entry without a responseEnd to be "in progress":
xzgrep -E 'NdrBrowserTimings.*"responseEnd"=>0' some/log/files* | grep -oP ':response=>.' | sort | uniq -c
1983 :response=>-
36844 :response=>0
Here, :response=>- suggests that the response was only partially received when the performance snapshot was sent, and :response=>0 suggests the response was yet to start. As expected, the latter case was much more likely.
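The fix suggested above, treating a performance entry with responseEnd of 0 as still in progress and skipping it, is small on the client side. A sketch (not the gem's actual implementation):

```javascript
// Skip PerformanceEntry-like records whose response hasn't completed, so
// computed durations can never go negative.
function completedEntries(entries) {
  return entries.filter((e) => e.responseStart > 0 && e.responseEnd > 0);
}

const entries = [
  { name: "/a", responseStart: 10.0, responseEnd: 42.5 },
  { name: "/b", responseStart: 11.0, responseEnd: 0 }, // response in progress
  { name: "/c", responseStart: 0, responseEnd: 0 },    // response not started
];
console.log(completedEntries(entries).map((e) => e.name)); // [ '/a' ]
```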
| gharchive/issue | 2020-06-29T14:14:01 | 2025-04-01T06:45:30.066249 | {
"authors": [
"joshpencheon"
],
"repo": "publichealthengland/ndr_browser_timings",
"url": "https://github.com/publichealthengland/ndr_browser_timings/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2558723438 | UAT|For mandatory submission orders, both the 'Submit Extension Request' and 'Submit Document' buttons are visible to the advocate, allowing multiple extension requests. These buttons should be hidden after the order is clicked.
jam link:https://jam.dev/c/29541055-49d7-4fc1-a4a1-88a1871c1aae
As per @devarajd94's confirmation, the advocate can file an extension-of-submission-deadline request any number of times.
We just need to block that button when a request for extension of the deadline is already in process and the judge has not accepted/rejected that order.
Once it's been accepted or rejected, the advocate can again file an application for extension of the submission deadline.
@Mahesh-Vishwanath1 Kindly take a note of these requirements.
c.c. @Ramu-kandimalla @rajeshcherukumalli @anirudh-0
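The gating rule spelled out above reduces to: allow a new extension request only while no earlier one is still pending with the judge. A sketch (status names are assumptions, not the actual workflow states):

```javascript
// Allow filing a new deadline-extension request only when no previous
// request is still awaiting the judge. Status strings are illustrative.
function canFileExtension(requests) {
  return !requests.some((r) => r.status === "PENDING");
}

console.log(canFileExtension([{ status: "APPROVED" }, { status: "REJECTED" }])); // true
console.log(canFileExtension([{ status: "APPROVED" }, { status: "PENDING" }]));  // false
```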
Hi @Ramu-kandimalla,
For this ticket, we covered the below test cases, and those are working as expected in solution QA.
Test Cases:
1.Verify that the "Mandatory Submission Pending" task appears on the advocate's side, with the "Submit Extension Request" and "Submit Document" buttons.
2.Verify that when the advocate submits an extension request, the pending task appears on the judge's side for approval or rejection.
3.Verify that even after the advocate submits an extension request, the "Mandatory Submission Pending" task remains visible with the "Submit Document" button for the advocate.
4.Verify that once the advocate submits the document, the pending task is removed from the advocate's side, and the uploaded document is displayed in the judge's document section for the specific case.
URL:
Citizens: https://dristi-kerala-qa.pucar.org/digit-ui/citizen/select-language
Employee: https://dristi-kerala-qa.pucar.org/digit-ui/employee/user/login
user/PW: qaJudge01/Beehyv@123
Reference links:
https://jam.dev/c/6f44c4e9-655d-4bd6-9132-7066cff5adc6
https://jam.dev/c/84d275dc-0533-4eaf-a392-699b5e957aa1
Those scenarios mentioned in test cases are all working in UAT env.
| gharchive/issue | 2024-10-01T09:46:05 | 2025-04-01T06:45:30.236290 | {
"authors": [
"Mahesh-Vishwanath1",
"vaibhavct"
],
"repo": "pucardotorg/dristi",
"url": "https://github.com/pucardotorg/dristi/issues/1830",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2613284203 | [BUG]: [QA | In fso if we mark defect in Accused Details name, it is reflecting as 2 input errors]
Describe the bug
In fso if we mark defect in Accused Details name, it is reflecting as 2 input errors
To Reproduce
Steps to reproduce the behavior:
Go to FSO user
Click on particular case, mark age as defect and enter defect description and click on mark defect.
On bottom left 1 input error and 0 selection error would appear.
Mark name as defect and enter defect description and click on mark defect.
On bottom left 3 input error and 0 selection error would appear.
See error
Expected behavior
On bottom left it should be 2 input error and 0 selection error.
Screenshots
Input error.webm
Desktop:
Browser [firefox]
@iknoorkaur Name consists of first name and last name, and these are two different fields on the user's end. Hence we're showing it as 2 input errors.
Kindly update the status of the ticket accordingly.
c.c. @Ramu-kandimalla
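The count follows from how the form is modeled: "name" maps to two input fields while "age" maps to one. A sketch of that tally (the field lists are assumptions about the UI model):

```javascript
// Each marked defect contributes one input error per underlying form field,
// which is why marking age + name yields 3. Field lists are assumptions.
const fieldsPerDefect = { age: ["age"], name: ["firstName", "lastName"] };

function inputErrorCount(markedDefects) {
  return markedDefects.flatMap((d) => fieldsPerDefect[d] || []).length;
}

console.log(inputErrorCount(["age"]));         // 1
console.log(inputErrorCount(["age", "name"])); // 3
```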
| gharchive/issue | 2024-10-25T07:09:23 | 2025-04-01T06:45:30.240788 | {
"authors": [
"iknoorkaur",
"vaibhavct"
],
"repo": "pucardotorg/dristi",
"url": "https://github.com/pucardotorg/dristi/issues/2263",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1181160741 | Index the local Princeton subject headings
650 _7 with $2local
We are not sure yet whether it will go into a separate Solr field or be mixed in with the existing subject terms.
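For illustration, picking out such headings from a simplified field representation looks like this (real code would go through ruby-marc and the indexing pipeline; the sample data is invented):

```ruby
# Select local subject headings: tag 650, second indicator "7", $2 == "local".
# Plain-hash stand-in for MARC fields; sample values are illustrative only.
fields = [
  { tag: "650", ind2: "7", subfields: { "a" => "Princeton dissertations", "2" => "local" } },
  { tag: "650", ind2: "0", subfields: { "a" => "Libraries" } },
]

local_subjects = fields
  .select { |f| f[:tag] == "650" && f[:ind2] == "7" && f[:subfields]["2"] == "local" }
  .map { |f| f[:subfields]["a"] }

puts local_subjects.inspect # ["Princeton dissertations"]
```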
closed because it is a duplicate of #1830
| gharchive/issue | 2022-03-25T19:06:32 | 2025-04-01T06:45:30.266142 | {
"authors": [
"sandbergja"
],
"repo": "pulibrary/bibdata",
"url": "https://github.com/pulibrary/bibdata/issues/1831",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
727622485 | Estimate how many times we hit the alma api.
We want to see how many times the alma api is hit in marc_liberation for any given job.
It would be good to see how many times marc_liberation queries Voyager based on the various endpoints and drive an estimate from that.
Once we get a week's worth of stats, use them in combination with the number of (anticipated) API calls per controller action to produce an estimate. Compare with the API limits.
We can track hits to our API with this: https://app.datadoghq.com/logs/analytics?agg_q=%40controller%2C%40action%2C%40format&cols=host%2Cservice%2C%40http.url_details.queryString.q%2C%40action%2C%40format&from_ts=1604271119706&index=&live=true&messageDisplay=inline&query=source%3Aruby+service%3Abibdata+-%40controller%3A"Locations%3A%3AHoldingLocationsController"+-%40controller%3A"Locations%3A%3ADeliveryLocationsController"+-%40controller%3A"HighVoltage%3A%3APagesController"++-%40controller%3A"Users%3A%3AOmniauthCallbacksController"&stream_sort=desc&to_ts=1606863119706&top_n=10&viz=query_table
I'm not sure how this will work now that we have removed the voyager helpers in the alma branch. My first thought was to merge this in the main branch to track the voyager calls but since we rebase the alma branch with the main branch this will error because it will no longer have the voyager helpers connection.
We need to figure out the Alma API limits and compare them to datadog limits in that timeframe and see if we have a problem.
The current limit is 300,000 per day.
We may also need to make a datadog report for OL endpoints which use voyager directoy for patron account interactions.
I followed @tpendragon solution and made a start with https://app.datadoghq.com/logs/analytics?agg_m=count&agg_q=%40controller%2C%40action%2C%40format&agg_t=count&analyticsOptions=["bars"%2C"cool"%2Cnull%2Cnull%2C"value"]&from_ts=1610402629076&index=&live=true&query=service%3Abibdata+host%3Abibdata-alma-staging1++-controller%3A"HighVoltage%3A%3APagesController"&to_ts=1610403529076&top_n=10&top_o=top&viz=query_table
ExLibris documentation of the limits: https://developers.exlibrisgroup.com/alma/apis/#threshold
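With the daily threshold in hand, the estimate itself is simple arithmetic: multiply per-action API calls by expected daily request counts and compare against the limit. All counts below are placeholder assumptions; only the 300,000/day limit comes from this thread:

```ruby
# Rough daily Alma API budget check. Per-action call counts and request
# volumes are placeholder assumptions; the 300_000 limit is from Ex Libris.
DAILY_LIMIT = 300_000

calls_per_request = { bib_show: 2, holdings: 1, availability: 3 }
requests_per_day  = { bib_show: 40_000, holdings: 25_000, availability: 30_000 }

total = calls_per_request.sum { |action, calls| calls * requests_per_day[action] }
puts "#{total} calls/day vs limit #{DAILY_LIMIT}: #{total <= DAILY_LIMIT ? 'OK' : 'over'}"
# => 195000 calls/day vs limit 300000: OK
```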
Work on this issue is happening in https://docs.google.com/spreadsheets/d/1LmwK6s2IK6L98-DY3JIHHohUQcqYYa7qupZIJHv7Pu8/edit#gid=0
| gharchive/issue | 2020-10-22T18:26:21 | 2025-04-01T06:45:30.291689 | {
"authors": [
"christinach",
"hackartisan",
"hectorcorrea",
"mzelesky",
"tpendragon"
],
"repo": "pulibrary/marc_liberation",
"url": "https://github.com/pulibrary/marc_liberation/issues/871",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
358638582 | Scsb event
Splits "RECAP_RECORDS" dump type into "PRINCETON_RECAP" and "PARTNER_RECAP"
Creates partner recap update job to import updated marc records and deleted record ids from scsb server.
Jobs are set up to pull down all delete and update files since the last partner recap job was run.
Records are cleaned using marc_cleanup gem and fixes are logged in a log file attached to the partner recap dump object.
Files are copied to a separate directory within the marc liberation data mount.
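"All delete and update files since the last partner recap job" boils down to filtering a file listing by the previous job's timestamp. A sketch with invented names and times:

```ruby
# Pick up every update/delete file newer than the last successful run.
# File names and timestamps are illustrative.
require "time"

last_run = Time.parse("2018-09-01T00:00:00Z")
files = {
  "updates_20180825.zip" => Time.parse("2018-08-25T02:00:00Z"),
  "updates_20180903.zip" => Time.parse("2018-09-03T02:00:00Z"),
  "deletes_20180904.zip" => Time.parse("2018-09-04T02:00:00Z"),
}

to_process = files.select { |_name, mtime| mtime > last_run }.keys.sort
puts to_process.inspect # ["deletes_20180904.zip", "updates_20180903.zip"]
```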
Coverage decreased (-4.09%) to 76.517% when pulling db4cebe6e3a9563a3073305b52b134acd8f37d89 on scsb_event into cb92dadde60e06233e7b151cc6c7d4211f876bf3 on master.
Looks good.
| gharchive/pull-request | 2018-09-10T14:04:27 | 2025-04-01T06:45:30.295059 | {
"authors": [
"coveralls",
"kevinreiss",
"tampakis"
],
"repo": "pulibrary/marc_liberation",
"url": "https://github.com/pulibrary/marc_liberation/pull/457",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
325450847 | Aiming to develop features that work across all three platforms
The base demo is still Onsen UI Vue
There is a main page > next page
Setting
GitHub > External Link functionality
Sliding Menu
All of these are required
Translation must also be included
| gharchive/issue | 2018-05-22T20:28:15 | 2025-04-01T06:45:30.308464 | {
"authors": [
"pulipulichen"
],
"repo": "pulipulichen/electron-loading-test",
"url": "https://github.com/pulipulichen/electron-loading-test/issues/45",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2203787046 | Adding Vivado Support in FlooNoC
I'm adding support for Vivado in FlooNoC, but after compilation, during the elaboration step, a segmentation fault error is being shown. However, a specific error indicating the cause of the segmentation fault is not being displayed, whether there is an issue in the design or with the memories that do not support Vivado. I didn't find any solution for this. Can you please guide me on which files in the design need to be changed to add support for Vivado?
Here, is the error shown below which I get while enabling simulation on Vivado.
Hi,
I have never tried to map it onto an FPGA and also don't have too much experience with them to tell you where the problem could be unfortunately.
I merged a fix https://github.com/pulp-platform/FlooNoC/pull/36 for an elaboration error that I experienced during synthesis. I am not sure if that is the same problem you have, but you can maybe give it a try.
| gharchive/issue | 2024-03-23T09:16:29 | 2025-04-01T06:45:30.310975 | {
"authors": [
"fischeti",
"syedarafia13"
],
"repo": "pulp-platform/FlooNoC",
"url": "https://github.com/pulp-platform/FlooNoC/issues/35",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
800758345 | SCIM Okta integration topic
New Okta user guide to align with SCIM 2.0 support in the Pulumi console.
cc @infin8x as FYI
Do you think we should remove SCIM groups from behind the feature flag before we release this?
https://github.com/pulumi/pulumi-service/issues/5982
cc @infin8x
@sean1588 yep! I've been keeping a list of things we need to do to formally release this feature in the epic: https://github.com/pulumi/pulumi-service/issues/2027. Feel free to add other items under the Ship it 🚀 heading so we can do a coordinated launch.
I have pushed a commit that adds the Azure AD doc as well. Chatted with Dave about updating the title to make it a bit more generic.
| gharchive/pull-request | 2021-02-03T22:39:41 | 2025-04-01T06:45:30.343544 | {
"authors": [
"davidwrede",
"infin8x",
"praneetloke",
"sean1588"
],
"repo": "pulumi/docs",
"url": "https://github.com/pulumi/docs/pull/5139",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
430611149 | Update install.md
adding more details around npm, as it seems to be a requirement; from testing around, there are many random errors in the 101-level tutorials for someone new to Pulumi and what it is
e.g.
➜ ~ pulumi new typescript --dir container-quickstart
This command will walk you through creating a new Pulumi project.
Enter a value or leave blank to accept the (default), and press <ENTER>.
Press ^C at any time to quit.
project name: (container-quickstart)
project description: (A minimal TypeScript Pulumi program)
Created project 'container-quickstart'
stack name: (dev)
Created stack 'dev'
Installing dependencies...
error: installing dependencies; rerun 'npm install' manually to try again, then run 'pulumi up' to perform an initial deployment: exec: "npm": executable file not found in $PATH
then later:
error: It looks like the Pulumi SDK has not been installed. Have you run npm install or yarn install?
feel free to request changes or deny... just figured even raising this PR will help people's google-fu
Thank you for the pull request. I agree we need to do a better job at documenting SDK dependencies. However, node isn't universally required. (For example, if using Python with Pulumi you probably need pip instead.)
I'm inclined not to take this particular PR, but perhaps we would want to do something similar, or perhaps create a separate page? (Or maybe have an even better error message baked into pulumi itself, like the one you referenced.)
FWIW - I opened https://github.com/pulumi/docs/issues/997 to track more holistic improvement to address the underlying issue here.
| gharchive/pull-request | 2019-04-08T19:07:35 | 2025-04-01T06:45:30.347211 | {
"authors": [
"IPvSean",
"chrsmith",
"lukehoban"
],
"repo": "pulumi/docs",
"url": "https://github.com/pulumi/docs/pull/980",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
845764185 | Region not read from profile configuration
The aws.Provider has started refusing to read the region configuration for a profile set in ~/.aws/config, while other values appear to be loaded fine.
I'm not 100% certain when this started happening, but I noticed on March 10th and I'm usually pretty up to date, sorry.
Basically just started getting this kind of error:
Error: invocation of aws:iam/getRole:getRole returned an error: 1 error occurred:
* missing required configuration key "aws:region": The region where AWS operations will take place. Examples are us-east-1, us-west-2, etc.
Set a value using the command `pulumi config set aws:region <value>`.
when creating a Provider like this:
new Provider('blah', { profile: 'production' });
Expected behavior
Should use region value specified in profile.
Current behavior
Requires region be programmatically set in pulumi code, duplicating the configuration.
Steps to reproduce
Configure a non-default profile in your ~/.aws/config, it can be for the same account, role, keys, etc.
Create a Provider using only that profile as above ☝🏻
Create or read a resource using that provider.
pulumi up 💥😢
Context (Environment)
Recent usage is setting up cross-account assume role trust where the role is consistently named.
Affected feature
@stack72 are you aware of any changes in this space?
I tested it today with a 6.x version. Used this program:
import * as aws from "@pulumi/aws";
const p = new aws.Provider('blah', { profile: 'dev' });
const table = new aws.dynamodb.Table("tenants", {
hashKey: "id",
attributes: [{ name: "id", type: "S" }],
readCapacity: 1,
writeCapacity: 1,
}, {provider: p});
also checked that I don't have a region configured in my stack config.
The program deployed and the Table got created in the region from my ~/.aws/config. I'll close the issue as fixed.
| gharchive/issue | 2021-03-31T03:31:33 | 2025-04-01T06:45:30.352750 | {
"authors": [
"mikhailshilkov",
"shousper"
],
"repo": "pulumi/pulumi-aws",
"url": "https://github.com/pulumi/pulumi-aws/issues/1422",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1640750790 | Docs tend not to describe as much as the provider(AWS classic) for dynamo
Problem description
Some areas of the docs tend not to describe as much as the provider (AWS) does, so I jump to the AWS documentation for more info, e.g. https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html for this. I like CloudFormation; it covers nearly everything for their services: properties, valid inputs, required fields, etc. Perhaps this is intentional and Pulumi can't cover every single detail. In the Pulumi code, I sometimes do a Go To Definition/Type Definition to get more details on the structure of inputs.
Using AWS Classic Provider:
import * as aws from '@pulumi/aws';
Suggestions for a fix
I'm not sure what exactly this issue is tracking. @tusharshahrs can you add a link to a problematic Pulumi docs page and the piece(s) that are missing from it? Thank you.
| gharchive/issue | 2022-10-11T19:19:24 | 2025-04-01T06:45:30.355899 | {
"authors": [
"mikhailshilkov",
"tusharshahrs"
],
"repo": "pulumi/pulumi-aws",
"url": "https://github.com/pulumi/pulumi-aws/issues/2436",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1650062060 | Consider publishing arm 64 pulumi-kubernetes-operator images
Hello!
Vote on this issue by adding a 👍 reaction
If you want to implement this feature, comment to let us know (we'll work with you on design, scheduling, etc.)
Issue details
Consider publishing arm 64 pulumi-kubernetes-operator images
Affected area/feature
Note: we'll also need to verify that these images run as expected on ARM nodes.
Upon further testing, this requires upstream changes to pulumi/pulumi-docker-containers to ensure that certain binaries work as expected.
I've pushed an alpha build of this image that could be used for verification and testing, and can be used with the following image tag: pulumi/pulumi-kubernetes-operator:v1.11.5-arm64-alpha.
Any news? 👀 It's been a year since this issue last saw activity.
Update: in 2.x, we now do publish an arm64 build of the operator image. However, one still needs to have an arm64 build of the "kitchen-sink" pulumi image (see #297). While we do have arm64 builds of the language-specific images, we don't use them by default.
Also, we don't have "non-root" variants of the language-specific images.
| gharchive/issue | 2023-03-31T21:22:33 | 2025-04-01T06:45:30.376102 | {
"authors": [
"EronWright",
"devantler",
"phillipedwards",
"rquitales"
],
"repo": "pulumi/pulumi-kubernetes-operator",
"url": "https://github.com/pulumi/pulumi-kubernetes-operator/issues/430",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1969246505 | Support for Structured Configuration
Hello!
Vote on this issue by adding a 👍 reaction
If you want to implement this feature, comment to let us know (we'll work with you on design, scheduling, etc.)
Issue details
Add support for structured configuration to be passed to the Pulumi program. The use cases are:
Passing an object value to the program, e.g. to be retrieved at runtime with RequireObject.
Setting the value of a specific path into a merged configuration.
The value would assumedly come from any of the supported sources, mainly:
a literal YAML-encoded (or JSON-encoded) string
a Secret with a key containing a YAML-encoded string
Ideally this feature would be orthogonal to whether or not the value is Pulumi secret.
There may be need for a new configuration block to allow for non-secret structured configuration. Note that secretsRef defines Pulumi secrets from a secret or from a literal value, while non-secrets are simply typed as map[string]string.
Note that the Automation API recently added support for structured configuration: https://github.com/pulumi/pulumi/pull/12265
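As a rough illustration of that Automation API support, nested values can be set with path-style config keys from Node. This is a hedged sketch: the stack name and working directory are made up, and the exact `setConfig` signature (in particular the third `path` argument) should be checked against the installed SDK version.

```javascript
// Hedged sketch: set nested config values via the Node Automation API's
// path-style keys (e.g. "mydata.active"). Stack name and workDir are illustrative.
async function setStructuredConfig() {
  // require is inside the function so this file parses without the SDK installed
  const { LocalWorkspace } = require("@pulumi/pulumi/automation");
  const stack = await LocalWorkspace.selectStack({
    stackName: "dev",
    workDir: "./proj",
  });
  // With the third argument (path) set to true, these keys build an object
  // equivalent to the generated stack config shown below:
  await stack.setConfig("mydata.active", { value: "true" }, true);
  await stack.setConfig("mydata.nums[0]", { value: "10" }, true);
  await stack.setConfig("mydata.nums[1]", { value: "20" }, true);
}

module.exports = { setStructuredConfig };
```

The operator could conceivably reuse the same path mechanism internally, which would sidestep the need for consumers to hand-encode YAML strings.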
Examples
Some examples for illustration purposes.
These examples assume the following program:
interface Data {
active: boolean;
nums: number[];
}
let config = new pulumi.Config();
let data = config.requireObject<Data>("mydata");
console.log(`Active: ${data.active}`);
And intend to generate the following stack configuration:
# Generated: Pulumi.dev.config
config:
proj:mydata:
active: true
nums:
- 1
- 2
- 3
Using a YAML value from a Kubernetes Secret as a Pulumi secret:
apiVersion: v1
kind: Secret
metadata:
name: example-secret
type: Opaque
stringData:
data.yaml: |
active: true
nums:
- 10
- 20
- 30
---
apiVersion: pulumi.com/v1
kind: Stack
metadata:
name: example-1
spec:
stack: example/proj/dev
config:
aws:region: us-east-2
secretsRef:
mydata:
type: secret
secret:
name: example-secret
key: data.yaml
type: object
...
Using an inline YAML value as a Pulumi secret:
apiVersion: pulumi.com/v1
kind: Stack
metadata:
name: example-1
spec:
stack: example/proj/dev
config:
aws:region: us-east-2
secretsRef:
mydata:
type: literal
literal:
jsonValue:
active: true
nums:
- 10
- 20
- 30
...
Using inline YAML as a non-secret (via a hypothetical config2 block):
apiVersion: pulumi.com/v1
kind: Stack
metadata:
name: example-1
spec:
stack: example/proj/dev
config2:
aws:region:
type: string
value: us-east-2
proj:mydata:
type: object
value:
active: true
nums:
- 10
- 20
- 30
...
Open Questions
Config values are currently typed as string as opposed to apiextensionsv1.JSON. If we were to change the API to use JSON, then the values would be interpreted as JSON values (e.g. true would become a bool). Is that an acceptable (breaking) change? Or should a jsonValue field be introduced? Or should literals be string-encoded? Note that Program.spec.configuration.default is a JSON value (not encoded).
Should the API types use json.RawMessage rather than apiextensionsv1.JSON? See Grafana and Tanzu for examples.
Closing as a dup of https://github.com/pulumi/pulumi-kubernetes-operator/issues/258
| gharchive/issue | 2023-10-30T21:21:26 | 2025-04-01T06:45:30.386370 | {
"authors": [
"EronWright"
],
"repo": "pulumi/pulumi-kubernetes-operator",
"url": "https://github.com/pulumi/pulumi-kubernetes-operator/issues/513",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
971183058 | issue for testing webhook
Just a test issue to check if the webhook is working
test
| gharchive/issue | 2021-08-15T17:18:07 | 2025-04-01T06:45:30.408201 | {
"authors": [
"voidderef"
],
"repo": "pumpitupdev/pumptools",
"url": "https://github.com/pumpitupdev/pumptools/issues/42",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
} |
890234340 | Copy unit patterns from other widths if not available
@c960657 Does this make sense to you?
Yes, it does.
I assume it is an error in CLDR data, if these values are not populated, so I hope this problem is eventually resolved in their end.
| gharchive/pull-request | 2021-05-12T15:49:59 | 2025-04-01T06:45:30.409206 | {
"authors": [
"c960657",
"mlocati"
],
"repo": "punic/data",
"url": "https://github.com/punic/data/pull/13",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1987499339 | [Bug]: When running npm install, I encountered installation issues.
Minimal, reproducible example
// ...
npm timing build:link Completed in 6ms
npm info run puppeteer@21.5.1 postinstall node_modules/puppeteer node install.mjs
(##################) ⠏ reify:puppeteer-core: info run puppeteer@21.5.1 postinstall node_modules/puppeteer node install.mjs
Error string
no error
Bug behavior
[ ] Flaky
[ ] PDF
Background
No response
Expectation
How can I install it?
Reality
When running npm install, I encountered installation issues: the 'reify:puppeteer-core: info run puppeteer@21.5.1 postinstall node_modules/puppeteer node install.mjs' step took a long time, but it eventually succeeded.
Puppeteer configuration file (if used)
No response
Puppeteer version
21.5.1
Node version
v16.20.0
Package manager
npm
Package manager version
8.19.4
Operating system
Windows
I am unable to reproduce. If the browser download takes too long for you, use the env variable PUPPETEER_SKIP_DOWNLOAD=true before running npm i to skip the download and download the matching browser separately.
| gharchive/issue | 2023-11-10T12:06:01 | 2025-04-01T06:45:30.419330 | {
"authors": [
"OrKoN",
"xiaopaning"
],
"repo": "puppeteer/puppeteer",
"url": "https://github.com/puppeteer/puppeteer/issues/11349",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
558729941 | How to retrieve all the data from GitHub's trending repositories?
I am trying to scrape GitHub's trending repository page. For each repository on the page, I am trying to retrieve its name, description, primary programming language, and the usernames of the contributors and display it on my terminal.
A sample output of that would be:
CyberDyne / SkyNet
===========================
Global Digital Defense Network
Written primarily in C++
Primary Contributors: milesDyson, johnConnor
---------------------------
Currently, this is my code:
const puppeteer = require('puppeteer');

async function scrapeProduct(url) {
const browser = await puppeteer.launch({ headless: true });
const page = await browser.newPage();
await page.goto(url);
let data = await page.evaluate(() => {
let array = [];
let title = document.querySelector('article>h1').innerText;
let description = document.querySelector('article>p').innerText;
let language = document.querySelector('article>div[class="f6 text-gray mt-2"]>span[class="d-inline-block ml-0 mr-3"]>span[itemprop="programmingLanguage"]').innerText;
// let userNames = document.querySelector('body > div.application-main > main > div.explore-pjax-container.container-lg.p-responsive.pt-6 > div > div:nth-child(2) > article:nth-child(1) > div.f6.text-gray.mt-2 > span:nth-child(4) > a:nth-child(1) > img')[alt]
return {
title,
description,
language
// userNames
};
});
console.log(`${data.title}\n===========================\n${data.description}\n\nWritten primarily in ${data.language}\n\nPrimary Contributors: \n---------------------------`)
await browser.close();
}
scrapeProduct('https://github.com/trending?since=weekly');
I am only able to scrape out the info (couldn't figure out how to get the usernames done correctly tho) from one repository but I want all 25 repositories on the page displayed on my terminal.
Any help/advice would be greatly appreciated.
I was able to do it like this:
const puppeteer = require('puppeteer');
async function scrapeProduct(url) {
const browser = await puppeteer.launch({ headless: true });
const page = await browser.newPage();
await page.goto(url);
await page.waitForSelector('.Box-row');
const sections = await page.$$('.Box-row');
for (const section of sections) {
const title = await section.$eval('h1', h1 => h1.innerText);
const description = await section.$eval('p', p => p.innerText);
let language;
if (await section.$('span[itemprop="programmingLanguage"]')){
language = "Written primarily in " + await section.$eval('span[itemprop="programmingLanguage"]', span => span.innerText)
} else {
language = "NA"
}
const userNames = await page.$$('d-inline-block mr-3')
for (const userName of userNames) {
const names = await userName.$eval('a', a => a.innerText)
}
console.log(`\n${title}\n===========================\n${description}\n\n${language}\n\nPrimary Contributors ${userNames}\n---------------------------`);
}
// console.log(sections.length);
await browser.close();
}
scrapeProduct('https://github.com/trending?since=weekly');
However, I still can't figure out how to properly display the usernames. On the trending page, the contributors are all icons that have no class. I thought of doing a for of loop to dig into sections but nothing so far.
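One hedged way forward, building on the commented-out selector in the first snippet: the contributor avatars are <img> elements whose alt text carries the handle (e.g. alt="@milesDyson"), so the alt texts can be collected per section and cleaned up with a small pure helper. The selector below is an assumption about the page markup and may need adjusting.

```javascript
// Pure helper: turn avatar alt texts like "@milesDyson" into "milesDyson, johnConnor".
function formatContributors(altTexts) {
  return altTexts
    .map(alt => alt.replace(/^@/, ""))
    .filter(name => name.length > 0)
    .join(", ");
}

// Inside the per-section loop, the alt texts could be collected like this
// (untested sketch; the selector for the "Built by" avatars is an assumption):
// const altTexts = await section.$$eval(
//   "span.d-inline-block a img",
//   imgs => imgs.map(img => img.getAttribute("alt") || "")
// );
// console.log(`Primary Contributors: ${formatContributors(altTexts)}`);

console.log(formatContributors(["@milesDyson", "@johnConnor"])); // milesDyson, johnConnor
```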
| gharchive/issue | 2020-02-02T17:39:50 | 2025-04-01T06:45:30.422596 | {
"authors": [
"idkjay"
],
"repo": "puppeteer/puppeteer",
"url": "https://github.com/puppeteer/puppeteer/issues/5375",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
597121229 | Manual browser revision not used, used package.json instead
My webdriver was recently updated and only supports Chrome 81. Since I'm using the Puppeteer Chromium binary, I checked and it is still using Chrome v80; the revision comes from package.json.
I saw that I can change the binary download revision using environment variables or npm config, and it works perfectly. But the launcher always uses the previous revision, as it gets it from package.json; see these lines:
https://github.com/puppeteer/puppeteer/blob/7a2a41f2087b07e8ef1feaf3881bdcc3fd4922ca/src/Puppeteer.js#L63
It would be great if Puppeteer handled the revision in install.js, storing it when it comes from an environment variable or npm config, and then used it in Puppeteer.js without changing package.json.
Greetings.
Pass the executablePath option to puppeteer.launch. See https://github.com/puppeteer/puppeteer/blob/v2.1.1/docs/api.md#puppeteerlaunchoptions.
This does not solve the problem.
The fact is that Puppeteer handles its own Chromium download from googleapis. As a developer, I don't think I need to know the path it downloads to; I think this should be handled automatically inside Puppeteer, since it knows the path where the binary will be stored.
The ideal behaviour should be: if developer specified a revision, Puppeteer downloads that revision and stores its executable path to use it.
The ideal behaviour should be: if developer specified a revision, Puppeteer downloads that revision and stores its executable path to use it.
Currently Puppeteer is tied to a given Chrome revision. It's possible to use it with other Chromium versions but we cannot offer any guarantees about support or stability in those cases, and so it doesn't make sense for Puppeteer to provide a first-class API for using arbitrary revisions as it can easily lead to breakage.
Download the revision yourself, and then use executablePath.
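For what it's worth, the download step itself can be scripted with Puppeteer's BrowserFetcher API (available in the v2.x line discussed here). This is a sketch, not a recommendation; the revision number you pass is up to you, and support for arbitrary revisions is not guaranteed, as noted above.

```javascript
// Sketch: fetch a specific Chromium revision, then launch Puppeteer against it.
// Assumes puppeteer v2.x, where createBrowserFetcher() is available.
async function launchWithRevision(revision) {
  // require is inside the function so this file parses without puppeteer installed
  const puppeteer = require("puppeteer");
  const fetcher = puppeteer.createBrowserFetcher();
  const info = await fetcher.download(revision); // downloads only if not cached
  return puppeteer.launch({ executablePath: info.executablePath });
}

module.exports = { launchWithRevision };
```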
Currently Puppeteer allows specifying a custom revision and downloads it for you. I don't know why you don't want to complete that process, as it doesn't make sense by itself.
I think there's some miscommunication going on here. This has also broken the build of some of our tests.
The question here, @mathiasbynens, is: why does Puppeteer not download the same version of Chromium that matches the version of chromedriver being used by Puppeteer?
If it is tied to a given Chrome revision as you say, why does it fail to run because the chromedriver is set wrong?
Ok, it turns out our issue was because Protractor and its dependency webdriver-manager automatically updated to Chrome 81.
Are Puppeteer and Protractor used together a lot? If so, is there a best practice way of synchronising the version of chromium Puppeteer provides and the version of webdriver-manager Protractor uses?
Hi! Sorry, I couldn't reply until today. Yes, I'm using Protractor and that was the issue.
It's a nice way to bring my e2e tests to my CI system, since I don't have to manage the Chrome version in GitLab; I can use Puppeteer's version.
The only way I found to make the Chrome version match is forcing the revision in Puppeteer.
Thanks!
We've also run into this problem a couple of times. Just wanted to drop a note in response to @mathiasbynens that this issue is not about an arbitrary revision; it is the latest version. As @rikkiprince mentioned, Protractor keeps it up to date while Puppeteer does not do so at the same pace, causing the mismatch. For those using this just in CI testing, our immediate solution was to go back to installing Chrome through CI container scripts and removing Puppeteer.
| gharchive/issue | 2020-04-09T08:41:45 | 2025-04-01T06:45:30.430632 | {
"authors": [
"ajspera",
"manuman94",
"mathiasbynens",
"rikkiprince"
],
"repo": "puppeteer/puppeteer",
"url": "https://github.com/puppeteer/puppeteer/issues/5617",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
179912839 | Require beaker-pe if available
This change pulls in beaker-pe if it is available in the ruby
$LOAD_PATH. This makes it so that project repos only have to specify
beaker-pe in their Gemfile, not require beaker-pe itself in their
test setup.
Refer to this link for build results (access rights to CI server needed):
http://jenkins-beaker.delivery.puppetlabs.net//job/qe_beaker_btc-intn/3046/
Refer to this link for build results (access rights to CI server needed):
http://jenkins-beaker.delivery.puppetlabs.net//job/qe_beaker_btc-intn/3048/
:+1:
This would solve for several failures that Integration is seeing in the 3.8.x pipeline. This will have to be resolved, and this seems like the right way
@puppetlabs/beaker @rick @tvpartytonight and I were discussing the merits of potentially moving to this approach, rather than needing them to include the gem in their dependencies and load it themselves.
One of my worries is that we then have two separate instructions for how to setup a beaker-library. That's not really more of an objection than a hiccup though, we can easily overcome that. What do you think?
@mwbutcher you can resolve your integration failures right now by using the new import mechanism and including beaker-pe.
Here's an example branch I made that shows how you can do that for 2016.5. The change shouldn't be hard to port back to older versions of testing if they have acceptance helpers. The upgrade doc should provide sufficient instructions on how to upgrade to beaker 3.0. If it doesn't, let us know and we'll fix that.
I'd rather make sure we get the time to consider this PR, rather than be unnecessarily pressured into merging it.
@puppetlabs-jenkins retest this please
@kevpl just to clarify, this change still requires an update to a project's Gemfile, but does not require the project require 'beaker-pe in its helper(s).
With this change, we are still forcing users to add the library via their Gemfile, but not requiring that they update their actual code to require it. I was motivated to submit this change because it is relatively trivial to modify a Gemfile for all projects, but figuring out a way to require beaker-pe in all projects is something we can't easily or universally do. This simplifies the testing effort in all our pipelines significantly.
However, I am open to the idea that once this initial refactor is done by users, it will be better in the long term for beaker to have a cleaner separation of concerns in its own independent gems...
Refer to this link for build results (access rights to CI server needed):
http://jenkins-beaker.delivery.puppetlabs.net//job/qe_beaker_btc-intn/3050/
@kevpl including beaker-pe explicitly works great in 2016.4 and newer, but for 3.8.x we don't have a good place to require it centrally. The good news is that once we shut down 3.8.x it won't matter, so this fix could be temporary.
@mwbutcher I hacked it into the install file in beaker's ci for testing 3.8.x installation. Not the best place for it, but, it works.
@highb are you okay with that solution?
👍 yeah, that's an OK hack for now, I guess?
We could help ourselves remember to get rid of this hack with a JIRA ticket mentioned in the code.
I'm 👍 on getting this in, it makes sense this will pick up a lot for usability, even if it's only a middle step. We should do the work to get testing green here if possible on our pipelines before merging, as it'll be necessary before release if we're to have confidence anyways
@puppetlabs-jenkins retest this please
Refer to this link for build results (access rights to CI server needed):
http://jenkins-beaker.delivery.puppetlabs.net//job/qe_beaker_btc-intn/3051/
@puppetlabs-jenkins retest this please
Refer to this link for build results (access rights to CI server needed):
http://jenkins-beaker.delivery.puppetlabs.net//job/qe_beaker_btc-intn/3052/
@puppetlabs-jenkins retest this please
Refer to this link for build results (access rights to CI server needed):
http://jenkins-beaker.delivery.puppetlabs.net//job/qe_beaker_btc-intn/3056/
| gharchive/pull-request | 2016-09-28T23:00:36 | 2025-04-01T06:45:30.445702 | {
"authors": [
"highb",
"kevpl",
"mwbutcher",
"nwolfe",
"puppetlabs-jenkins",
"tvpartytonight"
],
"repo": "puppetlabs/beaker",
"url": "https://github.com/puppetlabs/beaker/pull/1257",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
125299948 | (MAINT) Exclude org.clojure from lein-ezbake
Instead of overriding org.clojure in ezbake profile with an explicit
version, just exclude the one dragged in by the lein-ezbake plugin.
Follow-up to #848
+1, I agree this is better, thanks for doing it @nwolfe . Would prefer to hold off on merging this until we get the voom build out though.
I started having second thoughts about this a bit. Everywhere else, we've moved to a policy of resolving dependency version conflicts by specifying the explicit version to be used at the top level. @cprice404's #848 PR seems to follow that same policy. If we were to merge in this PR, seems like we'd be going against that policy.
That said, I don't have super-strong feelings about this so if @cprice404 and @nwolfe are both onboard with the exclusion approach, I won't object to merging it in.
Generally speaking I prefer avoiding exclusions, but for clojure specifically, and ezbake specifically, it seems like a reasonable place to make an exception, and I really hate having that extra version variable at the time. So I'm +1 but I don't care all that much. Maybe someone else could weigh in as a tiebreaker.
@cprice404 @camlow325 @nwolfe I also don't have a very strong opinion on this, but I think I'm leaning towards the approach here in this PR. Making an exception for clojure seems worthwhile to me
| gharchive/pull-request | 2016-01-07T01:09:24 | 2025-04-01T06:45:30.453848 | {
"authors": [
"camlow325",
"cprice404",
"fpringvaldsen",
"nwolfe"
],
"repo": "puppetlabs/puppet-server",
"url": "https://github.com/puppetlabs/puppet-server/pull/852",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
134436160 | (SERVER-1143) Auth rule for static file content
This PR adds an auth rule to allow authenticated access to the
static_file_content endpoint, and adds the associated acceptance test
@camlow325 would you mind taking a look at this? would appreciate a brief review from you as the tk-auth expert.
:+1: overall from me other than the one question on adding an "unauthorized" beaker test.
@camlow325 Added an unauthenticated test
LGTM :shipit:
:+1:
| gharchive/pull-request | 2016-02-17T23:47:59 | 2025-04-01T06:45:30.456014 | {
"authors": [
"KevinCorcoran",
"camlow325",
"haus",
"jpinsonault"
],
"repo": "puppetlabs/puppet-server",
"url": "https://github.com/puppetlabs/puppet-server/pull/916",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
144601975 | (PUP-6099) add additional autorequires for file/mount interactions
With this improvement, file resources will execute before their mount point is mounted and file resources will execute after their parent mount is mounted.
I need some help writing the spec tests because I am unable to fully parse how the Puppet spec API is working. Please assist.
Also, I could probably use some code criticism. The coding style between the mount and file types and their spec tests was rather different, and I tried to stick to the style within each.
I did a good bit more work on the spec tests. I noticed there are no spec tests on the user and group autorequires for the file resource, so I understand I am probably foraging into unknown territory here by trying to implement cross-resource autorequire spec tests here. If I succeed, then you guys could potentially use this pattern for the other unimplemented spec tests.
Also this PR would prompt an update to the file and mount resource type documentation.
Still would appreciate some code criticism and test assistance though.
Thanks for this contribution! I can't think of any reasons at the moment why this would cause problems (/cc @branan). I'll take a look into helping with specs a bit later.
Thanks so much for the feedback. I will take a look at these comments and update accordingly (I have been resetting and force pushing to keep everything in one commit).
I notice a lot of random unrelated spec tests have been failing for this changeset. I am wondering if I have done something really bad or if I had an unlucky fork.
Github is currently ignoring all of my pushes to the Puppet fork, so I reforked it and Github is still ignoring my pushes for some reason.
While I run down Github's technical difficulties, this is the diff of the newest changes:
diff --git a/lib/puppet/type/mount.rb b/lib/puppet/type/mount.rb
index 9e798a1..3fa7427 100644
--- a/lib/puppet/type/mount.rb
+++ b/lib/puppet/type/mount.rb
@@ -14,7 +14,11 @@ module Puppet
**Autorequires:** If Puppet is managing any parents of a mount resource ---
that is, other mount points higher up in the filesystem --- the child
- mount will autorequire them."
+ mount will autorequire them. If Puppet is managing the file path of a
+ mount point, the mount resource will autorequire it.
+
+ **Autobefores:** If Puppet is managing any child file paths of a mount
+ point, the mount resource will autobefore them."
feature :refreshable, "The provider can remount the filesystem.",
:methods => [:remount]
@@ -293,5 +297,16 @@ module Puppet
dependencies[0..-2]
end
+ # Autorequire the mount point's file resource
+ autorequire(:file) { Pathname.new(@parameters[:name].value) }
+
+ # Autobefore the mount point's child file paths
+ autobefore(:file) do
+ dependencies = []
+ Pathname.new(@parameters[:path].value).descend do |child|
+ dependencies.push child.to_s
+ end
+ dependencies[0..-2]
+ end
end
end
diff --git a/spec/unit/type/mount_spec.rb b/spec/unit/type/mount_spec.rb
index 4cd94a4..0976e13 100644
--- a/spec/unit/type/mount_spec.rb
+++ b/spec/unit/type/mount_spec.rb
@@ -545,15 +545,22 @@ describe Puppet::Type.type(:mount), :unless => Puppet.features.microsoft_windows
end
end
- describe "establishing autorequires" do
+ describe "establishing autorequires and autobefores" do
- def create_resource(path)
+ def create_mount_resource(path)
described_class.new(
:name => path,
:provider => providerclass.new(path)
)
end
+ def create_file_resource(path)
+ Puppet::Type.type(:file).stubs(:defaultprovider).new(
+ :path => path,
+ :provider => Puppet::Type.type(:file).stubs(:defaultprovider).new(:path => path).provider
+ )
+ end
+
def create_catalog(*resources)
catalog = Puppet::Resource::Catalog.new
resources.each do |resource|
@@ -563,38 +570,71 @@ describe Puppet::Type.type(:mount), :unless => Puppet.features.microsoft_windows
catalog
end
- let(:root_mount) { create_resource("/") }
- let(:var_mount) { create_resource("/var") }
- let(:log_mount) { create_resource("/var/log") }
+ let(:root_mount) { create_mount_resource("/") }
+ let(:var_mount) { create_mount_resource("/var") }
+ let(:log_mount) { create_mount_resource("/var/log") }
+ let(:var_file) { create_file_resource('/var') }
+ let(:log_file) { create_file_resource('/var/log') }
+ let(:puppet_file) { create_file_resource('/var/log/puppet') }
before do
- create_catalog(root_mount, var_mount, log_mount)
+ create_catalog(root_mount, var_mount, log_mount, var_file, log_file)
end
it "adds no autorequires for the root mount" do
expect(root_mount.autorequire).to be_empty
end
- it "adds the parent autorequire for a mount with one parent" do
+ it "adds the parent autorequire and the file autorequire for a mount with one parent" do
parent_relationship = var_mount.autorequire[0]
+ file_relationship = var_mount.autorequire[1]
- expect(var_mount.autorequire).to have_exactly(1).item
+ expect(var_mount.autorequire).to have_exactly(2).item
expect(parent_relationship.source).to eq root_mount
expect(parent_relationship.target).to eq var_mount
+
+ expect(file_relationship.source).to eq var_file
+ expect(file_relationship.target).to eq var_mount
end
- it "adds both parent autorequires for a mount with two parents" do
+ it "adds both parent autorequires and the file autorequire for a mount with two parents" do
grandparent_relationship = log_mount.autorequire[0]
parent_relationship = log_mount.autorequire[1]
+ file_relationship = log_mount.autorequire[2]
- expect(log_mount.autorequire).to have_exactly(2).items
+ expect(log_mount.autorequire).to have_exactly(3).items
expect(grandparent_relationship.source).to eq root_mount
expect(grandparent_relationship.target).to eq log_mount
expect(parent_relationship.source).to eq var_mount
expect(parent_relationship.target).to eq log_mount
+
+ expect(file_relationship.source).to eq log_file
+ expect(file_relationship.target).to eq log_mount
+ end
+
+ it "adds the child autobefore for a mount with one file child" do
+ child_relationship = log_mount.autobefore[0]
+
+ expect(log_mount.autobefore).to have_exactly(1).item
+
+ expect(child_relationship.source).to eq log_mount
+ expect(child_relationship.target).to eq puppet_file
+ end
+
+ it "adds both child autbefores for a mount with two file childs" do
+ child_relationship = var_mount.autobefore[0]
+ grandchild_relationship = var_mount.autobefore[1]
+
+ expect(var_mount.autobefore).to have_exactly(2).items
+
+ expect(child_relationship.source).to eq var_mount
+ expect(child_relationship.target).to eq log_file
+
+ expect(grandchild_relationship.source).to eq var_mount
+ expect(grandchild_relationship.target).to eq puppet_file
end
end
end
I am unsure how to get Puppet to traverse unknown child paths of a mount point, unless it is working from the file resources themselves (which would be back to the autorequire). I am sure the mimicked code I put in will not actually work correctly.
Changed up the spec test for everything being in mount, but still expecting the file resources to not be generated correctly.
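For illustration, the path selection an `autobefore(:file)` block would need can be sketched in plain Ruby — a hypothetical helper, not Puppet's actual type API, which works on catalog resources rather than raw paths:

```ruby
# Hypothetical sketch: given a mount point, pick out the file paths that
# live underneath it. These are the files that can only be managed once
# the mount is in place, hence the mount should come before them.
def child_file_paths(mount_path, file_paths)
  prefix = mount_path.chomp('/') + '/'
  file_paths.select { |path| path.start_with?(prefix) }
end

child_file_paths('/var', ['/var/log', '/var/log/puppet', '/etc/fstab'])
# => ["/var/log", "/var/log/puppet"]
```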
Github broke my fork (I think they were having some technical/networking issues in the last hour) so I have to open a new PR.
| gharchive/pull-request | 2016-03-30T13:59:45 | 2025-04-01T06:45:30.462441 | {
"authors": [
"MikaelSmith",
"mschuchard"
],
"repo": "puppetlabs/puppet",
"url": "https://github.com/puppetlabs/puppet/pull/4824",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
681854447 | (PUP-10474) Remove deprecated Puppet::Resource methods
Puppet::Resource.set_default_parameters
Puppet::Resource.validate_complete
Puppet::Resource::Type.assign_parameter_values
CLA signed by all contributors.
| gharchive/pull-request | 2020-08-19T13:47:00 | 2025-04-01T06:45:30.464441 | {
"authors": [
"ciprianbadescu",
"puppetcla"
],
"repo": "puppetlabs/puppet",
"url": "https://github.com/puppetlabs/puppet/pull/8281",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
139132290 | (PDB-1960) Remove race condition with initial vacuum analyze
This commit moves the asynchronous vacuum analyze to use the write-db
connection pool instead of the temporary one we use for migrations so
that we know the connection will still be around while we attempt to
execute the vacuum.
Refer to this link for build results (access rights to CI server needed):
http://jenkins-enterprise.delivery.puppetlabs.net/job/enterprise_puppetdb_init-multijob_githubpr-stable/147/
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
http://jenkins-enterprise.delivery.puppetlabs.net/job/enterprise_puppetdb_init-multijob_githubpr-stable/148/
Test FAILed.
@kbarber fixed that up, should only run when there are migrations
Refer to this link for build results (access rights to CI server needed):
http://jenkins-enterprise.delivery.puppetlabs.net/job/enterprise_puppetdb_init-multijob_githubpr-stable/149/
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
http://jenkins-enterprise.delivery.puppetlabs.net/job/enterprise_puppetdb_init-multijob_githubpr-stable/150/
Test PASSed.
| gharchive/pull-request | 2016-03-08T00:04:08 | 2025-04-01T06:45:30.490949 | {
"authors": [
"ajroetker",
"puppetlabs-jenkins"
],
"repo": "puppetlabs/puppetdb",
"url": "https://github.com/puppetlabs/puppetdb/pull/1882",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
295641923 | Revert "(maint) Add build_tar: FALSE to build_defaults"
This reverts commit f9733e1b0adcd2fd6b998f6aa817e7bb1cf88488.
We were fixing a symptom of a problem, rather than the problem itself (see
RE-10238).
https://github.com/puppetlabs/packaging/pull/788 needs to go in first.
Test PASSed
| gharchive/pull-request | 2018-02-08T19:46:19 | 2025-04-01T06:45:30.492651 | {
"authors": [
"mwaggett",
"puppetlabs-jenkins"
],
"repo": "puppetlabs/puppetdb",
"url": "https://github.com/puppetlabs/puppetdb/pull/2457",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
71399557 | Apt_reboot_required fact addition.
A simple fact to determine whether a reboot is necessary after updates have been installed, along with unit tests and a comment in the README.
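A minimal sketch of what such a check can boil down to on Debian/Ubuntu, where apt drops a marker file when a reboot is pending (the marker path is the conventional one; the actual fact in this PR may differ):

```ruby
# Hypothetical helper: a reboot is considered required if the apt marker
# file exists. The real fact would wrap this in Facter's DSL.
def apt_reboot_required?(marker = '/var/run/reboot-required')
  File.file?(marker)
end

# Wrapped in Facter this would look roughly like:
#   Facter.add(:apt_reboot_required) do
#     confine :osfamily => 'Debian'
#     setcode { apt_reboot_required? }
#   end
```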
@mhaskel This seems good to me, what do you think?
@dlactin @daenney also, when #515 gets merged it will have the README updates, and this may need a rebase after that :\
I will change the fact to the simplified version and create another pull request quick.
great, thanks @dlactin !
| gharchive/pull-request | 2015-04-27T20:56:58 | 2025-04-01T06:45:30.495630 | {
"authors": [
"daenney",
"dlactin",
"mhaskel"
],
"repo": "puppetlabs/puppetlabs-apt",
"url": "https://github.com/puppetlabs/puppetlabs-apt/pull/512",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
62655959 | Leverage awsdk to add facts
I'm going to be looking into this myself, likely today, but it will be an early step into writing custom facts that do something other than call system binaries.
The ability to set tags inside this module is the most interesting piece to me. Using tags as a staging point for associations between AWS resources allows grouping and treating resources as end services for management without requiring the extremely tight, and somewhat restrictive coupling added by CloudFormation or similar tools.
I'm assuming it shouldn't be terribly hard to pull tags for a resource out of AWS as a fact. It might also be useful to push those tags into the puppet resource tags property for any resource that manages the creation of an element in AWS.
Am I off base? Does this seem useful to anyone else? Is this trivial, and I'm just not particularly smart? All valid possibilities.
Hi @MarsuperMammal. Additional EC2 facts definitely sound of interest. I presume you've already seen the existing core facts http://docs.puppetlabs.com/facter/latest/core_facts.html#ec2ec2-instance-data
It's probably worth kicking off a conversation about where those facts should live on the puppet-dev mailing list I think. That way the conversation will hopefully involve some of the people working on facter as well as those interested in this module.
Yeah, I think the facts that come into core Facter are purely the data returned by dumping the instance metadata. There are a number of things about them that are less than ideal in terms of usability, and a lot of information available from API endpoints in AWS not covered in the metadata. I'll start a conversation in puppet-dev
| gharchive/issue | 2015-03-18T10:42:08 | 2025-04-01T06:45:30.498938 | {
"authors": [
"MarsuperMammal",
"garethr"
],
"repo": "puppetlabs/puppetlabs-aws",
"url": "https://github.com/puppetlabs/puppetlabs-aws/issues/119",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
204168612 | Add role management for ECS tasks
Without this change, the ECS tasks are unable to assume a role when
under puppet management. This work adds a new 'role' property that
allows tasks to assume a given IAM role.
The test failure here would be addressed by a merge on https://github.com/puppetlabs/puppetlabs-aws/pull/399.
I've got another that I'll put up after this here, based on this work: https://github.com/zleswomp/puppetlabs-aws/compare/ecstaskrole...zleswomp:ecstaskvolumes
Unit tests are passing here, but there is some gem issue.
| gharchive/pull-request | 2017-01-31T00:09:59 | 2025-04-01T06:45:30.501443 | {
"authors": [
"zleswomp"
],
"repo": "puppetlabs/puppetlabs-aws",
"url": "https://github.com/puppetlabs/puppetlabs-aws/pull/406",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
423784778 | lit - create a spec whose body is its description
Good evening, greetings from Australia and thank you for your amazing work on purescript!
Occasionally I'll have a Spec so short and to-the-point that a textual description of it serves only to duplicate the literal interpretation of the expression.
it "accepts strings that contains substrings" $
"foobar" `AS.shouldContain` "foo"
(From Test.Spec.AssertionSpec)
Ruby uses polymorphism to accept a spec that only provides a block.
it { expect("foobar").to contain "foo" }
(From http://www.betterspecs.org/#short)
lit creates a spec for test cases where the code reads as a literal description of its behaviour.
This requires a degree of connoisseurship as the spec reporter emits empty lines as spec descriptions:
* Build successful.
* Running tests...
Test
Spec
Runner
✓︎ collects "it" and "pending" in Describe groups
✓︎ collects "it" and "pending" with shared Describes
✓︎ filters using "only" modifier on "describe" block
✓︎ filters using "only" modifier on nested "describe" block
✓︎ filters using "only" modifier on "it" block
✓︎ supports async
Test
Spec
Assertions
String
shouldContain
✓︎ accepts strings that contains substrings
✓︎
✓︎ rejects strings that does not contain substrings
However, I would argue that for smaller projects, the cardinality of the specs is sufficient to determine success:
* Build successful.
* Running tests...
Test
Spec
Runner
✓︎
✓︎
✓︎
✓︎
✓︎
✓︎
Test
Spec
Assertions
String
shouldContain
✓︎
✓︎
✓︎
And in the case of failure, the ordinality of the failure is sufficient to locate it:
* Building project in /Users/htmldrum/code/purescript/purescript-spec
Compiling Test.Spec.AssertionSpec
Compiling Test.Main
* Build successful.
* Running tests...
Test
Spec
Runner
✓︎ collects "it" and "pending" in Describe groups
✓︎ collects "it" and "pending" with shared Describes
✓︎ filters using "only" modifier on "describe" block
✓︎ filters using "only" modifier on nested "describe" block
✓︎ filters using "only" modifier on "it" block
✓︎ supports async
Test
Spec
Assertions
String
shouldContain
✓︎ accepts strings that contains substrings
1)
✓︎ rejects strings that does not contain substrings
...
57 passing
3 pending
1 failed
1) Test Spec Assertions String shouldContain
"foo" ∉ "fozbar"
* ERROR: Subcommand terminated with exit code 1
Thank you for your time.
Hey! Thanks for the kind words 🙂
How would you figure out which test failed with this approach?
Thanks for the informative suggestion!
I can see the appeal of not having to write a description for "self-descriptive" tests. But without some way of showing the assertion source code when tests fail (which I think wouldn't be practical in PureScript) I'm failing to see how this is valuable.
Additionally, unless purescript-spec would do something special with lit tests other than use a blank description, this is easy to add as an alias in your own test suite. You could have a utils module or something with:
lit = it ""
I'm happy to be convinced otherwise, though. :slightly_smiling_face:
I'm not sure
If you don't care about naming tests then instead of:
describe "Data" do
describe "Foo" do
it "" testA
it "" testB
You can do this too:
describe "Data" do
it "Foo" do
testA
testB
I agree with @owickstrom's points, so closing this for now.
| gharchive/issue | 2019-03-21T15:16:05 | 2025-04-01T06:45:30.612371 | {
"authors": [
"CDaubert77",
"felixmulder",
"lisp-ceo",
"owickstrom",
"safareli"
],
"repo": "purescript-spec/purescript-spec",
"url": "https://github.com/purescript-spec/purescript-spec/issues/85",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
547461698 | maybeToExceptT
maybeToExceptT seems like a useful feature.
maybeT2ExceptT :: Functor m => e -> MaybeT m a -> ExceptT e m a and
maybe2ExceptT :: e -> Maybe a -> ExceptT e m a
https://hackage.haskell.org/package/transformers-0.5.6.2/docs/Control-Monad-Trans-Maybe.html#v:maybeToExceptT
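For reference, the implementation in the linked transformers package is tiny; a PureScript port would have essentially the same shape (`Nothing` becomes `Left e`):

```haskell
import Control.Monad.Trans.Maybe  (MaybeT (..))
import Control.Monad.Trans.Except (ExceptT (..))

-- transformers' version of the first signature.
maybeToExceptT :: Functor m => e -> MaybeT m a -> ExceptT e m a
maybeToExceptT e (MaybeT m) = ExceptT (fmap (maybe (Left e) Right) m)

-- The pure-Maybe variant then falls out (needs Applicative m).
maybeToExceptT' :: Applicative m => e -> Maybe a -> ExceptT e m a
maybeToExceptT' e = ExceptT . pure . maybe (Left e) Right
```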
This may be covered by MonadThrow
| gharchive/issue | 2020-01-09T12:56:29 | 2025-04-01T06:45:30.617792 | {
"authors": [
"BebeSparkelSparkel"
],
"repo": "purescript/purescript-transformers",
"url": "https://github.com/purescript/purescript-transformers/issues/118",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
312042911 | Fix sending messages when an attachment is not set
What?
Currently messages were not sending if an attachment was not set. This fixes that.
CC @hamchapman
1 Warning
:warning:
Any changes to library code should be reflected in the Changelog. Please consider adding a note there. You can find it at CHANGELOG.md.
SwiftLint found issues
Warnings

| File | Line | Reason |
| --- | --- | --- |
| PCCurrentUser.swift | 20 | TODOs should be avoided (This should probably be [PCUse...). |
| PCCurrentUser.swift | 368 | TODOs should be avoided (Use the soon-to-be-created new...). |
| PCCurrentUser.swift | 456 | TODOs should be avoided (Do we need to fetch users in t...). |
| PCCurrentUser.swift | 774 | TODOs should be avoided (Do I need to add a Last-Event-...). |
| PCCurrentUser.swift | 815 | TODOs should be avoided (What happens if you provide bo...). |
| PCCurrentUser.swift | 836 | TODOs should be avoided (This should probably be a room...). |
| PCCurrentUser.swift | 846 | TODOs should be avoided (Should we be handling onError ...). |
| PCCurrentUser.swift | 872 | TODOs should be avoided (Only consider the room subscri...). |
| PCCurrentUser.swift | 881 | TODOs should be avoided (Should we be handling onError ...). |
| PCCurrentUser.swift | 871 | Unused parameter "err" in a closure should be replaced with _. |
| PCCurrentUser.swift | 1011 | Unused parameter "data" in a closure should be replaced with _. |
| PCCurrentUser.swift | 887 | Function body should span 40 lines or less excluding comments and whitespace: currently spans 49 lines |
| PCCurrentUser.swift | 698 | Lines should not have trailing whitespace. |
| PCCurrentUser.swift | 702 | Lines should not have trailing whitespace. |
| PCCurrentUser.swift | 707 | Lines should not have trailing whitespace. |
| PCCurrentUser.swift | 187 | MARK comment should be in valid format. e.g. '// MARK: ...' or '// MARK: - ...' |
| PCCurrentUser.swift | 246 | MARK comment should be in valid format. e.g. '// MARK: ...' or '// MARK: - ...' |
| PCCurrentUser.swift | 709 | Arguments can be omitted when matching enums with associated types if they are not used. |
| PCCurrentUser.swift | 709 | Arguments can be omitted when matching enums with associated types if they are not used. |
| PCCurrentUser.swift | 5 | Variable name should be between 3 and 40 characters long: 'id' |
| PCCurrentUser.swift | 56 | Variable name should be between 3 and 40 characters long: 'id' |
| PCCurrentUser.swift | 157 | Variable name should be between 3 and 40 characters long: 'id' |
| PCCurrentUser.swift | 174 | Variable name should be between 3 and 40 characters long: 'id' |
| PCCurrentUser.swift | 208 | Variable name should be between 3 and 40 characters long: 'id' |
| PCCurrentUser.swift | 263 | Variable name should be between 3 and 40 characters long: 'id' |
| PCCurrentUser.swift | 323 | Variable name should be between 3 and 40 characters long: 'id' |
| PCCurrentUser.swift | 1040 | Enum element name should be between 3 and 40 characters long: 'messageIdKeyMissingInMessageCreationResponse' |
| PCCurrentUser.swift | 104 | Collection literals should not have trailing commas. |
| PCCurrentUser.swift | 545 | Collection literals should not have trailing commas. |
| PCCurrentUser.swift | 821 | Collection literals should not have trailing commas. |
| PCCurrentUser.swift | 759 | Opening braces should be preceded by a single space and on the same line as the declaration. |
| PCCurrentUser.swift | 196 | Line should be 120 characters or less: currently 142 characters |
| PCCurrentUser.swift | 208 | Line should be 120 characters or less: currently 135 characters |
| PCCurrentUser.swift | 212 | Line should be 120 characters or less: currently 132 characters |
| PCCurrentUser.swift | 510 | Line should be 120 characters or less: currently 127 characters |
| PCCurrentUser.swift | 575 | Line should be 120 characters or less: currently 121 characters |
| PCCurrentUser.swift | 615 | Line should be 120 characters or less: currently 121 characters |
| PCCurrentUser.swift | 666 | Line should be 120 characters or less: currently 133 characters |
| PCCurrentUser.swift | 677 | Line should be 120 characters or less: currently 132 characters |
| PCCurrentUser.swift | 744 | Line should be 120 characters or less: currently 123 characters |
| PCCurrentUser.swift | 920 | Line should be 120 characters or less: currently 123 characters |
| PCCurrentUser.swift | 1016 | Line should be 120 characters or less: currently 136 characters |

Errors

| File | Line | Reason |
| --- | --- | --- |
| PCCurrentUser.swift | 4 | Type body should span 200 lines or less excluding comments and whitespace: currently spans 616 lines |
| PCCurrentUser.swift | 1055 | File should contain 400 lines or less: currently contains 1055 |
Generated by :no_entry_sign: Danger
Thanks for this! Looks like quite a large oversight on my part. Sorry about that!
I'll make sure that I add some tests for this on Monday so it can't happen in future.
@hamchapman Did you manage to look into getting some tests written for this? It would be great to have it merged if possible 😊
@steve228uk I did but got caught up in trying to get stubbing of a HTTP/2 streamed request working!
I think I'll merge this for now and follow up with the tests later once I've got the stubbing working properly. Thanks again!
@hamchapman You're a hero 😄 Thanks so much!
| gharchive/pull-request | 2018-04-06T16:42:00 | 2025-04-01T06:45:30.671857 | {
"authors": [
"hamchapman",
"pusher-ci",
"steve228uk"
],
"repo": "pusher/chatkit-swift",
"url": "https://github.com/pusher/chatkit-swift/pull/71",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1465437959 | events: Suppress "foreign" event handling
Hello, Guys!
Description
The onEvent listener can catch events dispatched from other libraries that use the native event emitter. This can break things when Pusher is used alongside other libraries, or with libraries that extend Pusher's and use the same event names.
TypeError: undefined is not an object (evaluating 'channel.onEvent')
According to Pusher docs:
Events are the primary method of packaging messages in the Channels system. They form the basis of all communication.
We can suppress the onEvent callback if no channel is found.
CHANGELOG
[CHANGED] When executing onEvent callback in channel, ensure channel exists.
Thoughts
In this case we disallow args.onEvent to be called, if channel is not present as well. Not sure if we are allowed to do this, otherwise simply channel?.onEvent will do the trick
Alread changed on the latest version. Thanks for you contribution!
| gharchive/pull-request | 2022-11-27T13:15:37 | 2025-04-01T06:45:30.674985 | {
"authors": [
"fbenevides",
"fwyh"
],
"repo": "pusher/pusher-websocket-react-native",
"url": "https://github.com/pusher/pusher-websocket-react-native/pull/56",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
160901860 | Feature/pusherclientoptions improvements
I've done a number of things in order to clean up the PusherClientOptions and make it more Swifty™.
Awesome - I'll probably end up pulling this down, writing new / fixing tests and then get it merged.
Sounds good to me, if you PR that I can code review for you if you like.
Closing as replaced by #60
| gharchive/pull-request | 2016-06-17T14:32:13 | 2025-04-01T06:45:30.676570 | {
"authors": [
"Noobish1",
"hamchapman"
],
"repo": "pusher/pusher-websocket-swift",
"url": "https://github.com/pusher/pusher-websocket-swift/pull/57",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
453022606 | Fix for repr on Base class descendants and other minor stuff
Fixes #38
Also changed the user ID search logic to look for the report button instead of the friend request button.
It works fine for me. Please make sure that you are using python 3.4+
| gharchive/pull-request | 2019-06-06T13:11:28 | 2025-04-01T06:45:30.680347 | {
"authors": [
"pushrbx"
],
"repo": "pushrbx/python3-mal",
"url": "https://github.com/pushrbx/python3-mal/pull/39",
"license": "WTFPL",
"license_type": "permissive",
"license_source": "github-api"
} |
2227246876 | 🛑 SDM is down
In a79e697, SDM (https://sdm.unmer.ac.id/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: SDM is back up in 3ab10c0 after 5 minutes.
| gharchive/issue | 2024-04-05T07:15:03 | 2025-04-01T06:45:30.682688 | {
"authors": [
"pusimgit"
],
"repo": "pusimgit/upptime",
"url": "https://github.com/pusimgit/upptime/issues/1040",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
Highlighting posts (or not)
Ref https://github.com/putaindecode/organisation/issues/34
Experiment with stars or something, on the avatar or directly on the post (preview)
Use a simplified form of the logo to say member of p! and therefore post approved by p!?
I'll wait for a little screenshot to see what that could look like :)
It's not great
Not a fan either, indeed.
What if you try something simple, like a red border on the avatar?
A few tests like this:
The red circle at a push; the rest I'm personally not keen on
I'm not a fan either. To be honest, I'm not even sure people really make the distinction between fav and non-fav posts. I'll take my thinking elsewhere
Same as @MoOx: the red circle with the small white gap. The rest, no.
At best https://cloud.githubusercontent.com/assets/1997108/8075416/8870c150-0f40-11e5-902e-c8aac99bc661.png.
The rest I don't like. It's too crude, in my opinion. But I do like the idea.
otherwise we just put nothing, eh
Do we add a star before or after each member author?
nothing, for my part
otherwise we just put nothing, eh
exactly
Shall we close, then?
In a sense, the posts are validated by the community, so well... Maybe I'll do a quick PR with a star someday...
| gharchive/issue | 2015-05-13T13:16:32 | 2025-04-01T06:45:30.690920 | {
"authors": [
"Macxim",
"MoOx",
"bloodyowl",
"kud",
"lionelB",
"magsout"
],
"repo": "putaindecode/putaindecode.fr",
"url": "https://github.com/putaindecode/putaindecode.fr/issues/407",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1909881933 | 🛑 Storage is down
In c0e8c0d, Storage ($SITES_URL_STORAGE) was down:
HTTP code: 526
Response time: 344 ms
Resolved: Storage is back up in 6197fb5 after 18 minutes.
| gharchive/issue | 2023-09-23T13:58:47 | 2025-04-01T06:45:30.694648 | {
"authors": [
"putty182"
],
"repo": "putty182/upptime",
"url": "https://github.com/putty182/upptime/issues/297",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1945108654 | Sprig not working on element rendered initially through my own JS
Hi there,
I am trying to get my sprig 'wishlist' component to work on an element that is loaded initially through an axios request on page load. However once loaded, clicking on the sprig wishlist button results in nothing happening.
The below code is the controller for my axios GET request, it returns the contents of the card as a HTML string.
foreach($entries as $item) :
$html = $this->getView()->renderTemplate('cards/item.twig', ['item' => $item]);
$return['results'][] = $html;
endforeach;
return $this->asJson($return);
Then after the JS inserts this HTML into the DOM.
axios.get('/locations')
.then( (response) => {
this.data = response.data.results;
this.container.insertAdjacentHTML('beforeend', this.data.join(" "))
});
On the 'item.twig' file that is inserted into the JS, this is my sprig button to add the item to the wishlist.
<button sprig s-method="post" s-action="wishlist/items/add">Add to wishlist</button>
However, when clicking it, nothing happens. No server request appears in the browser's network tab. I can see that sprig data attributes are being loaded onto the button element, which leads me to believe that perhaps an event listener for the button needs to be re-registered?
When I render the elements the normal way, using a Craft entry query, the add-to-wishlist button works fine, so it's something to do with the fact that I'm loading and inserting them via my own JS.
How can this be done? Thanks!
You'll need to tell htmx to process the HTML after it has been inserted into the DOM, which you can do with htmx.process(), see https://htmx.org/api/#process
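Applied to the snippet from the question, that could look like the sketch below (the helper name is illustrative; `process` is expected to be `htmx.process` in the browser):

```javascript
// After inserting server-rendered HTML, hand the container back to htmx
// so the hx-/sprig attributes on the new nodes get their listeners.
// htmx only auto-processes content that is present at page load.
function appendAndProcess(container, fragments, process) {
  container.insertAdjacentHTML('beforeend', fragments.join(' '));
  process(container);
}

// Browser usage (sketch):
// axios.get('/locations').then((response) => {
//   appendAndProcess(this.container, response.data.results, htmx.process);
// });
```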
I've just tried adding that into my .js file where the HTML is inserted however it doesn't seem to have made a change? I'm using webpack so I have included it like so:
import 'htmx.org';
htmx.process(this.container)
this.container being my HTML element that holds the appended HTML. Is there anything else I may need to do?
I'm not sure about your specific setup, but the answer should be to have htmx reprocess the DOM.
| gharchive/issue | 2023-10-16T12:32:29 | 2025-04-01T06:45:30.699094 | {
"authors": [
"bencroker",
"charliestrafe"
],
"repo": "putyourlightson/craft-sprig",
"url": "https://github.com/putyourlightson/craft-sprig/issues/333",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2364897823 | Update colorbar tick names in sunpath gallery example
[x] Duplicate of #1998
[x] I am familiar with the contributing guidelines
[x] Pull request is nearly complete and ready for detailed review.
[x] Maintainer: Appropriate GitHub Labels (including remote-data) and Milestone are assigned to the Pull Request and linked Issue.
@echedey-ls, @IoannisSifnaios, @RDaxini
Hi GSoC students
This PR is rather simple, but could be a good exercise for you to review
Great @AdamRJensen ! I will review in depth later.
Handy links:
This PR example: https://pvlib-python--2097.org.readthedocs.build/en/2097/gallery/solar-position/plot_sunpath_diagrams.html#sphx-glr-gallery-solar-position-plot-sunpath-diagrams-py
Stable example: https://pvlib-python.readthedocs.io/en/stable/gallery/solar-position/plot_sunpath_diagrams.html#sphx-glr-gallery-solar-position-plot-sunpath-diagrams-py
Let's add @PhilBrk8 to the whatsnew file:
* :ghuser:`PhilBrk8`
@kandersolar I've addressed @RDaxini's review as well so this PR should be ready to merge. Might as well get it out of the way with the release of 0.11.0.
Let's add @PhilBrk8 to the whatsnew file:
* :ghuser:`PhilBrk8`
EDIT: LGTM the code from a quick glance
The Whatsnew-file is what exactly?
I get notified of every change?
Would be down for it.
...even if life gets in the way a lot, I still plan on contributing more to Open Source and picked out this library to do so -
Hey @PhilBrk8 !
The Whatsnew-file is what exactly?
It is the changelog, a set of files that summarize the changes made to each one of the past releases and the next one. They are the source file(s) to build the following page: https://pvlib-python.readthedocs.io/en/stable/whatsnew.html
I get notified of every change?
You won't. We mention you in the contributors section. It's a way of encouraging and giving credit to contributors. You can find yourself listed on the previous link. Even though your PR wasn't merged, the original idea is still yours.
Feel free to contribute whenever you can and want, you will be welcomed here!
| gharchive/pull-request | 2024-06-20T17:09:43 | 2025-04-01T06:45:30.710496 | {
"authors": [
"AdamRJensen",
"PhilBrk8",
"echedey-ls"
],
"repo": "pvlib/pvlib-python",
"url": "https://github.com/pvlib/pvlib-python/pull/2097",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
Data collection and classification guidelines
Description
Example
Like the material above, it would be nice to have a file that describes each folder and lays out a rough format as a guideline ^^
@JeMinMoon Thanks for contributing — I'll add this, with your content as the example, to the wiki as a "guidelines for data collection and classification" section.
You can check it on the wiki page ().
If the content needs any changes, please edit it and leave feedback =]
I don't think the content needs any changes — you organized it very neatly. It's just a shame the guidelines weren't created earlier.
@JeMinMoon Thank you. I feel the same way ㅠㅠ
I will close this issue.
| gharchive/issue | 2022-04-16T14:50:04 | 2025-04-01T06:45:30.724963 | {
"authors": [
"JeMinMoon",
"wlwhsxz"
],
"repo": "pwjdgus/Age_Friendly_City",
"url": "https://github.com/pwjdgus/Age_Friendly_City/issues/206",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |