added: 2025-04-01T06:36:51.577234
created: 2020-06-17T11:21:42
id: 640362862
metadata: { "authors": [ "ethanfrey", "webmaster128" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:646", "repo": "CosmWasm/cosmwasm", "url": "https://github.com/CosmWasm/cosmwasm/issues/433" }
source: gharchive/issue
Report input errors to contract in canonicalize_address/humanize_address

Before https://github.com/CosmWasm/cosmwasm/pull/431, the errors reported back to the contract were very strange. On the one hand, too many errors around Region handling were reported. On the other hand, errors from the backend implementation (FfiError::Other) are not reported and contain insufficient information. ~The only thing that made some sense was rejecting non-UTF8 human addresses as invalid input. But even this indicates a problem in the standard library, not the contract.~

See also:

```rust
// TODO: would be nice if do_canonicalize_address could differentiate between different errors
// from Api.canonical_address and return INVALID_INPUT for those cases as well.
let result = do_canonicalize_address(api, ctx, source_ptr2, dest_ptr);
match result.unwrap_err() {
    VmError::FfiErr {
        source: FfiError::Other { .. },
    } => {}
    err => panic!("Incorrect error returned: {:?}", err),
};
```

Since we are probably getting an error string with useful information via FfiError::Other soon, I guess the best option is to return an error Region pointer.

I'm wondering about pushing the remaining error handling changes (also https://github.com/CosmWasm/go-cosmwasm/issues/73 and https://github.com/CosmWasm/cosmwasm/issues/308) to 1.0. Since we will have one more release pre-0.39-cosmos-sdk, we don't need to throw all these in there. Unless we can get them done in the next few days, I would rather cut 0.9 without them. (I will most likely punt https://github.com/CosmWasm/go-cosmwasm/issues/73)

This can easily be done this week. Since it is a major ABI breakage, I'd like to have it in all upcoming testnets.

Will release an alpha today without this fix, since it does not affect contract developers.

Okay, fair enough. I will prepare a 0.9-alpha for go-cosmwasm and wasmd today as well (to help test cosmjs)

added: 2025-04-01T06:36:51.579950
created: 2017-02-17T08:52:44
id: 208375418
metadata: { "authors": [ "danieldahan", "whitepixelstudios" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:647", "repo": "CosmicMind/Motion", "url": "https://github.com/CosmicMind/Motion/issues/3" }
source: gharchive/issue
Jitter at animation end

I've just updated our code to utilise the new Motion framework instead of the old one built into Material. Using the Motion.rotation(angle:) function on a FabButton (as the primary button of a Material Menu) works fine in terms of animating the button. However, we're noticing a flash of the initial animation state at the end of the rotation (GIF recording).

`menu.views.first?.animate(animation: Motion.rotation(angle: -90))`

Any ideas what might cause this?

Can you try using motion for this?

`menu.views.first?.motion(.rotationAngle(-90))`

Thanks! That fixed the issue.

Awesome :)

added: 2025-04-01T06:36:51.583701
created: 2021-08-05T09:37:58
id: 961636297
metadata: { "authors": [ "EiffL", "andrevitorelli" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:648", "repo": "CosmoStat/autometacal", "url": "https://github.com/CosmoStat/autometacal/issues/30" }
source: gharchive/issue
(Small) Code Restructuring

Together with issue #28, I'm trying to reorganise the code so that it becomes clearer. So datasets will become data (and I intend, in the future, to do a cleanup of that), and we need subpackages for fitting methods (as we will have 3: one taken from ngmix, our simple moments & @b-remy's model fitting). This is mostly renaming and reorganising (except for, again, #28, which I'm doing right now), but I'll then make a PR and ask @EiffL to go through it. (A sketch of the implied layout follows this thread.)

You can open a draft PR, meaning it's not completely ready for review, but it can be convenient.

I think this can be closed? right @andrevitorelli ?

Yes, for now. I'm happy with the current structure (which didn't change much, tbh)
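One hypothetical reading of the layout proposed at the top of this thread (the subpackage and module names below are illustrative guesses based only on the description above, not taken from the repository):

```
autometacal/
├── data/              # formerly `datasets`, with a cleanup planned later
└── fitting/           # subpackage for the three fitting methods
    ├── ngmix_fit.py   # the method taken from ngmix
    ├── moments.py     # the simple moments method
    └── model_fit.py   # @b-remy's model fitting
```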

added: 2025-04-01T06:36:51.588591
created: 2022-08-05T14:30:50
id: 1330015275
metadata: { "authors": [ "Anmol1696", "faddat", "sascha1337" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:649", "repo": "CosmosContracts/juno", "url": "https://github.com/CosmosContracts/juno/issues/245" }
source: gharchive/issue
Cleaner juno/app

Overview

Create an app/upgrades dir to have all the upgrades in one place with versions, and abstract all keepers out to app/keepers, to make app/app.go cleaner and more maintainable.

Problem Definition

app/app.go is too big a file.

Proposal

Split app/app.go into app.go, upgrades/, keepers/, module.go (see the sketch after this thread)

ref: osmosis/app provenance/app

Hi there! I really liked your PR for the hub, this one: https://github.com/cosmos/gaia/pull/1580 I have been in Seoul and unable to give it a proper review. Are the osmo and provenance styles different? Point blank: I agree with you.

@faddat - refactor useful? might be some good practice to get warm

@faddat provenance style is a little different in the sense that they define their upgrades as a map with values as the upgrade functions, then they have functions to deal with this map and create handlers out of it. In osmo this is more modular and we create a package per upgrade version, instead of a key-value entry in a map. I prefer the osmo way since it is cleaner and will be able to do a lot more, including handling forks better as well.

I'm very happy to use the osmo style :D
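A rough sketch of the two styles being compared, based only on the description in this thread (the version numbers and file names are illustrative, not taken from the osmosis or provenance sources):

```
provenance style — one map from upgrade name to handler function:

    app/upgrades.go:   upgrades = { "v9": upgradeHandlerV9, "v10": upgradeHandlerV10, ... }

osmo style — one package per upgrade version, plus keepers split out:

    app/
    ├── app.go
    ├── keepers/
    │   └── keepers.go
    └── upgrades/
        ├── v9/upgrades.go     # defines the v9 upgrade handler
        └── v10/upgrades.go    # defines the v10 upgrade handler
```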

added: 2025-04-01T06:36:51.609955
created: 2022-03-13T21:31:20
id: 1167699325
metadata: { "authors": [ "CouchersBot", "lucaslcode" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:650", "repo": "Couchers-org/couchers", "url": "https://github.com/Couchers-org/couchers/issues/2680" }
source: gharchive/issue
Front-page login for logged in accounts

Subject: Front-page login for logged in accounts

Description: Not necessarily a bug, but it threw me off. Clicking the couchers logo put me on the log-in page. I was expecting to land on almost any other page. I often use the site logo as a sort of refresh between tasks.

Results: Could it be possible to check whether the user is logged in? If so, put them on the dashboard or some other page instead?

Backend version: develop-bf8528d9
Frontend version: develop-bf8528d9
User Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/99.0.4844.51 Safari/537.36
Page: https://couchers.org/
User: myhgis (10850)

Can't reproduce

added: 2025-04-01T06:36:51.615808
created: 2021-09-14T02:47:52
id: 995525147
metadata: { "authors": [ "rhanka" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:651", "repo": "Cour-de-cassation/judilibre-uptime", "url": "https://github.com/Cour-de-cassation/judilibre-uptime/issues/3" }
source: gharchive/issue
⚠️ sandbox-api.piste.gouv.fr healthcheck has degraded performance

In a467991, sandbox-api.piste.gouv.fr healthcheck (https://sandbox-api.piste.gouv.fr/minju/judilibre/v1.0/healthcheck) experienced degraded performance:

HTTP code: 200
Response time: 2973 ms

Resolved: sandbox-api.piste.gouv.fr healthcheck performance has improved in da7562e.

added: 2025-04-01T06:36:51.634901
created: 2016-03-23T21:41:55
id: 143087068
metadata: { "authors": [ "MastaCoder", "christopherkardas", "hassanila97" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:652", "repo": "Cr4xy/agar-lvlgen", "url": "https://github.com/Cr4xy/agar-lvlgen/issues/31" }
source: gharchive/issue
Testing out a new Mac OSX script

I of course don't own a Mac (because PCMasterRace, that's why), and I googled a bit, finding the following below. Can someone test this on their Mac to see if it works? Edit the old command text and change it to this (note the `;` before `done`, which the original suggestion was missing):

`bash -c 'while [ 0 ]; do date; node "$(cd "$(dirname "$0")"; pwd)/lvlgen.js"; done'`

DOES NO ONE HAVE A MAC? lol

Seems the agario community is intelligent enough not to buy this crap :p

Saw many hero members are on iOS/Mac, but they're also not very intelligent lol

I honestly think we should drop Mac development and leave it up to them to figure it out.

added: 2025-04-01T06:36:51.654455
created: 2022-04-22T14:38:15
id: 1212446911
metadata: { "authors": [ "mjendrysik-hpe" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:653", "repo": "Cray-HPE/docs-csm", "url": "https://github.com/Cray-HPE/docs-csm/pull/1428" }
source: gharchive/pull-request
Update credential documentation

Summary and Scope

Update to password change documentation to avoid an out-of-order operation resulting in the inability to power components on and off.

Issues and Related PRs

Resolves CASMHMS-5472 for mainline. The change will also be needed in release/1.2.5, release/1.2, release/1.0, release/0.9

Pull Request Checklist

[x] Target branch correct

/backport --dry-run release/1.2 release/1.1 release/1.0

/backport --dry-run release/1.2 release/1.1 release/1.0

/backport release/1.2

added: 2025-04-01T06:36:51.906094
created: 2024-04-22T11:16:37
id: 2256269691
metadata: { "authors": [ "annapoorna-s-alt", "haasken-hpe", "shivaprasad-metimath" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:654", "repo": "Cray-HPE/sat", "url": "https://github.com/Cray-HPE/sat/pull/214" }
source: gharchive/pull-request
CRAYSAT-1839-sat-bootsys-waiter-subclass-addition

Updating the sat bootsys kubelet start function to address cronjob creation failure

IM: CRAYSAT-1839
Reviewer: Ryan

Summary and Scope

Modified the do_kubelet_start function to add a wait after kubelet gets started, using the Waiter class that is already used in this module. Created a new Waiter subclass that implements the has_completed method to query the Kubernetes API to get the nodes in the cluster. Once the query returns successfully, we proceed with the next step. (A sketch of this pattern follows the test output below.)

Issues and Related PRs

Resolves CRAYSAT-1839

Testing

Yet to be tested.
Tested on: YTT
Test description: will power off the nodes and power them back on to validate the functionality during power-on.

Risks and Mitigations

Minimal

Pull Request Checklist

[x] Version number(s) incremented, if applicable
[x] Copyrights updated
[x] License file intact
[x] Target branch correct
[x] CHANGELOG.md updated
[ ] Testing is appropriate and complete, if applicable
[ ] HPC Product Announcement prepared, if applicable

Test output:

ncn-m001:/mnt/shiva # sat bootsys boot --stage platform-services --ceph-timeout 120
The following Non-compute Nodes (NCNs) will be included in this operation:
managers: ncn-m001 ncn-m002 ncn-m003
storage: ncn-s001 ncn-s002 ncn-s003 ncn-s004
workers: ncn-w001 ncn-w002 ncn-w003 ncn-w004
Are the above NCN groupings correct? [yes,no] yes
INFO: Executing step: Ensure containerd is running and enabled on all Kubernetes NCNs.
INFO: Executing step: Ensure etcd is running and enabled on all Kubernetes manager NCNs.
INFO: Executing step: Start and enable kubelet on all Kubernetes NCNs.
WARNING:root:MaxRetryError occurred while trying to connect to Kubernetes API.
WARNING:urllib3.connectionpool:Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fe3a9c987f0>: Failed to establish a new connection: [Errno 111] Connection refused')': /api/v1/nodes
WARNING:urllib3.connectionpool:Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fe3a9ca7f70>: Failed to establish a new connection: [Errno 111] Connection refused')': /api/v1/nodes
WARNING:urllib3.connectionpool:Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fe3a9c87790>: Failed to establish a new connection: [Errno 111] Connection refused')': /api/v1/nodes
[The MaxRetryError / NewConnectionError retry-warning triplet repeats in this pattern while the Kubernetes API is still unreachable; the capture ends mid-log.]
WARNING:urllib3.connectionpool:Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fe3aa881d60>: Failed to establish a new connection: [Errno 111] Connection refused')': /api/v1/nodes WARNING:urllib3.connectionpool:Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fe3aa87d4c0>: Failed to establish a new connection: [Errno 111] Connection refused')': /api/v1/nodes WARNING:urllib3.connectionpool:Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fe3aa87d1f0>: Failed to establish a new connection: [Errno 111] Connection refused')': /api/v1/nodes WARNING:root:MaxRetryError occurred while trying to connect to Kubernetes API. WARNING:urllib3.connectionpool:Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fe3aa8854c0>: Failed to establish a new connection: [Errno 111] Connection refused')': /api/v1/nodes WARNING:urllib3.connectionpool:Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fe3aa885940>: Failed to establish a new connection: [Errno 111] Connection refused')': /api/v1/nodes WARNING:urllib3.connectionpool:Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fe3aa885a30>: Failed to establish a new connection: [Errno 111] Connection refused')': /api/v1/nodes WARNING:root:MaxRetryError occurred while trying to connect to Kubernetes API. WARNING:urllib3.connectionpool:Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fe3a9c87790>: Failed to establish a new connection: [Errno 111] Connection refused')': /api/v1/nodes WARNING:urllib3.connectionpool:Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fe3a9c879d0>: Failed to establish a new connection: [Errno 111] Connection refused')': /api/v1/nodes WARNING:urllib3.connectionpool:Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fe3a9c87cd0>: Failed to establish a new connection: [Errno 111] Connection refused')': /api/v1/nodes WARNING:root:MaxRetryError occurred while trying to connect to Kubernetes API. 
WARNING:urllib3.connectionpool:Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fe3aa885b50>: Failed to establish a new connection: [Errno 111] Connection refused')': /api/v1/nodes WARNING:urllib3.connectionpool:Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fe3aa885d00>: Failed to establish a new connection: [Errno 111] Connection refused')': /api/v1/nodes WARNING:urllib3.connectionpool:Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fe3aa87d3a0>: Failed to establish a new connection: [Errno 111] Connection refused')': /api/v1/nodes WARNING:root:MaxRetryError occurred while trying to connect to Kubernetes API. WARNING:urllib3.connectionpool:Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fe3aa881190>: Failed to establish a new connection: [Errno 111] Connection refused')': /api/v1/nodes WARNING:urllib3.connectionpool:Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fe3a9ca1fa0>: Failed to establish a new connection: [Errno 111] Connection refused')': /api/v1/nodes WARNING:urllib3.connectionpool:Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fe3a9ca14f0>: Failed to establish a new connection: [Errno 111] Connection refused')': /api/v1/nodes WARNING:root:MaxRetryError occurred while trying to connect to Kubernetes API. WARNING:urllib3.connectionpool:Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fe3a9c98040>: Failed to establish a new connection: [Errno 111] Connection refused')': /api/v1/nodes WARNING:urllib3.connectionpool:Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fe3a9c87580>: Failed to establish a new connection: [Errno 111] Connection refused')': /api/v1/nodes WARNING:urllib3.connectionpool:Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fe3a9c87a00>: Failed to establish a new connection: [Errno 111] Connection refused')': /api/v1/nodes WARNING:root:MaxRetryError occurred while trying to connect to Kubernetes API. 
ERROR:/sat/venv/lib/python3.9/site-packages/sat/waiting.py:Waiting for condition "Kubernetes API availability check" timed out after 300 seconds
INFO: Executing step: Recreate cron jobs that have become stuck
INFO:sat.cli.bootsys.platform:Executing step: Recreate cron jobs that have become stuck
DEBUG:csm_api_client.k8s:Couldn't load the in-cluster config: Service host/port is not set. (proceeding under the assumption that the config should be loaded from the kubeconfig file)

The warnings seem to be too many, which may lead to confusion, even though they are along the expected lines.

Updated output, post addressing the comments:

ncn-m001:/mnt/shiva # sat bootsys boot --stage platform-services --ceph-timeout 120
The following Non-compute Nodes (NCNs) will be included in this operation:
managers: ncn-m001 ncn-m002 ncn-m003
storage: ncn-s001 ncn-s002 ncn-s003 ncn-s004
workers: ncn-w001 ncn-w002 ncn-w003 ncn-w004
Are the above NCN groupings correct? [yes,no] yes
INFO: Executing step: Ensure containerd is running and enabled on all Kubernetes NCNs.
INFO: Executing step: Ensure etcd is running and enabled on all Kubernetes manager NCNs.
INFO: Executing step: Start and enable kubelet on all Kubernetes NCNs.
INFO: Waiting for Kubernetes API to become reachable...
Waiting for condition "Kubernetes API available" timed out after 300 seconds
INFO: Executing step: Recreate cron jobs that have become stuck
WARNING: Jobs for cronjob "cray-dns-unbound-manager" in namespace "services" do not appear to be scheduled on time according to the cron job's schedule; recreating cron job.
WARNING: Jobs for cronjob "hms-discovery" in namespace "services" do not appear to be scheduled on time according to the cron job's schedule; recreating cron job.
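As an aside, the stuck-cron-job warnings above presumably come from comparing each cron job's last schedule time against the most recent time its cron schedule says a job should have been created. The following is only an illustrative sketch of such a check, not the sat implementation; the function name, the slack threshold, and the use of croniter are all assumptions:

# Illustrative sketch only: this is NOT the sat implementation. It shows the
# general idea behind the warning above, i.e. comparing a cron job's
# status.last_schedule_time against the most recent time its schedule says a
# job should have been created.
from datetime import datetime, timezone

from croniter import croniter
from kubernetes import client, config


def find_stuck_cron_jobs(namespace='services', slack_seconds=60):
    """Return names of cron jobs whose jobs appear not to be scheduled on time."""
    config.load_kube_config()
    batch_api = client.BatchV1Api()

    stuck = []
    now = datetime.now(timezone.utc)
    for cron_job in batch_api.list_namespaced_cron_job(namespace).items:
        if cron_job.spec.suspend:
            # Suspended cron jobs are not expected to be scheduled at all.
            continue
        # The most recent time a job *should* have been created per the schedule.
        expected = croniter(cron_job.spec.schedule, now).get_prev(datetime)
        last = cron_job.status.last_schedule_time if cron_job.status else None
        if last is None or (expected - last).total_seconds() > slack_seconds:
            stuck.append(cron_job.metadata.name)
    return stuck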
ncn-m001:/mnt/shiva # systemctl is-enabled kubelet
enabled
ncn-m001:/mnt/shiva # systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
     Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
    Drop-In: /usr/lib/systemd/system/kubelet.service.d
             └─10-kubeadm.conf
             /etc/systemd/system/kubelet.service.d
             └─10-kubelet.conf
     Active: active (running) since Thu 2024-04-25 06:46:32 UTC; 20min ago
       Docs: https://kubernetes.io/docs/
   Main PID: 2977924 (kubelet)
      Tasks: 79
     CGroup: /system.slice/kubelet.service
             └─ 2977924 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kube>

Apr 25 06:52:37 ncn-m001 kubelet[2977924]: E0425 06:52:37.079689 2977924 pod_workers.go:951] "Error syncing pod, skipping" er>
Apr 25 06:52:51 ncn-m001 kubelet[2977924]: E0425 06:52:51.079839 2977924 pod_workers.go:951] "Error syncing pod, skipping" er>
Apr 25 06:53:04 ncn-m001 kubelet[2977924]: E0425 06:53:04.080191 2977924 pod_workers.go:951] "Error syncing pod, skipping" er>
Apr 25 06:53:17 ncn-m001 kubelet[2977924]: E0425 06:53:17.079126 2977924 pod_workers.go:951] "Error syncing pod, skipping" er>
Apr 25 06:53:45 ncn-m001 kubelet[2977924]: I0425 06:53:45.778928 2977924 scope.go:110] "RemoveContainer" containerID="f58d126>
Apr 25 06:53:45 ncn-m001 kubelet[2977924]: I0425 06:53:45.779894 2977924 scope.go:110] "RemoveContainer" containerID="f58d126>
Apr 25 06:53:45 ncn-m001 kubelet[2977924]: E0425 06:53:45.781904 2977924 remote_runtime.go:296] "RemoveContainer from runtime>
Apr 25 06:53:45 ncn-m001 kubelet[2977924]: E0425 06:53:45.781995 2977924 kuberuntime_container.go:798] failed to remove pod i>
Apr 25 06:53:47 ncn-m001 kubelet[2977924]: I0425 06:53:47.796994 2977924 scope.go:110] "RemoveContainer" containerID="07bfd90>
Apr 25 06:53:47 ncn-m001 kubelet[2977924]: I0425 06:53:47.797035 2977924 scope.go:110] "RemoveContainer" containerID="a90ee4d>

ncn-m001:/mnt/shiva # kubectl cluster-info
Kubernetes control plane is running at https://<IP_ADDRESS>:6442
CoreDNS is running at https://<IP_ADDRESS>:6442/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

@shivaprasad-metimath, from your testing, I can see that this code does not behave correctly. Specifically, see these lines:

INFO: Waiting for Kubernetes API to become reachable...
Waiting for condition "Kubernetes API available" timed out after 300 seconds
INFO: Executing step: Recreate cron jobs that have become stuck
WARNING: Jobs for cronjob "cray-dns-unbound-manager" in namespace "services" do not appear to be scheduled on time according to the cron job's schedule; recreating cron job.
WARNING: Jobs for cronjob "hms-discovery" in namespace "services" do not appear to be scheduled on time according to the cron job's schedule; recreating cron job.

This shows that it timed out waiting for the Kubernetes API to become available. This means your has_completed method in your new KubernetesAPIWaiter class must never be returning True and allowing the waiter to complete. I looked into it, and I noticed that you are never loading the Kubernetes configuration, so it's not able to connect to the Kubernetes cluster. To fix this, please use the function load_kube_api from csm_api_client.k8s. This will properly ensure the Kubernetes configuration is loaded and then instantiate a CoreV1Api and return it to you.
Note that when I tested using load_kube_api in a sat bash shell, I found another bug with calling list_node on the resulting CoreV1Api object. I addressed that in my pull request for CRAYSAT-1848 here: https://github.com/Cray-HPE/sat/pull/216 Please take a look at that. Once that's fixed in the main branch, you'll want to merge main into your feature/CRAYSAT-1740 branch and then rebase your CRAYSAT-1839-bootsys-waiter-subclass-addition branch on the feature branch to pull in the fix before you test again.

While I was looking at other usages of the wait_for_completion method of various Waiters, I noticed that there is actually already a KubernetesAPIAvailableWaiter in sat.cli.bootsys.k8s. It uses get_api_resources instead of list_node, and that's fine. All we're really trying to do is query the API, and that's probably the simplest API call you could make. As an added bonus, it doesn't look like get_api_resources has the same problem with the Kubernetes version mismatch as described in CRAYSAT-1848. The only thing missing from that class that you have in yours is an info log message the first time the waiter finds the Kubernetes API to be unreachable. I think it's fine to add such a log message. Please modify and use the existing KubernetesAPIAvailableWaiter instead of implementing another one.
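For concreteness, here is a minimal sketch of what the modified waiter might look like. The Waiter base-class interface shown here (constructor signature, condition_name, has_completed) is an assumption based on this discussion rather than a copy of the sat source; only load_kube_api, get_api_resources, and the new INFO message come from the comments above.

# Minimal sketch of the suggested change: add a one-time INFO log to the
# existing KubernetesAPIAvailableWaiter in sat.cli.bootsys.k8s. The base-class
# interface is assumed, not copied from the sat source.
import logging

from csm_api_client.k8s import load_kube_api
from kubernetes.client.rest import ApiException
from urllib3.exceptions import MaxRetryError

from sat.waiting import Waiter

LOGGER = logging.getLogger(__name__)


class KubernetesAPIAvailableWaiter(Waiter):
    """Waits for the Kubernetes API to become available."""

    def __init__(self, timeout, poll_interval=1):
        super().__init__(timeout, poll_interval=poll_interval)
        # load_kube_api loads the Kubernetes config (in-cluster or from the
        # kubeconfig file) and returns a CoreV1Api instance.
        self.k8s_api = load_kube_api()
        self._logged_unreachable = False

    def condition_name(self):
        return 'Kubernetes API available'

    def has_completed(self):
        try:
            # get_api_resources is about the cheapest call that proves the
            # API server is actually answering requests.
            self.k8s_api.get_api_resources()
            return True
        except (ApiException, MaxRetryError):
            # Log only the first time the API is found to be unreachable, so
            # the output is not flooded with repeated warnings.
            if not self._logged_unreachable:
                LOGGER.info('The Kubernetes API is currently unreachable.')
                self._logged_unreachable = True
            return False

With a change along these lines, a boot run would log "The Kubernetes API is currently unreachable." at most once and then continue as soon as get_api_resources succeeds, which is what the "Latest output" below shows.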
Latest output:

ncn-m001:/mnt/shiva # sat bootsys boot --stage platform-services --ceph-timeout 120
The following Non-compute Nodes (NCNs) will be included in this operation:
managers: ncn-m001 ncn-m002 ncn-m003
storage: ncn-s001 ncn-s002 ncn-s003 ncn-s004
workers: ncn-w001 ncn-w002 ncn-w003 ncn-w004
Are the above NCN groupings correct? [yes,no] yes
INFO: Executing step: Ensure containerd is running and enabled on all Kubernetes NCNs.
INFO: Executing step: Ensure etcd is running and enabled on all Kubernetes manager NCNs.
INFO: Executing step: Start and enable kubelet on all Kubernetes NCNs.
INFO: The Kubernetes API is currently unreachable.
INFO: Executing step: Recreate cron jobs that have become stuck
WARNING: Jobs for cronjob "cray-bos-bitnami-etcd-snapshotter" in namespace "services" do not appear to be scheduled on time according to the cron job's schedule; recreating cron job.
WARNING: An error occurred while re-creating cronjob "cray-bos-bitnami-etcd-snapshotter" in namespace "services": (500)
WARNING: Reason: Internal Server Error
WARNING: HTTP response headers: HTTPHeaderDict({'Audit-Id': '57bb6e09-ec2b-4dcb-a2bf-86e0a855d65b', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'X-Kubernetes-Pf-Flowschema-Uid': '4ea23580-d341-47ea-a184-d76c9126b813', 'X-Kubernetes-Pf-Prioritylevel-Uid': '03d335b2-1a9f-4bed-979b-b187c7e83a43', 'Date': 'Tue, 30 Apr 2024 07:26:15 GMT', 'Content-Length': '545'})
WARNING: HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Internal error occurred: failed calling webhook "validate.kyverno.svc-fail": Post "https://cray-kyverno-svc.kyverno.svc:443/validate/fail?timeout=10s\": dial tcp <IP_ADDRESS>:443: connect: no route to host","reason":"InternalError","details":{"causes":[{"message":"failed calling webhook "validate.kyverno.svc-fail": Post "https://cray-kyverno-svc.kyverno.svc:443/validate/fail?timeout=10s\": dial tcp <IP_ADDRESS>:443: connect: no route to host"}]},"code":500}
[... the same "recreating cron job" warning followed by an identical kyverno webhook 500 error ("no route to host") repeats for each of the following cron jobs in namespace "services": "cray-bss-bitnami-etcd-snapshotter", "cray-dns-unbound-manager", "cray-fas-bitnami-etcd-snapshotter", "cray-hbtd-bitnami-etcd-snapshotter", "cray-hmnfd-bitnami-etcd-snapshotter", "cray-power-control-bitnami-etcd-snapshotter", "cray-uas-mgr-bitnami-etcd-snapshotter", "etcd-backup-pvc-snapshots-to-s3", "hms-discovery", and "sonar-sync" ...]
ncn-m001:/mnt/shiva #

Waiting for condition "Kubernetes API available" timed out after 300 seconds
ERROR: Fatal error in step "Start and enable kubelet on all Kubernetes NCNs." of platform services start: Failed to start kubelet: Kubernetes API not available

Why is this error popping up even after I see that kubelet is active and enabled?

The Kubernetes API availability check seems to be working fine with the expected output; during the cron job re-creation there is a delay observed, leading to the warnings. It could be observed only on this system. The complete log has been attached for reference: sat1.log

1st attempt:

ncn-m001:/mnt/shiva # sat bootsys boot --stage platform-services --ceph-timeout 120
The following Non-compute Nodes (NCNs) will be included in this operation:
managers: ncn-m001 ncn-m002 ncn-m003
storage: ncn-s001 ncn-s002 ncn-s003 ncn-s004
workers: ncn-w001 ncn-w002 ncn-w003 ncn-w004
Are the above NCN groupings correct? [yes,no] yes
INFO: Executing step: Ensure containerd is running and enabled on all Kubernetes NCNs.
INFO: Executing step: Ensure etcd is running and enabled on all Kubernetes manager NCNs.
INFO: Executing step: Start and enable kubelet on all Kubernetes NCNs.
INFO: Waiting up to 300 seconds for the Kubernetes API to become available
INFO: The Kubernetes API is currently unreachable.
INFO: Kubernetes API is available
INFO: Executing step: Recreate cron jobs that have become stuck
WARNING: Jobs for cronjob "cray-bos-bitnami-etcd-snapshotter" in namespace "services" do not appear to be scheduled on time according to the cron job's schedule; recreating cron job.
WARNING: An error occurred while re-creating cronjob "cray-bos-bitnami-etcd-snapshotter" in namespace "services": (500)
WARNING: Reason: Internal Server Error
WARNING: HTTP response headers: HTTPHeaderDict({'Audit-Id': 'cc070e0a-fce8-45a1-8902-7e5d4ab8ff49', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'X-Kubernetes-Pf-Flowschema-Uid': '206250a1-21b4-4b9d-9608-833e6a1583cd', 'X-Kubernetes-Pf-Prioritylevel-Uid': 'b5beaacd-765d-46f9-988c-cf2c75636da4', 'Date': 'Wed, 15 May 2024 10:23:29 GMT', 'Content-Length': '493'})
WARNING: HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Internal error occurred: failed calling webhook "validate.kyverno.svc-fail": Post "https://cray-kyverno-svc.kyverno.svc:443/validate/fail?timeout=10s\": context deadline exceeded","reason":"InternalError","details":{"causes":[{"message":"failed calling webhook "validate.kyverno.svc-fail": Post "https://cray-kyverno-svc.kyverno.svc:443/validate/fail?timeout=10s\": context deadline exceeded"}]},"code":500}
[... the same warning/500-error pair (this time "no route to host") then repeats for the cron jobs "cray-bss-bitnami-etcd-snapshotter", "cray-dns-unbound-manager", "cray-fas-bitnami-etcd-snapshotter", "cray-hbtd-bitnami-etcd-snapshotter", "cray-hmnfd-bitnami-etcd-snapshotter", "cray-power-control-bitnami-etcd-snapshotter", "cray-uas-mgr-bitnami-etcd-snapshotter", "etcd-backup-pvc-snapshots-to-s3", "hms-discovery", and "sonar-sync" ...]

2nd attempt:

ncn-m001:/mnt/shiva # sat bootsys boot --stage platform-services --ceph-timeout 120
The following Non-compute Nodes (NCNs) will be included in this operation:
managers: ncn-m001 ncn-m002 ncn-m003
storage: ncn-s001 ncn-s002 ncn-s003 ncn-s004
workers: ncn-w001 ncn-w002 ncn-w003 ncn-w004
Are the above NCN groupings correct? [yes,no] yes
INFO: Executing step: Ensure containerd is running and enabled on all Kubernetes NCNs.
INFO: Executing step: Ensure etcd is running and enabled on all Kubernetes manager NCNs.
INFO: Executing step: Start and enable kubelet on all Kubernetes NCNs.
INFO: Waiting up to 300 seconds for the Kubernetes API to become available
INFO: Kubernetes API is available
INFO: Executing step: Recreate cron jobs that have become stuck
2025-04-01T06:36:51.915206
2023-10-18T04:08:45
1948767020
{ "authors": [ "CrazyMarvin", "weblate" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:655", "repo": "Crazy-Marvin/MetadataRemover", "url": "https://github.com/Crazy-Marvin/MetadataRemover/pull/44" }
gharchive/pull-request
Translations update from Hosted Weblate Translations update from Hosted Weblate for Metadata Remover/Metadata. It also includes the following components: Metadata Remover/Metadata Remover Current translation status: Thank you very much for your support! 😘
2025-04-01T06:36:51.974369
2017-10-17T09:21:44
266051597
{ "authors": [ "Crinsane", "ellgibug" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:662", "repo": "Crinsane/LaravelShoppingcart", "url": "https://github.com/Crinsane/LaravelShoppingcart/issues/397" }
gharchive/issue
Store in DB Hello! Thank you for this package, it is very useful. In my project I want to do this: "User One" logs in, puts several items in the cart, and logs out. After a while he logs in again and sees the items in his cart. I know that I may use the store method, but I don't understand how, because I get a "CartAlreadyStoredException". Where may I put the code for this method? My code for adding products:

public function addProductToCart(Request $request, $id)
{
    if ($request->has('amount')) {
        $amount = $request->amount;
    } else {
        $amount = 1;
    }

    $product = Product::find($id);

    Cart::instance('shopping')->add($id, $product->name, $amount, $product->price);

    if (Auth::check()) {
        Cart::instance('shopping')->store(Auth::user()->id); // error!
    }

    return back();
}

and for displaying (now it shows items that are in the session, but I want to switch between the cart in the session for unauthenticated users and the cart in the DB for authenticated users):

public function cart()
{
    $cartItems = Cart::instance('shopping')->content();

    return view('orders.cart', compact('cartItems'));
}

To sum up, I need to use the session when the user is a guest and the DB when the user is logged in. Once again, thanks for the package. I hope someone can help me :)

I'm afraid you're using the 'store' method in another way than it's designed for. The idea is that you can 'store' the cart for a later point in time, and 'restore' it when the user wants it again. It's not really designed to store the cart in the database 'realtime' so to say.

Thanks for your answer :) If someone has this problem, here is my suggestion. In the logout method:

public function logout(Request $request)
{
    Cart::instance('shopping')->restore(Auth::user()->id);
    Cart::instance('shopping')->store(Auth::user()->id);

    $this->guard()->logout();

    $request->session()->invalidate();

    return redirect(url()->previous());
}

@humamalamin If I am logged in I store data in the session. My updated logout function:

public function logout(Request $request)
{
    // delete old cart items
    DB::table('shoppingcart')->where([
        ['identifier', Auth::user()->id],
        ['instance', 'shopping']
    ])->delete();

    DB::table('shoppingcart')->where([
        ['identifier', Auth::user()->id],
        ['instance', 'wishlist']
    ])->delete();

    // save new cart items
    Cart::instance('shopping')->store(Auth::user()->id);
    Cart::instance('wishlist')->store(Auth::user()->id);

    $this->guard()->logout();

    // clear session data; the cart becomes empty
    $request->session()->invalidate();

    return redirect(something_url);
}

When I log in, I combine data from the DB (if it exists) with data from the cart (if the customer has already added something):

public function login(Request $request)
{
    /*
     * .....
     * checking validation and logging user in.
     * if OK
     */

    // get cart from DB if it exists
    $storedCartItems = DB::table('shoppingcart')->where([
        ['identifier', Auth::user()->id],
        ['instance', 'shopping']
    ])->value('content');

    // get wishlist from DB if it exists
    $storedWishlistItems = DB::table('shoppingcart')->where([
        ['identifier', Auth::user()->id],
        ['instance', 'wishlist']
    ])->value('content');

    $storedCartItems = \unserialize($storedCartItems);
    $storedWishlistItems = \unserialize($storedWishlistItems);

    // check if the count of each product in the store is more than in the cart and more than 0 (only for the cart)
    if ($storedCartItems) {
        foreach ($storedCartItems as $item) {
            Cart::instance('shopping')->add($item->id, $item->name, $item->qty, $item->price)->associate('App\Product');

            // if it passes, add it to the cart in the session
            if (($item->model->qty > 0) && ($item->model->qty < $item->qty)) {
                Cart::instance('shopping')->update($item->rowId, $item->model->qty);
            // if it does not pass, do not add it to the cart in the session
            } elseif ($item->model->qty == 0) {
                Cart::instance('shopping')->remove($item->rowId);
            }
        }
    }

    // add items from the wishlist in the DB to the wishlist items in the session
    if ($storedWishlistItems) {
        foreach ($storedWishlistItems as $item) {
            Cart::instance('wishlist')->add($item->id, $item->name, $item->qty, $item->price)->associate('App\Product');
        }
    }

    // return redirect or smth else
}

That works for me. Hope it can help you :)
2025-04-01T06:36:51.990041
2018-04-03T10:50:15
310786658
{ "authors": [ "autosoftmultimedia", "rdelrosario" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:663", "repo": "CrossGeeks/FacebookClientPlugin", "url": "https://github.com/CrossGeeks/FacebookClientPlugin/issues/16" }
gharchive/issue
Share dialog Hi, how can I show the share dialog before posting on Facebook? You can create your own UI for sharing. We made it flexible so that you can create your own UX, instead of using the Facebook default dialog.
2025-04-01T06:36:52.001797
2022-12-08T16:06:46
1484947552
{ "authors": [ "jonathimer", "peoray" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:664", "repo": "CrowdDotDev/crowd.dev", "url": "https://github.com/CrowdDotDev/crowd.dev/issues/328" }
gharchive/issue
[C-288] Create tooltips for several terms in web app Several users asked for explanations/definitions for different terms, including:

- "Active" members
- Identities
- Engagement level
- Reach
- Attributes

Solution: Create tooltips for those terms in the web app
From SyncLinear.com | C-288
@jonathimer @joanagmaia Is this issue still valid? If so, can you please provide more context.
@nu Pinging @joanagmaia @nunoeufrasio again :) Is this issue still valid? If so, can you please provide more context.
duplicate
2025-04-01T06:36:52.008944
2023-04-12T21:27:41
1665287169
{ "authors": [ "CLAassistant", "erinmikailstaples" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:665", "repo": "CrowdDotDev/crowd.dev", "url": "https://github.com/CrowdDotDev/crowd.dev/pull/753" }
gharchive/pull-request
Bug Fix // Corrected Typo Changes proposed ✍️

- Corrected typos — "coulnd't" ➡️ "couldn't"
- Added period to end of full sentences

Screenshots (front-end changes only) - N/A

Checklist ✅
[X] Label appropriately with Feature, Enhancement, or Bug.
[N/A] Tests are passing
[N/A] New backend functionality has been unit-tested.
[N/A] Environment variables have been updated:
[N/A] Local frontend configuration: frontend/.env.dist.local, frontend/.env.dist.composed.
[N/A] Local backend: backend/.env.dist.local, backend/.env.dist.composed.
[N/A] Configuration docs have been updated.
[N/A] Team members only: update environment variables in override, staging and production env. files and trigger update config script.
[N/A] API documentation has been updated (if necessary) (see docs on API documentation).
[X] Quality standards are met.
[ ] All changes have been tested in a staging site.
[ ] All changes are working locally running crowd.dev's Docker local environment.

Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. You have signed the CLA already but the status is still pending? Let us recheck it.
2025-04-01T06:36:52.010731
2020-08-06T13:16:34
674306245
{ "authors": [ "Jhy1993", "aravindsankar28", "zoeleesss" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:666", "repo": "CrowdDynamicsLab/GroupIM", "url": "https://github.com/CrowdDynamicsLab/GroupIM/issues/1" }
gharchive/issue
code plz? Could you provide the code of GroupIM, plz? Thx! Thanks for your interest in our work, the code will be released by the end of the month. Tomorrow is the end of this month😃 Thanks for the reminder, the code is up now!
2025-04-01T06:36:52.030466
2023-07-29T20:29:27
1827647199
{ "authors": [ "HendX", "normand1" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:667", "repo": "CrunchyBagel/OutcastID3", "url": "https://github.com/CrunchyBagel/OutcastID3/pull/10" }
gharchive/pull-request
Fix Out of range errors in Data+String.swift Hi, I love the library and use it in my podcast app HyperCatcher! I ran into what looks like an out of range error on this mp3 file: https://api.substack.com/feed/podcast/135258128/8ebf19eff23e3b34077912c5ff4f40b5.mp3 From this podcast feed: https://www.latent.space/feed This PR is the fix I implemented in my app. Ah - nice catch. I'm not in a position to test this just at the minute. Just looking over it - the .single changes look good, but I'm not sure what the changes inside the .double case achieve exactly? Is it changing it to look forward instead of backwards? Sorry for the delay, I was at a hackathon last week when I worked on this. I did add the .double case changes in my app, but looking back at this I don't remember why, so I removed them for now. The .single case was where I was actually seeing the crash, so the .double case is probably fine as it is anyway. Thanks for this!
2025-04-01T06:36:52.047753
2022-03-06T19:40:56
1160697235
{ "authors": [ "CryanCode" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:668", "repo": "CryanCode/github-slideshow", "url": "https://github.com/CryanCode/github-slideshow/pull/3" }
gharchive/pull-request
Add CryanCodes File created a branch, created a file and made a commit, and opened a pull request
2025-04-01T06:36:52.050263
2021-09-08T10:45:25
990997177
{ "authors": [ "CrypticSignal", "askfriends" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:669", "repo": "CrypticSignal/av-converter", "url": "https://github.com/CrypticSignal/av-converter/issues/21" }
gharchive/issue
Add heroku deploy support Can you add Heroku deploy support? I've never deployed something to Heroku before and it's not really something that I'm interested in figuring out. What would the benefit be? Heroku offers free hours for 20 days, and if we add a CC it allows another 15 days, so it means we can host our app for free with unlimited bandwidth. It's easy and offers high-speed bandwidth. The server that https://av-converter.com runs on has unlimited bandwidth and, as I am paying for the server, it is free for everyone else including you. Adding Heroku support to this project is not something that I want to spend my time on figuring out, at least not at this moment in time. You are free to give it a go yourself and submit a pull request if you are successful.
2025-04-01T06:36:52.057369
2020-05-27T17:54:45
625906945
{ "authors": [ "coreycaplan3", "k06a" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:670", "repo": "CryptoManiacsZone/1inchProtocol", "url": "https://github.com/CryptoManiacsZone/1inchProtocol/issues/21" }
gharchive/issue
Add support for DMM The interface is very similar to Compound. All mTokens have the same number of decimals as their underlying counterpart. The exchangeRate is always a number with 18 decimal places, regardless of the token.

Master Controller - 0x4CB120Dd1D33C9A3De8Bc15620C7Cd43418d77E2

You can call #getDmmTokenIds to get all of the tokens by their ID and iterate through them, if you want. There are respective functions in there (underlyingTokenAddressToDmmTokenIdMap) for getting the underlying token by address, or for querying on the mToken address if you only have the underlying (eg DAI --> mDAI).

- mDAI - 0x06301057D77D54B6e14c7FafFB11Ffc7Cab4eaa7
- mETH - 0xdF9307DFf0a1B57660F60f9457D32027a55ca0B2
- mUSDC - 0x3564ad35b9E95340E5Ace2D6251dbfC76098669B

Minting is as simple as calling mint(uint underlyingAmount), where underlyingAmount is the amount of underlying you want to send into the contract. Note: a token approval is needed for calling mint, on the underlying contract, where the spender is set to the mToken contract address. Redeeming mTokens for the underlying, plus interest, is done through redeem(uint amount), where amount is the amount of mTokens to be sent to the contract and redeemed. Note: no token approvals are required to redeem mTokens. Thanks!
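For illustration only, here is a rough Python (web3.py) sketch of that approve-then-mint flow. The RPC endpoint and account are placeholders, and the two-entry ABI is hand-written from the signatures quoted in this issue, so treat this as a sketch rather than official DMM tooling:

    from web3 import Web3

    w3 = Web3(Web3.HTTPProvider("https://mainnet.example/rpc"))  # placeholder endpoint
    ACCOUNT = "0xYourAddress"  # placeholder; assumed to be managed by the node
    MDAI = "0x06301057D77D54B6e14c7FafFB11Ffc7Cab4eaa7"

    # Minimal hand-written ABI covering only the two calls quoted above.
    MTOKEN_ABI = [
        {"name": "mint", "type": "function", "stateMutability": "nonpayable",
         "inputs": [{"name": "underlyingAmount", "type": "uint256"}], "outputs": []},
        {"name": "redeem", "type": "function", "stateMutability": "nonpayable",
         "inputs": [{"name": "amount", "type": "uint256"}], "outputs": []},
    ]

    mdai = w3.eth.contract(address=MDAI, abi=MTOKEN_ABI)
    amount = 100 * 10**18  # mTokens share the underlying's decimals (18 for DAI)

    # Step 1 (omitted): call approve(spender=mDAI, amount) on the DAI contract.
    # Step 2: mint mDAI by sending in the underlying amount.
    mdai.functions.mint(amount).transact({"from": ACCOUNT})
    # Redeeming requires no prior approval:
    mdai.functions.redeem(amount).transact({"from": ACCOUNT})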
2025-04-01T06:36:52.060548
2024-10-11T17:35:07
2581874777
{ "authors": [ "Ctoic", "dexterousdhruv" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:671", "repo": "Ctoic/Lisbook", "url": "https://github.com/Ctoic/Lisbook/issues/35" }
gharchive/issue
Integrate Google Drive/Dropbox for Audiobook Uploads Description: Allow users to upload their own audiobooks from cloud services like Google Drive or Dropbox. This feature will enable users to sync their own audiobook collections with LisBook. Acceptance Criteria:

- Add "Upload from Google Drive" and "Upload from Dropbox" options in the app.
- Ensure proper handling of file formats and uploads.
- Test uploading from both cloud services.

I'd like to give it a shot. Please assign it to me. Sure @dexterousdhruv, I would love to see some cool contributions from your side 😄. Don't forget to ⭐ our repo and happy coding 👨🏼‍💻 @dexterousdhruv are you working on it? If you need any help let me know. I'm on it. Great @dexterousdhruv, go ahead. @Ctoic My question is: where would we store the files once the user uploads them from his respective cloud provider? Guess what, we have to set up an Express server. What do you say?
2025-04-01T06:36:52.064781
2024-10-28T19:35:34
2619364571
{ "authors": [ "IceOfWraith" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:672", "repo": "CubeCoders/AMP", "url": "https://github.com/CubeCoders/AMP/issues/1199" }
gharchive/issue
Memory Usage Misreported on ARK: SA Operating System Unknown AMP Version and Build Date 2.6 AMP Release Stream Mainline I confirm that [X] I have searched for an existing bug report for this issue. [X] I am using the latest available version of AMP. [X] my operating system is up-to-date. Intended Action N/A Expected Behaviour N/A Actual Behaviour N/A Reproduction The memory usage is misreported after the 2.6 upgrade. This is resolved in the release coming tomorrow.
2025-04-01T06:36:52.069636
2019-07-27T20:03:38
473672795
{ "authors": [ "omega9380", "pizzafox" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:673", "repo": "CubeCoders/AMP", "url": "https://github.com/CubeCoders/AMP/issues/131" }
gharchive/issue
BUG: No console output using Minecraft module after upgrade to <IP_ADDRESS> Bug Report System Information Operating System: Ubuntu Server 18.04/Kernel 4.15.0-55-generic AMP version and build date: v<IP_ADDRESS> built 25/07/2019 17.10 Which AMP release stream you're using: Mainline I confirm: [x] that I have searched for an existing bug report for this issue. [x] that I am using the latest available version of AMP. [x] that my operating system is up-to-date. Symptoms After upgrading my instances to <IP_ADDRESS>, console output is not working in the Minecraft instance web interface. The game is accepting commands from the server, but no text is showing on the console tab in the web interface. Reproduction Upgrade an existing <IP_ADDRESS> instance to <IP_ADDRESS> and check the console tab after starting the server (I have not tried creating a new instance). After clearing my browser cache, the console tab started working again. Please close this bug out as resolved. Thanks! You have a button at the bottom of the page that lets you "close" the issue.
2025-04-01T06:36:52.073233
2023-07-07T05:16:26
1792794576
{ "authors": [ "Braiam23", "Paneedah" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:674", "repo": "Cubed-Development/Modern-Warfare-Cubed", "url": "https://github.com/Cubed-Development/Modern-Warfare-Cubed/pull/189" }
gharchive/pull-request
General adjustments in Taurus and Beowulf 🤔 What type of PR is this? (check all applicable) [ ] 🍕 Addition [ ] ⌨️ Productivity [X] 🐛 Bug Fix [ ] 🔥 Optimization [ ] ⚙️ Configuration [ ] 🌟 Quality Of Life [X] ✨ Enhancement [ ] 📝 Documentation 📝 Description Fixes and renews some more things; maybe in the next one I will fix the accessories of the Taurus revolver and the M249, and maybe that will fix it. 🖼️ Screenshots/Recordings Bug: Looks good to me. Awaiting review from @Desoroxxx. I already added the changes to the changelog.
2025-04-01T06:36:52.076782
2023-02-11T10:56:51
1580788036
{ "authors": [ "HasanSibakhi", "sagarkhadka" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:675", "repo": "Cuberto/mouse-follower", "url": "https://github.com/Cuberto/mouse-follower/issues/17" }
gharchive/issue
data-magnetic Hi again, I added this tag (data-magnetic), but it didn't work. I think it's disabled! Thank you. For data-magnetic you have to wrap your element with a div that holds data-cursor-stick. Here's an example:

    <div data-cursor-stick='#stick-here'>
      <div id='stick-here'>
        <BiMenu />
      </div>
    </div>

@sagarkhadka thank you for your time. Yes, I know this point. I meant the magnet like the picture below. @HasanSibakhi For that effect I think Cuberto has a different repo. You can find it here. Maybe this will help.
2025-04-01T06:36:52.178616
2024-06-18T00:01:48
2358525163
{ "authors": [ "jkowalleck", "jkugler" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:681", "repo": "CycloneDX/cyclonedx-python-lib", "url": "https://github.com/CycloneDX/cyclonedx-python-lib/pull/635" }
gharchive/pull-request
#561: Add component and services for tools CycloneDX spec 1.5 deprecated an array of tools in bom.metadata and instead prefers an object with an array of components and an array of services. This PR implements that. This works de-serializing a Syft SBOM with a tool section like so:

    "metadata": {
      "timestamp": "2024-06-10T13:06:52-08:00",
      "tools": {
        "components": [
          {
            "type": "application",
            "author": "anchore",
            "name": "syft",
            "version": "1.4.1"
          }
        ]
      },
      "component": {
        "bom-ref": "08329a07b4eb8eac",
        "type": "file",
        "name": "./"
      }
    },

Next up: docs, XML (de)serialization code, and tests. fixes #561
a feature i would love to see in the end: metadata tools converters, that allow me to have a bunch of Components and Services, and when normalizing to CycloneDX 1.3, the Components and Services are converted to Tools in the resulting XML/JSON. So that i don't lose any data...
i dislike the current concept of ToolRepository that holds tools, services and components at the same time. This is an abomination of a data type, and teaching people how to use it properly will just be a heck of an effort. i'd rather have a type ToolRepository that holds services and components and no Tool! So, Metadata.tools would be truly Union[ToolRepository, SortedSet[Tool]]. The (de)serialization would be handled by a (de)normaliser (aka "Helper") and that's it. Is there something that speaks against this very simple and still pythonic solution?
@jkowalleck: So the drivers for my approach were two-fold: I am not 100% acquainted with the use cases of the library, that is, how people use the library, and I was trying to maintain 100% backward compatibility. Thus the reason for ToolRepository, which combined the three types. I wanted to be able to initialize an empty BomMetaData (which is allowed now) and then do:

    # old code still works
    bom.metadata.tools.add(....)
    # or
    bom.metadata.tools = my_sorted_set_of_tools

as well as be able to do:

    # new code would work as expected
    bom.metadata.tools.components.add(...)
    bom.metadata.tools.services.add(...)

If we don't have an object in bom.metadata.tools which responds to components and services attributes, then calling those attributes will generate an exception, and cause unexpected behavior. At least in my mind, I would expect to be able to instantiate a Bom, and then call bom.metadata.tools.components.add(...) or bom.metadata.tools.components = ...

> a feature i would love to see in the end: metadata tools converters, that allow me to have a bunch of Components and Services, and when normalizing to CycloneDX 1.3, the Components and Services are converted to Tools in the resulting XML/JSON. So that i don't lose any data...

That would be really neat. Do you mean a stand-alone tool? Or built in to the new type? Is there a way to know in the normalizing functions which version we're serializing for? I did not see that anywhere in the docs, but it would be great if we could do that.
re https://github.com/CycloneDX/cyclonedx-python-lib/pull/635#issuecomment-2177181574 Thank you for the explanation. I understand the reasoning behind the current implementation of ToolsRepo now. For the sake of usability, you are right to use it. Strict API backwards compatibility is not really needed; clean code and documentation are more important. An improvement I could imagine: have it a much simpler, documented container-object like so:

    class ToolsRepository:
        """our implementation of the tools repo"""

        tools: SortedSet[Tool]
        """DEPRECATED tools"""

        components: SortedSet[Component]
        """docstring here..."""

        services: SortedSet[Service]
        """docstring here..."""

        def __init__(self, *,
                     components: Optional[Iterable[Component]] = None,
                     services: Optional[Iterable[Service]] = None,
                     # Deprecated in v1.x
                     tools: Optional[Iterable[Tool]] = None,
                     ):
            if tools:
                warn("deprecation message here...", DeprecationWarning)
            self.tools = SortedSet(tools or ())
            self.components = SortedSet(components or ())
            self.services = SortedSet(services or ())

Thanks for the explanation. Given the comment here: https://github.com/CycloneDX/cyclonedx-python-lib/blob/main/cyclonedx/model/__init__.py#L1079 I'd like to try to keep backward compatibility for now. The code to maintain this is probably only 60 to 70 or so lines more than breaking backward compatibility, and once the support for List[Tool] goes away completely, the semantics for tools.components and tools.services won't change.
Everything seems to be working, save the problem I mentioned here: https://cyclonedx.slack.com/archives/CVA0QJEVA/p1718821402381719
Tests for new functionality are in place. Pretty sure I didn't do it the best way, but just wanted to show the new functionality works. :) We can re-work the tests before merge if need be. BTW, this maintains 93% test coverage. I have a few more code paths I want to test as well.
@jkugler, before we can use your contribution, we need you to sign-off your commits. here is why this is required and what it implies: https://github.com/CycloneDX/cyclonedx-python-lib/blob/main/CONTRIBUTING.md#sign-off-your-commits here is a step-by-step instruction on how to do this: https://github.com/CycloneDX/cyclonedx-python-lib/pull/635/checks?check_run_id=26536713378
There is a decision we'll need to make in how to handle the current behavior in BomMetaData:

    if not tools:
        self.tools.add(ThisTool)

This adds the CycloneDX information if none is provided when initializing BomMetaData, and it will generate an error if someone later does this:

    bom.metadata.tools.components = ...
    # or
    bom.metadata.tools.services = ...

However, there will not be an error if a developer does bom.metadata.tools.components.add(...) or similar with services, and if there are components or services, those will be rendered and not the [Tools]. If we're OK with that behavior, that is, telling users to provide a ToolsRepository themselves, or use .add(), I'm OK with that.
@jkugler, before we can use your contribution, we need you to sign-off your commits.
Yes, I did that in https://github.com/CycloneDX/cyclonedx-python-lib/pull/635/commits/2bbd659eec2ac6711da14e6f265c208b4ee61ccb. I didn't realize I would need to do that for every commit, as I assumed the PR would be squash merged into a single commit, and thus include the sign-off. I'll get it fixed.
re: https://github.com/CycloneDX/cyclonedx-python-lib/pull/635#issuecomment-2183494986 Why all the effort to make the thing backwards compatible? Well, the old Metadata.tools was of type SortedSet[Tool]. The new one implements a subset of SortedSet's functionality. If you really wanted to make it backwards compatible, then ToolsRepository MUST extend SortedSet. Then most of your concerns would be solved, right?
re: https://github.com/CycloneDX/cyclonedx-python-lib/pull/635#issuecomment-2183523501 it will be squashed on merge, that is true, but only if each and every commit was signed-off. Otherwise, we would squash/merge a thing that never was legally usable :-)
re: https://github.com/CycloneDX/cyclonedx-python-lib/pull/635#issuecomment-2183605697 I could inherit from SortedSet. I'm not sure if that's necessary, though.
I guess by "backward compatible," I was more shooting for "all the existing tests pass", as that should indicate all existing (supported) use cases continue to work. Since we don't even document the BomMetaData class (nor BomMetaData.tools), I think it is a reasonable expectation that "tools" behaves like a Set. I took that into account with these functions in ToolsRepository:

    def __len__(self) -> int:
        return len(self._tools)

    def __bool__(self) -> bool:
        return any([self._tools, self._components, self._services])

    def __getattr__(self, name: str) -> Any:
        """
        Enables us to behave as a list of tools to maintain backward compatibility.

        Returns: An attribute of SortedSet
        """
        return getattr(self._tools, name)

    def __iter__(self) -> Iterator[Tool]:
        """
        Also part of acting as a list of tools.

        Returns: Iterator[Tool]
        """
        for t in self._tools:
            yield t

That will keep the behavior of the set, passing all unknown attributes to SortedSet[Tool], and thus exposing the entire public interface of SortedSet[Tool]. I think we'll be good. If we want to release a breaking change, and call this out, I'm not against that, but I think most of the code out there using this library will continue to work. I know mine will. :)
@jkugler please add test fixtures/models with the new features you've added? they go to https://github.com/CycloneDX/cyclonedx-python-lib/blob/main/tests/_data/models.py the new test fixture should be built by functions called like get_bom_<something>(). see also: https://github.com/CycloneDX/cyclonedx-python-lib/blob/49a93a03b38574f264a49e9515cd6aa7b0b0f4c5/tests/_data/models.py#L1128-L1165
@jkugler re https://github.com/CycloneDX/cyclonedx-python-lib/pull/635#issuecomment-2187510084 well, changing a property BomMetaData.tools from SortedSet[Tool] to something different is considered a breaking change. Why bother having backwards compatibility when we have a breaking change anyway?

> Since we don't even document the BomMetaData class (nor BomMetaData.tools)

We do. A type annotation is considered documentation. it is even rendered as such, see https://cyclonedx-python-library.readthedocs.io/en/latest/autoapi/cyclonedx/model/bom/index.html#cyclonedx.model.bom.BomMetaData.tools

> well, changing a property BomMetaData.tools from SortedSet[Tool] to something different is considered a breaking change. Why bother having backwards compatibility when we have a breaking change anyway?

I see what you mean. I was thinking about "breaking behavior" in this case. We can publish a breaking change, but I hope the way I've done things will result in no needed code changes for most of our users.

> We do. A type annotation is considered documentation. it is even rendered as such.

Somehow, I missed that. I apologize.
missing test cases:

- craft a BOM with metadata.tools having 1 Component, 1 Service, and 1 Tool; render the BOM to XML/JSON 1.0 to 1.6 without losing essential data and schema-validate the result. This should result in a set of 3 Tool in JSON/XML: Component/Service were converted to Tool, as all must be Tool, to be schema compatible.
- craft a BOM with metadata.tools having 1 Component, 1 Service, and NO Tool; render the BOM to XML/JSON 1.0 to 1.6 without losing essential data and schema-validate the result. This should show how in JSON/XML 1.4 and before this results in a set of 2 Tool (Component/Service were converted to Tool, to be schema compatible), while in JSON/XML 1.5 and later Component/Service were NOT converted to Tool.

re https://github.com/CycloneDX/cyclonedx-python-lib/pull/635#issuecomment-2190086101 There are many fields in Component and Service which do not map to the fields available in Tool. Silently converting those objects to Tool would quietly discard a host of information and the user would be none the wiser, and would assume all the information they have added has been rendered to the SBOM. In this case, there are five fields in Tool, 27 fields in Component, and 17 in Service. This would actually cause a good deal of data loss while the user assumes the SBOM will contain all data they have added. I would propose one of two possibilities: On the "quiet" side: print out a large warning if there is information in SortedSet[Component] or SortedSet[Service] and we are rendering in CDX < 1.5. On the "loud" side: refuse to render CDX < 1.5 when there is information in SortedSet[Component] or SortedSet[Service] and raise some kind of exception. To minimize data loss, and to subscribe to the "principle of least surprise," I would direct a user who wanted to produce a CDX <= 1.4 SBOM to use Tool, and CDX >= 1.5 to use Component/Service. That said, if we do the auto-conversion, where and how will we document this so it is obvious to users of the library?
re https://github.com/CycloneDX/cyclonedx-python-lib/pull/635#issuecomment-2190165893 i'd vote for option 3): silently transform and accept loss of unavailable data, while carrying over all available data. On rendering, we also do not print a warning if a property is populated that is not available in the target version. This library adheres to the CycloneDX specification. If a user knowingly uses data models that are not available in the target version, they would not be surprised to lose this data. :-) The target audience is well-informed software developers. The model transform on rendering to lower versions is a convenience feature many people need. We would put it in the library, so others don't need to reinvent the wheel, and since we already adhere to the spec and are the experts in the field.
@jkugler, could you rebase on latest master and fix the conflicts? I had to do some style changes (#643) to make the review easier. Sorry for the inconvenience.
@jkowalleck Is this what you had in mind? 1a639a6
@jkowalleck For the "auto conversion" of Components and Services to Tools for older schemas, does this mapping work?

Tools:
- Vendor
- Name
- Version
- Hashes
- externalReferences

Components:
- Supplier -> Vendor? Or Author -> Vendor? Or Publisher -> Vendor?
- Name -> Name
- Version -> Version
- Hashes -> Hashes
- externalReferences -> externalReferences

Services:
- Provider -> Vendor?
- Name -> Name
- Version -> Version
- No Hashes
- externalReferences -> externalReferences

I'm still really concerned about the amount of data that would be lost, but we can go this route if you think it best.

> @jkowalleck Is this what you had in mind? 1a639a6

yes. that looks great.

> Supplier -> Vendor? Or Author -> Vendor? Or Publisher -> Vendor?

Group -> Vendor. Author, Publisher etc are missing in Tool - that is why Component was introduced as a possible item-class of the tools repository.

> Provider -> Vendor?

Provider -> Vendor.

> I'm still really concerned about the amount of data that would be lost, but we can go this route if you think it best.

Think of it more like this: downstream users might lose some data, but they know about this - and, most importantly, we save as much data as possible.
Code was added to "down-convert" from new Tools to old Tools. Some snapshot updates. Will look more and see what additional testing needs to be done. Also need to update XML rendering.
@jkowalleck:

> craft a BOM with metadata.tools having 1 Component, 1 Service, and 1 Tool; render the BOM to XML/JSON 1.0 to 1.6 without losing essential data and schema-validate the result. This should result in a set of 3 Tool in JSON/XML: Component/Service were converted to Tool, as all must be Tool, to be schema compatible.

As currently coded, this cannot be done, because an exception will be thrown if one attempts to have a Tool and a Component/Service. Is that acceptable? Or are you proposing we accept all conflicting objects, and down-convert if there are Components or Services with Tool? The spec says Tools XOR Components/Services. I would think if someone was going to use Components/Services, they would want that detail and would want to be warned they are still using Tool and need to upgrade/convert that to a Component. Also, if they do not add Components/Services in BomMetadata upon initialization, and it gets a default ThisTool, then they would never be warned when they add Components/Services and it's down-converted. I would rather be explicit about "you shouldn't do that" instead of them seeing the output and wondering why their Components/Services were converted.

> craft a BOM with metadata.tools having 1 Component, 1 Service, and NO Tool; render the BOM to XML/JSON 1.0 to 1.6 without losing essential data and schema-validate the result.

I think this is done, as I see the auto-generated tests and snapshots from get_bom_with_tools_with_component_and_service.
Bump.
this idea of mutual exclusive properties is true for the schema, but not for the data models. therefore, it was removed via 59b0987af61718a1c412aca1b83ca7b3b7b67bae
I am trying to go through the changes made, but I'm not understanding where the code is. I see a bunch of code was moved to cyclonedx/serialization/__init__.py in this commit: 376dfa8c1fe59f983ec1217ad83b3cf2bd2e9e1b but when I pull down my branch: https://github.com/jkugler/cyclonedx-python-lib/blob/561_add_components_and_services/cyclonedx/serialization/__init__.py I don't see that code. Was that code not pushed to my branch? Sorry for the confusion.
i am very sorry for the inconvenience. 376dfa8c1fe59f983ec1217ad83b3cf2bd2e9e1b resp. 427add4e8b91a56dba5d565f7809c998c1ba3e96 was a version where i tried to move the ToolsRepositoryHelper to the place where other existing helpers were. this did not work properly, as it caused cyclic includes. So it was reverted via 4a2ac6526d0cdcb1d01f32d698d8b568399b6454. maybe this helps to see the actual changes: git diff 73007f84fc043924f65560e143ba5adbdab56be2...c937f215e3ac4b56c33cc5da2b0444ee0a22807c https://github.com/jkugler/cyclonedx-python-lib/compare/73007f84fc043924f65560e143ba5adbdab56be2...c937f215e3ac4b56c33cc5da2b0444ee0a22807c
I noticed the test coverage for cyclonedx/model/tool.py went from 100% to 97% after the recent changes. Would you like me to make sure the missed statements are checked?
did that. please review
did that, cov at 100% now. please review
Looks great! Let's get it merged! Thanks again for all your help and patience on this.
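To make the field mapping above concrete, here is an illustrative Python sketch of the down-conversion; it is not the actual helper code from this PR, Tool/Component/Service stand in for the cyclonedx.model classes, and group/provider may first need converting to a plain vendor string:

    def as_legacy_tools(repo):
        """Collapse a ToolsRepository into legacy Tool entries for CycloneDX <= 1.4.

        Follows the mapping agreed above; fields that Tool cannot carry are dropped.
        """
        tools = set(repo.tools)
        for c in repo.components:
            # Component: Group -> Vendor, plus name/version/hashes/externalReferences
            tools.add(Tool(vendor=c.group, name=c.name, version=c.version,
                           hashes=c.hashes,
                           external_references=c.external_references))
        for s in repo.services:
            # Service: Provider -> Vendor; Service has no hashes to carry over
            tools.add(Tool(vendor=s.provider, name=s.name, version=s.version,
                           external_references=s.external_references))
        return tools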
2025-04-01T06:36:52.233659
2014-05-14T07:54:21
33469469
{ "authors": [ "DmitryOlshansky", "MartinNowak", "andralex", "ibuclaw", "yglukhov" ], "license": "BSL-1.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:683", "repo": "D-Programming-Language/dmd", "url": "https://github.com/D-Programming-Language/dmd/pull/3547" }
gharchive/pull-request
Optional monitors This PR introduces an optional monitors feature. The whole idea is that Object doesn't contain a __monitor field anymore. TypeInfo_Class will hold a monitor offset if a class has one (by marking the class declaration with the @monitor attribute), or 0. Monitor lookup is done with a hash map if monitorOffset is 0. The hash map is protected by a primitive RW spin lock. Monitor finalization will only look up monitors if a monitor was allocated at least once for the type of object being finalized. druntime counterpart: https://github.com/D-Programming-Language/druntime/pull/789 discussion: <EMAIL_ADDRESS>
I'm in favor of this. The current backward-compatible approach is the way to go, though I agree with @MartinNowak that in the long run we may envision complete deprecation. Please rebase and let's push this through. Thanks!
Half a year later w/o merging this patch, the idea still looks sexy to me. @MartinNowak any plans to move on this in either of the 2 competing plans?
Well, what's going on? "All checks have failed"

> Half a year later w/o merging this patch, the idea still looks sexy to me. @MartinNowak any plans to move on this in either of the 2 competing plans?

Yes, I'm still opposed to adding a global hash, and the outlined plan still makes sense. Furthermore, having monitor support on all classes creates an ownership/attribute issue for the monitor (https://github.com/MartinNowak/phobos/commit/8cf0ec29ad65ac2a13bd6917b4ff3da0fdea5ab0#diff-4e008aedb3026d4a84f58323e53bf017R4883). I don't have the capacity to pull this story, but starting by adding an @(Object.Monitor) UDA and recognizing that in the compiler should be fairly trivial (e.g. look at the objective-c changes).

> recognize monitor UDA and deprecate synchronizing on classes without it

Let's proceed with stage 1 for when 2.071 opens then? Do we have available documentation / rationale on the deprecated features page? Or a DIP?

> Do we have available documentation / rationale on the deprecated features page? Or a DIP?

Sure, a small entry on http://dlang.org/deprecate.html would be nice, a DIP is overkill though.
2025-04-01T06:36:52.235786
2014-12-05T14:06:01
51102090
{ "authors": [ "9rnsr", "CyberShadow" ], "license": "BSL-1.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:684", "repo": "D-Programming-Language/dmd", "url": "https://github.com/D-Programming-Language/dmd/pull/4193" }
gharchive/pull-request
[REG2.067a] Issue 13775 - Broken explicit casting of dynamic array slices of known size to static array of different type https://issues.dlang.org/show_bug.cgi?id=13775 Support reinterpret-casting a slice expression with compile-time-known bounds, which can be implicitly typed as T[n], to another static array type U[m], iff their sizes are the same (T[n].sizeof == U[m].sizeof). Another regression filed against this change: https://issues.dlang.org/show_bug.cgi?id=14582
2025-04-01T06:36:52.237369
2015-02-05T00:30:41
56610002
{ "authors": [ "DmitryOlshansky", "Orvid" ], "license": "BSL-1.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:685", "repo": "D-Programming-Language/phobos", "url": "https://github.com/D-Programming-Language/phobos/pull/2962" }
gharchive/pull-request
[WIP] std.file Refactoring DO NOT MERGE So far this cleans up DirEntry by factoring out the common interface between Windows and Posix. It also moves the examples in the documentation into unittest blocks instead. There is still more work to be done in std.file, but this is just the start, and I mostly just want to make sure I didn't break anything while doing it. Any news? Going to close as it seems stuck.
2025-04-01T06:36:52.263194
2017-11-21T00:13:03
275542478
{ "authors": [ "Patskimoto", "intenscia", "leper" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:686", "repo": "DBolical/modioSDK", "url": "https://github.com/DBolical/modioSDK/issues/30" }
gharchive/issue
File responses should have a filehash object instead of a string. The current API (v1) hard-codes MD5 as the file hash. MD5 should be considered broken, as collision attacks are relatively easy to do and do not take long. To make the API easier to update/extend in the future (without even bumping the API version), having filehash be an object would be a lot nicer:

    filehash: {
      md5: "abcd",
      sha256: "1234",
      sha3-512: "5689"
    }

By just providing multiple hashes, adding better hashes is easily doable, without breaking users of the API, and allowing nice updates. In case some hash is severely broken in the future, one could even just drop that one from a new version of the API. A good suggestion - will discuss within the team. Fantastic idea, allows future updates easily - will get this implemented. Also, while MD5 is unreliable due to collisions, this feature is primarily there as an integrity check, but we will look at adding stronger methods. Yes, all of MD5 and the SHA family of hashes are just there to verify the integrity of the file transfer (one could also use CRC for that if one wanted to). As for something stronger, that is the main point of #31.
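As a sketch of how a client could consume such an object, the following Python snippet verifies a downloaded file against a multi-hash filehash dict shaped like the JSON above (the key names are mapped onto hashlib algorithm names; this is an illustration, not mod.io SDK code):

    import hashlib

    def verify_file(path: str, filehash: dict) -> bool:
        # "sha3-512" -> "sha3_512" etc., matching hashlib's algorithm names
        hashers = {name: hashlib.new(name.replace("-", "_")) for name in filehash}
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                for h in hashers.values():
                    h.update(chunk)
        # Every digest the server advertises must match.
        return all(hashers[name].hexdigest() == digest
                   for name, digest in filehash.items())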
2025-04-01T06:36:52.273778
2017-05-25T14:42:46
231359871
{ "authors": [ "colinxfleming", "lwaldsc", "mebates", "rudietuesdays" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:687", "repo": "DCAFEngineering/dcaf_case_management", "url": "https://github.com/DCAFEngineering/dcaf_case_management/issues/1077" }
gharchive/issue
Add medicaid coverage to clinic data? Thanks for creating an issue! Please fill out this form so we can be sure to have all the information we need, and to minimize back and forth.
What are we trying to do? Baltimore is interested in tracking medicaid info for clinics. We should figure out how to implement it and what they're hoping to get out of it.
What feature or behavior is this required for? ???
How could we solve this issue? (Not knowing is okay!) Probs start by talking to the baltimore folks about how they use this up the road!
Anything else? not yet!
@lwaldsc or @nerdygirl537 - can you follow up on the user requirements for Baltimore and medicaid, please? This issue doesn't have enough info to do any UX work yet
Re: Annie at BAF "So I think what we'd like is the ability to, in DARIA, put in info re: which clinics take Maryland Medicaid and to what gestational age (similar to how you can fill in costs now). Colin mentioned something about creating a clinic lookup tool, so I just wanted to make sure that if/when that is built, it also includes information for Medicaid patients so our CMs can steer them to the best clinic for them."
Pretty simple ask here, I think. Based on the above from Annie, I think there are two enhancements here:

- deal with Medicaid clinics in #978
- implement something similar to the 'Show only clinics that accept NAF funds' checkbox for Medicaid clinics

Either way, the route to both of these is slapping an extra boolean field for accepts_medicaid on the clinic object, and then adjusting the form and tests. Retagging as backend for now, and we can take advantage of the benefits down the road.
I'll work on this tonight!
2025-04-01T06:36:52.276432
2018-07-26T14:16:30
344863453
{ "authors": [ "16in17", "rserizel" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:688", "repo": "DCASE-REPO/dcase2018_baseline", "url": "https://github.com/DCASE-REPO/dcase2018_baseline/issues/15" }
gharchive/issue
task4: This video is unavailable (deleted by user) When using the code to download data from YouTube, I found that part of the dataset had been deleted by users. How can I find them? Thanks for your interest in task 4. As explained in the dataset readme: "The script produces missing_files[dataset].csv log files (where [dataset] corresponds to the name of a particular set) with a list of audio files that were not downloaded by the script. After completion, if some of the files were not downloaded, you might want to run the script a second time to download the missing files. If you are experiencing problems downloading the full dataset please contact the task organizers (see also task 4 official page)" If you send these files to us (Nicolas Turpault or Romain Serizel), we'll take care of providing the missing audio files. Hello, I am experiencing problems downloading the full dataset (some eval files are missing), can you provide them?
2025-04-01T06:36:52.287639
2020-02-05T18:26:12
560549999
{ "authors": [ "cneidle", "kwasiopoku" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:689", "repo": "DCS-LCSR/ASL-DAI", "url": "https://github.com/DCS-LCSR/ASL-DAI/issues/203" }
gharchive/issue
DAI searching is completely broken! Nothing happens when you do a normal DAI search... thanks. Note that this doesn't work any better when logged in. This has been resolved.
2025-04-01T06:36:52.288503
2020-10-10T13:51:08
718624280
{ "authors": [ "kwasiopoku" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:690", "repo": "DCS-LCSR/ASL-DAI", "url": "https://github.com/DCS-LCSR/ASL-DAI/issues/251" }
gharchive/issue
Set default occurrence for all signs being added to signbank if occurrence is the only sign variant Done
2025-04-01T06:36:52.315698
2020-02-13T12:25:06
564648799
{ "authors": [ "davidgisbey", "stevehook" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:691", "repo": "DFE-Digital/apply-for-postgraduate-teacher-training", "url": "https://github.com/DFE-Digital/apply-for-postgraduate-teacher-training/pull/1358" }
gharchive/pull-request
Add email for all choices being rejected Context When a user receives a rejection and all of their course choices have been rejected, they should be sent an email. Changes proposed in this pull request Add an email to the CandidateMailer Guidance to review Emails https://docs.google.com/document/d/1VH_nxuLTiVCkmKhC4eaBpFC61Hd5_Ol1tKwOul-2uUs/edit#heading=h.hz139gcneav5 This email needs a provider name, course name, candidate name, the number of choices (for pluralisation), and a rejection reason. I've struggled to get all the below objects associated using build_stubbed. Is there an easier way? The current implementation is a pain for adding it to PreviewCandidateMailer. Also, what would happen if a provider refused to give feedback was not covered. I added the following: This needs a content review. Link to Trello card https://trello.com/c/22e7C80p/840-email-🙅♀️-a-provider-has-rejected-your-application-to-candidate Things to check [x] This code doesn't rely on migrations in the same Pull Request [x] If this code includes a migration adding or changing columns, it also backfills existing records for consistency [x] API release notes have been updated if necessary [x] New environment variables have been added to the Azure config I had a similar struggle with build_stubbed yesterday - it has limitations! I was able to set up a one-to-many association by using a combination of hard-coded ids and adding to collections using plain build (so a bit hacky). https://github.com/DFE-Digital/apply-for-postgraduate-teacher-training/blob/master/spec/mailers/candidate_mailer_spec.rb#L287-L298 It may be completely different for your case.
2025-04-01T06:36:52.322342
2019-12-11T09:24:25
536242648
{ "authors": [ "duncanjbrown", "stevehook" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:692", "repo": "DFE-Digital/apply-for-postgraduate-teacher-training", "url": "https://github.com/DFE-Digital/apply-for-postgraduate-teacher-training/pull/849" }
gharchive/pull-request
Minor API documentation fixes Context This is a follow-up to https://github.com/DFE-Digital/apply-for-postgraduate-teacher-training/pull/826. A number of issues were raised after manual testing by @fofr. This PR addresses the remaining fixes to the API docs. Changes proposed in this pull request

[x] Work experience commitment example and description needed (already done)
[x] Qualification examples are missing the "institution_name" and "equivalency" examples (already done)
[x] We should include an example of an ISO8601 date in the since param (PH actually caused an error with this when he tried it) - (done in earlier PR)
[x] reference content should not be marked "optional" because we only send apps with references - changed the wording to emphasise that content is required to send apps to providers
[x] Clarify that tokens can only be passed via header, not via URL param - added a bit of extra copy to clarify

I've not tried to add the state diagram (suggestion from @duncanjbrown) in this PR because there isn't an obvious place to put it in the docs (without causing confusion). This one is open to debate! Guidance to review Do the copy changes make sense? Link to Trello card 1176 - Minor API fixes following user testing with Paul H Env vars [x] No env vars
Perhaps the state diagram could go in the general introduction? On the basis that it's a useful high-level summary of how the thing works. We might need to prune it a bit.
@duncanjbrown I've added a section to the api-docs home page called 'Application Lifecycle' with a copy of the state diagram. Wording is all up for debate. I've not tried to trim the diagram (it's just automatically copied from the generate_state_diagram rake task into the public directory).
:+1: I think this is useful
2025-04-01T06:36:52.336795
2019-02-28T10:02:36
415539963
{ "authors": [ "dankmitchell" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:693", "repo": "DFE-Digital/manage-courses-backend", "url": "https://github.com/DFE-Digital/manage-courses-backend/pull/156" }
gharchive/pull-request
Add course scope for opted in providers Context The courses endpoint should only return courses whose providers have opted in. Changes proposed in this pull request Add scope to courses endpoint Guidance to review /api/v1/courses Ensure courses are scoped by providers that have opted in.
2025-04-01T06:36:52.341474
2023-12-21T10:50:19
2052188889
{ "authors": [ "darokel" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:694", "repo": "DFE-Digital/register-trainee-teachers", "url": "https://github.com/DFE-Digital/register-trainee-teachers/pull/3883" }
gharchive/pull-request
[6527] Enable performance/profile monitoring Context Enables Sentry's performance and profile monitoring so we can set up some anomaly alerts around this category. Start with a low traces_sample_rate. We'll assess how noisy it is in Sentry and adjust accordingly. Set up anomaly alerts in the Register support Slack channel.
2025-04-01T06:36:52.347764
2021-05-13T14:37:49
891097947
{ "authors": [ "felixtheflex", "twd-tv-ci" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:695", "repo": "DFE-Digital/teaching-vacancies", "url": "https://github.com/DFE-Digital/teaching-vacancies/pull/3462" }
gharchive/pull-request
remove-paas-url-from-tfvar-file Use new PaaS credentials Jira ticket URL Just add the ticket number to the end: https://dfedigital.atlassian.net/browse/TEVA- Changes in this PR: Is there anything specific you want feedback on? Screenshots of UI changes: Before After Next steps: [ ] Terraform deployment required? [ ] New development configuration to be shared? Review app deployed to https://teaching-vacancies-review-pr-3462.london.cloudapps.digital
2025-04-01T06:36:52.350441
2021-05-24T09:53:52
899503907
{ "authors": [ "csutter", "twd-tv-ci" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:696", "repo": "DFE-Digital/teaching-vacancies", "url": "https://github.com/DFE-Digital/teaching-vacancies/pull/3521" }
gharchive/pull-request
Do not refer to ActionMailer::Base in initializers This causes issues with autoloading (invoking behaviour that has been deprecated in Zeitwerk when certain stars align, in our case if we add ActionText, c.f. https://github.com/rails/rails/issues/36546). Instead of adding configuration to ActionMailer::Base, set it on Rails.configuration.action_mailer. Review app deployed to https://teaching-vacancies-review-pr-3521.london.cloudapps.digital
2025-04-01T06:36:52.354796
2017-11-03T17:18:00
271054219
{ "authors": [ "adamrobinson361", "isi-avbulimen" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:697", "repo": "DFEAGILEDEVOPS/schools-workforce-benchmarking", "url": "https://github.com/DFEAGILEDEVOPS/schools-workforce-benchmarking/issues/33" }
gharchive/issue
Refine report 2

- Ensure each plot is on a new page
- Consider making plots landscape

Decided not to progress
2025-04-01T06:36:52.357154
2017-02-28T15:30:54
210820070
{ "authors": [ "DFreds", "marnen" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:698", "repo": "DFreds/code-peek-atom", "url": "https://github.com/DFreds/code-peek-atom/issues/21" }
gharchive/issue
Not compatible with Semanticolor I'm trying to use code-peek 1.4.16 with Semanticolor, but there's a conflict. The problem is that to do its highlighting, Semanticolor prefixes the name of the current grammar with "semanticolor", so code-peek gives the error "Peek function does not currently support semanticolor - Ruby files". So...perhaps it should ignore the "semanticolor" prefix or find some other way of determining the language? 1.4.17 will fix this
2025-04-01T06:36:52.378574
2023-04-06T12:06:43
1657300990
{ "authors": [ "ecomodeller", "jsmariegaard" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:699", "repo": "DHI/fmskill", "url": "https://github.com/DHI/fmskill/pull/181" }
gharchive/pull-request
Extract model data at location(s) Narrow the API to model extraction. Make it more explicit which attributes are needed to extract data from a model; i.e. the observed values, color, filename etc. are not relevant in this context. I see the point (no pun intended) from an architectural point of view - but as a user I think it becomes more difficult to understand (more objects/classes). Wouldn't it be better to let Point and Track be a mixin (or a protocol) and then let the extract method accept anything which is a Point/Track? I think it is difficult to enforce compliance with an ABC or Protocol, since they only enforce the names of the methods (and names of arguments), but not attributes. So this last change tries to balance simplicity for the user, no need for conversion, and still communicating the needs of the extract methods.
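For reference, a small Python sketch of that Point protocol idea (the names are illustrative, not the actual fmskill API). A static checker such as mypy enforces both the attributes and the method declared on a Protocol, whereas a runtime isinstance() check against a @runtime_checkable Protocol only tests member presence, not signatures or types, which is the enforcement gap mentioned above:

    from typing import Any, Protocol, runtime_checkable

    @runtime_checkable
    class PointLike(Protocol):
        x: float
        y: float

        def coordinates(self) -> "tuple[float, float]": ...

    def extract(model: Any, location: PointLike) -> Any:
        # mypy rejects callers whose argument lacks x, y or coordinates();
        # at runtime, isinstance(location, PointLike) only checks presence.
        x, y = location.coordinates()
        return (x, y)  # placeholder for the actual extraction logic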
2025-04-01T06:36:52.383738
2024-09-07T18:20:11
2512011122
{ "authors": [ "DHancock", "cxnky" ], "license": "Unlicense", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:700", "repo": "DHancock/WinUI3Controls", "url": "https://github.com/DHancock/WinUI3Controls/issues/49" }
gharchive/issue
Text boxes within groupboxes not stretching properly I have the following XAML:

    <w3c:GroupBox Grid.Row="0" Grid.Column="0" Heading="test"
                  HorizontalAlignment="Stretch" VerticalAlignment="Top" MinWidth="300">
        <Grid HorizontalAlignment="Stretch" VerticalAlignment="Stretch">
            <Grid.RowDefinitions>
                <RowDefinition Height="Auto" />
                <RowDefinition Height="Auto" />
            </Grid.RowDefinitions>
            <Grid.ColumnDefinitions>
                <ColumnDefinition Width="Auto" />
                <ColumnDefinition Width="*" />
            </Grid.ColumnDefinitions>
            <TextBox x:Name="Test" Header="Test ID:" Grid.Column="1"
                     HorizontalAlignment="Stretch" VerticalAlignment="Top" Grid.Row="0" />
            <TextBox x:Name="Test2" Header="Test ID:" Grid.Column="1"
                     HorizontalAlignment="Stretch" VerticalAlignment="Top" Grid.Row="1" Margin="0,10,0,0" />
        </Grid>
    </w3c:GroupBox>

The Stretch property is not working properly, as I just see this. The textboxes do, however, resize when you type in them. Instead of setting the GroupBox property HorizontalAlignment="Stretch", you should set HorizontalContentAlignment="Stretch". That should fix the problem you are seeing.
2025-04-01T06:36:52.410253
2022-09-18T17:00:35
1377118774
{ "authors": [ "Ptrhnk", "adammertel" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:701", "repo": "DISSINET/InkVisitor", "url": "https://github.com/DISSINET/InkVisitor/issues/1234" }
gharchive/issue
Create simple component for Relation type icons RelationType dictionary keys in a circle. Something like this. Needs scaling: the circle size should be relative to the font size.
2025-04-01T06:36:52.461214
2021-06-20T12:48:51
925592619
{ "authors": [ "araffin", "markub3327" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:702", "repo": "DLR-RM/rl-baselines3-zoo", "url": "https://github.com/DLR-RM/rl-baselines3-zoo/issues/117" }
gharchive/issue
SAC with gSDE not working on MinitaurBulletEnv-v0 Hello @araffin, I trained the agent on MinitaurBulletEnv-v0 but without success. I tried to use the Min-max normalization method because every observation has its own range of values, but it's not working and without normalization, I get better results. I test it on my platform RL Toolkit, which is based on TF but is almost the same as yours. Why is MinitaurBulletEnv-v0 not working, when the agent here is learning? My tested hyperparameters: Hyperparameter Value n_timesteps 1000000 learning_rate 0.00073 batch_size 256 buffer_size 1000000 ent_coef auto gamma 0.99 learning_starts 10000 update_interval 64 When you'll find better hyperparameters or anything else please share it with me. https://user-images.githubusercontent.com/74611856/122724806-9606cf00-d274-11eb-8212-6c414012e29e.mp4 Thanks a lot. Hello, So after a quick trial, I found those hyperparameters to be working (for unstructured noise): MinitaurBulletEnv-v0: n_timesteps: !!float 1e6 policy: 'MlpPolicy' learning_rate: !!float 3e-4 buffer_size: 100000 batch_size: 256 ent_coef: 'auto' train_freq: 1 gradient_steps: 1 learning_starts: 10000 with gSDE (the noise sampling frequency is quite small and therefore quite close to unstructured noise): MinitaurBulletEnv-v0: n_timesteps: !!float 1e6 policy: 'MlpPolicy' learning_rate: !!float 3e-4 buffer_size: 1000000 batch_size: 256 ent_coef: 'auto' gamma: 0.99 train_freq: 4 gradient_steps: 4 learning_starts: 10000 use_sde: True policy_kwargs: "dict(log_std_init=-3)" As mentioned in the gSDE paper (the arxiv version will be updated tomorrow: https://arxiv.org/abs/2005.05719 ), the main strength of gSDE is not in simulation but on a real robot (as it reduces wear-and-tear while keeping good performance). I will try to upload the learning curves and trained agent soon. The training reward with unstructured noise: I thought that the MinitaurBulletEnv-v0 is Sim-to-Real problematics and gSDE is useful for that too. I will test it with a higher update freq than 64 and a lower learning rate. Thanks. I thought that the MinitaurBulletEnv-v0 is Sim-to-Real problematics and gSDE is useful for that too. gSDE was designed to run RL directly on real robots (no sim2real) even though it should help for sim2real pb too (however, I'm not sure if MinitaurBulletEnv-v0 is Sim-to-Real problematics or not). I read about Minitaur here. Can you share with me please, Actor loss, Critic loss, Steps (at episode) charts for detailed analysis of my problematics. I try your hyperparameters without significant improvement. I use client-server architecture for training RL agents (with Reverb). Thanks. Best for you would be to do a run using SB3 + the rl zoo: the hyperparams: MinitaurBulletEnv-v0: n_timesteps: !!float 1e6 policy: 'MlpPolicy' learning_rate: !!float 3e-4 buffer_size: 1000000 batch_size: 256 ent_coef: 'auto' gamma: 0.99 train_freq: 4 gradient_steps: 4 learning_starts: 10000 use_sde: True policy_kwargs: "dict(log_std_init=-3)" Training with tensorboard (logging to /tmp/tensorboard_sb3/ here) python train.py --algo sac --env MinitaurBulletEnv-v0 -tb /tmp/tensorboard_sb3/ --num-threads 2 --eval-episodes 20 --n-eval-envs 5 The learning curves (blue is unstructured noise, orange with gSDE): Note that the true performance of SAC is higher because I display the training reward here, not the evaluation one using the deterministic controller. PS: I think that it is because the learner is faster than the agent. 
When I try, for example, AntBulletEnv-v0, everything is good, but the agent plays faster than the learner. In this case, the learner has more data to train from. You mean that you are learning in parallel with data collection? Yes, this can definitely be a problem. Btw, the new version of the gSDE paper is online https://arxiv.org/abs/2005.05719 ;) A summary of where it is useful: Yes, the learner instance is learning in parallel with data collection. Thanks a lot for your time. I will try it with your hyperparameters and try to make the learner slower than the agent's data collection, because that configuration works on other environments. Yeah, thanks for the answers; I think this issue is solved. I still have to work on RL-Toolkit. Does the expectation over time mean the average over time in the continuity cost equation? Yes. I'm using a wrapper to compute it:

class ContinuityCostWrapper(gym.Wrapper):
    """
    Add continuity cost to the reward.
    It assumes that the action space is normalized and symmetric
    (actions in [-1, 1]).
    :param env:
    :param weight_continuity:
    :param verbose:
    :param print_freq: Print every n episodes the mean continuity cost
    """
    def __init__(self, env: gym.Env, weight_continuity: float = 0.0, verbose: int = 0, print_freq: int = 1):
        super(ContinuityCostWrapper, self).__init__(env)
        self.last_action = None
        self.weight_continuity = weight_continuity
        self.verbose = verbose
        self.continuity_hist = []
        self.unnormalized_hist = []
        self.n_episodes = 0
        self.print_freq = print_freq

    def reset(self):
        self.last_action = None
        self.n_episodes += 1
        return self.env.reset()

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        # Continuity cost
        if self.last_action is not None:
            max_delta = 2.0  # for the action space: high - low = 1 - (-1) = 2
            continuity_cost = np.mean((action - self.last_action) ** 2 / max_delta ** 2)
            unnormalized_cost = np.mean((action - self.last_action) ** 2)
            self.continuity_hist.append(continuity_cost)
            self.unnormalized_hist.append(unnormalized_cost)
            continuity_cost = self.weight_continuity * continuity_cost
            self.last_action = action.copy()
        else:
            continuity_cost = 0.0
            self.last_action = action.copy()

        if done:
            continuity_score = 100 * np.mean(self.continuity_hist)
            if self.verbose > 0 and self.n_episodes % self.print_freq == 0:
                print(f"n_step={len(self.continuity_hist)}")
                print(f"Continuity={continuity_score:.5f} +/- {np.std(self.continuity_hist):.5f}")
                # print(f"Unnormalized continuity={np.mean(self.unnormalized_hist):.5f}\n")
            info["continuity_score"] = continuity_score

        reward -= continuity_cost
        info["continuity_cost"] = continuity_cost
        return obs, reward, done, info

@araffin Thanks, all is working now. I had to use the TQC algorithm instead of classic SAC, because the learner was faster than the agent collecting experiences. TotalInteractions: 1532116 (the agent's steps in the environment) Train step: 999906 (the learner's training steps) Now the agent is about 500,000 steps ahead of the learner, and the learning process is more accurate. https://user-images.githubusercontent.com/74611856/124826185-d81e5900-df74-11eb-80a2-3b6fb800bdec.mp4
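For completeness, here is a minimal sketch of how the wrapper above could be plugged into a run; the environment ID matches this thread, but the continuity weight of 0.1 is an illustrative assumption, not a value from the discussion:

import gym
import numpy as np
import pybullet_envs  # registers MinitaurBulletEnv-v0 (assumed installed)

# Each step now pays a penalty proportional to the squared change in action.
env = ContinuityCostWrapper(gym.make("MinitaurBulletEnv-v0"),
                            weight_continuity=0.1,  # hypothetical weight
                            verbose=1)

obs = env.reset()
for _ in range(1000):
    action = env.action_space.sample()
    # The reward returned here already includes -weight_continuity * cost.
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()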
2025-04-01T06:36:52.463465
2020-02-13T11:52:00
564632104
{ "authors": [ "cdboer", "onyame" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:703", "repo": "DLR-SC/gitlab2prov", "url": "https://github.com/DLR-SC/gitlab2prov/issues/28" }
gharchive/issue
Create typing stub files Mypy docs on stub files: https://mypy.readthedocs.io/en/stable/stubs.html Create stub files for the third-party libraries in use that do not provide their own types. Create stub files for the gitlab2prov code to allow projects that import modules from gitlab2prov to also import their type signatures. Sub-issue of #26 Postponed
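For illustration, a stub file (.pyi) mirrors a module's public signatures without any implementation; here is a minimal sketch for a hypothetical gitlab2prov module (the module path and names below are invented for the example, not taken from the actual codebase):

# gitlab2prov/client.pyi -- hypothetical module and signatures
from typing import List

class GitlabClient:
    def __init__(self, url: str, token: str) -> None: ...
    def fetch_commits(self, project_id: int) -> List[str]: ...

Type checkers such as mypy then read the .pyi file instead of the runtime module, so downstream projects get the signatures even when the installed package ships no annotations.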
2025-04-01T06:36:52.478306
2016-04-17T05:55:03
148921701
{ "authors": [ "valefranz" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:704", "repo": "DMIunipg/AI-Project-VacuumEnvironment", "url": "https://github.com/DMIunipg/AI-Project-VacuumEnvironment/pull/2" }
gharchive/pull-request
DMIunipgAI-VacuumCleaner2016-trivelle The "Trivelle" version of the Vacuum Cleaner; files for the 2016 competition. https://github.com/DMIunipg/AI-Project-VacuumEnvironment/pull/2.patch Step 1: From your project repository, check out a new branch and test the changes. git checkout -b valefranz-master master git pull git://github.com/valefranz/AI-Project-VacuumEnvironment.git master Step 2: Merge the changes and update on GitHub. git checkout master git merge --no-ff valefranz-master git push origin master
2025-04-01T06:36:52.492547
2021-10-22T15:19:13
1033696615
{ "authors": [ "jyao1", "rkongintel", "steven-bellock" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:705", "repo": "DMTF/libspdm", "url": "https://github.com/DMTF/libspdm/issues/221" }
gharchive/issue
File naming It seems that many file names between the requester/responder are the same. Generally we avoid h/c file collisions because they are in different directories. I'm not so sure this applies to object files, though, as they may be built into the same directory and have the same name if some client wants to use both the requester and responder. When I ported libspdm to a Windows static library, to be finally built as a Windows DLL, I ran into problems and had to rename some of the files so their object files would not collide. The VC++ standard project/solution build system normally dumps all object files to a single output dir. I did not see an easy way to change this, so I resolved it by changing the files with the name collision to have "requester" in front of them. I think other potential clients may run into this problem, and I was wondering if we should name the files differently, by prepending requester or responder to them. For example, instead of "communication.c", name it "requester_communication.c" and "responder_communication.c". Resolve together with https://github.com/DMTF/libspdm/issues/155 Resolved by https://github.com/DMTF/libspdm/pull/278
2025-04-01T06:36:52.501162
2017-10-24T21:32:55
268193426
{ "authors": [ "EPTamminga", "thosalbert" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:706", "repo": "DNNCommunity/DNN.Events", "url": "https://github.com/DNNCommunity/DNN.Events/issues/57" }
gharchive/issue
Can't get access to Phone No. Field in Event Enrollment I can't seem to find a token to access the Phone Number field that is built into the Enroll section of an Event. On this page: https://github.com/DNNCommunity/DNN.Events/wiki/Tokens-to-be-used-in-Templates I found the token for the name, [event:signupusername], and for the email address, [event:signupuseremail], but I have no idea how to access the data in the Phone No. field. -Tom Currently, the phone number is not available as a token. I will add this as an enhancement request. Wonderful. Thanks for responding and letting me know. Tom
2025-04-01T06:36:52.570798
2024-10-30T18:20:37
2624951731
{ "authors": [ "aakankshaduggal", "samc5" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:707", "repo": "DS219/spark-seprep", "url": "https://github.com/DS219/spark-seprep/pull/115" }
gharchive/pull-request
Added my notebook with analysis of the German credit risk dataset. This includes correlations, a few graphs, and reports of my findings. Good job @samc5 10/10
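For readers without the notebook, here is a sketch of the kind of correlation pass described above; the file name and the use of pandas are assumptions about the notebook's layout, not taken from the PR:

import pandas as pd
import matplotlib.pyplot as plt

# Load the German credit risk data (path is hypothetical).
df = pd.read_csv("german_credit_data.csv")

# Pairwise correlations between the numeric columns.
corr = df.select_dtypes("number").corr()
print(corr)

# A quick visual of the correlation matrix.
plt.imshow(corr, cmap="coolwarm")
plt.xticks(range(len(corr)), corr.columns, rotation=90)
plt.yticks(range(len(corr)), corr.columns)
plt.colorbar()
plt.show()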
2025-04-01T06:36:52.585623
2024-11-11T09:35:27
2648738456
{ "authors": [ "ShedrachJonah11", "cau-git" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:708", "repo": "DS4SD/docling", "url": "https://github.com/DS4SD/docling/issues/293" }
gharchive/issue
Is there a Docker deployment solution or a FastAPI server setup available for docling? Question I am looking to deploy docling and would like to know if there is an existing Docker deployment solution or a FastAPI server setup available. ... Hi @ShedrachJonah11, we have a webserver for docling in the works. It is currently at an experimental stage. See here. There is also a Dockerfile in this repo that demonstrates how to run docling in a container.
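Until the official webserver lands, a minimal self-hosted wrapper is easy to sketch. The endpoint shape below is my own invention, not the experimental server's API; the DocumentConverter calls follow docling's documented Python interface, but treat the exact result attributes as an assumption:

from fastapi import FastAPI
from docling.document_converter import DocumentConverter

app = FastAPI()
converter = DocumentConverter()

@app.post("/convert")
def convert(source: str):
    # 'source' is a URL or file path reachable by the server process.
    result = converter.convert(source)
    return {"markdown": result.document.export_to_markdown()}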
2025-04-01T06:36:52.601130
2018-10-26T02:17:45
374200568
{ "authors": [ "Dananji", "ahmet-uyar" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:709", "repo": "DSC-SPIDAL/twister2", "url": "https://github.com/DSC-SPIDAL/twister2/pull/69" }
gharchive/pull-request
Submit zip jobs Adding support for job submission as zip files to twister2. I think we should determine the content of the zip file. What files and directories should there be? We need to unzip it on workers and add the jar files to the classpath. As I understand it, we expect the user to specify the job type on the command line as "job_type=zip" or "job_type=jar". We also expect the user to specify the job file on the command line as "job_file=xyz.jar" or "job_file=xyz.zip". I think we may not need the extra job_type parameter. We can just check the extension of the job_file. We should properly pack the unzipped files into the job package. Currently, TarGzipPacker in ResourceAllocator packs all files into a tar.gz file. That file is transferred to the workers. This packer currently packs the job description file [job-name.job], the user job jar file (the jar file specified on the command line), and the conf directory. We need to add any newly added files or directories from the zip file to the job tar.gz package. So, this all depends on the format of the original zip file. What will be the content of that zip file?
2025-04-01T06:36:52.602254
2024-10-01T11:55:04
2559004743
{ "authors": [ "MoritzWeber0", "zusorio" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:710", "repo": "DSD-DBS/capella-collab-manager", "url": "https://github.com/DSD-DBS/capella-collab-manager/issues/1862" }
gharchive/issue
Update links in docs to new Angular site A while ago, Angular moved all of their documentation to a new site (angular.dev). All the links in our docs are now out of date and should be updated. As far as I can see, there are only some references from the frontend testing docs. Those are outdated anyway: we don't actively use Playwright for testing anymore; instead, we use Storybook.
2025-04-01T06:36:52.626347
2018-07-30T16:44:05
345840781
{ "authors": [ "AlexanderS", "marsaoua", "mwoodiupui", "tdonohue" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:717", "repo": "DSpace/DSpace", "url": "https://github.com/DSpace/DSpace/pull/2139" }
gharchive/pull-request
[DS-3971] BitstreamStorageServiceImpl: Do not call update If the BitstreamStorageService calls update on the just-cloned bitstream, this bitstream will not belong to any Item and the user may get authorization failures. The update call can be removed here, because the clone method is only used by the AbstractVersionProvider, and it will call update on the bitstream later by itself. https://jira.duraspace.org/browse/DS-3971 I dislike having BitstreamStorageServiceImpl make changes to an entity and then give up control of it without persisting them, but we may have no choice here. If so, the unpersisted state should be documented. The place for that is in BitstreamStorageService#clone, which is not documented at all. @mwoodiupui I just added a note about the missing update() call to the existing comment of BitstreamStorageService#clone. Is this what you had in mind? Yes, thank you. Whenever we make changes to an entity which we do not commit, the caller should be warned. I tested it: a submitter can create a new version of their item! Just a thought: instead of changing the cloned bitstream in BitstreamStorageServiceImpl#clone, is there any reason not to move all changes to BitstreamServiceImpl#clone? That way, there would not be an unpersisted state. We have created a pull request which avoids an unpersisted state in BitstreamStorageServiceImpl#clone: https://github.com/DSpace/DSpace/pull/2428#issue-279153056 Closing, replaced by #2428 (Please review/test that PR so that we can get it resolved/merged)
2025-04-01T06:36:52.630383
2015-01-12T08:27:10
54029636
{ "authors": [ "christian-scheible", "helix84" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:718", "repo": "DSpace/DSpace", "url": "https://github.com/DSpace/DSpace/pull/824" }
gharchive/pull-request
[DS-2345] Fixes wrong start index starting at 0 instead of 1 for OpenSearch This fixes the bug for the Discovery implementation. I think that for the StandardOpenSearchGenerator (old Lucene-based) implementation there are two lines which should be changed too: Line 72 and Line 114. But we are using Discovery, which makes it harder to test. I accidentally merged the changes on the dspace master to the branch [DS-2345]. Not sure how this affects the pull request. Don't worry, we have a way to get just the stuff we need. Here is more information on the bug: If you look at the result XML file of this query: http://demo.dspace.org/xmlui/open-search/discover?format=kops&query=author%3A*&start=0 it says that the start index is 1: <opensearch:startIndex>1</opensearch:startIndex> This query says the same: http://demo.dspace.org/xmlui/open-search/discover?format=kops&query=author%3A*&start=1 but the result is different (it starts with the second result of the first query). The expected behaviour would be the same query result for both queries, because OpenSearch starts counting at 1, not at 0, and if <opensearch:startIndex>1</opensearch:startIndex> is equal in both cases, the result has to be the same too. +1 tested, cherry-picked
2025-04-01T06:36:52.637410
2018-03-18T11:24:50
306232079
{ "authors": [ "gocrafterlp", "natanbc" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:719", "repo": "DV8FromTheWorld/JDA", "url": "https://github.com/DV8FromTheWorld/JDA/issues/649" }
gharchive/issue
com.android.builder.dexing.DexArchiveBuilderException com.android.builder.dexing.DexArchiveBuilderException on Android Issue Type [x] Bug Report [ ] Feature Request Description This Gradle exception occurs when trying to build an app with JDA. https://developer.android.com/studio/write/java8-support.html @natanbc thank you
2025-04-01T06:36:52.646383
2021-04-02T12:38:21
849194708
{ "authors": [ "DV8FromTheWorld", "MinnDevelopment", "TheChilliPL", "jzvi12" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:720", "repo": "DV8FromTheWorld/JDA", "url": "https://github.com/DV8FromTheWorld/JDA/pull/1575" }
gharchive/pull-request
First pass on stage channels Pull Request Etiquette [x] I have checked the PRs for upcoming features/bug fixes. [x] I have read the contributing guidelines. Changes [x] Internal code [x] Library interface (affecting end-user code) [ ] Documentation [ ] Other: _____ Closes Issue: #1572 Description This adds support for stage channels, which are basically just voice channels with topics. Freezing this for now due to discord/discord-api-docs#2751 Please note that this feature is still under active development (even though it is launched), and thus there is a higher than normal risk of things possibly changing. The API for stage channels is kinda weird. I did some digging and found a few issues that will be annoying to deal with: The request-to-speak endpoint seems to be REST-only (so you can't join and immediately speak). The endpoint also requires you to specify which channel you are currently in (which requires caching; not a problem for us). They expect an ISO timestamp for the request to speak (I can't understand why, but OK). You can suppress yourself? I have no idea why this is a thing. Apparently, bots can also approve requests to speak; this is gated by the MUTE_MEMBERS permission. The request-to-speak timestamp is also part of voice states, so we can derive an event there. I can't see any methods to invite someone to speak; I think that's possible in stage channels (it works like speaking requests, but backward: the user has to accept the invite from a moderator to speak). Any updates? The bot I use (which uses this JDA lib) can have a moderator role and bypass any invites?? It will be ready when it is ready. Stage channels are still in flux behind the scenes in the Discord API. Should we move the speaker methods such as approveSpeaker and inviteSpeaker into the Member interface instead? Feature TODO [ ] Stage Instance events [ ] Stage Instance moderation (set topic etc.) [ ] Voice states for lurkers Overall TODO [ ] Documentation [ ] Checks [ ] Testing
2025-04-01T06:36:52.652600
2015-03-18T16:03:17
62729672
{ "authors": [ "DZamataev", "RealBug", "praveengodz" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:721", "repo": "DZamataev/DZReadability", "url": "https://github.com/DZamataev/DZReadability/issues/2" }
gharchive/issue
Bug: More paragraphs in comments Hi. First, your DZReadability is powerful! I've seen just one bug: it occurs when the comment block has more paragraphs than the article. For example, with this article: http://blog.lefigaro.fr/football/bruno_roger-petit/2015/03/subasic-heureux-pour-monaco-peut-on-se-rejouir-davoir-ete-ridicule.html // count how many p tags are inside the parent NSArray *pNodes = [parent nodesMatchingSelector:@"p"]; Do you have an idea of how to fix it, please? :) Thx Realbug Hi Thank you. That's something that I will try to focus on in the next release. I'm thinking of adding the ability to provide tag IDs and classes as instant wins to the algorithm before it starts. It will help to identify the article block on specific sites and may help cure these errors in finding the right article block. I will also implement ignoring comment blocks by their most common ID and class names. I don't know when I'll return to this project for the next iteration. Any help is appreciated. @DZamataev I can't get the heading of the web page. I have tried various Readability parser options but with no result. Can you help me? @praveengodz Sure, give me the URL you are trying to parse, please. @DZamataev URL: http://trak.in/tags/business/2016/06/14/employees-startup-must-have/ From this URL, everything is perfect except the main heading (title). Thanks for your support, Denis.
2025-04-01T06:36:52.662295
2020-05-19T11:03:36
620884715
{ "authors": [ "JackMD", "SOF3" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:722", "repo": "DaPigGuy/PiggyFactions", "url": "https://github.com/DaPigGuy/PiggyFactions/issues/49" }
gharchive/issue
Faction command does not implement PluginIdentifiableCommand This prevents Poggit from detecting the command, and hence the plugin cannot be searched. Shouldn't this be done by commando tho? Fun fact: Commando doesn't even take a Plugin instance, so they can't fix it without breaking BC. https://github.com/CortexPE/Commando/blob/master/src/CortexPE/Commando/BaseCommand.php#L74-L78 I created https://github.com/CortexPE/Commando/issues/26 on Commando though.
2025-04-01T06:36:52.715052
2024-09-20T13:30:14
2538837655
{ "authors": [ "DanForys" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:723", "repo": "DanForys/ts-query-model", "url": "https://github.com/DanForys/ts-query-model/pull/32" }
gharchive/pull-request
chore(main): release 0.7.1 :robot: I have created a release beep boop 0.7.1 (2024-09-20) Bug Fixes README (#33) (5e564f1) separetely export db connection classes (#31) (75e2e57) This PR was generated with Release Please. See documentation. :robot: Created releases: v0.7.1 :sunflower:
2025-04-01T06:36:52.715874
2016-04-25T08:41:37
150789525
{ "authors": [ "DanGrew" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:724", "repo": "DanGrew/JenkinsTestTracker", "url": "https://github.com/DanGrew/JenkinsTestTracker/issues/108" }
gharchive/issue
User Guide Finish off and put into version control. Wiki pages have their own repo - need to explore this further
2025-04-01T06:36:52.750582
2018-10-24T14:11:10
373504155
{ "authors": [ "DanielMartinus", "spiderShaki" ], "license": "isc", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:725", "repo": "DanielMartinus/Konfetti", "url": "https://github.com/DanielMartinus/Konfetti/issues/68" }
gharchive/issue
Allow us to provide a vector drawable It would be very helpful to us if we could use our own drawables in the animation. This is related to this issue, where shapes and drawables are being discussed: https://github.com/DanielMartinus/Konfetti/issues/11
2025-04-01T06:36:52.814859
2022-09-20T01:48:17
1378708402
{ "authors": [ "Michelle951", "windsonsea" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:726", "repo": "DaoCloud/DaoCloud-docs", "url": "https://github.com/DaoCloud/DaoCloud-docs/pull/143" }
gharchive/pull-request
Translate install steps Added English versions of: the install steps, license activation, and the video channel. Besides, it is suggested to cover your phone number and email address with mosaics in the pics you uploaded.
2025-04-01T06:36:52.909807
2023-11-29T19:00:27
2017226683
{ "authors": [ "thyssentishman" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:727", "repo": "DartGit-dev/git2dart", "url": "https://github.com/DartGit-dev/git2dart/issues/1" }
gharchive/issue
Failed to open the library I'm getting the following error when trying to print Libgit2.version: Failed to open the library. Make sure that libgit2 library is bundled with the application. Another exception was thrown: Invalid argument(s): Failed to load dynamic library '/home/johannes/td/build/linux/x64/debug/bundle/lib/libgit2-1.6.2.so': libssh2.so.1: cannot open shared object file: No such file or directory Installed with: $ flutter pub add git2dart $ uname -a Linux ubuntu 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux Installing libgit2-dev on Ubuntu seems to have solved the issue: sudo apt install libgit2-dev However, shouldn't the library be bundled with git2dart for it to work, e.g. on Android?
2025-04-01T06:36:52.912498
2022-04-25T20:14:01
1214994256
{ "authors": [ "rafkra", "tbgoose" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:728", "repo": "DasLetzteEinhorn/AlphaESS_Monitor_Hass", "url": "https://github.com/DasLetzteEinhorn/AlphaESS_Monitor_Hass/issues/7" }
gharchive/issue
Energy values required Hi, is there any chance to get the energy values (kWh) from the API? These values are shown in the Energy Diagram. They are necessary for the energy dashboard of HA. Looks like these values are available via https://www.alphaess.com/api/Power/SticsByPeriod and/or https://www.alphaess.com/api/ESS/SticsSummeryDataForCustomer Looks like the necessary values for issue #4 ... Regards, Ralf Hi Ralf, check out https://github.com/CharlesGillanders/homeassistant-alphaESS. It doesn't replace this add-on, but when run alongside it, it gives you everything you need for both real-time data and the data needed for the energy dashboard.
2025-04-01T06:36:52.932743
2024-06-25T13:32:23
2372716778
{ "authors": [ "pietrushnic" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:729", "repo": "Dasharo/docs", "url": "https://github.com/Dasharo/docs/pull/841" }
gharchive/pull-request
dev-proc/versioning.md: initial corrections to dasharo naming scheme Based on the discussion in the following issues and the announcements made at DUG#6, a new naming convention for Dasharo products was introduced. For details please check: https://github.com/Dasharo/docs/pull/820 https://github.com/Dasharo/dasharo-issues/issues/762 @macpijan ping @macpijan @miczyg1 ping
2025-04-01T06:36:52.935317
2023-08-04T12:05:00
1836628231
{ "authors": [ "TomaszAIR", "macpijan", "miczyg1" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:730", "repo": "Dasharo/meta-dts", "url": "https://github.com/Dasharo/meta-dts/pull/35" }
gharchive/pull-request
meta-dts-distro/recipes-dts/dts/dasharo-deploy: fix downloading Dell … …BIOS Update packages @miczyg1 Picked here on top of the latest changes and rebuilding locally: https://github.com/Dasharo/meta-dts/pull/34 @TomaszAIR Didn't we have CI for DTS? Couldn't we build PRs and upload artifacts? @macpijan We have a weekly CI for checking the cache, but we did not create one for pushed PRs; I started an issue for this: https://github.com/Dasharo/dasharo-issues/issues/476 @miczyg1 The changes from here are already integrated in #36, so I am closing this one. @TomaszAIR The problem is that the change didn't work as expected. I must have messed up the bash syntax, because wget doesn't detect the URL after the change.
2025-04-01T06:36:52.943081
2023-08-31T09:52:32
1875181631
{ "authors": [ "Mikescops", "irew" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:731", "repo": "Dashlane/passkeys-resources", "url": "https://github.com/Dashlane/passkeys-resources/pull/15" }
gharchive/pull-request
[NEW] airnewzealand.com Domain Name: airnewzealand.com Purpose: Airline Relevance: https://www.airnewzealand.com/cyber-security-account-protection Additional Information: Needs to solve merge conflict @irew
2025-04-01T06:36:52.967525
2020-11-05T15:59:40
737057854
{ "authors": [ "rushtong", "solideoglori" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:732", "repo": "DataBiosphere/duos-ui", "url": "https://github.com/DataBiosphere/duos-ui/pull/693" }
gharchive/pull-request
Lc alink text fix <your comments for this PR go here > Have you read Terra's Contributing Guide lately? If not, do that first. I, the developer opening this PR, do solemnly pinky swear that: [ ] PR is labeled with a Jira ticket number and includes a link to the ticket [ ] PR is labeled with a security risk modifier [no, low, medium, high] [ ] PR describes scope of changes In all cases: [ ] Get a minimum of one thumbs worth of review, preferably 2 if enough team members are available [ ] Get PO sign-off for all non-trivial UI or workflow changes [ ] Verify all tests go green [ ] Squash and merge; you can delete your branch after this [ ] Test this change deployed correctly and works on dev environment after deployment I think this branch needs to be rebased from the latest develop.
2025-04-01T06:36:52.971564
2024-02-08T19:51:50
2125961744
{ "authors": [ "dvoet" ], "license": "bsd-3-clause", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:733", "repo": "DataBiosphere/terra-common-lib", "url": "https://github.com/DataBiosphere/terra-common-lib/pull/131" }
gharchive/pull-request
WOR-1510 upgrade otel override HttpServerMetrics https://broadworkbench.atlassian.net/browse/WOR-1510 Does a service using OTEL via TCL need to do anything extra to get these changes? Yes, they need to upgrade their TCL version. I will start doing that tomorrow.
2025-04-01T06:36:52.972798
2019-06-20T22:07:56
458907139
{ "authors": [ "zarsky-broad" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:734", "repo": "DataBiosphere/terra-ui", "url": "https://github.com/DataBiosphere/terra-ui/pull/1706" }
gharchive/pull-request
rename tools to workflows [SATURN-868] This replaces references to tools in the code with workflows, along with redirectors for affected paths. There are still a few places in copy where we refer to tools; I didn't touch those, as I want to check in with product first. @panentheos I think I got them all
2025-04-01T06:36:52.979783
2022-05-24T19:39:35
1247043760
{ "authors": [ "nawatts", "petesantos", "slucasbroad" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:735", "repo": "DataBiosphere/terra-ui", "url": "https://github.com/DataBiosphere/terra-ui/pull/3072" }
gharchive/pull-request
[DC-362] add metrics to the retry logic in terra UI Added a retry handler for AJAX calls to return metrics to Mixpanel. Thinking about this some more, it seems like this will capture many request failures that we don't actually care about because they're part of the normal operation of Terra. For example, every time I load Terra, there are several 404 responses from Bard because I don't have NIH, Anvil, etc. accounts linked. This would report events for all of those. There are also some expected error responses in data tables. For example, if you try to delete a row referenced by another row, the API returns 429 to let you know that you have to delete the reference first. Is this going to capture so many false positives that any potential signal is lost in the noise? I agree, we can filter out the Bond errors. Filtering out Bond errors by service URL should use the URL for the current environment (#3072 (comment)). Could you clarify? I'm unsure: do you mean that instead of using the direct root https://broad-bond-dev.appspot.com, we should use the args[0] value of https://broad-bond-dev.appspot.com/api/link/v1/fence? Is that redundant? In development, Terra UI makes Bond requests to broad-bond-dev.appspot.com. In production, it makes those requests to broad-bond-prod.appspot.com. Thus, filtering out requests to broad-bond-dev.appspot.com would not affect requests in production. These service URLs are configured for each environment (dev, alpha, staging, prod, etc.). Instead of hardcoding a URL like broad-bond-dev.appspot.com, this should use the configured service URL (getConfig().bondUrlRoot). https://github.com/DataBiosphere/terra-ui/blob/74a981df7a6a5a27a96bd13e5bae60644cbf8298/src/libs/ajax.js#L156-L169 Or did you mean to replace it with getConfig().bondUrlRoot? Missed that, thanks.
2025-04-01T06:36:53.005177
2022-06-10T07:14:34
1267155744
{ "authors": [ "acastro2", "mengschin" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:736", "repo": "DataDog/datadog-agent", "url": "https://github.com/DataDog/datadog-agent/issues/12363" }
gharchive/issue
HPA target average value being summed when the same metric is used in more than one HPA Output $kubectl describe cm datadog-custom-metrics external_metric-horizontal-core-autoscaler2-Sample.Datadog.Metric: {"metricName":"Sample.Datadog.Metric","labels":{},"ts":1654754970,"reference":{"type":"horizontal","name":"autoscaler2","namespace":"core","uid":"12345678-1234-5678-9456-74162016a00f"},"value":0.5,"valid":true} external_metric-horizontal-core-autoscaler1-Sample.Datadog.Metric: {"metricName":"Sample.Datadog.Metric","labels":{},"ts":1654754970,"reference":{"type":"horizontal","name":"autoscaler1","namespace":"core","uid":"12345678-1234-5678-a0e1-6c9d34bd20cd"},"value":0.5,"valid":true} $kubectl describe hpa autoscaler1 -n core Name: autoscaler1 Namespace: core CreationTimestamp: Fri, 08 Apr 2022 11:45:01 +0800 Reference: Deployment/autoscaler1 Metrics: ( current / target ) "Sample.Datadog.Metric" (target average value): 1 / 15 Min replicas: 1 Max replicas: 1 Deployment pods: 1 current / 1 desired $kubectl describe hpa autoscaler2 -n core Name: autoscaler2 Namespace: core CreationTimestamp: Thu, 09 Jun 2022 12:12:09 +0800 Reference: Deployment/autoscaler2 Metrics: ( current / target ) "Sample.Datadog.Metric" (target average value): 1 / 15 Min replicas: 1 Max replicas: 1 Deployment pods: 1 current / 1 desired Describe what happened: I'm using the same metric for different Horizontal Pod Autoscalers (HPAs). As shown in the datadog-custom-metrics config map above, Sample.Datadog.Metric is being used in two different HPAs: autoscaler1 and autoscaler2. However, when describing each HPA, you can see the target average value is 1 instead of 0.5; it seems the HPA is summing up the values whenever the metric name is the same. Describe what you expected: I expect autoscaler1 and autoscaler2 to each show 0.5 as the target average value instead of 1. Steps to reproduce the issue: Create two Horizontal Pod Autoscalers and use the same metric name. Additional environment details (Operating System, Cloud provider, etc): Is this still happening?
2025-04-01T06:36:53.007869
2023-07-26T13:49:36
1822465188
{ "authors": [ "danielmilanov", "ofek" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:737", "repo": "DataDog/datadog-agent", "url": "https://github.com/DataDog/datadog-agent/issues/18400" }
gharchive/issue
Drop prometheus metrics Hi, we've got some workloads exporting Prometheus-formatted metrics. The DD agent discovers them successfully for scraping based on workload annotations, and indeed time series are collected. We would like to do some basic relabeling, namely to drop some of the time series, as they do not serve any meaningful purpose at this point and have high cardinality. The following configuration is added to the annotation:

"metrics": [".*"]
"exclude_metrics": ["rest.*"]

The expected outcome is for the DD agent to drop everything starting with rest; however, these keep piling up. Please advise what is being missed. TIA Can you please show the full configuration and an example of a metric that is not being excluded?
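Conceptually, the include/exclude filtering described above is pattern-based; here is a rough Python sketch of the intended semantics (an illustration under the assumption of regex matching, not the agent's actual implementation):

import re

metrics = ["rest_client_requests_total", "go_goroutines", "rest_latency_seconds"]
include = [".*"]
exclude = ["rest.*"]

def keep(name):
    # A series is kept if it matches an include pattern
    # and matches no exclude pattern.
    included = any(re.match(p, name) for p in include)
    excluded = any(re.match(p, name) for p in exclude)
    return included and not excluded

print([m for m in metrics if keep(m)])  # ['go_goroutines']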
2025-04-01T06:36:53.015811
2024-01-03T21:49:09
2064691187
{ "authors": [ "robertjli" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:738", "repo": "DataDog/datadog-agent", "url": "https://github.com/DataDog/datadog-agent/pull/21852" }
gharchive/pull-request
Skip TestWindowsTestSuite/TestManualProcessDiscoveryCheck What does this PR do? Skip TestProcessDiscoveryCheck in the Process Agent Windows E2E test suite due to flakiness https://github.com/DataDog/datadog-agent/pull/21842 skipped the wrong test Motivation Test sometimes flakes when the number of processes returned is greater than 100. When this happens, two JSON objects are returned, which causes the following error during unmarshaling: Error: Received unexpected error: invalid character '{' after top-level value Test: TestWindowsTestSuite/TestManualProcessDiscoveryCheck Messages: failed to unmarshal process check output Additional Notes Possible Drawbacks / Trade-offs Describe how to test/QA your changes Reviewer's Checklist [x] If known, an appropriate milestone has been selected; otherwise the Triage milestone is set. [ ] Use the major_change label if your change either has a major impact on the code base, is impacting multiple teams or is changing important well-established internals of the Agent. This label will be use during QA to make sure each team pay extra attention to the changed behavior. For any customer facing change use a releasenote. [ ] A release note has been added or the changelog/no-changelog label has been applied. [ ] Changed code has automated tests for its functionality. [x] Adequate QA/testing plan information is provided. Except if the qa/skip-qa label, with required either qa/done or qa/no-code-change labels, are applied. [x] At least one team/.. label has been applied, indicating the team(s) that should QA this change. [x] If applicable, docs team has been notified or an issue has been opened on the documentation repo. [ ] If applicable, the need-change/operator and need-change/helm labels have been applied. [ ] If applicable, the k8s/<min-version> label, indicating the lowest Kubernetes version compatible with this feature. [ ] If applicable, the config template has been updated. Made sure it's skipping the correct test now: --- PASS: TestWindowsTestSuite (854.18s) --- PASS: TestWindowsTestSuite/TestManualProcessCheck (3.47s) --- PASS: TestWindowsTestSuite/TestManualProcessCheckWithIO (27.33s) --- SKIP: TestWindowsTestSuite/TestManualProcessDiscoveryCheck (24.30s) --- PASS: TestWindowsTestSuite/TestProcessCheck (16.86s) --- PASS: TestWindowsTestSuite/TestProcessCheckIO (51.58s) --- PASS: TestWindowsTestSuite/TestProcessDiscoveryCheck (64.73s) PASS ok github.com/DataDog/datadog-agent/test/new-e2e/tests/process 1240.863s https://gitlab.ddbuild.io/DataDog/datadog-agent/-/jobs/400199699 /merge
2025-04-01T06:36:53.018656
2024-05-23T15:40:18
2313242442
{ "authors": [ "jonbodner", "rarguelloF" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:739", "repo": "DataDog/datadog-agent", "url": "https://github.com/DataDog/datadog-agent/pull/25868" }
gharchive/pull-request
service_discovery: add telemetry What does this PR do? Adds telemetry for the service_discovery check. Motivation Get data on how the service_discovery check is performing/behaving. Additional Notes Possible Drawbacks / Trade-offs Describe how to test/QA your changes Additional code coverage is needed, too. /merge
2025-04-01T06:36:53.022566
2024-07-01T10:57:59
2383521531
{ "authors": [ "adel121" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:740", "repo": "DataDog/datadog-agent", "url": "https://github.com/DataDog/datadog-agent/pull/27186" }
gharchive/pull-request
check for group in IsNodeMetadata What does this PR do? This PR checks for the resource group in IsNodeMetadata to avoid returning true for nodes that are not in the empty Kubernetes group. Motivation Some resources might have the name nodes, but they belong to another (custom) API group (i.e., not the empty group in Kubernetes). IsNodeMetadata should return false in these cases. Additional Notes Currently there is an issue in the resource-group-version mapping for generic metadata collection, which, in some cases, leads to watching metrics.k8s.io/v1beta1, Resource=nodes instead of /v1, Resource=nodes. This will be fixed in a separate PR. The fix done in this PR is useful to avoid using the metadata collected for metrics.k8s.io/v1beta1, Resource=nodes as if it were native node metadata, which would result in incorrect data. Possible Drawbacks / Trade-offs Describe how to test/QA your changes No need for QA, we have automated testing. /merge
2025-04-01T06:36:53.024778
2024-07-02T19:19:44
2386984032
{ "authors": [ "paulcacheux" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:741", "repo": "DataDog/datadog-agent", "url": "https://github.com/DataDog/datadog-agent/pull/27261" }
gharchive/pull-request
[CWS] remove now unused secl.json What does this PR do? This file was split into secl_linux.json and secl_windows.json. It can safely be removed now. Motivation Additional Notes Possible Drawbacks / Trade-offs Describe how to test/QA your changes /merge
2025-04-01T06:36:53.027471
2024-08-26T12:03:06
2486693649
{ "authors": [ "gjulianm" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:742", "repo": "DataDog/datadog-agent", "url": "https://github.com/DataDog/datadog-agent/pull/28740" }
gharchive/pull-request
[EBPF] Fix the suggested kmt.test command when creating config from CI What does this PR do? Fixes the suggested kmt.test command that is shown when the --from-ci-pipeline argument is created in kmt.config. Also adds the list of failed tests to the command via the --run argument. Motivation Additional Notes Possible Drawbacks / Trade-offs Describe how to test/QA your changes /merge
2025-04-01T06:36:53.029492
2024-12-18T17:13:49
2748354314
{ "authors": [ "BaptisteFoy" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:743", "repo": "DataDog/datadog-agent", "url": "https://github.com/DataDog/datadog-agent/pull/32356" }
gharchive/pull-request
fix(installer): Make policy metadata files root-owned & world-readable What does this PR do? Fixes a bug at installer install: we try to write the policy metadata files as dd-agent before having created the dd-agent user. This PR makes the file owned by root instead, and world-readable so that the daemon can read it. There is no sensitive data in this file anyway. Motivation Describe how you validated your changes Tested manually on a VM + E2E tests Possible Drawbacks / Trade-offs Additional Notes /merge
2025-04-01T06:36:53.033651
2020-07-24T19:40:50
665376304
{ "authors": [ "jared-gs", "mx-psi" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:744", "repo": "DataDog/datadog-agent", "url": "https://github.com/DataDog/datadog-agent/pull/6053" }
gharchive/pull-request
Add system.mem.slab_reclaimable gauge What does this PR do? Adds a gauge for system.mem.slab_reclaimable. This is part of slab memory that might be reclaimed (i.e. caches). Motivation Datadog 7.x adds SReclaimable memory, if available on the system, to the system.mem.cached gauge by default: https://github.com/shirou/gopsutil/commit/f9e238c38b5f16a36794dd2f0f751c7376c64a60. This may lead to inconsistent metrics for clients migrating from Datadog 5.x, where system.mem.cached didn't include SReclaimable memory. Adding a gauge for system.mem.slab_reclaimable allows inverse calculation to remove this value from the system.mem.cached gauge. Additional Notes This PR is a follow up to https://github.com/DataDog/datadog-agent/issues/5038. Thanks @mx-psi, I've incorporated those suggestions. Thank you so much for your patience as I figured this out! 😅 I think the CLA is good now. No probs, thanks again!
2025-04-01T06:36:53.048578
2024-10-11T01:12:17
2580174430
{ "authors": [ "dmehala", "koizumi7010" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:745", "repo": "DataDog/dd-trace-cpp", "url": "https://github.com/DataDog/dd-trace-cpp/issues/164" }
gharchive/issue
404 Errors ( Unexpected telemetry response ) from Datadog Trace Hello, Recently I updated Istio from version 1.20.8 to 1.21.6. Immediately after the update, the following errors started occurring intermittently in the envoy tracing scope for every istio-proxy container: [source/extensions/tracers/datadog/logger.cc:23] Unexpected Remote Configuration status 503 with body (if any, starts on next line): upstream connect error or disconnect/reset before headers. reset reason: connection termination [source/extensions/tracers/datadog/logger.cc:23] Unexpected telemetry response status 404 with body (if any, starts on next line): 404 page not found I was able to resolve the first error by referring to this issue and setting DD_REMOTE_CONFIGURATION_ENABLED = "false" in the IstioOperator YAML template file. However, I haven't been able to solve the second error yet. I am not familiar with C++, but I found code blocks that seem to be involved in this error log: https://github.com/DataDog/dd-trace-cpp/blob/main/src/datadog/datadog_agent.cpp#L176-L190 If anyone understands the cause of this error and knows how to resolve it, I would greatly appreciate your help. Hi @koizumi7010 We collect telemetry to get more insight into the tracer and its usage. This feature can be disabled with DD_INSTRUMENTATION_TELEMETRY_ENABLED=false. Learn more on all the configurations we support. Let me know if that solves your issue :) @dmehala Thanks for the reply! Setting DD_INSTRUMENTATION_TELEMETRY_ENABLED = 'false' in the IstioOperator YAML template file eliminated the error :) 👍🏼
2025-04-01T06:36:53.420652
2017-01-20T15:13:45
202164044
{ "authors": [ "truthbk" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:746", "repo": "DataDog/integrations-core", "url": "https://github.com/DataDog/integrations-core/issues/124" }
gharchive/issue
[appveyor][windows] fix winfixme checks So for some reason the mocking of modules loaded in the check breaks on Windows. I'm not sure what Python does differently there, but it's definitely Windows-specific. We'll have to get to the bottom of it and address it. Fixed here: https://github.com/DataDog/integrations-core/pull/79
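For context, module-level mocking in check tests typically patches sys.modules before the check imports its dependency; a generic sketch of the pattern (the psutil name here is just a placeholder dependency, not necessarily what the winfixme checks mock):

import sys
from unittest import mock

# Replace a module before the code under test imports it. Differences in
# import order between platforms can make this kind of patching fragile.
fake_psutil = mock.MagicMock()
fake_psutil.cpu_percent.return_value = 12.5

with mock.patch.dict(sys.modules, {"psutil": fake_psutil}):
    import psutil  # resolves to the mock inside this block
    assert psutil.cpu_percent() == 12.5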
2025-04-01T06:36:53.429101
2019-05-23T16:39:33
447765183
{ "authors": [ "dsahni", "goodgrits", "hithwen" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:747", "repo": "DataDog/integrations-core", "url": "https://github.com/DataDog/integrations-core/pull/3801" }
gharchive/pull-request
[ibm_mq] fix queue auto discovery to include any qlocal in addition to qmodel, … [ibm_mq] fix queue auto discovery to include any qlocal in addition to qmodel, filtering to include only DEFTYPE(PREDEFINED) types What does this PR do? Expand metric collection for queues: This change allows MQ queues of type QLOCAL to be considered for metric collection in addition to those of type QMODEL. The existing implementation only allows queues of type QMODEL. This is required for my use case, as nearly all of our queues are of type QLOCAL. (I suspect this is true for most orgs that use IBM MQ.) Filtering out of 'system' queues is also updated to look at the DEFTYPE attribute to determine a 'system' queue, in lieu of the hard-coded list found in the existing version of config.py -- this filter is now based on whether a queue's DEFTYPE attribute is PREDEFINED (if so, consider it, else ignore). Also, this change provides regex support for the queue_patterns configuration (overcoming MQ's right-side-wildcard-only pattern treatment). Motivation :information_source: This change was discussed with DataDog pre-sales during a trial-period checkpoint call. DataDog was eager to review the changes and requested that this patch be submitted as a PR. It is understood that there may be more refinement needed for this PR if DataDog chooses to incorporate it, and that the PR may not be merge-ready in its current state. While involved with a Datadog evaluation, I found that there were no metrics reported for any queues using the auto-discovery configuration. After looking in the integration code, I found the limitation mentioned above: queues would have to be of type QMODEL in order to be considered for metric collection. This meant that the IBM MQ integration would be unusable for my use case. In order to proceed with my evaluation, QLOCALs would also have to be considered. Additional Notes None. Review checklist (to be filled by reviewers) [ ] PR title must be written as a CHANGELOG entry (see why) [ ] File changes must correspond to the primary purpose of the PR as described in the title (small unrelated changes should have their own PR) [ ] PR must have changelog/ and integration/ labels attached [ ] Feature or bugfix must have tests [ ] Git history must be clean [ ] If PR adds a configuration option, it must be added to the configuration file. Hi @goodgrits, I'm a PM at Datadog and would like to thank you for submitting this PR! While our team is reviewing, I'd like to chat with you in more detail about this request. Could you please email me at<EMAIL_ADDRESS>with the best contact information to reach you at? Please have a look at the proposed changes in https://github.com/DataDog/integrations-core/tree/julia/unit-tests. There is a type fix (the issue was already present), and it updates the example file and test to account for regexes. @hithwen : Should your #3893 supersede this PR? That would seem fine to me. @goodgrits OK, let's do that and close this one.
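To make the regex-based discovery concrete, here is a standalone sketch of the filtering described above; the queue names are invented, and this simplifies the real check rather than reproducing its code:

import re

# Queues as they might come back from an MQ PCF inquiry:
# (name, deftype) pairs; PREDEFINED marks admin-defined queues.
discovered_queues = [
    ("APP.ORDERS.IN", "PREDEFINED"),
    ("SYSTEM.DEFAULT.LOCAL.QUEUE", "PREDEFINED"),
    ("AMQ.TEMP.12345", "TEMPDYN"),
]

queue_patterns = [r"^APP\..*"]  # full regexes instead of MQ's trailing '*'

matching = [
    name
    for name, deftype in discovered_queues
    if deftype == "PREDEFINED" and any(re.match(p, name) for p in queue_patterns)
]
print(matching)  # ['APP.ORDERS.IN']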
2025-04-01T06:36:53.431429
2020-12-10T16:47:29
761404078
{ "authors": [ "AlexandreYang" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:748", "repo": "DataDog/integrations-core", "url": "https://github.com/DataDog/integrations-core/pull/8183" }
gharchive/pull-request
Add loader config What does this PR do? Add a loader config. Note: We might want to delay this change to keep is_jmx for one or more Agent versions. Motivation https://github.com/DataDog/datadog-agent/pull/6700 Related PR: https://github.com/DataDog/datadog-agent/pull/6953 Closing for now, since there is not much benefit right now to using loader instead of is_jmx.
2025-04-01T06:36:53.440989
2018-02-11T01:57:01
296151794
{ "authors": [ "Andrey-Pavlov", "StephenKappel", "jrtechs", "mrgreywater" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:749", "repo": "DataDog/piecewise", "url": "https://github.com/DataDog/piecewise/issues/4" }
gharchive/issue
[Feature] robust regression and error function With my dataset, I had some trouble where, no matter what is entered as min_stop_frac, it would always return the same number of segments. It would merge large segments that had almost no deviations before (creating a visible error), instead of merging small segments with bigger deviations. Taking the square root of the squared error that is returned by the OLS linear regression function fixed it, reduced the error, and improved the result immensely: I replaced https://github.com/DataDog/piecewise/blob/3a15a1c3113cbbecf979bb318f19f2c7fbdc9408/piecewise/regressor.py#L301 with return tuple(coeffs), 0.0 if len(error) == 0 else float(math.sqrt(error)) I'm not sure if this should be pushed as a PR, as it would have to be checked against more data, but maybe it would make sense from an API standpoint to let the user supply their own linear regression/cost function, so one can use a more robust regression like the Theil-Sen estimator or RANSAC in case the dataset has outliers. Thanks a lot for this library, it helped me quite a bit. I definitely like this idea of making the cost function pluggable. If you're interested in pushing a PR, that'd be great. Otherwise, I can probably get around to adding this functionality eventually, although I'm not sure how soon. @mrgreywater Thank you! @StephenKappel But how do I increase the number of segments? I want to see red lines on the chart. min_stop_frac doesn't help https://puu.sh/AP63P/967d3d3243.png Hey @jrtechs -- I'd be happy to have a PR to make this better! Here are some ideas... Currently, the algorithm only remembers the segments from what it thinks is the best state so far. It does this based on the increase in total error (i.e., the cost of the merge). It does not consider how many merges remain before it ends up with a single segment, nor does it consider granular information about the cost of merges in the past. The algorithm is trying to catch the big jump in total error that normally accompanies the one-merge-too-far. Something like shown here... I think the problem comes when that sudden increase in error starts with an error increase that isn't the single largest error increase. For example, the algorithm will do the right thing with this series of merge costs: 1, 1, 1, 2, 3, 3, 3, 4, 5, 5, 5, 5, 100, 125, 100, 105 But it will go one merge too far for this series of merge costs: 1, 1, 1, 2, 3, 3, 3, 4, 5, 5, 5, 5, 50, 105, 100, 125 We could try: Tracking the cost of all merges to date, so we could use some more interesting statistics of the merge costs rather than just the max and cumulative sum of the past. Remembering more than one state, so that we don't need to make the decision of whether or not to discard a previous potentially-interesting state until we see all the merge costs. The hope would be that, in retrospect, we'd be able to make smarter choices than we can on the fly. A simpler solution for the specific example you provide could be providing a parameter to require more significant increases in cost for a merge to be considered the tipping point. On this line, cost_increase == biggest_cost_increase could become something like cost_increase >= 2.0*biggest_cost_increase, where 2.0 would be parameterized. @StephenKappel I will definitely look into that. Just for clarification, what would be the difference between min_stop_frac and this new error_increase_tolerance?
Does the min_stop_frac define the increase in error allowed before we stop merging, whereas the error_increase_tolerance would help us push more partitions/buckets to be merged?
2025-04-01T06:36:53.459577
2024-03-06T06:00:36
2170700614
{ "authors": [ "gaoyan1998", "zhengtingxue" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:752", "repo": "DataLinkDC/dinky", "url": "https://github.com/DataLinkDC/dinky/issues/3244" }
gharchive/issue
[Bug] [k8s app] submit job with k8s app ha mode,flink have the same jobid,submit and cancle job throw error Search before asking [X] I had searched in the issues and found no similar issues. What happened submit job with k8s app ha mode,flink have the same jobid,like<PHONE_NUMBER>924de00000000000000000;submit job throw error "Duplicate entry 'test20000000006924de00000000000000000-1' for key 'cluster_un_idx1'" when the old flink instance still exists,and cancle job throw error "Expected one result (or null) to be returned by selectOne(), but found: 5" because of the same id; What you expected to happen submit job and cancle job work well How to reproduce submit job with k8s app ha mode, Anything else No response Version 1.0.0 Are you willing to submit PR? [ ] Yes I am willing to submit a PR! Code of Conduct [X] I agree to follow this project's Code of Conduct mysql表内可以试着取消name唯一约束,这个不影响任务
2025-04-01T06:36:53.469249
2021-03-10T19:24:24
828226316
{ "authors": [ "HadesArchitect", "RooDK" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:753", "repo": "DataStax-Academy/Intro-to-Cassandra-for-Developers", "url": "https://github.com/DataStax-Academy/Intro-to-Cassandra-for-Developers/issues/12" }
gharchive/issue
[HW] Roozbeh Name: Roozbeh Linkedin Profile: www.linkedin.com/in/roozbeh-dargahi Attach the homework screenshots below: https://api.badgr.io/public/assertions/TC5ZhD0AT2yOFtxc2DwH8A?identity__url=https%3A%2F%2Fwww.linkedin.com%2Fin%2Froozbeh-dargahi
2025-04-01T06:36:53.470535
2015-01-19T16:15:29
54782651
{ "authors": [ "ibash", "jonathankeebler", "micahlmartin" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:754", "repo": "Datahero/node-pardot", "url": "https://github.com/Datahero/node-pardot/pull/1" }
gharchive/pull-request
Create and update prospects

I added a few functions to create and update prospects. They're all yours if you want them in the library :)

Looks pretty good, the main thing is that the style doesn't match.

Can you offer some specific style guidance to @jonathankeebler? It would be nice to get this functionality merged in. Especially since it's the only package up on npm. In the meantime I'm installing his branch.
2025-04-01T06:36:53.472821
2021-12-22T21:21:01
1087197620
{ "authors": [ "fvazquez-caylent", "tamr-teamcity" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:755", "repo": "Datatamer/terraform-aws-tamr-vm", "url": "https://github.com/Datatamer/terraform-aws-tamr-vm/pull/38" }
gharchive/pull-request
CA 124 Deprecates s3_policy_arns & CA 125 Cloudwatch logs organization

Deprecates s3_policy_arns in favor of additional_policy_arns. Permits the ability to create the cloudwatch log group and pass it to the shell script.

This PR doesn't appear to be linked to a DevOps/SRE jira ticket
2025-04-01T06:36:53.483534
2022-08-24T06:16:02
1348922544
{ "authors": [ "573", "GeorgesAlkhouri", "MatrixManAtYrService", "ryanswrt" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:756", "repo": "DavHau/mach-nix", "url": "https://github.com/DavHau/mach-nix/issues/506" }
gharchive/issue
When used in a flake: error: attribute 'currentSystem' missing

Hi, I'm a newbie, so I apologize if this question is in the wrong place. I'm trying to use mach-nix in this flake.nix:

```nix
{
  outputs = { self, nixpkgs }:
    let
      mach-nix = import (builtins.fetchGit {
        url = "https://github.com/DavHau/mach-nix";
        ref = "refs/tags/3.5.0";
        rev = "7e14360bde07dcae32e5e24f366c83272f52923f";
      }) { };
    in {
      defaultPackage.x86_64-linux = mach-nix.mkPython rec {
        requirements = ''
          numpy
        '';
      };
    };
}
```

But when I run nix shell . I get this error:

```
error: attribute 'currentSystem' missing

       at /nix/store/5n402azp0s9vza4rziv4z5y88v2cv1mq-nixpkgs/pkgs/top-level/impure.nix:17:43:

           16|   # (build, in GNU Autotools parlance) platform.
           17|   localSystem ? { system = args.system or builtins.currentSystem; }
             |                                           ^
           18|
```

I was reading a blog about this which said:

> You may get an error like this: error: attribute 'currentSystem' missing
> This happens because in the context of flakes, builtins.currentSystem does not exist (it is, after all, an impurity). If you come across this, try to refactor your legacy-nix portion so the system is always an argument, and provide that argument from your flake, as above.

From that I get the feeling that I need to provide x86_64-linux as a parameter somehow. But where do I put it?

You should follow the example described here: https://github.com/DavHau/mach-nix/blob/master/examples.md#use-mach-nix-from-a-flake - generally you want to use flake inputs to "import" remote nix packages/code. You could try (untested):

```nix
{
  outputs = { self, nixpkgs }:
    let
      mach-nix = import (builtins.fetchGit {
        url = "https://github.com/DavHau/mach-nix";
        ref = "refs/tags/3.5.0";
        rev = "7e14360bde07dcae32e5e24f366c83272f52923f";
      }) { inherit system; };
    in {
      defaultPackage.x86_64-linux = mach-nix.mkPython rec {
        requirements = ''
          numpy
        '';
      };
    };
}
```

{ inherit system; } is not working. What you can do as a workaround is to build your configuration with the --impure option.
2025-04-01T06:36:53.488958
2016-09-12T16:02:27
176415015
{ "authors": [ "DaveWoodCom", "humblehacker" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:757", "repo": "DaveWoodCom/XCGLogger", "url": "https://github.com/DaveWoodCom/XCGLogger/pull/151" }
gharchive/pull-request
update projects and library for swift 2.3 (updated) I started with @bersaelor's swift_2.3 branch, rebased against @DaveWoodCom's master, and made one small change to get it to build with Xcode 8 GM. I don't see this getting merged to master, but could you maybe keep a swift_2.3 compatibility branch with these changes? Thanks for this PR. You should be able to use the swift_2.3 branch now. Let me know if there are any issues.
2025-04-01T06:36:53.503372
2023-02-17T06:00:12
1588799376
{ "authors": [ "DavidDeSimone", "ShadowApex" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:758", "repo": "DavidDeSimone/OpenCloudSaves", "url": "https://github.com/DavidDeSimone/OpenCloudSaves/issues/49" }
gharchive/issue
Support building OpenCloudSaves as a shared library for 3rd party integration

Is your feature request related to a problem? Please describe.
OpenCloudSaves currently only provides an executable binary, which makes it more difficult for 3rd party integration.

Describe the solution you'd like
Provide a shared library build of OpenCloudSaves. Go build modes allow exporting Go methods as a C shared library, which can be used by 3rd party software to integrate OpenCloudSaves.

Describe alternatives you've considered
One alternative would be to use the CLI provided by OpenCloudSaves to integrate it into other software.

Additional context
I am currently developing a gamepad native game launcher and overlay called OpenGamepadUI as a free and open source alternative. I would love to be able to integrate OpenCloudSaves into it either natively or as a plugin.

Will target for 0.17

Right now my concept for the API will be to expose a C interface that allows an invocation of the application using our option flags https://github.com/DavidDeSimone/OpenCloudSaves/blob/main/main.go#L14-L23 This will be our stable interface that will follow semver conventions. This way you can embed OpenCloudSave into your application without having to invoke it from the command line. Internally, I try to have the GUI basically "invoke" the command line by using calls to CLIMain, so in general the app internally uses those flags to drive behavior.

That would be perfect!

> That would be perfect!

A couple of other issues I am thinking through:

OpenCloudSave currently compiles and distributes a copy of rclone to perform the actual syncing. There are a couple of solutions I can think of for this:
a. Require users of opencloudsave.so to provide an rclone for usage
b. Try to bundle rclone into open cloud save (not sure of the difficulty here)

On windows, OpenCloudSave requires a WebView DLL and WebView2 to be installed by the end user. This is all currently handled by our MSI - these requirements would end up passed on to the users of opencloudsave.so
a. I don't know of another way around this - but I imagine it won't be a deal breaker for most applications, since they can just copy our install flow/distribution from how we build our MSI.

I think it's reasonable to require the integrator to bundle or make their package depend on rclone themselves if they're using the shared library. Maybe OpenCloudSave could also expose an interface to specify the path to rclone if the integrating application has rclone in a custom directory? For OpenGamepadUI, since it's Linux-only, I was just planning on adding rclone as a dependency after integrating OpenCloudSave.

> I think it's reasonable to require the integrator to bundle or make their package depend on rclone themselves if they're using the shared library. Maybe OpenCloudSave could also expose an interface to specify the path to rclone if the integrating application has rclone in a custom directory? For OpenGamepadUI, since it's Linux-only, I was just planning on adding rclone as a dependency after integrating OpenCloudSave.

Yeah, this sounds reasonable to me - I like the idea of exposing a hook for a user to specify the path to the rclone they want to use. I might expose that in the GUI layer as well.
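As a rough illustration of what consuming such a C shared library could look like from a third-party process, here is a hedged Python ctypes sketch. The library name libopencloudsave.so and the exported symbol RunWithArgs are purely hypothetical placeholders, since the actual exported interface had not been settled in this thread:

```python
import ctypes

# Load the hypothetical shared library produced by
# `go build -buildmode=c-shared` (file name is an assumption).
lib = ctypes.CDLL("./libopencloudsave.so")

# Assume an exported entry point that accepts the same option flags
# as the CLI, passed as a single C string, and returns an exit code.
lib.RunWithArgs.argtypes = [ctypes.c_char_p]
lib.RunWithArgs.restype = ctypes.c_int

exit_code = lib.RunWithArgs(b"--sync --game some-game")
print("opencloudsave exited with", exit_code)
```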
2025-04-01T06:36:53.517218
2015-12-20T13:38:03
123160159
{ "authors": [ "DavidWatkins", "KhaledAtef" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:759", "repo": "DavidWatkins/Dice", "url": "https://github.com/DavidWatkins/Dice/issues/125" }
gharchive/issue
test-gcd.dice Bug. You cannot assign values to parameters

It was too complicated to support assignment of values to parameters; please remove all instances of this and test-gcd will work.

Corrected.
2025-04-01T06:36:53.545300
2023-09-16T05:24:40
1899304591
{ "authors": [ "Seann-Moser", "TCMine" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:760", "repo": "DawnGroveStudios/GodotP2PNetwork", "url": "https://github.com/DawnGroveStudios/GodotP2PNetwork/issues/1" }
gharchive/issue
Example is unavailable

The example link in the readme (https://github.com/DawnGroveStudios/GodotP2PNetworkExample) leads to a 404.

Sorry about that, this example project is still a work in progress. We will hopefully have it finished within the next few days.

Sorry for the delay, I have made this repo public. However, it is incomplete since I am spending most of my time on a different game. I will work on improving and adding more examples to that repo in the coming weeks, as well as improving the documentation. https://github.com/DawnGroveStudios/GodotP2PNetworkExample/tree/main/basic

Closing out for now, but please leave another issue if you would like to see a specific example or if you have any ideas to help improve this plugin.
2025-04-01T06:36:53.560936
2021-08-04T08:09:17
960036934
{ "authors": [ "siradji" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:761", "repo": "DeFiCh/jellyfish", "url": "https://github.com/DeFiCh/jellyfish/pull/558" }
gharchive/pull-request
List Anchors RPC

What kind of PR is this?:
/kind feature

What this PR does / why we need it:
Implement listanchors RPC

Which issue(s) does this PR fixes?:
Fixes #48

Additional comments?:
I want to write a test case that asserts the number of anchors that is returned when the listAnchors RPC is called. I have tried creating anchors via spv_anchorrewards but to no avail. My questions are:

What is the difference between spv_listanchors and listanchors?
Is there a way to create anchors so that I can make the assertion I outlined above?
2025-04-01T06:36:53.578875
2022-09-30T11:09:19
1392274075
{ "authors": [ "fuxingloh", "wafflespeanut" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:762", "repo": "DeFiCh/metachain", "url": "https://github.com/DeFiCh/metachain/issues/106" }
gharchive/issue
FFI communication between native and meta chain What would you like to be added: As an alternative to JSON RPC communication (#80), I propose invoking functions through FFI. Metachain (MC) can be compiled as a staticlib, which then gets linked to defid (NC). The PoC for this approach exists in libmc branch in metachain and ain repository. CLI args from NC will be passed to MC, which will be bootstrapped and then defid will follow. Why is this needed: This way, we can eliminate the security concerns for running MC as an independent process and communicating through RPC (#45), which also reduces TCP overhead and avoids us packaging and running two binaries which will most probably exist in the same machine. #94 wouldn't have any adverse impact on this decision as the source of the bottleneck will still be in defid. It should be noted that MC and NC will not have any knowledge of what the other is doing and neither will one manage the other. However, when MC fails and exits, it will shutdown defid as well (at this point). As for e2e testing, we can still expose RPC from MC, because essentially they'll be the same FFI calls. cc @prasannavl @fuxingloh @DieHard073055 As an alternative to JSON RPC communication (#80), I propose invoking functions through FFI. Metachain (MC) can be compiled as a staticlib, which then gets linked to defid (NC). The PoC for this approach exists in libmc branch in ain repository. CLI args from NC will be passed to MC, which will be bootstrapped and then defid will follow. I think it's sound given if both options are provided with a shared Interfacing for the sake of testing with the ability to force defid to switch between prepackaged binary or via a provided RPC URL --metachain_url=.... #97 is also a very important related issue where in the future or now, we might not want to "force" all defid clients to be mining for NC and MC, given MC has different performance and storage requirements for masternode operators. This way, we can eliminate the security concerns for running MC as an independent process and communicating through RPC (#45), Although #45 is important, it isn't sane for masternode operators to operate on a host machine they can't trust. It's more of an additional security good practice IMO, as I don't think they should not be operating their masternode over the network. which also reduces TCP overhead and avoids us packaging and running two binaries which will most probably exist in the same machine. #94 wouldn't have any adverse impact on this decision as the source of the bottleneck will still be in defid. From the current literature on our threading model, I think #94 requires a renewal of our existing threading model for MC integration rather than the bottleneck of integration. I think it's sound given if both options are provided with a shared Interfacing for the sake of testing with the ability to force defid to switch between prepackaged binary or via a provided RPC URL --metachain_url=.... While I'm okay with supporting RPC for the purpose of testing, the whole aim of using FFI is to prepackage the binaries and to not worry about having RPC calls in production or needing to secure them, but if we're planning to have it in the near future anyway, then I don't think the FFI way serves any purpose. @prasannavl Need your opinion here. 
> https://github.com/DeFiCh/metachain/issues/97 is also a very important related issue where in the future or now, we might not want to "force" all defid clients to be mining for NC and MC, given MC has different performance and storage requirements for masternode operators.

Are the performance and storage requirements the only constraints for not being able to run MC alongside NC?

> Although #45 is important, it isn't sane for masternode operators to operate on a host machine they can't trust.

This is not specific to MC though, right? Wouldn't it apply to NC as well?

While I'm okay with supporting RPC for the purpose of testing, the whole aim of using FFI is to prepackage the binaries and to not worry about having RPC calls in production or needing to secure them, but if we're planning to have it in the near future anyway, then I don't think the FFI way serves any purpose. @prasannavl Need your opinion here.

Btw, @mambisi has brought it to my attention that going through the FFI route will not be as easy as it sounds, because we won't get the context from Substrate outside of a wasm environment. Until we have a clear direction there, this is how we've currently decided to go:

```mermaid
graph
  subgraph DeFiCh/ain
    nc[Native chain consensus]
    nb[Native chain bootstrap]
    nb --> nc
    nb --" FFI "--> mb
    nc --" FFI "--> nrc
    nb --" FFI "--> lock
    subgraph "DeFiCh/libain-rs"
      pt["Protobuf spec"]
      ngc["gRPC client"]
      nrc["RPC client"]
      pt --> ngc
      pt --> nrc
    end
    subgraph DeFiCh/metachain
      mb[Metachain bootstrap]
      mc[Metachain consensus]
      mprc[RPC server]
      lock[Random number agreement]
      mprc --> lock
      lock --> mprc
      mb --> mc
      mprc --> mc
      nrc --" JSON RPC "--> mprc
    end
  end
```

Metachain build will also emit a static library in addition to the executable. This gets linked to defid. When defid starts, it has the option to boot up metachain (and arguments following a certain flag will be passed to it). When metachain terminates, it can issue a shutdown request to defid. When metachain is under defid's management, both will agree upon a large random number which will then be set alongside the RPC arguments. This way, any incoming request (to those consensus RPC endpoints) not coming from defid will be dropped. The consensus mechanism will still be through RPC (addressed in #80).
FFI will be used only to boot up metachain and agree upon the random number. E2E tests will work normally on both the metachain side and the ain side because there won't be any breaking changes.

> Btw, @mambisi has brought it to my attention that going through the FFI route will not be as easy as it sounds, because we won't get the context from Substrate outside of a wasm environment. Until we have a clear direction there, this is how we've currently decided to go:

Yup sounds good, I'm afraid of that as well.

> Metachain build will also emit a static library in addition to the executable. When defid starts, it has the option to boot up metachain (and arguments following a certain flag will be passed to it). When metachain terminates, it can issue a shutdown request to defid.

Yup, this is great; it's important that it's optional. https://github.com/DeFiCh/metachain/issues/97#issuecomment-1277355273 Does this also mean instead of starting up MetaChain, we could also provide a URL with a port number?

> When metachain is under defid's management, both will agree upon a large random number which will then be set alongside the RPC arguments. This way, any incoming request (to those consensus RPC endpoints) not coming from defid will be dropped.

Could we use JWT instead if it's not too troublesome? Minor, I'm also very comfortable with this design for now, but it might require changes. Let's see, it's not important now.

> The consensus mechanism will still be through RPC (addressed in JSON-RPC communication between Native Chain and Meta Chain #80). FFI will be used only to boot up metachain and agree upon the random number. E2E tests will work normally on both the metachain side and the ain side because there won't be any breaking changes.

Sounds good.

> Does this also mean instead of starting up MetaChain, we could also provide a URL with a port number?

I didn't have this in mind, but it shouldn't be hard to implement with the current design.

> Could we use JWT instead if it's not too troublesome? Minor, I'm also very comfortable with this design for now, but it might require changes. Let's see, it's not important now.

Agreed, let's get both defid and metachain running in a single node. I'm sure we can improve the mechanism later on.
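To make the random-number agreement concrete, here is a minimal Python sketch of the gating idea described above. The header name, handler shape, and downstream function are all assumptions for illustration (the real projects are C++ and Rust, and no interface had been fixed at this point in the thread):

```python
import secrets

# At boot, defid would generate a large random token and hand it to
# metachain over FFI; both processes keep it for the session.
SHARED_TOKEN = secrets.token_hex(32)

def process(body):
    # Stub standing in for the actual consensus handling.
    return {"ok": True, "echo": body}

def handle_consensus_rpc(headers, body):
    # Drop any consensus RPC request that does not carry the token
    # agreed upon at startup, i.e. anything not coming from defid.
    if headers.get("X-Auth-Token") != SHARED_TOKEN:
        raise PermissionError("rejected: missing or invalid shared token")
    return process(body)
```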
2025-04-01T06:36:53.666115
2024-03-21T09:14:46
2199607765
{ "authors": [ "iPsych", "m-reuter" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:763", "repo": "Deep-MI/FastSurfer", "url": "https://github.com/Deep-MI/FastSurfer/issues/491" }
gharchive/issue
Is it possible to get segstats of the cerebellum after running the FastSurfer pipeline?

Question/Support Request
Is it possible to get segstats of the cerebellum after running the FastSurfer pipeline? Since the default option does not generate segstats, I wonder if I should run the whole pipeline, or whether I can generate segstats for the cerebellum afterwards.

Bests,

After running the cerebnet module, which is part of the segmentation pipeline, you should get the stats file, see: https://deep-mi.org/FastSurfer/dev/overview/OUTPUT_FILES.html#cerebnet-module You can run with --seg_only, which will include cerebnet. No need to run the longer surface pipeline for this. Make sure you use the latest release.

Thanks. Actually, I ran only cerebnet with already processed data via Python directly, as below:

python CerebNet/run_prediction.py --t1 $t1 --asegdkt_segfile $asegdkt_segfile --conformed_name $conformed_name --cereb_segfile $cereb_segfile --seg_log $seg_log --batch_size $batch_size --viewagg_device $viewagg --device $device --async_io --threads $threads

I missed --cereb_statsfile $cereb_statsfile, so the stats file was not generated.

You can either do the hard work of looking into the run_fastsurfer scripts and manually replicate the other steps, or you can just run with --seg_only again (takes only minutes per case) and get everything you need.
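For readers in the same spot, the fix amounts to re-running the same invocation with the flag that was missed. A hedged Python sketch follows; all paths and parameter values are placeholders, and only the flags come from the command above:

```python
import subprocess

# Placeholder paths -- substitute your own data.
t1 = "subject01/mri/orig.mgz"
asegdkt_segfile = "subject01/mri/asegdkt.deep.mgz"
conformed_name = "subject01/mri/conformed.mgz"
cereb_segfile = "subject01/mri/cerebellum.nii.gz"
cereb_statsfile = "subject01/stats/cerebellum.stats"
seg_log = "subject01/scripts/deep-seg.log"

subprocess.run([
    "python", "CerebNet/run_prediction.py",
    "--t1", t1,
    "--asegdkt_segfile", asegdkt_segfile,
    "--conformed_name", conformed_name,
    "--cereb_segfile", cereb_segfile,
    "--cereb_statsfile", cereb_statsfile,  # the flag missed above
    "--seg_log", seg_log,
    "--batch_size", "1",
    "--viewagg_device", "cpu",
    "--device", "cpu",
    "--async_io",
    "--threads", "4",
], check=True)
```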
2025-04-01T06:36:53.674335
2023-09-21T09:48:04
1906566414
{ "authors": [ "DaniBodor", "gcroci2" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:764", "repo": "DeepRank/deeprank2", "url": "https://github.com/DeepRank/deeprank2/issues/500" }
gharchive/issue
Add performance comparisons with previous packages

The table implemented in PR #493 shows the timings obtained for generating graphs/graphs+grids, atomic resolution, with all features except for the ones in the conservation module, because we don't have the pssm files for the data in the tutorials (for computing the performances of deeprank2, I used the raw data available at this address). We need some discussion here:

1. Is this a satisfying way of showing performances? Do we need to generate all the features possible (by adding conservation module features), and to add performances for residue resolution as well?
2. How do we do a fair comparison with the previously developed packages? Features are different in number and in how they are calculated, so if we use all features in all packages we can't know if the comparison is fair. Maybe we could just pick a couple of them which are the same in all packages (e.g., distance, residue type)?

When we have clearer ideas/plans about 1. and 2., compare deeprank2 with:

deeprank
[ ] PPIs, grid
deeprank-gnn
[ ] PPIs, graph
deeprank-mut
[ ] variants, grid

How would you advise we proceed here, especially for question 2.? @sonjageorgievska, @DaniBodor

I don't have a great idea about this, apart from just doing a "not fair" comparison and being open about it and explaining the differences.

> I don't have a great idea about this, apart from just doing a "not fair" comparison and being open about it and explaining the differences.

Actually I agree, I also think this is the only realistic option we have.
2025-04-01T06:36:53.694493
2020-04-05T08:59:15
594340515
{ "authors": [ "devGregA", "valentijnscholten" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:765", "repo": "DefectDojo/django-DefectDojo", "url": "https://github.com/DefectDojo/django-DefectDojo/issues/2151" }
gharchive/issue
alert notifications shown as enabled by default when they're not

When a user has not set up notifications yet, the Django view code defaults to creating a new Notifications model instance. This has all notification types set to be enabled for 'alert'. So on screen it looks to the user as if these notifications are enabled, when in reality they are not.

The solution is either to disable them by default in the Notifications model, or to somehow create a default Notifications model instance in the database whenever a new user is created.

Fixed for users created after https://github.com/DefectDojo/django-DefectDojo/blob/4be8d4c6e4f6e72f3574098e985d067b068bceb1/dojo/utils.py#L1971. Existing users may still suffer. If someone can write a short piece of code to fix this for existing users, we can add that as a migration.

@valentijnscholten are we okay to close this one?
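For the migration mentioned above, a rough sketch could look like the following. This is only an illustration: the app label "dojo", the use of Django's auth User model, and the migration dependency are assumptions rather than DefectDojo's actual code, so names would need to be checked against the real schema:

```python
from django.db import migrations

def create_default_notifications(apps, schema_editor):
    # Give every existing user an explicit Notifications row, so the
    # view no longer falls back to an implicit all-alerts-on instance.
    Notifications = apps.get_model("dojo", "Notifications")
    User = apps.get_model("auth", "User")
    for user in User.objects.all():
        # get_or_create leaves users who already saved settings untouched.
        Notifications.objects.get_or_create(user=user)

class Migration(migrations.Migration):
    # "XXXX_previous" is a placeholder for the latest dojo migration.
    dependencies = [("dojo", "XXXX_previous")]
    operations = [
        migrations.RunPython(create_default_notifications,
                             migrations.RunPython.noop),
    ]
```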