342123943
|
cd section fails in shell mode, returns no valid sections
On macOS High Sierra 10.13.1
Cannot use cd command to change my context for queries. Queries work normally, and I can run /:context QUERY normally.
Steps to repro
Install the script locally, set up permissions
$ curl https://cht.sh/:cht.sh > /usr/local/bin/cht.sh
$ chmod +x /usr/local/bin/cht.sh
install rlwrap for shell mode
$ brew install rlwrap
launch shell mode
cht.sh --shell
attempt to cd into any section/context
cht.sh> cd go
Invalid section: go
Valid sections:
cht.sh> cd csharp
Invalid section: csharp
Valid sections:
cht.sh> cd arduino
Invalid section: arduino
Valid sections:
I have exactly the same problem on debian installed under Windows Subsystem for Linux.
Yes, it is true, this was broken in this commit:
https://github.com/chubin/cheat.sh/commit/b24381c7403f956e0dfcf10417c7ebe2b8165edb
(and this was not detected by our regression tests).
I will fix this problem today. Thank you very much for reporting
Thank you very much for reporting. The problem is fixed, please test
Thanks.
It is working for me.
Thu Apr 21 07:56:28 BST 2022
Hi,
Running into this on macOS 12.3.1 (see attached animated gif):
The problem seems to be at https://github.com/chubin/cheat.sh/blob/562875eda610b0322819def25f2d27af1bf9469a/share/cht.sh.txt#L371
The following works for me:
curl -s "${CHTSH_URL}"/:list | grep ':list' | cut -d: -f1 | xargs
The above produces:
: ; curl -s https://cht.sh/:list | grep ':list' | cut -d: -f1 | xargs
awk/ bash/ bf/ c/ chapel/ clojure/ cmake/ coffee/ cpp/ csharp/ d/ dart/ elisp/ elixir/ elm/ erlang/ factor/ forth/ fortran/ fsharp/ git/ go/ groovy/ haskell/ java/ js/ julia/ kotlin/ latex/ lisp/ lua/ mathematica/ matlab/ nim/ objective-c/ ocaml/ octave/ perl/ perl6/ php/ python/ python3/ r/ racket/ ruby/ rust/ solidity/ swift/ tcl/ tcsh/ vb/
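The filtering step can be exercised offline on a few sample lines of /:list output (the sample entries below are made up for illustration):

```shell
# Feed sample /:list lines through the same grep/cut/xargs pipeline the
# patched script uses to build the section list.
printf '%s\n' 'go/:list' 'csharp/:list' 'arduino/:list' 'vim' \
  | grep ':list' | cut -d: -f1 | xargs
# -> go/ csharp/ arduino/
```

Lines without ':list' are dropped, and xargs joins the remaining section names onto one line, matching the output shown above.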
|
gharchive/issue
| 2018-07-17T23:28:10
|
2025-04-01T06:38:11.570583
|
{
"authors": [
"chubin",
"karlosp",
"phxvyper",
"rprimus"
],
"repo": "chubin/cheat.sh",
"url": "https://github.com/chubin/cheat.sh/issues/73",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2450946997
|
Gemma2 pleeeeease!
https://huggingface.co/google/gemma-2-9b
It has the same chat template as gemma-1. Please refer to the instructions for gemma-1.
|
gharchive/issue
| 2024-08-06T13:50:19
|
2025-04-01T06:38:11.592378
|
{
"authors": [
"Hapluckyy",
"chujiezheng"
],
"repo": "chujiezheng/chat_templates",
"url": "https://github.com/chujiezheng/chat_templates/issues/23",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
842692256
|
Question: How to align the nodes to the left?
Hi,
I would like to align the nodes to the left in order to have a fixed size for the content and the remaining space for the opposite content. There is a property nodeAlign in TimelineTile that is used to compute the effective node position:
double _getEffectiveNodePosition(BuildContext context) {
  if (nodeAlign == TimelineNodeAlign.start) return 0.0;
  if (nodeAlign == TimelineNodeAlign.end) return 1.0;
  var nodePosition = this.nodePosition;
  nodePosition ??= (node is TimelineTileNode)
      ? (node as TimelineTileNode).getEffectivePosition(context)
      : TimelineTheme.of(context).nodePosition;
  return nodePosition;
}
This property can't be used in TimelineTileBuilder, so what is the simplest way to define the nodeAlign?
try using nodePosition in TimelineTheme?
Great package !
Timeline.tileBuilder(
  theme: TimelineThemeData(
    connectorTheme: ConnectorThemeData(
      space: 51,
      thickness: 2.5,
      color: Colors.purple,
    ),
    nodePosition: 0,
    color: yellow,
  ),
)
Try using nodePosition in TimelineTheme?
Check how it works here (Theme)
nodePosition works, but it's a percentage between 0 and 1.
If the screen is too small then the oppositeContent width will shrink its content.
What I'm looking for is a fixed width for oppositeContent and the remaining width for the content.
Timeline.tileBuilder(theme: TimelineThemeData(connectorTheme: ConnectorThemeData(space: 51)))
Connector's space is the space between the content and the oppositeContent, it's not the width of the oppositeContent.
@adbonnin
I got it.
There is no option yet to provide the feature you are talking about.😢
Since it uses Flexible internally, I think it is necessary to explicitly limit the size of the opposite content.
ok thank you 😄
|
gharchive/issue
| 2021-03-28T08:40:28
|
2025-04-01T06:38:11.598270
|
{
"authors": [
"adbonnin",
"chulwoo-park",
"pierre-gancel"
],
"repo": "chulwoo-park/timelines",
"url": "https://github.com/chulwoo-park/timelines/issues/34",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1628008039
|
Loading references
Hi
Is there a way to load references added to the project which I am debugging?
I mean, if I use nuget packages or reference another project's dll, it won't be available when I use this program -- the command will throw an exception saying that it can't find the referenced dll.
I also had this problem with netreload. I worked around it by linking my source files directly instead of referencing projects, and instead of using NuGet I had to download the source and link it into the debugged project as well to be able to use netreload.
There seems to be the same problem with this manager too. I don't know enough about .NET to solve the problem myself.
Hi @shtirlitsDva, many thanks for your report. Do you have a sample project to help me reproduce this problem?
This issue can't be reproduced, so I will close it as resolved.
|
gharchive/issue
| 2023-03-16T18:00:08
|
2025-04-01T06:38:11.613034
|
{
"authors": [
"chuongmep",
"shtirlitsDva"
],
"repo": "chuongmep/CadAddinManager",
"url": "https://github.com/chuongmep/CadAddinManager/issues/8",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
179022699
|
Scrolling the gym info notification
When there are more than 2 Pokémon in a gym, in the notification with the gym info, scrolling down past the first 2 will only display the first 3 lines of info on the 3rd 'mon instead of the full 4 lines.
Fixed in https://github.com/chuparCh0pper/PoGoIV_xposed/commit/2f20def1380914c27b4d20a08b392be8f9e8ccae; will push to the Xposed repo in the next few days
|
gharchive/issue
| 2016-09-24T10:07:51
|
2025-04-01T06:38:11.614365
|
{
"authors": [
"MrZoolook",
"chuparCh0pper"
],
"repo": "chuparCh0pper/PoGoIV_xposed",
"url": "https://github.com/chuparCh0pper/PoGoIV_xposed/issues/29",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
98042832
|
Jbrowse should only show tracks for current alignment group
It currently shows tracks for all alignment groups, which will get confusing very fast. Not sure how easy it will be to fix this...
It also shows multiple versions of the same track if that track was for the same genome in different alignment groups.
|
gharchive/issue
| 2015-07-29T22:25:47
|
2025-04-01T06:38:11.615424
|
{
"authors": [
"dbgoodman"
],
"repo": "churchlab/millstone",
"url": "https://github.com/churchlab/millstone/issues/562",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2245858681
|
🛑 91家纺 is down
In dfab1c4, 91家纺 (https://www.91jf.com/) was down:
HTTP code: 403
Response time: 1570 ms
Resolved: 找家纺网 is back up in d3caf56 after .
|
gharchive/issue
| 2024-04-16T11:56:28
|
2025-04-01T06:38:11.620293
|
{
"authors": [
"chwang-team"
],
"repo": "chwang-team/status-hao",
"url": "https://github.com/chwang-team/status-hao/issues/1049",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2092593737
|
🛑 91家纺 is down
In d8bb01c, 91家纺 (https://www.91jf.com/) was down:
HTTP code: 403
Response time: 2688 ms
Resolved: 找家纺网 is back up in 6247d22 after .
|
gharchive/issue
| 2024-01-21T13:47:59
|
2025-04-01T06:38:11.622694
|
{
"authors": [
"chwang-team"
],
"repo": "chwang-team/status-hao",
"url": "https://github.com/chwang-team/status-hao/issues/170",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
228317155
|
Scroll resets when ngModel changes?
When I change ngModel programmatically (rather than by typing in the editor), the scroll jumps back up to the top line. Any idea what's causing this? Is it expected behavior?
Looks like setValue() is only meant for when the entire code changes, and you're supposed to use replaceRange() when only a portion of the code changes.
Will close this issue, but I would recommend documenting how to access the internal codemirror instance in your README, i.e.
<codemirror #editor [(ngModel)]="code"></codemirror>
<a (click)="editor.instance.replaceRange('hello world', {line: 0, ch: 0})">click me</a>
|
gharchive/issue
| 2017-05-12T14:54:56
|
2025-04-01T06:38:11.624225
|
{
"authors": [
"bobby-brennan"
],
"repo": "chymz/ng2-codemirror",
"url": "https://github.com/chymz/ng2-codemirror/issues/16",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1155342718
|
question about 'filter_ratio' parameter
Hi, thanks for the excellent implementation!
Could you please tell me what is the purpose of the 'filter_ratio' parameter in the sampling function? And I also note that the intermediate training results are sampled with different filter_ratio parameters. How should we interpret the results with different values? Thanks!
This is just for debugging in the training procedure. To make sure each denoising step works well.
Thanks for the clarification.
|
gharchive/issue
| 2022-03-01T13:28:58
|
2025-04-01T06:38:11.638031
|
{
"authors": [
"cientgu",
"yzxing87"
],
"repo": "cientgu/VQ-Diffusion",
"url": "https://github.com/cientgu/VQ-Diffusion/issues/7",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2680996348
|
fix imageUrl in Image module to generate images from picsum photos
replace https://loremflickr.com with https://picsum.photos/ and test if it works and generates different images for given categories
Hi. I'm new to open source contributions and would like to try working on this issue.
Could you please assign this issue to me?
Sure, assigned
Hi, I have a question.
I saw that picsum.photos doesn’t let you generate random images by category (like dogs, tech, etc.). It only gives random images or images by ID.
Do I need to make a list of image IDs for each category (like choosing specific IDs for "dogs" or "cats")? Or is there an easier way to do this?
Thanks for your help!
Let's maybe refactor it into separate methods like in fakerjs:
so picsum and flickr would each have separate methods, and imageUrl would randomly return either a picsum or a flickr URL
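Since picsum.photos has no category search, one option is to derive a stable seed from the category name; its /seed/{seed}/{width}/{height} endpoint always returns the same image for the same seed. A minimal sketch in shell (the helper name and defaults are hypothetical, not part of faker-cxx):

```shell
# Hypothetical helper: build a deterministic picsum.photos URL per category
# by using the category name as the seed.
picsum_url() {
  category="$1"; width="${2:-640}"; height="${3:-480}"
  printf 'https://picsum.photos/seed/%s/%s/%s\n' "$category" "$width" "$height"
}
picsum_url dogs 200 300
# -> https://picsum.photos/seed/dogs/200/300
```

This keeps "category" meaningful as a stable identity even though the image content itself is unrelated to the category name.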
Hi, I added the urlPicsumPhotos method to generate images from https://picsum.photos/.
Can I also change the name of the imageUrl method?
Should I add the tests in a separate commit, or is it fine to include them in the same one?
Hi, I have this problem:
git push origin feature/PicsumPhotos
ERROR: Permission to cieslarmichal/faker-cxx.git denied to bitalec.
|
gharchive/issue
| 2024-11-21T21:25:15
|
2025-04-01T06:38:11.642764
|
{
"authors": [
"bitalec",
"cieslarmichal"
],
"repo": "cieslarmichal/faker-cxx",
"url": "https://github.com/cieslarmichal/faker-cxx/issues/990",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
718434767
|
Investigate CRD controller timing out if no Cilium CRDs are present when it is created
The issue is that when the CRD controller is first created and begins watching for CRDs in the cluster, and no Cilium CRDs are present at that point (meaning cilium-operator has not registered them yet), the crd-wait-timeout is triggered even after cilium-operator registers the Cilium CRDs. The controller is unable to find them due to a K8s apiserver error indicating that v1beta1.CustomResourceDefinition was found when v1.PartialObjectMetadata was expected.
This error was observed while working on https://github.com/cilium/cilium/pull/13418. It seems to only occur on K8s versions below 1.15.
Reproduction steps:
Deploy
Delete all Cilium CRDs
Roll agent
Tail agent logs
Roll operator so that it can register Cilium CRDs
Observe controller errors in the agent logs
These errors eventually resolve themselves after the CRD wait timeout is hit (default is 5m). The timeout will fatal the agent, and upon it restarting, it is able to sync the CRDs and the agent goes on fine.
Likely cause of this is that watching a PartialObjectMetadata is not supported until K8s 1.15, which would explain the behavior of it working on 1.15 and not 1.14.
|
gharchive/issue
| 2020-10-09T21:33:47
|
2025-04-01T06:38:11.648938
|
{
"authors": [
"christarazi"
],
"repo": "cilium/cilium",
"url": "https://github.com/cilium/cilium/issues/13498",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2039365039
|
CI: Cilium E2E Upgrade: Timed out waiting for datapath updates of FQDN IP information after upgrade
From https://github.com/cilium/cilium/actions/runs/7190938554/job/19584876380
[=] Test [check-log-errors] [67/67]
[-] Scenario [check-log-errors/no-errors-in-logs]
Found "2023-12-13T05:22:11.945479031Z level=error msg=\"Timed out waiting for datapath updates of FQDN IP information; returning response\" subsys=daemon" in logs 1 times
❌ Found 1 logs matching list of errors that must be investigated:
2023-12-13T05:22:11.945479031Z level=error msg="Timed out waiting for datapath updates of FQDN IP information; returning response" subsys=daemon
Sysdump: cilium-sysdump-5-20231213-052359.7z.zip
Didn't see it in dogfooding hence I closed https://github.com/isovalent/customer-support/issues/557 which looks like the same issue, but still happening in CI.
Filed a PR that should help reduce these timeouts significantly. Found a needless regeneration on the FQDN path that caused lock contention.
Hit on Conformance GKE as well after enabling the check for log errors also there: https://github.com/cilium/cilium/actions/runs/7208580138/job/19637888894#step:21:182
Reopening, as hit in the Cilium IPsec upgrade workflow for a PR which includes the commits from https://github.com/cilium/cilium/pull/29865.
PR: https://github.com/cilium/cilium/pull/30012
Link: https://github.com/cilium/cilium/actions/runs/7278734523/job/19833561516
Another hit here.
[=] Test [check-log-errors] [71/71]
[-] Scenario [check-log-errors/no-errors-in-logs]
Found "2024-01-02T10:54:30.033766201Z level=error msg=\"Timed out waiting for datapath updates of FQDN IP information; returning response\" subsys=daemon" in logs 1 times
❌ Found 1 logs matching list of errors that must be investigated:
2024-01-02T10:54:30.033766201Z level=error msg="Timed out waiting for datapath updates of FQDN IP information; returning response" subsys=daemon
Full logs
Sysdump too big :(
Taking a look.
Aha, the backport to v1.15 hasn't merged yet. Whew.
Taking a look.
Thanks! I kept a local copy of the sysdump, let me know if you want to take a look.
@pippolo84 are these failures on the backport PR that includes this change? GitHub is not making it easy to determine.
@pippolo84 are these failures on the backport PR that includes this change? GitHub is not making it easy to determine.
Yep, the workflow has been triggered by this PR.
Oh, you were looking for your changes from https://github.com/cilium/cilium/pull/29865. Those have been backported in the PR that triggered the workflow, so it seems the issue is still there, unfortunately.
So, some basic analysis:
There are two practically-concurrent requests, for A one.one.one.one. and AAAA one.one.one.one.
The timeout message comes from the second request, in this case, AAAA.
The vast majority of that time is spent waiting for the ipcache to complete
The basic flow
10:54:29.915: Response to A one.one.one.one. is received
10:54:29.921: NameManager is updated; locks released, waiting for ipcache to process v4 addrs. ipcache immediately starts.
10:54:29.923: Response to AAAA is received.
10:54:29.927: NameManager is updated; locks released, waiting for ipcache to process v6 addrs
10:54:29.933: ipcache PolicyMap updates are complete, waiting for proxy (Envoy) updates
10:54:29.946: Proxy updates are complete, ipcache is complete, total duration ~24ms, identity allocation ~1ms, proxy update ~9ms
10:54:29.946: ipcache starts again.
10:54:29.947: DNS response is released for A request. Total time: 32ms
10:54:29.987: ipcache PolicyMap updates are complete, waiting for proxy (Envoy) updates
10:54:30.033: We give up waiting for ipcache, write DNS response back.
10:54:30.037: Proxy updates are complete, ipcache is complete, total duration 93ms, identity allocation ~20ms, proxy update ~50ms, identity allocation 15ms
So, two observations:
Identity allocation randomly takes a long time. I'm assuming this is due to GC pauses / allocation
Envoy can also take a long time.
I don't see any smoking guns; 20ms lost in a trivial map update and 50ms lost waiting for envoy, plus the rest of the FQDN process, put us over 100ms. I'll try and dig in to why this is going wrong.
Next step is to find out why.
Created a gist with a bit more information here: https://gist.github.com/squeed/752d105c569db3eb328de191d37a4ed8
I'll ask around for help.
I did some exploration with @jrajahalme's assistance, and I found that Envoy sometimes just takes 20-30ms to process updates. There are no obvious performance smoking guns.
I would like to consider bumping the timeout to 150ms in CI to see if that gets rid of flakes. All the flakes I saw were timing out in ~103ms, the threshold is 100ms. I think we're just too resource constrained here.
I've seen this error message hit on some v1.14 CI jobs, which does not have my FQDN refactor. So I suspect this has always been an issue. We may wish to consider ignoring this warning. Perhaps it is allowed to log once, but more than that is indicative of an issue. Hard to say; it is literally a threshold matter.
Couldn't users hit this as well? If so, ignoring in CI may not be enough.
FYI, this is still happening a lot in CI and currently preventing us from making some workflows Required.
Users can and do hit this in production, it has been this way for years. We don't have good data, though.
If we want to continue blocking CI, then we can keep this as we figure out how to redesign the Envoy system. Otherwise, we should skip this error message.
Sounds like we may need to allowlist in CI + document.
Sounds like we may need to allowlist in CI + document.
Agreed, I'll take care of that shortly. And figure out how we can improve Envoy.
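A minimal sketch of what such a CI allowlist check could look like (the patterns and sample log lines here are illustrative, not the actual cilium-cli implementation):

```shell
# Drop known-benign error messages before failing CI on any remaining
# level=error lines in the agent logs.
allowlist='Timed out waiting for datapath updates of FQDN IP information'
printf '%s\n' \
  'level=error msg="Timed out waiting for datapath updates of FQDN IP information; returning response"' \
  'level=error msg="something genuinely unexpected"' \
  | grep 'level=error' | grep -v "$allowlist"
# -> level=error msg="something genuinely unexpected"
```

Only errors not covered by the allowlist survive the filter, so the "no errors in logs" scenario would still catch genuinely unexpected failures.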
@squeed Created a PR on cilium-cli to add this to the list of exceptions. Can I get your confirmation that this is the appropriate way to handle this?
This is also affecting at least Conformance GKE: https://github.com/cilium/cilium/actions/runs/8556001839/job/23444773987.
@squeed If you don't have time to implement the fix, could you unassign yourself and ask for someone else in your team to handle it? :pray:
Filed https://github.com/cilium/cilium/pull/31866 to bump the timeout to 250ms. There have also been a few FQDN performance improvements:
#31454, merged Apr 3
#30897, merged Apr 3
Hopefully this cuts the noise down significantly.
The GKE occurrence above is from April 4. This one as well: https://github.com/cilium/cilium/actions/runs/8558512353/job/23453228166. Both on main. It may have reduced it (?) but it doesn't look like it was enough.
This is still happening on main: https://github.com/cilium/cilium/actions/runs/8690522661/job/23830675920.
Just hit on v1.14 as well: https://github.com/cilium/cilium/actions/runs/8703624933/job/23870214379
Requested backport to v1.15 and v1.14.
Haven't seen this in a long time. Unassigning so that stale-issue GC can take over.
|
gharchive/issue
| 2023-12-13T10:05:04
|
2025-04-01T06:38:11.670537
|
{
"authors": [
"giorio94",
"julianwiedmann",
"mathpl",
"pchaigno",
"pippolo84",
"squeed",
"tommyp1ckles"
],
"repo": "cilium/cilium",
"url": "https://github.com/cilium/cilium/issues/29846",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
302834832
|
Update docs with latest cilium status output
The following documents have cilium status, given the recent changes to the output they are all likely out of date:
$ git grep "cilium status" Documentation/
Documentation/cheatsheet.rst: cilium status
Documentation/cmdref/cilium.md:* [cilium status](cilium_status.html) - Display status of daemon
Documentation/cmdref/cilium_status.md:## cilium status
Documentation/cmdref/cilium_status.md:cilium status
Documentation/contributing.rst: $ service cilium status
Documentation/contributing.rst: $ cilium status
Documentation/contributing.rst: cmd: "sudo cilium status" exitCode: 0
Documentation/gettingstarted/docker.rst:``cilium status``:
Documentation/gettingstarted/docker.rst: $ cilium status
Documentation/gettingstarted/mesos.rst: $ cilium status
Documentation/troubleshooting.rst:health. This is achieved by running the ``cilium status`` command.
Documentation/troubleshooting.rst:``cilium status`` on all cluster nodes with ease. Download the
Documentation/troubleshooting.rst:... and run ``cilium status`` on all nodes:
Documentation/troubleshooting.rst: $ ./k8s-cilium-exec.sh cilium status
Documentation/troubleshooting.rst: $ cilium status
Need to run through each of the documents, attempt to reproduce and run cilium status to grab expected output and update the documents.
It would be nice to automatically require documentation updates to go along with CLI updates - I'm not sure how this could be automated, though :/
Ended up not being able to pick this up; un-assigning myself.
|
gharchive/issue
| 2018-03-06T19:31:05
|
2025-04-01T06:38:11.673525
|
{
"authors": [
"ianvernon",
"joestringer"
],
"repo": "cilium/cilium",
"url": "https://github.com/cilium/cilium/issues/3031",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2456599581
|
Doc Bug: Talos install instructions fail to work, may need note adjustment about kubeprism.
Is there an existing issue for this?
[X] I have searched the existing issues
Version
higher than v1.16.0 and lower than v1.17.0
What happened?
Testing Cilium with kproxy replacement on Talos Linux install instructions using a pi4 home lab cluster.
Cilium install failed using instructions as written.
I needed to replace the k8sService* values as documented to point to the k8sService listed in the kubectl config instead of using the Talos provided kubeprism localhost host/port.
How can we reproduce the issue?
create Talos Linux install images for pi4 using metal-arm64.raw as documented in Talos linux upstream docs.
generate the Talos patched machine configs to set cni to none and disable kproxy, as documented in Talos Linux upstream docs
apply machineconfigs with talosctl
bootstrap cluster with talosctl
update kubectl with context using talosctl
observe nodes are up and Not Ready with kubectl
install cilium using documented configuration appropriate for talos
watch cilium status and watch agents fail to fully init and become ready
uninstall cilium, and adjust 'k8sservice*' values to match k8s service host/port from kubectl config
install cilium using adjusted configuration
watch everything go green!!!!
Cilium Version
1.16.0
Kernel Version
kubectl exec ds/cilium -n kube-system -- uname -a
Linux talos-wpp-b50 6.6.28-talos #1 SMP Thu Apr 18 13:43:02 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
Kubernetes Version
Client Version: v1.28.2
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.27.4
Regression
Not a regression; the Talos Linux instructions are new as of 1.16, I think.
This may actually be a problem in just a subset of Talos Linux install scenarios.
Sysdump
No response
Relevant log output
No response
Anything else?
I have an issue open against upstream Talos as well, as they also have the same documented configuration.
It's likely the fix here will be a note about possibly needing to adjust the k8sService* configs for some configurations.
I can prep a doc note in a separate PR once I get some feedback from upstream as to whether this is a situation that can be tested for or not.
Cilium Users Document
[X] Are you a user of Cilium? Please add yourself to the Users doc
Code of Conduct
[X] I agree to follow this project's Code of Conduct
I can prep a doc note in a separate PR once I get some feedback from upstream as to whether this is a situation that can be tested for or not.
Thank you! Making it obvious who I would assign this important issue to :bow:
|
gharchive/issue
| 2024-08-08T20:48:24
|
2025-04-01T06:38:11.682457
|
{
"authors": [
"jspaleta",
"julianwiedmann"
],
"repo": "cilium/cilium",
"url": "https://github.com/cilium/cilium/issues/34253",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2554669671
|
cilium agent pod restart causes 3+ minute outage due to timeout waiting for pre-existing resources
Is there an existing issue for this?
[X] I have searched the existing issues
Version
equal or higher than v1.16.0 and lower than v1.17.0
What happened?
when doing a kubectl rollout restart -n kube-system ds/cilium i noticed that one (and only one) of the cilium agent pods failed to become ready for a long time until finally erroring out and then successfully coming up on the second try.
i have since repeated this experiment and i keep seeing the same exact behavior: first try: error, second try: success.
cilium-97lhz 0/1 Init:1/6 0 1s
cilium-97lhz 0/1 Init:2/6 0 2s
cilium-97lhz 0/1 Init:3/6 0 3s
cilium-97lhz 0/1 Init:4/6 0 4s
cilium-97lhz 0/1 Init:5/6 0 5s
cilium-97lhz 0/1 PodInitializing 0 6s
cilium-97lhz 0/1 Running 0 7s
cilium-97lhz 0/1 Error 0 3m16s
cilium-97lhz 0/1 Running 1 (2s ago) 3m17s
cilium-97lhz 0/1 Running 1 (12s ago) 3m27s
cilium-97lhz 1/1 Running 1 (12s ago) 3m27s
i am doing this in a very controlled test environment. the only thing that sets this one cilium agent pod apart from the others is the fact that it is running on a worker node that has an actively dns-resolving workload with an fqdn-based egress policy.
i.e. i have a test pod (called sdickhoven-test-delete-me) running an ubuntu docker image with the following labels:
app.kubernetes.io/name: sdickhoven-test-delete-me
networking.everquote.com/dns-snooping: enabled
because of the above labels, the pod is selected by the following cilium network policies:
apiVersion: cilium.io/v2
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: dns-snooping
spec:
  endpointSelector:
    matchLabels:
      k8s:networking.everquote.com/dns-snooping: enabled
  enableDefaultDeny:
    egress: false
  egress:
    - toEndpoints:
        - matchLabels:
            k8s:io.kubernetes.pod.namespace: kube-system
            k8s:k8s-app: kube-dns
      toPorts:
        - ports:
            - port: "53"
              protocol: UDP
            - port: "53"
              protocol: TCP
          rules:
            dns:
              - matchPattern: "*"
and
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: egress-to-google
spec:
  endpointSelector:
    matchLabels:
      k8s:app.kubernetes.io/name: sdickhoven-test-delete-me
  egress:
    - toFQDNs:
        - matchName: google.com
        - matchPattern: "*.google.com"
        - matchPattern: "*.ubuntu.com"
before i restart the cilium agent pods i start the following loop in the sdickhoven-test-delete-me pod:
while :; do curl http://google.com/; sleep 1; done
(i.e. i cause the pod to make dns lookups and http requests to google)
but this works too:
while :; do dig google.com; sleep 1; done
when the cilium agent pod restarts i see the following for 3+ minutes:
...
curl: (6) Could not resolve host: google.com
curl: (6) Could not resolve host: google.com
curl: (6) Could not resolve host: google.com
curl: (6) Could not resolve host: google.com
curl: (6) Could not resolve host: google.com
curl: (6) Could not resolve host: google.com
curl: (6) Could not resolve host: google.com
curl: (6) Could not resolve host: google.com
curl: (6) Could not resolve host: google.com
...
when looking at the cilium agent logs, the last log message i see before the pod errors out and starts over is
Timed out waiting for pre-existing resources to be received; exiting
i then looked for the code that is responsible for the above log message:
https://github.com/cilium/cilium/blob/v1.16.2/pkg/k8s/watchers/watcher.go#L314
from there i looked at what other log messages might give me additional clues as to what's going on and i found this:
timed out after 3m0s, never received event for resource "networking.k8s.io/v1::NetworkPolicy"
this timeout does not seem to have anything to do with the specific resource that can't be synced. when running this test multiple times, i also get
timed out after 3m0s, never received event for resource "cilium/v2::CiliumNetworkPolicy"
and
timed out after 3m0s, never received event for resource "cilium/v2alpha1::CiliumCIDRGroup"
and, sure enough, when i look at the successful cache syncs, i see only the following resources on the first try:
EndpointSliceOrEndpoint
core/v1::Namespace
core/v1::Pods
core/v1::Service
cilium/v2::CiliumEndpoint
cilium/v2::CiliumNode
and then these resources on the second try:
EndpointSliceOrEndpoint
core/v1::Namespace
core/v1::Pods
core/v1::Service
cilium/v2::CiliumEndpoint
cilium/v2::CiliumNode
cilium/v2::CiliumNetworkPolicy
cilium/v2::CiliumClusterwideNetworkPolicy
cilium/v2alpha1::CiliumCIDRGroup
networking.k8s.io/v1::NetworkPolicy
resource "<one_of_the_above>" cache has synced, stopping timeout watcher
what is perhaps noteworthy is that i always get the same 6 resources synced successfully on the first try (see above)... regardless of which resource fails to sync. 🤔
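One way to compare the synced sets across attempts is to pull the resource names out of the agent logs (the heredoc below stands in for real log lines, whose exact formatting may differ):

```shell
# Extract the names of resources that reported a successful cache sync.
cat > /tmp/agent.log <<'EOF'
level=info msg=resource "core/v1::Pods" cache has synced, stopping timeout watcher
level=info msg=resource "cilium/v2::CiliumNode" cache has synced, stopping timeout watcher
level=error msg=timed out after 3m0s, never received event for resource "networking.k8s.io/v1::NetworkPolicy"
EOF
grep -o 'resource "[^"]*" cache has synced' /tmp/agent.log | cut -d'"' -f2
# -> core/v1::Pods
#    cilium/v2::CiliumNode
```

Diffing that output between the first and second start attempt makes the consistently missing policy-related resources stand out immediately.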
as i said, all other cilium agent pods that start up on a node that doesn't have a pod with an l7 egress policy (that is actively being "exercised") don't have any issues starting up.
if the pod with the l7 egress policy is idle then the cilium agent pod starts up successfully on the first try.
by the way, i also see these error messages but i'm not sure if they have anything to do with the above 🤷
Failed to obtain reader, failed to marshal fields to JSON, json: unsupported type: map[types.Identity]*types.Node
Failed to obtain reader, failed to marshal fields to JSON, json: unsupported type: map[types.Key]policy.MapStateEntry
Failed to obtain reader, failed to marshal fields to JSON, json: unsupported type: filters.FilterFunc
How can we reproduce the issue?
i'm running cilium on eks 1.29 with the following helm config
cni:
  chainingMode: aws-cni
  exclusive: false
enableIPv4Masquerade: false
routingMode: native
endpointRoutes:
  enabled: true
vpc cni v1.18.3-eksbuild.2 with the following config
{
  "enableNetworkPolicy": "false",
  "env": {
    "AWS_VPC_K8S_CNI_EXTERNALSNAT": "true",
    "ENABLE_POD_ENI": "true",
    "POD_SECURITY_GROUP_ENFORCING_MODE": "standard"
  }
}
kube-proxy v1.29.7-eksbuild.2
coredns v1.11.1-eksbuild.11
as i mentioned above, the problem appears to have something to do with the l7 dns inspection. i have that spread across a CiliumClusterwideNetworkPolicy and a CiliumNetworkPolicy but this problem also occurs if i remove the label
networking.everquote.com/dns-snooping: enabled
and use a single CiliumNetworkPolicy like this:
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: egress-to-google
spec:
  endpointSelector:
    matchLabels:
      k8s:app.kubernetes.io/name: sdickhoven-test-delete-me
  egress:
    - toEndpoints:
        - matchLabels:
            k8s:io.kubernetes.pod.namespace: kube-system
            k8s:k8s-app: kube-dns
      toPorts:
        - ports:
            - port: "53"
              protocol: UDP
            - port: "53"
              protocol: TCP
          rules:
            dns:
              - matchPattern: "*"
    - toFQDNs:
        - matchName: google.com
        - matchPattern: "*.google.com"
        - matchPattern: "*.ubuntu.com"
the pod selected by the above policy must be actively exercising the policy in order for this problem to occur.
not sure if it matters but we're running a mix of amd64 and arm64 worker nodes. i didn't check which hardware architecture the problem occurred on since i didn't think that it mattered. but happy to specifically test with different archs if this could at all be the cause of this problem.
Cilium Version
$ cilium version
cilium-cli: v0.16.18 compiled with go1.23.1 on darwin/arm64
cilium image (default): v1.16.1
cilium image (stable): v1.16.2
cilium image (running): unknown. Unable to obtain cilium version. Reason: release: not found
not sure why the cilium cli is not reporting the running version. 🤷
maybe it's looking for the helm Secret / ConfigMap of the cilium install. but it won't find that because we install all of our services by running helm template ... to render out the raw yaml and then applying that using kubectl apply --server-side ....
$ kubectl get ds -n kube-system cilium -o jsonpath="{.spec.template.spec.containers[0].image}"
111111111111.dkr.ecr.us-west-2.amazonaws.com/quay/cilium/cilium:v1.16.2@sha256:4386a8580d8d86934908eea022b0523f812e6a542f30a86a47edd8bed90d51ea
$ kubectl get deploy -n kube-system cilium-operator -o jsonpath="{.spec.template.spec.containers[0].image}"
111111111111.dkr.ecr.us-west-2.amazonaws.com/quay/cilium/operator-generic:v1.16.2@sha256:cccfd3b886d52cb132c06acca8ca559f0fce91a6bd99016219b1a81fdbc4813a
$ kubectl get ds -n kube-system cilium-envoy -o jsonpath="{.spec.template.spec.containers[0].image}"
111111111111.dkr.ecr.us-west-2.amazonaws.com/quay/cilium/cilium-envoy:v1.29.9-1726784081-a90146d13b4cd7d168d573396ccf2b3db5a3b047@sha256:9762041c3760de226a8b00cc12f27dacc28b7691ea926748f9b5c18862db503f
(using amazon ecr pull-through cache to pull images from quay.io... redacted account number)
Kernel Version
6.1.109-118.189.amzn2023.aarch64
6.1.109-118.189.amzn2023.x86_64
Kubernetes Version
{
  "major": "1",
  "minor": "29+",
  "gitVersion": "v1.29.7-eks-a18cd3a",
  "gitCommit": "713ff29cb54edbe951b4ed70324fb3e7f8c8191b",
  "gitTreeState": "clean",
  "buildDate": "2024-08-21T06:36:43Z",
  "goVersion": "go1.22.5",
  "compiler": "gc",
  "platform": "linux/amd64"
}
Regression
yes!
i just tested with cilium 1.15.9 and this issue does not exist in that version.
i also noticed that agent startup in 1.15.9 is much faster than in 1.16.2 (about half the time). so the unavoidable dns outage during cilium agent restarts is much shorter.
Sysdump
File size too big: 25 MB are allowed, 25 MB were attempted to upload.
Relevant log output
see above.
Anything else?
No response
Cilium Users Document
[x] Are you a user of Cilium? Please add yourself to the Users doc
Code of Conduct
[X] I agree to follow this project's Code of Conduct
just for kicks i tried enabling tproxy to see if that has an effect on the above behavior.
bpf:
  tproxy: true
it does not. 😞
another data point: if i set
envoy:
  enabled: false
the cilium agent pod always hangs on nodes with a pod that is selected by an l7 dns network policy... not just when that pod is actively dns-resolving.
I have the same problem when using L7 HTTP policies
problem still exists in cilium 1.16.3.
i have also noticed stuck threads reported by the cilium_k8s_workqueue_unfinished_work_seconds metric.
some pods report values >30,000. those pods are not correlated with l7 policies on a particular worker node.
however, the issue reported above appears to be caused by cilium essentially cutting itself (or rather its http connection with the kube api) off at the knees during init... e.g. by resetting conntrack or something along those lines.
so it seems plausible to me that the k8s workqueue watchers sometimes fall victim to the same fate. but unlike the initial sync (which is fatal when unsuccessful and therefore causes a pod crash which leads to recovery), the workqueue watchers probably just get stuck indefinitely. 🤷
i just removed kube-proxy and switched my config to
kubeProxyReplacement: true
k8sServiceHost: AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA.gr7.us-east-1.eks.amazonaws.com
k8sServicePort: 443
(hostname redacted)
same exact problem. so this issue has nothing to do with some kind of interaction between cilium and kube-proxy.
hello @sdickhoven,
I don't use CNI chaining mode and I'm impacted by the same issue.
I don't get the issue with Cilium 1.15.6 and I didn't try 1.15.10.
Best Regards.
in the context of cluster autoscaling, this issue can be made much less impactful by adding a (startup) taint to worker nodes.
e.g. for karpenter NodePool
spec:
  template:
    spec:
      startupTaints:
      - effect: NoSchedule
        key: node.cilium.io/agent-not-ready
        value: init
the cilium agent will then remove the taint once it has fully initialized.
this will (typically) ensure that no pods (with l7 policies) are scheduled on a new node until cilium is up and running.
...assuming that pods with l7 policies don't tolerate this taint.
Hi, all,
Thanks for the clear bug report. There were some changes in v1.16 in some of the fine details of waiting for k8s objects to synchronize. This was, indeed, as part of a larger FQDN policy refactor. We're taking a look.
@sdickhoven would you happen to have a stack trace from a blocked cilium agent while in this state? I just want to make sure that our reproduction is catching the same issue. We include gops in the Cilium image, so this should be simple enough.
@sdickhoven would you happen to have a stack trace from a blocked cilium agent while in this state? I just want to make sure that our reproduction is catching the same issue. We include gops in the Cilium image, so this should be simple enough.
hi @squeed 👋
i don't have one on hand but i'm happy to create one.
i do have a stack trace for a cilium agent with a "stuck thread" (i.e. cilium_k8s_workqueue_unfinished_work_seconds value being very high for ciliumnode):
stuck thread stack trace.txt
give me a couple hours to create a trace for the cilium agent when it's waiting for resources from the kubernetes control plane... ⏳
ok. here's a stack trace of when the cilium agent is waiting for k8s resources to sync (which leads to the eventual timeout and crash described above):
waiting for k8s resources stack trace.txt
OK, perfect. https://github.com/cilium/cilium/pull/35890 will fix this. It should be part of v1.16.4.
As an aside, putting up https://github.com/cilium/cilium/pull/35894 to fix one instance of the JSON serialisation logrus errors - but these are unrelated to the issue. It seems we are lacking tests for the JSON logging setup.
Failed to obtain reader, failed to marshal fields to JSON, json: unsupported type: map[types.Identity]*types.Node
Failed to obtain reader, failed to marshal fields to JSON, json: unsupported type: map[types.Key]policy.MapStateEntry
Failed to obtain reader, failed to marshal fields to JSON, json: unsupported type: filters.FilterFunc
I found the following unrelated race condition:
WARNING: DATA RACE
Read at 0x00c002689890 by goroutine 736:
github.com/cilium/cilium/pkg/datapath/iptables.(*Manager).removeRules()
/go/src/github.com/cilium/cilium/pkg/datapath/iptables/iptables.go:479 +0x217
github.com/cilium/cilium/pkg/datapath/iptables.(*Manager).doInstallRules()
/go/src/github.com/cilium/cilium/pkg/datapath/iptables/iptables.go:1471 +0xd2
github.com/cilium/cilium/pkg/datapath/iptables.(*Manager).doInstallRules-fm()
<autogenerated>:1 +0x87
github.com/cilium/cilium/pkg/datapath/iptables.reconciliationLoop()
/go/src/github.com/cilium/cilium/pkg/datapath/iptables/reconciler.go:149 +0x959
github.com/cilium/cilium/pkg/datapath/iptables.newIptablesManager.func2()
/go/src/github.com/cilium/cilium/pkg/datapath/iptables/iptables.go:345 +0x21a
github.com/cilium/hive/job.(*jobOneShot).start()
/go/src/github.com/cilium/cilium/vendor/github.com/cilium/hive/job/oneshot.go:136 +0x847
github.com/cilium/hive/job.(*group).Start.func1.gowrap1()
/go/src/github.com/cilium/cilium/vendor/github.com/cilium/hive/job/job.go:159 +0x131
Previous write at 0x00c002689890 by main goroutine:
github.com/cilium/cilium/pkg/datapath/iptables.(*Manager).Start()
/go/src/github.com/cilium/cilium/pkg/datapath/iptables/iptables.go:405 +0x9ae
github.com/cilium/hive/cell.(*DefaultLifecycle).Start()
/go/src/github.com/cilium/cilium/vendor/github.com/cilium/hive/cell/lifecycle.go:107 +0x46a
github.com/cilium/hive.(*Hive).Start()
/go/src/github.com/cilium/cilium/vendor/github.com/cilium/hive/hive.go:339 +0x192
github.com/cilium/hive.(*Hive).Run()
/go/src/github.com/cilium/cilium/vendor/github.com/cilium/hive/hive.go:229 +0xc9
github.com/cilium/cilium/daemon/cmd.NewAgentCmd.func1()
/go/src/github.com/cilium/cilium/daemon/cmd/root.go:40 +0x28f
github.com/spf13/cobra.(*Command).execute()
/go/src/github.com/cilium/cilium/vendor/github.com/spf13/cobra/command.go:989 +0x1185
github.com/spf13/cobra.(*Command).ExecuteC()
/go/src/github.com/cilium/cilium/vendor/github.com/spf13/cobra/command.go:1117 +0x657
github.com/spf13/cobra.(*Command).Execute()
/go/src/github.com/cilium/cilium/vendor/github.com/spf13/cobra/command.go:1041 +0x2e
github.com/cilium/cilium/daemon/cmd.Execute()
/go/src/github.com/cilium/cilium/daemon/cmd/root.go:80 +0x12
main.main()
/go/src/github.com/cilium/cilium/daemon/main.go:14 +0xa9
Hey @foyerunix , thanks for reporting this. 🙏
I've opened https://github.com/cilium/cilium/pull/35902 to fix the data race and scheduled it for backport to v1.16.
The policy hang is fixed in main and v1.16 tip. It should be included in the next release, v1.16.4.
|
gharchive/issue
| 2024-09-29T02:50:35
|
2025-04-01T06:38:11.718019
|
{
"authors": [
"SagarChandra07",
"bimmlerd",
"foyerunix",
"maxpain",
"pippolo84",
"sdickhoven",
"squeed"
],
"repo": "cilium/cilium",
"url": "https://github.com/cilium/cilium/issues/35080",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2582033259
|
Hubble UI is not opening in browser
I have installed cilium on an EKS cluster. There are a couple of issues.
1: Hubble UI is not accessible after enabling port forwarding. Below are the frontend & backend logs of Hubble UI. Earlier it was visible, but service maps were not loading.
Frontend
[11/Oct/2024:19:07:45 +0000] "GET / HTTP/1.1" 200 702 "-" "kube-probe/1.30+" "-"
[11/Oct/2024:19:07:55 +0000] "GET / HTTP/1.1" 200 702 "-" "kube-probe/1.30+" "-"
[11/Oct/2024:19:08:05 +0000] "GET / HTTP/1.1" 200 702 "-" "kube-probe/1.30+" "-"
[11/Oct/2024:19:08:15 +0000] "GET / HTTP/1.1" 200 702 "-" "kube-probe/1.30+" "-"
[11/Oct/2024:19:08:25 +0000] "GET / HTTP/1.1" 200 702 "-" "kube-probe/1.30+" "-"
[11/Oct/2024:19:08:35 +0000] "GET / HTTP/1.1" 200 702 "-" "kube-probe/1.30+" "-"
[11/Oct/2024:19:08:45 +0000] "GET / HTTP/1.1" 200 702 "-" "kube-probe/1.30+" "-"
[11/Oct/2024:19:08:55 +0000] "GET / HTTP/1.1" 200 702 "-" "kube-probe/1.30+" "-"
[11/Oct/2024:19:09:05 +0000] "GET / HTTP/1.1" 200 702 "-" "kube-probe/1.30+" "-"
[11/Oct/2024:19:09:15 +0000] "GET / HTTP/1.1" 200 702 "-" "kube-probe/1.30+" "-"
ips are masked in logs
Backend
time="2024-10-11T18:39:19Z" level=warning msg="using fallback value for env var" fallback=false var=GOPS_ENABLED
time="2024-10-11T18:39:19Z" level=warning msg="using fallback value for env var" fallback=false var=TLS_TO_RELAY_ENABLED
time="2024-10-11T18:39:19Z" level=info msg="TLS to hubble-relay is not enabled"
time="2024-10-11T18:39:19Z" level=warning msg="using fallback value for env var" fallback=false var=CORS_ENABLED
time="2024-10-11T18:39:19Z" level=warning msg="using fallback value for env var" fallback=false var=E2E_TEST_MODE
time="2024-10-11T18:39:19Z" level=warning msg="using fallback value for env var" fallback= var=E2E_LOGFILES_BASEPATH
time="2024-10-11T18:39:19Z" level=info msg="running ListenAndServe" apipath=/api component=APIServer port=8090
2: Also, during installation the hubble-relay pod does not come online. It looks for the peerService value in hubble-relay-config, which points at hubble-peer.kube-system.svc.cluster.local:443, but that name does not resolve locally even though the cluster domain is already set to cluster.local in the Helm chart. When we update the peerService value with the hubble-peer IP and port taken from the Endpoints object and restart the pod, it comes online.
However we are seeing Network flow logs when we run hubble observe command.
This is a good opportunity to use Hubble to troubleshoot :-). Fortunately, you can do this directly within the Cilium agent pod. There are some instructions here. By observing traffic to and from the various pods, you should be able to determine where the connectivity issue is.
It is not a connectivity issue; it is a problem with the Hubble UI, which shows a blank screen. We have used port forwarding and it is not accessible in the browser, yet when we run curl http://localhost:12000 it works fine.
Are there any javascript errors in the console?
No, I am assuming that hubble relay is not properly forwarding data to hubble UI, as per the logs. Please find the logs of hubble relay & hubble UI below. IP addresses are masked in the logs. Please suggest if I need to check something else.
Hubble UI Backend Container logs
time="2024-10-11T18:39:19Z" level=warning msg="using fallback value for env var" fallback=false var=GOPS_ENABLED
time="2024-10-11T18:39:19Z" level=warning msg="using fallback value for env var" fallback=false var=TLS_TO_RELAY_ENABLED
time="2024-10-11T18:39:19Z" level=info msg="TLS to hubble-relay is not enabled"
time="2024-10-11T18:39:19Z" level=warning msg="using fallback value for env var" fallback=false var=CORS_ENABLED
time="2024-10-11T18:39:19Z" level=warning msg="using fallback value for env var" fallback=false var=E2E_TEST_MODE
time="2024-10-11T18:39:19Z" level=warning msg="using fallback value for env var" fallback= var=E2E_LOGFILES_BASEPATH
time="2024-10-11T18:39:19Z" level=info msg="running ListenAndServe" apipath=/api component=APIServer port=8090
Hubble Relay logs
time="2024-10-11T18:44:37Z" level=info msg="Starting gRPC health server..." addr=":4222" subsys=hubble-relay
time="2024-10-11T18:44:37Z" level=info msg="Starting gRPC server..." options="{peerTarget:xx.xx.xxx.xxx:4244 dialTimeout:30000000000 retryTimeout:30000000000 listenAddress::4245 healthListenAddress::4222 metricsListenAddress: log:0xc00043c1c0 serverTLSConfig: insecureServer:true clientTLSConfig:0xc0000da378 clusterName:default insecureClient:false observerOptions:[0x22173e0 0x22174c0] grpcMetrics: grpcUnaryInterceptors:[] grpcStreamInterceptors:[]}" subsys=hubble-relay
time="2024-10-11T18:44:37Z" level=info msg="Received peer change notification" change notification="name:"ip-xx.xx.xxx.xxx.ec2.internal" address:"xx.xx.xxx.xxx:4244" type:PEER_ADDED tls:{server_name:"ip-xx.xx.xxx.xxx-ec2-internal.default.hubble-grpc.cilium.io"}" subsys=hubble-relay
time="2024-10-11T18:44:37Z" level=info msg="Received peer change notification" change notification="name:"ip-xx.xx.xxx.xxx.ec2.internal" address:"xx.xx.xxx.xxx:4244" type:PEER_ADDED tls:{server_name:"ip-xx.xx.xxx.xxx-ec2-internal.default.hubble-grpc.cilium.io"}" subsys=hubble-relay
time="2024-10-11T18:44:37Z" level=info msg="Received peer change notification" change notification="name:"ip-xx.xx.xxx.xxx.ec2.internal" address:"xx.xx.xxx.xxx:4244" type:PEER_ADDED tls:{server_name:"ip-xx.xx.xxx.xxx-ec2-internal.default.hubble-grpc.cilium.io"}" subsys=hubble-relay
time="2024-10-11T18:44:37Z" level=info msg="Received peer change notification" change notification="name:"ip-xx.xx.xxx.xxx.ec2.internal" address:"xx.xx.xxx.xxx:4244" type:PEER_ADDED tls:{server_name:"ip-xx.xx.xxx.xxx-ec2-internal.default.hubble-grpc.cilium.io"}" subsys=hubble-relay
time="2024-10-11T18:44:37Z" level=info msg="Received peer change notification" change notification="name:"ip-xx.xx.xxx.xxx.ec2.internal" address:"xx.xx.xxx.xxx:4244" type:PEER_ADDED tls:{server_name:"ip-xx.xx.xxx.xxx-ec2-internal.default.hubble-grpc.cilium.io"}" subsys=hubble-relay
time="2024-10-11T18:44:37Z" level=info msg="Received peer change notification" change notification="name:"ip-xx.xx.xxx.xxx.ec2.internal" address:"xx.xx.xxx.xxx:4244" type:PEER_ADDED tls:{server_name:"ip-xx.xx.xxx.xxx-ec2-internal.default.hubble-grpc.cilium.io"}" subsys=hubble-relay
time="2024-10-11T18:44:37Z" level=info msg=Connecting address="xx.xx.xxx.xxx:4244" hubble-tls=true peer=ip-xx.xx.xxx.xxx.ec2.internal subsys=hubble-relay
time="2024-10-11T18:44:37Z" level=info msg=Connecting address="xx.xx.xxx.xxx:4244" hubble-tls=true peer=ip-xx.xx.xxx.xxx.ec2.internal subsys=hubble-relay
time="2024-10-11T18:44:37Z" level=info msg=Connecting address="xx.xx.xxx.xxx:4244" hubble-tls=true peer=ip-xx.xx.xxx.xxx.ec2.internal subsys=hubble-relay
time="2024-10-11T18:44:37Z" level=info msg=Connecting address="xx.xx.xxx.xxx:4244" hubble-tls=true peer=ip-xx.xx.xxx.xxx.ec2.internal subsys=hubble-relay
time="2024-10-11T18:44:37Z" level=info msg=Connecting address="xx.xx.xxx.xxx:4244" hubble-tls=true peer=ip-xx.xx.xxx.xxx.ec2.internal subsys=hubble-relay
time="2024-10-11T18:44:37Z" level=info msg=Connecting address="xx.xx.xxx.xxx:4244" hubble-tls=true peer=ip-xx.xx.xxx.xxx.ec2.internal subsys=hubble-relay
time="2024-10-11T18:44:37Z" level=info msg=Connected address="xx.xx.xxx.xxx5:4244" hubble-tls=true peer=ip-xx.xx.xxx.xxx.ec2.internal subsys=hubble-relay
time="2024-10-11T18:44:37Z" level=info msg=Connected address="xx.xx.xxx.xxx:4244" hubble-tls=true peer=ip-xx.xx.xxx.xxx.ec2.internal subsys=hubble-relay
time="2024-10-11T18:44:37Z" level=info msg=Connected address="xx.xx.xxx.xxx:4244" hubble-tls=true peer=ip-xx.xx.xxx.xxx.ec2.internal subsys=hubble-relay
time="2024-10-11T18:44:37Z" level=info msg=Connected address="xx.xx.xxx.xxx:4244" hubble-tls=true peer=ip-xx.xx.xxx.xxx.ec2.internal subsys=hubble-relay
time="2024-10-11T18:44:37Z" level=info msg=Connected address="xx.xx.xxx.xxx:4244" hubble-tls=true peer=ip-xx.xx.xxx.xxx.ec2.internal subsys=hubble-relay
time="2024-10-11T18:44:37Z" level=info msg=Connected address="xx.xx.xxx.xxx:4244" hubble-tls=true peer=ip-xx.xx.xxx.xxx.ec2.internal subsys=hubble-relay
|
gharchive/issue
| 2024-10-11T19:17:49
|
2025-04-01T06:38:11.748100
|
{
"authors": [
"Piyush6042",
"squeed"
],
"repo": "cilium/cilium",
"url": "https://github.com/cilium/cilium/issues/35369",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
375588998
|
Update external documentation with Cilium installation steps
update cilium with v1.6.0
(@aanm did it for v1.4.0)
(@brb did it for v1.5.0)
[ ] https://kubernetes.io/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy/
[ ] https://kubernetes.io/docs/concepts/cluster-administration/networking/
[ ] https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/
[ ] https://kubernetes.io/docs/concepts/cluster-administration/addons/
[ ] https://github.com/kubermatic/kubeone/issues/462
Updated.
These days we track these from the release issues right? This issue falls far enough down my queue that I don't end up looking at it.
|
gharchive/issue
| 2018-10-30T16:46:41
|
2025-04-01T06:38:11.752573
|
{
"authors": [
"aanm",
"brb",
"joestringer"
],
"repo": "cilium/cilium",
"url": "https://github.com/cilium/cilium/issues/6116",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
861034930
|
test: Skip K8sPolicy on GKE and 4.19
Running K8sPolicies on those CI jobs is not expected to increase coverage, so let's disable them to reduce cost.
test-me-please
test-me-please
This was fairly effective (~27% reduction) so marking for backports to v1.8 and v1.9.
|
gharchive/pull-request
| 2021-04-19T08:14:28
|
2025-04-01T06:38:11.754491
|
{
"authors": [
"pchaigno"
],
"repo": "cilium/cilium",
"url": "https://github.com/cilium/cilium/pull/15762",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2311282654
|
policy: Add Port Range Support for Policies Part 2/3
This PR prepares the policy engine for adding port ranges
by enabling the underlying userspace cache to calculate
insertion, deletion, and lookups with port ranges, as well
as adding unit tests to ensure that the logic works. It does
not add support for adding policy port ranges at the API
level that will be addressed in the final PR.
The Policy CRD is modified by this PR without
supporting port ranges at the policy repository level
(this will be added in the final PR). This has to be done
because the "PortProtocol" struct is shared by both
the CRD (aka the API level) and the L4Filter struct
(aka the cache level).
See commits for details.
/test
/test
|
gharchive/pull-request
| 2024-05-22T19:03:07
|
2025-04-01T06:38:11.756988
|
{
"authors": [
"nathanjsweet"
],
"repo": "cilium/cilium",
"url": "https://github.com/cilium/cilium/pull/32675",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2564422666
|
hubble: add printer for lost events
Currently hubble can't handle lost events, which results in a large amount of output on CI runs [1]. This commit implements the missing functionality while maintaining the same format as other types of messages.
[1]
2024-10-01T05:27:10.3601309Z unknown response type: &{LostEvents:source:HUBBLE_RING_BUFFER num_events_lost:1}
2024-10-01T05:27:10.3601823Z unknown response type: &{LostEvents:source:HUBBLE_RING_BUFFER num_events_lost:1}
2024-10-01T05:27:10.3602406Z unknown response type: &{LostEvents:source:HUBBLE_RING_BUFFER num_events_lost:1}
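The shape of the fix can be sketched as an extra branch in the response-printing switch. The types and field names below are illustrative stand-ins, not the actual hubble observer API:

```go
package main

import "fmt"

// LostEvent is an illustrative stand-in for hubble's lost-event payload.
type LostEvent struct {
	Source        string
	NumEventsLost uint64
}

// GetFlowsResponse is an illustrative stand-in for the observer response:
// it may carry a flow, a lost-event notification, or something unknown.
type GetFlowsResponse struct {
	Flow       *string
	LostEvents []*LostEvent
}

// printResponse handles lost-event notifications explicitly instead of
// falling through to the noisy "unknown response type" branch.
func printResponse(res *GetFlowsResponse) string {
	switch {
	case res.Flow != nil:
		return fmt.Sprintf("flow: %s", *res.Flow)
	case len(res.LostEvents) > 0:
		e := res.LostEvents[0]
		return fmt.Sprintf("%d events lost from %s", e.NumEventsLost, e.Source)
	default:
		return fmt.Sprintf("unknown response type: %v", res)
	}
}

func main() {
	res := &GetFlowsResponse{
		LostEvents: []*LostEvent{{Source: "HUBBLE_RING_BUFFER", NumEventsLost: 1}},
	}
	fmt.Println(printResponse(res))
}
```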
/test
I don't think "Fixes:" is correct. Yes, the printer will no longer show an unknown event, but it doesn't solve that there are hundreds of thousands of lost events coming in - these are from the hubble ring buffer, so something sus is going on.
/test
|
gharchive/pull-request
| 2024-10-03T16:03:03
|
2025-04-01T06:38:11.759157
|
{
"authors": [
"aanm",
"bimmlerd"
],
"repo": "cilium/cilium",
"url": "https://github.com/cilium/cilium/pull/35208",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
347544631
|
Prepare for release v1.1.2
c5226b6d41bbdee661663e8b716d502e256ba6d6 prepared for release v1.1.2, but the
Cilium team decided to backport a few more fixes and fold them into this
release; since v1.1.2 was not officially released via GitHub nor on Slack,
we can do this.
Signed-off-by: Ian Vernon ian@cilium.io
This change is
test-me-please
test-missed-k8s
test-upstream-k8s
test-docs-please
|
gharchive/pull-request
| 2018-08-03T21:42:56
|
2025-04-01T06:38:11.762390
|
{
"authors": [
"ianvernon"
],
"repo": "cilium/cilium",
"url": "https://github.com/cilium/cilium/pull/5097",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
367891754
|
daemon: move CNP store error to debug level
This is not an error condition. It should be moved to a debug as several
attempts are made to retrieve and update the CNP status and a warning is already
printed when the update doesn't succeed in the configurable number of attempts.
Fixes: #5824
Signed-off-by: Ian Vernon ian@cilium.io
test-me-please
|
gharchive/pull-request
| 2018-10-08T17:49:20
|
2025-04-01T06:38:11.764002
|
{
"authors": [
"ianvernon"
],
"repo": "cilium/cilium",
"url": "https://github.com/cilium/cilium/pull/5829",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
445484638
|
pkg/maps: use pointer in receivers for GetKeyPtr and GetValuePtr
Not using a pointer in the receivers causes Get{Key,Value}Ptr to return
a pointer of the copy of the receiver structure being called. This can
have consequences if we use Get{Key,Value}Ptr to store data and expect
the data to still be present in the original structure.
Signed-off-by: André Martins andre@cilium.io
This change is
test-me-please
Coverage decreased (-0.004%) to 41.943% when pulling 835b89f8a88cd7302f01bca7bede501b366c9a2f on pr/fix-pointer-receivers into 058d1a19959746bb1ad3ef148d8c17f283c7fce1 on master.
@aanm What were the symptoms of this bug? Did this cause real problems?
@tgraf I can't really tell for the sockmap and encrypt maps. But AFAIK if we ever did a map lookup for those, the value read from the bpf map would always be 0, because GetValuePtr() would not point to the same variable we pass into Lookup(k bpf.MapKey, value bpf.MapValue). Something along these lines:
fmt.Println(v.Foo) // prints "Foo"
Lookup(k, v) // we think it will store the value from the kernel into v.Foo but in reality it isn't
fmt.Println(v.Foo) // continues to print "Foo"
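A minimal, self-contained sketch of that failure mode (the types here are illustrative, not the real bpf package API):

```go
package main

import "fmt"

// MapValue mimics the relevant part of the bpf.MapValue interface: a map
// lookup writes the kernel's value through the pointer from GetValuePtr.
type MapValue interface{ GetValuePtr() *uint32 }

// valueRecv uses a value receiver, so GetValuePtr returns a pointer into
// a copy of the struct; writes through it never reach the caller.
type valueRecv struct{ Foo uint32 }

func (v valueRecv) GetValuePtr() *uint32 { return &v.Foo }

// ptrRecv uses a pointer receiver, so the returned pointer aliases the
// caller's struct and the lookup's write is visible afterwards.
type ptrRecv struct{ Foo uint32 }

func (v *ptrRecv) GetValuePtr() *uint32 { return &v.Foo }

// lookup stands in for a bpf map lookup that stores 42 into the value.
func lookup(v MapValue) { *v.GetValuePtr() = 42 }

// demo returns what each variant observes after the lookup.
func demo() (uint32, uint32) {
	a := valueRecv{}
	lookup(a) // write lands in a temporary copy; a.Foo stays 0
	b := &ptrRecv{}
	lookup(b) // write lands in b itself; b.Foo becomes 42
	return a.Foo, b.Foo
}

func main() {
	fmt.Println(demo())
}
```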
|
gharchive/pull-request
| 2019-05-17T14:58:51
|
2025-04-01T06:38:11.768615
|
{
"authors": [
"aanm",
"coveralls"
],
"repo": "cilium/cilium",
"url": "https://github.com/cilium/cilium/pull/8083",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
449317473
|
daemon: Do not remove orphan svc-v2 during restore
Previously, the service restoration procedure could remove an orphan v2 service if no corresponding legacy service could be found. This handled the case where a user downgraded from v1.5 to <v1.5, changed services, and then upgraded back to >= 1.5.
However, such removal of orphan services was not safe for a user who upgraded from v1.5 to >= v1.5 and forgot to disable legacy services. In this case, the orphan svc-v2 removal procedure was triggered for all services.
In addition, I've included all commits from https://github.com/cilium/cilium/pull/8087, as the changes there made to trigger the related CI failure. Once we merge this commit, we can close #8087.
This change is
test-me-please
test-missed-k8s
Ci failed due to the git fetch timeout.
test-me-please
test-missed-k8s
test-me-please
test-missed-k8s
test-me-please
test-missed-k8s
Coverage increased (+0.03%) to 41.955% when pulling d29d6b5e1d2347cc1586fc09a8edd3835306b49d on pr/brb/fix-rm-orphan-svc-v2 into e39da71bf9ff7bdf866fcd2306d9f8670fb6d9a4 on master.
CI failed due to:
[2019-05-28T19:48:32.713Z] k8s1-1.10: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
test-me-please
test-missed-k8s
@brb The upgrade/downgrade test failed. I assume this is related to bumping the stable image to 1.5. Given that the main motivation of this PR is to unblock 1.5.2, do you want to remove that commit and test 1.4 -> master instead of 1.5 -> master?
@tgraf The test case failed because we did the upgrade from v1.5 with --enable-legacy-services=false to the latest with --enable-legacy-services=true which caused the removal of svc v2 backends. I've fixed the flag in the test manifests.
test-me-please
test-missed-k8s
test-me-please
I had a second thought on this PR.
The problem I was trying to solve with this PR is that if a user ran v1.5 with --enable-legacy-services=false and then swapped to --enable-legacy-services=true, then all v2 services were deleted, because in that case legacy services were considered the source of truth. The swap could have happened for one of the following reasons:
The user accidentally forgot to set the flag (default is true).
The user decided to downgrade to < v1.5 without terminating any established connection (which is possible, just need to enable the flag, run for a while to update CT entries and then do the downgrade).
However, we probably have quite a few users who downgraded to <1.5 due to the regressions. This means that they have both types of service map (legacy and v2), and the v2 map is stale because, obviously, in v1.4 we do not manage v2. So, if we remove the calls to the functions which remove orphan (=stale) services and backends, we risk putting the maps into an inconsistent state.
Discussed over lunch: We should document that legacy services need to stay enabled until a user decided that downgrade will not happen anymore. Otherwise, connection resets must be expected.
|
gharchive/pull-request
| 2019-05-28T14:53:31
|
2025-04-01T06:38:11.779651
|
{
"authors": [
"brb",
"coveralls",
"ianvernon",
"tgraf"
],
"repo": "cilium/cilium",
"url": "https://github.com/cilium/cilium/pull/8135",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
483523542
|
[wip] repeat initialising cilium-operator
This change is
test-me-please
test-me-please
Coverage increased (+0.01%) to 44.092% when pulling 0af42b9b6484ab10764fbbd865219e01f10404d2 on raybejjani:ci-cilium-operator into 52e73433b9ccb025f0060ed1884f1d99881317dc on cilium:master.
test-me-please
test-me-please
test-me-please
|
gharchive/pull-request
| 2019-08-21T16:25:18
|
2025-04-01T06:38:11.783217
|
{
"authors": [
"coveralls",
"raybejjani"
],
"repo": "cilium/cilium",
"url": "https://github.com/cilium/cilium/pull/8989",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
486603872
|
endpoint: remove most cases of direct access to OpLabels
Signed-off-by: Ian Vernon ian@cilium.io
This change is
Coverage increased (+0.004%) to 44.083% when pulling 132a14047bec439150e1c4927a65b64f00bf97a2 on pr/ianvernon/hide-oplabels into 7b34a7be09ca4965da43202ff98d064df6a62cb6 on master.
|
gharchive/pull-request
| 2019-08-28T20:54:04
|
2025-04-01T06:38:11.786631
|
{
"authors": [
"coveralls",
"ianvernon"
],
"repo": "cilium/cilium",
"url": "https://github.com/cilium/cilium/pull/9069",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2424612179
|
update ebpf-go dependency
update ebpf-go to the latest version and deal with the fall out from moving log buffer probing into the library.
Seems the linters are broken/outdated. Will bypass them for now.
|
gharchive/pull-request
| 2024-07-23T08:40:17
|
2025-04-01T06:38:11.787521
|
{
"authors": [
"dylandreimerink",
"lmb"
],
"repo": "cilium/coverbee",
"url": "https://github.com/cilium/coverbee/pull/10",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1649791069
|
🛑 CHPL Main Web Site / BiblioWeb is down
In a4673e1, CHPL Main Web Site / BiblioWeb (https://cincinnatilibrary.org/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: CHPL Main Web Site / BiblioWeb is back up in ea3b216.
|
gharchive/issue
| 2023-03-31T17:56:20
|
2025-04-01T06:38:11.814918
|
{
"authors": [
"rayvoelker"
],
"repo": "cincinnatilibrary/uptime-reports",
"url": "https://github.com/cincinnatilibrary/uptime-reports/issues/130",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2615658254
|
🛑 Collection Analysis Datasette is down
In 88eef88, Collection Analysis Datasette (https://collection-analysis.cincy.pl/) was down:
HTTP code: 503
Response time: 105 ms
Resolved: Collection Analysis Datasette is back up in 3b8269e after 10 minutes.
|
gharchive/issue
| 2024-10-26T08:33:57
|
2025-04-01T06:38:11.817407
|
{
"authors": [
"rayvoelker"
],
"repo": "cincinnatilibrary/uptime-reports",
"url": "https://github.com/cincinnatilibrary/uptime-reports/issues/574",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2470013495
|
⚠️ Cioos National CKAN has degraded performance
In fefba86, Cioos National CKAN (https://catalogue.cioos.ca/) experienced degraded performance:
HTTP code: 200
Response time: 9592 ms
Resolved: Cioos National CKAN performance has improved in 97d5bcc after 5 minutes.
|
gharchive/issue
| 2024-08-16T10:47:29
|
2025-04-01T06:38:11.864450
|
{
"authors": [
"fostermh"
],
"repo": "cioos-siooc/cwatch-upptime",
"url": "https://github.com/cioos-siooc/cwatch-upptime/issues/3143",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1233169132
|
Add test check fields
Implemented basic unit tests for the check_fields methods. The four types of analysis are tested ( occurrence_core/default, event_code, occurrence_extension and extended_measurement_or_fact_extension ).
looks like the CI needs some configuration repair.
Github CI is really not my expertise. Anybody available to try to fix that? Same problem with all PRs.
The error message says to do what we are already doing.... I can take a look.
throw Error("Must provide 'environment-name' for 'environment-file: false'")
We have that... https://github.com/cioos-siooc/pyobistools/blob/main/.github/workflows/default-tests.yml#L19-L22
I pushed changes into main that fix CI, you'll have to rebase on top of main to pick them up... then this will pass.
|
gharchive/pull-request
| 2022-05-11T20:47:02
|
2025-04-01T06:38:11.867372
|
{
"authors": [
"jdpye",
"kwilcox",
"sauve"
],
"repo": "cioos-siooc/pyobistools",
"url": "https://github.com/cioos-siooc/pyobistools/pull/23",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1096819713
|
Update sbt to 1.6.1
Updates org.scala-sbt:sbt from 1.5.8 to 1.6.1.
GitHub Release Notes - Version Diff
I'll automatically update this PR to resolve conflicts as long as you don't change it yourself.
If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below.
Configure Scala Steward for your repository with a .scala-steward.conf file.
Have a fantastic day writing Scala!
Ignore future updates
Add this to your .scala-steward.conf file to ignore future updates of this dependency:
updates.ignore = [ { groupId = "org.scala-sbt", artifactId = "sbt" } ]
labels: library-update, early-semver-minor, semver-spec-minor, commit-count:1
Codecov Report
Merging #204 (3755c4b) into master (6b8f299) will not change coverage.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #204 +/- ##
=======================================
Coverage 84.04% 84.04%
=======================================
Files 19 19
Lines 282 282
Branches 6 6
=======================================
Hits 237 237
Misses 45 45
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 6b8f299...3755c4b. Read the comment docs.
|
gharchive/pull-request
| 2022-01-08T03:18:05
|
2025-04-01T06:38:11.881228
|
{
"authors": [
"codecov-commenter",
"scala-steward"
],
"repo": "circe/circe-generic-extras",
"url": "https://github.com/circe/circe-generic-extras/pull/204",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
879990615
|
Migrate tests from ScalaTest to Munit: DecoderSuite
Add duplicated traits for LargeNumberDecoderTests, temporary
while we migrate the rest of existing tests until a next PR.
Looks good to me, thanks.
|
gharchive/pull-request
| 2021-05-08T00:25:36
|
2025-04-01T06:38:11.891241
|
{
"authors": [
"diesalbla",
"travisbrown"
],
"repo": "circe/circe",
"url": "https://github.com/circe/circe/pull/1739",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
157226965
|
Data race on submission URL
I have a unit test that calls WithSubmissionUrl to set a new target, but a previous unit test has already called Start, and there's no way to terminate that goroutine. So, I'm writing over the string at the same time trapCall is reading it.
WARNING: DATA RACE
Write by goroutine 23:
github.com/go-kit/kit/metrics/circonus.TestGauge()
/home/travis/gopath/src/github.com/go-kit/kit/metrics/circonus/circonus_test.go:106 +0x295
testing.tRunner()
/tmp/workdir/go/src/testing/testing.go:456 +0xdc
Previous read by goroutine 10:
github.com/circonus-labs/circonus-gometrics.trapCall()
/home/travis/gopath/src/github.com/circonus-labs/circonus-gometrics/circonus-gometrics.go:340 +0x14a
github.com/circonus-labs/circonus-gometrics.submit()
/home/travis/gopath/src/github.com/circonus-labs/circonus-gometrics/circonus-gometrics.go:230 +0x93
github.com/circonus-labs/circonus-gometrics.Start.func1()
/home/travis/gopath/src/github.com/circonus-labs/circonus-gometrics/circonus-gometrics.go:396 +0x974
The fastest fix is to wrap all access of package globals with mutexes. The better fix is to stop using package globals :)
no more package globals being used. i think we're good on this one.
|
gharchive/issue
| 2016-05-27T15:04:47
|
2025-04-01T06:38:11.904443
|
{
"authors": [
"maier",
"peterbourgon"
],
"repo": "circonus-labs/circonus-gometrics",
"url": "https://github.com/circonus-labs/circonus-gometrics/issues/3",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2121844780
|
ORION-2525: fsoc solution zap + deprecate include-tags flag for fsoc ks commands
Description
We are adding a new solution command: fsoc solution zap. This will upload an empty version of a solution, removing all types and objects that are present in the solution. This will only work for non-stable tagged solutions.
We have also marked the include-tags flag as hidden, as this field is not in our public OpenAPI spec at this point in time, so we should not expose it to our users yet.
Type of Change
[X] Bug Fix
[X] New Feature
[ ] Breaking Change
[ ] Refactor
[ ] Documentation
[ ] Other (please describe)
Checklist
[X] I have read the contributing guidelines
[X] Existing issues have been referenced (where applicable)
[X] I have verified this change is not present in other open pull requests
[X] Functionality is documented
[X] All code style checks pass
[X] New code contribution is covered by automated tests
[X] All new and existing tests pass
Codecov Report
All modified and coverable lines are covered by tests :white_check_mark:
Comparison is base (ba840f9) 26.88% compared to head (79617c1) 26.88%.
:exclamation: Your organization needs to install the Codecov GitHub app to enable full functionality.
Additional details and impacted files
@@ Coverage Diff @@
## main #275 +/- ##
=======================================
Coverage 26.88% 26.88%
=======================================
Files 44 44
Lines 4564 4564
=======================================
Hits 1227 1227
Misses 3242 3242
Partials 95 95
:umbrella: View full report in Codecov by Sentry.
:loudspeaker: Have feedback on the report? Share it here.
|
gharchive/pull-request
| 2024-02-06T23:25:04
|
2025-04-01T06:38:11.929715
|
{
"authors": [
"bemidji3",
"codecov-commenter"
],
"repo": "cisco-open/fsoc",
"url": "https://github.com/cisco-open/fsoc/pull/275",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1217514039
|
Fixed gse include path for use with FetchContent
Moving CMakeLists.txt up to the src/gse directory helps minimize the number of .. used in paths.
@glhewett Nice work, but I think we need the same change to src/common for this to work
fixed.
|
gharchive/pull-request
| 2022-04-27T15:23:12
|
2025-04-01T06:38:11.932467
|
{
"authors": [
"RichLogan",
"glhewett"
],
"repo": "cisco/gse",
"url": "https://github.com/cisco/gse/pull/5",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
855027920
|
Add iOS/mobile device support
Is your feature request related to a problem? Please describe.
Papermerge is quite unusable since it does not support touch input. When using iPad or another device that does not support a right click in web interfaces, it results in poor user experience.
Describe the solution you'd like
A mobile/touch friendly interface would make Papermerge much more usable.
Describe alternatives you've considered
N/A
Additional context
N/A
@cevatkerim, thanks for opening this issue!
This may not be directly related with the above bug, but it's also related to mobile:
When using mobile (tested with Firefox 90.1.3 on Android) a lot of stuff is "hidden".
E.g. when I want to create a user, the whole box containing the two buttons to "Create" or "Cancel" isn't visible.
Or when opening a file it only shows the file itself, the metadata "box" isn't displayed, etc.
|
gharchive/issue
| 2021-04-10T09:44:01
|
2025-04-01T06:38:12.076678
|
{
"authors": [
"Melkor333",
"cevatkerim",
"ciur"
],
"repo": "ciur/papermerge",
"url": "https://github.com/ciur/papermerge/issues/364",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2265126223
|
🛑 AppCheck is down
In bea9387, AppCheck (https://www09.8f7.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: AppCheck is back up in 6b9b52d after 36 minutes.
|
gharchive/issue
| 2024-04-26T07:20:12
|
2025-04-01T06:38:12.079082
|
{
"authors": [
"civcicd"
],
"repo": "civcicd/uptime-monitor",
"url": "https://github.com/civcicd/uptime-monitor/issues/196",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1095937316
|
Projects not showing up under individual org page for index contributors
Overview
For orgs that are tagged as index contributors in the Civic Tech Organizations page, when you click on some of them, no projects show up under the individual org page. As index contributors, they should have projects tagged with "civictechindex"
Action Items
[ ] Investigate where the disconnect is between orgs flagged as contributors and their tagged projects
[ ] Implement fix in front end if needed or notify PM where to update org data
Just some info from a quick search on github.com.
Code for Buffalo is the index contributor that returns no projects
http://civictechindex.org/organization/code-for-buffalo
Code For Buffalo does have a project with the "civictechindex" tag
https://github.com/search?p=4&q=topic%3Acivictechindex&type=Repositories
Code For Buffalo uses the tag "buffalo" in projects, rather than some variations like "hack-for-la" or "code-for-kc"
Here are the queries we're using to retrieve the organization's projects with "civictechindex". Notice we don't use "buffalo".
Here's a query that does return the correct result, but it uses the CodeForBuffalo github user (it works without the org:CodeForBuffalo that's also in this query)
https://github.com/search?q=org%3ACodeForBuffalo+topic%3Acivictechindex+user%3ACodeForBuffalo
There's a "github_user" field in the backend organization model that's available to use for this case. Or maybe just use the github_user instead of the organization tag variations. But I don't know if there are cases that require those.
We do want to encourage the proper use of affiliation tags. So we have the following rules
We don't list someone under their org in the contributors section unless they use one of the variations of the tags that we accept. For example, we would not accept Buffalo because other orgs might use that. We will accept code-for-buffalo, codeforbuffalo, code4buffalo.
@cnk please see Bonnie's clarification of current affiliation tag issues above. Let me know if we need to discuss how to tackle them. Thanks
I think we need to clarify that "they need to use an affiliation tag" means "The organization needs their repositories tagged with a topic (that is GH's word for our 'tags') that is one of our recognized variations on their name".
I suspect several of our current 'contributors' will need to update their tagging - but it's hard to say if we don't have good data for the org tags. Before I modify our 'update_contributors' script with this new restriction, let's get the data import from #1036 done. Then the issues are likely to be useful / valid.
This should be split into new issues
[frontend] This sounds like a frontend change. We only store the orgs in the backend and not their repos. The frontend is what queries and displays the repos. This kind of negates point 2 below. This hidden repo behavior was what triggered the current issue to be created.
[frontend] We can query the repo's organization to see whether it belongs to codeforbuffalo or codeforamerica. This is what the linked pull request is doing. If the user clicks to see Code For Buffalo's repos, it does a query for topic:civictechindex and org:CodeForBuffalo. I think this solution addresses the current issue correctly.
[backend] This is what cnk is addressing above
[frontend] This is a frontend thing since it involves the tag generator. Run the queries in the frontend to display what's appropriate.
Here's the expected behavior discussed at the 1/27 meeting:
The backend script should NOT mark an affiliated org as index contributor unless it has a repo that contains BOTH 'civictechindex' and a proper org affiliation topic tag.
So, in the case of Code for Buffalo, rather than having it show up as index contributor and have an empty page, we would not want to show the organization at all if the index contributor filter is on. This means the backend should not check the index contributor flag.
Question/clarification: For non-affiliated orgs, we require only that the 'civictechindex' topic tag be present for the org to be an index contributor?
Progress - not much progress on front end since last update
Blockers - need to make instructions more clear for orgs to set their affiliation tags in their projects. Also awaiting data migration in #1036 and other back end logic updates for flagging orgs as contributors
Availability - 2 days this week
ETA - pending blockers
|
gharchive/issue
| 2022-01-07T03:05:54
|
2025-04-01T06:38:12.136633
|
{
"authors": [
"ExperimentsInHonesty",
"bruceplai",
"cnk",
"fyliu"
],
"repo": "civictechindex/CTI-website-frontend",
"url": "https://github.com/civictechindex/CTI-website-frontend/issues/1113",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1644341820
|
Enable applicants to download and print completed applications
Is your feature request related to a problem? Please describe.
A group of Trusted Intermediaries mentioned that it's helpful to print the completed program application for a resident/client to take with them. Residents often want the physical form to take with them. They mentioned it can be helpful if the residents seeks services from multiple CBOs, they can take their application to the next TI.
Describe the solution you'd like
An option on the application confirmation page that allows the applicant (TI or resident) to download then print the completed application.
Additional context
It would be helpful if it included all completed questions as well as a printout of any uploaded documents. There may be data privacy concerns with the email option, would want to check with Privacy on this.
Done when
Resident or TI can download the completed application.
@swatkat1 is considering this feature as a good project for summer intern.
This was a feature requested during listening sessions with community-based orgs (digital navigators) on April 25th.
Possibly change to just download as PDF. Adobe can print from there and possibly open the user's email application.
For responsiveness, I recommend laying out the information side by side, so on mobile buttons can stack. Here's a mockup!
|
gharchive/issue
| 2023-03-28T16:58:40
|
2025-04-01T06:38:12.141348
|
{
"authors": [
"elisekalstad",
"msprenke",
"sijiayam"
],
"repo": "civiform/civiform",
"url": "https://github.com/civiform/civiform/issues/4506",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1214220039
|
Difference between existing packages?
Hi, I just noticed this package and I'm wondering what the difference is between this package and existing packages like Pickle.jl and NPZ.jl. And maybe it's worth depending on them directly?
This is unregistered and very experimental!
Partly I don't particularly like the API of NPZ.jl and partly putting them in the same place allows some useful behaviour, like you can reuse the Python parser to implement reading npy files, and the pickle parser allows parsing a wider range of npy files, e.g. ones containing strings.
|
gharchive/issue
| 2022-04-25T09:22:47
|
2025-04-01T06:38:12.150333
|
{
"authors": [
"chengchingwen",
"cjdoris"
],
"repo": "cjdoris/PythonIO.jl",
"url": "https://github.com/cjdoris/PythonIO.jl/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
515496204
|
State of the library and future plans
Hi,
I came across your library after asking about open source LoRaWan stack implementartions here.
We want to develop sensor nodes in a factory environment; we're planning to use STM32Lx devices, communicating over LoRaWAN with a gateway. We're using Atollic TrueStudio (configuration code generated with STM32CubeMX) and I would like to include your library in the project. I tried to do so with lorawan-mac but I wasn't able to.
After surveying LoRaWAN stacks I think yours is the sanest one, but I would like to know your roadmap, whether you accept contributions, plans to implement Class B and C, etc.
Hello
Thanks, I started this project to learn the protocol and also because I felt the other implementations looked crazy.
This project is active but I am (probably) the only person using it. It would be great to get some more users and contributors that appreciate the style of this implementation over the alternatives. Before accepting contributions I think it would be necessary to lay down some guidelines so that no effort is wasted.
There is no roadmap, there could be one if there was serious interest. The reason I haven't done class B and C is because I have no need for these modes. The biggest challenge with implementing these modes (or indeed any new features) is verifying that they work correctly. Class B needs a lot more tooling than class A.
I would describe the quality as experimental. I don't think you should use it for anything serious at the moment. This could change if more people use it. I think it would also be good to run it through the LoRaWAN conformance test at some point.
I'm planning to release 0.1.7 in the next week or two. I'll see if I can also produce a list of what works well and what needs to be improved.
Hi,
I would describe the quality as experimental. I don't think you should use it for anything serious at the moment. This could change if more people use it. I think it would also be good to run it through the LoRaWAN conformance test at some point.
Thanks for the quick reply. To be honest, I've just been told that plan A is to use modules commanded via AT commands, so I'm no longer in a hurry, at least not for work projects.
I'm still interested in using this library as a learning exercise and in personal projects, seems like I will be using LoRaWan for a while and, as you did when you started this project, I would use it to learn the protocol.
I'm planning to release 0.1.7 in the next week or two. I'll see if I can also produce a list of what works well and what needs to be improved.
I would like to propose the following, let me know if you agree. I can get more familiar with LoRaWan and maybe other implementations of it, then if you feel like it I can help with small and easy tasks of the to be implemented list.
So far I have been able to include LDL in my project and initialize both Board and Radio. I just noticed I must implement all the weak functions in lora_system, am I right? Maybe a porting guide would be nice for those of us who aren't familiar with LDL.
Regards
I think an AT command module is wise for proof of concept. You don't want to get bogged down in technical details before you know if the technology is right for the application.
Yes I agree a todo list is a good idea.
Yes I agree a porting guide would be helpful.
Some of the system interfaces will work just fine with the weak implementations, while others are show stoppers. Looking at the API documentation it occurs to me not all of the mandatory interfaces are marked as mandatory.
The arduino wrapper is a good reference for what is mandatory.
I should also mention that since I am in the UK I only ever use the setting for the EU_863_870 region. I have in the past emulated some of the other regions but not for some time.
I'm located in Mexico, will test the US_902_928 region.
Some of the system interfaces will work just fine with the weak implementations, while others are show stoppers. Looking at the API documentation it occurs to me not all of the mandatory interfaces are marked as mandatory.
I didn't take a deep look, but I assume the mandatory interfaces are those marked with @warning this function must be implemented on target for correct operation.
The arduino wrapper is a good reference for what is mandatory.
I will take a look at it. So far I haven't seen any timer being set up, while in loramac-node they use the microcontroller RTC; is this only necessary on Class B or C nodes?
I didn't take a deep look, but I assume the mandatory interfaces are those marked with @warning this function must be implemented on target for correct operation.
Yeah more or less.
I'm at master, should I also check the development branch?
Master is best. Development might not work properly.
so far i haven't seen any timer being setup
LDL has a bunch of internal timers that depend on the platform providing a free-running 32bit counter.
LDL_System_time() returns the counter value at any time
LDL_System_tps() returns the rate at which the counter increments (ticks per second)
LDL_System_eps() returns the error in ticks (error per second)
So on Arduino for example:
LDL_System_time() wraps micros()
LDL_System_tps() returns 1000000
LDL_System_eps() returns 5000 to account for a ceramic resonator
LDL has a bunch of internal timers that depend on the platform providing a free-running 32bit counter.
LDL_System_time() returns the counter value at any time
LDL_System_tps() returns the rate at which the counter increments (ticks per second)
LDL_System_eps() returns the error in ticks (error per second)
So on Arduino for example:
LDL_System_time() wraps micros()
LDL_System_tps() returns 1000000
LDL_System_eps() returns 5000 to account for a ceramic resonator
I haven't been at my workstation. I'm using STM32 devices; I guess the HAL_GetTick() function should work for LDL_System_time(), LDL_System_tps() should return the matching tick rate, and I will calculate the value for LDL_System_eps().
Looking good.
I see the problem, you need to connect the radio "DIO" control lines. That part is missing from the example you are based on, it's not very clear.
You need to detect the DIO line(s) rising edge and then call LDL_MAC_interrupt(&mac, n, LDL_System_time()) where n is the index of the line (e.g. DIO0 is n == 0). You only need DIO0, DIO1, DIO2, and DIO3.
This is how the arduino wrapper does it. That function is called by an ISR for a particular control line. There is implementation specific logic but the important part is that I call LDL_MAC_interrupt().
If you use an interrupt, make sure to define LORA_SYSTEM_ENTER_CRITICAL and _LEAVE_CRITICAL. This should work:
#define LORA_SYSTEM_ENTER_CRITICAL(APP) volatile uint32_t primask = __get_PRIMASK();__disable_irq();
#define LORA_SYSTEM_LEAVE_CRITICAL(APP) __set_PRIMASK(primask);
I recommend to put this in a header file, then define LORA_TARGET_INCLUDE to be the name of that file (e.g. -DLORA_TARGET_INCLUDE='"your_header.h"'). All the other LDL build options can go there.
You will need to also define:
LORA_ENABLE_SX1276
LORA_ENABLE_US_902_928
LORA_DISABLE_FULL_CODEC
I assume you've already done this somewhere I can't see. I mean, you can do it all from the makefile if you prefer.
Once you get that sorted you will probably find that the example sends too frequently since the US doesn't have a duty cycle limit. To slow it down you can either add a delay in your app, or set LDL_MAC_setAggregatedDutyCycle() to impose a global duty cycle limit. A setting of 2 will give you a 1% duty cycle limit.
Hi,
Thanks for taking a look at the project. I just noticed I had uploaded its keys, so I had to make the repo private :/; let me know if you want me to give you access to it.
I see the problem, you need to connect the radio "DIO" control lines. That part is missing from the example you are based on, it's not very clear.
You need to detect the DIO line(s) rising edge and then call LDL_MAC_interrupt(&mac, n, LDL_System_time()) where n is the index of the line (e.g. DIO0 is n == 0). You only need DIO0, DIO1, DIO2, and DIO3.
This is how the arduino wrapper does it. That function is called by an ISR for a particular control line. There is implementation specific logic but the important part is that I call LDL_MAC_interrupt().
Here's how i implemented it:
void HAL_GPIO_EXTI_Callback(uint16_t GPIO_Pin)
{
switch (GPIO_Pin) {
case DIO0_Pin:
LDL_MAC_interrupt(&mac, 0, LDL_System_time());
break;
case DIO1_Pin:
LDL_MAC_interrupt(&mac, 1, LDL_System_time());
break;
case DIO2_Pin:
LDL_MAC_interrupt(&mac, 2, LDL_System_time());
break;
case DIO3_Pin:
LDL_MAC_interrupt(&mac, 3, LDL_System_time());
break;
default:
break;
}
}
If you use an interrupt, make sure to define LORA_SYSTEM_ENTER_CRITICAL and _LEAVE_CRITICAL. This should work:
#define LORA_SYSTEM_ENTER_CRITICAL(APP) volatile uint32_t primask = __get_PRIMASK();__disable_irq();
#define LORA_SYSTEM_LEAVE_CRITICAL(APP) __set_PRIMASK(primask);
I recommend to put this in a header file, then define LORA_TARGET_INCLUDE to be the name of that file (e.g. -DLORA_TARGET_INCLUDE='"your_header.h"'). All the other LDL build options can go there.
My custom header file is named LDL_options.h and goes as follows:
#ifndef LDL_OPTIONS_H_
#define LDL_OPTIONS_H_
#include "cmsis_gcc.h"
// http://stm32f4-discovery.net/2015/06/how-to-properly-enabledisable-interrupts-in-arm-cortex-m/
static volatile uint32_t primask = 0;
#define LORA_SYSTEM_ENTER_CRITICAL(APP) do { primask = __get_PRIMASK(); __disable_irq(); } while (0);
#define LORA_SYSTEM_LEAVE_CRITICAL(APP) __set_PRIMASK(primask);
#endif /* LDL_OPTIONS_H_ */
You will need to also define:
LORA_ENABLE_SX1276
LORA_ENABLE_US_902_928
LORA_DISABLE_FULL_CODEC
I assume you've already done this somewhere I can't see. I mean, you can do it all from the makefile if you prefer.
Yep, I define those symbols in the IDE, but now I think it's better to have them in the LDL_options.h file so others don't need to load the project into the IDE just to see the configuration. So the LDL_options.h file ends up like this:
#ifndef LDL_OPTIONS_H_
#define LDL_OPTIONS_H_
#include "cmsis_gcc.h"
// http://stm32f4-discovery.net/2015/06/how-to-properly-enabledisable-interrupts-in-arm-cortex-m/
#define LORA_ENABLE_SX1276
#define LORA_ENABLE_US_902_928
#define LORA_DISABLE_FULL_CODEC
// #define LORA_TARGET_INCLUDE /* See lora_platform */
static volatile uint32_t primask = 0;
#define LORA_SYSTEM_ENTER_CRITICAL(APP) do { primask = __get_PRIMASK(); __disable_irq(); } while (0);
#define LORA_SYSTEM_LEAVE_CRITICAL(APP) __set_PRIMASK(primask);
#endif /* LDL_OPTIONS_H_ */
Once you get that sorted you will probably find that the example sends too frequently since the US doesn't have a duty cycle limit. To slow it down you can either add a delay in your app, or set LDL_MAC_setAggregatedDutyCycle() to impose a global duty cycle limit. A setting of 7 will give you a ~1% duty cycle limit.
edit: made mistake on global duty cycle
I will edit the comment once I get some results later today.
Thanks for the help and patience.
10km is probably too far for initial debug. Too close (i.e. sitting right next to the gateway) can also be an issue.
It's often useful to print time (i.e. LDL_System_time()) with each event for double checking timing.
Hi,
Thanks for the tips. I had to modify the logging macros (replacing PRIu32 with %d) because of my underlying functions. Everything else seems to be working as expected. Will report back when I get a gateway.
|
gharchive/issue
| 2019-10-31T14:52:33
|
2025-04-01T06:38:12.175673
|
{
"authors": [
"C47D",
"cjhdev"
],
"repo": "cjhdev/lora_device_lib",
"url": "https://github.com/cjhdev/lora_device_lib/issues/1",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
162158572
|
Docs for 1.0-alpha?
Are these available somewhere convenient?
Are these available anywhere at all ?
|
gharchive/issue
| 2016-06-24T14:24:09
|
2025-04-01T06:38:12.183351
|
{
"authors": [
"mafrosis",
"thicklord"
],
"repo": "cjlucas/rtorrent-python",
"url": "https://github.com/cjlucas/rtorrent-python/issues/29",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
956883400
|
always destroy instances
Expected Behavior
On GCP and DO cloud providers, instances are always being destroyed:
2021-07-30 15:59:33.646 | INFO | uvicorn.protocols.http.h11_impl:send:461 - 127.0.0.1:60922 - "GET /destroy HTTP/1.1" 200
2021-07-30 15:59:36.640 | INFO | uvicorn.protocols.http.h11_impl:send:461 - 127.0.0.1:60922 - "GET /destroy HTTP/1.1" 200
2021-07-30 15:59:30.937 | INFO | uvicorn.protocols.http.h11_impl:send:461 - 127.0.0.1:60924 - "GET /destroy HTTP/1.1" 200
The UI regularly calls the /destroy endpoint to get a list of all the proxy instances pending deletion. The GET request you are seeing is just that; it's not actually deleting the proxies.
If you start Cloudproxy and don't open the UI, then you'll notice those requests aren't there.
|
gharchive/issue
| 2021-07-30T16:00:23
|
2025-04-01T06:38:12.278210
|
{
"authors": [
"alex60217101990",
"claffin"
],
"repo": "claffin/cloudproxy",
"url": "https://github.com/claffin/cloudproxy/issues/49",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1906923693
|
image preprocessing for training
For the first batches of training, we plan to use video frames extracted from AAPB videos. However, there are concerns around how to "normalize" the different image sizes and aspect ratios of videos from different decades. This thread is for discussing how we implement the normalizing strategies.
We decided to go only with 4:3 videos (before HD broadcasting era, circa early 2000s) in the early rounds of annotation.
Note that as long as we are using the pre-trained backbone weights, those weights in the torchvision package come with their own preprocessing code. That is, we can add some additional preprocessing based on domain knowledge before the torchvision-shipped preprocessing. However, as of now we don't see much need for doing so.
|
gharchive/issue
| 2023-09-21T13:01:08
|
2025-04-01T06:38:12.280171
|
{
"authors": [
"keighrim"
],
"repo": "clamsproject/app-swt-detection",
"url": "https://github.com/clamsproject/app-swt-detection/issues/2",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2190117065
|
clangd "--query-driver" option doesn't follow symlinks
When I run clangd with the --query-driver option set to a path containing a symlink, the argument has no effect and I still have missing system headers. In the following case, <iostream> can't be found.
#include <iostream>
int main(int argc, char* argv[])
{
std::cout << "hi\n";
return 0;
}
I can work around this issue by using the complete path without any symlinks.
Within the logs, the oe-workdir path item is a symlink to another location.
Logs
Please attach the clangd stderr log if you can. (Usually available from the editor)
If possible, run with --log=verbose - note that the logs will include the contents of open files!
If this is a crash, try to put llvm-symbolizer on your PATH per the troubleshooting instructions.
[START][2024-03-16 12:09:47] LSP logging initiated
[ERROR][2024-03-16 12:09:47] .../vim/lsp/rpc.lua:734 "rpc" "/usr/bin/clangd" "stderr" 'I[12:09:47.584] clangd version 17.0.6\nI[12:09:47.584] Features: linux\nI[12:09:47.584] PID: 102869\nI[12:09:47.584] Working directory: /home/jasonc/projects/quadcopter/albatros/build/workspace/sources/freeflight\nI[12:09:47.584] argv[0]: /usr/bin/clangd\nI[12:09:47.584] argv[1]: --log=verbose\nI[12:09:47.584] argv[2]: --query-driver=/home/jasonc/projects/quadcopter/albatros/build/workspace/sources/freeflight/oe-workdir/recipe-sysroot-native/usr/bin/aarch64-poky-linux/aarch64-poky-linux-g++\nV[12:09:47.584] User config file is /home/jasonc/.config/clangd/config.yaml\nI[12:09:47.584] Starting LSP over stdin/stdout\nV[12:09:47.584] <<< {"id":1,"jsonrpc":"2.0","method":"initialize","params":{"capabilities":{"offsetEncoding":["utf-8","utf-16"],"textDocument":{"callHierarchy":{"dynamicRegistration":false},"codeAction":{"codeActionLiteralSupport":{"codeActionKind":{"valueSet":["","quickfix","refactor","refactor.extract","refactor.inline","refactor.rewrite","source","source.organizeImports"]}},"dataSupport":true,"dynamicRegistration":false,"isPreferredSupport":true,"resolveSupport":{"properties":["edit"]}},"completion":{"completionItem":{"commitCharactersSupport":true,"deprecatedSupport":true,"documentationFormat":["markdown","plaintext"],"insertReplaceSupport":true,"insertTextModeSupport":{"valueSet":[1,2]},"labelDetailsSupport":true,"preselectSupport":true,"resolveSupport":{"properties":["documentation","detail","additionalTextEdits","sortText","filterText","insertText","textEdit","insertTextFormat","insertTextMode"]},"snippetSupport":true,"tagSupport":{"valueSet":[1]}},"completionItemKind":{"valueSet":[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25]},"completionList":{"itemDefaults":["commitCharacters","editRange","insertTextFormat","insertTextMode","data"]},"contextSupport":true,"dynamicRegistration":false,"editsNearCursor":true,"insertTextMode":1},"declaration":{"
linkSupport":true},"definition":{"linkSupport":true},"documentHighlight":{"dynamicRegistration":false},"documentSymbol":{"dynamicRegistration":false,"hierarchicalDocumentSymbolSupport":true,"symbolKind":{"valueSet":[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26]}},"hover":{"contentFormat":["markdown","plaintext"],"dynamicRegistration":false},"implementation":{"linkSupport":true},"publishDiagnostics":{"relatedInformation":true,"tagSupport":{"valueSet":[1,2]}},"references":{"dynamicRegistration":false},"rename":{"dynamicRegistration":false,"prepareSupport":true},"semanticTokens":{"augmentsSyntaxTokens":true,"dynamicRegistration":false,"formats":["relative"],"multilineTokenSupport":false,"overlappingTokenSupport":true,"requests":{"full":{"delta":true},"range":false},"serverCancelSupport":false,"tokenModifiers":["declaration","definition","readonly","static","deprecated","abstract","async","modification","documentation","defaultLibrary"],"tokenTypes":["namespace","type","class","enum","interface","struct","typeParameter","parameter","variable","property","enumMember","event","function","method","macro","keyword","modifier","comment","string","number","regexp","operator","decorator"]},"signatureHelp":{"dynamicRegistration":false,"signatureInformation":{"activeParameterSupport":true,"documentationFormat":["markdown","plaintext"],"parameterInformation":{"labelOffsetSupport":true}}},"synchronization":{"didSave":true,"dynamicRegistration":false,"willSave":true,"willSaveWaitUntil":true},"typeDefinition":{"linkSupport":true}},"window":{"showDocument":{"support":true},"showMessage":{"messageActionItem":{"additionalPropertiesSupport":false}},"workDoneProgress":true},"workspace":{"applyEdit":true,"configuration":true,"didChangeWatchedFiles":{"dynamicRegistration":false,"relativePatternSupport":true},"semanticTokens":{"refreshSupport":true},"symbol":{"dynamicRegistration":false,"hierarchicalWorkspaceSymbolSupport":true,"symbolKind":{"valueSet":[1,2,3,4,5,6,7
,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26]}},"workspaceEdit":{"resourceOperations":["rename","create","delete"]},"workspaceFolders":true}},"clientInfo":{"name":"Neovim","version":"0.9.5"},"initializationOptions":{},"processId":102865,"rootPath":"/home/jasonc/projects/quadcopter/albatros/build/workspace/sources/freeflight","rootUri":"file:///home/jasonc/projects/quadcopter/albatros/build/workspace/sources/freeflight","trace":"off","workspaceFolders":[{"name":"/home/jasonc/projects/quadcopter/albatros/build/workspace/sources/freeflight","uri":"file:///home/jasonc/projects/quadcopter/albatros/build/workspace/sources/freeflight"}]}}\n\nI[12:09:47.584] <-- initialize(1)\nI[12:09:47.585] --> reply:initialize(1) 0 ms\nV[12:09:47.585] >>> {"id":1,"jsonrpc":"2.0","result":{"capabilities":{"astProvider":true,"callHierarchyProvider":true,"clangdInlayHintsProvider":true,"codeActionProvider":{"codeActionKinds":["quickfix","refactor","info"]},"compilationDatabase":{"automaticReload":true},"completionProvider":{"resolveProvider":false,"triggerCharacters":[".","<",">",":","\\"","/","*"]},"declarationProvider":true,"definitionProvider":true,"documentFormattingProvider":true,"documentHighlightProvider":true,"documentLinkProvider":{"resolveProvider":false},"documentOnTypeFormattingProvider":{"firstTriggerCharacter":"\\n","moreTriggerCharacter":[]},"documentRangeFormattingProvider":true,"documentSymbolProvider":true,"executeCommandProvider":{"commands":["clangd.applyFix","clangd.applyTweak"]},"foldingRangeProvider":true,"hoverProvider":true,"implementationProvider":true,"inactiveRegionsProvider":true,"inlayHintProvider":true,"memoryUsageProvider":true,"referencesProvider":true,"renameProvider":{"prepareProvider":true},"selectionRangeProvider":true,"semanticTokensProvider":{"full":{"delta":true},"legend":{"tokenModifiers":["declaration","definition","deprecated","deduced","readonly","static","abstract","virtual","dependentName","defaultLibrary","usedAsMutableReference","us
edAsMutablePointer","constructorOrDestructor","userDefined","functionScope","classScope","fileScope","globalScope"],"tokenTypes":["variable","variable","parameter","function","method","function","property","variable","class","interface","enum","enumMember","type","type","unknown","namespace","typeParameter","concept","type","macro","modifier","operator","bracket","label","comment"]},"range":false},"signatureHelpProvider":{"triggerCharacters":["(",")","{","}","<",">",","]},"standardTypeHierarchyProvider":true,"textDocumentSync":{"change":2,"openClose":true,"save":true},"typeDefinitionProvider":true,"typeHierarchyProvider":true,"workspaceSymbolProvider":true},"offsetEncoding":"utf-8","serverInfo":{"name":"clangd","version":"clangd version 17.0.6 linux x86_64-pc-linux-gnu"}}}\n\n'
[ERROR][2024-03-16 12:09:47] .../vim/lsp/rpc.lua:734 "rpc" "/usr/bin/clangd" "stderr" "V[12:09:47.699] <<< {\"jsonrpc\":\"2.0\",\"method\":\"initialized\",\"params\":{}}\n\nI[12:09:47.699] <-- initialized\nV[12:09:47.699] <<< {\"jsonrpc\":\"2.0\",\"method\":\"textDocument/didOpen\",\"params\":{\"textDocument\":{\"languageId\":\"cpp\",\"text\":\"/*\\n * SPDX-License-Identifier: GPL-3.0-or-later\\n *\\n * Copyright (C) 2023 Jason Carrete\\n *\\n * This file is part of Flight Controller.\\n *\\n * Flight Controller is free software: you can redistribute it and/or modify\\n * it under the terms of the GNU General Public License as published by\\n * the Free Software Foundation, either version 3 of the License, or\\n * (at your option) any later version.\\n *\\n * This program is distributed in the hope that it will be useful,\\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\\n * GNU General Public License for more details.\\n *\\n * You should have received a copy of the GNU General Public License\\n * along with this program. If not, see <https://www.gnu.org/licenses/>.\\n */\\n\\n#include \\\"appinfo.h\\\"\\n#include \\\"version.h\\\"\\n\\n#include <iostream>\\n\\nnamespace ffd = freeflight_daemon;\\n\\nint main(int argc, char* argv[])\\n{\\n std::cout << ffd::get_app_name() << ' ' << freeflight::get_version() << ' '\\n << freeflight::get_name() << '\\\\n';\\n return 0;\\n}\\n\",\"uri\":\"file:///home/jasonc/projects/quadcopter/albatros/build/workspace/sources/freeflight/src/flightd/main.cpp\",\"version\":0}}}\n\nI[12:09:47.699] <-- textDocument/didOpen\nV[12:09:47.699] System include extraction: driver clang expanded to /usr/bin/clang\nV[12:09:47.699] System include extraction: not allowed driver /usr/bin/clang\n"
[ERROR][2024-03-16 12:09:47] .../vim/lsp/rpc.lua:734 "rpc" "/usr/bin/clangd" "stderr" 'V[12:09:47.700] <<< {"id":2,"jsonrpc":"2.0","method":"textDocument/semanticTokens/full","params":{"textDocument":{"uri":"file:///home/jasonc/projects/quadcopter/albatros/build/workspace/sources/freeflight/src/flightd/main.cpp"}}}\n\n'
[ERROR][2024-03-16 12:09:47] .../vim/lsp/rpc.lua:734 "rpc" "/usr/bin/clangd" "stderr" "I[12:09:47.700] <-- textDocument/semanticTokens/full(2)\n"
[ERROR][2024-03-16 12:09:47] .../vim/lsp/rpc.lua:734 "rpc" "/usr/bin/clangd" "stderr" "V[12:09:47.700] config note at /home/jasonc/projects/quadcopter/albatros/build/workspace/sources/freeflight/.clangd:1:0: Parsing config fragment\n"
[ERROR][2024-03-16 12:09:47] .../vim/lsp/rpc.lua:734 "rpc" "/usr/bin/clangd" "stderr" "V[12:09:47.700] config note at /home/jasonc/projects/quadcopter/albatros/build/workspace/sources/freeflight/.clangd:1:0: Parsed 1 fragments from file\nV[12:09:47.700] Config fragment: compiling /home/jasonc/projects/quadcopter/albatros/build/workspace/sources/freeflight/.clangd:1 -> 0x0000724F50003730 (trusted=false)\n"
[ERROR][2024-03-16 12:09:47] .../vim/lsp/rpc.lua:734 "rpc" "/usr/bin/clangd" "stderr" "I[12:09:47.701] --> textDocument/publishDiagnostics\n"
[ERROR][2024-03-16 12:09:47] .../vim/lsp/rpc.lua:734 "rpc" "/usr/bin/clangd" "stderr" 'V[12:09:47.701] >>> {"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"diagnostics":[],"uri":"file:///home/jasonc/projects/quadcopter/albatros/build/workspace/sources/freeflight/.clangd"}}\n\n'
[ERROR][2024-03-16 12:09:47] .../vim/lsp/rpc.lua:734 "rpc" "/usr/bin/clangd" "stderr" "I[12:09:47.701] Loaded compilation database from /home/jasonc/projects/quadcopter/albatros/build/workspace/sources/freeflight/compile_commands.json\n"
[ERROR][2024-03-16 12:09:47] .../vim/lsp/rpc.lua:734 "rpc" "/usr/bin/clangd" "stderr" "V[12:09:47.701] Broadcasting compilation database from /home/jasonc/projects/quadcopter/albatros/build/workspace/sources/freeflight\n"
[ERROR][2024-03-16 12:09:47] .../vim/lsp/rpc.lua:734 "rpc" "/usr/bin/clangd" "stderr" "V[12:09:47.701] System include extraction: not allowed driver /home/jasonc/projects/quadcopter/albatros/build/tmp/work/cortexa57-poky-linux/freeflight/1.0+git/recipe-sysroot-native/usr/bin/aarch64-poky-linux/aarch64-poky-linux-g++\n"
[ERROR][2024-03-16 12:09:47] .../vim/lsp/rpc.lua:734 "rpc" "/usr/bin/clangd" "stderr" "I[12:09:47.701] ASTWorker building file /home/jasonc/projects/quadcopter/albatros/build/workspace/sources/freeflight/src/flightd/main.cpp version 0 with command \n[/home/jasonc/projects/quadcopter/albatros/build/tmp/work/cortexa57-poky-linux/freeflight/1.0+git/freeflight-1.0+git]\n/home/jasonc/projects/quadcopter/albatros/build/tmp/work/cortexa57-poky-linux/freeflight/1.0+git/recipe-sysroot-native/usr/bin/aarch64-poky-linux/aarch64-poky-linux-g++ --target=aarch64-poky-linux --driver-mode=g++ --sysroot=/home/jasonc/projects/quadcopter/albatros/build/tmp/work/cortexa57-poky-linux/freeflight/1.0+git/recipe-sysroot -I/home/jasonc/projects/quadcopter/albatros/build/workspace/sources/freeflight/src/flightd -I/home/jasonc/projects/quadcopter/albatros/build/workspace/sources/freeflight/src/freeflight/public -mcpu=cortex-a57 -march=armv8-a+crc -mbranch-protection=standard -fstack-protector-strong -O2 -D_FORTIFY_SOURCE=2 -Wformat -Wformat-security -Werror=format-security --sysroot=/home/jasonc/projects/quadcopter/albatros/build/tmp/work/cortexa57-poky-linux/freeflight/1.0+git/recipe-sysroot -O2 -pipe -g -feliminate-unused-debug-types -fmacro-prefix-map=/home/jasonc/projects/quadcopter/albatros/build/workspace/sources/freeflight=/usr/src/debug/freeflight/1.0+git-r0 -fdebug-prefix-map=/home/jasonc/projects/quadcopter/albatros/build/workspace/sources/freeflight=/usr/src/debug/freeflight/1.0+git-r0 -fmacro-prefix-map=/home/jasonc/projects/quadcopter/albatros/build/tmp/work/cortexa57-poky-linux/freeflight/1.0+git/freeflight-1.0+git=/usr/src/debug/freeflight/1.0+git-r0 -fdebug-prefix-map=/home/jasonc/projects/quadcopter/albatros/build/tmp/work/cortexa57-poky-linux/freeflight/1.0+git/freeflight-1.0+git=/usr/src/debug/freeflight/1.0+git-r0 -fdebug-prefix-map=/home/jasonc/projects/quadcopter/albatros/build/tmp/work/cortexa57-poky-linux/freeflight/1.0+git/recipe-sysroot= 
-fmacro-prefix-map=/home/jasonc/projects/quadcopter/albatros/build/tmp/work/cortexa57-poky-linux/freeflight/1.0+git/recipe-sysroot= -fdebug-prefix-map=/home/jasonc/projects/quadcopter/albatros/build/tmp/work/cortexa57-poky-linux/freeflight/1.0+git/recipe-sysroot-native= -fvisibility-inlines-hidden -O2 -g -DNDEBUG -std=gnu++20 -o src/flightd/CMakeFiles/flightd.dir/main.cpp.o -c -resource-dir=/usr/lib/clang/17 -- /home/jasonc/projects/quadcopter/albatros/build/workspace/sources/freeflight/src/flightd/main.cpp\n"
[ERROR][2024-03-16 12:09:47] .../vim/lsp/rpc.lua:734 "rpc" "/usr/bin/clangd" "stderr" "I[12:09:47.702] Loaded compilation database from /home/jasonc/projects/quadcopter/albatros/build/tmp/work/cortexa57-poky-linux/freeflight/1.0+git/freeflight-1.0+git/compile_commands.json\n"
[ERROR][2024-03-16 12:09:47] .../vim/lsp/rpc.lua:734 "rpc" "/usr/bin/clangd" "stderr" "I[12:09:47.702] --> window/workDoneProgress/create(0)\n"
[ERROR][2024-03-16 12:09:47] .../vim/lsp/rpc.lua:734 "rpc" "/usr/bin/clangd" "stderr" 'V[12:09:47.702] >>> {"id":0,"jsonrpc":"2.0","method":"window/workDoneProgress/create","params":{"token":"backgroundIndexProgress"}}\n\n'
[ERROR][2024-03-16 12:09:47] .../vim/lsp/rpc.lua:734 "rpc" "/usr/bin/clangd" "stderr" "I[12:09:47.702] Enqueueing 1 commands for indexing\n"
[ERROR][2024-03-16 12:09:47] .../vim/lsp/rpc.lua:734 "rpc" "/usr/bin/clangd" "stderr" "V[12:09:47.702] Driver produced command: cc1 -cc1 -triple aarch64-poky-linux -fsyntax-only -disable-free -clear-ast-before-backend -disable-llvm-verifier -discard-value-names -main-file-name main.cpp -mrelocation-model pic -pic-level 2 -pic-is-pie -mframe-pointer=non-leaf -fmath-errno -ffp-contract=on -fno-rounding-math -mconstructor-aliases -funwind-tables=2 -target-cpu cortex-a57 -target-feature +neon -target-feature +v8a -target-feature +crc -target-abi aapcs -msign-return-address=non-leaf -msign-return-address-key=a_key -mbranch-target-enforce -debug-info-kind=constructor -dwarf-version=5 -debugger-tuning=gdb -fcoverage-compilation-dir=/home/jasonc/projects/quadcopter/albatros/build/tmp/work/cortexa57-poky-linux/freeflight/1.0+git/freeflight-1.0+git -resource-dir /usr/lib/clang/17 -I /home/jasonc/projects/quadcopter/albatros/build/workspace/sources/freeflight/src/flightd -I /home/jasonc/projects/quadcopter/albatros/build/workspace/sources/freeflight/src/freeflight/public -D _FORTIFY_SOURCE=2 -D NDEBUG -isysroot /home/jasonc/projects/quadcopter/albatros/build/tmp/work/cortexa57-poky-linux/freeflight/1.0+git/recipe-sysroot -internal-isystem /usr/lib/clang/17/include -internal-isystem /home/jasonc/projects/quadcopter/albatros/build/tmp/work/cortexa57-poky-linux/freeflight/1.0+git/recipe-sysroot/usr/local/include -internal-externc-isystem /home/jasonc/projects/quadcopter/albatros/build/tmp/work/cortexa57-poky-linux/freeflight/1.0+git/recipe-sysroot/include -internal-externc-isystem /home/jasonc/projects/quadcopter/albatros/build/tmp/work/cortexa57-poky-linux/freeflight/1.0+git/recipe-sysroot/usr/include -fmacro-prefix-map=/home/jasonc/projects/quadcopter/albatros/build/workspace/sources/freeflight=/usr/src/debug/freeflight/1.0+git-r0 
-fmacro-prefix-map=/home/jasonc/projects/quadcopter/albatros/build/tmp/work/cortexa57-poky-linux/freeflight/1.0+git/freeflight-1.0+git=/usr/src/debug/freeflight/1.0+git-r0 -fmacro-prefix-map=/home/jasonc/projects/quadcopter/albatros/build/tmp/work/cortexa57-poky-linux/freeflight/1.0+git/recipe-sysroot= -O2 -Wformat -Wformat-security -Werror=format-security -std=gnu++20 -fdeprecated-macro -fdebug-compilation-dir=/home/jasonc/projects/quadcopter/albatros/build/tmp/work/cortexa57-poky-linux/freeflight/1.0+git/freeflight-1.0+git -fdebug-prefix-map=/home/jasonc/projects/quadcopter/albatros/build/workspace/sources/freeflight=/usr/src/debug/freeflight/1.0+git-r0 -fdebug-prefix-map=/home/jasonc/projects/quadcopter/albatros/build/tmp/work/cortexa57-poky-linux/freeflight/1.0+git/freeflight-1.0+git=/usr/src/debug/freeflight/1.0+git-r0 -fdebug-prefix-map=/home/jasonc/projects/quadcopter/albatros/build/tmp/work/cortexa57-poky-linux/freeflight/1.0+git/recipe-sysroot= -fdebug-prefix-map=/home/jasonc/projects/quadcopter/albatros/build/tmp/work/cortexa57-poky-linux/freeflight/1.0+git/recipe-sysroot-native= -ferror-limit 19 -fvisibility-inlines-hidden -stack-protector 2 -fno-signed-char -fgnuc-version=4.2.1 -fno-implicit-modules -fcxx-exceptions -fexceptions -vectorize-loops -vectorize-slp -no-round-trip-args -target-feature -fmv -faddrsig -D__GCC_HAVE_DWARF2_CFI_ASM=1 -x c++ /home/jasonc/projects/quadcopter/albatros/build/workspace/sources/freeflight/src/flightd/main.cpp\n"
[ERROR][2024-03-16 12:09:47] .../vim/lsp/rpc.lua:734 "rpc" "/usr/bin/clangd" "stderr" 'V[12:09:47.702] Building first preamble for /home/jasonc/projects/quadcopter/albatros/build/workspace/sources/freeflight/src/flightd/main.cpp version 0\nV[12:09:47.702] BackgroundIndex: building version 1 after loading index from disk\nV[12:09:47.702] <<< {"id":0,"jsonrpc":"2.0","result":null}\n\nI[12:09:47.702] <-- reply(0)\nI[12:09:47.702] --> $/progress\nV[12:09:47.702] >>> {"jsonrpc":"2.0","method":"$/progress","params":{"token":"backgroundIndexProgress","value":{"kind":"begin","percentage":0,"title":"indexing"}}}\n\n'
[ERROR][2024-03-16 12:09:47] .../vim/lsp/rpc.lua:734 "rpc" "/usr/bin/clangd" "stderr" 'V[12:09:47.702] BackgroundIndex: serving version 1 (50980 bytes)\nI[12:09:47.702] --> $/progress\nV[12:09:47.703] >>> {"jsonrpc":"2.0","method":"$/progress","params":{"token":"backgroundIndexProgress","value":{"kind":"report","message":"0/1","percentage":0}}}\n\nI[12:09:47.703] --> $/progress\n'
[ERROR][2024-03-16 12:09:47] .../vim/lsp/rpc.lua:734 "rpc" "/usr/bin/clangd" "stderr" 'V[12:09:47.703] >>> {"jsonrpc":"2.0","method":"$/progress","params":{"token":"backgroundIndexProgress","value":{"kind":"report","message":"0/1","percentage":0}}}\n\nI[12:09:47.703] --> $/progress\n'
[ERROR][2024-03-16 12:09:47] .../vim/lsp/rpc.lua:734 "rpc" "/usr/bin/clangd" "stderr" 'V[12:09:47.703] >>> {"jsonrpc":"2.0","method":"$/progress","params":{"token":"backgroundIndexProgress","value":{"kind":"end"}}}\n\n'
[ERROR][2024-03-16 12:09:47] .../vim/lsp/rpc.lua:734 "rpc" "/usr/bin/clangd" "stderr" "I[12:09:47.716] Built preamble of size 529320 for file /home/jasonc/projects/quadcopter/albatros/build/workspace/sources/freeflight/src/flightd/main.cpp version 0 in 0.01 seconds\n"
[ERROR][2024-03-16 12:09:47] .../vim/lsp/rpc.lua:734 "rpc" "/usr/bin/clangd" "stderr" 'I[12:09:47.717] --> workspace/semanticTokens/refresh(1)\nV[12:09:47.717] >>> {"id":1,"jsonrpc":"2.0","method":"workspace/semanticTokens/refresh","params":null}\n\n'
[ERROR][2024-03-16 12:09:47] .../vim/lsp/rpc.lua:734 "rpc" "/usr/bin/clangd" "stderr" 'V[12:09:47.717] <<< {"jsonrpc":"2.0","method":"$/cancelRequest","params":{"id":2}}\n\nI[12:09:47.717] <-- $/cancelRequest\nV[12:09:47.717] indexed preamble AST for /home/jasonc/projects/quadcopter/albatros/build/workspace/sources/freeflight/src/flightd/main.cpp version 0:\n symbol slab: 5 symbols, 5376 bytes\n ref slab: 0 symbols, 0 refs, 128 bytes\n relations slab: 0 relations, 24 bytes\nV[12:09:47.717] <<< {"id":3,"jsonrpc":"2.0","method":"textDocument/semanticTokens/full","params":{"textDocument":{"uri":"file:///home/jasonc/projects/quadcopter/albatros/build/workspace/sources/freeflight/src/flightd/main.cpp"}}}\n\nI[12:09:47.717] <-- textDocument/semanticTokens/full(3)\n'
[ERROR][2024-03-16 12:09:47] .../vim/lsp/rpc.lua:734 "rpc" "/usr/bin/clangd" "stderr" 'V[12:09:47.717] <<< {"id":1,"jsonrpc":"2.0","result":null}\n\nI[12:09:47.717] <-- reply(1)\nV[12:09:47.717] Build dynamic index for header symbols with estimated memory usage of 22004 bytes\n'
[ERROR][2024-03-16 12:09:47] .../vim/lsp/rpc.lua:734 "rpc" "/usr/bin/clangd" "stderr" 'V[12:09:47.724] Trying to fix unresolved name "cout" in scopes: [std::]\n'
[ERROR][2024-03-16 12:09:47] .../vim/lsp/rpc.lua:734 "rpc" "/usr/bin/clangd" "stderr" "V[12:09:47.724] Dex query tree: false\nV[12:09:47.724] Dex query tree: false\n"
[ERROR][2024-03-16 12:09:47] .../vim/lsp/rpc.lua:734 "rpc" "/usr/bin/clangd" "stderr" "E[12:09:47.725] IncludeCleaner: Failed to get an entry for resolved path : No such file or directory\n"
[ERROR][2024-03-16 12:09:47] .../vim/lsp/rpc.lua:734 "rpc" "/usr/bin/clangd" "stderr" "V[12:09:47.726] indexed file AST for /home/jasonc/projects/quadcopter/albatros/build/workspace/sources/freeflight/src/flightd/main.cpp version 0:\n symbol slab: 2 symbols, 4680 bytes\n ref slab: 2 symbols, 2 refs, 4272 bytes\n relations slab: 0 relations, 24 bytes\nV[12:09:47.726] Build dynamic index for main-file symbols with estimated memory usage of 12040 bytes\n"
[ERROR][2024-03-16 12:09:47] .../vim/lsp/rpc.lua:734 "rpc" "/usr/bin/clangd" "stderr" "I[12:09:47.726] --> textDocument/publishDiagnostics\n"
[ERROR][2024-03-16 12:09:47] .../vim/lsp/rpc.lua:734 "rpc" "/usr/bin/clangd" "stderr" "V[12:09:47.726] >>> {\"jsonrpc\":\"2.0\",\"method\":\"textDocument/publishDiagnostics\",\"params\":{\"diagnostics\":[{\"code\":\"pp_file_not_found\",\"message\":\"'iostream' file not found\",\"range\":{\"end\":{\"character\":19,\"line\":24},\"start\":{\"character\":9,\"line\":24}},\"relatedInformation\":[],\"severity\":1,\"source\":\"clang\"},{\"code\":\"undeclared_var_use\",\"message\":\"Use of undeclared identifier 'std'\",\"range\":{\"end\":{\"character\":7,\"line\":30},\"start\":{\"character\":4,\"line\":30}},\"relatedInformation\":[],\"severity\":1,\"source\":\"clang\"},{\"code\":\"misc-unused-alias-decls\",\"codeDescription\":{\"href\":\"https://clang.llvm.org/extra/clang-tidy/checks/misc/unused-alias-decls.html\"},\"message\":\"Namespace alias decl 'ffd' is unused (fix available)\",\"range\":{\"end\":{\"character\":0,\"line\":27},\"start\":{\"character\":0,\"line\":26}},\"relatedInformation\":[],\"severity\":2,\"source\":\"clang-tidy\",\"tags\":[1]}],\"uri\":\"file:///home/jasonc/projects/quadcopter/albatros/build/workspace/sources/freeflight/src/flightd/main.cpp\",\"version\":0}}\n\nI[12:09:47.726] --> reply:textDocument/semanticTokens/full(2) 26 ms, error: Task was cancelled.\nV[12:09:47.726] >>> {\"error\":{\"code\":-32800,\"message\":\"Request cancelled\"},\"id\":2,\"jsonrpc\":\"2.0\"}\n\nV[12:09:47.726] ASTWorker running SemanticHighlights on version 0 of /home/jasonc/projects/quadcopter/albatros/build/workspace/sources/freeflight/src/flightd/main.cpp\nI[12:09:47.726] --> reply:textDocument/semanticTokens/full(3) 8 ms\nV[12:09:47.726] >>> {\"id\":3,\"jsonrpc\":\"2.0\",\"result\":{\"data\":[26,10,3,15,65537,0,6,17,15,131072,2,4,4,3,131075,0,9,4,2,16387,0,12,4,2,16387],\"resultId\":\"1\"}}\n\n"
System information
Output of clangd --version:
clangd version 17.0.6
Features: linux
Platform: x86_64-pc-linux-gnu
Editor/LSP plugin:
Neovim/nvim-lspconfig
Operating system:
Arch Linux
Kernel version: 6.7.9
Duplicate of #1605
|
gharchive/issue
| 2024-03-16T16:10:54
|
2025-04-01T06:38:12.296843
|
{
"authors": [
"HighCommander4",
"jcarrete5"
],
"repo": "clangd/clangd",
"url": "https://github.com/clangd/clangd/issues/1975",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
639588308
|
Include completion fails at the end of file
clangd version: clangd version 11.0.0 (https://github.com/llvm/llvm-project.git 3badd17b6989621b5aa2732800f697dabbda034d)
OS: Windows 10
I think this issue still exists. @HighCommander4
Originally posted by @lh123 in https://github.com/clangd/clangd/issues/38#issuecomment-643115928
You're right, I see the same issue (global completion instead of include completions).
I think what happened when I was testing #38, is that I typed in #include "llvm/Sup manually. When I type the " character, my editor auto-inserts a matching closing quote, so the actual test case I was testing was:
#include "llvm/Sup^"
(note the presence of the closing quote), which works fine.
I can't seem to reproduce this, either on LLVM or on a dummy project.
Could you please share clangd logs?
It reproduces if the file has no code in it and the include brackets are unbalanced and followed by EOF
Interesting I thought I had fixed this one... Well good thing is, this means at least I know the fix :D
Sent out https://reviews.llvm.org/D95419
Works like a charm now on my end, thanks.
|
gharchive/issue
| 2020-06-16T11:26:52
|
2025-04-01T06:38:12.304715
|
{
"authors": [
"HighCommander4",
"kadircet",
"lh123",
"njames93"
],
"repo": "clangd/clangd",
"url": "https://github.com/clangd/clangd/issues/433",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
585869355
|
Clangd formatting issue?
Hi:
I notice that clangd has a formatting feature, but I don't know how to set it up. Are there any docs for this?
best regards
Peiyun Jin
clangd is using clang-format to do formatting, https://clangd.llvm.org/features.html#formatting , you can set .clang-format for style options.
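For example, a minimal .clang-format placed at the project root might look like this (the values are illustrative, not a recommendation for any particular style):

```yaml
# .clang-format, picked up automatically by clang-format/clangd
BasedOnStyle: LLVM
IndentWidth: 4
ColumnLimit: 100
```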
Hi I have installed coc-clangd, have .clang-format file in the project folder, and set
"coc.preferences.formatOnSaveFiletypes": ["cpp"],
"coc.preferences.formatOnSave": true,
But neither :Format nor format on save worked.
Could you tell me how can fix this?
Best
I think the problem is caused by the line "clangd.arguments": ["-Wall", "-Werror", "-std=c++17"] in my coc-settings.json above. After adding this line to my coc-settings.json, the language server stops working for some reason.
{
"coc.preferences.formatOnSaveFiletypes": ["cpp"],
"coc.preferences.formatOnSave": true,
"clangd.arguments": ["-Wall", "-Werror", "-std=c++17"],
"languageserver": {
"python": {
"command": "python",
"args": [
"-mpyls",
"-vv",
"--log-file",
"/tmp/lsp_python.log"
],
"trace.server": "verbose",
"filetypes": [
"python"
],
"settings": {
"pyls": {
"enable": true,
"trace": {
"server": "verbose"
},
"commandPath": "",
"configurationSources": [
"pycodestyle"
],
"plugins": {
"jedi_completion": {
"enabled": true
},
"jedi_hover": {
"enabled": true
},
"jedi_references": {
"enabled": true
},
"jedi_signature_help": {
"enabled": true
},
"jedi_symbols": {
"enabled": true,
"all_scopes": true
},
"mccabe": {
"enabled": true,
"threshold": 15
},
"preload": {
"enabled": true
},
"pycodestyle": {
"enabled": true
},
"pydocstyle": {
"enabled": false,
"match": "(?!test_).*\\.py",
"matchDir": "[^\\.].*"
},
"pyflakes": {
"enabled": true
},
"rope_completion": {
"enabled": true
},
"yapf": {
"enabled": true
}
}
}
}
}
}
}
Look like it's caused by server, @sam-mccall can you look into this?
The reason is that clangd.arguments contains extra flags to pass to clangd, and those are not valid clangd arguments:
$ clangd -Wall -Werror -std=c++17
clangd: Unknown command line argument '-Wall'. Try: 'clangd --help'
clangd: Did you mean '--help'?
clangd: Unknown command line argument '-Werror'. Try: 'clangd --help'
clangd: Did you mean '--color'?
clangd: Unknown command line argument '-std=c++17'. Try: 'clangd --help'
clangd: Did you mean '--log=c++17'?
If you want to set the flags for parsing your code, this is configured using compile_commands.json or compile_flags.txt: https://clangd.llvm.org/installation.html#project-setup
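For a simple project, those flags can go into a compile_flags.txt at the project root, one argument per line (the -I path here is a hypothetical example):

```text
-Wall
-Werror
-std=c++17
-Iinclude
```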
@sam-mccall Hi Sam, Is there a way to enable clang-tidy and -j inside coc-settings.json? Or how can I enable them with vim?
best regards
Sure: "clangd.arguments": ["-j=3", "-clang-tidy-checks=bugprone-*"] - those really are clangd flags.
Similar to clang-format, for finer-grained clang-tidy config you should use the standard .clang-tidy config file in your source tree, clangd should respect it.
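As an illustration, a .clang-tidy at the source-tree root could look like this (the checks listed are just an example selection):

```yaml
# .clang-tidy
Checks: "bugprone-*,modernize-*,-modernize-use-trailing-return-type"
WarningsAsErrors: ""
HeaderFilterRegex: ".*"
```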
@sam-mccall Do I need to enable clang-tidy first, "clangd.arguments": ["-j=3", "-clang-tidy=true","-clang-tidy-checks=bugprone-*"]?
What's clangd --version?
It's on by default as of clangd 9.
It may barely work in clangd 8; it was experimental and off by default.
I'm using fedora 32 beta. Clangd is version 10.
With clangd 10 you shouldn't have to explicitly enable clang-tidy.
Sorry to add to this, but it seems related. I am very confused about what this plugin can do. Should I be able to do formatting? Issuing a coc :Format command doesn't do anything for me. Would clang-format then be used, or clang-tidy with the --fix option (or both)?
|
gharchive/issue
| 2020-03-23T01:36:40
|
2025-04-01T06:38:12.317299
|
{
"authors": [
"chmanie",
"fannheyward",
"sam-mccall",
"stokhos"
],
"repo": "clangd/coc-clangd",
"url": "https://github.com/clangd/coc-clangd/issues/25",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
405125196
|
Ogg Opus Files Don't Show
On macOS Mojave, installed via Homebrew. MP3 files show fine, but the majority of my library, which is in Ogg Opus, just doesn't show in the library viewer.
Oh, I see, tag parsing failed? That's possible, I don't have tons of Opus files. Is there any way you can share a file with tags that don't seem to be working?
sure, try this one
01 Tree Village.ogg.zip
Ah! So... taglib (our tag parsing library) sees the .ogg extension and assumes it's a vorbis file, then fails when it's not. When this happens, we should probably detect the error and then check to see if it's an opus file.
Are there any other file formats that commonly have an .ogg extension that may not be vorbis? :thinking:
Got it. I modified the taglib parser reader plugin so it can detect opus files in an ogg container -- your example file seems to parse fine now! I also made it such that as soon as I can get examples of other formats in ogg containers it should be trivial to add support to them as well.
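The container sniffing described above can be sketched in a few lines. This is a hypothetical Python illustration of the idea, not musikcube's actual C++/taglib plugin code; it relies on the fact that the first packet of an Ogg Opus stream starts with the magic "OpusHead" (RFC 7845), while Ogg Vorbis starts with "\x01vorbis":

```python
def sniff_ogg_codec(data: bytes) -> str:
    """Guess the codec inside an Ogg container from its first page.

    Hypothetical helper for illustration only. The Ogg page header is
    27 bytes ("OggS" capture pattern, version, header type, granule
    position, serial, sequence, checksum, segment count), followed by
    the segment table and then the first packet.
    """
    if data[:4] != b"OggS":
        return "not-ogg"
    nsegs = data[26]              # number of entries in the segment table
    payload = data[27 + nsegs:]   # first packet of the first page
    if payload.startswith(b"OpusHead"):
        return "opus"
    if payload.startswith(b"\x01vorbis"):
        return "vorbis"
    return "unknown"
```

A tag reader that fails to parse an .ogg file as Vorbis could fall back to a check like this before giving up.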
seems to be fixed after re-adding the library on latest head, thanks!
Thanks for confirming the fix!
|
gharchive/issue
| 2019-01-31T08:06:54
|
2025-04-01T06:38:12.321682
|
{
"authors": [
"clangen",
"ibrokemypie"
],
"repo": "clangen/musikcube",
"url": "https://github.com/clangen/musikcube/issues/240",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
897884262
|
Browser starts to lag after playing 3-5 videos when using player.configure to update the source url
Browser:
Firefox 88.0.1
OS:
Pop OS 20.04 LTS
Clappr Version:
latest from http://cdn.clappr.io/latest/clappr.min.js
Steps to reproduce:
Play 3-5 videos one after another.
I was expecting to have smooth playback, but instead my browser started lagging.
Did you try to reproduce this issue at http://cdn.clappr.io/
No
I'm trying to update the video source using the following code:
// [...]
player.configure({
source: 'another-url',
// ...
});
which updates the video, but after playing around 3-5 videos one after another the browser starts to lag.
I also tried to reproduce this issue on multiple devices, the same issue occurred.
NOTE: the source URL is an HLS video encrypted with AES-128
Can't you reproduce the reported behavior or just haven't tried?
Yes I have but not on http://cdn.clappr.io/
Yes, I understand that you have tested it on other devices. The recommendation to test on http://cdn.clappr.io/ (specifically on http://cdn.clappr.io/demo) is so that it becomes a common point that we can also validate on our side.
I was unable to reproduce this issue. Can you generate any evidence of this? (CPU / Memory consumption of the tab that is running Clappr, some video showing the problem visually)
Hi @unique1o1, is the problem still happening? Can you generate any evidence of this? (CPU / Memory consumption of the tab that is running Clappr, some video showing the problem visually)
I'm closing this issue due to inactivity. If needed, please feel free to reopen it.
|
gharchive/issue
| 2021-05-21T09:52:44
|
2025-04-01T06:38:12.346644
|
{
"authors": [
"joaopaulovieira",
"leticiafernandes",
"pedrochamberlain",
"unique1o1"
],
"repo": "clappr/clappr",
"url": "https://github.com/clappr/clappr/issues/2013",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
113204453
|
no video on galaxy s5
There is no video on the Galaxy S5 when a live video or a replay is playing, only the audio. The video is displayed only after you click the progress bar.
are you still facing this issue @polaris-zx? also, did you test with different stream sources? is it only with hls?
|
gharchive/issue
| 2015-10-25T03:38:44
|
2025-04-01T06:38:12.348139
|
{
"authors": [
"leandromoreira",
"polaris-zx"
],
"repo": "clappr/clappr",
"url": "https://github.com/clappr/clappr/issues/609",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
363117128
|
Reintroduce differentiation between application environments
Instead of only relying on one environment, this commit changes run
wrapper and the docker-compose stack to differentiate between
development/production and being able to run them both in parallel.
In other words: Reintroduce the significance of $APPLICATION_ENV
This commit fixes:
#36 - Jenkins image build takes twice the build time
#33 - Reintroduce distinction between development and production environment
#32 - Documentation is gone
#31 - Jenkins jobs constantly throwing errors
|
gharchive/pull-request
| 2018-09-24T12:05:37
|
2025-04-01T06:38:12.350225
|
{
"authors": [
"fuhbar"
],
"repo": "claranet/spryker-demoshop",
"url": "https://github.com/claranet/spryker-demoshop/pull/35",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1624127825
|
jpb/220-enable-some-pylint-error-checking-in-ci
Adding pylint error checking into the CI part of the pre-commit hooks. The .pre-commit-config.yaml now has two entries for pylint: one which runs error checks only and is compatible with CI, and one that runs as 'repo: local' and uses the .pylintrc, which will pick up all the usual style warnings etc. but is skipped by the CI.
I have also run pylint with the errors-only flag over the full repo and fixed all issues that were picked up, or added pylint ignores for the ones that appeared to be false positives.
Ruff: I have also added 'ruff' to the pre-commit hooks. 'ruff' is a Rust-based linter that is super fast (but doesn't yet have quite as many rules as pylint). I fixed a number of issues that the ruff check picked up. Ruff might eventually be able to replace pylint, but for now we can keep it in as an extra linter because it takes very little time to run.
@groadabike - there is nothing significant in this PR - mostly just fixing line-too-long type errors and issues that pylint considers 'errors', i.e. Exxx codes. Could you please give it a quick review (I don't want to get in the habit of overriding the need for reviews :-). Once this is integrated, pylint will pass with the '-E' (errors only) flag, which can then be safely included in the pre-commit CI path.
Going to merge this PR if
|
gharchive/pull-request
| 2023-03-14T19:01:55
|
2025-04-01T06:38:12.362285
|
{
"authors": [
"jonbarker68"
],
"repo": "claritychallenge/clarity",
"url": "https://github.com/claritychallenge/clarity/pull/221",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1159054561
|
Understanding IQ data generated by Raw data package with Clarius L7HD052110A0780
Hi, clarius dev team,
Following the advice of your support team, I am forwarding this issue from my support request to Clarius.
I am currently acquiring IQ data with the Raw data package of Clarius L7HD052110A0780.
From here, I am trying to synthesize the RF data back from the IQ data, basically performing modulation up to RF frequencies.
Since it is not clear to me how the IQ data is calculated in the first place, I am having difficulty synthesizing the RF.
Could you let me know about the algorithmic steps, which are followed, to go from RF to IQ?
My main question is how to choose the right carrier frequency "fc". Is this equivalent to the transmit frequency in the .yml file?
I attach my current script in Python, in case this can be valuable for this GitHub project:
# iq_line, fs, toff and fc (carrier frequency) are defined earlier in my script
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.fft import fft, fftfreq
## from here we resynthesize rf data
UPSAMPLING = 4
fs_rf = fs*UPSAMPLING
# interpolation
rf_line_int = np.zeros(len(iq_line)*UPSAMPLING, dtype=complex)
rf_line_int[0::UPSAMPLING] = UPSAMPLING*iq_line[:]
tcoords_rf = (np.arange(rf_line_int.shape[0]) + toff*UPSAMPLING)/fs_rf
SHIFT_FREQ = 3E6  # to filter data without distortion
rf_line_int_ = rf_line_int * np.exp(2*np.pi*SHIFT_FREQ*1j*tcoords_rf)
# filtering
b, a = butter(8, 1/UPSAMPLING)  # filter interpolation replica
rf_line_filt = filtfilt(b, a, rf_line_int_)
# carrier modulation
rf_line_analytic = rf_line_filt * np.exp(2*np.pi*(-SHIFT_FREQ + fs/2 + fc)*1j*tcoords_rf)
rf_line = np.real(rf_line_analytic)
rf_line_fft = fft(rf_line)
rf_line_freq = fftfreq(len(rf_line), 1/fs_rf)
Attached also an example of the spectrum of my current IQ lines and the corresponding RF synthesized with the code above.
The bandwidth of the probe after modulation (4-13 MHz) is apparently correct.
Thank you and best wishes,
Sergio Sanabria
a few tips:
you'll want to look at the sampling rate in the yaml file to determine any downsampling from 60 MHz
demodulation uses a sliding frequency as a function of depth; see shallowfreq, deepfreq, and filterdepth, the parameters that are applied linearly
Hi, @clariusk, thanks for the quick answer.
I have looked into the demodulation parameters you mention.
It seems deepfreq is always 7.5 MHz, and shallowfreq: 11 MHz, with a single filterdepth of 59.6999 mm
My understanding is that at 0 mm depth the carrier is 11 MHz, decreasing linearly down to 7.5 MHz at 59.6999 mm, and staying constant at 7.5 MHz for larger depths. Is this correct? .yml extract below.
Sergio
receive:
- {shallowfreq: 11 MHz, deepfreq: 7.5 MHz, filterdepth: 59.699999999999996 mm}
- {shallowfreq: 11 MHz, deepfreq: 7.5 MHz, filterdepth: 59.699999999999996 mm}
- {shallowfreq: 11 MHz, deepfreq: 7.5 MHz, filterdepth: 59.699999999999996 mm}
- {shallowfreq: 11 MHz, deepfreq: 7.5 MHz, filterdepth: 59.699999999999996 mm}
- {shallowfreq: 11 MHz, deepfreq: 7.5 MHz, filterdepth: 59.699999999999996 mm}
- {shallowfreq: 11 MHz, deepfreq: 7.5 MHz, filterdepth: 59.699999999999996 mm}
- {shallowfreq: 11 MHz, deepfreq: 7.5 MHz, filterdepth: 59.699999999999996 mm}
- {shallowfreq: 11 MHz, deepfreq: 7.5 MHz, filterdepth: 59.699999999999996 mm}
yes, that's exactly correct
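To make the confirmed scheme concrete, here is a small sketch of the sliding demodulation carrier as a function of depth. This is an illustration only: the function and variable names are mine, not part of the Clarius API, and the 59.7 mm / 11 MHz / 7.5 MHz values come from the .yml extract above.

```python
import numpy as np

def demod_freq(depth_mm, shallow_hz=11e6, deep_hz=7.5e6, filter_depth_mm=59.7):
    """Carrier frequency vs. depth: linear ramp from shallow_hz at 0 mm
    down to deep_hz at filter_depth_mm, then constant for larger depths."""
    d = np.minimum(np.asarray(depth_mm, dtype=float), filter_depth_mm)
    return shallow_hz + (deep_hz - shallow_hz) * d / filter_depth_mm

depths = np.array([0.0, 29.85, 59.7, 80.0])  # mm
# shallow end starts at 11 MHz, ramps down, and clamps at 7.5 MHz past filterdepth
print(demod_freq(depths) / 1e6)
```

A depth-dependent carrier like this could then replace the fixed fc when remodulating each depth sample in the resynthesis script.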
|
gharchive/issue
| 2022-03-03T23:21:43
|
2025-04-01T06:38:12.370696
|
{
"authors": [
"clariusk",
"sergiojsanabria"
],
"repo": "clariusdev/raw",
"url": "https://github.com/clariusdev/raw/issues/4",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1880817658
|
Add CNCF Conformance 1.26 1.27 1.28
close #356
@prometherion should we merge before related PRs are merged in k8s-conformance? All required tests have passed!
|
gharchive/pull-request
| 2023-09-04T20:10:39
|
2025-04-01T06:38:12.409982
|
{
"authors": [
"bsctl"
],
"repo": "clastix/kamaji",
"url": "https://github.com/clastix/kamaji/pull/371",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
757397980
|
Improve auto-scrolling for small containers / large drag sources
Currently, if you have a small scroll container and a large drag source, it's nearly impossible to get fine-grained auto-scrolling working properly with the existing auto-scrolling logic.
The getScrollDirectionAndSpeed helper will need to be updated to take into account the size of the dragged item relative to the scroll container's actual size.
Hey, first of all, thank you for your job!
This issue is related to this behaviour, right? Is there any way to improve this?
Hey, first of all, thank you for your job!
This issue is related to this behaviour, right? Is there any way to improve this?
@horprogs that's one of the manifestations of the issue, indeed.
On how this could be improved: Generally I think it's a bit tricky to find a one-size-fits-all strategy for auto-scrolling. I think in the future this should be a bit more extensible, and have either a pointer coordinates based strategy or a strategy that is based on the bounding rect of the dragged element
@clauderic Looking the grid example, I wonder if the auto-scrolling is too aggressive. Right now it seems that the page auto-scrolls when the element is brought down past the halfway point of the page. A simple solution might be to only scroll if the element that is being dragged is nearing the edge of the viewport. Any thoughts?
Having similar issues but with on drag instantly scrolling to the top in a scrollable container.
Any workarounds?
I don't know if this is related to this issue but when you try to scroll with dragged element without DragOverlay element position won't be updated for the scroll duration. You can see it here: https://5fc05e08a4a65d0021ae0bf2-vfebfgjygq.chromatic.com/?path=/docs/presets-sortable-grid--without-drag-overlay
Drag element and try to scroll the page with it. Result: mouse position and element position are different
I'm running into this issue as well - has anyone found an interim workaround?
I'm working on a fix to these issues here: #140
I am using dnd-kit and a sortable context with an array. In my case auto-scrolling is not working in a list while an item is being dragged.
|
gharchive/issue
| 2020-12-04T20:53:20
|
2025-04-01T06:38:12.417763
|
{
"authors": [
"AlfainCoder",
"Simba14",
"Sorgrum",
"clauderic",
"horprogs",
"ohayojp",
"willadamskeane"
],
"repo": "clauderic/dnd-kit",
"url": "https://github.com/clauderic/dnd-kit/issues/23",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
241949792
|
Unexpected token when import css
I followed the guideline from the readme page and added the following line in my js file (It's a react component)
import "react-infinite-calendar/styles.css";
But I encounter this error when running gulp:
events.js:160
throw er; // Unhandled 'error' event
^
SyntaxError: Unexpected token
Does anyone know how to fix it? I'm quite new to React.
You need to use the appropriate loader.
Assuming you're using webpack, check out https://github.com/webpack-contrib/css-loader
|
gharchive/issue
| 2017-07-11T07:13:47
|
2025-04-01T06:38:12.420607
|
{
"authors": [
"clauderic",
"tcm2029"
],
"repo": "clauderic/react-infinite-calendar",
"url": "https://github.com/clauderic/react-infinite-calendar/issues/130",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
63862890
|
Masteries and Runes possible bug
I seem to have a problem retrieving the masteries and runes after the last update.
I tried all the "LolApi.Summoner..." functions and for some reason getMasteries() and getRunes() does not work for me. They return undefined as result and null as error.
Im using the 'eune' server.
I'll check it out!
fixed it in the latest version :)
Appreciate it! Thank you
|
gharchive/issue
| 2015-03-24T00:10:42
|
2025-04-01T06:38:12.423420
|
{
"authors": [
"claudiowilson",
"dukevomv"
],
"repo": "claudiowilson/LeagueJS",
"url": "https://github.com/claudiowilson/LeagueJS/issues/32",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
264049982
|
Optimized ssr
Reduces the file size of the SSR build. Fixes a bug where the hot-reload server would sometimes serve a blank page. Improves speed. Combines both package.json files. Fixes lazy loading of navigation without throwing hydration errors. Now not all modules will be loaded at once.
What kind of change does this PR introduce? (check at least one)
[x] Bugfix
[x] Feature
[ ] Code style update
[ ] Refactor
[ ] Build-related changes
[ ] Other, please describe:
Does this PR introduce a breaking change? (check one)
[x] Yes
[ ] No
If yes, please describe the impact and migration path for existing applications:
The PR fulfills these requirements:
[x] It's submitted to the dev branch and not the master branch
[ ] When resolving a specific issue, it's referenced in the PR's title (e.g. fix: #xxx[,#xxx], where "xxx" is the issue number)
[ ] It's been tested with all Quasar themes
[ ] It's been tested on a Cordova (iOS, Android) app
[ ] It's been tested on a Electron app
[ ] Any necessary documentation has been added or updated in the docs (for faster update click on "Suggest an edit on GitHub" at bottom of page) or explained in the PR's description.
If adding a new feature, the PR's description includes:
[x] A convincing reason for adding this feature (to avoid wasting your time, it's best to open a suggestion issue first and wait for approval before working on it)
This branch has the same issue as my latest pull request from hot-reload-ssr. The issue of mocking the window object and vue router history mode. Please see the other pull request for details.
|
gharchive/pull-request
| 2017-10-09T22:58:48
|
2025-04-01T06:38:12.429797
|
{
"authors": [
"codingfriend1"
],
"repo": "claustres/quasar-templates",
"url": "https://github.com/claustres/quasar-templates/pull/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1275454151
|
🛑 livevault is down
In 934816d, livevault (https://livevault.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: livevault is back up in 0eb1320.
|
gharchive/issue
| 2022-06-17T20:51:47
|
2025-04-01T06:38:12.442335
|
{
"authors": [
"claytonplong"
],
"repo": "claytonplong/backup-uptime",
"url": "https://github.com/claytonplong/backup-uptime/issues/3130",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1307013842
|
🛑 livevault is down
In b3fb421, livevault (https://livevault.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: livevault is back up in c724e4d.
|
gharchive/issue
| 2022-07-17T07:29:32
|
2025-04-01T06:38:12.444756
|
{
"authors": [
"claytonplong"
],
"repo": "claytonplong/backup-uptime",
"url": "https://github.com/claytonplong/backup-uptime/issues/3657",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1385176983
|
🛑 livevault is down
In d5e28c4, livevault (https://livevault.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: livevault is back up in 84f7cd3.
|
gharchive/issue
| 2022-09-25T21:58:15
|
2025-04-01T06:38:12.447110
|
{
"authors": [
"claytonplong"
],
"repo": "claytonplong/backup-uptime",
"url": "https://github.com/claytonplong/backup-uptime/issues/4766",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1444601243
|
🛑 livevault is down
In 651792d, livevault (https://livevault.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: livevault is back up in dc0ba81.
|
gharchive/issue
| 2022-11-10T21:39:14
|
2025-04-01T06:38:12.449650
|
{
"authors": [
"claytonplong"
],
"repo": "claytonplong/backup-uptime",
"url": "https://github.com/claytonplong/backup-uptime/issues/5329",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2339015002
|
change abort() to AuthError
Use AuthError instead of abort() for invalid API keys, to reduce the alert noise from wrong keys
Can you fix CI and format before merging
|
gharchive/pull-request
| 2024-06-06T19:31:22
|
2025-04-01T06:38:12.481516
|
{
"authors": [
"axl1313",
"kat-wicks"
],
"repo": "cleanlab/cleanlab-studio",
"url": "https://github.com/cleanlab/cleanlab-studio/pull/235",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
221015610
|
proxy: Restrict the socket and parent directory modes
We don't need those sockets to be readable/writable by the whole world. Only
root is enough.
Signed-off-by: Damien Lespiau damien.lespiau@intel.com
Coverage remained the same at 69.828% when pulling 8c67a3135a1b42d0241a681c6c9fd88a70d7e214 on dlespiau:20170411-socket-perms into 653860d1e034d12963f2abf06e1f3289f7ed1348 on clearcontainers:master.
lgtm
|
gharchive/pull-request
| 2017-04-11T16:54:14
|
2025-04-01T06:38:12.484259
|
{
"authors": [
"coveralls",
"dlespiau",
"jodh-intel"
],
"repo": "clearcontainers/proxy",
"url": "https://github.com/clearcontainers/proxy/pull/29",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1085471932
|
oop 0.0.3
Type: Missing
Summary:
oop 0.0.3
Details:
NPM license field indicates NONE
No project link
Way Back Machine had no record
Resolution:
NONE
Affected definitions:
oop 0.0.3
@capfei - I think this might be the repo/license - https://github.com/felixge/node-oop/blob/master/LICENSE. Thoughts on updating this from NONE to MIT?
@ariel11 Did you find that link somewhere in the package? I didn't see it.
@capfei - I don't see the repo link in the package specifically; however, both are by "felixge" and there's this issue where someone asked about a license and "felixge" confirmed MIT. I think we can take that as evidence this package is MIT. Thoughts?
|
gharchive/pull-request
| 2021-12-21T06:06:40
|
2025-04-01T06:38:12.514946
|
{
"authors": [
"LandOfBliss",
"ariel11",
"capfei"
],
"repo": "clearlydefined/curated-data",
"url": "https://github.com/clearlydefined/curated-data/pull/16571",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
204129387
|
Changing User-Agent
At first: Really nice project!
I have tried to change the User-Agent via:
Map<String, String> params = new HashMap<>();
params.put("User-Agent", "Mozilla/5.0 (Windows NT 6.0; rv:2.0.1) Gecko/20100101 Firefox/4.0.1");
System.out.println(net.dongliu.requests.Requests.get("http://localhost:2343/init.aspx").headers(params).send().readToText());
But it doesn't seem to work: The User-Agent is Requests/4.0, Java 1.8.0
So is it possible to change the agent?
This code does change the user-agent for me. Which version are you using?
Or you can try
Requests.get(url).userAgent("Mozilla/5.0 (Windows NT 6.0; rv:2.0.1) Gecko/20100101 Firefox/4.0.1")
Thanks for replying!
I am using the latest version from the central maven repo. --> 4.7.1
I have tried your solution, but it doesn't seem to work! (I have used www.whoishostingthis.com for getting the user-agent)
System.out.println(net.dongliu.requests.Requests.get("http://www.whoishostingthis.com/tools/user-agent").userAgent("Mozilla/5.0 (Windows NT 6.0; rv:2.0.1) Gecko/20100101 Firefox/4.0.1").send().readToText());
Does this work for you ?
The site redirects http://www.whoishostingthis.com/tools/user-agent to http://www.whoishostingthis.com/tools/user-agent/; it seems Requests does not use the specified user-agent when sending the redirected HTTP request. I will fix this later.
For now, using Requests.get("http://www.whoishostingthis.com/tools/user-agent/") you can get the expected result.
Hey thanks, I will test it when the maven repo is up-to-date!
Now you can try version 4.7.2; set the user agent with .userAgent("Mozilla/5.0 (Windows NT 6.0; rv:2.0.1) Gecko/20100101 Firefox/4.0.1")
|
gharchive/issue
| 2017-01-30T21:06:17
|
2025-04-01T06:38:12.520608
|
{
"authors": [
"MarcDee91",
"clearthesky"
],
"repo": "clearthesky/requests",
"url": "https://github.com/clearthesky/requests/issues/10",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2151576705
|
fix(clerk-js): Use a fixed border radius value for OrganizationAvatar
Description
OrganizationAvatar should maintain its border radius, even if the user passes a value in the borderRadius variable.
Checklist
[ ] npm test runs as expected.
[ ] npm run build runs as expected.
[ ] (If applicable) JSDoc comments have been added or updated for any package exports
[ ] (If applicable) Documentation has been updated
Type of change
[ ] 🐛 Bug fix
[ ] 🌟 New feature
[ ] 🔨 Breaking change
[ ] 📖 Refactoring / dependency upgrade / documentation
[ ] other:
!preview
@desiprisg yes, i'm sure!
|
gharchive/pull-request
| 2024-02-23T18:15:50
|
2025-04-01T06:38:12.569865
|
{
"authors": [
"anagstef"
],
"repo": "clerk/javascript",
"url": "https://github.com/clerk/javascript/pull/2853",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
561021832
|
Fix string_keys to work when it's the only option specified
If string_keys was set without any other options, it wouldn't work since "name" variable is initially a symbol.
Fix #24.
Thank you
|
gharchive/pull-request
| 2020-02-06T13:44:12
|
2025-04-01T06:38:12.661707
|
{
"authors": [
"fernandomm",
"stepozer"
],
"repo": "cloocher/xmlhasher",
"url": "https://github.com/cloocher/xmlhasher/pull/25",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2068582959
|
Use parseMarkdownFile instead of parseMarkdownString
Fixes #276
Can you resolve the conflicts?
Can you resolve the conflicts?
Done! I hadn't branched from the latest main, sorry :grimacing:
Thanks!
|
gharchive/pull-request
| 2024-01-06T12:19:54
|
2025-04-01T06:38:12.663121
|
{
"authors": [
"andremartinssw",
"bourdakos1"
],
"repo": "cloud-annotations/docusaurus-openapi",
"url": "https://github.com/cloud-annotations/docusaurus-openapi/pull/277",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
484979412
|
conversion failed
After training is finished, I got "conversion failed".
I got the same problem.
I still get this error after upgrading to the latest version.
|
gharchive/issue
| 2019-08-25T21:40:42
|
2025-04-01T06:38:12.664121
|
{
"authors": [
"imokya",
"vyphan009"
],
"repo": "cloud-annotations/training",
"url": "https://github.com/cloud-annotations/training/issues/112",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1113777254
|
Return info of the KeyPair Create/List/Get API are not clear
Issue: the information returned by the KeyPair Create/List/Get API calls is inconsistent
e.g., some drivers provide the public key while others do not
cf) KeyPairInfo Spec
type KeyPairInfo struct {
IId IID // {NameId, SystemId}
Fingerprint string
PublicKey string
PrivateKey string
VMUserID string
KeyValueList []KeyValue
}
cf) Example of the KeyPair List/Get return info (AWS driver)
{
"keypair" : [
{
"IId" : {
"NameId" : "keypair-01",
"SystemId" : "keypair-0-c7nqod2ba5o2et11hlm0"
},
"Fingerprint" : "3f:79:34:28:87:09:44:ee:b0:41:e4:e4:6d:e7:5f:62:b9:d5:87:67",
"PublicKey" : "",
"PrivateKey" : "",
"VMUserID" : "",
"KeyValueList" : []
}
]
}
[Information currently returned by each driver's List API]
empty: drivers that have not been tested

| # | Driver | IId | PublicKey | PrivateKey | Fingerprint | VMUserID | KeyValueList |
| -- | -- | -- | -- | -- | -- | -- | -- |
| 1 | AWS | O | X | X | O | X | X |
| 2 | Azure | O | O | X | X | X | X |
| 3 | GCP | O | O | O | X | X | X |
| 4 | Alibaba | O | X | X | O | X | CreationTime |
| 5 | Tencent | O | X | X | X | X | KeyId |
| 6 | IBM | | | | | | |
| 7 | OpenStack | O | O | X | O | X | X |
| 8 | Cloudit | O | O | O | X | X | X |
| 9 | NCP | | | | | | |
| 10 | KT | | | | | | |
| 11 | NHN | | | | | | |
[Current status and plan]
Status: unlike the initial API design, nothing other than the private key at key-creation time appears to be actually used so far
Question: when is the Fingerprint used?
Plan: apply the minimal-information approach, operate it for a period, then consider changing the KeyPairInfo struct
Plan: since some drivers use the private key etc. on their own, enforce per-API consistency of the returned information uniformly at the Spider server level
[Proposed information returned per API]
Proposed information returned by the Create API:

| IId | PublicKey | PrivateKey | Fingerprint | VMUserID | KeyValueList |
| -- | -- | -- | -- | -- | -- |
| O | X | O | X | cb-user | driver's discretion |

Proposed information returned by the List/Get APIs:

| IId | PublicKey | PrivateKey | Fingerprint | VMUserID | KeyValueList |
| -- | -- | -- | -- | -- | -- |
| O | X | X | X | cb-user | driver's discretion |

The current KeyPairInfo struct will be kept as-is for the time being.
@hyokyungk @inno-cloudbarista @dev4unet @innodreamer @seokho-son @jihoon-seo @sykim-etri
Please review the [Current status and plan] and [Proposed information returned per API] sections of this issue, and leave a comment if you have additional considerations or opinions (until around Friday, Jan 28), e.g.:
When is the Fingerprint used? Is there any chance it will be used?
Is this level of minimal information sufficient for each framework?
etc.
[Fingerprint]
https://docs.bmc.com/docs/display/itda27/About+the+SSH+host+key+fingerprint
https://superuser.com/questions/421997/what-is-a-ssh-key-fingerprint-and-how-is-it-generated
https://superuser.com/questions/1377132/get-the-fingerprint-of-an-existing-ssh-public-key
https://arsviator.blogspot.com/2015/04/ssh-ssh-key.html
https://blueyikim.tistory.com/1792
The fingerprint is based on the host's public key, usually based on the /etc/ssh/ssh_host_rsa_key.pub file. Generally it's for easy identification/verification of the host you are connecting to.
If the fingerprint changes, the machine you are connecting to has changed their public key. This may not be a bad thing (happens from re-installing ssh), but it could also indicate that you are connecting to a different machine at the same domain/IP (happens when you are connecting through something like a load balancer) or that you are being targeted with a man-in-the-middle attack, where the attacker is somehow intercepting/rerouting your ssh connection to connect to a different host which could be snooping your username/password.
Bottom line: if you get warned of a changed fingerprint, be cautious and double check that you're actually connecting to the correct host over a secure connection. Though most of the time this is harmless, it can be an indication of a potential issue.
When used for CB development/testing, the Fingerprint may not matter.
If prod is also considered, countermeasures against things like rogue VMs may be needed later,
so it seems better to at least store the Fingerprint.
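As a side note to the explanation above, the SHA256 fingerprint shown by tools like ssh-keygen -lf is just a hash of the key's base64-encoded public blob. A minimal sketch (the key material below is synthetic, for illustration only):

```python
import base64
import hashlib

def ssh_fingerprint(pubkey_line):
    """OpenSSH-style SHA256 fingerprint of a public key line
    ("ssh-rsa AAAA... comment"), in the format printed by `ssh-keygen -lf`:
    base64(SHA256(blob)) with the trailing '=' padding stripped."""
    blob = base64.b64decode(pubkey_line.split()[1])
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).rstrip(b"=").decode()

# Synthetic blob for demonstration; a real line comes from e.g. ~/.ssh/id_rsa.pub
demo_blob = base64.b64encode(b"\x00\x00\x00\x07ssh-rsa" + b"\x00" * 16).decode()
print(ssh_fingerprint("ssh-rsa " + demo_blob + " demo@host"))
```

Storing this string alongside the key pair, as suggested, would let a client later verify that the key it receives is the one that was created.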
Regarding the proposed per-API information you mentioned:
Azure, IBM VPC, and OpenStack can apply this, since those CSPs provide SSH key functionality.
Cloudit can apply it after the previous issue (#548) is resolved.
@inno-cloudbarista
Okay, understood.
On AWS/Alibaba, the fingerprint is used to display the key pair list in the console; it could be used for ID or key verification when doing your own key management, but it is not currently used anywhere in cb-spider.
As for PublicKey/PrivateKey, the CSP query APIs return no such information.
The PublicKey used to be used to create the cb-user at VM creation, but this was changed to a method that does not use the public key,
so currently only GCP, which manages its own keys, uses the PublicKey at VM creation.
@dev4unet
Okay, thank you.
I will incorporate this and tidy it up.
|
gharchive/issue
| 2022-01-25T11:33:14
|
2025-04-01T06:38:12.681355
|
{
"authors": [
"dev4unet",
"inno-cloudbarista",
"jihoon-seo",
"powerkimhub"
],
"repo": "cloud-barista/cb-spider",
"url": "https://github.com/cloud-barista/cb-spider/issues/560",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1163848913
|
Adding BOM Catalog
BOM Categories
Create New Cluster
Use Existing Cluster
Bom Catalog UI
Implemented MVP1
Displays the contents category-wise
Within the categories, grouping is done according to the cloud provider.
/lgtm
|
gharchive/pull-request
| 2022-03-09T11:59:22
|
2025-04-01T06:38:12.687404
|
{
"authors": [
"csantanapr",
"phemankita"
],
"repo": "cloud-native-toolkit/automation-modules",
"url": "https://github.com/cloud-native-toolkit/automation-modules/pull/279",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
590913237
|
Readme variable prometheus_binaries_local_dir
What did you do?
I ran a playbook using this role with the variable "prometheus_binaries_local_dir" set, but the playbook could not complete. This needs to be done since my environment is located on an air-gapped network.
Did you expect to see some different?
I expect the playbook complete to install prometheus to the remote host
Environment
Role version:
Insert release version/galaxy tag or Git SHA here
Ansible version information:
ansible 2.9.2
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/home/deploy/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, Aug 2 2016, 04:20:16) [GCC 4.8.5 20150623 (Red Hat 4.8.5-4)]
Variables:
---
- name : Deploy prometheus
hosts: ams
roles:
- cloudalchemy.prometheus
vars:
prometheus_version: 2.17.0
prometheus_binary_local_dir: /data/monitoring/prometheus/
prometheus_db_dir: /apps/prometheus/database
prometheus_targets:
node:
- targets:
- localhost:9100
labels:
env: ams
Ansible playbook execution Logs:
PLAY [Deploy prometheus] **********************************************************************************
TASK [Gathering Facts] ************************************************************************************
ok: [camelia-ams]
TASK [cloudalchemy.prometheus : Gather variables for each operating system] *******************************
ok: [camelia-ams] => (item=/etc/ansible/roles/cloudalchemy.prometheus/vars/redhat.yml)
TASK [cloudalchemy.prometheus : Assert usage of systemd as an init system] ********************************
ok: [camelia-ams] => {
"changed": false,
"msg": "All assertions passed"
}
TASK [cloudalchemy.prometheus : Get systemd version] ******************************************************
ok: [camelia-ams]
TASK [cloudalchemy.prometheus : Set systemd version fact] *************************************************
ok: [camelia-ams]
TASK [cloudalchemy.prometheus : Assert no duplicate config flags] *****************************************
ok: [camelia-ams] => {
"changed": false,
"msg": "All assertions passed"
}
TASK [cloudalchemy.prometheus : Assert external_labels aren't configured twice] ***************************
ok: [camelia-ams] => {
"changed": false,
"msg": "All assertions passed"
}
TASK [cloudalchemy.prometheus : Set prometheus external metrics path] *************************************
ok: [camelia-ams]
TASK [cloudalchemy.prometheus : Fail when prometheus_config_flags_extra duplicates parameters set by other variables] ***
skipping: [camelia-ams] => (item=storage.tsdb.retention)
skipping: [camelia-ams] => (item=storage.tsdb.path)
skipping: [camelia-ams] => (item=storage.local.retention)
skipping: [camelia-ams] => (item=storage.local.path)
skipping: [camelia-ams] => (item=config.file)
skipping: [camelia-ams] => (item=web.listen-address)
skipping: [camelia-ams] => (item=web.external-url)
TASK [cloudalchemy.prometheus : Get all file_sd files from scrape_configs] ********************************
ok: [camelia-ams]
TASK [cloudalchemy.prometheus : Fail when file_sd targets are not defined in scrape_configs] **************
skipping: [camelia-ams] => (item={'value': [{u'labels': {u'env': u'ams'}, u'targets': [u'localhost:9100']}], 'key': u'node'})
TASK [cloudalchemy.prometheus : Alert when prometheus_alertmanager_config is empty, but prometheus_alert_rules is specified] ***
ok: [camelia-ams] => {
"msg": "No alertmanager configuration was specified. If you want your alerts to be sent make sure to specify a prometheus_alertmanager_config in defaults/main.yml.\n"
}
TASK [cloudalchemy.prometheus : Get latest release] *******************************************************
skipping: [camelia-ams]
TASK [cloudalchemy.prometheus : Set prometheus version to {{ _latest_release.json.tag_name[1:] }}] ********
skipping: [camelia-ams]
TASK [cloudalchemy.prometheus : Get checksum list] ********************************************************
fatal: [camelia-ams]: FAILED! => {"msg": "An unhandled exception occurred while running the lookup plugin 'url'. Error was a <class 'ansible.errors.AnsibleError'>, original message: Failed lookup url for https://github.com/prometheus/prometheus/releases/download/v2.17.0/sha256sums.txt : <urlopen error timed out>"}
NO MORE HOSTS LEFT ****************************************************************************************
Anything else we need to know?:
I realized that in the Readme file you put the variable as "prometheus_binaries_local_dir", while in defaults/main.yml the correct name is prometheus_binary_local_dir. Once I updated my playbook variable accordingly, it completed successfully. Does the Readme need an update?
Yes, readme needs an update, the correct variable name is in defaults/main.yml
Fixed with #280
|
gharchive/issue
| 2020-03-31T08:43:12
|
2025-04-01T06:38:12.701164
|
{
"authors": [
"asatblurbs",
"paulfantom"
],
"repo": "cloudalchemy/ansible-prometheus",
"url": "https://github.com/cloudalchemy/ansible-prometheus/issues/279",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2572384633
|
Uniform log sources
WHAT is this pull request doing?
Create all Log instances from LavinMQ::Log. This prefixes all log sources with "lmq", which in turn makes it easy to see whether a log line comes from a lib (e.g. amqp-client) or from LavinMQ itself.
HOW can this pull request be tested?
Run lavin with --debug and look at log output.
Should probably require "./lavinmq/logger" in each file that needs it, rather than at the top level, if we want to compile any part independently. Pretty sure vim will complain otherwise, as it won't be able to compile it to check formatting, linting etc.
|
gharchive/pull-request
| 2024-10-08T07:49:36
|
2025-04-01T06:38:12.703278
|
{
"authors": [
"carlhoerberg",
"spuun"
],
"repo": "cloudamqp/lavinmq",
"url": "https://github.com/cloudamqp/lavinmq/pull/800",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
201286543
|
Add ability to have multiple PeriodicReplicationServices
This PR fixes the following issues:
The abstract class PeriodicReplicationService currently stores information in SharedPreferences under fixed key names. This means that if an app implements multiple concrete PeriodicReplicationServices they will interfere with each other. To fix this, we now store the values in SharedPreferences using keys prefixed with the name of the concrete class implementing the PeriodicReplicationService.
After a reboot, the elapsed time since boot at which the next replication should occur isn't always updated to reflect the fact that the device rebooted. This could lead to replications not being invoked at the expected times.
Rather than storing the time at which the next replication should be triggered, we now store the time at which the last replication occurred. This feels more logical and means we don't have to recalculate the value stored in SharedPreferences after bind/unbind and the only time the value will need adjusting is after a reboot.
Testing
The existing tests have been updated in accordance with these changes. They now verify that the keys used in SharedPreferences are prefixed with the name of the concrete implementation of PeriodicReplicationService and verify that the elapsed time since boot is always updated after the service has been notified of a reboot.
@tomblench I've tried to address your comments in 8bfe34e. Hopefully it makes the tests more easily readable.
|
gharchive/pull-request
| 2017-01-17T13:39:12
|
2025-04-01T06:38:12.711355
|
{
"authors": [
"brynh"
],
"repo": "cloudant/sync-android",
"url": "https://github.com/cloudant/sync-android/pull/480",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1957174690
|
🛑 idwebhost.com is down
In f150083, idwebhost.com (https://idwebhost.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: idwebhost.com is back up in 2827b7a after 8 minutes.
|
gharchive/issue
| 2023-10-23T13:29:58
|
2025-04-01T06:38:12.716066
|
{
"authors": [
"cloudbip"
],
"repo": "cloudbip/upptime",
"url": "https://github.com/cloudbip/upptime/issues/14174",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1013686792
|
🛑 ach.id is down
In f512354, ach.id (https://ach.id) was down:
HTTP code: 0
Response time: 0 ms
Resolved: ach.id is back up in 29f009d.
|
gharchive/issue
| 2021-10-01T19:21:07
|
2025-04-01T06:38:12.719176
|
{
"authors": [
"cloudbip"
],
"repo": "cloudbip/upptime",
"url": "https://github.com/cloudbip/upptime/issues/186",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1315704514
|
🛑 my.dewabiz.com is down
In b177400, my.dewabiz.com (https://my.dewabiz.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: my.dewabiz.com is back up in 7bde932.
|
gharchive/issue
| 2022-07-23T17:10:41
|
2025-04-01T06:38:12.722166
|
{
"authors": [
"cloudbip"
],
"repo": "cloudbip/upptime",
"url": "https://github.com/cloudbip/upptime/issues/5768",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1410316797
|
🛑 cbn.id is down
In 8d5874a, cbn.id (https://cbn.id) was down:
HTTP code: 0
Response time: 0 ms
Resolved: cbn.id is back up in 9e402e6.
|
gharchive/issue
| 2022-10-15T23:22:54
|
2025-04-01T06:38:12.725178
|
{
"authors": [
"cloudbip"
],
"repo": "cloudbip/upptime",
"url": "https://github.com/cloudbip/upptime/issues/7380",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1565615723
|
🛑 hsp.net.id is down
In c1b8df9, hsp.net.id (https://hsp.net.id) was down:
HTTP code: 0
Response time: 0 ms
Resolved: hsp.net.id is back up in 1e3bfaa.
|
gharchive/issue
| 2023-02-01T07:55:59
|
2025-04-01T06:38:12.728138
|
{
"authors": [
"cloudbip"
],
"repo": "cloudbip/upptime",
"url": "https://github.com/cloudbip/upptime/issues/9837",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
382134399
|
Laravel 5.7 support - fresh app install
Hello
When installing on a fresh Laravel 5.7 app instance, I had to change the following in the RouteServiceProvider:
Otherwise I got 404 errors.
Best regards,
Ciaro
Thank you for your reply! I must have overlooked that section.
|
gharchive/issue
| 2018-11-19T09:52:37
|
2025-04-01T06:38:12.729799
|
{
"authors": [
"Ciaro"
],
"repo": "cloudcreativity/laravel-json-api",
"url": "https://github.com/cloudcreativity/laravel-json-api/issues/260",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
250521824
|
Corrected a spelling error
Corrected a spelling error in the docs
Nice!
Would you mind rebasing the PR so that we only have the change in 'docs/manual.txt' showing up?
|
gharchive/pull-request
| 2017-08-16T06:16:32
|
2025-04-01T06:38:12.730968
|
{
"authors": [
"romainr",
"xiaolongge904913"
],
"repo": "cloudera/hue",
"url": "https://github.com/cloudera/hue/pull/580",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
831819274
|
Can't Detect Wordpress
My station's site is built on WordPress, the Cloudflare plugin is enabled, and I've connected it with my API key. This morning I decided to go ahead and upgrade to $20/month as we're getting a ton of traffic and the site is getting incredibly slow in the mornings. When I go to enable APO, it tells me that it's not a WordPress website...
Hi @trevordbc please try following the steps: "What if my Cloudflare dashboard says it can't detect the WordPress plugin?".
If it won't help please raise the support ticket and post the number here (I will have a look).
please install the latest version of the plugin and navigate to the APO card in the plugin, it should automatically fix your APO settings.
|
gharchive/issue
| 2021-03-15T13:37:05
|
2025-04-01T06:38:12.734408
|
{
"authors": [
"sejoker",
"trevordbc"
],
"repo": "cloudflare/Cloudflare-WordPress",
"url": "https://github.com/cloudflare/Cloudflare-WordPress/issues/388",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
756754900
|
Add support for configuration via environment vars
Having the ability to configure credentials via environment vars would be quite appreciated by advanced users.
Storing credentials in a database can be insecure if data is exported to other environments, eg: dev.
Reusing credentials is not flexible if dev environments are running on different domain(s).
awesome stuff! after battling YAML, i managed to test this out and it works great! thank you ❤️
Sure thing! Glad to help.
|
gharchive/pull-request
| 2020-12-04T02:53:17
|
2025-04-01T06:38:12.736153
|
{
"authors": [
"jacobbednarz",
"joeles"
],
"repo": "cloudflare/Cloudflare-WordPress",
"url": "https://github.com/cloudflare/Cloudflare-WordPress/pull/332",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1100804961
|
Adding Firebase App Check workers template to docs
I've been using CF workers along with Firebase App Check for a while now in production. This is sort of equivalent to a captcha, and is useful for preventing easy/excessive access to resources without first passing the check (an invisible captcha on the web, or Android/iOS attestation).
I thought it might be useful to share the code in the form of a template, available here https://github.com/chocolatkey/worker-appcheck-template , inspired by the current quickstart examples. Is it worth adding this as one?
Hey @chocolatkey - thanks for the idea! Would you mind opening a PR and adding the template to this page: https://developers.cloudflare.com/workers/get-started/quickstarts#example-projects
File in source is here: https://github.com/cloudflare/cloudflare-docs/blob/production/products/workers/src/content/get-started/quickstarts.md
@codewithkristian Done, here: https://github.com/cloudflare/cloudflare-docs/pull/3365
|
gharchive/issue
| 2022-01-12T21:27:21
|
2025-04-01T06:38:12.745377
|
{
"authors": [
"chocolatkey",
"codewithkristian"
],
"repo": "cloudflare/cloudflare-docs",
"url": "https://github.com/cloudflare/cloudflare-docs/issues/3134",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2092209328
|
[⚡ Feature]: Supporting Next.js runtime Node.js
Description
I see Cloudflare supporting more and more Node.js modules.
Additional Information
No response
Would you like to help?
[ ] Would you like to help implement this feature?
Hi @lcswillems 👋
Thanks for the issue 🙂, unfortunately I think that supporting the Node.js runtime is quite out of the question, mainly for two reasons:
This adapter works by generating a worker from the Vercel CLI build command (vercel build) output. Such output comprises edge functions and (AWS) lambda functions; the adapter collects the code from the former to build the worker and discards the latter (when discarding the code doesn't cause a loss of functionality; if it does, the adapter's build process fails). The latter is very AWS-lambda-specific code and not something we can really use (unless we were to deploy it to AWS).
The Cloudflare runtime does support more Node.js modules, but not all of them (fs, path, os, etc.). I am not sure if and how we on next-on-pages could support those. Should they be no-ops? Surely that would break many use cases, no? (PS: I have no idea if/how/when they could be included in the runtime either)
The only practical solution I can think of that would address both issues above would be to actually make @cloudflare/next-on-pages work with both runtimes, and for node runtimes have the lambda(s) deployed to AWS, although that would introduce many issues:
we want the code only to run in a worker as that is what provides the best and fastest user experience, mixing this with lambdas, which often could be required to run before the worker code, could quite defeat the purpose here
we'd have to find a way to connect the worker we produce with the AWS lambda(s) we get, making the whole application more complex and with more failure points. This would likely be quite awkward/cumbersome locally.
we'd have to find a way to "version"/"keep in sync" the worker and the AWS lambda(s), which might be very tricky since the two would reside on different platforms.
Additionally (likely a personal opinion)... this project is called next-on-pages as it aims to allow running Next.js applications on the Cloudflare Pages platform, so making it more generic and including AWS (or whatever other platform) seems out of scope to me and not something this project was designed or created for.
Please let me know what you think of the above 🙂
If you have any potential solutions/ideas please also feel free to throw them my way 😄
Hi @dario-piotrowicz , what about just transforming the AWS lambda into workers? Doesn't seem too hard? And it would fail only if I use node modules not supported by Cloudflare.
@lcswillems as I mentioned above:
This adapter works by generating a worker using the Vercel CLI build command
(vercel build) output, such output comprises edge functions and (AWS) lambda
functions, the adapter collects the code from the former to build the worker and
discards the latter (when discarding the code doesn't generate a loss of functionality,
if it does then the adapter's build process fails). The latter is code very AWS lambda
specific and not something we can really use (unless we were to deploy it to AWS).
The Vercel build generates complex lambdas which usually contain multiple routes bundled and grouped in an optimized way, so that they grow as big as they can without reaching the AWS lambdas' max size (50 MB). So besides other things here, opposite to what we have with edge functions, there isn't even a 1-to-1 relationship between lambdas and routes.
It is, as I said, an AWS-specific build, so pretty useless for us. I did look into it in depth a while back (I mean I looked at whether we could infer any useful information or extract any useful code from the lambdas output), which gives me confidence in saying that, in my opinion, there isn't really anything valuable from the lambdas that we could reuse 😕
|
gharchive/issue
| 2024-01-20T19:14:52
|
2025-04-01T06:38:12.755959
|
{
"authors": [
"dario-piotrowicz",
"lcswillems"
],
"repo": "cloudflare/next-on-pages",
"url": "https://github.com/cloudflare/next-on-pages/issues/647",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
590345104
|
QUIC datagram extensions
Hi,
Is there any intention/plan of supporting the current draft of datagram extensions as defined by https://tools.ietf.org/html/draft-ietf-quic-datagram-00 ?
In case this is not in the plans, would such a contribution be welcome, or is it considered out of scope?
I'd be interested in being able to use this.
I have a separate fork of quiche with rough DATAGRAM support.
The implementation is pretty straightforward, the trickier part is getting the API correct.
Now that the DATAGRAM extension has been adopted by the QUIC WG there is a stronger case for looking at this more seriously. Having people interested in using it makes a better case for getting feedback and coverage of the code.
Happy to help iterate on the API.
I did a quick rebase of LPardue's forked branch on top of master (https://github.com/xanathar/quiche/commits/datagrams-0.3.0).
This is the current API I extracted diffing the files (as currently implemented, of course):
// In lib.rs:
pub fn Config::set_max_datagram_frame_size(&mut self, v: u64)
pub fn Connection::dgram_send(&mut self, buf: &[u8]) -> Result<()>
pub fn Connection::dgram_recv(&mut self) -> Result<Vec<u8>>
// In webtransport/mod.rs:
pub enum Error
pub const QUICTRANSPORT_ALPN: &[u8]
pub type Result<T>
pub fn QuicTransport::with_transport(conn: &mut super::Connection, origin: &str, path: &str) -> Result<QuicTransport>
pub fn QuicTransport::dgram_send(&mut self, conn: &mut super::Connection, buf: &[u8]) -> Result<()>
// In h3/ffi.rs:
#[no_mangle] pub extern fn quiche_h3_datagram_event_type(ev: &h3::DatagramEvent) -> u32
pub enum DatagramEvent
pub fn dgram_send(&mut self, conn: &mut super::Connection, flow_id: u64, buf: &[u8]) -> Result<()>
pub fn poll_dgram(&mut self, conn: &mut super::Connection) -> Result<(u64, DatagramEvent)>
pub fn process_dgram(&mut self, conn: &mut super::Connection) -> Result<(u64, DatagramEvent)>
I'm currently focusing on the quic part (i.e. no http3 things).
The Connection::dgram_recv API has a signature which is radically different from Connection::stream_recv, but something like pub fn dgram_recv(&mut self, out: &mut [u8]) -> Result<(usize, bool)> would require an extra copy of the buffer (but maybe it would allow an easier interop with C).
As far as C interop is concerned, I drafted a commit with a working C interface to the datagram API: https://github.com/xanathar/quiche/commit/1dbb0b0a3e3c043fe403ae636fa65516b1262ed6 ; feedback is welcome (especially around quiche_conn_dgram_free).
One question on the functional aspect: in theory the datagram extension draft says datagrams must be subject to the flow control which in quiche is represented by the send API limits (or whatever is returned by stream_capacity). Does this mean that dgram_send should be limited by the application? If so, should we expose something more?
Thanks for looking at this. Ideally the datagram work should be broken into three pieces that reflect the maturity of the standards process: QUIC datagram, H3 datagram and WebTransport. WebTransport is different enough that I should spin it off into a separate PR and we can ignore it for now.
stream_recv allows the caller to read as much or as little as they want. IIRC I decided to make the dgram_recv signature different because it doesn't make sense to read a partial datagram. Given the pains you've had to go through to get this working with the C API, I'm not opposed to changing dgram_recv; it would just need some checks to ensure the output buffer is large enough for the received datagram. Providing a large enough buffer is pretty easy because the receiving endpoint sets its own max_datagram_frame_size TP.
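A minimal, self-contained sketch of that buffer-size check, using only std (the type, method names, and error strings here are illustrative, not quiche's actual API):

```rust
use std::collections::VecDeque;

/// Hypothetical received-datagram queue sketching the alternative
/// `dgram_recv(&mut self, out: &mut [u8])` shape discussed above.
struct RecvQueue {
    queue: VecDeque<Vec<u8>>,
}

impl RecvQueue {
    fn new() -> Self {
        Self { queue: VecDeque::new() }
    }

    /// Called by the transport when a DATAGRAM frame arrives.
    fn deliver(&mut self, data: &[u8]) {
        self.queue.push_back(data.to_vec());
    }

    /// Copy the next whole datagram into `out`. Partial reads make no
    /// sense for datagrams, so a too-small buffer is an error and the
    /// datagram stays queued for a later call with a bigger buffer.
    fn dgram_recv(&mut self, out: &mut [u8]) -> Result<usize, &'static str> {
        let front = self.queue.front().ok_or("would block: queue empty")?;
        if out.len() < front.len() {
            return Err("buffer too small");
        }
        let dgram = self.queue.pop_front().unwrap();
        out[..dgram.len()].copy_from_slice(&dgram);
        Ok(dgram.len())
    }
}
```

Since the peer's max_datagram_frame_size bounds every incoming datagram, a caller that allocates a buffer of that size never hits the "buffer too small" path.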
I don't know. On one side it would allow for cleaner interop; on the other, if one is using datagrams it is probably for performance-sensitive stuff, and saving a copy could be worth it, especially considering that, with datagrams coming out of order, duplicated, etc., it's likely that the application cannot specify the final location and needs yet another copy to reassemble the message in any case. I'm seriously on the fence on this.
As an aside, I added some datagram calls to a C++ application I had which is also using a stream and sometimes it stops working; I'm debugging it right now. 99% is some mistake on my side, but it seems it's spinning in while let Some(len) = self.dgram_queue.peek_writable(). Will keep you updated.
I've reasoned a while on the interface which datagrams extensions could have to the application code, and this are a couple of my proposals, please provide feedback.
Scope
This proposal works only around the QUIC part. I have not considered (yet) the H3 part of the problem, nor the WebTransport, to start from the foundations.
Also, I made an effort towards having an API which most closely matches the other APIs in quiche, where no real drawback comes from this (there are trade-offs to be considered).
Rationales
I took into account three points of rationale from days of testing this in (private) real-world prototypes.
First is having some control over the outgoing queue (that is, the queue of packets to be sent). This is to handle several scenarios where datagrams which were meant to be sent ASAP have since lost their "value"; for example, a typical case is an application discarding old stale datagrams when a new datagram of the same type is appended to the queue, or prioritizing datagrams.
Typical cases here are previous video frames after too much time has passed, videogame state snapshots, audio packets after a given delta of time has passed (if no speed-modulation is done) or really any data which is either cumulative and idempotent or has a maximum useful age.
Second, a topic raised in this issue and in https://github.com/LPardue/quiche/pull/1 was whether it was best to keep the dgram_recv interface as it was in LPardue's branch (dgram_recv(&mut self) -> Result<Vec<u8>>) or whether it was better to have it in a form more similar to stream_recv, which also allows for an easier-to-use interface from C at the expense of an extra memory copy (though not allocation) at every call (dgram_recv(&mut self, out: &mut [u8])). Details can be found at https://github.com/LPardue/quiche/pull/1, but long story short, there seem to be no or negligible advantages in having one less copy of the data, and the cleaner interface is a winner; if what was found in that issue proves to be incorrect, it's a matter of amending dgram_recv.
Third - minor - for naming uniformity with Config::set_max_datagram_frame_size, which cannot be renamed to ..dgram.. because it refers to a specific QUIC transport parameter, I renamed all the methods from dgram_something to datagram_something.
Proposal 1
This proposal is the one with the simplest interface. Everything stays the same as in LPardue's branch, except for minor details:
The ability to configure the size of the datagram queues (i.e. 2 config calls)
A datagram_purge_outgoing to allow applications to remove datagrams waiting to be sent, which are outdated for some reason.
The drawback of this approach, which is far cleaner than my second proposal, is that applications can easily purge outdated packets from the sending queue, but not reprioritize them (other than purging the queue temporarily and re-adding them in the desired order). Still, this can be extended, for example with a datagram_send_urgent or similar, later on, when and if it proves to be needed.
/// Sets the maximum size of the datagram send queue
pub fn Config::set_datagram_send_queue_size(&mut self, size: u64) -> Result<()>;
/// Sets the maximum size of the datagram received queue
pub fn Config::set_datagram_recv_queue_size(&mut self, size: u64) -> Result<()>;
/// Sets the transport's max_datagram_frame_size
pub fn Config::set_max_datagram_frame_size(&mut self, v: u64);
/// Sends a datagram on a connection
pub fn Connection::datagram_send(&mut self, buf: &[u8]) -> Result<()>;
/// Receives a datagram from the connection
pub fn Connection::datagram_recv(&mut self, out: &mut [u8]) -> Result<usize>;
/// Iterates over the outgoing queue and purges datagrams matching the filter
pub fn Connection::datagram_purge_outgoing<F>(&mut self, filter: F) -> Result<()>
where F: FnMut(&[u8]) -> bool;
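As a self-contained illustration of Proposal 1's purge semantics, here is a minimal sketch of a bounded send queue with a purge-by-filter operation, using only std (all names are illustrative, not quiche's actual types):

```rust
use std::collections::VecDeque;

/// Hypothetical bounded outgoing datagram queue modelling
/// `datagram_purge_outgoing` from Proposal 1.
struct OutgoingQueue {
    cap: usize,
    queue: VecDeque<Vec<u8>>,
}

impl OutgoingQueue {
    fn new(cap: usize) -> Self {
        Self { cap, queue: VecDeque::new() }
    }

    /// Enqueue a datagram; fails when the send queue is full.
    fn push(&mut self, data: &[u8]) -> Result<(), &'static str> {
        if self.queue.len() >= self.cap {
            return Err("send queue full");
        }
        self.queue.push_back(data.to_vec());
        Ok(())
    }

    /// Pop the next datagram to be sent, if any.
    fn pop(&mut self) -> Option<Vec<u8>> {
        self.queue.pop_front()
    }

    /// Drop every queued datagram matching `filter`, e.g. stale video
    /// frames that have lost their value before they could be sent.
    fn purge<F>(&mut self, mut filter: F)
    where
        F: FnMut(&[u8]) -> bool,
    {
        self.queue.retain(|d| !filter(d.as_slice()));
    }
}
```

The application-supplied closure sees each queued payload, so "outdated" can mean whatever the application encodes in its datagrams (a frame type byte, a timestamp prefix, etc.).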
Proposal 2
This proposal is the one with the most control over the sending queue. An object with the DatagramsOutgoingQueue trait can be passed by the application to customize the behavior of the sending queue to its preference.
/// An implementation may be passed by the application; otherwise the default one
/// will be used.
pub trait DatagramsOutgoingQueue
where
    Self: std::fmt::Debug,
{
    /// Adds a datagram to the outgoing queue
    fn push_datagram(&mut self, data: &[u8]) -> Result<()>;
    /// Returns the size of the next outgoing datagram to be sent, or None.
    fn peek_datagram(&self) -> Option<usize>;
    /// Returns the next buffer to be sent (or None).
    fn pop_datagram(&mut self) -> Result<&[u8]>;
}
/// Sets the outgoing datagram queue to be used, the default one is used otherwise
pub fn Config::set_datagram_send_queue(&mut self, queue: Box<dyn DatagramsOutgoingQueue>);
/// Sets the maximum size of the datagram received queue
pub fn Config::set_datagram_recv_queue_size(&mut self, size: u64) -> Result<()>;
/// Sets the transport's max_datagram_frame_size
pub fn Config::set_max_datagram_frame_size(&mut self, v: u64);
/// Sends a datagram on a connection, queuing it to the current DatagramsOutgoingQueue
pub fn Connection::datagram_send(&mut self, buf: &[u8]) -> Result<()>;
/// Receives a datagram from the connection
pub fn Connection::datagram_recv(&mut self, out: &mut [u8]) -> Result<usize>;
As said, feedback is very welcome, as is any alternative, etc.
@xanathar proposal 2 is an interesting alternative. How do you see an application loop working with this? Something like:
conn.datagram_send("foo");
conn.datagram_send("bar");
conn.datagram_send("baz");
conn.send(...) which internally will get around to calling self.DatagramsOutgoingQueue.pop()
Do you imagine an ability to purge items in DatagramsOutgoingQueue, which may be public or private depending on need?
yes, proposal 2 is exactly how you described it; the ability to purge items in DatagramsOutgoingQueue would be there, since the object with the DatagramsOutgoingQueue trait would be provided by the application itself (with a default implementation which doesn't allow it, for those who don't need that level of control).
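To make Proposal 2 concrete, here is a self-contained sketch of an application-provided queue implementing a simplified version of the trait (the return types are adjusted for illustration and the trait/struct names are hypothetical, not the proposed quiche API). It models the stale-frame-replacement use case from the rationale: only the newest datagram per "kind" (first byte) stays queued.

```rust
use std::collections::VecDeque;

/// Simplified, illustrative take on Proposal 2's trait shape.
trait DatagramsOutgoingQueue {
    fn push_datagram(&mut self, data: &[u8]) -> Result<(), &'static str>;
    fn peek_datagram(&self) -> Option<usize>;
    fn pop_datagram(&mut self) -> Option<Vec<u8>>;
}

/// Application queue that keeps only the newest datagram of each kind,
/// where the kind is encoded in the payload's first byte.
struct LatestOnlyQueue {
    queue: VecDeque<Vec<u8>>,
}

impl LatestOnlyQueue {
    fn new() -> Self {
        Self { queue: VecDeque::new() }
    }
}

impl DatagramsOutgoingQueue for LatestOnlyQueue {
    fn push_datagram(&mut self, data: &[u8]) -> Result<(), &'static str> {
        let kind = *data.first().ok_or("empty datagram")?;
        // Any still-queued datagram of the same kind is now stale: drop it.
        self.queue.retain(|d| d[0] != kind);
        self.queue.push_back(data.to_vec());
        Ok(())
    }

    fn peek_datagram(&self) -> Option<usize> {
        self.queue.front().map(|d| d.len())
    }

    fn pop_datagram(&mut self) -> Option<Vec<u8>> {
        self.queue.pop_front()
    }
}
```

In the loop sketched above, the transport would call peek_datagram/pop_datagram from its send path, while the application only ever calls push_datagram; the replacement policy lives entirely in the application's implementation.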
|
gharchive/issue
| 2020-03-30T14:38:52
|
2025-04-01T06:38:12.773690
|
{
"authors": [
"LPardue",
"ctiller",
"xanathar"
],
"repo": "cloudflare/quiche",
"url": "https://github.com/cloudflare/quiche/issues/430",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1122508606
|
update validation records/errors for custom_hostname and certificate_pack
resource/custom_hostname: validation tokens are now an array (`validation_records`) instead of top-level fields; the only top-level record previously present was for cname validation, and txt/http/email were entirely missing.
resource/custom_hostname: also adds missing `validation_errors`, and `certificate_authority`
resource/certificate_pack: adds `validation_errors` and `validation_records` with same format as custom hostnames.
relies on lib behavior added in https://github.com/cloudflare/cloudflare-go/pull/796
@nickysemenza could you move the CHANGELOG entry to the file as the documentation mentions? that will allow it post merge to be picked up correctly.
acceptance tests are green
TF_ACC=1 go test $(go list ./...) -v -run "^TestAccCloudflareCustomHostname" -count 1 -parallel 1 -timeout 120m -parallel 1
? github.com/cloudflare/terraform-provider-cloudflare [no test files]
=== RUN TestAccCloudflareCustomHostnameFallbackOrigin
--- PASS: TestAccCloudflareCustomHostnameFallbackOrigin (13.94s)
=== RUN TestAccCloudflareCustomHostnameFallbackOriginUpdate
--- PASS: TestAccCloudflareCustomHostnameFallbackOriginUpdate (23.16s)
=== RUN TestAccCloudflareCustomHostname_Basic
=== PAUSE TestAccCloudflareCustomHostname_Basic
=== RUN TestAccCloudflareCustomHostname_WithCustomOriginServer
=== PAUSE TestAccCloudflareCustomHostname_WithCustomOriginServer
=== RUN TestAccCloudflareCustomHostname_WithHTTPValidation
=== PAUSE TestAccCloudflareCustomHostname_WithHTTPValidation
=== RUN TestAccCloudflareCustomHostname_WithCustomSSLSettings
=== PAUSE TestAccCloudflareCustomHostname_WithCustomSSLSettings
=== RUN TestAccCloudflareCustomHostname_Update
=== PAUSE TestAccCloudflareCustomHostname_Update
=== RUN TestAccCloudflareCustomHostname_WithNoSSL
=== PAUSE TestAccCloudflareCustomHostname_WithNoSSL
=== RUN TestAccCloudflareCustomHostname_UpdatingZoneForcesNewResource
=== PAUSE TestAccCloudflareCustomHostname_UpdatingZoneForcesNewResource
=== RUN TestAccCloudflareCustomHostname_Import
=== PAUSE TestAccCloudflareCustomHostname_Import
=== CONT TestAccCloudflareCustomHostname_Basic
--- PASS: TestAccCloudflareCustomHostname_Basic (10.22s)
=== CONT TestAccCloudflareCustomHostname_Update
--- PASS: TestAccCloudflareCustomHostname_Update (21.45s)
=== CONT TestAccCloudflareCustomHostname_Import
--- PASS: TestAccCloudflareCustomHostname_Import (14.20s)
=== CONT TestAccCloudflareCustomHostname_UpdatingZoneForcesNewResource
--- PASS: TestAccCloudflareCustomHostname_UpdatingZoneForcesNewResource (19.54s)
=== CONT TestAccCloudflareCustomHostname_WithNoSSL
--- PASS: TestAccCloudflareCustomHostname_WithNoSSL (10.91s)
=== CONT TestAccCloudflareCustomHostname_WithHTTPValidation
--- PASS: TestAccCloudflareCustomHostname_WithHTTPValidation (11.00s)
=== CONT TestAccCloudflareCustomHostname_WithCustomSSLSettings
--- PASS: TestAccCloudflareCustomHostname_WithCustomSSLSettings (10.29s)
=== CONT TestAccCloudflareCustomHostname_WithCustomOriginServer
--- PASS: TestAccCloudflareCustomHostname_WithCustomOriginServer (10.79s)
PASS
ok github.com/cloudflare/terraform-provider-cloudflare/cloudflare 146.054s
? github.com/cloudflare/terraform-provider-cloudflare/tools/cmd/changelog-check [no test files]
? github.com/cloudflare/terraform-provider-cloudflare/version [no test files]
|
gharchive/pull-request
| 2022-02-02T23:59:57
|
2025-04-01T06:38:12.780492
|
{
"authors": [
"jacobbednarz",
"nickysemenza"
],
"repo": "cloudflare/terraform-provider-cloudflare",
"url": "https://github.com/cloudflare/terraform-provider-cloudflare/pull/1424",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1401259857
|
Issue 1840 Add custom_hostname wait_for_active_status
As shown in https://github.com/cloudflare/terraform-provider-cloudflare/issues/1840, it can be problematic to create required validation records in the same terraform apply run because the custom_hostname resource completes creation before the required validation records are present on the resource.
This pull adds a wait_for_active_status flag similar to the flag introduced in https://github.com/cloudflare/terraform-provider-cloudflare/pull/1567.
I was NOT able to run the acceptance tests because I do not have a suitable Cloudflare account to do so. However I did test this with my currently blocked configuration and it resolved the issues I was seeing.
i've gone back and had a look at the initial issue however i don't think this actually addresses the problem raised.
in the initial ticket, ownership_verification and ownership_verification_http are both set from the initial creation call however, the initial issue is trying to use the ssl.validation_records object for validation which this PR will never check so i'm unsure how this PR is fixing your issue.
@nickysemenza are you able to confirm which config block should be checked for the manual validation records? perhaps the original issue is just using the wrong fields.
interesting. it seemed from my testing that the ssl.validation_records were set once the hostname hit active status. this PR worked well for me with my local testing. i'll attach a sample tf file that i was testing with. i'll admit i'm largely unfamiliar with the underlying cloudflare api so if there is a better way to accomplish this i'll take that up instead.
main.tf.txt
testing_results.txt
the SSL sub-object will get its validation_records set once it transitions from initializing -> pending_validation (which, if the parent hostname passes validation on the first try, will likely happen around the same time as the hostname transitioning from pending -> active). The SSL validation records require a call to the certificate authority, which happens in the background, whereas the custom hostname validation records are generated in-house.
So as for the issue described in #1840, waiting until resource.cloudflare_custom_hostname.test.ssl.0.validation_records.0.txt_value has a value can be accomplished by waiting until ssl.status == "pending_validation" or something along those lines; perhaps wait_for_ssl_pending_validation would be more appropriate?
see https://developers.cloudflare.com/ssl/reference/certificate-statuses/ and https://developers.cloudflare.com/cloudflare-for-platforms/cloudflare-for-saas/domain-support/hostname-verification/#verification-statuses for references on statuses
thanks for the info - i've adjusted the pr to reflect that input
acceptance tests are passing
TF_ACC=1 go test $(go list ./...) -v -run "^TestAccCloudflareCustomHostname_" -count 1 -parallel 1 -timeout 120m -parallel 1
? github.com/cloudflare/terraform-provider-cloudflare [no test files]
=== RUN TestAccCloudflareCustomHostname_Basic
=== PAUSE TestAccCloudflareCustomHostname_Basic
=== RUN TestAccCloudflareCustomHostname_WaitForActive
=== PAUSE TestAccCloudflareCustomHostname_WaitForActive
=== RUN TestAccCloudflareCustomHostname_WithCustomOriginServer
=== PAUSE TestAccCloudflareCustomHostname_WithCustomOriginServer
=== RUN TestAccCloudflareCustomHostname_WithHTTPValidation
=== PAUSE TestAccCloudflareCustomHostname_WithHTTPValidation
=== RUN TestAccCloudflareCustomHostname_WithCustomSSLSettings
=== PAUSE TestAccCloudflareCustomHostname_WithCustomSSLSettings
=== RUN TestAccCloudflareCustomHostname_Update
=== PAUSE TestAccCloudflareCustomHostname_Update
=== RUN TestAccCloudflareCustomHostname_WithNoSSL
=== PAUSE TestAccCloudflareCustomHostname_WithNoSSL
=== RUN TestAccCloudflareCustomHostname_UpdatingZoneForcesNewResource
=== PAUSE TestAccCloudflareCustomHostname_UpdatingZoneForcesNewResource
=== RUN TestAccCloudflareCustomHostname_Import
=== PAUSE TestAccCloudflareCustomHostname_Import
=== CONT TestAccCloudflareCustomHostname_Basic
--- PASS: TestAccCloudflareCustomHostname_Basic (8.25s)
=== CONT TestAccCloudflareCustomHostname_Update
--- PASS: TestAccCloudflareCustomHostname_Update (14.25s)
=== CONT TestAccCloudflareCustomHostname_Import
--- PASS: TestAccCloudflareCustomHostname_Import (10.16s)
=== CONT TestAccCloudflareCustomHostname_UpdatingZoneForcesNewResource
--- PASS: TestAccCloudflareCustomHostname_UpdatingZoneForcesNewResource (15.28s)
=== CONT TestAccCloudflareCustomHostname_WithNoSSL
--- PASS: TestAccCloudflareCustomHostname_WithNoSSL (7.64s)
=== CONT TestAccCloudflareCustomHostname_WithHTTPValidation
--- PASS: TestAccCloudflareCustomHostname_WithHTTPValidation (7.60s)
=== CONT TestAccCloudflareCustomHostname_WithCustomSSLSettings
--- PASS: TestAccCloudflareCustomHostname_WithCustomSSLSettings (12.76s)
=== CONT TestAccCloudflareCustomHostname_WithCustomOriginServer
--- PASS: TestAccCloudflareCustomHostname_WithCustomOriginServer (8.60s)
=== CONT TestAccCloudflareCustomHostname_WaitForActive
--- PASS: TestAccCloudflareCustomHostname_WaitForActive (10.60s)
PASS
ok github.com/cloudflare/terraform-provider-cloudflare/internal/provider 95.440s
thanks for this one @will-bluem-olo! we appreciate the effort you've put into this one -- your first contribution at that! 🎉
|
gharchive/pull-request
| 2022-10-07T14:19:22
|
2025-04-01T06:38:12.789084
|
{
"authors": [
"jacobbednarz",
"nickysemenza",
"will-bluem-olo"
],
"repo": "cloudflare/terraform-provider-cloudflare",
"url": "https://github.com/cloudflare/terraform-provider-cloudflare/pull/1953",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1422629324
|
Small typo fix in comment
Deafult -> Default
@vlovich ... need you to ack the CLAAssistant here...
recheck
|
gharchive/pull-request
| 2022-10-25T15:10:49
|
2025-04-01T06:38:12.790609
|
{
"authors": [
"jasnell",
"vlovich"
],
"repo": "cloudflare/workerd",
"url": "https://github.com/cloudflare/workerd/pull/127",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1468600673
|
🐛 BUG: wrangler r2 object CLI
What version of Wrangler are you using?
2.5.0
What operating system are you using?
Mac
Describe the Bug
To reproduce:
Create a new r2 bucket called foobar
Create a README.md
Attempt to upload the README.md file to foobar with the following command:
wrangler r2 object put foobar/README.md -f ./README.md
❯ wrangler r2 object put foobar/README.md -f ./README.md
⛅️ wrangler 2.5.0
-------------------
Creating object "README.md" in bucket "foobar".
✘ [ERROR] Failed to fetch /accounts/<account id>/r2/buckets/foobar/objects/README.md - 404: Not Found);
If you think this is a bug then please create an issue at https://github.com/cloudflare/wrangler2/issues/new/choose
Running this command outputs the following error:
Now I don't think this is a bug; it's likely PEBKAC. Hopefully, someone can point me in the right direction, and I'll close this issue. But this is not the only time I've been frustrated by the Cloudflare developer experience. All I want to do is copy a file from my local system to R2. The CLI wasn't intuitive enough for me to guess--I assumed it would be similar to the cp command--but the -h was well done. After trying many variations of this command and still receiving similar errors, I gave up and went to the docs. I'm not using Workers in any capacity, but I have to go to the Workers help to get any information on wrangler, which is a little confusing, but maybe I'm an outlier here. I get to the commands section and see r2 bucket documentation. But nothing for r2 object. I am, again, frustrated. I try the official R2 documentation, which is also a little confusing, spanning multiple product pages. No examples using wrangler, just 3rd party libs. Hence, filing this bug.
I love what y'all are doing, and I like the direction Cloudflare is heading with its developer-focused products. Still, I've encountered many paper cuts, primarily in the fragmented documentation across multiple product umbrellas. I consider pages, workers, r2, images etc., all to be bleeding-edge products, and I constantly advocate for their use over more traditional software development patterns, but these issues make that more difficult.
Hi @lucasnad27, I'm so sorry you experienced this issue.
We'll update the wrangler command docs to document the wrangler r2 object commands, and I've raised with the team whether it still makes sense to keep Wrangler underneath Workers in the docs.
thanks @rozenmd Appreciate the quick response.
Should I keep this issue open until I see an update to the docs? Or close and re-open if I run into issues once those docs are available?
@lucasnad27 I'll make a PR into https://github.com/cloudflare/cloudflare-docs resolving the issue, which will automatically close this issue and the one I just filed in that repo (https://github.com/cloudflare/cloudflare-docs/issues/6894)
@lucasnad27 i think you're using the command correctly. Silly question though. Does the bucket foobar exist for you? If you email my GitHub username at cloudflare dot com with your account ID I can take a deeper look for you (just reference this GitHub issue for context).
Sorry for the frustration and hopefully we can get you unblocked.
As an alternative, there are other command-line tools out there that are more mature. Rclone in particular is quite popular.
No silly questions! :) foobar was a placeholder for a bucket created by my CF account.
However, I just tried reproducing the error using the same command I posted earlier (wrangler r2 object put foobar/README.md -f ./README.md) and it worked! I thought there might be a consistency issue with my last failed attempt, so I tried creating a new bucket (both from the CLI & UI) and was able to upload the README.md file without issue 🤷
Thanks for the tip re: rclone. If I do anything substantive from the CLI, I'll keep that in mind.
|
gharchive/issue
| 2022-11-29T19:39:12
|
2025-04-01T06:38:12.800782
|
{
"authors": [
"lucasnad27",
"rozenmd",
"vlovich"
],
"repo": "cloudflare/wrangler2",
"url": "https://github.com/cloudflare/wrangler2/issues/2309",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
271455205
|
Config options for metric prefix and director name.
The nozzle currently places all the metrics it derives from firehose
data in the root of Stackdriver's custom metric namespace. This means
there's a relatively large chance that these names could collide with
others that were not created by the nozzle. To mitigate this, a config
option "metric_path_prefix" (that defaults to "firehose") is added.
After this commit, Stackdriver metric names will be of the form:
custom.googleapis.com/PREFIX/origin.Name
custom.googleapis.com/firehose/gorouter.total_requests
Running multiple PCF instances in the same GCP project will result in
metrics from all instances being confused with each other. Since the
nozzle runs on BOSH, another config option "bosh_director_name" (that
defaults to "cf") has been added. This sets the value of a static
"director" label added to every metric exported by the nozzle. Setting
this to different values for each PCF instance in a project (e.g. the
GCP region, if running one PCF instance per region) allows PCF metrics
to be distinguished from one another.
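As an illustrative sketch (not part of the nozzle code itself) of the naming scheme described above, with the prefix as the configurable part:

```python
def metric_name(origin: str, name: str, prefix: str = "firehose") -> str:
    """Build a Stackdriver custom metric path of the form
    custom.googleapis.com/PREFIX/origin.Name, matching the
    examples in the commit message above."""
    return f"custom.googleapis.com/{prefix}/{origin}.{name}"

print(metric_name("gorouter", "total_requests"))
# custom.googleapis.com/firehose/gorouter.total_requests
```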
This change is Reviewable
I'll rebase this onto develop after PR #136 is in, which ought to make it easier to review. I wanted to base this PR off that one because it made logical sense to do so, but I'm not github-savvy enough to know how to exclude the first commit from the second PR, if such a thing is even possible.
Review status: 0 of 10 files reviewed at latest revision, 1 unresolved discussion.
src/stackdriver-nozzle/config/config.go, line 69 at r1 (raw file):
MetricsBufferSize int `envconfig:"metrics_buffer_size" default:"200"`
MetricPathPrefix string `envconfig:"metric_path_prefix" default:"firehose"`
BoshDirectorName string `envconfig:"bosh_director_name" default:"cf"`
What I actually had in mind is slightly more flexible: instead of having a single hard-coded label with user-supplied value, what if we could just accept an arbitrary number of key/value pairs that are passed as labels for all metrics? This will allow users to attach other instance-specific metadata to their metrics. There is obviously a risk that providing too many labels will exceed 10 label limit, but this will be clearly visible in the logs (and we could document this as a caveat). What do you think?
Comments from Reviewable
Review status: 0 of 10 files reviewed at latest revision, 1 unresolved discussion.
src/stackdriver-nozzle/config/config.go, line 69 at r1 (raw file):
Previously, knyar (Anton Tolchanov) wrote…
What I actually had in mind is slightly more flexible: instead of having a single hard-coded label with user-supplied value, what if we could just accept an arbitrary number of key/value pairs that are passed as labels for all metrics? This will allow users to attach other instance-specific metadata to their metrics. There is obviously a risk that providing too many labels will exceed 10 label limit, but this will be clearly visible in the logs (and we could document this as a caveat). What do you think?
I did originally do that, but stepped back, because: why?
I don't see any value in attaching >1 label to all metrics from a PCF instance, because it should be possible to derive any other values you might want to for that instance based on the one label. (I guess SD doesn't quite have JoinWithLiteralTable yet, but that's what I'm thinking of.)
This also enforces the label name to be "director" which is what the PCF folks want to standardise on as the label to differentiate between PCF instances, according to David Laing. Only allowing users to change the value enforces naming consistency.
Then there's the risk of exceeding the 10 label limit. There's some headroom there but I am not a fan of leaving guns footwards like that...
Comments from Reviewable
@fluffle - I rebased your changed on develop and force pushed. I don't know if that's proper etiquette or not
Can you also update: tile.yml.erb, jobs/stackdriver-nozzle/spec with the new fields?
Reviewed 7 of 10 files at r1.
Review status: all files reviewed at latest revision, all discussions resolved.
Comments from Reviewable
Review status: all files reviewed at latest revision, 1 unresolved discussion, all commit checks successful.
jobs/stackdriver-nozzle/templates/stackdriver-nozzle-ctl.erb, line 38 at r2 (raw file):
export METRICS_BUFFER_SIZE=<%= p('nozzle.metrics_buffer_size', '200') %>
export METRIC_PATH_PREFIX=<%= p('nozzle.metric_path_prefix', 'firehose') %>
export BOSH_DIRECTOR_NAME=<%= p('nozzle.bosh_director_name', 'cf') %>
Can you also update: tile.yml.erb, jobs/stackdriver-nozzle/spec with the new fields?
(reposting as comment so it can be resolved)
Comments from Reviewable
I had to force-push updates anyway. I think @knyar's comments may have been on changes that were in the commit from #136, not sure. Hope I've got everything now!
Review status: 7 of 9 files reviewed at latest revision, 1 unresolved discussion.
jobs/stackdriver-nozzle/templates/stackdriver-nozzle-ctl.erb, line 38 at r2 (raw file):
Previously, johnsonj (Jeff Johnson) wrote…
Can you also update: tile.yml.erb, jobs/stackdriver-nozzle/spec with the new fields?
(reposting as comment so it can be resolved)
Aha, thank you. I should have done a recursive grep to see if there was more wiring to be connected up :-)
Comments from Reviewable
Reviewed 3 of 3 files at r3.
Review status: all files reviewed at latest revision, 1 unresolved discussion, some commit checks failed.
jobs/stackdriver-nozzle/templates/stackdriver-nozzle-ctl.erb, line 38 at r2 (raw file):
Previously, fluffle (Alex Bee) wrote…
Aha, thank you. I should have done a recursive grep to see if there was more wiring to be connected up :-)
Thanks!
Comments from Reviewable
Reviewed 8 of 8 files at r4.
Review status: all files reviewed at latest revision, all discussions resolved.
Comments from Reviewable
|
gharchive/pull-request
| 2017-11-06T12:14:13
|
2025-04-01T06:38:12.823710
|
{
"authors": [
"fluffle",
"johnsonj",
"knyar"
],
"repo": "cloudfoundry-community/stackdriver-tools",
"url": "https://github.com/cloudfoundry-community/stackdriver-tools/pull/144",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
175647897
|
support global cpi configuration for tags
to allow tagging of all vms with specific set of tags to enable setting security groups. global tags would be added on top of tags specified via env arg in create_vm. wdyt about google.default_tags: [tag1, tag2]?
cc @evandbrown @dsboulder
@dsboulder @mrdavidlaing do default_tags give you capabilities that 'tag with job name' doesn't?
Otherwise I'm fine with this and happy to add if it's useful.
@evandbrown this will be useful for setting custom tags director wide (including director), e.g. staging-blah. this global cpi configuration (not in vm cloud properties) will enforce tag setting across all machines.
|
gharchive/issue
| 2016-09-08T02:02:33
|
2025-04-01T06:38:12.826683
|
{
"authors": [
"cppforlife",
"evandbrown"
],
"repo": "cloudfoundry-incubator/bosh-google-cpi-release",
"url": "https://github.com/cloudfoundry-incubator/bosh-google-cpi-release/issues/78",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
56061212
|
Tests fail occasionally with "No mapping between account names and security IDs was done."
The LocalPrincipalManager tests fail occasionally with the above error.
I believe it is related to some eventual consistency in Windows with creating a new user, and then trying to add that user to a security group. The error suggests that the failure was in mapping a username to a SID.
We should consider polling (in the code) to ensure that the user exists and can be mapped to a SID, before continuing and trying to use the new user.
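A generic polling loop of the kind suggested above might look like the following sketch (Python for illustration only; the actual fix would live in the C# LocalPrincipalManager, and `predicate` stands in for the name-to-SID lookup):

```python
import time

def wait_until(predicate, timeout=10.0, interval=0.25):
    """Poll `predicate` until it returns True or `timeout` elapses.

    Mirrors the suggestion above: after creating a Windows user,
    poll until the username resolves to a SID before adding the
    user to a security group, rather than assuming the mapping is
    immediately consistent.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return predicate()  # one last check at the deadline
```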
Here is an example stack trace:
Test Name: IronFoundry.Container.Utilities.LocalPrincipalManagerTests.AddedUserAppearsInWardenGroup
Test FullName: IronFoundry.Container.Utilities.LocalPrincipalManagerTests.AddedUserAppearsInWardenGroup
Test Source: c:\git\if\if_warden\IronFoundry.Container.Test\Utilities\LocalPrincipalManagerTests.cs : line 62
Test Outcome: Failed
Test Duration: 0:00:00.164
Result Message: System.Runtime.InteropServices.COMException : No mapping between account names and security IDs was done.
Result StackTrace:
at System.DirectoryServices.PropertyValueCollection.PopulateList()
at System.DirectoryServices.PropertyValueCollection..ctor(DirectoryEntry entry, String propertyName)
at System.DirectoryServices.PropertyCollection.get_Item(String propertyName)
at System.DirectoryServices.AccountManagement.QbeMatcher.Matches(DirectoryEntry de)
at System.DirectoryServices.AccountManagement.SAMQuerySet.MoveNext()
at System.DirectoryServices.AccountManagement.FindResultEnumerator`1.MoveNext()
at System.DirectoryServices.AccountManagement.PrincipalSearcher.FindOne()
at IronFoundry.Container.Utilities.LocalPrincipalManager.AddUserToGroup(PrincipalContext context, String groupName, UserPrincipal user) in c:\git\if\if_warden\IronFoundry.Container.Shared\Utilities\LocalPrincipalManager.cs:line 151
at IronFoundry.Container.Utilities.LocalPrincipalManager.InnerCreateUser(String userName) in c:\git\if\if_warden\IronFoundry.Container.Shared\Utilities\LocalPrincipalManager.cs:line 136
at IronFoundry.Container.Utilities.LocalPrincipalManager.CreateUser(String userName) in c:\git\if\if_warden\IronFoundry.Container.Shared\Utilities\LocalPrincipalManager.cs:line 59
at IronFoundry.Container.Utilities.LocalPrincipalManagerTests.AddedUserAppearsInWardenGroup() in c:\git\if\if_warden\IronFoundry.Container.Test\Utilities\LocalPrincipalManagerTests.cs:line 63
Error Code for this is: 0x80070534
|
gharchive/issue
| 2015-01-30T17:29:14
|
2025-04-01T06:38:12.828961
|
{
"authors": [
"brannon",
"mosoto"
],
"repo": "cloudfoundry-incubator/if_warden",
"url": "https://github.com/cloudfoundry-incubator/if_warden/issues/9",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
349595427
|
Do not fail apply-specs if addons-spec is empty
Is your feature request related to a problem? Please describe.
kubectl fails when you apply a manifest that does not contain any objects:
$ echo "---" | kubectl apply -f -
error: no objects passed to apply
If a user provides an empty manifest, for instance --- or whitespace, that will fail the apply-specs errand. We ran into this because OpsMan cannot concatenate two values without some sort of additional characters. Additionally, if a user provides --- it will fail, which is not obvious.
Describe the solution you'd like
When applying specs one of three things can be done:
Parse the yaml first and see if contains data. There are already some empty value checks being done here: https://github.com/cloudfoundry-incubator/kubo-release/blob/42f962bb594aa3092094393560cdd38e571af698/jobs/apply-specs/spec
Apply the specs and if the error matches "error: no objects passed to apply" do not fail the errand.
Just trim whitespace from both sides of the document before doing the empty check. This would solve our issue but users that provide --- would still fail the errand.
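Options 1 and 3 above amount to an emptiness check before ever invoking kubectl. A minimal sketch (hypothetical helper, not the actual errand code):

```python
def manifest_is_empty(doc: str) -> bool:
    """Return True when a YAML manifest contains no objects.

    A document made only of whitespace and bare '---' separators
    (e.g. the result of concatenating two empty OpsMan values)
    would make `kubectl apply -f -` fail with
    "error: no objects passed to apply"; a caller could skip the
    apply instead of failing the errand.
    """
    stripped = doc.replace("---", "").strip()
    return stripped == ""

print(manifest_is_empty("---"))                      # document marker only
print(manifest_is_empty("   \n"))                    # whitespace only
print(manifest_is_empty("---\nkind: Namespace\n"))   # real content
```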
Describe alternatives you've considered
We spent a day trying to get OpsMan to craft either null or "" when concatenating two empty strings. We were not able to find a way to accomplish this. Our workaround is to always have a namespace object created so that the job doesn't fail for this reason.
Additional context
Pretty straight forward issue.
Thanks for highlighting this @jasonkeene
At first glance, we think the current behaviour is appropriate. The error highlights to the user that they have passed a malformed/empty spec to CFCR/kubectl, and that they would probably want to investigate why they are passing empty files.
I'm going to close this for now, but feel free to discuss further in the comments if you wish.
cc @karampok
Fair enough. We've worked around this on our end by always appending a namespace.
|
gharchive/issue
| 2018-08-10T17:22:30
|
2025-04-01T06:38:12.834531
|
{
"authors": [
"iainsproat",
"jasonkeene"
],
"repo": "cloudfoundry-incubator/kubo-release",
"url": "https://github.com/cloudfoundry-incubator/kubo-release/issues/241",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
496385258
|
Fix npm audit vulnerabilities
result of npm audit fix and manually removing stratos-merge-dirs
We should park this until we upgrade to Angular 8. The issue might come out in the wash.
Angular upgrade #3920
@KlapTrap This was raised as a concern by the community. Is there any reason why this shouldn't be merged other than waiting for Angular 8?
@richard-cox @KlapTrap I think we should get this in - but I don't understand the changes to the package lock file.
Many dependencies have changed from being explicitly pinned, e.g. "1.9.3" to "^1.9.3" - it would be good to understand why, so we don't have this flip-flopping with PRs.
I agree that we should fix this; I was just going to wait for the Angular 8 upgrade. ng update will get all of the relevant dependencies to the correct & compatible versions.
Having said that, I've done some of the angular 8 migration here; https://github.com/cloudfoundry-incubator/stratos/pull/3950 and It's going to take a while to manually migrate some of the code. So, with that in mind, I don't mind this being merged once everyone is happy.
|
gharchive/pull-request
| 2019-09-20T14:30:02
|
2025-04-01T06:38:12.838291
|
{
"authors": [
"KlapTrap",
"nwmac",
"richard-cox"
],
"repo": "cloudfoundry-incubator/stratos",
"url": "https://github.com/cloudfoundry-incubator/stratos/pull/3899",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
150112759
|
registry hangs and is unresponsive: bosh deploy fails with execution expired
We recently saw a few deployments failing with an "execution expired" error while writing the settings for a VM into the registry. The registry stopped responding at all, and there was nothing in the logs.
Even when being on the Director itself, calling the registry didn't work. Something like
# curl http://localhost:25777/instances/bla/settings
should fail with an error saying instance 'bla' not found, but just hangs until it runs into a read timeout.
So we attached a gdb and looked at the backtrace, seeing that excon couldn't open the ssl_socket and blocked forever in the rescue. Note that the documentation of IO.select shows this piece of code as an explicit example of how to implement a blocking read
The gdb stacktrace:
(gdb) call (void) rb_backtrace()
from /var/vcap/packages/registry/bin/bosh-registry:16:in `<main>'
from /var/vcap/packages/registry/bin/bosh-registry:16:in `load'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/bosh-registry-1.3202.0/bin/bosh-registry:28:in `<top (required)>'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/bosh-registry-1.3202.0/lib/bosh/registry/runner.rb:18:in `run'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/bosh-registry-1.3202.0/lib/bosh/registry/runner.rb:34:in `start_http_server'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/thin-1.5.1/lib/thin/server.rb:159:in `start'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/thin-1.5.1/lib/thin/backends/base.rb:63:in `start'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/eventmachine-1.0.3/lib/eventmachine.rb:187:in `run'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/eventmachine-1.0.3/lib/eventmachine.rb:187:in `run_machine'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/thin-1.5.1/lib/thin/connection.rb:39:in `receive_data'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/thin-1.5.1/lib/thin/connection.rb:54:in `process'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/thin-1.5.1/lib/thin/connection.rb:79:in `pre_process'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/thin-1.5.1/lib/thin/connection.rb:79:in `catch'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/thin-1.5.1/lib/thin/connection.rb:81:in `block in pre_process'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/rack-1.6.4/lib/rack/urlmap.rb:50:in `call'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/rack-1.6.4/lib/rack/urlmap.rb:50:in `each'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/rack-1.6.4/lib/rack/urlmap.rb:66:in `block in call'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:2021:in `call'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:181:in `call'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/sinatra-1.4.6/lib/sinatra/show_exceptions.rb:21:in `call'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/rack-1.6.4/lib/rack/head.rb:13:in `call'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/rack-1.6.4/lib/rack/nulllogger.rb:9:in `call'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/frame_options.rb:31:in `call'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/base.rb:49:in `call'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/base.rb:49:in `call'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/json_csrf.rb:18:in `call'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/path_traversal.rb:16:in `call'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/xss_header.rb:18:in `call'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:894:in `call'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:906:in `call!'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1066:in `invoke'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1066:in `catch'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1066:in `block in invoke'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:906:in `block in call!'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1081:in `dispatch!'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1066:in `invoke'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1066:in `catch'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1066:in `block in invoke'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1084:in `block in dispatch!'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:971:in `route!'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:971:in `each'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:972:in `block in route!'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1012:in `process_route'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1012:in `catch'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1014:in `block in process_route'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:974:in `block (2 levels) in route!'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:993:in `route_eval'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:974:in `block (3 levels) in route!'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:974:in `[]'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1610:in `block in compile!'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1610:in `call'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/bosh-registry-1.3202.0/lib/bosh/registry/api_controller.rb:22:in `block in <class:ApiController>'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/bosh-registry-1.3202.0/lib/bosh/registry/instance_manager.rb:29:in `read_settings'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/bosh-registry-1.3202.0/lib/bosh/registry/instance_manager.rb:54:in `check_instance_ips'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/bosh-registry-1.3202.0/lib/bosh/registry/instance_manager/openstack.rb:79:in `instance_ips'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/fog-1.34.0/lib/fog/openstack/models/compute/server.rb:151:in `private_ip_addresses'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/fog-1.34.0/lib/fog/openstack/models/compute/server.rb:127:in `floating_ip_addresses'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/fog-1.34.0/lib/fog/openstack/models/compute/server.rb:109:in `all_addresses'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/fog-1.34.0/lib/fog/openstack/requests/compute/list_all_addresses.rb:6:in `list_all_addresses'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/fog-1.34.0/lib/fog/openstack/compute.rb:355:in `request'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/fog-core-1.32.1/lib/fog/core/connection.rb:81:in `request'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/excon-0.45.4/lib/excon/connection.rb:233:in `request'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/excon-0.45.4/lib/excon/middlewares/base.rb:15:in `request_call'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/excon-0.45.4/lib/excon/middlewares/base.rb:15:in `request_call'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/excon-0.45.4/lib/excon/middlewares/base.rb:15:in `request_call'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/excon-0.45.4/lib/excon/middlewares/instrumentor.rb:22:in `request_call'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/excon-0.45.4/lib/excon/middlewares/mock.rb:47:in `request_call'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/excon-0.45.4/lib/excon/connection.rb:106:in `request_call'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/excon-0.45.4/lib/excon/connection.rb:387:in `socket'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/excon-0.45.4/lib/excon/connection.rb:387:in `new'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/excon-0.45.4/lib/excon/ssl_socket.rb:119:in `initialize'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/excon-0.45.4/lib/excon/ssl_socket.rb:122:in `rescue in initialize'
from /var/vcap/packages/registry/gem_home/ruby/2.1.0/gems/excon-0.45.4/lib/excon/ssl_socket.rb:122:in `select'
This is an acepted bug in excon 0.45.4 and has been fixed by adding a timeout to IO.select in 0.49.
fog-core is already updated to consume excon 0.49. We're now waiting for fog and fog-openstack to be updated to consume fog-core 1.38.0.
While this prevents excon from blocking forever, it might be a good idea to have a detailed look at the registry on what else could be done to prevent one hanging call blocking the entire registry for everyone.
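The excon fix boils down to passing a timeout to the select call, so a hung peer produces a recoverable error instead of blocking the event loop forever. In Python terms (illustrative only, not the Ruby code from excon):

```python
import select
import socket

def readable_within(sock, timeout):
    """Return True if `sock` becomes readable within `timeout` seconds.

    select() with no timeout blocks indefinitely -- the behaviour that
    froze the registry; passing a finite timeout (as excon 0.49 does)
    turns a hung connection into something the caller can handle.
    """
    ready, _, _ = select.select([sock], [], [], timeout)
    return bool(ready)

# A socket pair with no data pending times out instead of hanging:
a, b = socket.socketpair()
print(readable_within(a, 0.1))  # False: nothing to read
b.sendall(b"x")
print(readable_within(a, 0.5))  # True: data is waiting
a.close(); b.close()
```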
Updating fog and excon sounds ok.
I've created https://www.pivotaltracker.com/story/show/119363951 to bump fog in bosh-registry.
Fixed with bosh 257.14
|
gharchive/issue
| 2016-04-21T15:44:00
|
2025-04-01T06:38:12.846469
|
{
"authors": [
"cppforlife",
"voelzmo"
],
"repo": "cloudfoundry/bosh",
"url": "https://github.com/cloudfoundry/bosh/issues/1234",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
583327696
|
Please complete the Cloudfoundry Component Log Timestamp Audit - as per: CF-RFC#030
Hi There!
In an effort to assure all CF components use a consistent logging timestamp as per CF-RFC#030, I'm submitting this issue requesting a little action from y'all on this x-component-team effort.
First
Please complete this audit as soon as possible.
this tracker story template includes additional information and tools to aid in completing the audit.
Second
If additional work is required to meet the requirements outlined in CF-RFC#030 please create, and take action to address, github issue(s) describing the work required to meet those requirements.
Thanks so much!
The CF-RFC#030 Authors (Josh Collins and Amelia Downs)
@heyjcollins
I asked the editor access to the audit spreadsheet.
By the way note that for the postgres-release we are only talking about bosh logs (pre-start and monit). PostgreSQL is a third-party app and its log are only configurable through the log_line_prefix parameter.
I've completed the audit and opened a story in Pivotal Tracker to address the requirement.
|
gharchive/issue
| 2020-03-17T22:07:11
|
2025-04-01T06:38:12.859908
|
{
"authors": [
"heyjcollins",
"valeriap"
],
"repo": "cloudfoundry/postgres-release",
"url": "https://github.com/cloudfoundry/postgres-release/issues/57",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|