| id | text | source | created | added | metadata |
|---|---|---|---|---|---|
2556281051 | golang uplifts
The latest kubectl version 1.31.1 has two vulnerabilities detected
CVE-2024-34156 and CVE-2024-34158.
CVE-2024-34156
Calling Decoder.Decode on a message which contains deeply nested structures can cause a panic due to stack exhaustion. This is a follow-up to CVE-2022-30635.
ADP: CISA-ADP
Base Score: [7.5 HIGH]
CVE-2024-34158
Calling Parse on a "// +build" build tag line with deeply nested expressions can cause a panic due to stack exhaustion.
ADP: CISA-ADP
Base Score: [7.5 HIGH]
Both can be fixed by uplifting the stdlib version to 1.22.7.
It would be useful to do this.
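For reference, an uplift like this is normally a one-line change to the consuming module's go.mod; a sketch (exact placement depends on the module file):

```
// go.mod fragment: since Go 1.21 the go directive may carry a patch
// version, and bumping it pulls in the patched standard library on rebuild.
go 1.22.7
```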
This is handled automatically by the release process and is not something managed by kubectl. We just need to wait for the minor version bumps.
/close
| gharchive/issue | 2024-09-30T10:52:29 | 2025-04-01T06:44:44.297293 | {
"authors": [
"BartyBoi1128",
"ardaguclu"
],
"repo": "kubernetes/kubectl",
"url": "https://github.com/kubernetes/kubectl/issues/1661",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
200562654 | No option to set http_proxy for nodes, network timeout in make deploy
I am trying to use kubernetes-anywhere to create a cluster in vSphere.
The VMs need to have http_proxy configured in order to pull the Docker images needed for the Kubernetes cluster, but I don't see any option to set a proxy.
null_resource.node2 (remote-exec): Failed to start kubelet.service: Unit kubelet.service failed to load: No such file or directory.
null_resource.node4 (remote-exec): Failed to start kubelet.service: Unit kubelet.service failed to load: No such file or directory.
null_resource.node4: Creation complete
null_resource.node2: Creation complete
null_resource.node5 (remote-exec): docker: Network timed out while trying to connect to https://index.docker.io/v1/repositories/ashivani/k8s-ignition/images. You may want to check your internet connection or if you are behind a proxy..
null_resource.node5 (remote-exec): See 'docker run --help'.
null_resource.node5 (remote-exec): Failed to execute operation: No such file or directory
null_resource.node5 (remote-exec): Failed to start kubelet.service: Unit kubelet.service failed to load: No such file or directory.
null_resource.node3 (remote-exec): docker: Network timed out while trying to connect to https://index.docker.io/v1/repositories/ashivani/k8s-ignition/images. You may want to check your internet connection or if you are behind a proxy..
null_resource.node3 (remote-exec): See 'docker run --help'.
null_resource.node5: Creation complete
null_resource.node3 (remote-exec): Failed to execute operation: No such file or directory
null_resource.node3 (remote-exec): Failed to start kubelet.service: Unit kubelet.service failed to load: No such file or directory.
null_resource.node3: Creation complete
Curious if you found a solution for this?
No, not really.
I don't know if this helps, but I was able to get further by adding lines to the Dockerfile at the kubernetes-anywhere root and in phase2/ignition/Dockerfile:
#FROM alpine
FROM mhart/alpine-node:6.4.0
ENV http_proxy "http://<host>:8080"
ENV https_proxy "http://<host>:8080"
I also tried to edit phase2/ignition/vanilla/kubelet.service to add proxy to the environment there.
ExecStart=/usr/bin/docker run \
--net=host \
--pid=host \
--privileged \
-e http_proxy=http://<host>:8080 \
-e https_proxy=http://<host>:8080 \
-v /dev:/dev \
-v /sys:/sys:ro \
-v /var/run:/var/run:rw \
-v /var/lib/docker/:/var/lib/docker:rw \
-v /var/lib/kubelet/:/var/lib/kubelet:shared \
-v /var/log:/var/log:shared \
-v /srv/kubernetes:/srv/kubernetes:ro \
-v /etc/kubernetes:/etc/kubernetes:ro \
%(docker_registry)s/hyperkube-amd64:%(kubernetes_version)s \
/hyperkube kubelet %(kubelet_args)s
But I'm still running into problems
@TreverW did you get this working? I'm at about the same spot as you...
I have not. I basically gave up for now.
Setting http_proxy for the docker service is necessary but not sufficient:
In /etc/systemd/system/docker.service.d/https-proxy.conf
[Service]
Environment="HTTPS_PROXY=https://:"
The installation then fails for me when ignition tries to fetch files. It looks to me like ignition fails to make use of the ProxyFromEnvironment functionality that is used in Go's default HTTP transport:
https://github.com/coreos/ignition/blob/f671ad6650fb1c0274ba87b13d11ed75877bc970/internal/resource/http.go#L53-L76
https://github.com/coreos/ignition doesn't seem to have issues enabled, so I'm not sure how to raise this. I guess they'll accept a PR.
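As a side note, the usual proxy-from-environment behavior the comment above refers to looks like this (a Python sketch for illustration only; the proxy URL is hypothetical):

```python
import os
import urllib.request

# Standard HTTP clients (Go's default transport, Python's urllib, curl, ...)
# pick the proxy up from environment variables like https_proxy.
os.environ["https_proxy"] = "http://proxy.example:8080"  # hypothetical proxy

proxies = urllib.request.getproxies()
print(proxies["https"])  # http://proxy.example:8080
```

A client that builds its own transport without wiring this up, as ignition appears to, silently ignores these variables.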
| gharchive/issue | 2017-01-13T07:12:57 | 2025-04-01T06:44:44.309282 | {
"authors": [
"MattMencel",
"TreverW",
"dayglo",
"donaldh",
"maplelabs"
],
"repo": "kubernetes/kubernetes-anywhere",
"url": "https://github.com/kubernetes/kubernetes-anywhere/issues/313",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
267745313 | Allow insecure /os_cacert option for openstack provider
Fixes #467
/assign @jamiehannaford @shashidharatd
I can add the OS_CACERT, but I have not discovered how you skip certain things from being in the .tf file when not specified in the Kconfig menu. Is this just a simple if statement that will add to the provider block?
@cmluciano I think just adding an option is fine. It looks like the OpenStack terraform provider only sets the config if it's a non-empty string:
https://github.com/terraform-providers/terraform-provider-openstack/blob/95544ec71a83f5736641955e9d0542951c7f4ca3/openstack/config.go#L79
so if the default is "" in kconfig we should be okay.
@jamiehannaford added a commit. I'm a little unsure of the formatting, but will keep testing locally.
/assign @pipejakob
/lgtm
I just noticed that "" does not work for defaults. The jsonnet code gets confused and complains that it cannot fill in the empty string.
I am not an expert in jsonnet, but you can try using a conditional like the one below:
cacert_file: (if openstack.os_cacert == "" then "" else "${file(\"%s\")}" % openstack.os_cacert)
There could be better ways :)
Hey @cmluciano, I wasn't sure if your most recent comment was a call to action to fix defaults, or if you just want this merged as-is. You do have an LGTM, so just give me a signal if you want this merged now.
@pipejakob Let's hold off for now. I need to fix the defaulting logic within jsonnet. I will push up a revised commit.
| gharchive/pull-request | 2017-10-23T17:05:48 | 2025-04-01T06:44:44.314824 | {
"authors": [
"cmluciano",
"jamiehannaford",
"pipejakob",
"shashidharatd"
],
"repo": "kubernetes/kubernetes-anywhere",
"url": "https://github.com/kubernetes/kubernetes-anywhere/pull/468",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
185776962 | Overall Improvements
[ ] All pages need a unique title: right now a lot of the generated docs have no title (e.g. the kubectl pages)
[ ] Replace the JavaScript redirects with something better, or drop them: everything in https://github.com/kubernetes/kubernetes.github.io/blob/master/js/redirects.js is a client-side redirect that first throws a 404 and then redirects, which is pretty bad for SEO. We should either replace them with 301 redirects via a web server or not have them at all; they also hurt page load time.
[ ] SSL: we can get SSL pretty easily by hosting via Netlify instead of GitHub Pages. It would not require any changes to Jekyll etc.; it would just be a flip of a DNS record and a setting inside the Netlify UI. (I am willing to help; I just don't have the right permissions, but I have a lot of other sites I've set up this way.)
All great comments; page titles will help the searchability of the site.
SGTM - Google projects need SEO help
Replace javascript redirects with something better or not at all
Netlify can handle those redirects too out of the box :smile:
https://www.netlify.com/docs/redirects/
This is a rough translation of what's in that JS to a _redirects file:
# 301 redirects
/third_party/swagger-ui http://kubernetes.io/kubernetes/third_party/swagger-ui/
/resource-quota http://kubernetes.io/docs/admin/resourcequota/
/horizontal-pod-autoscaler http://kubernetes.io/docs/user-guide/horizontal-pod-autoscaling/
/docs/roadmap https://github.com/kubernetes/kubernetes/milestones/
/api-ref https://github.com/kubernetes/kubernetes/milestones/
/docs/user-guide/overview http://kubernetes.io/docs/whatisk8s/
The forwarding rules to the repo branch might be more involved, but still doable. That _redirects file can also be generated from a template; it's always processed after Netlify builds the site.
Tagged this with P1 and Needs tech review. I think we should figure this out soon. Netlify seems to be the right way forward.
I'm targeting 2017Q3 to move k8s.io prod over to Netlify.
@chenopis happy to jump in a Hangout or meet somewhere else to prepare this.
@calavera Awesome. Thanks for volunteering to jump in. Once k8s 1.7 is out the door, hopefully next week, I will setup some meetings to work on a plan for the cutover.
OMG, all three of these are done!
| gharchive/issue | 2016-10-27T20:51:05 | 2025-04-01T06:44:44.321079 | {
"authors": [
"calavera",
"chenopis",
"chrislovecnm",
"jaredbhatti",
"jessfraz"
],
"repo": "kubernetes/kubernetes.github.io",
"url": "https://github.com/kubernetes/kubernetes.github.io/issues/1575",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
177980309 | Minor typo correction
I signed it!
Thanks for the update.
| gharchive/pull-request | 2016-09-20T06:57:42 | 2025-04-01T06:44:44.323102 | {
"authors": [
"jaredbhatti",
"pabloguerrero"
],
"repo": "kubernetes/kubernetes.github.io",
"url": "https://github.com/kubernetes/kubernetes.github.io/pull/1268",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
212922142 | Updated user guide for kubectl taint by adding NoExecute.
Fixed part of #2737
LGTM.
FYI: @mburke5678
LGTM.
You may want to rebase onto the latest master, since it's showing "This branch is out-of-date with the base branch".
This is targeted for 1.6; can we have this merged soon? @aveshagarwal @kevin-wangzefeng @cmluciano
LGTM
/cc @kubernetes/sig-docs-maintainers
LGTM
@gyliu513 @chenopis There is no need to manually update the doc here. These files are all generated (see the bottom of this file).
We will run scripts to update the imported doc, like kubectl help message.
https://github.com/kubernetes/kubernetes.github.io/blob/master/update-imported-docs.sh
For new reference docs, @pwittrock has script in https://github.com/kubernetes-incubator/reference-docs to update the new style doc.
FYI, we are not maintaining the old docs.
cc: @devin-donnelly
Good to know, thanks @ymqytw
| gharchive/pull-request | 2017-03-09T03:02:11 | 2025-04-01T06:44:44.328351 | {
"authors": [
"aveshagarwal",
"davidopp",
"gyliu513",
"kevin-wangzefeng",
"mburke5678",
"ymqytw"
],
"repo": "kubernetes/kubernetes.github.io",
"url": "https://github.com/kubernetes/kubernetes.github.io/pull/2744",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
257315622 | fix typo of basic-stateful-set
The name needs the controller name prefix, for example:
-bash-4.2# kubectl create -f web-svc-nfs.yaml
statefulset "zzz" created
/lgtm
| gharchive/pull-request | 2017-09-13T09:31:45 | 2025-04-01T06:44:44.332840 | {
"authors": [
"jianglingxia",
"xiangpengzhao"
],
"repo": "kubernetes/kubernetes.github.io",
"url": "https://github.com/kubernetes/kubernetes.github.io/pull/5435",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
495121620 | Make PV tests work on kubemark clusters
Enabling PVs in the load test failed the kubemark presubmit because scheduler failed to schedule pods with PVs
Unable to schedule test-y6u59n-1/small-statefulset-0-0: no fit: 0/101 nodes are available: 1 node(s) were unschedulable, 100 node(s) had volume node affinity conflict.; waiting
In order to make it work on kubemark, we'll most likely have to change a few places in the code (or ensure they already work this way) so that operations related to attaching and mounting PDs are faked in kubemark:
Attach Detach Controller
HollowKubelet
Scheduler
?
Ref. https://github.com/kubernetes/perf-tests/issues/704
/good-first-issue
I'd like to pick this up, where would I begin?
Hey, @Jukie, that's great to hear!
This one is a bit more complicated, I'll need to think about it more and come up with a more concrete list of steps. Unfortunately, I won't have time to do it until ~mid next week.
In the meantime, feel free to pick another issue from the help wanted list. I think https://github.com/kubernetes/perf-tests/issues/595 is relatively simple and will get you through all the prep work required to start contributing to kubernetes (forking repos, signing CLA, etc.).
/remove-lifecycle stale
/remove-lifecycle stale
We still want to do it...
/remove-lifecycle stale
/lifecycle frozen
/assign
| gharchive/issue | 2019-09-18T10:01:37 | 2025-04-01T06:44:44.802877 | {
"authors": [
"Jukie",
"RafalKorepta",
"mm4tt",
"wojtek-t"
],
"repo": "kubernetes/perf-tests",
"url": "https://github.com/kubernetes/perf-tests/issues/803",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
474088009 | Clean up load tests experiments
Move and rename experimental_load to bundled_services_and_deployments and add a comment in it that it's currently frozen
Create extended_config.yaml which is an exact copy of load/config at this point. This will make reviewing future changes easier.
/assign @oxddr
/lgtm
/lgtm
/approve
| gharchive/pull-request | 2019-07-29T14:35:47 | 2025-04-01T06:44:44.805126 | {
"authors": [
"mm4tt",
"oxddr",
"wojtek-t"
],
"repo": "kubernetes/perf-tests",
"url": "https://github.com/kubernetes/perf-tests/pull/702",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1594576428 | How is the Deep Copy files are generated ?
I have cloned this repository and tried to run ./hack/update-codegen.sh, but the script is not generating the deepcopy-related files. What am I missing?
The Go version is go1.19.5 darwin/amd64.
Different repo but same problem:
I solved it by running go mod vendor in the repo, modifying the script as you can see in the repo https://github.com/fulviodenza/sample-apiserver, and then moving the right files into the right folders. Unfortunately, this was the only method I found.
| gharchive/issue | 2023-02-22T07:12:24 | 2025-04-01T06:44:44.807138 | {
"authors": [
"RameshRM",
"fulviodenza"
],
"repo": "kubernetes/sample-controller",
"url": "https://github.com/kubernetes/sample-controller/issues/89",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
425475165 | Issue with k8s.io/docs/concepts/overview/working-with-objects/kubernetes-objects/
This is a...
[ ] Feature Request
[x] Bug Report
Problem:
The Kubernetes API Reference url is not working
Proposed Solution:
Generate the new Documentation
Page to Update:
https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/
Fixed by #13442
| gharchive/issue | 2019-03-26T15:06:35 | 2025-04-01T06:44:44.849770 | {
"authors": [
"enGMzizo",
"tengqm"
],
"repo": "kubernetes/website",
"url": "https://github.com/kubernetes/website/issues/13452",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
436000291 | Issue with k8s.io/docs/concepts/workloads/controllers/deployment/
This is a Bug Report
Incorrect output of kubectl get deploy is shown in documentation:
Actual Output: -
kubectl get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deploy 5/5 5 5 13m
Output in documentation: -
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx-deployment 3 3 3 3 18s
Proposed Solution:
Kindly update the document
Page to Update:
https://kubernetes.io/...
kubectl version --short
Client Version: v1.14.0
Server Version: v1.14.0
/good-first-issue
I would like to solve this issue as it would be a good start, but @shahamit2 can you please provide the exact URL of the page where the changes are required?
hi @Vageesha17 the page to edit is
content/en/docs/concepts/workloads/controllers/deployment.md
Thanks for your contribution
Hi @shahamit2, I was thinking about starting to work on this, but I reviewed the document, and the YAML file used in the example configures 3 replicas, which seems to match the output shown in the documentation. Please kindly advise.
/language en
/triage needs-information
/remove-help
/remove-good-first-issue
| gharchive/issue | 2019-04-23T05:03:58 | 2025-04-01T06:44:44.856030 | {
"authors": [
"DanyC97",
"Vageesha17",
"judavi",
"nelvadas",
"sftim",
"shahamit2"
],
"repo": "kubernetes/website",
"url": "https://github.com/kubernetes/website/issues/13964",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
505162431 | Issue with k8s.io/docs/tasks/administer-cluster/reconfigure-kubelet/
This is a Bug Report
The article suggests running bash kubectl proxy --port=8001 &
Problem:
However, kubectl is a binary.
Proposed Solution:
Either you run bash -c "kubectl proxy --port=8001" & or simply
kubectl proxy --port=8001 &
Page to Update:
https://k8s.io/docs/tasks/administer-cluster/reconfigure-kubelet/
@ivalexm , Thank you for reporting this issue.
/language en
/kind bug
/good-first-issue
/priority backlog
| gharchive/issue | 2019-10-10T09:54:18 | 2025-04-01T06:44:44.859637 | {
"authors": [
"ivalexm",
"savitharaghunathan"
],
"repo": "kubernetes/website",
"url": "https://github.com/kubernetes/website/issues/16799",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
706494103 | Add successThreshold default value of the startupProbe
This is a Bug Report
Configure Probes
Problem:
The "successThreshold" of livenessProbe is only allowed to be "1", but it is also only allowed to be "1" on "startupProbe".
So it should read as follows:
successThreshold: Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1.
Proposed Solution:
Add "startup" to the "successThreshold" description: it must be "1" for "startupProbe" as well, the same as for "livenessProbe".
Page to Update:
Configure Probes
[X]
successThreshold: Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness. Minimum value is 1.
[O]
successThreshold: Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup Probes. Minimum value is 1.
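For context, a probe block that spells the field out explicitly would look roughly like this (illustrative values; the endpoint and timings are hypothetical):

```yaml
startupProbe:
  httpGet:
    path: /healthz   # hypothetical endpoint
    port: 8080
  failureThreshold: 30
  periodSeconds: 10
  successThreshold: 1  # must be 1 for startup (and liveness) probes
```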
/sig node
/kind bug
/language en
/priority backlog
/triage accepted
| gharchive/issue | 2020-09-22T15:41:33 | 2025-04-01T06:44:44.863929 | {
"authors": [
"bysnupy",
"sftim"
],
"repo": "kubernetes/website",
"url": "https://github.com/kubernetes/website/issues/24049",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1103179420 | Install and Set Up kubectl on Windows
Hello,
I have recently installed kubectl on Windows 10 using curl and I have noticed that there is a slight error at step 3. Namely, the line says:
"Add the binary in to your PATH."
when it really should be:
"Add the binary's folder to your PATH."
I have tested it a few times and indeed, adding \path_to_kubectl\kubectl.exe to PATH does not work, while simply adding \path_to_kubectl\ yields the correct result (i.e. being able to invoke kubectl commands from the command prompt).
I know it is a minor issue but for beginners (like myself) it could make the difference between completing the installation or not.
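The behavior described above follows from how PATH lookup works: each PATH entry is treated as a directory and joined with the command name, so an entry that is itself a file path never matches. A small Python sketch of that rule (illustrative only, not part of the docs):

```python
import os
import shutil
import stat
import tempfile

with tempfile.TemporaryDirectory() as d:
    # Create a fake "kubectl" executable inside a temporary folder.
    exe = os.path.join(d, "kubectl")
    with open(exe, "w") as f:
        f.write("#!/bin/sh\n")
    os.chmod(exe, os.stat(exe).st_mode | stat.S_IXUSR)

    # PATH entry is the folder: lookup joins folder + "kubectl" and succeeds.
    print(shutil.which("kubectl", path=d) is not None)    # True

    # PATH entry is the file itself: lookup joins file + "kubectl" and fails.
    print(shutil.which("kubectl", path=exe) is not None)  # False
```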
Even though I have not tried other methods of installation, I suspect that the same step should also be corrected in the section Install kubectl convert plugin, where it also says "Add the binary in to your PATH."
Best regards,
Paula
/triage accepted
Similar issue exists in "Install kubectl binary with curl on Linux" section.
and then add ~/.local/bin/kubectl to $PATH
cf. In "Install and Set Up kubectl on macOS" page:
add ~/.local/bin/kubectl to $PATH
is kind of techie vernacular (it's how I'd say it in spoken English to a colleague), but in technical docs we can be more precise.
/language en
Hi, I would like to work on this issue.
/assign
/sig windows
| gharchive/issue | 2022-01-14T07:49:34 | 2025-04-01T06:44:44.870230 | {
"authors": [
"Babapool",
"PaulaMihalcea",
"jihoon-seo",
"mehabhalodiya",
"sftim"
],
"repo": "kubernetes/website",
"url": "https://github.com/kubernetes/website/issues/31341",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1169370360 | Missing /docs/tutorials/clusters/seccomp page or this link is incorrect
This is a Bug Report
Problem:
There is a link to /docs/tutorials/clusters/seccomp on line 481 of the page content/en/docs/reference/labels-annotations-taints/_index.md, but this link does not exist.
Proposed Solution:
Confirm this link /docs/tutorials/clusters/seccomp is correct or not,
if it is correct, you need to supplement the content related to the link.
If it is incorrect, you need to modify the link address.
Page to Update:
https://kubernetes.io/docs/reference/labels-annotations-taints/_index.md
https://kubernetes.io/docs/tutorials/clusters/seccomp
v1.23
NA
https://kubernetes.io/docs/tutorials/clusters/seccomp/ redirects to https://kubernetes.io/docs/tutorials/security/seccomp/
We should update https://kubernetes.io/docs/reference/labels-annotations-taints/ so that it hyperlinks to https://kubernetes.io/docs/tutorials/security/seccomp/ instead.
/language en
/help
/triage accepted
can I work on this?
/assign
#32279
Closing this issue as it got resolved in #32279
/close
| gharchive/issue | 2022-03-15T08:57:39 | 2025-04-01T06:44:44.876612 | {
"authors": [
"PriyanshuAhlawat",
"javadoors",
"mehabhalodiya",
"sftim",
"tewarig"
],
"repo": "kubernetes/website",
"url": "https://github.com/kubernetes/website/issues/32273",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1319187182 | [ko] Translate concepts/policy/pid-limiting in Korean
This is a Feature Request
What would you like to be added
Translate concepts/policy/pid-limiting in Korean
Why is this needed
No translation with concepts/policy/pid-limiting in Korean
Comments
I was scheduled to participate in the Korean localization last January, but could not due to personal circumstances. I apologize, and this time I definitely want to contribute.
Since the dev-1.24-ko.2 merge is coming up soon, I plan to target a PR on the next branch. Thank you.
Page to update:
https://kubernetes.io/docs/concepts/policy/pid-limiting/
/language ko
/assign
Closing this as a duplicate of an existing issue. https://github.com/kubernetes/website/issues/35204
I will sign up under a different issue. :)
| gharchive/issue | 2022-07-27T08:13:57 | 2025-04-01T06:44:44.880771 | {
"authors": [
"nine01223"
],
"repo": "kubernetes/website",
"url": "https://github.com/kubernetes/website/issues/35430",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
239426020 | Issue with k8s.io/docs/tutorials/kubernetes-basics/
This is a...
[x] Feature Request
[ ] Bug Report
Problem:
Love your Node.js tutorial, but we really need a real-life starting point.
Proposed Solution:
Java tutorial to deploy a docker container with Java / Google Stackdriver Logging / Google Endpoints.
Python tutorial to deploy a docker container with Python / Google Stackdriver Logging / Google Endpoints.
Page to Update:
http://kubernetes.io/...
/help
/remove-lifecycle stale
Also, I'm really not sure this belongs here, but I'll keep it open for the moment in the hope that GCP folks will look into it.
| gharchive/issue | 2017-06-29T09:58:08 | 2025-04-01T06:44:44.884927 | {
"authors": [
"errordeveloper",
"simci-wendeldw"
],
"repo": "kubernetes/website",
"url": "https://github.com/kubernetes/website/issues/4227",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
445843617 | Initialize Taints and Tolerations in Bahasa Indonesia.
This PR addresses #13929
/assign @girikuncoro
nice work, thank you for your contribution!
/lgtm
/approve
| gharchive/pull-request | 2019-05-19T17:24:12 | 2025-04-01T06:44:44.886114 | {
"authors": [
"girikuncoro",
"irvifa"
],
"repo": "kubernetes/website",
"url": "https://github.com/kubernetes/website/pull/14405",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
451908719 | WIP: Complete part of the Chinese translation for job
@fudali113 Please follow the translation guidelines; we recommend keeping the Chinese and English text side by side for comparison.
https://github.com/k8smeetup/k8s-official-translation
OK, I'll first apply to join the WeChat group to get familiar with the process.
@fudali113 This PR has been open for quite a while; please update it soon. Thanks.
Feel free to reopen and continue work on it.
| gharchive/pull-request | 2019-06-04T10:06:21 | 2025-04-01T06:44:44.887777 | {
"authors": [
"fudali113",
"markthink",
"tengqm"
],
"repo": "kubernetes/website",
"url": "https://github.com/kubernetes/website/pull/14712",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
494640992 | Init Russian localization
Since at different times people started a new Russian translation (see PRs #16003 and #16378), we discussed and decided to merge them into one branch to create one PR.
/cc: @dianaabv, @msheldyakov, @aisonaku
Waiting for @bogdaner2 to authorize the CLA
/check-cla
You need to be added inside the org. Feel free to mention me to sponsor your entry @Potapy4 @aisonaku @dianaabv @msheldyakov
/retest
/verify-owners
/uncc
I don't speak Russian well enough to comment
It seems that @aisonaku (https://github.com/kubernetes/org/issues/1209) and @msheldyakov (https://github.com/kubernetes/org/issues/1239) are members of the org.
It seems that we have a problem. @nzoueidi @zacharysarah @mrbobbytables Could you check the memberships?
I checked, and they are already members of the kubernetes org. My first thought is that they haven't yet accepted the invitation sent to them for the kubernetes org on GitHub. Could you please confirm this, @aisonaku and @msheldyakov?
/verify-owners
/verify-owners
Could you rebase the different files? We got a merge conflict.
@remyleone unfortunately, @Potapy4 cannot resolve conflicts right now because he is on vacation for two weeks. Can you please do it, or someone from the Russian team?
@Potapy4 gave access to his fork and I resolved all conflicts; the PR is now ready for merge! :heavy_check_mark:
@Potapy4 Is this PR still a work in progress, or is it ready for review?
@Potapy4 Is this PR still a work in progress? If it's ready for review, please /retitle the PR to remove "WIP". Thanks in advance!
yep, it's ready for review 🙂
Did people want to look at tweaks for https://github.com/kubernetes/website/pull/16404/files#diff-e8a2af53a84007f16a69bd8df94296efR152 and other English lines in this file?
If not: I would leave as is, merge as is, and log an insta-frozen cleanup issue to track the work. Tracking that work helps make sure it isn't forgotten.
@sftim I translated the remaining strings :heavy_check_mark:
There are no remaining English sentences as far as I can see. @zacharysarah Could we merge this PR and add another language 🇷🇺 ? 😄
This PR needs more work before it can merge. It looks like netlify is failing:
2:13:22 AM: ERROR 2019/11/08 10:13:22 [ru] REF_NOT_FOUND: Ref "/docs/concepts/overview/what-is-kubernetes": "/opt/build/repo/content/ru/_index.html:1:1": page not found
This looks ready to merge.
Congratulations! 🎉
@Potapy4 @lex111 @dianaabv @msheldyakov @aisonaku @remyleone
/lgtm
/approve
/language ru
| gharchive/pull-request | 2019-09-17T13:57:03 | 2025-04-01T06:44:44.897062 | {
"authors": [
"Potapy4",
"aisonaku",
"bogdaner2",
"lex111",
"msheldyakov",
"nzoueidi",
"remyleone",
"sftim",
"zacharysarah"
],
"repo": "kubernetes/website",
"url": "https://github.com/kubernetes/website/pull/16404",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
512262270 | add @oke-py as sig-docs-ja reviewer
add @oke-py as a new member of sig-docs-ja reviewer
/lgtm
/assign @jimangel
lgtm
I don't think I can comment on this one
/uncc
/lgtm
/approve
| gharchive/pull-request | 2019-10-25T02:00:26 | 2025-04-01T06:44:44.899207 | {
"authors": [
"MasayaAoyama",
"inductor",
"jimangel",
"nasa9084",
"sftim"
],
"repo": "kubernetes/website",
"url": "https://github.com/kubernetes/website/pull/17182",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
598749266 | Translate configuration/resource-bin-packing.md in Korean.
from #19954
/language ko
I didn't know this document was already being translated and took a look myself — may I leave a few comments?
@pjhwa Or you're welcome to review the document 😄
@ysyukr I'm still new at this, so I wouldn't quite call it a review, but I'll take a look. ^^
@pjhwa I believe anyone can offer comments (reviews). 😄
If you find anything that needs fixing while reading, I'd appreciate it if you selected that part and left a comment.
/assign @pjhwa
@pjhwa I've updated and squashed the commits 😄
Please take a look; if it looks good to you, leave an /lgtm comment, and other reviewers or approvers will do a further check.
@ysyukr Thank you!
/lgtm
/lgtm
It seems it doesn't work because I haven't accepted the membership invitation yet. ^^
@pjhwa For membership, you file an issue in the org repo; once it is closed, an invitation email for the k8s GitHub org is sent. Check that email and accept the invitation to complete your membership registration.
An invitation email for the k8s org is sent; check that email and accept the invitation to complete membership registration.
Oops... I hadn't checked my email. ^^ Thank you.
/lgtm
@seokho-son Thanks for the review. I checked and updated accordingly. 😄
@ysyukr I confirmed there are no problems, including the preview. Thank you.
/lgtm
/approve
| gharchive/pull-request | 2020-04-13T08:24:57 | 2025-04-01T06:44:44.904655 | {
"authors": [
"pjhwa",
"seokho-son",
"ysyukr"
],
"repo": "kubernetes/website",
"url": "https://github.com/kubernetes/website/pull/20276",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
665945722 | Remove redundant container-environment-variables page
container-environment-variables.md is redundant with container-environment.md; their content is exactly the same.
/assign @tengqm
wow ...
/lgtm
/approve
| gharchive/pull-request | 2020-07-27T03:22:33 | 2025-04-01T06:44:44.906712 | {
"authors": [
"tengqm",
"xieyanker"
],
"repo": "kubernetes/website",
"url": "https://github.com/kubernetes/website/pull/22767",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
956798369 | [ko] Update outdated files in dev-1.21-ko.7 (p3)
Ref: #28963
This commit fixes M20~M26 on 28963.
[x] M20. content/en/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive.html | 2(+XS) 1(-)
[x] M21. content/en/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive.html | 3(+XS) 1(-)
[x] M22. content/en/docs/tutorials/kubernetes-basics/explore/explore-interactive.html | 3(+XS) 1(-)
[x] M23. content/en/docs/tutorials/kubernetes-basics/explore/explore-intro.html | 2(+XS) 2(-)
[x] M24. content/en/docs/tutorials/kubernetes-basics/expose/expose-interactive.html | 3(+XS) 1(-)
[x] M25. content/en/docs/tutorials/kubernetes-basics/scale/scale-interactive.html | 3(+XS) 1(-)
[x] M26. content/en/docs/tutorials/kubernetes-basics/update/update-interactive.html | 6(+XS) 1(-)
@yoonian Thanks for the thorough review!
The current PR reflects, in the already-translated Korean documents, the changes made to the English source on the main branch while the Korean localization team branch was switched from dev-1.21-ko.6 to dev-1.21-ko.7.
Since Korean documents are translated from the English source, and the content you pointed out is not yet reflected in the English source either, it would be difficult to apply it to the Korean document on our own.
Therefore, I have reflected your feedback in a PR (#29175) that fixes the English source. Once that PR is merged, it will be applied in the next branch-switching work.
/lgtm
Thank you!
/approve
| gharchive/pull-request | 2021-07-30T14:29:22 | 2025-04-01T06:44:44.911149 | {
"authors": [
"ClaudiaJKang",
"seokho-son",
"yoonian"
],
"repo": "kubernetes/website",
"url": "https://github.com/kubernetes/website/pull/29168",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
971742011 | [hi] Add content\hi\docs\setup\production-environment\turnkey-solutions.md
localizes file content\hi\docs\setup\production-environment\turnkey-solutions.md
Made the requested changes @mittalyashu
/retitle [hi] Add content\hi\docs\setup\production-environment\turnkey-solutions.md
/close
in favour of #29954
| gharchive/pull-request | 2021-08-16T13:29:59 | 2025-04-01T06:44:44.913328 | {
"authors": [
"ShivamTyagi12345",
"anubha-v-ardhan"
],
"repo": "kubernetes/website",
"url": "https://github.com/kubernetes/website/pull/29424",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1180270564 | [ko] Fix K8s versions in previous release webpage
Closes #32477
Thanks !
/lgtm
/approve
| gharchive/pull-request | 2022-03-25T02:31:31 | 2025-04-01T06:44:44.914564 | {
"authors": [
"jihoon-seo",
"seokho-son"
],
"repo": "kubernetes/website",
"url": "https://github.com/kubernetes/website/pull/32478",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1223680295 | Fix broken link in blog post
A couple of miscreant left-to-right mark characters led to this URL:
https://kubernetes.io/blog/2022/04/28/ingress-nginx-1-2-0/https://github.com/kubernetes-sigs/kpng
This commit should fix that, though it may look like nothing changed in the GitHub preview as the bad characters are non-printing.
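For anyone hunting similar issues: invisible bidirectional marks can be stripped programmatically before a URL ever reaches a page. A minimal Go sketch (illustrative only, not part of this PR):

```go
package main

import (
	"fmt"
	"strings"
)

// stripBidiMarks removes invisible directional formatting characters
// (such as U+200E LEFT-TO-RIGHT MARK) that can silently corrupt URLs
// when pasted from rich-text sources.
func stripBidiMarks(s string) string {
	return strings.Map(func(r rune) rune {
		switch r {
		case '\u200E', '\u200F', '\u202A', '\u202B', '\u202C':
			return -1 // negative return drops the rune
		}
		return r
	}, s)
}

func main() {
	bad := "https://github.com/kubernetes-sigs/kpng\u200E"
	fmt.Println(stripBidiMarks(bad))
}
```

Running this over link text in a pre-commit check would catch the non-printing characters that caused this bug.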
/approve
| gharchive/pull-request | 2022-05-03T04:22:34 | 2025-04-01T06:44:44.916518 | {
"authors": [
"craigbox",
"sftim"
],
"repo": "kubernetes/website",
"url": "https://github.com/kubernetes/website/pull/33413",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1251770786 | Broken Link Updated.
Fixed the broken link by modifying the lines mentioned in #33926.
Hey @davidopp @kevin-wangzefeng @Shubham82 . Please go through this PR. The broken link has been updated now. Feel free to suggest some changes again if necessary.
@tengqm Is it fine now?
Thanks.
/lgtm
/approve
You're welcome @tengqm . I got my first PR merged.
| gharchive/pull-request | 2022-05-29T02:36:52 | 2025-04-01T06:44:44.918372 | {
"authors": [
"NitishKumar06",
"tengqm"
],
"repo": "kubernetes/website",
"url": "https://github.com/kubernetes/website/pull/34019",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1679127347 | remove dead link
This heading was removed as part of https://github.com/kubernetes/website/pull/39501
@josephgardner Thank you for your contribution.
Could you please sign the CLA so the PR can be reviewed?
You can follow the steps documented here: CLA Steps
@josephgardner : Please sign the CLA before you begin contributing to the Kubernetes project. Since there has been no action on this PR from your end, we'll be closing it.
Thanks!
/close
| gharchive/pull-request | 2023-04-21T21:36:56 | 2025-04-01T06:44:44.920826 | {
"authors": [
"dipesh-rawat",
"divya-mohan0209",
"josephgardner"
],
"repo": "kubernetes/website",
"url": "https://github.com/kubernetes/website/pull/40804",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1914240179 | Add new page for kernel-level constraints
Fixes #39601
Add a new concept page that documents and compares the available kernel-level security constraints for Pods and containers
Where content was pulled from different pages, I've added comments. That content is verbatim, no grammar or style changes.
There are several outstanding TODOs that block merging
The Security Checklist page has smaller conceptual sections about seccomp apparmor and SELinux that we can remove, making the checklist more streamlined. Haven't done that yet
I used an HTML table because I wanted to wrap the lines
What's needed?
Review the content for accuracy
Address outstanding TODOs
Identify any other pages in the doc set that need to link to this page
/sig docs
/sig security
/language en
/wip
/cc @sftim @pjbgf @tallclair
My first reaction to this PR is that we may be placing the content under the wrong directory.
Not all users care about kernel-level settings or constraints. Even if they do care, their role might be the cluster administrator or a similar role who can manage node-level configurations.
It may be okay to link to this level of detail from the concepts section, but the contents themselves look to me more like cluster administration tasks. As a regular user, I sometimes cannot do anything at this layer.
kernel level settings or constraints
That sounds like a (new) reference page to me. Maybe under https://kubernetes.io/docs/reference/node/?
@tengqm @sftim yeah good points. It doesn't really have a ton of reference information (unlike, say, a page that lists the exact dropped capabilities in the default seccomp profile). But it's more fitting in the reference directory!
(As an aside, I dream of seeing the TOC organized by the goal, not by DITA category - so "Secure" being a section, with the security pages there and broken into sub-goals like "Use ServiceAccounts to give an identity to Pods". That'd be a massive undertaking though so I doubt it'd happen)
(As an aside, I dream of seeing the TOC organized by the goal, not by DITA category - so "Secure" being a section, with the security pages there and broken into sub-goals like "Use ServiceAccounts to give an identity to Pods". That'd be a massive undertaking though so I doubt it'd happen)
I did once try that as a proposed revision to https://kubernetes.io/docs/tasks/
Lots of work and hard to make (IMO) other than as a big-bang, size/xxl change.
It makes more sense to me as a reader. I want to do the thing > I learn what the thing is > I learn how to do the thing. I think it's the kind of change that would need tech writer support from CNCF eh?
@shannonxtreme
Are you actively working on these changes?
I am, but I'm on holiday and will only be back at it in January, after the 15th
I have pending TODOs that I don't know how to resolve:
Confirm that Privileged containers ignore AppArmor profiles
Confirm how privileged containers interact with SELinux
Need a sample CVE that used the mount(2) syscall
SIG Node and SIG Security folks should be able to give advice here. At least I hope so.
Confirm that Privileged containers ignore AppArmor profiles
This is effectively done by the container runtime. Both cri-o and docker fall back to unconfined when the given container is privileged. AFAIK, this is pretty consistent across runtimes.
Confirm how privileged containers interact with SELinux
I believe this is also the case, example would be docker or podman's documentation:
A privileged container turns off the security features that isolate the
container from the host. Dropped Capabilities, limited devices, read-only mount
points, Apparmor/SELinux separation, and Seccomp filters are all disabled.
Hello @shannonxtreme, Good day! We want to merge this sooner rather than later. Could you please rebase this PR so we can proceed with a second round of docs and tech reviews?
uh oh, I messed something up in the push
uh oh, I messed something up in the push
https://xkcd.com/1597/
Lol, okay we're good now
@shannonxtreme if you fix https://github.com/kubernetes/website/pull/43214#discussion_r1548172622 then I think this'll be good to merge; all the other feedback is much less important.
/remove-language ja
Just a lil bump
Can we merge this without incorporating the remaining nits? My only question is for the comments in markdown that say where the original content in some sections came from verbatim - is that helpful for localization or should I remove the comments before merge?
Yes; nits are (kind of) defined as things that don't block a merge. This PR is awaiting approval.
/approve
| gharchive/pull-request | 2023-09-26T20:20:10 | 2025-04-01T06:44:44.935811 | {
"authors": [
"Okabe-Junya",
"divya-mohan0209",
"kbhawkey",
"pjbgf",
"sftim",
"shannonxtreme",
"tengqm"
],
"repo": "kubernetes/website",
"url": "https://github.com/kubernetes/website/pull/43214",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1920581909 | [zh] update data/i18n/zh-cn/zh-cn.toml
There are two problems here.
The first is that the current Chinese translation has a link to the English document address.
The second is that the thirdparty-content module is currently different between Chinese and English; should it be synchronized?
https://kubernetes.io/zh-cn/docs/concepts/extend-kubernetes/operator/#writing-operator
https://kubernetes.io/docs/concepts/extend-kubernetes/operator/#writing-operator
See Preview, LGTM
Now I see the content of module thirdparty-content is synchronized, if you find some content inconsistently, you can commit the changes or raise another PR for it.
Thanks.
See Preview, LGTM
Now I see the content of module thirdparty-content is synchronized, if you find some content inconsistently, you can commit the changes or raise another PR for it.
Thanks.
Thanks for the review, I'll close this PR if the content has been properly synchronized.
See Preview, LGTM
Now I see the content of module thirdparty-content is synchronized, if you find some content inconsistently, you can commit the changes or raise another PR for it.
Thanks.
Thanks for the review, I'll close this PR if the content has been properly synchronized.
Hi, what I meant was that, regarding the second question in your PR, the thirdparty-content is synchronized.
And the changes in this PR are correct, because the page still links to the EN content:
https://kubernetes.io/zh-cn/docs/concepts/extend-kubernetes/operator/#writing-operator
See Preview, LGTM
Now I see the content of module thirdparty-content is synchronized, if you find some content inconsistently, you can commit the changes or raise another PR for it.
Thanks.
Thanks for the review, I'll close this PR if the content has been properly synchronized.
Hi, what I meant was that, regarding the second question in your PR, the thirdparty-content is synchronized.
And the changes in this PR are correct, because the page still links to the EN content:
https://kubernetes.io/zh-cn/docs/concepts/extend-kubernetes/operator/#writing-operator
Sorry, I forgot about the changes 😅. I will reopen the PR today to fix the incorrect link issue. Thanks again for the review.
/lgtm
/approve
| gharchive/pull-request | 2023-10-01T05:58:36 | 2025-04-01T06:44:44.946068 | {
"authors": [
"0xff-dev",
"1000Delta",
"tengqm"
],
"repo": "kubernetes/website",
"url": "https://github.com/kubernetes/website/pull/43268",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1841418991 | use distroless base image
Overview
we don't fail more tests than before... so I guess it's fine
Why not from scratch?
We can always copy busybox into the running container for debugging.
And we can create yet another image for each build with busybox inside.
Why not from scratch? We can always copy busybox into the running container for debugging. And we can create yet another image for each build with busybox inside.
distroless is scratch + passwd + ssl certificates
you cannot copy things inside a container with kubectl cp without a shell and tar
TODO: Open a PR in the helm-chart
TODO: Open a PR in the helm-chart
https://github.com/kubescape/helm-charts/pull/259
| gharchive/pull-request | 2023-08-08T14:20:39 | 2025-04-01T06:44:44.949742 | {
"authors": [
"Bezbran",
"dwertent",
"matthyx"
],
"repo": "kubescape/host-scanner",
"url": "https://github.com/kubescape/host-scanner/pull/55",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1692044606 | fix: move host-scanner namespace
Overview
In this PR we are moving from the previous dedicated host-scanner namespace (kubescape-host-scanner) to the one currently in use by the Kubescape operator (kubescape).
Additional Information
This is useful to avoid communication issues between namespaces when a policy engine is installed on the cluster.
How to Test
Build kubescape by your own from this branch and run it against a cluster with the following command: ./kubescape scan --enable-host-scan.
Check if host-scanner is deployed in kubescape namespace.
Checklist before requesting a review
[x] My code follows the style guidelines of this project
[ ] I have commented on my code, particularly in hard-to-understand areas
[x] I have performed a self-review of my code
[x] If it is a core feature, I have added thorough tests.
[x] New and existing unit tests pass locally with my changes
I did not merge yet; I would like to merge only once our tests are ready.
Maybe we should remove the host-scanner test just for this release, so this release won't be blocked.
@alegrey91
in applyYAML function, we need to make sure we don't tearDownNamespace...
Yes, we already checked. The teardown function has a check that verifies whether the namespace was already present before the host-scanner installation.
Doesn't that mean the namespace-deletion clause in tearDownNamespace will never be reached? Because if so, maybe we should just remove all this redundant code.
Thanks. This should break the system tests. Let's have a PR ready in the systemtests so we can merge the tests right before we release
Hi @dwertent, I tried to run this command in order to check for possible errors from system tests, but I didn't find any error: ./systest-cli.py -t host_scanner -b development --logger DEBUG --kwargs ks_branch="feat/add-log-coupling-for-host-scanner"
What could be the problem with this new fix in your opinion?
| gharchive/pull-request | 2023-05-02T09:00:58 | 2025-04-01T06:44:44.956066 | {
"authors": [
"alegrey91",
"dwertent",
"kooomix"
],
"repo": "kubescape/kubescape",
"url": "https://github.com/kubescape/kubescape/pull/1217",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1935408899 | Generation of VEX documents by the Kubescape relevancy engine
Overview
Kubescape calculates the relevancy of container image vulnerabilities by monitoring the application behavior using eBPF and produces a filtered list of vulnerabilities. Today the results are stored in the same format as the vulnerabilities; however, VEX seems to be a much better choice for storing and publishing this information. Kubescape needs to publish the filtered list of vulnerabilities in the VEX format.
Solution
In the current state, Kubevuln watches the filtered SBOM objects; every time a new object is created or updated, a filtered SBOM containing only the modules that were loaded into memory is created by the node-agent.
When a new filtered SBOM is available, Kubevuln translates the SBOM into a vulnerability list using Grype, producing a filtered vulnerability list.
In the same step in which the filtered vulnerability list is created, Kubevuln should generate a VEX object. The object contains statements that all these vulnerabilities are loaded into memory and are therefore relevant. This object should be stored as an API object, like the other vulnerability-related objects.
See more at here
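To make the intent concrete, here is a minimal, hypothetical Go sketch of how such relevancy statements could be assembled and serialized. The field names are illustrative only and do not necessarily match the OpenVEX schema or the actual Kubevuln implementation:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Statement is a simplified, VEX-style record: it ties a vulnerability
// to a product (here, a container image) with a status.
type Statement struct {
	Vulnerability string   `json:"vulnerability"`
	Products      []string `json:"products"`
	Status        string   `json:"status"`
}

// markRelevant builds one statement per CVE whose module was observed
// loaded into memory, marking it as affected (i.e. relevant).
func markRelevant(image string, cves []string) []Statement {
	out := make([]Statement, 0, len(cves))
	for _, cve := range cves {
		out = append(out, Statement{
			Vulnerability: cve,
			Products:      []string{image},
			Status:        "affected", // loaded into memory => relevant
		})
	}
	return out
}

func main() {
	doc, err := json.MarshalIndent(
		markRelevant("registry.example/app:1.0", []string{"CVE-2023-1234"}), "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(doc))
}
```

A real implementation would presumably use the openvex tooling rather than hand-rolled structs; this only illustrates the data flow described above.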
cc: @craigbox @puerco
This is wonderful, please let us know if @openvex can help!
@matthyx added support in Kubevuln PR #179 but it covers only creation and update. Where should we handle the cleanup of objects?
@matthyx added support in Kubevuln PR #179 but it covers only creation and update. Where should we handle the cleanup of objects?
In the operator, check with @vladklokun
| gharchive/issue | 2023-10-10T13:44:23 | 2025-04-01T06:44:44.960736 | {
"authors": [
"matthyx",
"puerco",
"slashben"
],
"repo": "kubescape/kubevuln",
"url": "https://github.com/kubescape/kubevuln/issues/155",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1449918375 | Release Botkube 0.16
Overview
This task aggregates all the to-dos which should be done when it comes to the 0.16 release.
AC
[ ] Go through the release process instructions and release the new Botkube (https://docs.botkube.io/community/contribute/release)
[ ] Describe any breaking changes in the GitHub release
Include: https://github.com/kubeshop/botkube/pull/850
[ ] Test the release candidate. Include:
Manual tests for Slack interactivity
Test telemetry
Aggregate and coordinate the manual testing described previously (e.g. https://github.com/kubeshop/botkube/issues/741#issuecomment-1263443042)
[ ] Schedule all community activities - blog post, sharing the news in social media, livestream, etc. (@brampling)
Release activities
Ensure that all security vulnerabilities are fixed or mitigated
assignee:
[ ] Cut initial release candidate for v0.16.0
assignee: @pkosiec
Finalize the v0.16.0 release
assignee: @pkosiec
Test Slack (RTM)
assignee: CI
it's tested as a part of CI.
[ ] Test Slack (Socket)
test all interactive buttons/modals
assignee: @pkosiec - will test it as a part of interactivity test
[ ] Test Mattermost
assignee:
Test Discord
assignee: CI
it's tested as a part of CI.
[ ] manual tests @pkosiec - will test it as a part of interactivity test
[ ] Test MS Teams
assignee:
[ ] Test Elasticsearch
assignee:
[ ] Test Webhook
assignee:
[ ] Test Telemetry
assignee:
Check if we described all breaking changes in the release summary and if it's rendered correctly.
assignee: team
Others
⏳ Prepare the release notes
assignee: @brampling
⏳ Create a blog post and cross-post it (e.g. Medium, Dev.to)
assignee: @brampling
⏳ Share the news on social media (e.g. Reddit, Hacker News, LinkedIn, Twitter, Slack)
assignee: @brampling
Testing scenario: https://github.com/kubeshop/botkube/issues/595#issuecomment-1225458097
Event constraints
Ensure you have the following sources in local config:
communications:
  'default-group':
    # Settings for Slack
    socketSlack:
      enabled: true
      appToken: # TODO: ...
      botToken: # TODO: ...
      notification:
        type: "short"
      channels:
        'default':
          name: botkube-demo
          bindings:
            executors:
              - kubectl-read-only
            sources:
              - pod-label
              - pod-annotation
              - pod-name
              - event-reason
              - event-message
sources:
  'pod-label':
    kubernetes:
      namespaces:
        include:
          - ".*"
      event:
        types:
          - create
          - delete
          - error
      labels:
        my-label: "true"
      resources:
        - type: v1/pods
  'pod-annotation':
    kubernetes:
      namespaces:
        include:
          - ".*"
      event:
        types:
          - create
          - delete
          - error
      annotations:
        my-annotation: "false"
      resources:
        - type: v1/pods
          annotations:
            my-annotation: "true" # override
  'pod-name':
    kubernetes:
      namespaces:
        include:
          - ".*"
      event:
        types:
          - create
          - delete
          - error
      resources:
        - type: v1/pods
          name: "^my.*"
  'event-reason':
    kubernetes:
      annotations:
        event-reason-constraints: "true"
      namespaces:
        include:
          - ".*"
      event:
        types:
          - create
          - delete
          - error
      resources:
        - type: v1/pods
          event:
            reason: "^BackOff$"
  'event-message':
    kubernetes:
      annotations:
        event-message-constraints: "true"
      namespaces:
        include:
          - ".*"
      event:
        message: "^Back-off .*"
        types:
          - create
          - delete
          - error
      resources:
        - type: v1/pods
actions:
  'get-created-resource':
    enabled: true
    displayName: "Get resource"
    command: "kubectl get {{ .Event.TypeMeta.Kind | lower }}{{ if .Event.Namespace }} -n {{ .Event.Namespace }}{{ end }} {{ .Event.Name }}"
    bindings:
      sources:
        - pod-label
        - pod-annotation
        - pod-name
        - event-reason
        - event-message
      executors:
        - kubectl-read-only
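The action's command field above is a Go text/template rendered against the triggering event. A tiny, self-contained sketch of how such a template expands — the event struct here is a simplified stand-in for Botkube's actual event type, and the lower helper is registered manually for illustration:

```go
package main

import (
	"bytes"
	"fmt"
	"strings"
	"text/template"
)

type typeMeta struct{ Kind string }

// event is a simplified stand-in for Botkube's event payload.
type event struct {
	TypeMeta  typeMeta
	Namespace string
	Name      string
}

// renderCommand expands an action command template against an event.
func renderCommand(tmpl string, ev event) (string, error) {
	t, err := template.New("cmd").
		Funcs(template.FuncMap{"lower": strings.ToLower}).
		Parse(tmpl)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, map[string]any{"Event": ev}); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	cmd := "kubectl get {{ .Event.TypeMeta.Kind | lower }}{{ if .Event.Namespace }} -n {{ .Event.Namespace }}{{ end }} {{ .Event.Name }}"
	out, err := renderCommand(cmd, event{TypeMeta: typeMeta{Kind: "Pod"}, Namespace: "default", Name: "failing"})
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // kubectl get pod -n default failing
}
```

This shows why deleting the `failing` Pod in the scenario below should trigger a `kubectl get pod -n <ns> failing` action command.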
Install Botkube.
Apply the following YAML and check whether you were notified about the proper events:
apiVersion: v1
kind: Pod
metadata:
  name: pod-labeled
  labels:
    my-label: "true"
spec:
  containers:
    - name: nginx
      image: nginx:stable
      command: ["foo"]
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-not-labeled
  labels:
    my-label: "false"
spec:
  containers:
    - name: nginx
      image: nginx:stable
      command: ["foo"]
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-annotated
  annotations:
    my-annotation: "true"
spec:
  containers:
    - name: nginx
      image: nginx:stable
      command: ["foo"]
---
apiVersion: v1
kind: Pod
metadata:
  name: bar
  annotations:
    my-annotation: "false"
spec:
  containers:
    - name: nginx
      image: nginx:stable
      command: ["foo"]
---
apiVersion: v1
kind: Pod
metadata:
  name: my-name
spec:
  containers:
    - name: nginx
      image: nginx:stable
      command: ["foo"]
---
apiVersion: v1
kind: Pod
metadata:
  name: not-my-name
spec:
  containers:
    - name: nginx
      image: nginx:stable
      command: ["foo"]
---
apiVersion: v1
kind: Pod
metadata:
  name: event-reason-constraints
  annotations:
    event-reason-constraints: "true"
spec:
  containers:
    - name: nginx
      image: nginx:stable
      command: ["foo"]
---
apiVersion: v1
kind: Pod
metadata:
  name: event-message-constraints
  annotations:
    event-message-constraints: "true"
spec:
  containers:
    - name: nginx
      image: nginx:stable
      command: ["foo"]
Automation
Run @Botkube list actions
Run @Botkube enable action describe-created-resource
Create new resource from Terminal:
apiVersion: v1
kind: Pod
metadata:
name: failing
spec:
containers:
- name: nginx
image: nginx:latest
command: ["foo"]
Delete the resource from Terminal
Run @Botkube disable action describe-created-resource
Create the same resource
Bugs
MS Teams
If you get an error: Failed to parse Teams request. Authentication failed.: Unauthorized: invalid AppId passed on token" bot="MS Teams", it does not necessarily mean that the app ID or app password is wrong. Another possibility is that you have a typo in, or a wrong, botName property.
When Botkube reloads the configuration, we lose the notification settings, and we need to run @Botkube notifier start each time the Botkube pod is restarted.
To avoid such a situation, we could also persist the conversation reference: https://github.com/kubeshop/botkube/blob/c3230e43926dfe3399e9b4f75dd2333ddcfaf911/pkg/bot/teams.go#L527-L548
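A rough sketch of what persisting that reference could look like: serialize it to JSON and keep it in something durable, such as a ConfigMap's data map, then restore it on startup. The ConversationReference struct below is a hypothetical, trimmed-down stand-in for the Bot Framework type linked above, not the real one:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ConversationReference is a hypothetical, trimmed-down stand-in for the
// Bot Framework conversation reference the bot keeps in memory.
type ConversationReference struct {
	ChannelID      string `json:"channelId"`
	ServiceURL     string `json:"serviceUrl"`
	ConversationID string `json:"conversationId"`
}

const refKey = "conversation-ref.json"

// save serializes the reference into a ConfigMap-style data map.
func save(ref ConversationReference) (map[string]string, error) {
	raw, err := json.Marshal(ref)
	if err != nil {
		return nil, err
	}
	return map[string]string{refKey: string(raw)}, nil
}

// load restores the reference after a pod restart.
func load(data map[string]string) (ConversationReference, error) {
	var ref ConversationReference
	err := json.Unmarshal([]byte(data[refKey]), &ref)
	return ref, err
}

func main() {
	ref := ConversationReference{ChannelID: "msteams", ServiceURL: "https://smba.example", ConversationID: "19:abc"}
	data, _ := save(ref)
	restored, _ := load(data)
	fmt.Println(restored == ref) // true
}
```

With something like this in place, the bot could resume notifying the last-known conversation without requiring @Botkube notifier start after every restart.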
| gharchive/issue | 2022-11-15T14:55:11 | 2025-04-01T06:44:44.980399 | {
"authors": [
"mszostok",
"pkosiec"
],
"repo": "kubeshop/botkube",
"url": "https://github.com/kubeshop/botkube/issues/851",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1774581618 | Fix: Remove Text autocomplete from test create form #4002
This PR...
Changes
Fixes
https://github.com/kubeshop/testkube/issues/4002
How to test it
screenshots
Checklist
[ ] tested locally
[ ] added new dependencies
[ ] updated the docs
[ ] added a test
Hi @Pravesh-Sudha, thanks for the PR! Can you add it to the name fields of the other resource types, per the ticket comment?
Besides, @pavloburchak, if you have some time, please check whether it works in Chrome/Firefox/Safari. Otherwise, I'll try to check it around the end of next week 🙂
Test Scenario
Open form
Put some name
Submit form
Open form
Put a similar name
Check if there is autocompletion during that
The scenario should be repeated twice - once submitting with Enter key, and once submitting with the Submit button.
Sorry if I sound like a noob — I am new to TypeScript. I have added the autocomplete turn-off to the name field of TestSuiteCreationModelContent.
Thank you @Pravesh-Sudha for your efforts! Looks like we have updated it in the whole application with https://github.com/kubeshop/testkube-dashboard/pull/889 PR, so I'll close this one.
| gharchive/pull-request | 2023-06-26T11:40:19 | 2025-04-01T06:44:44.986561 | {
"authors": [
"Pravesh-Sudha",
"rangoo94"
],
"repo": "kubeshop/testkube-dashboard",
"url": "https://github.com/kubeshop/testkube-dashboard/pull/748",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1470647029 | Remove queued status for pipelinerun #859
What type of PR is this?
What this PR does / why we need it:
Which issue(s) this PR fixes:
Fixes #859
Special notes for reviewers:
Please check the following list before waiting reviewers:
[ ] Already committed the CRD files to the Helm Chart if you created some new CRDs
[ ] Already added the permission for the new API
[ ] Already added the RBAC markers for the new controllers
Does this PR introduce a user-facing change??
Improve the PipelineRun lifecycle; remove the confusing queued stage.
/cherrypick release-3.3
/approve
/approve
| gharchive/pull-request | 2022-12-01T04:17:52 | 2025-04-01T06:44:44.999796 | {
"authors": [
"chilianyi",
"yudong2015"
],
"repo": "kubesphere/ks-devops",
"url": "https://github.com/kubesphere/ks-devops/pull/860",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1305554132 | fix warning
Fix spelling mistakes
/assign @ruiyaoOps
/lgmt
/lgtm
| gharchive/pull-request | 2022-07-15T04:16:50 | 2025-04-01T06:44:45.001240 | {
"authors": [
"iNineku",
"ruiyaoOps"
],
"repo": "kubesphere/kubeeye",
"url": "https://github.com/kubesphere/kubeeye/pull/234",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1584817438 | Dev environment design and implementation for KubeStellar
Feature Description
To build a quick install dev/test environment for kubestellar
Proposed Solution
To build a dev/test environment using the kcp-playground to deploy the kcp components, and to build automation to deploy kcp-edge-specific components. The dev/test environment should be built with the following requirements:
a) must support multiple OS (e.g., Ubuntu, Windows, etc.)
b) must run in a laptop with minimal resource footprint
Alternative Solutions
No response
Want to contribute?
[ ] I would like to work on this issue.
Additional Context
No response
KubeStellar quickstart resolves this issue.
| gharchive/issue | 2023-02-14T20:55:32 | 2025-04-01T06:44:45.009726 | {
"authors": [
"dumb0002"
],
"repo": "kubestellar/kubestellar",
"url": "https://github.com/kubestellar/kubestellar/issues/169",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1849746323 | [BUG] Kubevela application does not reconcile the k8s-objects component
Describe the bug
I have a KubeVela application where I mount a ConfigMap, created using a k8s-objects component, into a webservice component via a volume-mount (storage) trait. It works great, but if I delete the ConfigMap manually and wait for reconciliation, the KubeVela operator does recreate the object from the k8s-objects manifest but does not add any data to it, meaning it creates an empty ConfigMap.
To Reproduce
Create a KubeVela application with 2 components: k8s-objects, which will create the ConfigMap, and webservice, which will mount it.
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: poc-service
  annotations:
    argocd.argoproj.io/compare-options: IgnoreExtraneous
spec:
  components:
    - name: dance-service-config
      type: k8s-objects
      properties:
        objects:
          - apiVersion: v1
            kind: ConfigMap
            metadata:
              name: dance-appsettings-configmap
            data:
              appsettings.Dev.json: |
                {
                  "SpanOptions": {
                    "LogEventPropertiesNames": {
                      "TraceId": "TraceId",
                      "ParentId": "ParentId",
                      "SpanId": "SpanId",
                      "OperationName": "OperationName"
                    }
                  }
                }
    - name: poc-test-service
      type: webservice
      properties:
        image: <yourimage or busybox>
        #imagePullPolicy: Always
        #imagePullSecrets: ["regcred"]
        cpu: 200m
        memory: 240Mi
        ports:
          - port: 80
            expose: true
          - port: 50001
            expose: true
        env:
      traits:
        - type: storage
          properties:
            configMap:
              - name: "dance-appsettings-configmap"
                mountPath: "/app/appsettings.Dev.json"
                subPath: "appsettings.Dev.json"
Deploy the application. The application will be up and running as expected, and it creates the ConfigMap.
NAME DATA AGE
dance-appsettings-configmap 1 32m
Now delete the ConfigMap manually and wait for the reconcile loop. The newly created ConfigMap will have no data in it.
NAME DATA AGE
dance-appsettings-configmap 0 1m
Expected behavior
The ConfigMap recreated from the k8s-objects component should have its data as well.
KubeVela Version
CLI Version: 1.9.5
Core Version: 1.9.5
GitRevision: 00ae0c9494e0672e8df0c918f0f5d034e29ce2b8
GolangVersion: go1.20.6
Can you paste the detailed status of this application?
| gharchive/issue | 2023-08-14T13:08:45 | 2025-04-01T06:44:45.014957 | {
"authors": [
"bguruprasad",
"wangyikewxgm"
],
"repo": "kubevela/kubevela",
"url": "https://github.com/kubevela/kubevela/issues/6270",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1881982847 | Bug 2236393: Not able to select StorageClass if there is no default StorageClass defined in the cluster
📝 Description
Improve the check in the useEffect hook of the StorageClassSelect component, to allow selection when no default SC exists
🎥 Demo
Please add a video or an image of the behavior/changes
/lgtm
/retest
/cherry-pick release-4.13
| gharchive/pull-request | 2023-09-05T13:26:34 | 2025-04-01T06:44:45.017120 | {
"authors": [
"avivtur",
"gouyang",
"metalice"
],
"repo": "kubevirt-ui/kubevirt-plugin",
"url": "https://github.com/kubevirt-ui/kubevirt-plugin/pull/1516",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1263293108 | Bug 2093716: Change CPU|Memory modal button to "Restore template settings"
📝 Description
Fixes:
https://bugzilla.redhat.com/show_bug.cgi?id=2093716
Change the button text Restore default settings to Restore template settings, in the Edit CPU | Memory modal.
🎥 Demo
Before:
After:
@glekner @avivtur @metalice @pcbailey @vojtechszocs please review
/lgtm
/bugzilla refresh
| gharchive/pull-request | 2022-06-07T13:21:34 | 2025-04-01T06:44:45.020725 | {
"authors": [
"hstastna",
"yaacov"
],
"repo": "kubevirt-ui/kubevirt-plugin",
"url": "https://github.com/kubevirt-ui/kubevirt-plugin/pull/563",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
795636437 | [Flaky CI] [Serial]Operator [rfe_id:2291][crit:high][vendor:cnv-qe@redhat.com][level:component]infrastructure management [test_id:3150]should be able to update kubevirt install with custom image tag
Is this a BUG REPORT or FEATURE REQUEST?:
Uncomment only one, leave it on its own line:
/triage build-watcher
/kind bug
What happened:
tests/operator_test.go:1362
Timed out after 34.003s.
Unexpected error:
<*errors.StatusError | 0xc0007285a0>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {
SelfLink: "",
ResourceVersion: "",
Continue: "",
RemainingItemCount: nil,
},
Status: "Failure",
Message: "Timeout: request did not complete within requested timeout 34s",
Reason: "Timeout",
Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0},
Code: 504,
},
}
Timeout: request did not complete within requested timeout 34s
Prow - https://prow.apps.ovirt.org/view/gcs/kubevirt-prow/pr-logs/pull/kubevirt_kubevirt/4844/pull-kubevirt-e2e-k8s-1.19/1352642101069746176
occurred
tests/operator_test.go:367
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
KubeVirt version (use virtctl version):
Kubernetes version (use kubectl version):
VM or VMI specifications:
Cloud provider or hardware configuration:
OS (e.g. from /etc/os-release):
Kernel (e.g. uname -a):
Install tools:
Others:
The issue seemed different from this recently closed issue - https://github.com/kubevirt/kubevirt/issues/4698
The issue seemed different from this recently closed issue - https://github.com/kubevirt/kubevirt/issues/4698
/reopen
Happened again https://prow.apps.ovirt.org/view/gcs/kubevirt-prow/logs/periodic-kubevirt-e2e-k8s-prev-prev/1361435416300883968
other error but same test, might be just a variant of the same problem
https://prow.apps.ovirt.org/view/gcs/kubevirt-prow/pr-logs/pull/kubevirt_kubevirt/5076/pull-kubevirt-e2e-k8s-1.17/1363869838698614785
tests/operator_test.go:1407
Timed out after 320.000s.
All VMIs should update via live migration
Expected
<*errors.errorString | 0xc002c5ad00>: {
s: "waiting for migration 31d89464-9cee-45f9-a8ed-4e52ef158d32 to complete for vmi kubevirt-test-default1/testvmi-59gz8",
}
to be nil
tests/operator_test.go:653
Happened again https://prow.apps.ovirt.org/view/gcs/kubevirt-prow/logs/periodic-kubevirt-e2e-k8s-latest/1363428550803197952 in this case with this error:
tests/operator_test.go:1407
Timed out after 300.000s.
Unexpected error:
<*errors.errorString | 0xc0028d5370>: {
s: "Waiting for conditions to indicate deployment (conditions: [{Type:Available Status:True LastProbeTime:2021-02-21 13:26:18 +0000 UTC LastTransitionTime:2021-02-21 13:26:18 +0000 UTC Reason:UpdateInProgress Message:Transitioning from previous version devel with registry registry:5000/kubevirt to target version devel_alt using registry registry:5000/kubevirt} {Type:Progressing Status:True LastProbeTime:2021-02-21 13:26:18 +0000 UTC LastTransitionTime:2021-02-21 13:26:18 +0000 UTC Reason:UpdateInProgress Message:Transitioning from previous version devel with registry registry:5000/kubevirt to target version devel_alt using registry registry:5000/kubevirt} {Type:Degraded Status:True LastProbeTime:2021-02-21 13:26:18 +0000 UTC LastTransitionTime:2021-02-21 13:26:18 +0000 UTC Reason:UpdateInProgress Message:Transitioning from previous version devel with registry registry:5000/kubevirt to target version devel_alt using registry registry:5000/kubevirt}])",
}
Waiting for conditions to indicate deployment (conditions: [{Type:Available Status:True LastProbeTime:2021-02-21 13:26:18 +0000 UTC LastTransitionTime:2021-02-21 13:26:18 +0000 UTC Reason:UpdateInProgress Message:Transitioning from previous version devel with registry registry:5000/kubevirt to target version devel_alt using registry registry:5000/kubevirt} {Type:Progressing Status:True LastProbeTime:2021-02-21 13:26:18 +0000 UTC LastTransitionTime:2021-02-21 13:26:18 +0000 UTC Reason:UpdateInProgress Message:Transitioning from previous version devel with registry registry:5000/kubevirt to target version devel_alt using registry registry:5000/kubevirt} {Type:Degraded Status:True LastProbeTime:2021-02-21 13:26:18 +0000 UTC LastTransitionTime:2021-02-21 13:26:18 +0000 UTC Reason:UpdateInProgress Message:Transitioning from previous version devel with registry registry:5000/kubevirt to target version devel_alt using registry registry:5000/kubevirt}])
occurred
tests/operator_test.go:341
Happened again https://prow.apps.ovirt.org/view/gcs/kubevirt-prow/logs/periodic-kubevirt-e2e-k8s-prev-prev/1364334520727244800 with this error message:
tests/operator_test.go:1407
Timed out after 320.000s.
All VMIs should update via live migration
Expected
<*errors.errorString | 0xc002ba36d0>: {
s: "waiting for migration a96dfe14-6370-4e4a-9518-83f8146b3e88 to complete for vmi kubevirt-test-default1/testvmi-g659t",
}
to be nil
tests/operator_test.go:653
| gharchive/issue | 2021-01-28T03:33:24 | 2025-04-01T06:44:45.031583 | {
"authors": [
"fgimenez",
"oshoval",
"shwetaap"
],
"repo": "kubevirt/kubevirt",
"url": "https://github.com/kubevirt/kubevirt/issues/4913",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1364299574 | Investigate unexpected error
What happened:
Unexpected error occurred:
Unexpected Warning event received: vmi-kernel-boot,40972611-13bf-40e8-be4a-8e96b76e9f7e: unable to create virt-launcher client connection: can not add ghost record when entry already exists with differing UID
Seen in https://prow.ci.kubevirt.io/view/gs/kubevirt-prow/pr-logs/pull/kubevirt_kubevirt/8419/pull-kubevirt-e2e-k8s-1.23-sig-compute-nonroot/1567301949986967552
This might be a quick turnaround or a real issue with clean up.
What you expected to happen:
Not to see this warning.
How to reproduce it (as minimally and precisely as possible):
The mentioned test might be helpful.
Additional context:
Add any other context about the problem here.
Environment:
KubeVirt version (use virtctl version): N/A
Kubernetes version (use kubectl version): N/A
VM or VMI specifications: N/A
Cloud provider or hardware configuration: N/A
OS (e.g. from /etc/os-release): N/A
Kernel (e.g. uname -a): N/A
Install tools: N/A
Others: N/A
The impact on CI is high https://search.ci.kubevirt.io/?search=external+alpine-based+kernel+only&maxAge=12h&context=1&type=build-log&name=&excludeName=&maxMatches=5&maxBytes=20971520&groupBy=job
/assign
/reopen
/remove-lifecycle rotten
@iholder101 Did you have a chance to look into this?
I see this is now relevant for all VMs, not only for those using kernel boot.
@enp0s3 had found the root cause and is now working on a fix.
Thanks a lot @enp0s3!!
@iholder101 Thank you!
From what I've seen in Itamar's PR, it can happen when using a VMI with fixed name. I will backport the randname usage when creating VMs with libvmi
PR fix is merged: https://github.com/kubevirt/kubevirt/pull/9222.
I think we can close this now.
/close
/reopen
@iholder101
Are we sure we did not treat the symptom rather than the root cause? I mean our records should be fast enough.
The change also seems no-op to me but maybe I am missing something.
/reopen @iholder101 Are we sure we did not treat the symptom rather than the root cause? I mean our records should be fast enough.
Can you elaborate?
The change also seems no-op to me but maybe I am missing something.
Also here. What do you mean? why a no-op?
/reopen @iholder101 Are we sure we did not treat the symptom rather than the root cause? I mean our records should be fast enough.
Can you elaborate?
The message says that a record with the same key ( namespace + name ) but different uid already exists. This is a case when you delete/stop a VM/I and create/start VM/I with same name in the same namespace (of course the VM/I needs to land on the same node). This is valid case imho and the problem is that the cache is not cleared. In other words the cleanup is not called or we might have a bottle neck somewhere. I think it is worth to have a look on.
Hey @xpivarc!
IIRC, the root cause was that two tests, that were running in parallel, created a VMI with the same name on the same namespace. Therefore, the error message seems entirely valid. Since now the VMI names are randomized the test it not flakie anymore.
The change also seems no-op to me but maybe I am missing something.
Also here. What do you mean? why a no-op?
Only test was changed and the randname was moved into New while before we passed the result as argument. This is not a big change.
Right, it's not a big change but it was necessary and solved the root cause. It isn't a no-op. Or am I missing something?
| gharchive/issue | 2022-09-07T08:35:10 | 2025-04-01T06:44:45.044219 | {
"authors": [
"enp0s3",
"iholder-redhat",
"iholder101",
"xpivarc"
],
"repo": "kubevirt/kubevirt",
"url": "https://github.com/kubevirt/kubevirt/issues/8423",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
417924703 | tests: replace remaining occurrence of "local" by StorageClassLocal
What this PR does / why we need it:
Allows to change the storage class used during tests by modifying the code in a unique place.
Release note:
NONE
Allows to change the storage class used during tests by modifying the code in a unique place.
Sounds like you would actually need a commandline flag to the tests binary?
@rmohr
Indeed, I began to implement adding a new command line flag for this but then I figured out that basically we will need one for all those https://github.com/kubevirt/kubevirt/blob/master/tests/utils.go#L137-L143.
So many flags doesn't seems a viable solution so I stopped here for now.
I guess the right solution would be to use a configuration file but I don't feel confident with my current skills in go to develop such a change.
I guess the right solution would be to use a configuration file but I don't feel confident with my current skills in go to develop such a change.
Hm, might make sense. The PR as is is ok for me as cleanup PR. Thanks.
| gharchive/pull-request | 2019-03-06T17:30:24 | 2025-04-01T06:44:45.048695 | {
"authors": [
"dollierp",
"rmohr"
],
"repo": "kubevirt/kubevirt",
"url": "https://github.com/kubevirt/kubevirt/pull/2094",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1163749646 | tests/numa: use libvmi.NewCirros()
It makes the code a bit shorter and more readable. While at it, use the fresh value returned by tests.WaitForSuccessfulVMIStart() instead of an additional Get() call.
NONE
/retry pull-kubevirt-generate
/retest
/retest
| gharchive/pull-request | 2022-03-09T10:21:28 | 2025-04-01T06:44:45.050408 | {
"authors": [
"dankenigsberg",
"enp0s3"
],
"repo": "kubevirt/kubevirt",
"url": "https://github.com/kubevirt/kubevirt/pull/7333",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
373413455 | Move template util methods to utils/templates
Rename utils/template and utils/validation to utils/templates and utils/validations
Pull Request Test Coverage Report for Build 184
0 of 0 changed or added relevant lines in 0 files are covered.
No unchanged relevant lines lost coverage.
Overall coverage remained the same at 55.921%
Totals
Change from base Build 233:
0.0%
Covered Lines:
292
Relevant Lines:
470
💛 - Coveralls
@mareklibra rebased on master
| gharchive/pull-request | 2018-10-24T10:28:54 | 2025-04-01T06:44:45.068360 | {
"authors": [
"coveralls",
"rawagner"
],
"repo": "kubevirt/web-ui-components",
"url": "https://github.com/kubevirt/web-ui-components/pull/67",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1636039802 | test: extend e2e tests
Ensure context aware policies work as expected.
Everything is green now, should I merge it?
| gharchive/pull-request | 2023-03-22T15:43:38 | 2025-04-01T06:44:45.069217 | {
"authors": [
"flavio"
],
"repo": "kubewarden/kwctl",
"url": "https://github.com/kubewarden/kwctl/pull/463",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
frontend: why does the service deployment use two identical storage services, and can this be optimized?
Description
frontend deployment manifest
...
        - command:
            - /usr/local/bin/kelemetry
            - --log-level=info
            - --pprof-enable=true
            - --jaeger-backend=jaeger-storage
            - --jaeger-cluster-names=cluster1
            - --jaeger-redirect-server-enable=true
            - --jaeger-storage-plugin-address=:17271 # localhost:17271
            - --jaeger-storage-plugin-enable=true
            - --jaeger-storage.grpc-storage.server=kelemetry-1689762474-storage.kelemetry.svc:17271 # storage-svc
            - --jaeger-storage.span-storage.type=grpc-plugin
            - --jaeger-trace-cache=etcd
            - --jaeger-trace-cache-etcd-endpoints=kelemetry-1689762474-etcd.kelemetry.svc:2379
            - --jaeger-trace-cache-etcd-prefix=/trace/
            - --trace-server-enable=true
          image: ghcr.io/kubewharf/kelemetry:0.1.0
...
User story
It is convenient for users to be more familiar with the source code of the project
See USAGE.txt:
--jaeger-storage-plugin-address string storage plugin grpc server bind address (default ":17271")
--jaeger-storage.grpc-storage.server string The remote storage gRPC server address as host:port
and the diagram in DEPLOY.md:
jaeger-storage-plugin-address is the address that the storage plugin listens on, to serve requests from "Jaeger Query UI". In the case of helm chart, "Jaeger Query UI" and "Kelemetry storage plugin" are deployed as sidecar containers of the same pod, so this is always :17271 (I think we could make this localhost:17271 since sidecar containers are on the same network stack, but it is not necessary to change this for now).
The options starting with --jaeger-storage.{SPAN_STORAGE_TYPE}.* are options that determine how "Kelemetry storage plugin" connects to "Jaeger storage". In the case of helm chart with Badger DB, since frontend pods are stateless but Badger is a single-instance database, we need to run the database in a single-pod statefulset so that multiple frontend instances access the Badger volume through the same process (see https://www.jaegertracing.io/docs/1.47/deployment/#remote-storage-component for explanation):
graph LR
  subgraph frontend-pod-0
    jaeger-query-0 --> storage-plugin-0
  end
  subgraph frontend-pod-1
    jaeger-query-1 --> storage-plugin-1
  end
  subgraph frontend-pod-2
    jaeger-query-2 --> storage-plugin-2
  end
  storage-plugin-0 --> remote-badger
  storage-plugin-1 --> remote-badger
  storage-plugin-2 --> remote-badger
  subgraph badger [badger node]
    remote-badger --> badger-volume
  end
We cannot directly let jaeger-query-* access remote-badger because remote-badger is a native Jaeger image that does not know how to perform trace transformation, but we cannot directly let storage-plugin-* access badger-volume because that would cause concurrent access to the same badger DB from multiple processes.
If you use a distributed database instead of Badger, the helm chart will no longer generate the kelemetry-storage StatefulSet but call the database directly.
| gharchive/issue | 2023-07-20T03:34:22 | 2025-04-01T06:44:45.075375 | {
"authors": [
"SOF3",
"jackwillsmith"
],
"repo": "kubewharf/kelemetry",
"url": "https://github.com/kubewharf/kelemetry/issues/129",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
567742961 | Elastic operator does not work with current KUDO
Noticed when restructuring docs in https://github.com/kudobuilder/operators/pull/226.
This might be due to the fact that KUDO 0.10.x no longer prefixes pod names with instance name.
[root@master-0 elasticsearch]# curl coordinator-0.coordinator-hs:9200/_cluster/health?pretty
curl: (6) Could not resolve host: coordinator-0.coordinator-hs; Unknown error
[root@master-0 elasticsearch]# curl coordinator-hs:9200/_cluster/health?pretty
{
"error" : {
"root_cause" : [
{
"type" : "master_not_discovered_exception",
"reason" : null
}
],
"type" : "master_not_discovered_exception",
"reason" : null
},
"status" : 503
}
[root@master-0 elasticsearch]#
I hit the same issue with KUDO versions 0.11.1 and 0.12.0. Some details are below:
[root@master-0 elasticsearch]# curl coordinator-0.coordinator-hs:9200/_cluster/health?pretty
curl: (6) Could not resolve host: coordinator-0.coordinator-hs; Unknown error
[root@master-0 elasticsearch]# curl coordinator-hs:9200/_cluster/health?pretty
{
"error" : {
"root_cause" : [
{
"type" : "master_not_discovered_exception",
"reason" : null
}
],
"type" : "master_not_discovered_exception",
"reason" : null
},
"status" : 503
}
[root@master-0 elasticsearch]#
kubectl get pod
NAME READY STATUS RESTARTS AGE
coordinator-0 1/1 Running 0 16m
data-0 1/1 Running 0 17m
data-1 1/1 Running 0 17m
master-0 1/1 Running 0 19m
master-1 1/1 Running 0 18m
master-2 1/1 Running 0 17m
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
coordinator-hs ClusterIP None <none> 9200/TCP 13m
data-hs ClusterIP None <none> 9200/TCP 14m
ingest-hs ClusterIP None <none> 9200/TCP 13m
kubernetes ClusterIP 10.8.0.1 <none> 443/TCP 95m
master-hs ClusterIP None <none> 9200/TCP 16m
This is the same problem mentioned in https://github.com/kudobuilder/operators/pull/257
Snippet from master pod log:
{"type": "server", "timestamp": "2020-05-11T07:23:23,375+0000", "level": "WARN", "component": "o.e.d.SeedHostsResolver", "cluster.name": "elastic-cluster", "node.name": "master-0", "message": "failed to resolve host [elastic-master-2.elastic-master-hs]" ,
"stacktrace": ["java.net.UnknownHostException: elastic-master-2.elastic-master-hs: Name or service not known",
"at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method) ~[?:?]",
| gharchive/issue | 2020-02-19T18:09:23 | 2025-04-01T06:44:45.079841 | {
"authors": [
"porridge",
"ranjithwingrider"
],
"repo": "kudobuilder/operators",
"url": "https://github.com/kudobuilder/operators/issues/227",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
946171551 | kuka-serverless tests failing
They used to work with spinning up a serverless-offline in the CI. Flat out not working anymore. Tried to fix it in a couple of ways already.
This does not seem like stable solution. I might genuinely just deploy to AWS Lambda dev on CI and then test against that dev.
There have been some articles making arguments for testing against actual AWS Lambda and not serverless-offline.
Still fixing this. CI still failing. But basically got everything to work. I'm just deploying straight to Lambda to run CI tests. Some tests failed, because the table already existed and had some entries. Gotta make it if CI then create and read from table with -ci -prefix or something.
https://www.serverless.com/dynamodb#dynamodb-with-serverless-framework
https://www.serverless.com/framework/docs/providers/aws/guide/resources/#configuration
Can only add one GSI to cloudformation: https://cloudkatha.com/solved-cannot-perform-more-than-one-gsi-creation-or-deletion-in-a-single-update/
Solution:
Put one GSI in serverless.yaml.
Deploy.
Then add second GSI through API.
Upon success, run tests.
Delete table through API. (Have to delete table even if tests fail... If tests fail CI quits? Risk that table stays? Maybe make lambda that checks for table every X minutes and deletes if exists.)
https://stackoverflow.com/questions/36918408/unable-to-add-gsi-to-dynamodb-table-using-cloudformation
Cloudformation appears to be useless for this.
Do it programatically: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/getting-started-step-6.html
I think this is the method to call in the SDK: https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/DynamoDB.html#updateTable-property
Thinking of something like this to create the table and first GSI:
resources: # CloudFormation template syntax
  Resources:
    usersTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: ${env:TABLE_NAME}-${env:STAGE}
        AttributeDefinitions:
          - AttributeName: "PK"
            AttributeType: S
          - AttributeName: "SK"
            AttributeType: S
        KeySchema:
          - AttributeName: "PK"
            KeyType: "HASH"
          - AttributeName: "SK"
            KeyType: "RANGE"
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
        GlobalSecondaryIndexes:
          - IndexName: "sk-pk-index"
            KeySchema:
              - AttributeName: "SK"
                KeyType: "HASH"
              - AttributeName: "PK"
                KeyType: "RANGE"
            Projection:
              ProjectionType: "ALL"
Gotta still make script to add second GSI and delete table after tests (even if tests fail).
Table deletion could look something like this:
delete-table.js
const { DynamoDB } = require("@aws-sdk/client-dynamodb")
;(async () => {
  const client = new DynamoDB({ region: process.env.REGION })
  const params = {
    TableName: process.env.TABLE_NAME + "-" + process.env.STAGE,
  }
  await client.deleteTable(params)
})()
To add GSI to existing table:
create-GSI.js
const { DynamoDB } = require("@aws-sdk/client-dynamodb")
;(async () => {
  const client = new DynamoDB({ region: process.env.REGION })
  const params = {
    TableName: process.env.TABLE_NAME + "-" + process.env.STAGE,
    GlobalSecondaryIndexUpdates: {
      Create: {
        IndexName: "email-pk-index",
        KeySchema: [
          { AttributeName: "email", KeyType: "HASH" },
          { AttributeName: "pk", KeyType: "RANGE" },
        ],
        Projection: "ALL",
        ProvisionedThroughput: { ReadCapacityUnits: 1, WriteCapacityUnits: 1 },
      },
    },
  }
  await client.updateTable(params)
})()
my scripts/ci.sh will look something like this:
#!/usr/bin/env bash
set -e
npm config set @kuka-js:registry http://registry.npmjs.org
npm config set //registry.npmjs.org/:_authToken $NPM_TOKEN
npm whoami
npm ci --also=dev
./node_modules/serverless/bin/serverless.js config credentials --provider aws --key $AWS_KEY --secret $AWS_SECRET
./node_modules/serverless/bin/serverless.js deploy
# Need to create new GSI, because during deployment only one GSI can be added. https://github.com/kuka-js/kuka/issues/165
./scripts/create-GSI.js
CI_URL="https://$(node ./get-api-id.js).execute-api.eu-north-1.amazonaws.com/ci/"
sed -i -e "s|URL_PREFIX|$CI_URL|g" ./.newman/postman_environment.json
npm test
./scripts/delete-table.js
# Gotta disable semantic release because I'm trying to fix CICD currently
#npm run semantic-release
Still gotta add endpoint deletion there though (deletion of CI related apigw and lambda, and maybe something else)
finally managed to programatically create a GSI to an existing table
const {
  DynamoDBClient,
  UpdateTableCommand,
} = require("@aws-sdk/client-dynamodb")
;(async () => {
  const config = {
    region: process.env.REGION,
    credentials: {
      accessKeyId: process.env.AWS_KEY,
      secretAccessKey: process.env.AWS_SECRET,
    },
  }
  console.log(config)
  const client = new DynamoDBClient(config)
  const params = {
    TableName: process.env.TABLE_NAME + "-" + process.env.STAGE,
    AttributeDefinitions: [
      { AttributeName: "email", AttributeType: "S" },
      { AttributeName: "pk", AttributeType: "S" },
    ],
    GlobalSecondaryIndexUpdates: [
      {
        Create: {
          IndexName: "email-pk-index",
          KeySchema: [
            { AttributeName: "email", KeyType: "HASH" },
            { AttributeName: "pk", KeyType: "RANGE" },
          ],
          Projection: { ProjectionType: "ALL" },
          ProvisionedThroughput: {
            ReadCapacityUnits: 1,
            WriteCapacityUnits: 1,
          },
        },
      },
    ],
  }
  const command = new UpdateTableCommand(params)
  try {
    await client.send(command)
  } catch (e) {
    console.error(e)
  }
  console.log(JSON.stringify(command))
})()
However, GSI creation is not instantaneous: the script returns before the index is actually created. So we need to poll the index status and only continue with the CI/CD pipeline once it is done.
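A polling helper along these lines could gate the pipeline (a hedged sketch, not the project's actual code: the `describeTable` argument is assumed to be a thin async wrapper around `DescribeTableCommand` from `@aws-sdk/client-dynamodb`, and AWS reports `IndexStatus: "ACTIVE"` on a GSI once it is ready):

```javascript
// Returns true once the named GSI reports IndexStatus "ACTIVE" in a
// DescribeTable response ({ Table: { GlobalSecondaryIndexes: [...] } }).
function gsiIsActive(describeTableResponse, indexName) {
  const indexes =
    (describeTableResponse.Table &&
      describeTableResponse.Table.GlobalSecondaryIndexes) ||
    []
  const gsi = indexes.find((i) => i.IndexName === indexName)
  return Boolean(gsi && gsi.IndexStatus === "ACTIVE")
}

// Polls until the GSI is ACTIVE, then resolves; throws if it never becomes
// ACTIVE within `retries` attempts.
async function waitForGsi(
  describeTable,
  indexName,
  { retries = 60, delayMs = 5000 } = {}
) {
  for (let i = 0; i < retries; i++) {
    const res = await describeTable()
    if (gsiIsActive(res, indexName)) return
    await new Promise((resolve) => setTimeout(resolve, delayMs))
  }
  throw new Error(`GSI ${indexName} did not become ACTIVE in time`)
}
```

The CI script would then call `waitForGsi` between `create-GSI.js` and `npm test`.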
In fact why do we even have this email-pk-index. It does not seem to be used.
Yeah, I used ag to search the codebase and only sk-pk-index is actually used.
oh my god, after so much work finally I think I'm almost done... finally
Holy crap. Finally done. https://github.com/kuka-js/kuka/runs/3269743728?check_suite_focus=true
Too many commits straight to main. Here's some of the latest PRs related to this mess:
https://github.com/kuka-js/kuka/pull/164
https://github.com/kuka-js/kuka/pull/167
https://github.com/kuka-js/kuka/pull/168
https://github.com/kuka-js/kuka/pull/169
There is very ugly git history now. But I do not care. I'm not gonna start commit squashing and rebasing, because then I might have to do "release not found release branch after git push --force" fix described here all over again. And there is no way in hell I'm doing that again.
Ticket closed. Finally. Did I say finally often enough?
| gharchive/issue | 2021-07-16T10:49:31 | 2025-04-01T06:44:45.097071 | {
"authors": [
"nake89"
],
"repo": "kuka-js/kuka",
"url": "https://github.com/kuka-js/kuka/issues/165",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
715506282 | refactor(config): move version and digest to ConfigSet
Summary
version and digest are used by ConfigSet only. It makes sense to move these two to ConfigSet.
Test plan
Green CI
Does this PR introduce a breaking change?
[ ] Yes
[x] No
Other information
N.A.
Pull Request Test Coverage Report for Build 6069
2 of 2 (100.0%) changed or added relevant lines in 1 file are covered.
No unchanged relevant lines lost coverage.
Overall coverage decreased (-0.01%) to 93.206%
Totals
Change from base Build 6068:
-0.01%
Covered Lines:
1107
Relevant Lines:
1141
💛 - Coveralls
| gharchive/pull-request | 2020-10-06T09:33:38 | 2025-04-01T06:44:45.107293 | {
"authors": [
"ahnpnl",
"coveralls"
],
"repo": "kulshekhar/ts-jest",
"url": "https://github.com/kulshekhar/ts-jest/pull/2009",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2590582983 | SimpleDefect class does not meet MSONable requirement
Test code:
from pydefect.input_maker.defect import SimpleDefect
sd = SimpleDefect(in_atom="A",out_atom="B",charge_list=[0,1,2])
sd_dict = sd.as_dict()
SimpleDefect.from_dict(sd_dict)
Result:
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Cell In[3], line 5
3 sd = SimpleDefect(in_atom="A",out_atom="B",charge_list=[0,1,2])
4 sd_dict = sd.as_dict()
----> 5 SimpleDefect.from_dict(sd_dict)
File ~/packages_dev/vasp_workflows/.venv/lib/python3.9/site-packages/pydefect/input_maker/defect.py:18, in Defect.from_dict(cls, d)
16 @classmethod
17 def from_dict(cls, d):
---> 18 return cls(name=d["name"], charges=tuple(d["charges"]))
KeyError: 'name'
Reason
SimpleDefect inherit from_dict() from parent Defect class, which is not compatible since SimpleDefect has a different __init__() method.
Possible fix plan (tested)
class SimpleDefect_MSONable(SimpleDefect):
    @classmethod
    def from_dict(cls, d):
        _d = {k: v for k, v in d.items() if "@" not in k}
        return cls(**_d)
Thank you for your post. I fixed the issue, which is valid from ver 0.9.5.
| gharchive/issue | 2024-10-16T04:34:21 | 2025-04-01T06:44:45.111299 | {
"authors": [
"yuuukuma",
"zeyuan-ni"
],
"repo": "kumagai-group/pydefect",
"url": "https://github.com/kumagai-group/pydefect/issues/20",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1770403613 | Added Spiderman Game
Added Spider Game
I have added a Spider-Man game developed using HTML, CSS, and JavaScript. I also added sound and theme designs, and used key listeners for optimized, smooth gameplay.
Fixes #1865
Mark for Completed tasks 💯
[x] I follow CONTRIBUTING GUIDELINE & CODE OF CONDUCT of this project.
[x] I have performed a self-review of my own code or work.
[x] I have commented my code, particularly in hard-to-understand areas.
[x] My changes generates no new warnings.
[x] I have followed proper naming convention showed in CONTRIBUTING GUIDELINE
[x] I have added screenshot for website preview in assets/images
[x] I have added entries for my game in main README.md
[x] I have added README.md in my folder
[x] I have added working video of the game in README.md (optional)
[x] I have specified the respective issue number for which I have requested the new game.
Screenshot -
Video -
Screencast from 23-06-23 02:31:41 AM IST.webm
Thank you @SyedImtiyaz-1 ,for creating the PR and contributing to our GameZone 💗
Review team will review the PR and will reach out to you soon! 😇
Make sure that you have marked all the tasks that you are done with ✅.
Thank you for your patience! 😀
@kunjgit ,
I have fixed the conflicts.
make sure you are not doing the changes in the game data !!!
create PR again by not altering any other info !!
Thank you @SyedImtiyaz-1 , for your valuable time and contribution in our GameZone 💗.
It’s our GameZone, so Let’s build this GameZone altogether !!🤝
Hoping to see you soon with another PR again 😇
Wishing you all the best for your journey into Open Source🚀
| gharchive/pull-request | 2023-06-22T21:13:09 | 2025-04-01T06:44:45.131420 | {
"authors": [
"SyedImtiyaz-1",
"kunjgit"
],
"repo": "kunjgit/GameZone",
"url": "https://github.com/kunjgit/GameZone/pull/1866",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1827158333 | Fix misspelling in README.md
Fixes #2651
Thank you @ahmed0saber ,for creating the PR and contributing to our GameZone 💗
Review team will review the PR and will reach out to you soon! 😇
Make sure that you have marked all the tasks that you are done with ✅.
Thank you for your patience! 😀
Sorry but this kind of one word PRs are not accepted in gssoc
Thank you @ahmed0saber , for your valuable time and contribution in our GameZone 💗.
It’s our GameZone, so Let’s build this GameZone altogether !!🤝
Hoping to see you soon with another PR again 😇
Wishing you all the best for your journey into Open Source🚀
| gharchive/pull-request | 2023-07-28T22:37:46 | 2025-04-01T06:44:45.134128 | {
"authors": [
"ahmed0saber",
"gurjeetsinghvirdee",
"kunjgit"
],
"repo": "kunjgit/GameZone",
"url": "https://github.com/kunjgit/GameZone/pull/2652",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2050073580 | init fleet plugin install flagger and public testloader
What type of PR is this?
/kind feature
What this PR does / why we need it:
Complete installing Flagger and the public testloader via the fleet plugin
Which issue(s) this PR fixes:
Fixes #435
/lgtm
| gharchive/pull-request | 2023-12-20T08:22:52 | 2025-04-01T06:44:45.137254 | {
"authors": [
"LiZhenCheng9527",
"hzxuzhonghu"
],
"repo": "kurator-dev/kurator",
"url": "https://github.com/kurator-dev/kurator/pull/523",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
459656725 | Making new Binds
Sorry to bother you again, but how do you add more than two keybinds?
First, check your keycode here.
After that, you can add multiple values into the fields key1 and key2 in the osu setting. For example, if you have AS and ZX as your keys, you can do key1: [65, 90] and key2: [83, 88].
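Putting the example together, the relevant part of the config might look like this (the surrounding file layout is an assumption for illustration; only the `key1`/`key2` fields and values come from the comment above — 65/90 are the keycodes for A/Z, 83/88 for S/X):

```json
{
  "osu": {
    "key1": [65, 90],
    "key2": [83, 88]
  }
}
```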
Thanks !
im sorry but where can i find the osu setting
can i add more than 2 keybind?
can i add more than 2 keybind?
Go to keycode.info and write the keycodes in order in the config. It worked for me.
how do i add more than 2 keybinds on Bongo Cat V2? i tried with numbers it doesnt work
| gharchive/issue | 2019-06-24T02:16:11 | 2025-04-01T06:44:45.142309 | {
"authors": [
"Dokimochi",
"MahibGD",
"NinjaXGithub",
"acewest",
"brampaard",
"kuroni"
],
"repo": "kuroni/bongocat-osu",
"url": "https://github.com/kuroni/bongocat-osu/issues/14",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1932511975 | Custom-flood after some time is not capable of producing new transactions with "AlreadyKnown" error
Custom-flood has recently been working much more stably, but I noticed a new issue.
After some time it started to output me plenty of "Already Known" errors when sending raw transactions:
root@localhost ➜ ~ kurtosis service logs verdant-forest mev-custom-flood -a | grep eth_sendRawTransaction | grep "Already"
20:10:16,624 web3.providers.HTTPProvider DEBUG Getting response HTTP. URI: http://172.16.4.8:8545, Method: eth_sendRawTransaction, Response: {'jsonrpc': '2.0', 'error': {'code': -32010, 'message': 'AlreadyKnown'}, 'id': 5}
20:10:19,659 web3.providers.HTTPProvider DEBUG Getting response HTTP. URI: http://172.16.4.8:8545, Method: eth_sendRawTransaction, Response: {'jsonrpc': '2.0', 'error': {'code': -32010, 'message': 'AlreadyKnown'}, 'id': 5}
20:10:22,686 web3.providers.HTTPProvider DEBUG Getting response HTTP. URI: http://172.16.4.8:8545, Method: eth_sendRawTransaction, Response: {'jsonrpc': '2.0', 'error': {'code': -32010, 'message': 'AlreadyKnown'}, 'id': 5}
20:10:25,714 web3.providers.HTTPProvider DEBUG Getting response HTTP. URI: http://172.16.4.8:8545, Method: eth_sendRawTransaction, Response: {'jsonrpc': '2.0', 'error': {'code': -32010, 'message': 'AlreadyKnown'}, 'id': 5}
20:10:28,742 web3.providers.HTTPProvider DEBUG Getting response HTTP. URI: http://172.16.4.8:8545, Method: eth_sendRawTransaction, Response: {'jsonrpc': '2.0', 'error': {'code': -32010, 'message': 'AlreadyKnown'}, 'id': 5}
20:10:31,774 web3.providers.HTTPProvider DEBUG Getting response HTTP. URI: http://172.16.4.8:8545, Method: eth_sendRawTransaction, Response: {'jsonrpc': '2.0', 'error': {'code': -32010, 'message': 'AlreadyKnown'}, 'id': 5}
20:10:34,803 web3.providers.HTTPProvider DEBUG Getting response HTTP. URI: http://172.16.4.8:8545, Method: eth_sendRawTransaction, Response: {'jsonrpc': '2.0', 'error': {'code': -32010, 'message': 'AlreadyKnown'}, 'id': 5}
20:10:37,830 web3.providers.HTTPProvider DEBUG Getting response HTTP. URI: http://172.16.4.8:8545, Method: eth_sendRawTransaction, Response: {'jsonrpc': '2.0', 'error': {'code': -32010, 'message': 'AlreadyKnown'}, 'id': 5}
20:10:40,858 web3.providers.HTTPProvider DEBUG Getting response HTTP. URI: http://172.16.4.8:8545, Method: eth_sendRawTransaction, Response: {'jsonrpc': '2.0', 'error': {'code': -32010, 'message': 'AlreadyKnown'}, 'id': 5}
20:10:43,886 web3.providers.HTTPProvider DEBUG Getting response HTTP. URI: http://172.16.4.8:8545, Method: eth_sendRawTransaction, Response: {'jsonrpc': '2.0', 'error': {'code': -32010, 'message': 'AlreadyKnown'}, 'id': 5}
20:10:46,915 web3.providers.HTTPProvider DEBUG Getting response HTTP. URI: http://172.16.4.8:8545, Method: eth_sendRawTransaction, Response: {'jsonrpc': '2.0', 'error': {'code': -32010, 'message': 'AlreadyKnown'}, 'id': 5}
20:10:49,942 web3.providers.HTTPProvider DEBUG Getting response HTTP. URI: http://172.16.4.8:8545, Method: eth_sendRawTransaction, Response: {'jsonrpc': '2.0', 'error': {'code': -32010, 'message': 'AlreadyKnown'}, 'id': 5}
20:10:52,968 web3.providers.HTTPProvider DEBUG Getting response HTTP. URI: http://172.16.4.8:8545, Method: eth_sendRawTransaction, Response: {'jsonrpc': '2.0', 'error': {'code': -32010, 'message': 'AlreadyKnown'}, 'id': 5}
Once it starts happening, it stays this way all the time.
Wonder if https://github.com/kurtosis-tech/ethereum-package/pull/283 will solve it!
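For context, "AlreadyKnown" is the node saying the transaction is already in its pool, so blindly re-sending it in a loop can never succeed. A minimal sketch of how a sender could treat it as success instead (web3.py surfaces JSON-RPC errors as raised exceptions; the exception type and helper names here are assumptions, not ethereum-package code):

```python
def is_already_known(error) -> bool:
    # True if a JSON-RPC error (dict or raised exception) says the node
    # already holds this transaction in its pool. Nethermind reports
    # code -32010 with message "AlreadyKnown"; wording varies by client.
    message = error.get("message", "") if isinstance(error, dict) else str(error)
    return "alreadyknown" in str(message).lower().replace(" ", "")


def send_raw_tolerant(w3, raw_tx):
    # Send a raw transaction, treating "AlreadyKnown" as success instead
    # of retrying forever; callers should then poll for the receipt.
    try:
        return w3.eth.send_raw_transaction(raw_tx)
    except ValueError as exc:  # web3.py wraps RPC errors (version-dependent)
        if is_already_known(exc.args[0] if exc.args else exc):
            return None
        raise
```

Whether the retry loop in the tooling should adopt this classification is a design choice; the key point is that -32010/AlreadyKnown is not a transient failure.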
| gharchive/issue | 2023-10-09T08:02:04 | 2025-04-01T06:44:45.175778 | {
"authors": [
"h4ck3rk3y",
"kamilchodola"
],
"repo": "kurtosis-tech/ethereum-package",
"url": "https://github.com/kurtosis-tech/ethereum-package/issues/276",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2317105762 | Nixify
Description
Packaged kurtosis with nix
To try it out run, checkout the branch and run
$ nix shell .#kurtosis
or (without checking out)
$ nix shell github:kurtosis-tech/kurtosis/nixify
or (once merged)
$ nix shell github:kurtosis-tech/kurtosis
REMINDER: Tag Reviewers, so they get notified to review
Is this change user facing?
It is, but it only adds the capability to add kurtosis as a dependency with nix
@lostbean I would appreciate your review
I didn't know that the license was proprietary, so sad.
I didn't know that the license was proprietary, so sad.
Yeah for L1 infrastructure this is kind of ridiculous.
Hey @marijanp we are fully OSS now : )
| gharchive/pull-request | 2024-05-25T16:08:22 | 2025-04-01T06:44:45.179618 | {
"authors": [
"jonas089",
"marijanp",
"tedim52"
],
"repo": "kurtosis-tech/kurtosis",
"url": "https://github.com/kurtosis-tech/kurtosis/pull/2461",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
237166514 | Release 1.0.0
1.0.0 (2017-06-20)
Compatibility
| Kuzzle | Proxy |
| ------ | ----- |
| rc.x   | rc.x  |
Bug fixes
[ #85 ] Deny to activate a new backend if another one is already active. (ballinette)
[ #83 ] Close client connection when no backend left (stafyniaksacha)
[ #75 ] Add an error message when there is no kuzzle instance (dbengsch)
[ #74 ] Filter volatile instead of metadata in access logs (dbengsch)
[ #68 ] Fix http client connection leak (stafyniaksacha)
[ #63 ] Fixes #62 - empty errors coming from proxy (benoitvidis)
New features
[ #72 ] Add support for Kuzzle graceful shutdown (scottinet)
Enhancements
[ #77 ] Add request headers to request.context (ballinette)
[ #69 ] Remove stack from error (AnthonySendra)
Others
[ #79 ] Fix 836 ghost rooms (benoitvidis)
[ #71 ] Improve debug function to allow toggle one/multiple lines (stafyniaksacha)
[ #58 ] Update node prerequisite in package.json (scottinet)
[ #57 ] Remove unused bluebird dependency (scottinet)
Codecov Report
Merging #88 into master will decrease coverage by 1.2%.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #88 +/- ##
==========================================
- Coverage 98.98% 97.78% -1.21%
==========================================
Files 16 17 +1
Lines 690 677 -13
==========================================
- Hits 683 662 -21
- Misses 7 15 +8
| Impacted Files | Coverage Δ | |
| --- | --- | --- |
| var/app/lib/service/Router.js | 94.73% <0%> (-5.27%) | :arrow_down: |
| var/app/lib/service/Backend.js | 96.93% <0%> (-3.07%) | :arrow_down: |
| var/app/lib/service/protocol/SocketIo.js | 98.46% <0%> (-0.24%) | :arrow_down: |
| var/app/lib/service/protocol/Websocket.js | 98.98% <0%> (-0.11%) | :arrow_down: |
| var/app/lib/core/clientConnection.js | 100% <0%> (ø) | :arrow_up: |
| var/app/lib/service/Broker.js | 100% <0%> (ø) | :arrow_up: |
| var/app/lib/core/Context.js | 100% <0%> (ø) | :arrow_up: |
| var/app/lib/store/PendingRequest.js | 100% <0%> (ø) | :arrow_up: |
| var/app/lib/store/PendingItem.js | 100% <0%> (ø) | :arrow_up: |
| var/app/lib/core/config.js | 100% <0%> (ø) | :arrow_up: |
| ... and 7 more | | |
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update ec03f11...b3afb08. Read the comment docs.
| gharchive/pull-request | 2017-06-20T10:41:49 | 2025-04-01T06:44:45.208114 | {
"authors": [
"codecov-io",
"stafyniaksacha"
],
"repo": "kuzzleio/kuzzle-proxy",
"url": "https://github.com/kuzzleio/kuzzle-proxy/pull/88",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2571213119 | feature: update code theme
What kind of changes does this PR include?
Minor fixes (broken links, typos, css, etc.)
Changes with larger consequences (logic, library updates, etc.)
Something else!
Description
Closes #
#95
#94
#93 👈
master
This stack of pull requests is managed by Graphite. Learn more about stacking.
Join @slackermorris and the rest of your teammates on Graphite
Merge activity
Oct 12, 8:00 PM EDT: A user started a stack merge that includes this pull request via Graphite.
| gharchive/pull-request | 2024-10-07T18:55:05 | 2025-04-01T06:44:45.237443 | {
"authors": [
"slackermorris"
],
"repo": "kwicherbelliaken/kwicherbelliaken.xyz",
"url": "https://github.com/kwicherbelliaken/kwicherbelliaken.xyz/pull/93",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
55874940 | Efficient data structures for the features
Benchmarks need to be done in order to find efficient on-disk formats for the features.
Features are used for:
Feature View (a subset of the spikes, two features x and y)
Split action (find all spikes which features x and y within a given polygon)
Similarity matrix (a subset of the spikes, but all feature columns)
Example size (high estimate): a (n_spikes, n_features) numerical matrix with:
n_spikes = 100,000,000
n_features = 10,000
about 20 non-null values per spike (sparse array)
float32 data type
total size (sparse): ~10 GB
Access patterns:
View: arbitrary subset of <10,000 of rows, 2 arbitrary columns x and y.
Split: arbitrary subset of several 10,000s of rows, 2 arbitrary columns x and y.
Matrix: regular subset of ~10,000 rows (strided selection), all columns.
Possibilities:
HDF5 (dense, sparse csr, something else)
sqlite
flat binary
Notes:
Possibility to duplicate the data on disk using different structures for different access patterns.
Possibility to cache up to X GB of data, with X being a user option (1 by default?), the larger X, the better the performance.
We can consider SSDs exclusively for benchmarks.
I'm wondering if a SQL or NoSQL database might not be the most efficient system (sqlite, redis, cassandra, mongodb, etc.). For each spike index, we have one column per channel with either n_pcs=3 values in that column (features) or n_samples=50 values (waveforms). This would be used as an internal cache, it would have nothing to do with the HDF5-based file format. We could use a BLOB data type. Querying may be fast with indexing (storing the cluster label of every spike, that would change during a manual clustering session).
Future spike sorting workflow (when SD2 and KK will be reimported into phy):
Spike detection
Load chunk of raw data
HP filter chunk
Threshold
A spike is detected
Masks are computed
The HP waveform is extracted in RAM
The fractional time is computed
Save time sample, fractional time, sparse masks in the kwik file
The entire process can be parallelized over chunks and channel groups.
PCA
Choose 100 regularly spaced contiguous chunks of 100 spikes each
Compute the PCs across channels
Save the PCs in the kwik file
Feature extraction
Loop over all waveforms in memory
Realign all waveforms
Project all waveforms onto the PCs
Save the sparse features in the kwx file
This can be parallelized over waveforms.
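The PC computation and projection steps above can be sketched with numpy (a deliberately simplified, single-channel version; realignment and sparsity are omitted, and the function names are illustrative, not phy's actual API):

```python
import numpy as np

def compute_pcs(sample_waveforms, n_pcs=3):
    # sample_waveforms: (n_sample_spikes, n_timepoints), e.g. the
    # ~10,000 spikes drawn from regularly spaced chunks.
    centered = sample_waveforms - sample_waveforms.mean(axis=0)
    # Rows of vt are orthonormal principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:n_pcs]                      # (n_pcs, n_timepoints)

def project(waveforms, pcs):
    # Features: each waveform projected onto the PCs.
    return waveforms @ pcs.T               # (n_waveforms, n_pcs)
```

Projecting all waveforms is embarrassingly parallel, which matches the note above about parallelizing over waveforms.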
Automatic clustering
Load all features from the kwx file
Perform the clustering
Save the spike clusters
Here is a summary of the discussions on this subject:
For now, we load and filter the waveforms on the fly; no need to cache them. Performance seems okay.
We have an internal data store that persists cluster data on disk. The key point is that data pertaining to a cluster is contiguous on disk for high performance. Data is reorganized on the fly during manual clustering. Merging clusters is relatively cheap. Splitting and reclustering requires regenerating some or all of the data store.
This store is used:
During the spike extraction process; the waveforms are extracted, the features are computed and stored there.
During manual clustering, notably in the Feature View.
There is one file per cluster. We let the OS handle caching.
We are free to use any format we want for the data store.
Flat binary: easiest format, but not very practical when storing several arrays.
HDF5
We'll offer easy-to-use import/export functions.
For each cluster, we store:
the cluster label
the masks of all spikes in the cluster (simplified CSC, see below)
the features of all spikes in the cluster (simplified CSC, see below)
the spike labels
the spike times
Within each cluster, spikes may be organized in chronological order.
We assume that when looking at a cluster, we can load the entire cluster data in memory.
Reading the masks, features, and spike information for a given cluster is fast.
The features can be stored in a simplified CSC sparse format:
We consider the dense feature array with only the spikes belonging to the cluster
We remove all columns containing only zeros (= most columns in principle)
We store the indices of the columns that are kept
We need an abstraction layer that lets us read and write data without worrying about the way data is stored.
Example of an API:
ds = DataStore()

### Automatic clustering.

# Add data for one or several spikes.
# If spike_clusters is None, all data is put in a single file.
ds.add(spike_labels=None, spike_clusters=None, spike_times=None,
       features=None, masks=None)

### Manual clustering.

# Reorganize the structure knowing that some clusters have been merged.
ds.merge(clusters, to)

# Reorganize the structure with a new clustering of all or a subset of
# the spikes.
ds.recluster(spike_clusters, spike_labels=None)

# Getting the data.
masks = ds.masks[my_cluster]  # a SparseCSC array
fet = ds.features[my_cluster]  # a SparseCSC array
spikes = ds.spike_labels[my_cluster]
times = ds.spike_times[my_cluster]

### Import/Export.
ds.load(features=features,  # a dense array or a SparseCSR array
        masks=masks,
        spike_times=spike_times...)
ds.save('my_file.h5')
Example of the tree structure:
/
    00000/
        00001.h5
        00002.h5
    00100/
        00013.h5
        00015.h5
    00200/
    ...
    01000/
"authors": [
"rossant"
],
"repo": "kwikteam/phy",
"url": "https://github.com/kwikteam/phy/issues/58",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
685282796 | Exception handling is too greedy
While implementing some next.jdbc support for postgres types (json, bytea, etc), I ran into a case where gungnir ate an exception it couldn't handle.
Gungnir is eating all Exceptions, yet the exception->map multimethod can only handle exceptions of type SQLException (see stack trace below).
I believe the solution to this is to catch only SQLExceptions and let others bubble up. Thoughts @kwrooijen ?
gungnir.query/save! query.clj: 104
gungnir.query/save! query.clj: 108
gungnir.database/insert! database.clj: 259
gungnir.database/execute-one! database.clj: 187
gungnir.database/execute-one! database.clj: 195
...
gungnir.database/eval47644/fn database.clj: 158
java.lang.ClassCastException: class java.lang.IllegalArgumentException cannot be cast to class java.sql.SQLException (java.lang.IllegalArgumentException is in module java.base of loader 'bootstrap'; java.sql.SQLException is in module java.sql of loader 'platform')
Sounds like a reasonable solution to me
| gharchive/issue | 2020-08-25T08:27:19 | 2025-04-01T06:44:45.270788 | {
"authors": [
"Ramblurr",
"kwrooijen"
],
"repo": "kwrooijen/gungnir",
"url": "https://github.com/kwrooijen/gungnir/issues/19",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1410567182 | sqlc.narg does not work with strict_function_checks enabled
Version
1.15.0
What happened?
sqlc.narg does not work with strict_function_checks enabled; it throws an error saying the narg function is not found.
Relevant log output
No response
Database schema
CREATE TABLE foo (bar text not null, maybe_bar text);
SQL queries
-- name: IdentOnNullable :one
SELECT maybe_bar FROM foo WHERE maybe_bar = sqlc.narg(maybe_bar);
Configuration
- schema: "query.sql"
  queries: "query.sql"
  engine: "postgresql"
  strict_function_checks: true
  gen:
    go:
      package: "querytest"
      out: "./"
      sql_package: "pgx/v4"
      emit_prepared_queries: true
      emit_json_tags: true
      emit_db_tags: true
      json_tags_case_style: snake
Playground URL
No response
What operating system are you using?
Windows
What database engines are you using?
PostgreSQL
What type of code are you generating?
Go
Fixed by https://github.com/kyleconroy/sqlc/pull/1814, will go out in the next release.
| gharchive/issue | 2022-10-16T17:57:50 | 2025-04-01T06:44:45.336116 | {
"authors": [
"brlala",
"kyleconroy"
],
"repo": "kyleconroy/sqlc",
"url": "https://github.com/kyleconroy/sqlc/issues/1900",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1962388490 | Local File Support.
Hey Kyle, I was wondering if you're planning to add local file support in the future so we can load soundings from EOL Projects such as VORTEX and PERILS. Sharppy does a good job at displaying everything needed for it but at the same time not everyone can seem to get Sharppy to work due to script errors, plus the sounderpy layout is just very easy on the eyes.
Hey @CocoasColas! Thanks for the idea.
Can you attach a sample file or two that I can work with? I might be able to add something to make this possible.
Are these files usually formatted consistently?
Yeah, I'll send a bit of the files from TORUS-2019 in the C Plains, specifically 5/20/19. I'll also add the readme from EOL. They are formatted in .csv which I didn't expect and it's a lot of data since they did it at quite the high frequencies.
Far_Field_MW41_output_20190520_165957.csv
Far_Field_MW41_output_20190520_192909.csv
Far_Field_MW41_output_20190520_203418.csv
Far_Field_MW41_output_20190520_214246.csv
Far_Field_MW41_output_20190520_224224.csv
LIDAR_1_MW41_output_20190520_205240.csv
LIDAR_1_MW41_output_20190520_220607.csv
LIDAR_1_MW41_output_20190520_232343.csv
Probe_1_MW41_output_20190520_124207.csv
Probe_1_MW41_output_20190520_160048.csv
Probe_1_MW41_output_20190520_181720.csv
README_NSSLsonde_TORUS_2019.pdf
Ok so I sat down and wrote this up really quickly so you'll have a spot to get started from if you'd like to use this soon. I tested it on a single file but if they are the same then this should work for each.
# imports software
import pandas as pd
from metpy.units import units
import metpy.calc as mpcalc
import sounderpy as spy

# declare file
file = 'Far_Field_MW41_output_20190520_165957.csv'

# parse CSV into pandas df
obs_df = pd.read_csv(file, skiprows=2)
info_df = pd.read_csv(file)

# parse needed df values into a SounderPy `clean_data` dict of values
old_keys = ['Filtered Pressure (mb)', 'Filtered Altitude (m)', 'Filtered Temperature (K)', 'Filtered Dewpoint (K)']
new_keys = ['p', 'z', 'T', 'Td']
units_list = ['hPa', 'meter', 'K', 'K']

clean_data = {}
for old_key, new_key, unit in zip(old_keys, new_keys, units_list):
    clean_data[new_key] = (obs_df[old_key].values)*units(unit)

clean_data['u'], clean_data['v'] = mpcalc.wind_components(((obs_df['Filtered Wind Spd (m/s)'].values)*1.94384)*units('kts'),
                                                          (obs_df['Filtered Wind Dir'].values)*units.deg)

# create dict of "site info" data -- I just quickly made this part up and it can be changed
clean_data['site_info'] = {
    'site-id'     : info_df.iloc[0][0],
    'site-name'   : 'Far_Field_MW41',
    'site-lctn'   : 'none',
    'site-latlon' : [obs_df['Filtered Longitude'][0],
                     obs_df['Filtered Latitude'][0]],
    'site-elv'    : obs_df['Filtered Altitude (m)'][0],
    'source'      : 'TORUS-2019 FIELD CAMPAIGN OBSERVED PROFILE',
    'model'       : 'none',
    'fcst-hour'   : f'none',
    'run-time'    : ['none', 'none', 'none', 'none'],
    'valid-time'  : [info_df.iloc[2][0][1:5], info_df.iloc[2][0][6:8], info_df.iloc[2][0][9:11], info_df.iloc[2][0][12:17]]}

spy.metpy_sounding(clean_data)
Ope, quick note: I put lat/lon backwards above.
It should be:
'site-latlon' : [obs_df['Filtered Latitude'][0],
                 obs_df['Filtered Longitude'][0]],
Thanks Kyle!
| gharchive/issue | 2023-10-25T22:41:49 | 2025-04-01T06:44:45.347910 | {
"authors": [
"CocoasColas",
"kylejgillett"
],
"repo": "kylejgillett/sounderpy",
"url": "https://github.com/kylejgillett/sounderpy/issues/15",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
156822090 | Update README.md
Typo fix
Thanks!
| gharchive/pull-request | 2016-05-25T18:44:23 | 2025-04-01T06:44:45.349351 | {
"authors": [
"kylemanna",
"rgarrigue"
],
"repo": "kylemanna/docker-openvpn",
"url": "https://github.com/kylemanna/docker-openvpn/pull/130",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
750660833 | Improve parallel-installation logging
Description
Changes proposed in this pull request:
Pass logging function as a configurable parameter
Add log prefix to internal packages
Related issue(s)
https://github.com/kyma-project/kyma/issues/9347
/meow
| gharchive/pull-request | 2020-11-25T09:45:08 | 2025-04-01T06:44:45.353100 | {
"authors": [
"Tomasz-Smelcerz-SAP",
"colunira"
],
"repo": "kyma-incubator/hydroform",
"url": "https://github.com/kyma-incubator/hydroform/pull/143",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1778728283 | Remove api prefix from the tunnel's URL
Gardener no longer allows creating certificates for API server subdomains. So if the DNS name is of the form cp.api.myserver.com, and api.myserver.com is the API server domain, Gardener will fail to create a certificate.
This PR contains the following changes:
Changing Connectivity Proxy's tunnel address to not contain api prefix
Fixing configuration for existent clusters
/test pre-main-reconciler-lint
/test all
/test pre-main-reconciler-cli-kyma-prev-to-last-release-upgrade-k3d
| gharchive/pull-request | 2023-06-28T11:12:47 | 2025-04-01T06:44:45.355857 | {
"authors": [
"akgalwas",
"m00g3n"
],
"repo": "kyma-incubator/reconciler",
"url": "https://github.com/kyma-incubator/reconciler/pull/1370",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2578078162 | chore: add ref for checkout in ACC GHA
Description
Changes proposed in this pull request:
chore: add ref for checkout
Related issue(s)
Definition of done
[x] The PR's title starts with one of the following prefixes:
feat: A new feature
fix: A bug fix
docs: Documentation only changes
refactor: A code change that neither fixes a bug nor adds a feature
test: Adding tests
chore: Maintenance changes to the build process or auxiliary tools, libraries, workflows, etc.
[x] Related issues are linked. To link internal trackers, use the issue IDs like backlog#4567
[x] Explain clearly why you created the PR and what changes it introduces
[x] All necessary steps are delivered, for example, tests, documentation, merging
/override tide
| gharchive/pull-request | 2024-10-10T08:32:37 | 2025-04-01T06:44:45.360407 | {
"authors": [
"mrCherry97"
],
"repo": "kyma-project/busola",
"url": "https://github.com/kyma-project/busola/pull/3398",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
788022239 | Fix converting orchestration ID in upgrade ops
Description
Orchestration ID in runtime's upgradingKyma.data block is duplicated for all operations:
"upgradingKyma": {
    "data": [
        {
            "state": "in progress",
            "description": "kyma upgrade in progress",
            "createdAt": "2021-01-14T09:57:59.070884Z",
            "operationID": "71c63c56-54c5-420d-8e9a-d0949982d52e",
            "orchestrationID": "1c887a00-42fd-44a4-ae59-0f4118a9251d"
        },
        {
            "state": "in progress",
            "description": "Operation created",
            "createdAt": "2021-01-13T10:35:36.471386Z",
            "operationID": "afd519b4-6947-4ade-a1f7-6d8e23afdcc7",
            "orchestrationID": "1c887a00-42fd-44a4-ae59-0f4118a9251d"
        }
    ]
}
/test pre-master-control-plane-gke-integration
/test pre-master-control-plane-gke-integration
/retest
/retest
/retest
/retest
| gharchive/pull-request | 2021-01-18T07:55:21 | 2025-04-01T06:44:45.376139 | {
"authors": [
"ebensom"
],
"repo": "kyma-project/control-plane",
"url": "https://github.com/kyma-project/control-plane/pull/449",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2011669910 | Kim benchmark improvements
Description
Changes proposed in this pull request:
the output file name is based on date and time, so it will be possible to easily revisit or possibly undo multiple benchmark runs
fixes an issue where the output of the script couldn't be piped to kubectl apply because runtimeIDs were not lowercase
adds documentation on how to use benchmark.sh
Related issue(s)
Improves https://github.com/kyma-project/infrastructure-manager/pull/71
wrong fork was selected
| gharchive/pull-request | 2023-11-27T07:34:19 | 2025-04-01T06:44:45.380201 | {
"authors": [
"Disper"
],
"repo": "kyma-project/infrastructure-manager",
"url": "https://github.com/kyma-project/infrastructure-manager/pull/72",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
665755811 | Error while installing rafter chart
Please provide more context:
How did you install the chart? Directly? From CLI? Installer?
What version?
What environment?
What helm version do you use?
It will help a lot to investigate the issue.
Closed due to inactivity
| gharchive/issue | 2020-07-26T10:37:57 | 2025-04-01T06:44:45.398982 | {
"authors": [
"AronChristopher93",
"pbochynski"
],
"repo": "kyma-project/rafter",
"url": "https://github.com/kyma-project/rafter/issues/78",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2056235311 | Need a provision to hide the tags of selected filter under the dropdown component.
I am creating a filtering module with multiple multi-select dropdowns. The dropdowns mentioned add tags at the bottom for selected options. The UX provided for the page required us to add another section with all the selected options.
In Shidoka the tags are always added, and there is duplication of tags. Therefore we need a provision to hide the tags of selected options under the dropdown component (multi-select variant).
Since this is the intended design for the multi-select, we will have to raise this to the design team as an enhancement request.
@vaishnavi-dhakankar can you give us the use case in more detail? It's not clear when or how this use case gets used.
@cunningham-kyndryl As per the design shared with us, we need to show all the selected options from multiple dropdowns in a separate section with a single clear action to clear the selection.
design below:
However, with the current implementation of the Shidoka dropdown, we see additional tags (blue tags) just below the dropdown (see previous image), which causes duplication of tags. We want a flag based on which we could show/hide these blue tags.
@vaishnavi-dhakankar can you show the design that needs this? In the DD, the user can unselect all sections by clicking on the black tag within the DD. Does this not solve that problem?
I think they want to be able to reset all of the dropdowns/filters with on click, whereas the black tag in the dropdown only clears that one dropdown.
@brian-patrick-3 @cunningham-kyndryl I have already discussed it with Robert and Manoj; they will get back to you for further discussion.
:tada: This issue has been resolved in version 1.1.16 :tada:
The release is available on:
npm package (@latest dist-tag)
GitHub release
Your semantic-release bot :package::rocket:
| gharchive/issue | 2023-12-26T09:42:59 | 2025-04-01T06:44:45.423175 | {
"authors": [
"brian-patrick-3",
"cunningham-kyndryl",
"vaishnavi-dhakankar"
],
"repo": "kyndryl-design-system/shidoka-applications",
"url": "https://github.com/kyndryl-design-system/shidoka-applications/issues/57",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2559626638 | 🛑 clinicafabregat.com is down
In f6048b0, clinicafabregat.com (https://clinicafabregat.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: clinicafabregat.com is back up in e3dfd95 after 10 minutes.
| gharchive/issue | 2024-10-01T15:46:14 | 2025-04-01T06:44:45.431591 | {
"authors": [
"kyryl-bogach"
],
"repo": "kyryl-bogach/upptime",
"url": "https://github.com/kyryl-bogach/upptime/issues/1717",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1822193855 | 🛑 grazindafranco.com is down
In 0af1120, grazindafranco.com (https://grazindafranco.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: grazindafranco.com is back up in a3a03ff.
| gharchive/issue | 2023-07-26T11:25:15 | 2025-04-01T06:44:45.434894 | {
"authors": [
"kyryl-bogach"
],
"repo": "kyryl-bogach/upptime",
"url": "https://github.com/kyryl-bogach/upptime/issues/174",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1902148247 | 🛑 senoshaidodelasmanos.love is down
In 334b96c, senoshaidodelasmanos.love (https://senoshaidodelasmanos.love) was down:
HTTP code: 0
Response time: 0 ms
Resolved: senoshaidodelasmanos.love is back up in 5b3c14a after 1 hour, 24 minutes.
| gharchive/issue | 2023-09-19T03:36:04 | 2025-04-01T06:44:45.437653 | {
"authors": [
"kyryl-bogach"
],
"repo": "kyryl-bogach/upptime",
"url": "https://github.com/kyryl-bogach/upptime/issues/977",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1263147027 | [Feature] Set OFPT_FLOW_MOD and OFPT_BARRIER_REQUEST message priority
Description of the change
Set KytosEvent priority for OFPT_FLOW_MOD and OFPT_BARRIER_REQUEST
This PR makes use of the msg_out queue priority infrastructure that's being delivered on this kytos core PR, and it also depends on this of_core PR
Looks good to me
Appreciated your review, Italo.
| gharchive/pull-request | 2022-06-07T11:33:10 | 2025-04-01T06:44:45.440184 | {
"authors": [
"viniarck"
],
"repo": "kytos-ng/flow_manager",
"url": "https://github.com/kytos-ng/flow_manager/pull/88",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1595289288 | Add link to homepage
Man, I forgot to add the link to the homepage, oops 😅 Turning the h1 into a link should work
Completed in 23a83e9ed322d742466917a9866d0daba5a00506
| gharchive/issue | 2023-02-22T15:13:37 | 2025-04-01T06:44:45.441189 | {
"authors": [
"kytta"
],
"repo": "kytta/www",
"url": "https://github.com/kytta/www/issues/11",
"license": "ISC",
"license_type": "permissive",
"license_source": "github-api"
} |
972421639 | Yandex Object Storage Target
Motivation
Hi, I'm from the Yandex.Cloud solution architect team. I believe that Kyverno is the best way to manage Kubernetes policies, but Yandex.Cloud users demand export to Yandex.Cloud storages.
Feature
Yandex Cloud has multiple services that can be targets, but the most demanded output right now is Yandex.Storage, which has an S3 API.
Additional context
I could implement this feature on my own; recently I added Yandex.Storage as part of falcosidekick https://github.com/falcosecurity/falcosidekick/pull/261
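Because Yandex.Storage speaks the S3 wire protocol, an existing S3 target mostly needs a configurable endpoint. A minimal sketch of the client settings involved (endpoint and region taken from Yandex's public docs; the helper name and bucket are illustrative, and this is not policy-reporter code):

```python
def yandex_s3_client_kwargs():
    # Keyword arguments that point a standard AWS-style S3 client at
    # Yandex Object Storage; credentials come from the usual AWS
    # environment variables or a static key pair.
    return {
        "service_name": "s3",
        "endpoint_url": "https://storage.yandexcloud.net",
        "region_name": "ru-central1",
    }

# Hypothetical usage with boto3 (if installed):
#   import boto3
#   s3 = boto3.client(**yandex_s3_client_kwargs())
#   s3.put_object(Bucket="policy-reports", Key="report.json", Body=b"{}")
```

Any S3-compatible target implementation that exposes these two settings should work against Yandex.Storage without further changes.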
hey, thank you for this suggestion. You are welcome to implement it on your own. Let me know if you need any help.
Thank you for your contribution. Yandex is available with Helm version v1.11.0. Would be great if you can test it.
| gharchive/issue | 2021-08-17T08:14:11 | 2025-04-01T06:44:45.467270 | {
"authors": [
"fjogeleit",
"nar3k"
],
"repo": "kyverno/policy-reporter",
"url": "https://github.com/kyverno/policy-reporter/issues/55",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2104711506 | Update validate.md
Removed data in Deny Rules section that is no longer accurate.
Related issue
Proposed Changes
Checklist
[x] I have read the contributing guidelines.
[x] I have inspected the website preview for accuracy.
[x] I have signed off my issue.
/cherry-pick release-1-11-0
| gharchive/pull-request | 2024-01-29T05:58:58 | 2025-04-01T06:44:45.470417 | {
"authors": [
"chipzoller",
"mviswanathsai"
],
"repo": "kyverno/website",
"url": "https://github.com/kyverno/website/pull/1120",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
301105289 | Fix compatibility with omniauth-oauth 1.4+
Re-implements callback_url removed by omniauth/omniauth-oauth2#70
oops, dupe of #4 :)
| gharchive/pull-request | 2018-02-28T17:05:00 | 2025-04-01T06:44:45.506894 | {
"authors": [
"jonlunsford"
],
"repo": "l1h3r/omniauth-infusionsoft",
"url": "https://github.com/l1h3r/omniauth-infusionsoft/pull/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2410810589 | Add Edgeless (full)
Resolves L2B-4837
rebased on latest due to lil badge bug (calldata badge instead of customDA)
| gharchive/pull-request | 2024-07-16T10:37:08 | 2025-04-01T06:44:45.507893 | {
"authors": [
"lucadonnoh",
"sekuba"
],
"repo": "l2beat/l2beat",
"url": "https://github.com/l2beat/l2beat/pull/4497",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2378062230 | Add presets?
Following a recent discussion with @frostedoyster regarding changing the default PET hypers towards a more lightweight and fast model, we agreed that it would be better to offer different presets rather than a single set of default hyperparameters. Presets might be something like "fast," "medium," and "large." This issue is created to track the implementation of this feature. I guess it is up to me to address the presets themselves, but this feature also requires scaffolding infrastructure.
I think having small, medium, large presets will unfortunately be necessary because people can't be bothered to read the docs and change the parameters themselves
This is a great idea!
I think having at most three presets is good (more than three means you have to read the documentation again), and we could try to name the presets in the same way for all architectures, following the "good", "better", "best" convention.
Some possibilities would be
small/medium/large
fast/???/accurate
loose/???/tight (close to AIMS basis names)
upd. The main motivation for presets was the impression that PET with default hypers is slow, and thus, a 'light' preset was urgently needed. An investigation with @DavideTisi, though, revealed that the issue is that the LAMMPS MD interface is currently 40 times slower than intrinsic PET (see https://github.com/lab-cosmo/metatrain/issues/274). I guess there are two implications:
It is still a super nice feature, but priority can probably be low (at least from the point of view of PET)
the default PET preset should be the one that will correspond to the current default hypers
and [unrelated] 3) probably it is worth to think about naming to incorporate more than 3 presets if needed.
and [unrelated] 3) probably it is worth to think about naming to incorporate more than 3 presets if needed.
I disagree here. I don't think more than three preset is helpful for users. If they are using presets, it is because they don't know a lot about the architecture, and offering too many choices does not help. The presets are not here for people who want to tune the hypers, but as a way to balance speed vs accuracy easily.
and [unrelated] 3) probably it is worth to think about naming to incorporate more than 3 presets if needed.
I disagree here. I don't think more than three preset is helpful for users. If they are using presets, it is because they don't know a lot about the architecture, and offering too many choices does not help. The presets are not here for people who want to tune the hypers, but as a way to balance speed vs accuracy easily.
yeah, good point actually
Thought №2 about this. Probably, users who do not want to tune hypers want some kind of slider that represents a one-dimensional tradeoff between accuracy and computational efficiency (contrary to the multidimensional nature of hypers). And this 1D slider actually opens up space for a continuum of presets, not just 3.
No, this is just introducing a new hyper (the 1D "slider") that they don't want to tune either. I feel like "good", "better", "best" presets are the best way to think about these.
That can work; I do not have strong feelings about this anyway.
Small, medium, large?
Fast, balanced, accurate?
1, 2, 3?
| gharchive/issue | 2024-06-27T12:45:26 | 2025-04-01T06:44:45.526863 | {
"authors": [
"Luthaf",
"frostedoyster",
"spozdn"
],
"repo": "lab-cosmo/metatrain",
"url": "https://github.com/lab-cosmo/metatrain/issues/273",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
243558636 | change lumen style config getter to laravel style
Potentially resolves issue Call to undefined method 'configure'
It might be nice to support Lumen as well, but I think this is a necessary change to use the AwsSnsTopicChannel in Laravel
I added this fix for the Laravel projects and added a new service provider for Lumen 5.x projects
Thanks for your contribution, @usulix
| gharchive/pull-request | 2017-07-17T23:37:13 | 2025-04-01T06:44:45.532057 | {
"authors": [
"jeanpfs",
"usulix"
],
"repo": "lab123it/aws-sns",
"url": "https://github.com/lab123it/aws-sns/pull/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1369510780 | do not calculate contradicting elements if there are no elements from…
… both classes
resolves #226
Codecov Report
Merging #229 (493af5c) into main (676bcab) will decrease coverage by 0.04%.
The diff coverage is 0.00%.
@@ Coverage Diff @@
## main #229 +/- ##
==========================================
- Coverage 82.67% 82.63% -0.05%
==========================================
Files 56 56
Lines 3672 3674 +2
==========================================
Hits 3036 3036
- Misses 636 638 +2
| Impacted Files | Coverage Δ |
| --- | --- |
| label_sleuth/analysis_utils/labeling_reports.py | 25.31% <0.00%> (-0.66%) :arrow_down: |
:mega: We’re building smart automated test selection to slash your CI/CD build times. Learn more
| gharchive/pull-request | 2022-09-12T08:51:09 | 2025-04-01T06:44:45.551961 | {
"authors": [
"arielge",
"codecov-commenter"
],
"repo": "label-sleuth/label-sleuth",
"url": "https://github.com/label-sleuth/label-sleuth/pull/229",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1320477948 | Question: 安装报错exec remote command failed
As shown in the figure:
Run it on the 10.0.4.2 machine. Which version of k8s are you using?
@cuisongliu I did run this on the 10.0.4.2 machine. I'm using version v1.19.17. I also just ran into another issue, not sure if it's related: https://github.com/labring/sealos/issues/1420
Let me go delete it. This image has a problem; the latest version is 1.19.16. Embarrassing.
A bit unlucky; I've hit several pitfalls in a row.
| gharchive/issue | 2022-07-28T06:53:39 | 2025-04-01T06:44:45.602179 | {
"authors": [
"cuisongliu",
"itzhoujun"
],
"repo": "labring/sealos",
"url": "https://github.com/labring/sealos/issues/1421",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2217666574 | docs(DropDown): added docs
close #97
:tada: This PR is included in version 7.26.0 :tada:
The release is available on GitHub release
Your semantic-release bot :package::rocket:
| gharchive/pull-request | 2024-04-01T06:57:35 | 2025-04-01T06:44:45.611281 | {
"authors": [
"Bibazavr",
"vpsmolina"
],
"repo": "lad-tech/mobydick",
"url": "https://github.com/lad-tech/mobydick/pull/184",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
101267574 | It doesn't compile for me without these fixes
Include fixes should be merged for sure.
Private->Public fix is fine
The QDocument() constructor should probably be fixed properly, but it doesn't bother me that much since it works this way.
Hi. It's strange that it doesn't compile. For me it works like a charm. What compiler do you use? What is the error exactly? Have you tried to do git submodule update --init --recursive before compiling?
Changing the includes is definitely a good suggestion, but have you read the README file? I use gumbo as a git submodule, and your fix breaks the existing solution. It would be great if your fix worked for both cases (git submodule and system-wide installed gumbo).
As for the constructor: it was made private by design. You don't need to invoke it manually; just use QGumboDocument::parse("") instead. The constructor of QGumboNode is the same. I don't see any reason to change it now.
Why did you do it public? What is your case?
I'm sorry but I can't accept the request for now.
Of course I did load the submodule.
I'm using it directly under my project, not as a submodule or library.
But those include paths are clearly broken; you should at least update them. They probably work for you because you have gumbo installed globally under /usr/lib.
I have checked the project a couple minutes ago. I did the following steps
git clone git@github.com:lagner/QGumboParser.git
cd QGumboParser/ && git submodule update --init --recursive
open QtCreator -> Open Project -> Select QGumboParser.pro
build and run
As a result I have Totals: 10 passed, 0 failed, 0 skipped, 0 blacklisted. There are no errors or even warnings.
I don't have a system-wide gumbo install. But I checked it one more time to be completely sure
$:~/Projects/QGumboParser$ find /usr/lib -iname "*gumbo*"
$:~/Projects/QGumboParser$ find /usr/include/ -iname "*gumbo*"
$:~/Projects/QGumboParser$
It's OK, but by common logic those includes shouldn't work, because there is no file under "gumbo/gumbo.h" but there is "gumbo-parser/src/gumbo.h"
Now I got it. You are right this include should be changed. I will fix it as soon as possible.
Many thanks.
| gharchive/pull-request | 2015-08-16T14:17:01 | 2025-04-01T06:44:45.636345 | {
"authors": [
"lagner",
"moonshadow565"
],
"repo": "lagner/QGumboParser",
"url": "https://github.com/lagner/QGumboParser/pull/2",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1973912168 | We need explicitly specify the Python version in context
It is also necessary to explicitly specify the Python version in the context, otherwise a lot of the solutions will be affected.
#2
| gharchive/issue | 2023-11-02T10:07:22 | 2025-04-01T06:44:45.648294 | {
"authors": [
"rainzee",
"yanyongyu"
],
"repo": "laike9m/Python-Type-Challenges",
"url": "https://github.com/laike9m/Python-Type-Challenges/issues/13",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2351607537 | Add support for validating EIP-4844 transactions
https://eips.ethereum.org/EIPS/eip-4844
Closed by #73, as the EVM already performs these validations before executing
| gharchive/issue | 2024-06-13T16:47:07 | 2025-04-01T06:44:45.664197 | {
"authors": [
"fmoletta",
"mpaulucci"
],
"repo": "lambdaclass/ethereum_rust",
"url": "https://github.com/lambdaclass/ethereum_rust/issues/26",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1910643002 | feat(fields): add support for u64 Goldilocks
Description
Adds support for u64 Goldilocks #393
[x] New feature
Checklist
[x] Linked to Github Issue
[ ] Unit tests added
[ ] This change requires new documentation.
[ ] Documentation has been added/updated.
[ ] This change is an Optimization
[ ] Benchmarks added/run
Codecov Report
Merging #573 (55d60eb) into main (976c237) will decrease coverage by 0.37%.
Report is 3 commits behind head on main.
The diff coverage is 0.00%.
@@ Coverage Diff @@
## main #573 +/- ##
==========================================
- Coverage 95.46% 95.09% -0.37%
==========================================
Files 112 113 +1
Lines 18811 19088 +277
==========================================
+ Hits 17957 18152 +195
- Misses 854 936 +82
| Files Changed | Coverage Δ |
| --- | --- |
| math/src/field/fields/u64_goldilocks_field.rs | 0.00% <0.00%> (ø) |
... and 8 files with indirect coverage changes
@MauroToscano @schouhy ready for review
| gharchive/pull-request | 2023-09-25T04:38:35 | 2025-04-01T06:44:45.673124 | {
"authors": [
"PatStiles",
"codecov-commenter"
],
"repo": "lambdaclass/lambdaworks",
"url": "https://github.com/lambdaclass/lambdaworks/pull/573",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1907277245 | Remove serde_json_pythonic.
Remove serde_json_pythonic
Description
Checklist
[ ] Linked to Github Issue
[ ] Unit tests added
[ ] Integration tests added.
[ ] This change requires new documentation.
[ ] Documentation has been added/updated.
Codecov Report
Merging #1047 (0c5a7c9) into main (a7e2e56) will decrease coverage by 0.47%.
Report is 1 commits behind head on main.
The diff coverage is 93.18%.
@@ Coverage Diff @@
## main #1047 +/- ##
==========================================
- Coverage 90.49% 90.03% -0.47%
==========================================
Files 54 49 -5
Lines 14009 13205 -804
==========================================
- Hits 12678 11889 -789
+ Misses 1331 1316 -15
| Files Changed | Coverage Δ |
| --- | --- |
| ...c/core/contract_address/sierra_contract_address.rs | 92.36% <93.02%> (+0.05%) :arrow_up: |
| ...re/contract_address/deprecated_contract_address.rs | 93.83% <100.00%> (ø) |
... and 10 files with indirect coverage changes
| gharchive/pull-request | 2023-09-21T15:44:09 | 2025-04-01T06:44:45.682616 | {
"authors": [
"azteca1998",
"codecov-commenter"
],
"repo": "lambdaclass/starknet_in_rust",
"url": "https://github.com/lambdaclass/starknet_in_rust/pull/1047",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2730033641 | ✨ Add an artifact loader for .yaml
Builds on and extends https://github.com/laminlabs/lamindb/pull/2172
test (storage) failure was fixed in https://github.com/laminlabs/lamindb/pull/2266, i don't want to introduce conflicts so leave it failing here.
| gharchive/pull-request | 2024-12-10T12:47:39 | 2025-04-01T06:44:45.693985 | {
"authors": [
"Koncopd"
],
"repo": "laminlabs/lamindb",
"url": "https://github.com/laminlabs/lamindb/pull/2270",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1014526839 | Building on Ubuntu Linux
After encountering problems on Windows, I tried to compile the files on Ubuntu Linux instead, and I managed to successfully compile the main.cpp file (the hello world program). I then moved the sample files out of the source folder, copied only BlinkyTest.cpp and Test.h into the source folder, and tried to redo the same process. I got this error:
/usr/lib/gcc/arm-none-eabi/9.2.1/../../../arm-none-eabi/bin/ld: /usr/lib/gcc/arm-none-eabi/9.2.1/../../../arm-none-eabi/lib/thumb/v7e-m+fp/softfp/crt0.o: in function `_mainCRTStartup':
/build/newlib-CVVEyx/newlib-3.3.0/build/arm-none-eabi/thumb/v7e-m+fp/softfp/libgloss/arm/semihv2m/../../../../../../../../libgloss/arm/crt0.S:545: undefined reference to `main'
Did I do any of the steps wrong, or how can I use the compiler properly?
Hi @TDanielI,
The test files don't have a main() function to run; they contain test functions to be called inside the original main.cpp file.
What syntax should I use to call those functions?
Only a few take arguments: https://github.com/lancaster-university/microbit-v2-samples/blob/master/source/samples/Tests.h
You need to include that header file and then call the functions.
Hi @TDanielI, were you able to get this working?
| gharchive/issue | 2021-10-03T20:13:46 | 2025-04-01T06:44:45.726974 | {
"authors": [
"TDanielI",
"microbit-carlos"
],
"repo": "lancaster-university/microbit-v2-samples",
"url": "https://github.com/lancaster-university/microbit-v2-samples/issues/35",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2359532124 | 🛑 vipgifts.net is down
In fb408bb, vipgifts.net (https://vipgifts.net) was down:
HTTP code: 567
Response time: 393 ms
Resolved: vipgifts.net is back up in 5ec41a9 after 18 minutes.
| gharchive/issue | 2024-06-18T10:37:05 | 2025-04-01T06:44:45.759500 | {
"authors": [
"lanen"
],
"repo": "lanen/bs-site",
"url": "https://github.com/lanen/bs-site/issues/11795",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1904155117 | 🛑 rexingsports.cn is down
In 5331cf7, rexingsports.cn (https://www.rexingsports.cn) was down:
HTTP code: 567
Response time: 666 ms
Resolved: rexingsports.cn is back up in 6c0d2eb after 1 hour, 28 minutes.
| gharchive/issue | 2023-09-20T05:14:37 | 2025-04-01T06:44:45.762575 | {
"authors": [
"lanen"
],
"repo": "lanen/bs-site",
"url": "https://github.com/lanen/bs-site/issues/4594",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1979694421 | 🛑 vipgifts.net is down
In 3b8383f, vipgifts.net (https://vipgifts.net) was down:
HTTP code: 567
Response time: 528 ms
Resolved: vipgifts.net is back up in e03f0b6 after 18 minutes.
| gharchive/issue | 2023-11-06T17:38:41 | 2025-04-01T06:44:45.765897 | {
"authors": [
"lanen"
],
"repo": "lanen/bs-site",
"url": "https://github.com/lanen/bs-site/issues/6946",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2063000576 | 🛑 rexingsports.cn is down
In 354f469, rexingsports.cn (https://www.rexingsports.cn) was down:
HTTP code: 567
Response time: 865 ms
Resolved: rexingsports.cn is back up in 04dbe69 after 15 minutes.
| gharchive/issue | 2024-01-02T21:55:13 | 2025-04-01T06:44:45.768906 | {
"authors": [
"lanen"
],
"repo": "lanen/bs-site",
"url": "https://github.com/lanen/bs-site/issues/9288",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2092065223 | 🛑 vipgifts.net is down
In 838a055, vipgifts.net (https://vipgifts.net) was down:
HTTP code: 567
Response time: 920 ms
Resolved: vipgifts.net is back up in 4040ab5 after 17 minutes.
| gharchive/issue | 2024-01-20T14:03:29 | 2025-04-01T06:44:45.771978 | {
"authors": [
"lanen"
],
"repo": "lanen/bs-site",
"url": "https://github.com/lanen/bs-site/issues/9509",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |