| added (string, 2025-04-01 04:05:38 – 2025-04-01 07:14:06) | created (timestamp[us], 2001-10-09 16:19:16 – 2025-01-01 03:51:31) | id (string, length 4–10) | metadata (dict) | source (string, 2 classes) | text (string, length 0–1.61M) |
|---|---|---|---|---|---|
2025-04-01T06:39:19.429356
| 2018-05-08T18:54:39
|
321305875
|
{
"authors": [
"arschles",
"kikisdeliveryservice"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7670",
"repo": "kubernetes-incubator/service-catalog",
"url": "https://github.com/kubernetes-incubator/service-catalog/pull/2023"
}
|
gharchive/pull-request
|
Add zsh completion to svcat
closes: #1912
This is my first PR for svcat.
Issue 1912 requires two steps to close:
[x] 1. bump spf13/cobra to 0.0.1 (or above)
[x] 2. add zsh completion - zsh completion for svcat is added
Hi @carolynvs I wasn't able to get the GenZshCompletion() working a few iterations back so I ended up implementing the kubectl / helm way (which does indeed work).
As requested, I re-implemented using cobra's GenZshCompletion. It still doesn't seem to be working for me, but you can pull and try it. We can always revert to the previous change if you confirm that using GenZshCompletion() is no good.
@carolynvs
yes!
svcat get b actually has two options: bindings and brokers. so can confirm:
svcat get b --> shows on next line: bindings brokers
svcat get bi --> svcat get bindings
svcat get br --> svcat get brokers
LGTM, but I'll let folks from another org give the final LGTM label
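The prefix behavior confirmed above can be sketched as follows (an illustrative Python snippet, not svcat's actual Go implementation; the resource list is an assumption for the example):

```python
# Minimal sketch of prefix completion: a prefix matching several
# resource names lists all candidates; an unambiguous prefix
# resolves to exactly one name.
RESOURCES = ["bindings", "brokers", "classes", "instances", "plans"]

def complete(prefix: str) -> list[str]:
    """Return all resource names starting with the given prefix."""
    return [r for r in RESOURCES if r.startswith(prefix)]

print(complete("b"))   # two candidates, shown on the next line
print(complete("bi"))  # unambiguous: bindings
print(complete("br"))  # unambiguous: brokers
```

This mirrors the shell behavior described: `svcat get b` lists both `bindings` and `brokers`, while `bi` and `br` each complete to a single resource.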
|
2025-04-01T06:39:19.518216
| 2022-12-08T21:30:45
|
1485433538
|
{
"authors": [
"hrak",
"wanyufe"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7671",
"repo": "kubernetes-sigs/cluster-api-provider-cloudstack",
"url": "https://github.com/kubernetes-sigs/cluster-api-provider-cloudstack/issues/202"
}
|
gharchive/issue
|
CloudStackMachine could not be deleted when using invalid serviceOffering
/kind bug
What steps did you take and what happened:
[A clear and concise description of what the bug is.]
E2E testing detected a bug: when an invalid serviceOffering is configured while creating a CloudStackMachine, that CloudStackMachine cannot be deleted due to a missing instanceID.
What did you expect to happen:
The CloudStackMachine should be deleted successfully without blocking cluster deletion.
Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]
This is for admin documentation purposes; a PR has been created to address this: https://github.com/kubernetes-sigs/cluster-api-provider-cloudstack/pull/201
Environment:
Cluster-api-provider-cloudstack version:
Kubernetes version: (use kubectl version):
OS (e.g. from /etc/os-release):
This issue seems to be addressed by #201 already. Does it need to stay open?
|
2025-04-01T06:39:19.578391
| 2023-09-17T15:07:50
|
1899814530
|
{
"authors": [
"SergeyKanzhelev",
"byeong0",
"saschagrunert"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7672",
"repo": "kubernetes-sigs/cri-tools",
"url": "https://github.com/kubernetes-sigs/cri-tools/issues/1266"
}
|
gharchive/issue
|
symlink permission denied error occurs when pulling images with the crictl command
What happened:
When using docker pull, the image is pulled normally, but when using crictl to pull the image, the following symlink permission denied error occurs.
$ sudo crictl pull docker.io/calico/node:v3.25.1
DEBU[0000] get image connection
DEBU[0000] PullImageRequest: &PullImageRequest{Image:&ImageSpec{Image:docker.io/calico/node:v3.25.1,Annotations:map[string]string{},},Auth:nil,SandboxConfig:nil,}
E0917 23:55:07.345848 21476 remote_image.go:171] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/calico/node:v3.25.1\": failed to extract layer sha256:b1d7f02a32791d579abb161bccbf82ba1deaa7fb57805c93e84ddd30f0cb9560: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount3830924437: symlink /usr/lib/systemd/system/reboot.target /var/lib/containerd/tmpmounts/containerd-mount3830924437/etc/systemd/system/ctrl-alt-del.target: permission denied: unknown" image="docker.io/calico/node:v3.25.1"
FATA[0001] pulling image: rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/calico/node:v3.25.1": failed to extract layer sha256:b1d7f02a32791d579abb161bccbf82ba1deaa7fb57805c93e84ddd30f0cb9560: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount3830924437: symlink /usr/lib/systemd/system/reboot.target /var/lib/containerd/tmpmounts/containerd-mount3830924437/etc/systemd/system/ctrl-alt-del.target: permission denied: unknown
What you expected to happen:
The image should pull normally with crictl as well.
How to reproduce it (as minimally and precisely as possible):
$ sudo crictl pull docker.io/calico/node:v3.25.1
Anything else we need to know?:
Environment:
Container runtime or hardware configuration:
OS (e.g: cat /etc/os-release):
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
Kernel (e.g. uname -a):
Linux 3.10.0-1160.59.1.el7.x86_64 #1 SMP Wed Feb 23 16:47:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
Others:
Docker version 24.0.6, build ed223bc
containerd containerd.io 1.6.22 8165feabfdfe38c65b599c4993d227328c231fca
crictl version v1.26.0
kubernetes-cni 1.2.0-0
kubeadm 1.28.2-0
kubectl 1.28.2-0
kubelet 1.28.2-0
Hey @byeong0, thank you for the report! This looks like an issue with containerd rather than cri-tools.
I checked on containerd 1.6.18 and sudo crictl pull docker.io/calico/node:v3.25.1 worked. Can you check with ctr using verbose logging? Is there an issue with any other images?
/close
To continue on this, please open the bug in the containerd repository or ask in the containerd Slack for support.
|
2025-04-01T06:39:19.584403
| 2019-05-08T02:06:37
|
441517770
|
{
"authors": [
"font",
"marun"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7673",
"repo": "kubernetes-sigs/federation-v2",
"url": "https://github.com/kubernetes-sigs/federation-v2/pull/857"
}
|
gharchive/pull-request
|
Switch from glog to klog
#695 is WIP, but this switch to use klog is needed in order to enable webhook dependencies in a follow-up PR.
Posting on behalf of @pmorie who did the work.
Fixes #694
@font I guess neither you nor @pmorie noticed that @poothia had already started work on this, and that I made the following suggestion:
https://github.com/kubernetes-sigs/federation-v2/pull/695#issuecomment-477244315
@font I guess neither you nor @pmorie noticed that @poothia had already started work on this in #695?
@poothia fyi, you can close yours.
@marun I commented on it noting that it looks like WIP and hasn't been updated recently. It's not my intent to take the work from @poothia (apologies!); it was more to see it through, since a follow-up PR for the webhook framework that depends on it is forthcoming.
@marun @xunpan I've pushed a commit that reorders things based on our converged convention. We should really document that in the development guide.
Thanks @font and @pmorie!
/lgtm
|
2025-04-01T06:39:19.592036
| 2024-07-26T13:42:01
|
2432269961
|
{
"authors": [
"arkodg",
"mikemorris",
"xtineskim"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7674",
"repo": "kubernetes-sigs/gateway-api",
"url": "https://github.com/kubernetes-sigs/gateway-api/pull/3219"
}
|
gharchive/pull-request
|
GRPCRoute timeout - GEP-3139
What type of PR is this?
/kind gep
What this PR does / why we need it:
Staying consistent with the HTTPRoute timeout feature, this opens a GEP to allow for GRPCRoute timeouts.
Which issue(s) this PR fixes:
Fixes https://github.com/kubernetes-sigs/gateway-api/issues/3139
Does this PR introduce a user-facing change?:
cc @robscott @arkodg
Thanks for authoring this GEP @xtineskim, and @gnossen for reviewing this in depth!
Thinking out loud for gRPC timeouts: thoughts on the semantics below for GRPCRoute?
If no timeout section is defined, rely on grpc-timeout header for deciding a per request timeout https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-HTTP2.md#requests
timeouts.maxStreamDuration which overrides grpc-timeout header timeout and instead enforces a HTTP/2 stream duration timeout
Thanks @arkodg 😄 !
for your point here:
timeouts.maxStreamDuration which overrides grpc-timeout header timeout and instead enforces a HTTP/2 stream duration timeout
I wonder if this should be the opposite - if a request were to propagate to another service, could it just continually be growing in duration 🤔
Thanks @arkodg 😄 ! for your point here:
timeouts.maxStreamDuration which overrides grpc-timeout header timeout and instead enforces a HTTP/2 stream duration timeout
I wonder if this should be the opposite - if a request were to propagate to another service, could it just continually be growing in duration 🤔
i meant the timeouts.maxStreamDuration would override the timeout value defined in the header, but not overwrite the grpc-timeout header itself
/remove-lifecycle stale
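For illustration only, the semantics discussed above might surface in a GRPCRoute roughly like this (a hypothetical sketch; the GEP is still under discussion, and the field names and placement shown here are not settled):

```yaml
# Hypothetical shape only -- not a finalized API.
apiVersion: gateway.networking.k8s.io/v1
kind: GRPCRoute
metadata:
  name: example-route
spec:
  rules:
  - backendRefs:
    - name: example-svc
      port: 50051
    timeouts:
      # Enforces an HTTP/2 stream duration limit, overriding the
      # timeout value carried in the grpc-timeout request header
      # (but not rewriting the header itself, per the comments above).
      maxStreamDuration: 10s
```

When no timeouts section is defined, the grpc-timeout request header alone would decide the per-request timeout, as proposed in the thread.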
|
2025-04-01T06:39:19.595431
| 2021-08-27T23:09:47
|
981653289
|
{
"authors": [
"jpeach",
"robscott",
"youngnick"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7675",
"repo": "kubernetes-sigs/gateway-api",
"url": "https://github.com/kubernetes-sigs/gateway-api/pull/839"
}
|
gharchive/pull-request
|
Another round of v1alpha2 cleanup
What type of PR is this?
/kind cleanup
What this PR does / why we need it:
This takes care of another big chunk of feedback from #780 as well as some of the items in #790.
Does this PR introduce a user-facing change?:
* "Controller" has been renamed to "ControllerName"
* "Admitted" condition has been renamed to "Accepted" and now defaults to an "Unknown" state instead of "False"
@youngnick @hbagdi @howardjohn Thanks for the great feedback on this! I think I've responded to everything, PTAL.
I think you got it @robscott, nice work.
/lgtm
/hold
for another lgtm though.
Just a couple of formatting nits.
/lgtm
/lgtm
/hold cancel
|
2025-04-01T06:39:19.605948
| 2021-04-08T17:44:48
|
853704248
|
{
"authors": [
"rikatz"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7676",
"repo": "kubernetes-sigs/kpng",
"url": "https://github.com/kubernetes-sigs/kpng/pull/2"
}
|
gharchive/pull-request
|
Correct kpng api subcommand
subcommands are conflicting, and I spent like 3 hours to figure out why :P
do not merge yet, forgot another part here :/
/close
will fix other things here before :D
|
2025-04-01T06:39:19.608914
| 2022-08-31T18:55:24
|
1357729985
|
{
"authors": [
"astoycos",
"dougsland"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7677",
"repo": "kubernetes-sigs/kpng",
"url": "https://github.com/kubernetes-sigs/kpng/pull/338"
}
|
gharchive/pull-request
|
Remove static header files
Remove static header files and instead rely on
UAPI headers from https://github.com/libbpf/libbpf/tree/master/include/uapi/linux
bpf helpers from https://github.com/libbpf/libbpf/tree/master/src
both from libbpf version v0.8.0
Add a new make target which handles downloading these headers if needed
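For illustration, such a download target might look roughly like this (the target name, file list, and paths here are hypothetical, not the PR's actual Makefile):

```make
# Sketch of a make target that fetches libbpf headers on demand.
LIBBPF_VERSION ?= v0.8.0
LIBBPF_URL     := https://raw.githubusercontent.com/libbpf/libbpf/$(LIBBPF_VERSION)

.PHONY: headers
headers: include/bpf_helpers.h  ## download libbpf headers if missing

include/bpf_helpers.h:
	mkdir -p include
	curl -sSfL -o $@ $(LIBBPF_URL)/src/bpf_helpers.h
```

Pinning LIBBPF_VERSION keeps the downloaded headers reproducible while avoiding vendored copies in the repository.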
/hold
I want to get some more opinions before merging this
/un-hold
/remove hold
/unhold
/lgtm
|
2025-04-01T06:39:19.612674
| 2023-05-16T09:41:19
|
1711647919
|
{
"authors": [
"kerthcet",
"sanposhiho"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7678",
"repo": "kubernetes-sigs/kube-scheduler-wasm-extension",
"url": "https://github.com/kubernetes-sigs/kube-scheduler-wasm-extension/pull/4"
}
|
gharchive/pull-request
|
update(docs): update readme to explain what it is briefly
What type of PR is this?
/kind documentation
What this PR does / why we need it:
update docs to explain what it is briefly
Which issue(s) this PR fixes:
Fixes #3
Special notes for your reviewer:
Does this PR introduce a user-facing change?
No.
Sorry, opened it because I wanted to check the bot's behavior. Actually, it's still WIP.
/hold
/kind documentation
/unhold
/label tide/merge-method-squash
PTAL @kerthcet @codefromthecrypt
@kerthcet could you check this one again please?
/lgtm
/label tide/merge-method-squash
|
2025-04-01T06:39:19.615670
| 2023-06-16T03:23:57
|
1759857407
|
{
"authors": [
"codefromthecrypt",
"sanposhiho"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7679",
"repo": "kubernetes-sigs/kube-scheduler-wasm-extension",
"url": "https://github.com/kubernetes-sigs/kube-scheduler-wasm-extension/pull/41"
}
|
gharchive/pull-request
|
Updates wazero to 1.2.1
What type of PR is this?
/kind cleanup
What this PR does / why we need it:
We're constantly tracking performance concerns, and updating to the latest wazero release makes notable improvements without any change to the guest.
Which issue(s) this PR fixes:
Special notes for your reviewer:
Performance related changes in the latest patch were thanks to @ncruces
Does this PR introduce a user-facing change?
NONE
What are the benchmark results of this change?
goos: darwin
goarch: arm64
pkg: sigs.k8s.io/kube-scheduler-wasm-extension/internal/e2e
│ v1.2.0.txt │ v1.2.1.txt │
│ sec/op │ sec/op vs base │
PluginFilter/noop-wat/params:_small-12 267.4n ± 2% 257.6n ± 4% -3.65% (p=0.024 n=6)
PluginFilter/noop-wat/params:_real-12 270.4n ± 1% 270.1n ± 1% ~ (p=0.623 n=6)
PluginFilter/noop/params:_small-12 333.2n ± 1% 329.2n ± 0% -1.19% (p=0.002 n=6)
PluginFilter/noop/params:_real-12 337.0n ± 0% 336.2n ± 1% ~ (p=0.701 n=6)
PluginFilter/test/params:_small-12 6.770µ ± 0% 6.279µ ± 0% -7.25% (p=0.002 n=6)
PluginFilter/test/params:_real-12 122.6µ ± 5% 114.7µ ± 1% -6.51% (p=0.002 n=6)
PluginScore/noop-wat/params:_small-12 256.2n ± 2% 257.8n ± 0% ~ (p=0.327 n=6)
PluginScore/noop-wat/params:_real-12 260.2n ± 1% 260.6n ± 2% ~ (p=0.331 n=6)
PluginScore/noop/params:_small-12 374.0n ± 1% 375.4n ± 1% ~ (p=0.626 n=6)
PluginScore/noop/params:_real-12 343.2n ± 1% 348.3n ± 1% +1.49% (p=0.004 n=6)
PluginScore/test/params:_small-12 3.786µ ± 1% 3.604µ ± 0% -4.82% (p=0.002 n=6)
PluginScore/test/params:_real-12 43.48µ ± 16% 46.18µ ± 1% ~ (p=0.394 n=6)
PluginFilterAndScore/noop-wat/params:_small-12 378.8n ± 3% 376.1n ± 1% ~ (p=0.061 n=6)
PluginFilterAndScore/noop-wat/params:_real-12 382.1n ± 1% 380.9n ± 0% ~ (p=0.074 n=6)
PluginFilterAndScore/noop/params:_small-12 576.7n ± 1% 578.0n ± 1% ~ (p=0.260 n=6)
PluginFilterAndScore/noop/params:_real-12 543.1n ± 1% 543.4n ± 0% ~ (p=0.509 n=6)
PluginFilterAndScore/test/params:_small-12 10.62µ ± 0% 10.06µ ± 1% -5.23% (p=0.002 n=6)
PluginFilterAndScore/test/params:_real-12 176.5µ ± 0% 168.0µ ± 1% -4.80% (p=0.002 n=6)
geomean 1.450µ 1.429µ -1.48%
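The geomean row above aggregates the per-benchmark time ratios into one summary figure. A minimal sketch of how such a geometric mean can be computed (illustrative Python; the ratio values below are made up for the example):

```python
import math

def geomean(values):
    """Geometric mean: the n-th root of the product of n positive values,
    computed via logs for numerical stability."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Example: per-benchmark time ratios (new/old); a geomean below 1.0
# means the new version is faster overall.
ratios = [0.9635, 1.0, 0.9881, 1.0]
print(round(geomean(ratios), 4))
```

Unlike an arithmetic mean, the geometric mean treats a 2x speedup and a 2x slowdown as cancelling out, which is why benchmark comparison tools report it.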
/lgtm
|
2025-04-01T06:39:19.663175
| 2023-07-29T02:41:34
|
1827279258
|
{
"authors": [
"BinL233",
"alculquicondor",
"mimowo",
"tenzen-y"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7680",
"repo": "kubernetes-sigs/kueue",
"url": "https://github.com/kubernetes-sigs/kueue/pull/1025"
}
|
gharchive/pull-request
|
Fix Flaky test: Kueue when Creating a Job With Queueing [It] Should unsuspend a job and set nodeSelectors
What type of PR is this?
/kind bug
What this PR does / why we need it:
Create a LongTimeout const for the e2e tests.
Pods in tests sometimes take a long time because of image pulling.
Which issue(s) this PR fixes:
Fixes #1021
Special notes for your reviewer:
Does this PR introduce a user-facing change?
NONE
/ok-to-test
/approve
+1 on @mimowo's suggestion
There is another flaky test: https://github.com/kubernetes-sigs/kueue/issues/1027
/retest
At this point I think it is better to implement pre-pull. I will open an Issue
|
2025-04-01T06:39:19.666237
| 2024-08-02T06:43:59
|
2444195646
|
{
"authors": [
"mbobrovskyi",
"mimowo"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7681",
"repo": "kubernetes-sigs/kueue",
"url": "https://github.com/kubernetes-sigs/kueue/pull/2759"
}
|
gharchive/pull-request
|
Add dependabot npm configuration for the site directory.
What type of PR is this?
/kind cleanup
What this PR does / why we need it:
Add dependabot npm configuration for the site directory.
Which issue(s) this PR fixes:
Fixes #
Special notes for your reviewer:
Does this PR introduce a user-facing change?
NONE
/cc @tenzen-y @mimowo
/lgtm
Thanks!
/approve
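For context, a dependabot npm configuration scoped to a site directory typically looks something like this in .github/dependabot.yml (a sketch; the PR's actual contents may differ):

```yaml
version: 2
updates:
  - package-ecosystem: "npm"
    # Watch the package.json under the site directory rather than the repo root.
    directory: "/site"
    schedule:
      interval: "weekly"
```

The directory field points dependabot at the manifest to monitor, so npm updates for the website stay separate from the project's Go dependencies.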
|
2025-04-01T06:39:19.669889
| 2024-11-05T07:02:12
|
2634602822
|
{
"authors": [
"mimowo",
"tenzen-y"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7682",
"repo": "kubernetes-sigs/kueue",
"url": "https://github.com/kubernetes-sigs/kueue/pull/3444"
}
|
gharchive/pull-request
|
Update latest version to v0.8.3
What type of PR is this?
/kind documentation
What this PR does / why we need it:
Which issue(s) this PR fixes:
Part-of https://github.com/kubernetes-sigs/kueue/issues/3441
Special notes for your reviewer:
Does this PR introduce a user-facing change?
NONE
/hold for official release
/assign @mimowo
/cherry-pick website
/hold cancel
/lgtm
/approve
|
2025-04-01T06:39:19.675780
| 2023-02-20T11:14:05
|
1591641981
|
{
"authors": [
"alculquicondor",
"kerthcet"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7683",
"repo": "kubernetes-sigs/kueue",
"url": "https://github.com/kubernetes-sigs/kueue/pull/588"
}
|
gharchive/pull-request
|
Bump k8s.io deps to v0.26.1
What type of PR is this?
/kind cleanup
What this PR does / why we need it:
Which issue(s) this PR fixes:
Fixes https://github.com/kubernetes-sigs/kueue/pull/586
Special notes for your reviewer:
can you bump all the k8s libraries together?
/retest
Failed test: https://prow.k8s.io/view/gs/kubernetes-jenkins/pr-logs/pull/kubernetes-sigs_kueue/588/pull-kueue-test-unit-main/1628060447326343168
The test error was introduced in https://github.com/kubernetes-sigs/controller-runtime/pull/2025: fakeClient now checks the indexes, but we didn't register any in the unit tests.
We can register these indexes when building the client.
Yes, the test error is fixed.
All addressed except https://github.com/kubernetes-sigs/kueue/pull/588#discussion_r1114570049.
/lgtm
@kerthcet we can also get rid of checks like this:
https://github.com/kubernetes-sigs/kueue/blob/c45d3dd98e3cae593e844092e2f334879a40ce7c/pkg/queue/manager.go#L172
|
2025-04-01T06:39:19.692527
| 2024-03-05T01:06:42
|
2168074270
|
{
"authors": [
"kerthcet",
"liurupeng"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7684",
"repo": "kubernetes-sigs/lws",
"url": "https://github.com/kubernetes-sigs/lws/pull/37"
}
|
gharchive/pull-request
|
Update readme expression
What type of PR is this?
/kind documentation
What this PR does / why we need it
Which issue(s) this PR fixes
Fixes #
Special notes for your reviewer
Does this PR introduce a user-facing change?
/ok-to-test
|
2025-04-01T06:39:19.699736
| 2023-12-19T08:08:24
|
2048134716
|
{
"authors": [
"saschagrunert"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7685",
"repo": "kubernetes-sigs/security-profiles-operator",
"url": "https://github.com/kubernetes-sigs/security-profiles-operator/issues/2031"
}
|
gharchive/issue
|
Release v0.8.2
Ref https://github.com/kubernetes-sigs/security-profiles-operator/pull/2030
# Release notes
Welcome to our glorious v0.8.2 release of the **security-profiles-operator**! The general usage and setup can be found [in our documentation][0]. :partying_face: :dancers:
To install the operator, run:
```
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/security-profiles-operator/v0.8.2/deploy/operator.yaml
```
You can also verify the container image signature by using [cosign][1]:
```
$ cosign verify \
--certificate-identity <EMAIL_ADDRESS> \
--certificate-oidc-issuer https://accounts.google.com \
registry.k8s.io/security-profiles-operator/security-profiles-operator:v0.8.2
```
Besides the operator image, we now also ship `spoc`, the official Security Profiles Operator Command Line Interface! Binaries for `amd64` and `arm64` are attached to this release.
To verify the signature of `spoc`, download all release artifacts and run for `amd64` (works in the same way for `arm64`):
```
$ cosign verify-blob \
--certificate-identity <EMAIL_ADDRESS> \
--certificate-oidc-issuer https://github.com/login/oauth \
--certificate spoc.amd64.cert \
--signature spoc.amd64.sig \
spoc.amd64
```
To verify the Bill of Materials (BOM) using the [`bom`](https://github.com/kubernetes-sigs/bom) tool, download the artifacts into a `build` directory and run:
```
> bom validate -e spoc.spdx -d build/
+-------------------+-------+-----------------------------+----------------+
| FILENAME | VALID | MESSAGE | INVALID HASHES |
+-------------------+-------+-----------------------------+----------------+
| spoc.amd64 | OK | File validated successfully | - |
| spoc.amd64.cert | OK | File validated successfully | - |
| spoc.amd64.sha512 | OK | File validated successfully | - |
| spoc.amd64.sig | OK | File validated successfully | - |
| spoc.arm64 | OK | File validated successfully | - |
| spoc.arm64.cert | OK | File validated successfully | - |
| spoc.arm64.sha512 | OK | File validated successfully | - |
| spoc.arm64.sig | OK | File validated successfully | - |
+-------------------+-------+-----------------------------+----------------+
```
The `.spdx` file is signed as well and we also provide `.sha512` sum files for the binaries.
Feel free to provide us any kind of feedback in the official [Kubernetes Slack #security-profiles-operator channel][2].
[0]: https://github.com/kubernetes-sigs/security-profiles-operator/blob/v0.8.2/installation-usage.md
[1]: https://github.com/sigstore/cosign
[2]: https://app.slack.com/client/T09NY5SBT/C013FQNB0A2
## Changes by Kind
### Failing Test
- Fixed upgrade issue introduced in v0.8.1. (#2023, @yuumasato)
## Dependencies
### Added
- github.com/DATA-DOG/go-sqlmock: [v1.5.0](https://github.com/DATA-DOG/go-sqlmock/tree/v1.5.0)
- github.com/Khan/genqlient: [v0.6.0](https://github.com/Khan/genqlient/tree/v0.6.0)
- github.com/alexflint/go-arg: [v1.4.2](https://github.com/alexflint/go-arg/tree/v1.4.2)
- github.com/alexflint/go-scalar: [v1.0.0](https://github.com/alexflint/go-scalar/tree/v1.0.0)
- github.com/aws/aws-sdk-go-v2/feature/s3/manager: [v1.11.76](https://github.com/aws/aws-sdk-go-v2/feature/s3/manager/tree/v1.11.76)
- github.com/buildkite/go-pipeline: [v0.2.0](https://github.com/buildkite/go-pipeline/tree/v0.2.0)
### Changed
- cloud.google.com/go/compute: v1.23.2 → v1.23.3
- cloud.google.com/go/iam: v1.1.4 → v1.1.5
- cloud.google.com/go/kms: v1.15.4 → v1.15.5
- cloud.google.com/go: v0.110.9 → v0.110.10
- github.com/Azure/azure-sdk-for-go/sdk/azcore: [v1.8.0 → v1.9.0](https://github.com/Azure/azure-sdk-for-go/sdk/azcore/compare/v1.8.0...v1.9.0)
- github.com/Azure/azure-sdk-for-go/sdk/internal: [v1.4.0 → v1.5.0](https://github.com/Azure/azure-sdk-for-go/sdk/internal/compare/v1.4.0...v1.5.0)
- github.com/DataDog/datadog-agent/pkg/obfuscate: [v0.48.1 → v0.48.0](https://github.com/DataDog/datadog-agent/pkg/obfuscate/compare/v0.48.1...v0.48.0)
- github.com/DataDog/datadog-agent/pkg/remoteconfig/state: [v0.48.1 → 2549ba9](https://github.com/DataDog/datadog-agent/pkg/remoteconfig/state/compare/v0.48.1...2549ba9)
- github.com/DataDog/sketches-go: [v1.4.3 → v1.4.2](https://github.com/DataDog/sketches-go/compare/v1.4.3...v1.4.2)
- github.com/andybalholm/brotli: [v1.0.6 → v1.0.1](https://github.com/andybalholm/brotli/compare/v1.0.6...v1.0.1)
- github.com/aws/aws-sdk-go-v2/config: [v1.19.1 → v1.25.11](https://github.com/aws/aws-sdk-go-v2/config/compare/v1.19.1...v1.25.11)
- github.com/aws/aws-sdk-go-v2/credentials: [v1.13.43 → v1.16.9](https://github.com/aws/aws-sdk-go-v2/credentials/compare/v1.13.43...v1.16.9)
- github.com/aws/aws-sdk-go-v2/feature/ec2/imds: [v1.13.13 → v1.14.9](https://github.com/aws/aws-sdk-go-v2/feature/ec2/imds/compare/v1.13.13...v1.14.9)
- github.com/aws/aws-sdk-go-v2/internal/configsources: [v1.1.43 → v1.2.8](https://github.com/aws/aws-sdk-go-v2/internal/configsources/compare/v1.1.43...v1.2.8)
- github.com/aws/aws-sdk-go-v2/internal/endpoints/v2: [v2.4.37 → v2.5.8](https://github.com/aws/aws-sdk-go-v2/internal/endpoints/v2/compare/v2.4.37...v2.5.8)
- github.com/aws/aws-sdk-go-v2/internal/ini: [v1.3.45 → v1.7.1](https://github.com/aws/aws-sdk-go-v2/internal/ini/compare/v1.3.45...v1.7.1)
- github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding: [v1.9.14 → v1.10.3](https://github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding/compare/v1.9.14...v1.10.3)
- github.com/aws/aws-sdk-go-v2/service/internal/presigned-url: [v1.9.37 → v1.10.8](https://github.com/aws/aws-sdk-go-v2/service/internal/presigned-url/compare/v1.9.37...v1.10.8)
- github.com/aws/aws-sdk-go-v2/service/kms: [v1.24.7 → v1.27.2](https://github.com/aws/aws-sdk-go-v2/service/kms/compare/v1.24.7...v1.27.2)
- github.com/aws/aws-sdk-go-v2/service/sso: [v1.15.2 → v1.18.2](https://github.com/aws/aws-sdk-go-v2/service/sso/compare/v1.15.2...v1.18.2)
- github.com/aws/aws-sdk-go-v2/service/ssooidc: [v1.17.3 → v1.21.2](https://github.com/aws/aws-sdk-go-v2/service/ssooidc/compare/v1.17.3...v1.21.2)
- github.com/aws/aws-sdk-go-v2/service/sts: [v1.23.2 → v1.26.2](https://github.com/aws/aws-sdk-go-v2/service/sts/compare/v1.23.2...v1.26.2)
- github.com/aws/aws-sdk-go-v2: [v1.21.2 → v1.23.5](https://github.com/aws/aws-sdk-go-v2/compare/v1.21.2...v1.23.5)
- github.com/aws/aws-sdk-go: [v1.47.0 → v1.48.11](https://github.com/aws/aws-sdk-go/compare/v1.47.0...v1.48.11)
- github.com/aws/smithy-go: [v1.15.0 → v1.18.1](https://github.com/aws/smithy-go/compare/v1.15.0...v1.18.1)
- github.com/buildkite/agent/v3: [v3.58.0 → v3.59.0](https://github.com/buildkite/agent/v3/compare/v3.58.0...v3.59.0)
- github.com/buildkite/bintest/v3: [v3.1.1 → v3.2.0](https://github.com/buildkite/bintest/v3/compare/v3.1.1...v3.2.0)
- github.com/cert-manager/cert-manager: [v1.13.2 → v1.13.3](https://github.com/cert-manager/cert-manager/compare/v1.13.2...v1.13.3)
- github.com/containers/common: [v0.57.0 → v0.57.1](https://github.com/containers/common/compare/v0.57.0...v0.57.1)
- github.com/ebitengine/purego: [v0.5.0 → v0.5.0-alpha.1](https://github.com/ebitengine/purego/compare/v0.5.0...v0.5.0-alpha.1)
- github.com/felixge/httpsnoop: [v1.0.3 → v1.0.4](https://github.com/felixge/httpsnoop/compare/v1.0.3...v1.0.4)
- github.com/gabriel-vasile/mimetype: [v1.4.3 → v1.4.2](https://github.com/gabriel-vasile/mimetype/compare/v1.4.3...v1.4.2)
- github.com/go-openapi/spec: [v0.20.9 → v0.20.11](https://github.com/go-openapi/spec/compare/v0.20.9...v0.20.11)
- github.com/go-openapi/strfmt: [v0.21.7 → v0.21.8](https://github.com/go-openapi/strfmt/compare/v0.21.7...v0.21.8)
- github.com/go-openapi/validate: [v0.22.1 → v0.22.3](https://github.com/go-openapi/validate/compare/v0.22.1...v0.22.3)
- github.com/go-rod/rod: [v0.114.4 → v0.114.5](https://github.com/go-rod/rod/compare/v0.114.4...v0.114.5)
- github.com/google/go-tpm-tools: [v0.4.1 → v0.4.2](https://github.com/google/go-tpm-tools/compare/v0.4.1...v0.4.2)
- github.com/gorilla/mux: [v1.8.0 → v1.8.1](https://github.com/gorilla/mux/compare/v1.8.0...v1.8.1)
- github.com/hashicorp/go-retryablehttp: [v0.7.4 → v0.7.5](https://github.com/hashicorp/go-retryablehttp/compare/v0.7.4...v0.7.5)
- github.com/jellydator/ttlcache/v3: [v3.1.0 → v3.1.1](https://github.com/jellydator/ttlcache/v3/compare/v3.1.0...v3.1.1)
- github.com/montanaflynn/stats: [v0.6.6 → 1bf9dbc](https://github.com/montanaflynn/stats/compare/v0.6.6...1bf9dbc)
- github.com/open-policy-agent/opa: [v0.58.0 → v0.59.0](https://github.com/open-policy-agent/opa/compare/v0.58.0...v0.59.0)
- github.com/pierrec/lz4/v4: [v4.1.18 → v4.1.2](https://github.com/pierrec/lz4/v4/compare/v4.1.18...v4.1.2)
- github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring: [v0.69.1 → v0.70.0](https://github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/compare/v0.69.1...v0.70.0)
- github.com/sigstore/cosign/v2: [v2.2.1 → v2.2.2](https://github.com/sigstore/cosign/v2/compare/v2.2.1...v2.2.2)
- github.com/sigstore/rekor: [v1.3.3 → v1.3.4](https://github.com/sigstore/rekor/compare/v1.3.3...v1.3.4)
- github.com/sigstore/sigstore/pkg/signature/kms/aws: [v1.7.5 → v1.7.6](https://github.com/sigstore/sigstore/pkg/signature/kms/aws/compare/v1.7.5...v1.7.6)
- github.com/sigstore/sigstore/pkg/signature/kms/azure: [v1.7.5 → v1.7.6](https://github.com/sigstore/sigstore/pkg/signature/kms/azure/compare/v1.7.5...v1.7.6)
- github.com/sigstore/sigstore/pkg/signature/kms/gcp: [v1.7.5 → v1.7.6](https://github.com/sigstore/sigstore/pkg/signature/kms/gcp/compare/v1.7.5...v1.7.6)
- github.com/sigstore/sigstore/pkg/signature/kms/hashivault: [v1.7.5 → v1.7.6](https://github.com/sigstore/sigstore/pkg/signature/kms/hashivault/compare/v1.7.5...v1.7.6)
- github.com/sigstore/sigstore: [v1.7.5 → v1.7.6](https://github.com/sigstore/sigstore/compare/v1.7.5...v1.7.6)
- github.com/stretchr/objx: [v0.5.1 → v0.5.0](https://github.com/stretchr/objx/compare/v0.5.1...v0.5.0)
- github.com/theupdateframework/go-tuf: [v0.6.1 → v0.7.0](https://github.com/theupdateframework/go-tuf/compare/v0.6.1...v0.7.0)
- github.com/tidwall/pretty: [v1.2.1 → v1.2.0](https://github.com/tidwall/pretty/compare/v1.2.1...v1.2.0)
- github.com/urfave/cli/v2: [v2.25.7 → v2.26.0](https://github.com/urfave/cli/v2/compare/v2.25.7...v2.26.0)
- github.com/xanzy/go-gitlab: [v0.93.2 → v0.94.0](https://github.com/xanzy/go-gitlab/compare/v0.93.2...v0.94.0)
- go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc: v0.45.0 → v0.46.0
- go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp: v0.45.0 → v0.46.1
- go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc: v1.19.0 → v1.21.0
- go.opentelemetry.io/otel/exporters/otlp/otlptrace: v1.19.0 → v1.21.0
- go.opentelemetry.io/otel/metric: v1.19.0 → v1.21.0
- go.opentelemetry.io/otel/sdk: v1.19.0 → v1.21.0
- go.opentelemetry.io/otel/trace: v1.19.0 → v1.21.0
- go.opentelemetry.io/otel: v1.19.0 → v1.21.0
- go.step.sm/crypto: v0.36.1 → v0.38.0
- golang.org/x/crypto: v0.16.0 → v0.17.0
- golang.org/x/exp: 7918f67 → 2478ac8
- golang.org/x/oauth2: v0.13.0 → v0.15.0
- golang.org/x/time: v0.3.0 → v0.5.0
- golang.org/x/tools: v0.14.0 → v0.15.0
- google.golang.org/api: v0.149.0 → v0.152.0
- google.golang.org/genproto/googleapis/api: 49dd2c1 → bbf56f3
- google.golang.org/genproto/googleapis/bytestream: d783a09 → 83a465c
- google.golang.org/genproto/googleapis/rpc: 49dd2c1 → 83a465c
- google.golang.org/genproto: 49dd2c1 → bbf56f3
- google.golang.org/grpc: v1.59.0 → v1.60.1
- k8s.io/api: v0.28.4 → v0.29.0
- k8s.io/apiextensions-apiserver: v0.28.3 → v0.28.4
- k8s.io/apimachinery: v0.28.4 → v0.29.0
- k8s.io/apiserver: v0.28.3 → v0.28.4
- k8s.io/cli-runtime: v0.28.4 → v0.29.0
- k8s.io/client-go: v0.28.4 → v0.29.0
- k8s.io/code-generator: v0.28.3 → v0.28.4
- k8s.io/component-base: v0.28.3 → v0.28.4
- k8s.io/kms: v0.28.3 → v0.28.4
- k8s.io/utils: 3b25d92 → b307cd5
- sigs.k8s.io/structured-merge-diff/v4: v4.3.0 → v4.4.1
### Removed
- github.com/99designs/gqlgen: [v0.17.36](https://github.com/99designs/gqlgen/tree/v0.17.36)
- github.com/DataDog/gostackparse: [v0.7.0](https://github.com/DataDog/gostackparse/tree/v0.7.0)
- github.com/IBM/sarama: [v1.40.0](https://github.com/IBM/sarama/tree/v1.40.0)
- github.com/Shopify/sarama: [v1.38.1](https://github.com/Shopify/sarama/tree/v1.38.1)
- github.com/aws/aws-sdk-go-v2/service/dynamodb: [v1.21.4](https://github.com/aws/aws-sdk-go-v2/service/dynamodb/tree/v1.21.4)
- github.com/aws/aws-sdk-go-v2/service/ec2: [v1.93.2](https://github.com/aws/aws-sdk-go-v2/service/ec2/tree/v1.93.2)
- github.com/aws/aws-sdk-go-v2/service/eventbridge: [v1.20.4](https://github.com/aws/aws-sdk-go-v2/service/eventbridge/tree/v1.20.4)
- github.com/aws/aws-sdk-go-v2/service/internal/endpoint-discovery: [v1.7.34](https://github.com/aws/aws-sdk-go-v2/service/internal/endpoint-discovery/tree/v1.7.34)
- github.com/aws/aws-sdk-go-v2/service/kinesis: [v1.18.4](https://github.com/aws/aws-sdk-go-v2/service/kinesis/tree/v1.18.4)
- github.com/aws/aws-sdk-go-v2/service/sfn: [v1.19.4](https://github.com/aws/aws-sdk-go-v2/service/sfn/tree/v1.19.4)
- github.com/aws/aws-sdk-go-v2/service/sns: [v1.21.4](https://github.com/aws/aws-sdk-go-v2/service/sns/tree/v1.21.4)
- github.com/aws/aws-sdk-go-v2/service/sqs: [v1.24.4](https://github.com/aws/aws-sdk-go-v2/service/sqs/tree/v1.24.4)
- github.com/bradfitz/gomemcache: [acc6962](https://github.com/bradfitz/gomemcache/tree/acc6962)
- github.com/bytedance/sonic: [v1.10.0](https://github.com/bytedance/sonic/tree/v1.10.0)
- github.com/chenzhuoyu/base64x: [296ad89](https://github.com/chenzhuoyu/base64x/tree/296ad89)
- github.com/chenzhuoyu/iasm: [v0.9.0](https://github.com/chenzhuoyu/iasm/tree/v0.9.0)
- github.com/confluentinc/confluent-kafka-go/v2: [v2.2.0](https://github.com/confluentinc/confluent-kafka-go/v2/tree/v2.2.0)
- github.com/confluentinc/confluent-kafka-go: [v1.9.2](https://github.com/confluentinc/confluent-kafka-go/tree/v1.9.2)
- github.com/decred/dcrd/crypto/blake256: [v1.0.1](https://github.com/decred/dcrd/crypto/blake256/tree/v1.0.1)
- github.com/denisenkom/go-mssqldb: [v0.11.0](https://github.com/denisenkom/go-mssqldb/tree/v0.11.0)
- github.com/dimfeld/httptreemux/v5: [v5.5.0](https://github.com/dimfeld/httptreemux/v5/tree/v5.5.0)
- github.com/dvyukov/go-fuzz: [6a8e9d1](https://github.com/dvyukov/go-fuzz/tree/6a8e9d1)
- github.com/eapache/go-resiliency: [v1.4.0](https://github.com/eapache/go-resiliency/tree/v1.4.0)
- github.com/eapache/go-xerial-snappy: [c322873](https://github.com/eapache/go-xerial-snappy/tree/c322873)
- github.com/eapache/queue: [v1.1.0](https://github.com/eapache/queue/tree/v1.1.0)
- github.com/elastic/elastic-transport-go/v8: [v8.1.0](https://github.com/elastic/elastic-transport-go/v8/tree/v8.1.0)
- github.com/elastic/go-elasticsearch/v6: [v6.8.5](https://github.com/elastic/go-elasticsearch/v6/tree/v6.8.5)
- github.com/elastic/go-elasticsearch/v7: [v7.17.1](https://github.com/elastic/go-elasticsearch/v7/tree/v7.17.1)
- github.com/elastic/go-elasticsearch/v8: [v8.4.0](https://github.com/elastic/go-elasticsearch/v8/tree/v8.4.0)
- github.com/emicklei/go-restful: [v2.16.0+incompatible](https://github.com/emicklei/go-restful/tree/v2.16.0)
- github.com/garyburd/redigo: [v1.6.4](https://github.com/garyburd/redigo/tree/v1.6.4)
- github.com/gin-contrib/sse: [v0.1.0](https://github.com/gin-contrib/sse/tree/v0.1.0)
- github.com/gin-gonic/gin: [v1.9.1](https://github.com/gin-gonic/gin/tree/v1.9.1)
- github.com/globalsign/mgo: [eeefdec](https://github.com/globalsign/mgo/tree/eeefdec)
- github.com/go-pg/pg/v10: [v10.11.1](https://github.com/go-pg/pg/v10/tree/v10.11.1)
- github.com/go-pg/zerochecker: [v0.2.0](https://github.com/go-pg/zerochecker/tree/v0.2.0)
- github.com/go-playground/assert/v2: [v2.2.0](https://github.com/go-playground/assert/v2/tree/v2.2.0)
- github.com/go-redis/redis/v7: [v7.4.1](https://github.com/go-redis/redis/v7/tree/v7.4.1)
- github.com/go-redis/redis/v8: [v8.11.5](https://github.com/go-redis/redis/v8/tree/v8.11.5)
- github.com/go-redis/redis: [v6.15.9+incompatible](https://github.com/go-redis/redis/tree/v6.15.9)
- github.com/go-stack/stack: [v1.8.0](https://github.com/go-stack/stack/tree/v1.8.0)
- github.com/gobuffalo/attrs: [a9411de](https://github.com/gobuffalo/attrs/tree/a9411de)
- github.com/gobuffalo/depgen: [v0.1.0](https://github.com/gobuffalo/depgen/tree/v0.1.0)
- github.com/gobuffalo/envy: [v1.7.0](https://github.com/gobuffalo/envy/tree/v1.7.0)
- github.com/gobuffalo/genny: [v0.1.1](https://github.com/gobuffalo/genny/tree/v0.1.1)
- github.com/gobuffalo/gitgen: [cc08618](https://github.com/gobuffalo/gitgen/tree/cc08618)
- github.com/gobuffalo/gogen: [v0.1.1](https://github.com/gobuffalo/gogen/tree/v0.1.1)
- github.com/gobuffalo/logger: [86e12af](https://github.com/gobuffalo/logger/tree/86e12af)
- github.com/gobuffalo/mapi: [v1.0.2](https://github.com/gobuffalo/mapi/tree/v1.0.2)
- github.com/gobuffalo/packd: [v0.1.0](https://github.com/gobuffalo/packd/tree/v0.1.0)
- github.com/gobuffalo/packr/v2: [v2.2.0](https://github.com/gobuffalo/packr/v2/tree/v2.2.0)
- github.com/gobuffalo/syncx: [33c2958](https://github.com/gobuffalo/syncx/tree/33c2958)
- github.com/gocql/gocql: [0eacd31](https://github.com/gocql/gocql/tree/0eacd31)
- github.com/gofiber/fiber/v2: [v2.50.0](https://github.com/gofiber/fiber/v2/tree/v2.50.0)
- github.com/gofrs/uuid: [v4.4.0+incompatible](https://github.com/gofrs/uuid/tree/v4.4.0)
- github.com/golang-sql/civil: [b832511](https://github.com/golang-sql/civil/tree/b832511)
- github.com/golang-sql/sqlexp: [v0.1.0](https://github.com/golang-sql/sqlexp/tree/v0.1.0)
- github.com/gomodule/redigo: [v1.8.9](https://github.com/gomodule/redigo/tree/v1.8.9)
- github.com/googleapis/gnostic: [v0.5.5](https://github.com/googleapis/gnostic/tree/v0.5.5)
- github.com/graph-gophers/graphql-go: [v1.5.0](https://github.com/graph-gophers/graphql-go/tree/v1.5.0)
- github.com/hailocab/go-hostpool: [e80d13c](https://github.com/hailocab/go-hostpool/tree/e80d13c)
- github.com/hashicorp/go-uuid: [v1.0.3](https://github.com/hashicorp/go-uuid/tree/v1.0.3)
- github.com/hashicorp/golang-lru/v2: [v2.0.3](https://github.com/hashicorp/golang-lru/v2/tree/v2.0.3)
- github.com/jackc/pgpassfile: [v1.0.0](https://github.com/jackc/pgpassfile/tree/v1.0.0)
- github.com/jackc/pgservicefile: [091c0ba](https://github.com/jackc/pgservicefile/tree/091c0ba)
- github.com/jackc/pgx/v5: [v5.3.1](https://github.com/jackc/pgx/v5/tree/v5.3.1)
- github.com/jcmturner/aescts/v2: [v2.0.0](https://github.com/jcmturner/aescts/v2/tree/v2.0.0)
- github.com/jcmturner/dnsutils/v2: [v2.0.0](https://github.com/jcmturner/dnsutils/v2/tree/v2.0.0)
- github.com/jcmturner/gofork: [v1.7.6](https://github.com/jcmturner/gofork/tree/v1.7.6)
- github.com/jcmturner/gokrb5/v8: [v8.4.4](https://github.com/jcmturner/gokrb5/v8/tree/v8.4.4)
- github.com/jcmturner/rpc/v2: [v2.0.3](https://github.com/jcmturner/rpc/v2/tree/v2.0.3)
- github.com/jinzhu/gorm: [v1.9.16](https://github.com/jinzhu/gorm/tree/v1.9.16)
- github.com/jinzhu/inflection: [v1.0.0](https://github.com/jinzhu/inflection/tree/v1.0.0)
- github.com/jinzhu/now: [v1.1.5](https://github.com/jinzhu/now/tree/v1.1.5)
- github.com/joho/godotenv: [v1.3.0](https://github.com/joho/godotenv/tree/v1.3.0)
- github.com/karrick/godirwalk: [v1.10.3](https://github.com/karrick/godirwalk/tree/v1.10.3)
- github.com/klauspost/cpuid/v2: [v2.2.5](https://github.com/klauspost/cpuid/v2/tree/v2.2.5)
- github.com/konsorten/go-windows-terminal-sequences: [v1.0.2](https://github.com/konsorten/go-windows-terminal-sequences/tree/v1.0.2)
- github.com/labstack/echo/v4: [v4.11.1](https://github.com/labstack/echo/v4/tree/v4.11.1)
- github.com/labstack/echo: [v3.3.10+incompatible](https://github.com/labstack/echo/tree/v3.3.10)
- github.com/labstack/gommon: [v0.4.0](https://github.com/labstack/gommon/tree/v0.4.0)
- github.com/markbates/oncer: [bf2de49](https://github.com/markbates/oncer/tree/bf2de49)
- github.com/markbates/safe: [v1.0.1](https://github.com/markbates/safe/tree/v1.0.1)
- github.com/microsoft/go-mssqldb: [v0.21.0](https://github.com/microsoft/go-mssqldb/tree/v0.21.0)
- github.com/richardartoul/molecule: [32cfee0](https://github.com/richardartoul/molecule/tree/32cfee0)
- github.com/segmentio/kafka-go: [v0.4.42](https://github.com/segmentio/kafka-go/tree/v0.4.42)
- github.com/spaolacci/murmur3: [v1.1.0](https://github.com/spaolacci/murmur3/tree/v1.1.0)
- github.com/tidwall/btree: [v1.6.0](https://github.com/tidwall/btree/tree/v1.6.0)
- github.com/tidwall/buntdb: [v1.3.0](https://github.com/tidwall/buntdb/tree/v1.3.0)
- github.com/tidwall/gjson: [v1.16.0](https://github.com/tidwall/gjson/tree/v1.16.0)
- github.com/tidwall/grect: [v0.1.4](https://github.com/tidwall/grect/tree/v0.1.4)
- github.com/tidwall/match: [v1.1.1](https://github.com/tidwall/match/tree/v1.1.1)
- github.com/tidwall/rtred: [v0.1.2](https://github.com/tidwall/rtred/tree/v0.1.2)
- github.com/tidwall/tinyqueue: [v0.1.1](https://github.com/tidwall/tinyqueue/tree/v0.1.1)
- github.com/tmthrgd/go-hex: [447a304](https://github.com/tmthrgd/go-hex/tree/447a304)
- github.com/twitchtv/twirp: [v8.1.3+incompatible](https://github.com/twitchtv/twirp/tree/v8.1.3)
- github.com/twitchyliquid64/golang-asm: [v0.15.1](https://github.com/twitchyliquid64/golang-asm/tree/v0.15.1)
- github.com/ugorji/go/codec: [v1.2.11](https://github.com/ugorji/go/codec/tree/v1.2.11)
- github.com/valyala/bytebufferpool: [v1.0.0](https://github.com/valyala/bytebufferpool/tree/v1.0.0)
- github.com/valyala/fasthttp: [v1.50.0](https://github.com/valyala/fasthttp/tree/v1.50.0)
- github.com/valyala/fasttemplate: [v1.2.2](https://github.com/valyala/fasttemplate/tree/v1.2.2)
- github.com/valyala/tcplisten: [v1.0.0](https://github.com/valyala/tcplisten/tree/v1.0.0)
- github.com/vmihailenco/bufpool: [v0.1.11](https://github.com/vmihailenco/bufpool/tree/v0.1.11)
- github.com/vmihailenco/msgpack/v5: [v5.3.5](https://github.com/vmihailenco/msgpack/v5/tree/v5.3.5)
- github.com/vmihailenco/tagparser/v2: [v2.0.0](https://github.com/vmihailenco/tagparser/v2/tree/v2.0.0)
- github.com/vmihailenco/tagparser: [v0.1.2](https://github.com/vmihailenco/tagparser/tree/v0.1.2)
- github.com/zenazn/goji: [v1.0.1](https://github.com/zenazn/goji/tree/v1.0.1)
- golang.org/x/arch: v0.4.0
- gopkg.in/jinzhu/gorm.v1: v1.9.2
- gopkg.in/olivere/elastic.v3: v3.0.75
- gopkg.in/olivere/elastic.v5: v5.0.84
- gorm.io/driver/mysql: v1.0.1
- gorm.io/driver/postgres: v1.4.6
- gorm.io/driver/sqlserver: v1.4.2
- gorm.io/gorm: v1.25.3
- honnef.co/go/gotraceui: v0.2.0
- mellium.im/sasl: v0.3.1
Done
|
2025-04-01T06:39:19.721491
| 2018-02-24T13:50:50
|
299942514
|
{
"authors": [
"a-robinson",
"hvaara",
"unguiculus"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7686",
"repo": "kubernetes/charts",
"url": "https://github.com/kubernetes/charts/pull/3858"
}
|
gharchive/pull-request
|
Update readme to reflect move from incubator to stable for cockroachdb
CockroachDB is now in stable. Updated the install section, and one more place referring to the incubator.
/cc @a-robinson Does this look ok?
/assign @mattfarina
The build failed because I didn't bump the chart version. I've only updated README.md should I still update the chart version? If I do, I'll wait for #3769 to be merged first.
Thanks @hvaara! This LGTM. Feel free to use the same version number as #3769, I can update that one if this gets in first.
/ok-to-test
/lgtm
/approve
@a-robinson Thanks a lot for taking a look! I've updated the chart version.
/lgtm
|
2025-04-01T06:39:19.754723
| 2022-03-24T20:06:02
|
1179997861
|
{
"authors": [
"jichenjc",
"vowywowy"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7687",
"repo": "kubernetes/cloud-provider-openstack",
"url": "https://github.com/kubernetes/cloud-provider-openstack/issues/1818"
}
|
gharchive/issue
|
[occm] Node ExternalIP not being added
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened:
Attempting to switch from the in-tree cloud provider to occm. The nodes don't have external IPs, as revealed with kubectl get nodes -o wide:
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
tf-rancher-cluster-demo-test-1 Ready controlplane,etcd,worker 58m v1.21.5 <IP_ADDRESS> <none> Ubuntu 18.04.6 LTS 5.4.0-91-generic docker://20.10.11
tf-rancher-cluster-demo-test-2 Ready controlplane,etcd,worker 27d v1.21.5 <IP_ADDRESS> <none> Ubuntu 18.04.6 LTS 5.4.0-91-generic docker://20.10.11
tf-rancher-cluster-demo-test-3 Ready controlplane,etcd,worker 27d v1.21.5 <IP_ADDRESS> <none> Ubuntu 18.04.6 LTS 5.4.0-91-generic docker://20.10.11
The name of the single floating network is specified under public-network-name in the cloud.conf, and when logging verbosity is increased to --v=4, occm is detecting the floating IPs (relevant instances.go entries):
...
I0324 19:55:31.297534 1 instances.go:72] openstack.Instances() called
I0324 19:55:31.297569 1 instances.go:131] NodeAddressesByProviderID () called
I0324 19:55:31.297898 1 instances.go:116] NodeAddresses(tf-rancher-cluster-demo-test-1) called
I0324 19:55:31.633329 1 instances.go:123] NodeAddresses(tf-rancher-cluster-demo-test-1) => [{InternalIP <IP_ADDRESS>} {ExternalIP <IP_ADDRESS>}]
...
What you expected to happen:
The actual node resources have external IPs
How to reproduce it:
I don't believe I'm doing anything out of the ordinary. This is a plain install from scratch with no other workloads.
Anything else we need to know?:
N/A
Environment:
openstack-cloud-controller-manager(or other related binary) version: 1.23 (using the chart)
OpenStack version: Queens
Others: k8s, os, and kernel versions are in the above snippets
do you have floating ip set and which version you are using ?
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
m2 Ready control-plane,master 16d v1.23.4 <IP_ADDRESS> <IP_ADDRESS> Ubuntu 18.04.6 LTS 4.15.0-167-generic docker://20.10.12
# nova list
...
| f40cf042-3788-414d-8f06-efbd58e09da5 | m2 | ACTIVE | - | Running | private=<IP_ADDRESS>, fd81:d720:65dd:0:f816:3eff:fe85:4f98, <IP_ADDRESS>
...
$ kubectl get pod openstack-cloud-controller-manager-5n8fg -n kube-system -o yaml | grep image
image: docker.io/k8scloudprovider/openstack-cloud-controller-manager:latest
I was on 1.23, but I just switched to latest and there is no change.
Yes, the nodes have floating IPs:
+--------------------------------------+--------------------------------+--------+------------+-------------+----------------------------------------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+--------------------------------+--------+------------+-------------+----------------------------------------------------------+
| e5817584-7fc5-4900-a1ff-d6c7f01227e9 | tf-rancher-cluster-demo-test-1 | ACTIVE | - | Running | tf-rancher-cluster-demo-test-net=<IP_ADDRESS>, <IP_ADDRESS> |
| 816972e2-3b6e-4b9f-b69e-ec834573db2f | tf-rancher-cluster-demo-test-2 | ACTIVE | - | Running | tf-rancher-cluster-demo-test-net=<IP_ADDRESS>, <IP_ADDRESS> |
| 1815fbe4-5a8e-4a78-a256-b022c3435e66 | tf-rancher-cluster-demo-test-3 | ACTIVE | - | Running | tf-rancher-cluster-demo-test-net=<IP_ADDRESS>, <IP_ADDRESS> |
+--------------------------------------+--------------------------------+--------+------------+-------------+----------------------------------------------------------+
And in horizon:
um... I think there might be some pre-condition not satisfied
can you enable v==5 first then check whether there are any suspected logs might worth a look?
https://github.com/kubernetes/cloud-provider/blob/master/controllers/node/node_controller.go#L323
is the code that set the node address, there are multiple checks and I guess some might break the logic in updating the ip...
@vowywowy so, can you help paste full log of OCCM with --v = 4 so it might contains more info? Thanks
Thanks for these messages! They ended up sending me down a rabbit hole to the solution.
As you can probably guess based on the naming, everything here is terraform/rancher/rke. When I switched the cluster resource to stop using the in-tree cloud provider, somehow kubelet ended up with the flag --cloud-provider= which is obviously not correct.
For anyone in the same position as me, instead of doing this to the rancher_cluster resource:
resource "rancher_cluster" "example" {
rke_config {
cloud_provider {}
# cloud_provider {
# openstack_cloud_provider {
# ...
# }
# }
...
}
...
}
do this:
resource "rancher_cluster" "example" {
rke_config {
cloud_provider { name = "external" } # <-- very important "name" attribute
# cloud_provider {
# openstack_cloud_provider {
# ...
# }
# }
...
}
...
}
The name field has poor documentation, and is only semi-alluded to in the rke provider (not the rancher2 provider). If you want to use an out-of-tree cloud provider, you have to name it external.
Thanks again @jichenjc! Closing, since this was just user error.
ok, glad it's solved :) and yes, seems it's user input instead of OCCM
|
2025-04-01T06:39:19.766218
| 2017-02-22T00:09:19
|
209312536
|
{
"authors": [
"dhawal55",
"mumoshu"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7688",
"repo": "kubernetes/contrib",
"url": "https://github.com/kubernetes/contrib/issues/2402"
}
|
gharchive/issue
|
cluster-autoscaler assumes kube-system as its namespace
I'm trying to run cluster-autoscaler in a namespace other than kube-system and get below error:
leaderelection.go:210] failed to renew lease kube-system/cluster-autoscaler
The leaderElection section assumes that cluster-autoscaler is running in the kube-system namespace. It should look up the namespace, or at least the README should be updated to mention that this will only run in the kube-system namespace.
Hi @dhawal55!
AFAIK, there's --namespace flag as of today, to override the namespace to which CA is scheduled, to whatever you'd like.
Would it solve your problem?
Oh great, that's what I was looking for. Closing the issue
|
2025-04-01T06:39:19.804437
| 2017-07-18T07:16:40
|
243621327
|
{
"authors": [
"andyxning",
"k8s-reviewable",
"loburm",
"piosz"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7689",
"repo": "kubernetes/heapster",
"url": "https://github.com/kubernetes/heapster/pull/1731"
}
|
gharchive/pull-request
|
Update deploy of grafana with version 4.4.1.
Upgrade grafana version to 4.4.1 in the deploy file.
This change is
/assign @piosz
SGTM
/lgtm
Thanks a lot for the fix!
|
2025-04-01T06:39:19.812693
| 2017-05-04T20:13:11
|
226395260
|
{
"authors": [
"HamzaK8s",
"atombender",
"technosophos",
"thomastaylor312",
"tomislater"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7690",
"repo": "kubernetes/helm",
"url": "https://github.com/kubernetes/helm/issues/2397"
}
|
gharchive/issue
|
Upgrade fails with "already exists"
This happens occasionally:
$ helm upgrade [...] picaxe-staging
Error: UPGRADE FAILED: release: already exists
Log:
[tiller-deploy-3161650477-5jj7b] 2017/05/04 20:07:15 storage.go:94: Listing all releases with filter
[tiller-deploy-3161650477-5jj7b] 2017/05/04 20:07:19 storage.go:133: Getting release history for 'picaxe-staging'
[tiller-deploy-3161650477-5jj7b] 2017/05/04 20:07:19 release_server.go:936: Executing pre-upgrade hooks for picaxe-staging
[tiller-deploy-3161650477-5jj7b] 2017/05/04 20:07:19 release_server.go:965: Hooks complete for pre-upgrade picaxe-staging
[tiller-deploy-3161650477-5jj7b] 2017/05/04 20:07:19 client.go:398: generating strategic merge patch for *runtime.Unstructured
[tiller-deploy-3161650477-5jj7b] 2017/05/04 20:07:19 client.go:398: generating strategic merge patch for *runtime.Unstructured
[tiller-deploy-3161650477-5jj7b] 2017/05/04 20:07:19 client.go:592: beginning wait for resources with timeout of 5m0s
[tiller-deploy-3161650477-5jj7b] 2017/05/04 20:07:31 release_server.go:936: Executing post-upgrade hooks for picaxe-staging
[tiller-deploy-3161650477-5jj7b] 2017/05/04 20:07:31 release_server.go:965: Hooks complete for post-upgrade picaxe-staging
[tiller-deploy-3161650477-5jj7b] 2017/05/04 20:07:31 storage.go:53: Updating "picaxe-staging" (v51) in storage
[tiller-deploy-3161650477-5jj7b] 2017/05/04 20:07:31 storage.go:45: Create release "picaxe-staging" (v52) in storage
[tiller-deploy-3161650477-5jj7b] 2017/05/04 20:07:31 release_server.go:936: Executing post-upgrade hooks for picaxe-staging
[tiller-deploy-3161650477-5jj7b] 2017/05/04 20:07:31 release_server.go:965: Hooks complete for post-upgrade picaxe-staging
[tiller-deploy-3161650477-5jj7b] 2017/05/04 20:07:31 storage.go:53: Updating "picaxe-staging" (v51) in storage
[tiller-deploy-3161650477-5jj7b] 2017/05/04 20:07:31 storage.go:45: Create release "picaxe-staging" (v52) in storage
[tiller-deploy-3161650477-5jj7b] 2017/05/04 20:07:31 release_server.go:936: Executing post-upgrade hooks for picaxe-staging
[tiller-deploy-3161650477-5jj7b] 2017/05/04 20:07:31 release_server.go:965: Hooks complete for post-upgrade picaxe-staging
[tiller-deploy-3161650477-5jj7b] 2017/05/04 20:07:31 storage.go:53: Updating "picaxe-staging" (v51) in storage
[tiller-deploy-3161650477-5jj7b] 2017/05/04 20:07:31 storage.go:45: Create release "picaxe-staging" (v52) in storage
We don't use any hooks.
Helm 2.3.0, Kubernetes 1.5.6.
Thank you for providing the logs on this. A few questions. How "occasionally" does this occur? Have you tried with helm 2.3.1 or 2.4.1 to see if it solves the problem?
It's only happened 2 times out of about 51 times so far, and it's not consistently reproducible, although I could of course write a stress-test script.
Only tried 2.3.0. Just installed 2.4.1, will let you know if it happens again.
That typically happens if the release name is already present and being used. Do you have the --install flag set? Does the error only happen after an upgrade fails (or when the last upgrade is still in progres)?
Yes, we started using helm upgrade --install. And the release already exists when this happens.
@thomastaylor312 @technosophos I'm facing the same issue here: I can't do an upgrade over an existing resource, even though I used hooks.
For instance, I already installed a release named myproject that had a secret
kind: Secret
metadata:
name: secret
annotations:
"helm.sh/hook": pre-install,pre-upgrade
labels:
app: {{ .Values.global.environment.app }}
environment: {{ $env }}
type: Opaque
{{- end }}
when I do helm upgrade --install --force project --tiller-namespace dev dev/, I get this result:
Error: UPGRADE FAILED: secrets "secret" already exists
Logs of Tiller:
[tiller] 2019/03/20 10:51:39 creating updated release for project
[storage] 2019/03/20 10:51:39 creating release "project.v2"
[tiller] 2019/03/20 10:51:39 performing update for project
[tiller] 2019/03/20 10:51:39 executing 2 pre-upgrade hooks for project
[kube] 2019/03/20 10:51:39 building resources from manifest
[kube] 2019/03/20 10:51:40 creating 1 resource(s)
[tiller] 2019/03/20 10:51:40 warning: Release project pre-upgrade project/templates/secret.yaml failed: secrets "secret" already exists
[storage] 2019/03/20 10:58:34 listing all releases with filter
Any help for this issue ?
Thank you
I have encountered the same issue in Helm 3:
client.go:87: [debug] creating 377 resource(s)
Error: secrets "helm3-test-rabbitmq" already exists
helm.go:76: [debug] secrets "helm3-test-rabbitmq" already exists
I have run helm upgrade --install, so the namespace is nearly empty...
@tomislater We need a little more information to debug that. Is the secret managed as a hook, or as a regular resource? If you can give us content of the secret's metadata, that might be helpful in figuring out why it is not upgrading. But if it is a hook, its behavior is subject to all the caveats described in the manual. Attempting to install over an existing secret that was created by a hook will still not work.
Again, though, I'm guessing about what your chart is trying to do... we really need more info to be able to provide any meaningful feedback. I would recommend opening a new issue with complete details, because I do not think it is the same issue as the one marked Closed here.
@technosophos You are right, my issue is connected to https://github.com/helm/helm/issues/7093
Sorry for interruption!
|
2025-04-01T06:39:19.815462
| 2016-03-29T20:54:14
|
144384547
|
{
"authors": [
"jackgr",
"sparkprime"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7691",
"repo": "kubernetes/helm",
"url": "https://github.com/kubernetes/helm/issues/485"
}
|
gharchive/issue
|
Expandybird choking on replicatedservice-3.tgz
jackgr@jackgr-macbookpro:~/gopath/src/github.com/kubernetes/helm> helm deploy --name test1 gs://kubernetes-charts-testing/replicatedservice-v3.tgz
[ERROR] {"status":"Bad Request","message":"cannot expand configuration:expandybird response:\nerror expanding chart: test1: ExpandyBird cannot do this kind of expansion: %!!(MISSING)(EXTRA string=Expandybird)\n\u0026{[0xc2080824e0]}\n"}
/cc @sparkprime
We should probably just remove that check, as it allows deploying several versions of expandybird with different "names". The expansion service should just assume that the thing it's given is of the kind it's designed to deal with.
Sounds like a good approach. Probably best if you do it, since you know that code better than I do.
Delete this code
if chartFile.Expander.Name != "ExpandyBird" {
message := fmt.Sprintf("ExpandyBird cannot do this kind of expansion: ", chartFile.Expander.Name)
return nil, fmt.Errorf("%s: %s", chartInv.Name, message)
}
in cmd/expandybird/expander/expander.go
Fixed by #486.
|
2025-04-01T06:39:19.816670
| 2018-05-14T09:13:54
|
322732798
|
{
"authors": [
"bacongobbler",
"bonifaido"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7692",
"repo": "kubernetes/helm",
"url": "https://github.com/kubernetes/helm/pull/4046"
}
|
gharchive/pull-request
|
Add the possibility to delete a full table from values
We had an issue where users wanted to delete some default configuration from the values.yaml, but it wasn't possible; this fix overcomes that. Of course, I'm not 100% sure about this implementation, but it is a good topic starter.
@bonifaido would you mind adding unit tests and docs here to cover this use case? Thanks!
|
2025-04-01T06:39:19.856688
| 2017-09-05T06:34:07
|
255172718
|
{
"authors": [
"aledbf",
"coveralls",
"dunjut",
"k8s-reviewable"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7693",
"repo": "kubernetes/ingress",
"url": "https://github.com/kubernetes/ingress/pull/1299"
}
|
gharchive/pull-request
|
fix two doc issues in nginx/README
As discussed with @aledbf, these two doc issues are confirmed, and I'm fixing them here.
Link to issue #1296
Plus, there's another doc issue described in #1296 (the 3rd one): we can't really tell the difference with or without default-ssl-certificate from those two curl output examples. They are almost the same and should be fixed when time permits.
This change is
Coverage remained the same at 43.484% when pulling f6946738f893b35de527ac64f3f32da4b52e64a5 on dunjut:master into 85e1a650090b79c1dc53ce41835f65bc33d81e76 on kubernetes:master.
/lgtm
@dunjut thanks!
|
2025-04-01T06:39:19.893282
| 2023-02-26T20:08:31
|
1600184438
|
{
"authors": [
"ardaguclu",
"mbehm"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7694",
"repo": "kubernetes/kubectl",
"url": "https://github.com/kubernetes/kubectl/issues/1378"
}
|
gharchive/issue
|
kubectl diff does not report missing ConfigMap keys
I have noticed that when using kubectl diff to compare a YAML file containing a ConfigMap to the current state of the ConfigMap in the cluster, missing key-value pairs in the YAML file that exist in the current state of the ConfigMap are not reported as differences. This can lead to unexpected changes to the ConfigMap when applying the YAML file with kubectl apply.
For example, if a key-value pair exists in the current ConfigMap in the cluster but is not present in the YAML file, and I apply the YAML file with kubectl apply, the key-value pair will be removed from the ConfigMap. However, kubectl diff does not report this as a difference, even though it would result in a change to the ConfigMap. It'll only be listed as changed if the key exists in the file but has a different value than what's in the cluster.
I believe that removing a key-value pair from a ConfigMap is just as much a change as modifying its value. Therefore, I suggest that kubectl diff should report missing ConfigMap keys in the same way that it reports modified values. This would make it easier to identify situations where a key-value pair would be removed from the ConfigMap if the YAML file were applied.
Thank you for your attention to this issue.
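For context, kubectl apply decides removals with a three-way merge between the last-applied manifest, the live object, and the new manifest: a data key is pruned only when the last-applied manifest contained it and the new one does not. The sketch below is illustrative only (`threeWayMerge` is a made-up helper, not kubectl's actual code):

```go
package main

import "fmt"

// threeWayMerge sketches kubectl apply's decision for ConfigMap data:
// keys present in lastApplied but absent from desired are pruned;
// keys added out-of-band (only in live) survive.
func threeWayMerge(lastApplied, live, desired map[string]string) map[string]string {
	out := map[string]string{}
	for k, v := range live {
		out[k] = v
	}
	for k := range lastApplied {
		if _, ok := desired[k]; !ok {
			delete(out, k) // previously managed key removed from manifest
		}
	}
	for k, v := range desired {
		out[k] = v
	}
	return out
}

func main() {
	lastApplied := map[string]string{"a": "1", "b": "2"}
	live := map[string]string{"a": "1", "b": "2", "c": "3"} // "c" was added manually
	desired := map[string]string{"a": "1"}
	fmt.Println(threeWayMerge(lastApplied, live, desired))
	// "b" is pruned (was managed, now removed); "c" is kept
}
```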
@mbehm in which version are you using?, could you please provide steps for reproducing this issue?. Thanks.
I tried to create a small test case to reproduce it and, my bad, it seems that if the key doesn't exist in the new ConfigMap then apply will leave the current value in the cluster as is. I must have accidentally used create or done something to completely overwrite the existing one.
Sorry for the inconvenience, closing the issue.
|
2025-04-01T06:39:20.333010
| 2018-06-18T19:37:06
|
333406790
|
{
"authors": [
"Random-Liu",
"dashpole"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7695",
"repo": "kubernetes/node-problem-detector",
"url": "https://github.com/kubernetes/node-problem-detector/pull/180"
}
|
gharchive/pull-request
|
Add log-counter plugin written in go
This PR adds a new binary to the node-problem-detector repository: log-counter.
The new binary uses the kmsg log watcher to get kmsg log events, and checks the number of events that occurred.
The binary accepts command-line flags for the pattern, count, and period of time to look back.
It sets the condition NodeRecreationRequired when it sees the unregister_netdevice error 3 times in 20 minutes, and runs every 10 minutes.
/assign @Random-Liu
Can you triple-check the changes to the Makefile? I am not really sure what the proper structure is. This set of changes is mostly a guess.
This is working now!
/lgtm
|
2025-04-01T06:39:20.386482
| 2020-12-15T12:12:40
|
767529471
|
{
"authors": [
"lyzs90",
"marseel",
"mm4tt",
"wojtek-t"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7696",
"repo": "kubernetes/perf-tests",
"url": "https://github.com/kubernetes/perf-tests/issues/1636"
}
|
gharchive/issue
|
Validate config before running test
What would you like to be added:
Validate if config is correct before running test.
Why is this needed:
Currently, a step can be either a measurement step or a phase step (modules are being added in #1634)
https://github.com/kubernetes/perf-tests/blob/18e90fc65c6b95c0bff96938458a77d74e19b27a/clusterloader2/api/types.go#L60
Unfortunately, this behaviour is not well implemented:
https://github.com/kubernetes/perf-tests/blob/14afe3828ba8d9eaa98aa8a9f5ddd0cdc5f8eebf/clusterloader2/api/extensions.go#L54
https://github.com/kubernetes/perf-tests/blob/14afe3828ba8d9eaa98aa8a9f5ddd0cdc5f8eebf/clusterloader2/pkg/test/simple_test_executor.go#L143
The current implementation does not validate whether the config is correct before running the test.
From a user-experience perspective, it would be much better to return an error right when the test starts.
Also, if I'm not wrong, if a user specifies both measurements and phases, only the measurements will be executed.
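An up-front validation step that rejects such a config before any test step runs could look roughly like this. This is an illustrative sketch: the `Step` type here is a simplified stand-in for the real clusterloader2 struct in api/types.go, and `validateStep` is a hypothetical helper, not the actual implementation.

```go
package main

import (
	"errors"
	"fmt"
)

// Step is a simplified stand-in for the clusterloader2 step type:
// exactly one of Phases or Measurements should be set.
type Step struct {
	Name         string
	Phases       []string
	Measurements []string
}

// validateStep returns an error unless exactly one of Phases / Measurements
// is non-empty, so a bad config can be rejected before the test executes.
func validateStep(s Step) error {
	hasPhases := len(s.Phases) > 0
	hasMeasurements := len(s.Measurements) > 0
	switch {
	case hasPhases && hasMeasurements:
		return errors.New("step must specify either phases or measurements, not both")
	case !hasPhases && !hasMeasurements:
		return errors.New("step must specify phases or measurements")
	}
	return nil
}

func main() {
	bad := Step{Name: "mixed", Phases: []string{"p1"}, Measurements: []string{"m1"}}
	// The error surfaces immediately, instead of measurements silently
	// winning over phases at execution time.
	fmt.Println(validateStep(bad))
}
```

Running such a check over every step (and, later, over compiled modules) right after config loading would surface mistakes at startup rather than mid-run.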
@marseel could I help out with this one?
@lyzs90 Yes sure, I would appreciate it :)
If you will have some PR I can help with reviewing.
@lyzs90 - this long-standing PR is relevant to this one:
https://github.com/kubernetes/perf-tests/pull/142
It shows how we should probably approach this problem - feel free to pick it up and continue working on that.
Thanks @wojtek-t I'll check it out
/assign
@wojtek-t I had a look at https://github.com/kubernetes/perf-tests/pull/142 and had some thoughts:
Currently Modules are being recursively compiled into steps at runtime. Correct me if I am wrong but the mapping passed to the Module seems static and there is no use case where runtime variables are passed in (unlike Objects). Hence to support validation of Modules, may I suggest to move loading of modules before test execution - after https://github.com/kubernetes/perf-tests/blob/master/clusterloader2/pkg/test/test.go#L66, so that validation of the entire Config (less Objects) can be validated ahead of time
I understand that the current style of validation used to keep consistent with k8s, and I'm happy to stick with it. Just wanted to find out what are your thoughts on using a declarative approach like json schema? https://github.com/xeipuuv/gojsonschema
Checks like this and this could be simplified with https://json-schema.org/understanding-json-schema/reference/combining.html#combining-schemas
Basically cuts down on boilerplate, but we may still have to implement some custom validation for things like IsDNS1123Subdomain, file exists at objectTemplatePath, referenced tuning set has been declared etc.
Currently Modules are being recursively compiled into steps at runtime. Correct me if I am wrong but the mapping passed to the Module seems static and there is no use case where runtime variables are passed in (unlike Objects). Hence to support validation of Modules, may I suggest to move loading of modules before test execution - after https://github.com/kubernetes/perf-tests/blob/master/clusterloader2/pkg/test/test.go#L66, so that validation of the entire Config (less Objects) can be validated ahead of time
@mm4tt - for thoughts
I understand that the current style of validation used to keep consistent with k8s, and I'm happy to stick with it. Just wanted to find out what are your thoughts on using a declarative approach like json schema? https://github.com/xeipuuv/gojsonschema
The consistency with k8s has a benefit that you don't have to learn new stuff when you're operating in a single k8s ecosystem. I agree that this can reduce the boilerplate, but (a) this boilerplate is well-separated (b) it's fairly trivial, so I don't think it's actually that huge advantage. So I would rather stick to that for now.
The consistency with k8s has a benefit that you don't have to learn new stuff when you're operating in a single k8s ecosystem. I agree that this can reduce the boilerplate, but (a) this boilerplate is well-separated (b) it's fairly trivial, so I don't think it's actually that huge advantage. So I would rather stick to that for now.
Noted on this 👍
Currently Modules are being recursively compiled into steps at runtime. Correct me if I am wrong but the mapping passed to the Module seems static and there is no use case where runtime variables are passed in (unlike Objects). Hence to support validation of Modules, may I suggest to move loading of modules before test execution - after https://github.com/kubernetes/perf-tests/blob/master/clusterloader2/pkg/test/test.go#L66, so that validation of the entire Config (less Objects) can be validated ahead of time
I think it makes sense, but let's double check with @mm4tt
It makes a lot of sense :) Just a nit, you should move it after this to make sure that the config is valid after a custom modifications have been applied. There is also testConfig.Validate here that we could probably reuse or remove - it doesn't make sense to have two "validate" components.
In general, big +1 for extracting things out of ExecuteTest method. In my opinion, this method is too big. It does too many things and doesn't adhere to the single principle rule - it should just execute the test that is prepared and validated.
@mm4tt gotcha, I'll first submit a PR to extract module compilation out of ExecuteTest
@mm4tt Actually since there might be multiple tests (via test suite / config paths), would it be worth moving the compilation and validation logic even higher up eg. before https://github.com/kubernetes/perf-tests/blob/master/clusterloader2/cmd/clusterloader.go#L317 and https://github.com/kubernetes/perf-tests/blob/master/clusterloader2/cmd/clusterloader.go#L323.
That way we can fail faster if any config is invalid. This would mean RunTest would only be left with basic checks, namespace deletion and test execution
Sounds good. Let me know if there is anything I can help you with. Thanks!
|
2025-04-01T06:39:20.472159
| 2020-04-29T13:08:14
|
609043572
|
{
"authors": [
"ShivamGoyal1899",
"gm7y8",
"kbhawkey",
"pranshu-s18",
"sftim"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7697",
"repo": "kubernetes/website",
"url": "https://github.com/kubernetes/website/issues/20649"
}
|
gharchive/issue
|
Move Eviction Policy into Scheduling & Eviction section
This is a Feature Request
What would you like to be added
Migrate the content from https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/#eviction-policy into somewhere inside https://kubernetes.io/docs/concepts/scheduling-eviction/
Why is this needed
https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/#eviction-policy is not task documentation; instead, it's conceptual background.
Comments
/language en
/kind cleanup
Initially, a straightforward cut-and-paste would be fine, so I'll mark this as:
/good-first-issue
/assign
@sftim do I need to move all the page content or only the eviction-policy section?
Just the Eviction Policy section. Overall, https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/#eviction-policy is a task page and can stay as a task page.
@sftim
Eviction Policy is something of a separate concept. I guess it should be an entirely new page in the Scheduling and Eviction section, so that it adds an entry in the accordion menu as well.
entirely new page in Scheduling and Eviction section
There's more than one way to do this, and that's one of the viable approaches.
If anyone's ready to tackle this: feel free!
/assign
@sftim I am interested in picking this up. Let me know if these changes are still required.
This work still needs doing. The changes in https://github.com/kubernetes/website/pull/20724/commits/032a7ea337978697dda435e0950fc53d83113695 looked like a good starting point.
/close
@sftim Is this issue ready to close?
Yep
/close
|
2025-04-01T06:39:20.475432
| 2020-06-09T15:30:10
|
635537855
|
{
"authors": [
"Cweiping",
"sftim",
"wawa0210"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7698",
"repo": "kubernetes/website",
"url": "https://github.com/kubernetes/website/issues/21605"
}
|
gharchive/issue
|
Clarify the explanation when environment variables refer to each other
This is a Feature Request
What would you like to be added
Why is this needed
If the environment variables started by the pod are cyclically dependent and reference each other, the resulting environment variable values will not be what we expect. At present there is no detailed description of this in the official documentation; it is recommended to add a page describing it in detail.
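As background for why mutual references behave surprisingly: the kubelet expands a $(VAR) reference only against variables defined earlier in the env list, and forward or undefined references are left as literal text. The expansion rule can be sketched roughly as follows (illustrative only; `expandEnv` and the regex here are hypothetical stand-ins, not the real kubelet code):

```go
package main

import (
	"fmt"
	"regexp"
)

var refPattern = regexp.MustCompile(`\$\(([A-Za-z_][A-Za-z0-9_]*)\)`)

// expandEnv mimics (roughly) how $(VAR) references are resolved:
// each value may only reference variables defined *earlier* in the list;
// references to later or undefined variables stay as literal text.
func expandEnv(ordered [][2]string) map[string]string {
	resolved := map[string]string{}
	for _, kv := range ordered {
		name, value := kv[0], kv[1]
		expanded := refPattern.ReplaceAllStringFunc(value, func(m string) string {
			ref := refPattern.FindStringSubmatch(m)[1]
			if v, ok := resolved[ref]; ok {
				return v
			}
			return m // not yet defined: keep the literal $(REF)
		})
		resolved[name] = expanded
	}
	return resolved
}

func main() {
	// A and B reference each other; only the backward reference resolves.
	env := expandEnv([][2]string{
		{"A", "a-$(B)"},
		{"B", "b-$(A)"},
	})
	fmt.Println(env["A"]) // a-$(B)   -> forward reference left unexpanded
	fmt.Println(env["B"]) // b-a-$(B) -> B sees A's partially expanded value
}
```

So with a mutual reference, the cycle is never resolved: the first variable keeps a literal $(B), and the second inherits that literal text, which is rarely what the user expected.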
Comments
https://github.com/kubernetes/website/pull/21553
https://github.com/kubernetes/kubernetes/issues/90466
/kind feature
/assign
|
2025-04-01T06:39:20.480192
| 2021-12-13T10:57:32
|
1078381136
|
{
"authors": [
"PurneswarPrasad",
"celestehorgan",
"jihoon-seo",
"killerkc12",
"shannonxtreme"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7699",
"repo": "kubernetes/website",
"url": "https://github.com/kubernetes/website/issues/30896"
}
|
gharchive/issue
|
Remove docker from Node-pressure Eviction
URL: https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/
File: /docs/concepts/scheduling-eviction/node-pressure-eviction.md
Umbrella issue: #30771
Partially fixes: #30921
What to do
On line 238, remove docker from the examples of system daemons.
How to do it
Refer to the Contributor Guide for instructions.
Do the following:
Fork k/website and switch to a new branch
Remove docker from line 238 of the /docs/concepts/scheduling-eviction/node-pressure-eviction.md file
Open a PR for the change and mention this issue to close it when the PR is merged
/triage accepted
/sig docs
/help-wanted
/language en
I would like to work on this
/assign @killerkc12
@killerkc12 do you intend to work on this? If not I'd like to reassign it to someone who will. No hard feelings either way!
@celestehorgan I think this issue has been resolved in this PR here #30913
/close
|
2025-04-01T06:39:20.491276
| 2023-01-09T12:17:21
|
1525484816
|
{
"authors": [
"javiermarasco",
"madhumita-kundo",
"mrgiles",
"sftim",
"tengqm"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7700",
"repo": "kubernetes/website",
"url": "https://github.com/kubernetes/website/issues/38843"
}
|
gharchive/issue
|
Pos OS spec description is not clear.
While reading the documentation about the OS spec for Pods it states:
Pod OS
FEATURE STATE: Kubernetes v1.25 [stable]
You should set the .spec.os.name field to either windows or linux to indicate the OS on which you want the pod to run. These two are the only operating systems supported for now by Kubernetes. In future, this list may be expanded.
In Kubernetes v1.26, the value you set for this field has no effect on scheduling of the pods. Setting the .spec.os.name helps to identify the pod OS authoritatively and is used for validation. The kubelet refuses to run a Pod where you have specified a Pod OS, if this isn't the same as the operating system for the node where that kubelet is running. The Pod security standards also use this field to avoid enforcing policies that aren't relevant to that operating system.
The second paragraph mentions that 1.26 doesn't take this field into consideration when scheduling the pod, but later in the same paragraph it states that the kubelet will take this field into consideration (assigning the pod to a matching OS node). Am I reading it incorrectly, or should the paragraph read "In Kubernetes v1.25, the value you set for this field has no effect on scheduling"?
In Kubernetes v1.26, the value you set for this field has no effect on scheduling of the pods
is correct, but perhaps misleading.
How about changing the order round:
Kubernetes v1.26 uses the value of .spec.os.name to validate Pods (the kubelet checks that the Pod OS matches the operating system that the kubelet is running on). If you create (or try to create a Pod) in a namespace that uses Pod security admission, the control plane also uses the value of .spec.os.name to work out what restrictions to verify and / or enforce.
In Kubernetes v1.26, the value of .spec.os.name does not affect how the kube-scheduler picks a Pod to run a node. In any cluster where there is more than one operating system for nodes, you should set the kubernetes.io/os label correctly on each node, and define Pods with a nodeSelector that matches the correct operating system.
If you set .spec.os.name anddo not specify a nodeSelector based on the operating system label, the scheduler assigns your pod to a node based on other criteria and may or may not succeed in picking a suitable node placement where the node OS is right for the containers in that Pod.
?
We could link to a task page that explains how to assign Linux Pods to Linux nodes, and Windows Pods to Windows nodes (if we had such a task page).
/sig node
Hi @sftim, from your message I understand that:
Kubelet will fail to run a pod with a non-matching .spec.os.name on the node it is running on (for example, if a Linux pod is assigned to a Windows node)
Kube-Scheduler will not take .spec.os.name into account when deciding which node to run a pod on (unless a kubernetes.io/os label is specified), so it could potentially schedule a Linux pod on a Windows node.
Am I getting it right? If so, I believe your message would be a great replacement for the one in the current documentation.
Also, there is a missing space in this paragraph (getting a bit picky, sorry 😂):
If you set .spec.os.name anddo not specify a nodeSelector based on the operating system label, the scheduler assigns your pod to a node based on other criteria and may or may not succeed in picking a suitable node placement where the node OS is right for the containers in that Pod.
This could warrant a PR since the fix is clear.
/help
/triage accepted
/assign
Also see https://github.com/kubernetes/website/issues/40825
/retitle Pod OS spec description is not clear
/assign
|
2025-04-01T06:39:20.494993
| 2024-11-26T13:26:35
|
2694676859
|
{
"authors": [
"dipesh-rawat",
"polocto"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7701",
"repo": "kubernetes/website",
"url": "https://github.com/kubernetes/website/issues/48849"
}
|
gharchive/issue
|
[fr] Inactive interactive tutorial in "Deploying an application" page
Tutorial description :
Pour interagir avec le terminal, veuillez utiliser la version bureau / tablette. (To interact with the terminal, please use the desktop / tablet version.)
Though I am using my desktop.
Page reported in issue (based on issue title): https://kubernetes.io/fr/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive/
/language fr
/retitle [fr] Inactive interactive tutorial in "Deploying an application" page
/kind bug
/area localization
@polocto The Katacoda environment for the Kubernetes tutorials has been shut down. Refer to the announcement here.
We have an umbrella issue raised https://github.com/kubernetes/website/issues/41496 to remove tutorials that rely on Katacoda for all localized documents.
|
2025-04-01T06:39:20.499063
| 2018-03-06T05:37:27
|
302570293
|
{
"authors": [
"RajeshJeyapaul"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7702",
"repo": "kubernetes/website",
"url": "https://github.com/kubernetes/website/issues/7652"
}
|
gharchive/issue
|
Issue with k8s.io/docs/concepts/storage/dynamic-provisioning/
The requirement is to have a new filesystem created on top of the provisioned volumes. When we create containers, we need to have, say, a folder named "license" where we store the license key, so my container will expect me to create a folder named "license". Details on this are not available anywhere. I guess it should be a feature request.
[ x ] Feature Request
[ ] Bug Report
Problem:
The option to create a new file or directory under provisioned volumes is missing. Containers need this flexibility, since a packaged container may require certain files and directories to be available in the host environment.
Proposed Solution:
The YAML file should support mkdir and create-file options
Page to Update:
https://kubernetes.io/...
hostPath and mountPath need to be the same... As of now, I am not able to create a directory. Hope I am not missing anything.
|
2025-04-01T06:39:20.501228
| 2018-08-29T02:14:28
|
354968681
|
{
"authors": [
"Bradamant3",
"zembutsu"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7703",
"repo": "kubernetes/website",
"url": "https://github.com/kubernetes/website/pull/10124"
}
|
gharchive/pull-request
|
Change RedSpread link
Fix #10103
RedSpread was acquired by CoreOS.
The site no longer exists; therefore, I changed the link to the GitHub repository.
I signed CLA.
Preview https://deploy-preview-10124--kubernetes-io-master-staging.netlify.com/docs/setup/minikube/#design
Nice catch! Thank you!
/lgtm
/approve
|
2025-04-01T06:39:20.507598
| 2019-02-18T03:57:48
|
411292010
|
{
"authors": [
"Bradamant3",
"Rajakavitha1",
"fauwazalijdpro",
"zparnold"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7704",
"repo": "kubernetes/website",
"url": "https://github.com/kubernetes/website/pull/12680"
}
|
gharchive/pull-request
|
cluster-myfirst.html
creating new cluster
/check-cla
Thanks for the PR @fauwazalijdpro !!!!
Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).
Please follow instructions at https://git.k8s.io/community/CLA.md#the-contributor-license-agreement to sign the CLA.
It may take a couple minutes for the CLA signature to be fully registered; after that, please reply here with a new comment and we'll verify. Thanks.
Hey there! @fauwazalijdpro, looks like you haven't signed the CLA yet. Could I please have you do that? https://github.com/kubernetes/community/blob/master/CLA.md
/close
@fauwazalijdpro Thanks for your PR, but we need you to take a few additional steps. if you want to re-open it, or start a new one, please first sign the CLA. It's also not clear why the proposed change is needed. It's true that the topic is pretty generic, but it does also address Minikube specifically and if the issue is the tangle around starting the interactive tutorial vs starting with Minikube, changing the title only does not clear up any confusion.
|
2025-04-01T06:39:20.511815
| 2019-02-22T17:52:30
|
413511845
|
{
"authors": [
"DanyC97",
"zparnold"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7705",
"repo": "kubernetes/website",
"url": "https://github.com/kubernetes/website/pull/12793"
}
|
gharchive/pull-request
|
Update the fine-parallel-processing-work-queue.md task file to remove $ and remove text not appropriate for end users
this PR addresses one task mentioned in https://github.com/kubernetes/website/issues/12740 which is
[ ] https://kubernetes.io/docs/tasks/job/fine-parallel-processing-work-queue/
remove the section about
If you are working from the website source tree, you can go to the following directory and start a temporary Pod running Redis and a service so we can find it.
End users don't work from the git repo
/assign @steveperry-53
Any help with reviewing here, please?
reduced the scope to keep it isolated as per @zacharysarah comment on another PR
/assign @zparnold
/unassign @steveperry-53
/lgtm
/approve
|
2025-04-01T06:39:20.514187
| 2019-04-12T18:52:48
|
432699629
|
{
"authors": [
"DanyC97",
"tengqm",
"yzhong52"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7706",
"repo": "kubernetes/website",
"url": "https://github.com/kubernetes/website/pull/13803"
}
|
gharchive/pull-request
|
Update minikube.md
It is minor, but I think it is more readable to move the comments out of the code blocks, since this is documentation, not code.
This is how it looks currently here https://kubernetes.io/docs/setup/minikube/:
@yzhong52 could you please sign the CLA?
@DanyC97 Just signed. Sorry for the delay.
/lgtm
/approve
|
2025-04-01T06:39:20.516102
| 2019-05-25T10:06:09
|
448448330
|
{
"authors": [
"raelga"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7707",
"repo": "kubernetes/website",
"url": "https://github.com/kubernetes/website/pull/14523"
}
|
gharchive/pull-request
|
Update node glossary page
When reviewing the Spanish localization for this page #14360, we spotted minor issues with the content.
This PR:
Removes the services tooltip, as it points to the Service object while in this context it refers to Kubernetes processes and agents running on the nodes.
Replaces Docker with the Container Runtime Interface, as Kubernetes supports other container runtimes, not only Docker.
Adds the tooltip for each kubernetes service: cri, kubelet, kube-proxy
I just saw your PR #14317 that aims at the same thing; since the conversation already started there, we can close this PR.
Thanks @sftim !!
|
2025-04-01T06:39:20.517696
| 2019-09-23T12:19:56
|
497063146
|
{
"authors": [
"DavidZisky",
"steveperry-53"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7708",
"repo": "kubernetes/website",
"url": "https://github.com/kubernetes/website/pull/16514"
}
|
gharchive/pull-request
|
Added logo change for blog site
Currently on the blog site (https://kubernetes.io/blog/), if you scroll down the page, the logo (top left corner) disappears because the background changes to white. On the main page this is fixed by switching the logo to the blue version on scroll, but it wasn't working on the blog site. Added the same behaviour.
/retest
/approve
/lgtm
|
2025-04-01T06:39:20.523194
| 2023-05-16T05:24:35
|
1711259801
|
{
"authors": [
"AmarNathChary",
"tengqm"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7709",
"repo": "kubernetes/website",
"url": "https://github.com/kubernetes/website/pull/41163"
}
|
gharchive/pull-request
|
removed hyperlink in restrict a container
Removed an invalid link in Restrict a container
Under
Securing a Pod, in the "Upgrade path to GA" note
There is no valid link that can replace it
This PR resolves issue #40865
/lgtm
/lgtm
Please mind our policy on trivial edits @AmarNathChary This is OK to accept; a review of the whole page would be even more helpful
@sftim I understand the importance of the policy and I will make sure to comply. Thanks for the approve.
|
2025-04-01T06:39:20.525288
| 2024-03-26T05:53:47
|
2207322201
|
{
"authors": [
"reylejano",
"sftim",
"shurup"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7710",
"repo": "kubernetes/website",
"url": "https://github.com/kubernetes/website/pull/45672"
}
|
gharchive/pull-request
|
Add kirkonru to sig-docs-ru
Since we have @kirkonru in the organisation, we can add him to the reviewers and owners list for the Russian localisation :tada:
Related PR: https://github.com/kubernetes/org/pull/4846
Waiting for confirmation from Russian localization approvers
Currently, our approvers' list has me & @Arhell only (that's why we need Kirill so much). Ihor, please assist :pray:
As an ru approver, @Arhell confirmed
/approve
|
2025-04-01T06:39:20.527396
| 2018-03-27T02:45:08
|
308802441
|
{
"authors": [
"Bradamant3",
"heckj",
"tengqm"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7711",
"repo": "kubernetes/website",
"url": "https://github.com/kubernetes/website/pull/7865"
}
|
gharchive/pull-request
|
Fix NodePodSelector annotation name
Resubmitted to master branch.
I can't tell whether I'm nitpicking or catching an issue here. Should the annotation key be the same wherever you use it? If so, should it be scheduler.alpha.kubernetes.io/node-selector? Or scheduler.alpha.kubernetes.io/nodeSelector?
/assign
@tengqm 👋 looks like this needs a rebase, and I didn't see any answer to @Bradamant3 question about the content.
/assign
Closing because the master branch already has this fix.
|
2025-04-01T06:39:20.537553
| 2022-09-08T08:59:24
|
1365834838
|
{
"authors": [
"Aman123lug",
"dazzag24",
"dwertent",
"fredbi",
"operon-io"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7712",
"repo": "kubescape/kubescape",
"url": "https://github.com/kubescape/kubescape/issues/789"
}
|
gharchive/issue
|
panic: runtime error: index out of range [3] with length 3
Using the version installed today via brew on a Mac M1 (it seems the brew version is not the latest)
kubescape scan .
[info] ARMO security scanner starting
[warning] current version 'v2.0.166' is not updated to the latest release: 'v2.0.170'
panic: runtime error: index out of range [3] with length 3
goroutine 1 [running]:
github.com/armosec/go-git-url/gitlabparser/v1.(*GitLabURL).Parse(0x140003fc1c0, {0x1400005bc20?, 0x1046a5ce0?})
github.com/armosec/go-git-url@v0.0.15/gitlabparser/v1/parser.go:89 +0x33c
github.com/armosec/go-git-url/gitlabparser/v1.NewGitLabParserWithURL({0x1400005bc20, 0x3e})
github.com/armosec/go-git-url@v0.0.15/gitlabparser/v1/parser.go:28 +0x98
github.com/armosec/go-git-url.NewGitURL({0x1400005bc20, 0x3e})
github.com/armosec/go-git-url@v0.0.15/init.go:28 +0x1b0
github.com/armosec/kubescape/v2/core/cautils.metadataGitLocal({0x16d837657?, 0x3?})
github.com/armosec/kubescape/v2/core/cautils/scaninfo.go:405 +0xe8
github.com/armosec/kubescape/v2/core/cautils.setContextMetadata(0x14000c3d3f0, {0x16d837657, 0x3})
github.com/armosec/kubescape/v2/core/cautils/scaninfo.go:358 +0x364
github.com/armosec/kubescape/v2/core/cautils.scanInfoToScanMetadata(0x140002fc4e0)
github.com/armosec/kubescape/v2/core/cautils/scaninfo.go:289 +0x328
github.com/armosec/kubescape/v2/core/cautils.NewOPASessionObj({0x0, 0x0, 0x0}, 0x0, 0x140002fc4e0)
github.com/armosec/kubescape/v2/core/cautils/datastructures.go:43 +0x5c
github.com/armosec/kubescape/v2/core/pkg/policyhandler.(*PolicyHandler).CollectResources(0x14000b9b808, {0x14000611500, 0x5, 0x8}, 0x140002fc4e0)
github.com/armosec/kubescape/v2/core/pkg/policyhandler/handlenotification.go:26 +0x40
github.com/armosec/kubescape/v2/core/core.(*Kubescape).Scan(0x103f05b60?, 0x140002fc4e0)
github.com/armosec/kubescape/v2/core/core/scan.go:142 +0x618
github.com/armosec/kubescape/v2/cmd/scan.getFrameworkCmd.func2(0x1045bd120?, {0x14000427680, 0x2, 0x140006b1560?})
github.com/armosec/kubescape/v2/cmd/scan/framework.go:102 +0x3ac
github.com/armosec/kubescape/v2/cmd/scan.GetScanCommand.func1(0x140000ff680?, {0x140006b1560, 0x1, 0x1?})
github.com/armosec/kubescape/v2/cmd/scan/scan.go:45 +0x180
github.com/spf13/cobra.(*Command).ValidateArgs(...)
github.com/spf13/cobra@v1.5.0/command.go:1018
github.com/spf13/cobra.(*Command).execute(0x140000ff680?, {0x140006b1540?, 0x1?, 0x1?})
github.com/spf13/cobra@v1.5.0/command.go:841 +0x3a4
github.com/spf13/cobra.(*Command).ExecuteC(0x140000ff400)
github.com/spf13/cobra@v1.5.0/command.go:990 +0x354
github.com/spf13/cobra.(*Command).Execute(...)
github.com/spf13/cobra@v1.5.0/command.go:918
github.com/armosec/kubescape/v2/cmd.Execute()
github.com/armosec/kubescape/v2/cmd/root.go:84 +0x34
main.main()
github.com/armosec/kubescape/v2/main.go:9 +0x1c
This also happens with the latest v2.0.170 release.
origin git@gitlab.com:foobar/machine-learning/cluster-manifests.git (fetch)
origin git@gitlab.com:foobar/machine-learning/cluster-manifests.git (push)
This bug occurs on GitLab with versions v2.0.165-172. Version v2.0.164 seems to work, so until a patch is ready I will resort to using that.
@dazzag24 Have you made any progress here?
I think Aman123lug volunteered to work on this.
@Aman123lug Any updates?
@dwertent hello. I've looked at it briefly and the offending line is: https://github.com/kubescape/go-git-url/blame/master/gitlabparser/v1/parser.go#L86
Wrapping it with a check on the "-" particular path part is enough to avoid a panic:
if splittedRepo[index] == "-" {
index += 1 // skip "-" symbol in URL
}
However, I am not really sure about the consistency of the eventual result. This parser obviously is not designed in the first place to support general URLs.
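To see how that guard fits into the parsing flow, here is a minimal sketch of an scp-like git remote parser. The helper name and the overall structure are hypothetical, not the real go-git-url code: it treats the last path segment as the repo and ignores the branch/subgroup handling entirely, only illustrating how skipping a "-" segment avoids running past the end of the split slice.

```go
package main

import (
	"fmt"
	"strings"
)

// parseScpLike splits an scp-like git remote ("git@host:owner/.../repo.git")
// into host, owner and repo. Hypothetical, simplified stand-in for the
// go-git-url parser.
func parseScpLike(remote string) (host, owner, repo string, err error) {
	at := strings.Index(remote, "@")
	colon := strings.Index(remote, ":")
	if at < 0 || colon < at {
		return "", "", "", fmt.Errorf("not an scp-like remote: %s", remote)
	}
	host = remote[at+1 : colon]
	parts := strings.Split(remote[colon+1:], "/")
	if len(parts) < 2 {
		return "", "", "", fmt.Errorf("missing owner/repo: %s", remote)
	}
	index := 0
	// The guard discussed above: skip a "-" path segment instead of
	// letting a later index computation run out of range and panic.
	if parts[index] == "-" {
		index++
	}
	owner = parts[index]
	repo = strings.TrimSuffix(parts[len(parts)-1], ".git")
	return host, owner, repo, nil
}

func main() {
	h, o, r, err := parseScpLike("git@gitlab.com:gitlab-tests/sample-project.git")
	fmt.Println(h, o, r, err)
}
```

As in the thread, a well-formed remote like `git@gitlab.com:gitlab-tests/sample-project.git` yields the expected owner and repo, while the check merely prevents a panic on unexpected input rather than making every URL shape parse meaningfully.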
@Aman123lug Any updates?
I think @dazzag24 working on it
Also I've detected that there are 2 versions of this go-git-url repo being used: one under the kubescape owner, one under the armosec owner: while the armosec version is still being pulled as a direct dependency, the other version is pulled indirectly...
I have a small patch ready if you guys are interested. This assumes a few things. I'd need a piece of advice to be sure I am heading in the right direction. I wouldn't want to interfere with some other people's work. Let me know if you want a PR (actually 2 since 2 repos are concerned).
kubescape/go-git-url: added the simple check above. In the case of such "scp-like" git URLs, it no longer panics if the input is not exactly as expected. However, the OP-provided remote `git@gitlab.com:foobar/machine-learning/cluster-manifests.git` won't really work as expected: in this example, "foobar" is considered the owner, "machine-learning" the repo and "cluster..." the branch.
With a correct origin like so "git@gitlab.com:gitlab-tests/sample-project.git", the owner & repo are inferred correctly.
I am not sure that this is acceptable behavior.
Parsed "pseudo-URL" results in :
=== RUN TestFred
fred_test.go:26: remote: git@gitlab.com:foobar/gitlab-tests/sample-project.git
(*v1.GitLabURL)(0xc00030a380)({
host: (string) (len=10) "gitlab.com",
owner: (string) (len=6) "foobar",
repo: (string) (len=12) "gitlab-tests",
project: (string) "",
branch: (string) (len=18) "sample-project.git",
path: (string) "",
token: (string) ""
})
--- PASS: TestFred (0.00s)
PASS
kubescape/kubescape: replaced deps on github.com/armosec/go-git-url by github.com/kubescape/go-git-url (both repos are the same right now). I assume this is the right repo to look into. Will need to upgrade dep on go-git-url.
@dwertent here is a proposal for a fix. @dazzag24, @Aman123lug feel free to discard this if you've already started something better. For sure, the fix is not perfect: just assuring that wrong/unexpected input doesn't panic the CLI.
|
2025-04-01T06:39:20.539829
| 2021-11-15T14:38:26
|
1053739999
|
{
"authors": [
"mortada-codes",
"olensmar"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7713",
"repo": "kubeshop/monokle",
"url": "https://github.com/kubeshop/monokle/issues/666"
}
|
gharchive/issue
|
Create separate Form editor for common metadata properties
Since every Resource Kind has a metadata section with the same properties, we can put these in a separate Form in the Form Editor - this form would be in a collapsible "Resource Metadata" section at the top of the panel - with the unique resource properties shown in a separate Form below
as discussed - lets move this into a separate tab instead of a section on top of the existing form editor
https://user-images.githubusercontent.com/20525304/151196553-2818d44a-eb41-42d3-aa3b-ae53c765a838.mp4
|
2025-04-01T06:39:20.581860
| 2022-06-26T17:11:08
|
1284979569
|
{
"authors": [
"charlie0129"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7714",
"repo": "kubevela/kubevela.io",
"url": "https://github.com/kubevela/kubevela.io/pull/792"
}
|
gharchive/pull-request
|
Docs: add instructions about chartmuseum
Signed-off-by: Charlie Chiang <EMAIL_ADDRESS>
Updated Build your Own Registry with ChartMuseum and addon push command
Use addon init command in Build Your Own Addon
Corresponding feature PR: https://github.com/kubevela/kubevela/pull/4261
Is this doc for 1.5 only?
Yes, 1.5 or later
|
2025-04-01T06:39:20.587225
| 2023-03-24T14:55:35
|
1639547861
|
{
"authors": [
"hstastna",
"metalice",
"pcbailey"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7715",
"repo": "kubevirt-ui/kubevirt-plugin",
"url": "https://github.com/kubevirt-ui/kubevirt-plugin/pull/1189"
}
|
gharchive/pull-request
|
Bug 2158550: Display MigrationPolicy page after renaming correctly
📝 Description
Fixes:
https://bugzilla.redhat.com/show_bug.cgi?id=2158550
Display the MigrationPolicy details page or the MigrationPolicies list correctly, with no error, after changing the name of a MigrationPolicy resource (depending on the actual location while renaming the policy).
More details:
The core of the problem was here, where the original name of the policy was searched for in the WHOLE URL and not at its end, where it belongs. So it was only logical that the first occurrence of "a" was found "earlier" in the URL string than expected and replaced by the new name, so "migraations" occurred in the URL, which led to the error, as such a page wasn't found.
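The failure mode described above can be sketched generically. The actual fix lives in the kubevirt-plugin's TypeScript code; this Go sketch with hypothetical helper names only contrasts a first-occurrence replace against replacing the trailing path segment where the resource name actually lives.

```go
package main

import (
	"fmt"
	"strings"
)

// buggyRename mimics the reported bug: it replaces the first occurrence of
// oldName anywhere in the URL, so a short name like "a" matches inside
// "migrations" and corrupts the path.
func buggyRename(url, oldName, newName string) string {
	return strings.Replace(url, oldName, newName, 1)
}

// fixedRename only swaps the trailing path segment, where the resource
// name actually belongs.
func fixedRename(url, oldName, newName string) string {
	if strings.HasSuffix(url, "/"+oldName) {
		return strings.TrimSuffix(url, oldName) + newName
	}
	return url
}

func main() {
	url := "/k8s/cluster/migrations.kubevirt.io~v1alpha1~MigrationPolicy/a"
	fmt.Println(buggyRename(url, "a", "aa")) // first "a" is inside "migrations"
	fmt.Println(fixedRename(url, "a", "aa"))
}
```

With the policy named "a", the buggy variant produces the "migraations" URL from the bug report, while anchoring the replacement to the end of the path yields the expected `.../MigrationPolicy/aa`.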
🎥 Screenshots
Before:
Error after renaming MigrationPolicy and incorrect url, especially if MigrationPolicy had a very simple name like "a":
URL: /k8s/cluster/migraations.kubevirt.io~v1alpha1~MigrationPolicy/a
After:
No error after renaming MigrationPolicy (to 'aa'), page rendered correctly:
URL: /k8s/cluster/migrations.kubevirt.io~v1alpha1~MigrationPolicy/aa
/lgtm
/retest
/retest
/retest
|
2025-04-01T06:39:20.618861
| 2024-02-05T11:16:43
|
2118274696
|
{
"authors": [
"RamLavi",
"enp0s3"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7716",
"repo": "kubevirt/kubevirt",
"url": "https://github.com/kubevirt/kubevirt/pull/11146"
}
|
gharchive/pull-request
|
node-labeller: Remove obsolete functionalities
What this PR does
Before this PR:
node labeller supported deprecated annotations (since v0.40 release).
After this PR:
node labeller will not support these annotations, as kubevirt does not support upgrade from this release anymore.
Fixes #
Why we need it and why it was done in this way
The following tradeoffs were made:
The following alternatives were considered:
Links to places where the discussion took place:
Special notes for your reviewer
Checklist
This checklist is not enforcing, but it's a reminder of items that could be relevant to every PR.
Approvers are expected to review this list.
[x] Design: A design document was considered and is present (link) or not required
[x] PR: The PR description is expressive enough and will help future contributors
[x] Code: Write code that humans can understand and Keep it simple
[x] Refactor: You have left the code cleaner than you found it (Boy Scout Rule)
[x] Upgrade: Impact of this change on upgrade flows was considered and addressed if required
[x] Testing: New code requires new unit tests. New features and bug fixes require at least one e2e test
[x] Documentation: A user-guide update was considered and is present (link) or not required. You want a user-guide update if it's a user facing feature / API change.
[x] Community: Announcement to kubevirt-dev was considered
Release note
node-labeller: Remove obsolete functionalities
It's strange that the unit test lane failed, as I didn't add/remove any in this PR..
/test pull-kubevirt-unit-test-arm64
Hey, I have a question ^^
Thanks for asking! Please see answer
/test pull-kubevirt-e2e-arm64
/hold cancel
|
2025-04-01T06:39:20.632315
| 2022-06-19T14:38:43
|
1276081262
|
{
"authors": [
"brybacki",
"codingben",
"iholder-redhat"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7717",
"repo": "kubevirt/kubevirt",
"url": "https://github.com/kubevirt/kubevirt/pull/7944"
}
|
gharchive/pull-request
|
test: move execute functions from utils
Move execute command on pod functions from utils to a new package,
and also move CopyFromPod from utils to imageupload to make utils shorter.
Signed-off-by: Ben Oukhanov <EMAIL_ADDRESS>
Release note:
NONE
/sig code-quality
/retest-required
It's ready for review, and previous discussion was resolved. @dankenigsberg Please let me know if we can remove hold label.
Hey @codingben! Good job!
I like the direction this PR is going towards, but still have a few concerns.
I don't really understand why we need 2 (or 3 in the current PR implementation) different functions to execute a command on a Pod. Let me explain why:
First of all, the only difference between ExecuteCommandOnPod() and ExecuteCommandOnPodV2() is that the first one returns stdout only, while V2 returns stdout and stderr. Even if we set aside the horrible naming for these functions, I don't really see what the motivation is for having the two of them. Performance-wise there is no difference at all, since V2 calls "V1" and simply does not return the stderr part (or more accurately returns it as an error).
Secondly, ExecCommandOnPod() was a private helper function, now it is public. I can't see why we need it to be public, or even need it at all.
What I would do is keep one function, ExecuteCommandOnPod(). This function should have V2's signature, or IOW, should return stdout, stderr and an error. This is how it would be used:
// If we need stderr
stdout, stderr, err := ExecuteCommandOnPod(virtCli, pod, containerName, command)
// If we don't need stderr
stdout, _, err := ExecuteCommandOnPod(virtCli, pod, containerName, command)
Please note again that under the hood nothing is really changed, since stderr is fetched either way (whether you use V1 or V2).
Implementation-wise, this new function needs to have the code of current ExecCommandOnPod() inside it. So eventually we end up with only one unified function.
Another small note: please squash the last commit with spaces only. It would make life difficult for rebases / backports, etc.
Another small note: please squash the last commit with spaces only. It would make life difficult for rebases / backports, etc.
Sorry, do you mean to not have that style commit and make it as part of previous commit?
Yes, exactly
By the way, I'd choose squash commits option when merging this PR. Is there any reason why kubevirt-bot isn't doing it? It's popular approach in other open source projects, for example in Angular.
Not sure who made the decision for KubeVirt, but tbh I don't like squash commits :) I think it's valuable to be able to view commits history as they were originally. For many PRs, if all of their commits were squashed, the changes would be very difficult to grasp when looking through history.
Secondly, ExecCommandOnPod() was a private helper function, now it is public. I can't see why we need it to be public, or even need it at all.
It's public because it's used here, and it's giving to ExecuteCommandOnPodWithOptions custom options. In your example, there's no options parameter. WDYT?
Secondly, ExecCommandOnPod() was a private helper function, now it is public. I can't see why we need it to be public, or even need it at all.
It's public because it's used here, and it's giving to ExecuteCommandOnPodWithOptions custom options. In your example, there's no options parameter. WDYT?
We can have ExecuteCommandOnPod and ExecuteCommandOnPodWithOptions. ExecuteCommandOnPod will be merged into ExecuteCommandOnPodV2.
Aha! Got you :)
Sounds good to me, I also like your naming.
Also, if that's the case, I guess ExecuteCommandOnPod would internally call ExecuteCommandOnPodWithOptions, providing default options.
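A minimal sketch of that agreed-upon shape (the option struct and the runner stub are illustrative stand-ins, not kubevirt's real test client plumbing):

```go
package main

import "fmt"

// ExecOptions carries the knobs callers may want to customize.
// Illustrative only; the real signature takes a client and a pod.
type ExecOptions struct {
	ContainerName string
	Command       []string
}

// runner is a stub standing in for the actual remote-exec call,
// so the delegation pattern can run without a cluster.
var runner = func(opts ExecOptions) (string, string, error) {
	return "stdout-from-" + opts.ContainerName, "", nil
}

// ExecuteCommandOnPodWithOptions is the single place that talks to the pod.
func ExecuteCommandOnPodWithOptions(opts ExecOptions) (stdout, stderr string, err error) {
	return runner(opts)
}

// ExecuteCommandOnPod covers the common case with default options and simply
// delegates, so only one implementation has to be maintained.
func ExecuteCommandOnPod(container string, command []string) (string, string, error) {
	return ExecuteCommandOnPodWithOptions(ExecOptions{ContainerName: container, Command: command})
}

func main() {
	out, _, err := ExecuteCommandOnPod("compute", []string{"ls"})
	fmt.Println(out, err)
}
```

Callers that don't care about stderr just discard it (`stdout, _, err := ...`), which matches the V2-style unified signature discussed earlier in the thread.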
/test pull-kubevirt-e2e-k8s-1.22-sig-compute
/test pull-kubevirt-e2e-k8s-1.22-sig-compute
@codingben why is it DRAFT? Are you still experimenting? If the design is final and you expect review, please change from draft to final with the use of "Ready for Review". It looks final to me. It looks good.
It's on Draft to not trigger all CI tests. I tried to run pull-kubevirt-e2e-k8s-1.22-sig-compute twice here and it's failing - I tried to run locally and it's failing on this error:
Delete "https://<IP_ADDRESS>:49178/api/v1/namespaces/kubevirt-test-default1/serviceaccounts/kubevirt-subresource-test-sa": dial tcp <IP_ADDRESS>:49178: connect: connection refused
I think I didn't setup something properly. I'd like to try to execute another test to make sure some local tests passed before we'll execute all CI tests.
I'm going to make cluster-up :) I'll remove Draft once I'll verify tests locally.
@codingben I think you need to rebase, then change the PR to ready for review, so all the tests run and we can finalize the review process
I'll open another PR to just move execute functions from utils without refactoring them.
|
2025-04-01T06:39:20.640428
| 2023-04-18T17:04:19
|
1673524672
|
{
"authors": [
"0xFelix",
"lyarwood"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7718",
"repo": "kubevirt/kubevirt",
"url": "https://github.com/kubevirt/kubevirt/pull/9628"
}
|
gharchive/pull-request
|
api: Move the core API storage version to v1 and deprecate v1alpha3
What this PR does / why we need it:
This change moves the storage version for all core API CRDs to v1. This does not impact existing objects that will continue to be stored using their original v1alpha3 version while being served as v1alpha3 or v1, as was the case previously.
Work will be required in the future to ensure all stored v1alpha3 objects are read and updated to v1 but for the time being this isn't required as part of this PR.
For more context please review the following k8s documentation:
https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning
Additionally the following KubeVirt dev ML thread covers this topic:
https://groups.google.com/g/kubevirt-dev/c/bSayedthHmY
Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #
Special notes for your reviewer:
Interested in this PR? Then you will love https://github.com/kubevirt/kubevirt/pull/9575 !
Release note:
* The `kubevirt.io/v1` `apiVersion` is now the default storage version for newly created objects
* The `kubevirt.io/v1alpha3` `apiVersion` is now deprecated and will be removed in a future release
/test pull-kubevirt-apidocs
/test pull-kubevirt-generate
/test pull-kubevirt-e2e-k8s-1.26-sig-compute
/retest-required
/retest-required
/retest
/retest
/retest
/retest-required
/hold
Need to create the runbook for the alert first
@sradco
Moved https://github.com/kubevirt/kubevirt/pull/9724/commits/91a2b7a002c957995036e5543f2b92a674324e7b to separate PR https://github.com/kubevirt/kubevirt/pull/9724
/unhold
|
2025-04-01T06:39:20.654392
| 2023-05-04T08:59:54
|
1695576810
|
{
"authors": [
"alicefr",
"dharmit",
"jean-edouard"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7719",
"repo": "kubevirt/kubevirt",
"url": "https://github.com/kubevirt/kubevirt/pull/9696"
}
|
gharchive/pull-request
|
Removes dependency on go-ps
What this PR does / why we need it:
What it does: $subject
Why we need it: 1) the library hasn't been updated in a while, 2) KubeVirt needs only the UNIX part of the code from the original repo
Which issue(s) this PR fixes:
Fixes #9671
Special notes for your reviewer:
Release note:
NONE
/cc @alicefr
/ok-to-test
@dharmit it will be nice to have a couple of unit tests checking the new functions
@dharmit it will be nice to have a couple of unit tests checking the new functions
Sure, I'll add them and ping back.
Please, if you copied the functions from the original library, please write a comment with the reference to it
Please, if you copied the functions from the original library, please write a comment with the reference to it
I'd like to take a step back and confirm if it's OK to copy from the original library? Or do you recommend I implement it afresh. I don't mind doing the latter (and would even prefer going that route, as that would help me learn).
Please, if you copied the functions from the original library, please write a comment with the reference to it
I'd like to take a step back and confirm if it's OK to copy from the original library? Or do you recommend I implement it afresh. I don't mind doing the latter (and would even prefer going that route, as that would help me learn).
The original library is under the MIT license. Under the license it is possible to take and modify the code. The only thing is we need to include the copyright note. Hence, you could put a comment before the code, adding the link to the library and a comment with the license note.
Please, if you copied the functions from the original library, please write a comment with the reference to it
I'd like to take a step back and confirm if it's OK to copy from the original library? Or do you recommend I implement it afresh. I don't mind doing the latter (and would even prefer going that route, as that would help me learn).
The original library is under the MIT license. Under the license it is possible to take and modify the code. The only thing is we need to include the copyright note. Hence, you could put a comment before the code, adding the link to the library and a comment with the license note.
What this PR does / why we need it:
$subject
The subject does not state why we need this. Please take a few seconds to write a sentence about what problem this PR is solving. Thank you
What this PR does / why we need it:
$subject
The subject does not state why we need this. Please take a few seconds to write a sentence about what problem this PR is solving. Thank you
Thanks @jean-edouard. I've updated it.
What this PR does / why we need it: What it does: $subject Why we need:
the library hasn't been updated in a while
Does it need updating? Can't we submit pull requests to them for what needs to be updated?
KubeVirt needs only UNIX part of the code from the original repo
It is normal not to use every aspect of the things we import, we could probably make similar statements about virtually every other library we use!
Importing a bunch of code into KubeVirt increases the maintenance burden/cost, which I guess is fine if there's a good reason for it, but I don't see it here...
What this PR does / why we need it: What it does: $subject Why we need:
the library hasn't been updated in a while
Does it need updating? Can't we submit pull requests to them for what needs to be updated?
We can, I think. At least, I can't think of why we can't. :)
KubeVirt needs only UNIX part of the code from the original repo
It is normal not to use every aspect of the things we import, we could probably make similar statements about virtually every other library we use!
👍🏾 I take it back.
Importing a bunch of code into KubeVirt increases the maintenance burden/cost, which I guess is fine if there's a good reason for it, but I don't see it here...
@alicefr can you PTAL?
@jean-edouard This was my suggestion, there are a couple of go libraries that implement this. We could fix it in their repo, but since they are just a couple of functions, I thought it would be better to get rid of the dependency. At least, it was my thought about it. Please, let me know what you think
No fixes are needed so far. It was a suggestion for refactoring. If we want to keep it, then we can open fixes there
I'm closing this since it looks like we have an agreement to open PRs on the original repo, if need be.
Thanks for your help @alicefr @0xFelix, and for the discussion @jean-edouard. :)
/close
|
2025-04-01T06:39:20.665159
| 2023-04-03T14:12:19
|
1652158209
|
{
"authors": [
"0xFelix",
"akrejcir"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7720",
"repo": "kubevirt/vm-console-proxy",
"url": "https://github.com/kubevirt/vm-console-proxy/issues/16"
}
|
gharchive/issue
|
Allow running vm-console-proxy on Kubernetes
Is this a BUG REPORT or FEATURE REQUEST?:
Uncomment only one, leave it on its own line:
/kind bug
/kind enhancement
What happened:
With the current configuration supplied vm-console-proxy is only able to run on OKD / OpenShift.
What you expected to happen:
I expected vm-console-proxy to be available on upstream Kubernetes too.
E.g. there is at least documentation on how to run it without OpenShift.
How to reproduce it:
Run make deploy.
/remove-lifecycle stale
|
2025-04-01T06:39:20.667744
| 2023-08-10T15:45:29
|
1845452956
|
{
"authors": [
"flavio"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7721",
"repo": "kubewarden/policy-server",
"url": "https://github.com/kubewarden/policy-server/pull/518"
}
|
gharchive/pull-request
|
feat: policy optimizer
This PR introduces a new cli tool called policy-optimizer. This binary will download the policies and optimize them. The goal is to implement what is described inside of this RFC.
Currently the code doesn't do any download/optimization. It just leverages the Lease primitive declared by Kubernetes to ensure only one process can have write access to the directory where the optimized policies are going to be written.
The actual code simulates the download & optimize work with a simple sleep.
The change is pretty invasive as you can see, I don't want to merge it in the main branch yet.
Closing in favor of https://github.com/kubewarden/policy-server/pull/519, which is open against a feature branch of policy-server
|
2025-04-01T06:39:20.673537
| 2022-05-02T07:35:16
|
1222580459
|
{
"authors": [
"flavio",
"viccuad"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7722",
"repo": "kubewarden/verify-image-signatures",
"url": "https://github.com/kubewarden/verify-image-signatures/pull/11"
}
|
gharchive/pull-request
|
Fix policy metadata
The policy metadata was broken:
The free-form key/value section requires the values to be strings, but the policy was trying to use an array
There was an indexing error inside of the description
I think we should just tag 0.1.1 once this is merged
Reopening, the metadata in hub doesn't seem to be updated:
https://github.com/kubewarden/policy-hub/blob/main/web/policies/kubewarden:verify-image-signatures.json
Fixed, it's all good now
|
2025-04-01T06:39:20.687816
| 2023-05-19T23:40:12
|
1717940294
|
{
"authors": [
"acehoss",
"avroliner780",
"donks",
"jimblair",
"kubilus1"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7723",
"repo": "kubilus1/autoortho",
"url": "https://github.com/kubilus1/autoortho/issues/145"
}
|
gharchive/issue
|
Not running on MacOs
I have used code from both the macOS branch and master branch, and can run the initial setup and download GUI, however when I try to run the program, I get the following error:
"System is not supported" - when running code from master branch
or
"ERROR:aoimage.AoImage:System is not supported" when running from macOS branch.
Should have mentioned that I'm using an Intel based Mac, not M1.
Also happens on M1 macs:
~/Projects/autoortho on main! ⌚ 9:37:00
$ build/autoortho.pyz
Mac OS Version is 13.3.1 and patch enabled so applying the patch
Applyting Mac OS 12.3+ Alpha Channel fix. Your default Alpha Channel is now 0.99
Config file found /Users/aaron/.autoortho reading...
Saving config ...
Wrote config file: /Users/aaron/.autoortho
INFO:downloader:Looking for regions ...
INFO:downloader:Last release refresh time: 2023-05-24 09:35:38.501631
INFO:downloader:Using cache ...
INFO:downloader:Using scenery dir /Users/aaron/X-Plane 12/Custom Scenery
INFO:downloader:Found region eur version 0.0.50
INFO:downloader: ... eur not setup yet
INFO:downloader:Found region na version 0.0.49
INFO:downloader: ... na not setup yet
INFO:downloader:Found region sa version 0.0.46-1
INFO:downloader: ... sa not setup yet
INFO:downloader:Found region afr version 0.0.45-1
INFO:downloader: ... afr not setup yet
INFO:downloader:Found region asi version 0.0.44-1
INFO:downloader: ... asi not setup yet
INFO:downloader:Found region aus_pac version 0.0.42-1
INFO:downloader: ... aus_pac not setup yet
Saving config ...
INFO:aoconfig:Wrote config file: /Users/aaron/.autoortho
Wrote config file: /Users/aaron/.autoortho
Config file found /Users/aaron/.autoortho reading...
INFO:aoconfig:Config file found /Users/aaron/.autoortho reading...
Setting download dir to /Users/aaron/.autoortho-data/downloads
INFO:downloader:Download na
INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_00.zip
100.00% 30.38 MBpsINFO:downloader: DONE!
INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_01.zip
100.00% 52.70 MBpsINFO:downloader: DONE!
INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_02.zip
100.00% 22.25 MBpsINFO:downloader: DONE!
INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_03.zip
100.00% 51.62 MBpsINFO:downloader: DONE!
INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_04.zip
100.00% 25.42 MBpsINFO:downloader: DONE!
INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_05.zip
100.00% 52.05 MBpsINFO:downloader: DONE!
INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_06.zip
100.00% 53.14 MBpsINFO:downloader: DONE!
INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_07.zip
100.00% 31.36 MBpsINFO:downloader: DONE!
INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_08.zip
100.00% 33.05 MBpsINFO:downloader: DONE!
INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_09.zip
100.00% 34.26 MBpsINFO:downloader: DONE!
INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_10.zip
100.00% 36.03 MBpsINFO:downloader: DONE!
INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_11.zip
100.00% 29.08 MBpsINFO:downloader: DONE!
INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_12.zip
100.00% 42.20 MBpsINFO:downloader: DONE!
INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_13.zip
100.00% 41.08 MBpsINFO:downloader: DONE!
INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_14.zip
100.00% 24.73 MBpsINFO:downloader: DONE!
INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_15.zip
100.00% 31.99 MBpsINFO:downloader: DONE!
INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_16.zip
100.00% 28.30 MBpsINFO:downloader: DONE!
INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_17.zip
100.00% 53.04 MBpsINFO:downloader: DONE!
INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_18.zip
100.00% 55.66 MBpsINFO:downloader: DONE!
INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_19.zip
100.00% 46.10 MBpsINFO:downloader: DONE!
INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_20.zip
100.00% 30.74 MBpsINFO:downloader: DONE!
INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_21.zip
100.00% 41.91 MBpsINFO:downloader: DONE!
INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_22.zip
100.00% 47.21 MBpsINFO:downloader: DONE!
INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_23.zip
100.00% 43.49 MBpsINFO:downloader: DONE!
INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_24.zip
100.00% 46.83 MBpsINFO:downloader: DONE!
INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_25.zip
100.00% 37.20 MBpsINFO:downloader: DONE!
INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_26.zip
100.00% 33.81 MBpsINFO:downloader: DONE!
INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_27.zip
100.00% 33.85 MBpsINFO:downloader: DONE!
INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_28.zip
100.00% 44.56 MBpsINFO:downloader: DONE!
INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/z_na_29.zip
100.00% 45.99 MBpsINFO:downloader: DONE!
INFO:downloader:ORTHOS DOWNLOADED
INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/y_na_overlays.zip.00
100.00% 30.18 MBpsINFO:downloader: DONE!
INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/y_na_overlays.zip.01
100.00% 27.81 MBpsINFO:downloader: DONE!
INFO:downloader:Will download https://github.com/kubilus1/autoortho-scenery/releases/download/0.0.49/y_na_overlays.zip.02
100.00% 47.31 MBpsINFO:downloader: DONE!
INFO:downloader:OVERLAYS DOWNLOADED
Setting extract dir to /Users/aaron/X-Plane 12/Custom Scenery
INFO:downloader: ... na not setup yet
INFO:downloader:Ready to extract archives for na v0.0.49!
INFO:downloader:Split zip detected for ('/Users/aaron/.autoortho-data/downloads/y_na_overlays.zip',)
INFO:downloader:ZIPNAME /Users/aaron/.autoortho-data/downloads/y_na_overlays.zip
INFO:downloader:Split zip detected for ('/Users/aaron/.autoortho-data/downloads/y_na_overlays.zip',)
INFO:downloader:ZIPNAME /Users/aaron/.autoortho-data/downloads/y_na_overlays.zip
INFO:downloader:Split zip detected for ('/Users/aaron/.autoortho-data/downloads/y_na_overlays.zip',)
INFO:downloader:ZIPNAME /Users/aaron/.autoortho-data/downloads/y_na_overlays.zip
INFO:downloader:Cleaning up parts for /Users/aaron/.autoortho-data/downloads/y_na_overlays.zip
INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_00.zip...
INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_01.zip...
INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_02.zip...
INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_03.zip...
INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_04.zip...
INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_05.zip...
INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_06.zip...
INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_07.zip...
INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_08.zip...
INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_09.zip...
INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_10.zip...
INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_11.zip...
INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_12.zip...
INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_13.zip...
INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_14.zip...
INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_15.zip...
INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_16.zip...
INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_17.zip...
INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_18.zip...
INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_19.zip...
INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_20.zip...
INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_21.zip...
INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_22.zip...
INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_23.zip...
INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_24.zip...
INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_25.zip...
INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_26.zip...
INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_27.zip...
INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_28.zip...
INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/z_na_29.zip...
INFO:downloader:Extracting /Users/aaron/.autoortho-data/downloads/y_na_overlays.zip...
/Users/aaron/X-Plane 12/Custom Scenery/z_na_21
INFO:downloader:Setting up directories ... /Users/aaron/X-Plane 12/Custom Scenery/z_na_19
INFO:downloader:Setting up directories ... /Users/aaron/X-Plane 12/Custom Scenery/z_na_26
INFO:downloader:Setting up directories ... /Users/aaron/X-Plane 12/Custom Scenery/z_na_10
INFO:downloader:Setting up directories ... /Users/aaron/X-Plane 12/Custom Scenery/z_na_28
INFO:downloader:Setting up directories ... /Users/aaron/X-Plane 12/Custom Scenery/z_na_17
INFO:downloader:Setting up directories ... /Users/aaron/X-Plane 12/Custom Scenery/z_na_29
INFO:downloader:Setting up directories ... /Users/aaron/X-Plane 12/Custom Scenery/z_na_16
INFO:downloader:Setting up directories ... /Users/aaron/X-Plane 12/Custom Scenery/z_na_11
INFO:downloader:Setting up directories ... /Users/aaron/X-Plane 12/Custom Scenery/z_na_18
INFO:downloader:Setting up directories ... /Users/aaron/X-Plane 12/Custom Scenery/z_na_27
INFO:downloader:Setting up directories ... /Users/aaron/X-Plane 12/Custom Scenery/z_na_20
INFO:downloader:Setting up directories ... /Users/aaron/X-Plane 12/Custom Scenery/z_na_02
INFO:downloader:Setting up directories ... /Users/aaron/X-Plane 12/Custom Scenery/z_na_05
INFO:downloader:Setting up directories ... /Users/aaron/X-Plane 12/Custom Scenery/z_na_04
INFO:downloader:Setting up directories ... /Users/aaron/X-Plane 12/Custom Scenery/z_na_03
INFO:downloader:Setting up directories ... /Users/aaron/X-Plane 12/Custom Scenery/z_na_25
INFO:downloader:Setting up directories ... /Users/aaron/X-Plane 12/Custom Scenery/z_na_22
INFO:downloader:Setting up directories ... /Users/aaron/X-Plane 12/Custom Scenery/z_na_14
INFO:downloader:Setting up directories ... /Users/aaron/X-Plane 12/Custom Scenery/z_na_13
INFO:downloader:Setting up directories ... /Users/aaron/X-Plane 12/Custom Scenery/z_na_12
INFO:downloader:Setting up directories ... /Users/aaron/X-Plane 12/Custom Scenery/z_na_15
INFO:downloader:Setting up directories ... /Users/aaron/X-Plane 12/Custom Scenery/z_na_23
INFO:downloader:Setting up directories ... /Users/aaron/X-Plane 12/Custom Scenery/z_na_24
INFO:downloader:Setting up directories ... /Users/aaron/X-Plane 12/Custom Scenery/z_na_06
INFO:downloader:Setting up directories ... /Users/aaron/X-Plane 12/Custom Scenery/z_na_01
INFO:downloader:Setting up directories ... /Users/aaron/X-Plane 12/Custom Scenery/z_na_08
INFO:downloader:Setting up directories ... /Users/aaron/X-Plane 12/Custom Scenery/z_na_09
INFO:downloader:Setting up directories ... /Users/aaron/X-Plane 12/Custom Scenery/z_na_00
INFO:downloader:Setting up directories ... /Users/aaron/X-Plane 12/Custom Scenery/z_na_07
INFO:downloader:Copy /Users/aaron/X-Plane 12/Custom Scenery/z_ao_na/textures to /Users/aaron/X-Plane 12/Custom Scenery/z_autoortho/_textures
INFO:downloader:Done with extract
INFO:downloader: ... na up to date and validated.
Updating config.
Saving config ...
INFO:aoconfig:Wrote config file: /Users/aaron/.autoortho
Wrote config file: /Users/aaron/.autoortho
Config file found /Users/aaron/.autoortho reading...
INFO:aoconfig:Config file found /Users/aaron/.autoortho reading...
SectionParser(# x-plane custom scenery path='', scenery_path='/Users/aaron/X-Plane 12/Custom Scenery', # directory where satellite images are cached='', cache_dir='/Users/aaron/.autoortho-data/cache', # set directory for temporary downloading of scenery and other support files='', download_dir='/Users/aaron/.autoortho-data/downloads', # changing log_file dir is currently not supported='', log_file='/Users/aaron/.autoortho-data/logs/autoortho.log')
Updating config.
Saving config ...
INFO:aoconfig:Wrote config file: /Users/aaron/.autoortho
Wrote config file: /Users/aaron/.autoortho
Config file found /Users/aaron/.autoortho reading...
INFO:aoconfig:Config file found /Users/aaron/.autoortho reading...
Exiting ...
root: /Users/aaron/X-Plane 12/Custom Scenery/z_autoortho/_textures
mountpoint: /Users/aaron/X-Plane 12/Custom Scenery/z_autoortho/textures
INFO:aostats:Creating stats object
INFO:autoortho:Running in multi-threaded mode.
INFO:autoortho:Running in FUSE mode.
ERROR:aoimage.AoImage:System is not supported
Hey there! I was wondering if anyone has had success using Autoortho on a Mac. I was able to get it up and running on my PC, but I'm feeling a bit lost as to where to begin on my Mac. Any guidance you can offer would be greatly appreciated. Thank you!
Currently Mac isn't supported yet, it's possible it will be in the future.
What's needed is mostly related to compiling several binary dependencies.
Thank you so much for your quick reply. I'm really excited for the Mac version to come out! If you need any help testing it, please don't hesitate to let me know. I'm more than happy to lend a hand.
Will do, definitely will need some help testing once that happens.
btw, I also attempted to build the "macos" branch of autoortho, and did get that building, but I ran into the same error that the macos_prmerge branch ended with, and that is the tile building doesn't produce the DDS files needed by x-plane.
kubilus1: I would gladly help in debugging the builds for MacOS, as I have the two branches building (macos and macos_prmerge) but the data flow in each is not producing the correct scenery files needed. Just let me know if you can afford a few minutes to walk through the data flow, so I can help in debugging the MacOS builds.
|
2025-04-01T06:39:20.690794
| 2019-06-25T15:33:10
|
460496362
|
{
"authors": [
"alenkacz",
"zmalik"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7724",
"repo": "kudobuilder/frameworks",
"url": "https://github.com/kudobuilder/frameworks/pull/29"
}
|
gharchive/pull-request
|
kafka: upgrade kafka to 2.2.1 and enable metrics
this PR updates Kafka framework to
use a Kafka 2.2.1 docker image.
enable metrics for kafka.
add an option to enable advertised listeners for clients, and not just limited to localhost.
log.dirs to be configurable using the env variable.
Let's merge this before we start migrating packages
thank you @alenkacz
|
2025-04-01T06:39:20.694190
| 2019-04-18T18:30:03
|
434901159
|
{
"authors": [
"djannot",
"gerred"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7725",
"repo": "kudobuilder/kudo",
"url": "https://github.com/kudobuilder/kudo/pull/199"
}
|
gharchive/pull-request
|
Adding the kep for the previous values in update plans
What type of PR is this?
/kind kep
What this PR does / why we need it:
Knowing what the previous values were will help someone who develops a KUDO framework to define the logic that needs to take place when an update occurs.
Which issue(s) this PR fixes:
Fixes #
Special notes for your reviewer:
Does this PR introduce a user-facing change?:
Closed as stale. We will re-open if we can sanely re-evaluate this, but let's re-evaluate this once we have server-side apply in (Kubernetes 1.15) because we may end up with more previous state than we thought once that's in.
|
2025-04-01T06:39:20.717472
| 2021-09-09T13:06:09
|
992201534
|
{
"authors": [
"Bradamant3",
"lahabana"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7726",
"repo": "kumahq/kuma-website",
"url": "https://github.com/kumahq/kuma-website/pull/530"
}
|
gharchive/pull-request
|
docs(dns) add extra documentation on the DNS
CoreDNS wasn't mentioned anywhere, explain how DPP DNS works
and how to update the template
Signed-off-by: Charly Molter<EMAIL_ADDRESS>
I'm unsure about your suggestion; I feel like you don't need to understand it to set it up.
You need it for advanced use cases, which is why it's lower down in the docs.
Otherwise I did all the updates you suggested and sorry about the future tense (bad habits die hard!)
Heh future tense is all over the place and I'm not consistent about correcting it either. Iterate, iterate ...
Point of clarification (I might revisit in a separate PR) -- I made the suggestion to move the explanation of how DNS works up the file to avoid duplication, not to explain something users don't really need :D. So I'll go back to the file as a whole and rethink organization generally (this is a common issue throughout the docs, not limited to this page).
I'm also not seeing commits for some of the suggestions I made, but they aren't a big deal -- we can revisit another time ...
I took the suggestions in the general commit where other suggestions were added.
|
2025-04-01T06:39:20.728822
| 2017-06-05T11:19:42
|
233560000
|
{
"authors": [
"MichaelTague",
"arturokunder",
"frndxyz"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7727",
"repo": "kunder-lab/cl.kunder.webview",
"url": "https://github.com/kunder-lab/cl.kunder.webview/issues/26"
}
|
gharchive/issue
|
support for ionic angular 2
any demo to work with ionic 2 , angular 2?
thanks..
Here is what I did to make it work in Ionic 2. By way of example, here it is added to a simple Ionic 2 app:
ionic start ionictest blank // Builds a simple Hello World style app.
cd ionictest
ionic cordova platform add android // Only works on Android or iOS
ionic cordova run android
At this point I got an error about Gradle not being in the path. This seems to be a recent bug. This line will fix it (at some point in the future this won't be necessary); then run again:
cordova platform update<EMAIL_ADDRESS>ionic cordova run android
So far, we haven't touched the plugin. Assuming all is good, let's add the plugin:
ionic cordova plugin add https://github.com/kunder-lab/cl.kunder.webview.git
Now edit pages/home/home.html to add a button (after the "If you get lost ..." paragraph):
<!-- Any URL will do -->
<button ion-button (click)="launch('http://MichaelTague.com')">
Simple Web Site
</button>
And then in pages/home/home.ts, add this right after the imports:
declare var webview: any;
and then inside the "export class HomePage" block, put this:
launch(url: string) {
webview.Show(url);
}
The "declare" tells TypeScript that is is OK to refer to "webview" without it otherwise being imported or instantiated. Note: there is no import of the plugin. The webview.Show(...) will open this URL in a second webview. It seems to have no trouble loading the HTML and related files such as an image. However, if you touch a link, the webview will open a browser.
As for how to get it to talk to an existing cordova plugin, I'm still working on that! Maybe someone else will comment.
Good luck, Michael Tague (tague@win.net).
Thanks @MichaelTague for your tutorial!
|
2025-04-01T06:39:20.733014
| 2023-05-23T11:18:28
|
1721846124
|
{
"authors": [
"kunjgit",
"lmalkam"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7728",
"repo": "kunjgit/GameZone",
"url": "https://github.com/kunjgit/GameZone/issues/305"
}
|
gharchive/issue
|
[New game]: Tic Tac Toe
🎮 Game Request
Players take turns placing a mark in one of the cells of the grid. The goal of the game is for players to position their marks so that they make a continuous line of three cells vertically, horizontally, or diagonally. An opponent can prevent a win by blocking the completion of the opponent's line.
It will use HTML, CSS, and JavaScript.
Point down the features
The game is played on a grid that's 3 squares by 3 squares.
You are X, your friend (or the computer in this case) is O. Players take turns putting their marks in empty squares.
The first player to get 3 of her marks in a row (up, down, across, or diagonally) is the winner.
When all 9 squares are full, the game is over. If no player has 3 marks in a row, the game ends in a tie.
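The win/tie rules above boil down to checking the eight possible lines on the grid. A minimal sketch of that check (illustrative only; the game itself is planned in JavaScript, and all names here are hypothetical):

```python
# The eight winning lines on a 3x3 board, indexed 0..8 row by row.
LINES = [
    (0, 1, 2), (3, 4, 5), (6, 7, 8),  # rows
    (0, 3, 6), (1, 4, 7), (2, 5, 8),  # columns
    (0, 4, 8), (2, 4, 6),             # diagonals
]

def winner(board):
    """Return 'X' or 'O' if a line is complete, 'tie' if the board is full, else None."""
    for a, b, c in LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None if None in board else "tie"
```

The same table-of-lines approach translates directly to the planned JavaScript version.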
Select program in which you are contributing
GSSoC23
Code of Conduct
[X] I follow CONTRIBUTING GUIDELINE of this project.
Hey @lmalkam!
We are already having a similar game request in #245 👀
Make sure you come up with a cool unique idea 😀
Waiting for your new game idea 💗.
Hey @lmalkam ! Thank you so much for your raising the issue💗
It’s all yours, you can come anytime again and make some contributions! 🚀
Alone, we can do little, but together we can do so much! 😇
|
2025-04-01T06:39:20.738379
| 2023-06-02T15:34:29
|
1738416543
|
{
"authors": [
"S-ishita",
"kunjgit"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7729",
"repo": "kunjgit/GameZone",
"url": "https://github.com/kunjgit/GameZone/issues/731"
}
|
gharchive/issue
|
[New game]: 3d planet game
🎮 Game Request
The player needs to avoid space junk and destroy asteroids.
Point down the features
Game points will be awarded on the basis of tokens collected and missions completed
Select program in which you are contributing
GSSoC23
Code of Conduct
[X] I follow CONTRIBUTING GUIDELINE of this project.
Hey @S-ishita !
Thank you for raising an issue 💗
You can self assign the issue by commenting /assign in comment 😀
Make sure you follow CODE OF CONDUCT and CONTRIBUTING GUIDELINES 🚀
Don’t Forget to ⭐ our GameZone🎮
Make sure you join our Discord🕹️
Hey @S-ishita!
We are already having a similar game request in #730 👀
Make sure you come up with a cool unique idea 😀
Waiting for your new game idea 💗.
Hey @S-ishita ! Thank you so much for your raising the issue💗
It’s all yours, you can come anytime again and make some contributions! 🚀
Alone, we can do little, but together we can do so much! 😇
|
2025-04-01T06:39:20.759546
| 2012-02-07T16:46:12
|
3126924
|
{
"authors": [
"anlek",
"brentkirby"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7730",
"repo": "kurbmedia/carpool",
"url": "https://github.com/kurbmedia/carpool/issues/1"
}
|
gharchive/issue
|
Documentation is quite outdated
It's really hard to follow the readme when 90% of carpool-specific calls are no longer working. I am trying to read through the source code, but it would be nice if the examples in the readme were updated.
As you can see, the last commit for this was back in 2010. You should maybe look at using something like OAuth (which is what we moved on to), as it's pretty easy to implement an OAuth provider with something like devise/omniauth.
I'm not sure whats changed since the readme was updated but if you want to update any of it I'll definitely merge in any pull requests.
|
2025-04-01T06:39:20.760479
| 2023-10-27T00:37:00
|
1964570453
|
{
"authors": [
"avalonche",
"barnabasbusa"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7731",
"repo": "kurtosis-tech/ethereum-package",
"url": "https://github.com/kurtosis-tech/ethereum-package/pull/343"
}
|
gharchive/pull-request
|
fix: builder args incorrectly configured
As new args were added to geth, the hardcoded indexes are not configured properly. This change makes the cmd args for the builder more robust.
Thanks for the quick fix, totally missed it!
|
2025-04-01T06:39:20.764434
| 2024-09-27T19:37:25
|
2553613870
|
{
"authors": [
"lostbean"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7732",
"repo": "kurtosis-tech/kardinal",
"url": "https://github.com/kurtosis-tech/kardinal/pull/258"
}
|
gharchive/pull-request
|
chore(main): release 0.3.2
:robot: I have created a release beep boop
0.3.2 (2024-09-28)
Bug Fixes
add the --service flag in the kardinal flow telepresence intercept command (#259) (5d22282)
fix broken website CSS by refactoring styled-components SSR logic (#257) (505e885)
This PR was generated with Release Please. See documentation.
:robot: Release is at https://github.com/kurtosis-tech/kardinal/releases/tag/0.3.2 :sunflower:
|
2025-04-01T06:39:20.765337
| 2024-10-10T11:07:06
|
2578501165
|
{
"authors": [
"kuruczgy"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7733",
"repo": "kuruczgy/x1e-nixos-config",
"url": "https://github.com/kuruczgy/x1e-nixos-config/pull/29"
}
|
gharchive/pull-request
|
Add release process documentation
Closes #18
Hm ideally we should also add a sentence to the readme to draw users' attention to the fact that binary releases are available?
|
2025-04-01T06:39:20.767758
| 2023-05-18T18:08:52
|
1716055130
|
{
"authors": [
"excalq"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7734",
"repo": "kuskoman/logstash-exporter",
"url": "https://github.com/kuskoman/logstash-exporter/issues/121"
}
|
gharchive/issue
|
Certain metrics should be gauges not counters
As I evaluate certain metrics in Grafana, it seems likely that some counter metrics originated as gauges, such as logstash_stats_pipeline_queue_events_count.
This needs some careful testing to better verify, which I'll do as I find time.
Logstash docs as a reference: https://www.elastic.co/guide/en/logstash/current/node-stats-api.html (Though it doesn't answer this question). Here's a code pointer as well: https://github.com/elastic/logstash/blob/main/logstash-core/lib/logstash/api/commands/stats.rb#L38-L41C49
Though the use of += makes me less sure. Again, this needs some smoke testing to validate.
I'll close this, as it does not seem to be the case. Indeed these metrics are monotonic counters.
|
2025-04-01T06:39:20.821862
| 2023-04-13T08:10:16
|
1665935626
|
{
"authors": [
"haochenx",
"kxc-wraikny"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7735",
"repo": "kxcteam/kxclib-ocaml",
"url": "https://github.com/kxcteam/kxclib-ocaml/pull/40"
}
|
gharchive/pull-request
|
add jv_kind_of_jv
Since jv and jv_kind are different, string_of_jv_kind cannot be applied to values of type jv. I added a function jv_kind_of_jv that converts a value of type jv to the corresponding value of type jv_kind.
close as Json.clasify_jv added in https://github.com/kxcteam/kxclib-ocaml/commit/df8c805ba1eeb3ee159cd71bf797f6d49a9a4b76
(🙏 @kxc-wraikny )
|
2025-04-01T06:39:20.834256
| 2018-03-16T19:36:25
|
306052549
|
{
"authors": [
"erip",
"kylebgorman"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7736",
"repo": "kylebgorman/Pynini",
"url": "https://github.com/kylebgorman/Pynini/pull/2"
}
|
gharchive/pull-request
|
Refactor docs
This PR unifies the docs and adds extensions for automatic rendering in GitHub.
I'm not sure if this matters anymore (maybe this should just work on GitHub and that's fine) but I think I went with .rst only because PyPi only supports it (and not Markdown). Does anybody care about the docs being rendered in HTML on PyPi?
Ah, yes. FWIW, the conversion is simple enough, but perhaps needless. I can make the README an rst instead since GitHub can render either.
Yeah, let's do that instead!
One last small request: for context, can you rewrite the sentence from the
old README.md to read something like:
"Note that the GitHub repository is a (primarily read-only) mirror to
enable bug reports and outside contributions."
It's here.
Eep, don't merge yet... rst isn't rendering quite correctly...
OK, at last... I've made the changes so rst can render. :-)
|
2025-04-01T06:39:20.846171
| 2020-07-22T23:11:17
|
664103642
|
{
"authors": [
"alecbz",
"kyleconroy"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7737",
"repo": "kyleconroy/sqlc",
"url": "https://github.com/kyleconroy/sqlc/pull/613"
}
|
gharchive/pull-request
|
Handle MySQL renames
https://github.com/kyleconroy/sqlc/issues/610
This change is
I've merged #608. Could you update this to return the error? Thanks
|
2025-04-01T06:39:20.851322
| 2012-12-10T10:04:17
|
9133833
|
{
"authors": [
"Gelbotron",
"kasimbadami",
"kylefox"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7738",
"repo": "kylefox/jquery-tablesort",
"url": "https://github.com/kylefox/jquery-tablesort/issues/6"
}
|
gharchive/issue
|
Exclude some column from being sorted.
Is there any way I can specify column that should not be sorted ?
Quite old but may be useful. I just set the class of the column to "column-unsortable" and did the following:
$('table').tablesort();
$(".column-unsortable").unbind();
This functionality is now included in jquery-tablesort (0.0.3).
To prevent a column from being sortable, just add the no-sort class to your th:
<th class="no-sort">Photo</th>
Try the "Photo" column in the demo:
https://dl.dropboxusercontent.com/u/780754/tablesort/index.html
|
2025-04-01T06:39:20.859074
| 2021-02-04T23:49:18
|
801722287
|
{
"authors": [
"batesenergy",
"ivanNieto13",
"ruippeixotog"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7739",
"repo": "kylemanna/docker-openvpn",
"url": "https://github.com/kylemanna/docker-openvpn/issues/638"
}
|
gharchive/issue
|
Can't route all traffic through VPN
I want to migrate my existing OpenVPN install to use this Docker container but I'm having some trouble finding the right settings so that it can route all Internet traffic through the VPN.
I'm trying to set an OpenVPN instance with the following setup:
Using TCP
VPN internal network should be <IP_ADDRESS>/24
VPN clients should be able to reach machines in the host network <IP_ADDRESS>/24
VPN clients should be able to tunnel Internet traffic through the VPN
In order to build a config for this, I configured the following service in my docker-compose.yml file:
openvpn:
image: kylemanna/openvpn
container_name: openvpn
restart: unless-stopped
cap_add:
- NET_ADMIN
volumes:
- $MY_HOST_CONF_DIR:/etc/openvpn
ports:
- 1194:1194
And I ran the following commands:
$ docker-compose run --rm openvpn ovpn_genconfig -N -d -u tcp://$MY_DNS -s <IP_ADDRESS>/24 -p "route <IP_ADDRESS> <IP_ADDRESS>"
$ docker-compose run --rm openvpn ovpn_initpki
$ docker-compose run --rm openvpn easyrsa build-client-full $MY_CLIENT nopass
$ docker-compose run --rm openvpn ovpn_getclient $MY_CLIENT > $MY_CLIENT.ovpn
I'm now trying to connect with Tunnelblick. If I connect with the "Route all IPv4 traffic through the VPN" option I can't reach either <IP_ADDRESS>/24 addresses nor Internet addresses. If I connect without this option I can access <IP_ADDRESS>/24 addresses.
I'm not an expert in networking or OpenVPN configuration, so I may be missing something obvious. What am I doing wrong?
I'm facing the same issue, do you find any solution?
Unfortunately not, I've made no progress so far. Documentation seems to assume that all traffic is routed through the VPN by default, but I can't get it to work even with the default config. Maybe one of the maintainers can help with this?
I tried adding this to /etc/docker/daemon.json file:
{ "iptables": true }
and it worked.
That didn't work for me unfortunately and it's surprising that it worked for you, given that iptables should be true by default. Can you share the exact config you used (minus public IPs and other sensitive info)?
@ruippeixotog If you are running on this on GCP or other cloud services make sure your VM has "IP Forwarding" enabled.
@batesenergy I was trying to run it in my own server, which used to run OpenVPN outside Docker without any problems. In any case, I ended up moving to WireGuard, which is simpler and has a much better supported Docker image.
|
2025-04-01T06:39:20.894905
| 2022-10-07T09:49:58
|
1400914282
|
{
"authors": [
"piotrmiskiewicz"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7740",
"repo": "kyma-project/control-plane",
"url": "https://github.com/kyma-project/control-plane/pull/2124"
}
|
gharchive/pull-request
|
Allow to retrigger suspension of expired instance
Description
Changes proposed in this pull request:
...
...
...
Related issue(s)
/hold
fixes: https://github.tools.sap/kyma/backlog/issues/3038
/unhold
|
2025-04-01T06:39:20.902479
| 2024-04-10T06:28:55
|
2239269439
|
{
"authors": [
"strekm",
"triffer"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7741",
"repo": "kyma-project/istio",
"url": "https://github.com/kyma-project/istio/issues/732"
}
|
gharchive/issue
|
Extend GH Workflows to support experimental functionality
Description
Extend the Istio module build and release process to support experimental functionality. GH Actions need to include building an experimental image that can later be used in testing. The release process also needs to be extended to produce experimental artefacts that can later be used to roll out the experimental offering.
ACs:
[ ] experimental image build (prow)
[ ] Update CI/CD documentation
[x] experimental release artefacts exist
[ ] experimental release notes created
[ ] release documentation updated
[x] execute experimental tests
Reasons
Support experimental offering
DoD:
- [ ] Provide unit and integration tests.
[ ] Provide documentation.
[ ] Verify if the solution works for both open-source Kyma and SAP BTP, Kyma runtime.
- [ ] If you changed the resource limits, explain why it was needed.
- [ ] Verify that your contributions don't decrease code coverage. If they do, explain why this is the case.
- [ ] Add release notes.
Attachments
Prow build doesn't support building a different tag for pull request builds.
The decision was made to have a new section for experimental features in the release notes template and to not have experimental builds for PRs for now.
PRs:
https://github.com/kyma-project/istio/pull/731
https://github.com/kyma-project/test-infra/pull/10408
https://github.tools.sap/kyma/documentation/pull/550
Issue for support of a custom tag for PR builds is created:
https://github.com/kyma-project/test-infra/issues/10415
After merge of https://github.com/kyma-project/istio/pull/731, we need to update the links to the newly added jobs in the CI/CD documentation.
|
2025-04-01T06:39:20.938072
| 2024-01-26T14:40:20
|
2102326976
|
{
"authors": [
"halamix2"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7742",
"repo": "kyma-project/serverless",
"url": "https://github.com/kyma-project/serverless/pull/666"
}
|
gharchive/pull-request
|
bump k8s version used with envtest
Description
Changes proposed in this pull request:
bump k8s version used with envtest to 1.27 series
Related issue(s)
/retest
|
2025-04-01T06:39:20.942153
| 2024-02-01T12:16:03
|
2112377020
|
{
"authors": [
"chrkl"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7743",
"repo": "kyma-project/telemetry-manager",
"url": "https://github.com/kyma-project/telemetry-manager/pull/762"
}
|
gharchive/pull-request
|
docs: Add OTLP Logs PoC documentation
Description
Changes proposed in this pull request (what was done and why):
Add documentation about OpenTelemetry logging PoC
Changes refer to particular issues, PRs or documents:
https://github.com/kyma-project/telemetry-manager/issues/720
Traceability
[ ] The PR is linked to a GitHub issue.
[ ] New features have a milestone set.
[ ] New features have defined acceptance criteria in a corresponding GitHub Issue, and all criteria are satisfied with this PR.
[ ] The corresponding GitHub issue has a respective area and kind label.
[ ] The follow-up issues (if any) are linked in the Related Issues section.
[ ] Adjusted the documentation if the change is user-facing.
[ ] The feature is unit-tested
[ ] The feature is e2e-tested
/unhold
|
2025-04-01T06:39:20.955192
| 2023-08-08T22:17:38
|
1842174831
|
{
"authors": [
"Alopalao",
"viniarck"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7745",
"repo": "kytos-ng/flow_manager",
"url": "https://github.com/kytos-ng/flow_manager/pull/167"
}
|
gharchive/pull-request
|
Sliced large number of flows
Closes #164
Summary
Added a function to slice a large number of flows into lists of 200
Local Tests
Reproduced test on issue
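The chunking described above can be sketched as follows (an illustrative helper, not the actual flow_manager code):

```python
def slice_flows(flows, chunk_size=200):
    """Split a large list of flows into sublists of at most chunk_size entries."""
    return [flows[i:i + chunk_size] for i in range(0, len(flows), chunk_size)]

flows = list(range(450))           # stand-in for 450 flow dicts
batches = slice_flows(flows)
print([len(b) for b in batches])   # [200, 200, 50]
```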
Let's include this in 2023.1; since it ended up being tagged late, let's take the opportunity.
|
2025-04-01T06:39:20.961442
| 2023-11-06T23:11:23
|
1980243423
|
{
"authors": [
"Alopalao",
"viniarck"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7746",
"repo": "kytos-ng/mef_eline",
"url": "https://github.com/kytos-ng/mef_eline/pull/396"
}
|
gharchive/pull-request
|
Added vlan_range support
Closes #18
Closes #309
Summary
This PR needs kytos PR
Added support for vlan_range epic. Currently only when both UNIs have the same list of tags
Local Tests
Created, updated and deleted circuit.
Restarted kytos to check that consistency works. Partial updates for lists of tags were also tested; for example, if both UNIs have [[20, 30]] and they are being updated to [[25, 30]].
Added and updated tests.
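The partial-update case above can be illustrated with a small sketch (not the actual mef_eline code) that computes which tags a range update releases:

```python
def expand(ranges):
    """Expand [[start, end], ...] tag ranges into a set of individual tags."""
    return {tag for lo, hi in ranges for tag in range(lo, hi + 1)}

old_ranges = [[20, 30]]
new_ranges = [[25, 30]]
freed = expand(old_ranges) - expand(new_ranges)   # tags released by the update
print(sorted(freed))  # [20, 21, 22, 23, 24]
```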
End-To-End Tests
============================= test session starts ==============================
platform linux -- Python 3.9.2, pytest-7.2.0, pluggy-1.3.0
rootdir: /tests
plugins: timeout-2.1.0, rerunfailures-10.2, anyio-3.6.2
collected 244 items
tests/test_e2e_01_kytos_startup.py .. [ 0%]
tests/test_e2e_05_topology.py .................. [ 8%]
tests/test_e2e_10_mef_eline.py ..........ss.....x.....x................ [ 24%]
tests/test_e2e_11_mef_eline.py ...... [ 27%]
tests/test_e2e_12_mef_eline.py .....Xx. [ 30%]
tests/test_e2e_13_mef_eline.py ....Xs.s.....Xs.s.XXxX.xxxx..X........... [ 47%]
. [ 47%]
tests/test_e2e_14_mef_eline.py x [ 47%]
tests/test_e2e_15_mef_eline.py .... [ 49%]
tests/test_e2e_20_flow_manager.py ..................... [ 58%]
tests/test_e2e_21_flow_manager.py ... [ 59%]
tests/test_e2e_22_flow_manager.py ............... [ 65%]
tests/test_e2e_23_flow_manager.py .............. [ 71%]
tests/test_e2e_30_of_lldp.py .... [ 72%]
tests/test_e2e_31_of_lldp.py ... [ 74%]
tests/test_e2e_32_of_lldp.py ... [ 75%]
tests/test_e2e_40_sdntrace.py ............. [ 80%]
tests/test_e2e_41_kytos_auth.py ........ [ 84%]
tests/test_e2e_42_sdntrace.py .. [ 84%]
tests/test_e2e_50_maintenance.py ........................ [ 94%]
tests/test_e2e_60_of_multi_table.py ..... [ 96%]
tests/test_e2e_70_kytos_stats.py ........ [100%]
Last commit is more stable. Tested with this script. This script works with the latest updates from the kytos, topology and of_lldp PRs.
It runs as python3 evcs.py 5. It sets tag ranges on the "01:1" and "02:1" interfaces and creates a set number of circuits.
The result should be an empty available_tags["vlan"] for the "01:1" and "02:1" interfaces.
The changelog also hasn't been updated.
Bypassing checking of tags for use_tags() and make_tags_available() since these tags are not managed by the user.
Commit a29489619e26a6ebea63ad69ce6c386992644dfc
Closing this since Aldo's PR #407 has landed. Nicely done, Aldo.
|
2025-04-01T06:39:21.022880
| 2019-11-03T10:32:36
|
516808519
|
{
"authors": [
"gcp",
"kz04px"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7747",
"repo": "kz04px/python-ataxx",
"url": "https://github.com/kz04px/python-ataxx/pull/13"
}
|
gharchive/pull-request
|
Avoid crashes on comments before the first move.
Example PGN:
[Event "?"]
[White "Lelax IV"]
[Black "Autaxx"]
[FEN "3x2o/7/7/7/2o4/7/6x o 3 2"]
[Adjudicated "Engine crashed"]
[Result "1-0"]
{ engine crashed } 1-0
Thanks for the fix.
|
2025-04-01T06:39:21.076379
| 2016-12-25T21:22:48
|
197515623
|
{
"authors": [
"lachs0r",
"pavelxdd"
],
"license": "isc",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7749",
"repo": "lachs0r/mingw-w64-cmake",
"url": "https://github.com/lachs0r/mingw-w64-cmake/issues/27"
}
|
gharchive/issue
|
cuda works
@lachs0r, from https://mpv.srsfckn.biz/changes/2016-12-25/ :
but if you’re interested, try to use it and report back so I can update this note.
Just tried the latest stable build, and I can confirm that CUDA hwdec works fine 👍
Thanks.
|
2025-04-01T06:39:21.151292
| 2023-11-29T18:43:49
|
2017203126
|
{
"authors": [
"azteca1998",
"codecov-commenter"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7750",
"repo": "lambdaclass/cairo_native",
"url": "https://github.com/lambdaclass/cairo_native/pull/356"
}
|
gharchive/pull-request
|
Revert initial gas check fix.
Revert initial gas check fix
Description
Description of the pull request changes and motivation.
Checklist
[ ] Linked to Github Issue
[ ] Unit tests added
[ ] Integration tests added.
[ ] This change requires new documentation.
[ ] Documentation has been added/updated.
Codecov Report
Attention: 181 lines in your changes are missing coverage. Please review.
Comparison is base (87d0c98) 74.27% compared to head (b4e273b) 73.81%.
Report is 1 commits behind head on main.
Files
Patch %
Lines
src/bin/cairo-native-compile.rs
0.00%
134 Missing :warning:
src/libfuncs/stark_net.rs
0.00%
32 Missing :warning:
src/ffi.rs
88.46%
15 Missing :warning:
Additional details and impacted files
@@ Coverage Diff @@
## main #356 +/- ##
==========================================
- Coverage 74.27% 73.81% -0.46%
==========================================
Files 96 97 +1
Lines 21889 22189 +300
==========================================
+ Hits 16257 16379 +122
- Misses 5632 5810 +178
:umbrella: View full report in Codecov by Sentry.
:loudspeaker: Have feedback on the report? Share it here.
|
2025-04-01T06:39:21.159921
| 2023-06-28T18:20:15
|
1779468407
|
{
"authors": [
"codecov-commenter",
"matias-gonz"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7751",
"repo": "lambdaclass/starknet_in_rust",
"url": "https://github.com/lambdaclass/starknet_in_rust/pull/704"
}
|
gharchive/pull-request
|
Fix coverage workflow
Fix coverage workflow
Description
Fixes coverage workflow by installing nightly
Codecov Report
Merging #704 (8ab690a) into main (708dc65) will increase coverage by 0.00%.
The diff coverage is 94.73%.
@@ Coverage Diff @@
## main #704 +/- ##
=======================================
Coverage 91.97% 91.97%
=======================================
Files 52 52
Lines 11335 11341 +6
=======================================
+ Hits 10425 10431 +6
Misses 910 910
Impacted Files
Coverage Δ
src/definitions/block_context.rs
100.00% <ø> (ø)
crates/starknet-contract-class/src/lib.rs
80.20% <90.00%> (+0.63%)
:arrow_up:
.../api/contract_classes/deprecated_contract_class.rs
96.92% <100.00%> (+0.07%)
:arrow_up:
src/storage/errors/storage_errors.rs
100.00% <100.00%> (ø)
|
2025-04-01T06:39:21.162040
| 2021-10-02T01:38:52
|
1013878352
|
{
"authors": [
"KevDoy",
"Zeldaboy14",
"lambertjamesd"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7752",
"repo": "lambertjamesd/gb64",
"url": "https://github.com/lambertjamesd/gb64/issues/23"
}
|
gharchive/issue
|
SGB Palette/Border Support?
Would it be possible to add a way to toggle the border and palettes for SGB titles that make use of them? Would be very nice to have this sort of feature!
I was planning to at least have some pre-packaged borders in the major version. I'll see how much effort it would take to implement SGB features, but I probably won't implement all SGB features since it would require emulating the Super Nintendo sound chip and possibly the CPU to do correctly, and I don't want to commit to that much effort. If implementing partial features is viable I will consider it.
Yeah. Borders, and possibly the palettes, are the only two I can see being supported. Trying to do things like what Donkey Kong and one other title did (using the SNES hardware itself) is virtually impossible.
Loving your project. I was also hoping to see SGB border support. I don't much care about emulating the rest of the features. But if there's a simple way to pull the border file out and apply it around the game (maybe as one of the zoom options), I'd love to see it.
|
2025-04-01T06:39:21.164063
| 2023-09-17T20:48:37
|
1899913788
|
{
"authors": [
"MesaBlack",
"lambertjamesd",
"rmn20"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7753",
"repo": "lambertjamesd/n64brew2023",
"url": "https://github.com/lambertjamesd/n64brew2023/issues/2"
}
|
gharchive/issue
|
Potential speed boost from using 4-bit textures
If I understand everything correctly, this demo uses 16-bit 32x32 tiles for rendering. 16-bit tiles could be replaced with 4-bit tiles with individual palettes, reducing the memory load approximately 4 times. 16 colors actually should be enough, considering that the size of each tile is only 32x32 pixels. With a proper palette generation algorithm, everything should look fine. As I remember, pngquant even had an option to quantize colors to RGB555, which should provide a good palette within N64 rendering limits.
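A quick back-of-the-envelope check of the claimed reduction, assuming one 16-entry RGB555 palette per 32x32 tile:

```python
TILE_W, TILE_H = 32, 32
pixels = TILE_W * TILE_H              # 1024 pixels per tile
bytes_16bpp = pixels * 2              # 2048 bytes at 16 bits per pixel
bytes_4bpp = pixels // 2              # 512 bytes at 4 bits per pixel
palette_bytes = 16 * 2                # 16 RGB555 entries, 2 bytes each
total_4bpp = bytes_4bpp + palette_bytes
print(bytes_16bpp / total_4bpp)       # ~3.76x smaller, close to the claimed 4x
```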
That would actually work really well for more toonish textures. It would also help reduce the size of the ROM which is the real limitation of the technique. I don't think I will be doing any more work on this any time soon but I'll keep this issue open
I'm pretty sure it should look good enough even on realistic textures given the tiles resolution, but this should be tested
I'm thinking a fps partially running on portal64's map work utilizing the megatextures, shaders, and shadows.
|
2025-04-01T06:39:21.296958
| 2024-12-18T18:53:14
|
2748529425
|
{
"authors": [
"ShaharZivanOnvego",
"jthack"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7758",
"repo": "langchain-ai/langchain-google",
"url": "https://github.com/langchain-ai/langchain-google/issues/651"
}
|
gharchive/issue
|
ChatAnthropicVertex prompt caching support
Hello,
As of recently, prompt caching is supposedly in preview in Vertex AI. Can you add support for it to ChatAnthropicVertex?
Thanks!
I just want to bump this, and clarify that the "regular" method I use for prompt caching in the standard ChatAnthropic causes an error:
from langchain_core.messages import SystemMessage
from langchain_core.prompts import ChatPromptTemplate

content = [{
    "text": "Do something or other...",
    "type": "text",
    "cache_control": {"type": "ephemeral"}
}]
prompt = ChatPromptTemplate.from_messages(
    [
        SystemMessage(content=content),
        ("placeholder", "{messages}"),
    ]
)
This method fails when giving this prompt to ChatAnthropicVertex with the error:
File ".../python3.11/site-packages/langchain_google_vertexai/_anthropic_utils.py", line 143, in _format_messages_anthropic raise ValueError( ValueError: System message must be a string, instead was: <class 'list'>
So simply modifying it to support a list rather than a string would be enough to allow caching. Could be a quick fix
|
2025-04-01T06:39:21.307606
| 2024-02-14T14:53:30
|
2134532581
|
{
"authors": [
"baskaryan",
"francisc0garcia",
"yoch"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7759",
"repo": "langchain-ai/langchain",
"url": "https://github.com/langchain-ai/langchain/issues/17531"
}
|
gharchive/issue
|
convert_to_openai_function drops some (nested?) properties
Checked other resources
[X] I added a very descriptive title to this issue.
[X] I searched the LangChain documentation with the integrated search.
[X] I used the GitHub search to find a similar question and didn't find it.
[X] I am sure that this is a bug in LangChain rather than my code.
Example Code
from typing import Set, Literal
from pydantic import BaseModel
from langchain_core.utils.function_calling import convert_to_openai_function

class UserInfos(BaseModel):
    "general information about a user"
    gender: Literal["male", "female", "other"]
    preferences: Set[Literal["games", "books"]]

print(convert_to_openai_function(UserInfos))
Error Message and Stack Trace (if applicable)
No response
Description
The resulting function is not well defined and is missing some properties.
Output
{
"name": "UserInfos",
"description": "general information about a user",
"parameters": {
"type": "object",
"properties": {
"gender": {
"enum": [
"male",
"female",
"other"
],
"type": "string"
}
},
"required": [
"gender",
"preferences"
]
}
}
Expected
NOTE: This is produced by the deprecated convert_pydantic_to_openai_function function.
{
"name": "UserInfos",
"description": "general information about a user",
"parameters": {
"properties": {
"gender": {
"enum": [
"male",
"female",
"other"
],
"type": "string"
},
"preferences": {
"items": {
"enum": [
"games",
"books"
],
"type": "string"
},
"type": "array",
"uniqueItems": true
}
},
"required":[
"gender",
"preferences"
],
"type":"object"
}
}
System Info
System Information
OS: Linux
OS Version: #40~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Nov 16 10:53:04 UTC 2
Python Version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Package Information
langchain_core: 0.1.23
langchain: 0.1.7
langchain_community: 0.0.20
langsmith: 0.0.87
langchain_openai: 0.0.6
Packages not installed (Not Necessarily a Problem)
The following packages were not found:
langgraph
langserve
Related: https://github.com/langchain-ai/langchain/issues/14899
@francisc0garcia I believe this happens if you have pydantic v2 installed and aren't using langchain_core.pydantic_v1. If you change your pydantic imports from langchain_core.pydantic_v1, should work:
@baskaryan I can confirm this point, but that's a bit problematic in my case because I use some features of v2.
Are there plans to upgrade pydantic to v2 soon? In the meantime I can use the convert_pydantic_to_openai_function function.
@baskaryan I can confirm this point, but that's a bit problematic in my case because I use some features of v2.
Are there plans to upgrade pydantic to v2 soon? In the meantime I can use the convert_pydantic_to_openai_function function.
Note you can have pydantic v2 installed and use langchain_core.pydantic_v1, but yea under the hood it'll use pydantic.v1 classes so it won't have all the pydantic v2 features.
A lot of our community still runs on pydantic v1 so we definitely want to continue supporting it for the moment. Hard to estimate when we'll fully switch to v2 since that depends on factors outside of our control (ie what % of our users need pydantic v1 support).
|
2025-04-01T06:39:21.327490
| 2024-02-27T06:57:56
|
2155814048
|
{
"authors": [
"SoulEvill",
"hinthornw",
"yfontana"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7760",
"repo": "langchain-ai/langchain",
"url": "https://github.com/langchain-ai/langchain/issues/18173"
}
|
gharchive/issue
|
LangChain Expression Language (LCEL) passthrough does not work with two consecutive chains
Checked other resources
[X] I added a very descriptive title to this issue.
[X] I searched the LangChain documentation with the integrated search.
[X] I used the GitHub search to find a similar question and didn't find it.
[X] I am sure that this is a bug in LangChain rather than my code.
Example Code
There are cases where a user needs to pass variables through more than one chain for later use, but the current implementation doesn't support this.
Reproducible example, following the RAG LangChain Expression Language example from https://python.langchain.com/docs/expression_language/cookbook/retrieval
from operator import itemgetter
from langchain_community.vectorstores import FAISS
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableLambda, RunnablePassthrough
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
vectorstore = FAISS.from_texts(
["harrison worked at kensho"], embedding=OpenAIEmbeddings()
)
retriever = vectorstore.as_retriever()
template = """Answer the question based only on the following context:
{context}
Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)
model = ChatOpenAI()
chain = (
{"context": retriever, "question": RunnablePassthrough()}
### Only line added to the example
| {'context': itemgetter('context'), "question": itemgetter('question')}
| prompt
| model
| StrOutputParser()
)
chain.invoke("where did harrison work?")
Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "/Users/zhengisamazing/1.python_dir/vigyan-llm-api/dev/langchain_playground.py", line 110, in
chain.invoke("where did harrison work?")
File "/Users/zhengisamazing/opt/anaconda3/envs/vigyan-llm-api/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2056, in invoke
input = step.invoke(
^^^^^^^^^^^^
File "/Users/zhengisamazing/opt/anaconda3/envs/vigyan-llm-api/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2693, in invoke
output = {key: future.result() for key, future in zip(steps, futures)}
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/zhengisamazing/opt/anaconda3/envs/vigyan-llm-api/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2693, in
output = {key: future.result() for key, future in zip(steps, futures)}
^^^^^^^^^^^^^^^
File "/Users/zhengisamazing/opt/anaconda3/envs/vigyan-llm-api/lib/python3.11/concurrent/futures/_base.py", line 456, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "/Users/zhengisamazing/opt/anaconda3/envs/vigyan-llm-api/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/Users/zhengisamazing/opt/anaconda3/envs/vigyan-llm-api/lib/python3.11/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/zhengisamazing/opt/anaconda3/envs/vigyan-llm-api/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3504, in invoke
return self._call_with_config(
^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/zhengisamazing/opt/anaconda3/envs/vigyan-llm-api/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1243, in _call_with_config
context.run(
File "/Users/zhengisamazing/opt/anaconda3/envs/vigyan-llm-api/lib/python3.11/site-packages/langchain_core/runnables/config.py", line 326, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^
File "/Users/zhengisamazing/opt/anaconda3/envs/vigyan-llm-api/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3378, in _invoke
output = call_func_with_variable_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/zhengisamazing/opt/anaconda3/envs/vigyan-llm-api/lib/python3.11/site-packages/langchain_core/runnables/config.py", line 326, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^
TypeError: string indices must be integers, not 'str'
Description
There are cases where a user needs to pass variables through more than one chain for later use, but the current implementation doesn't support this.
Provided a reproducible example following the RAG LangChain Expression Language example from https://python.langchain.com/docs/expression_language/cookbook/retrieval
System Info
langchain==0.1.7
langchain-cli==0.0.21
langchain-community==0.0.20
langchain-core==0.1.27
langchain-google-genai==0.0.9
langchain-openai==0.0.6
platform: mac
python version:3.11.7
The reason this doesn't work is that Python evaluates the expression left to right. You are piping two stdlib Python dicts together before anything touches LangChain code, which merges them via dict union (PEP 584); since both dicts have the same keys, the second dict's values silently replace the first's.
See for yourself:
{"context": retriever, "question": RunnablePassthrough()} | {
"context": itemgetter("context"),
"question": itemgetter("question"),
}
Results in output:
{'context': operator.itemgetter('context'),
'question': operator.itemgetter('question')}
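The same merge can be reproduced with plain Python dicts (3.9+), no LangChain involved:

```python
first = {"context": "retriever", "question": "passthrough"}
second = {"context": "itemgetter", "question": "itemgetter"}
merged = first | second   # PEP 584 dict union: values from `second` win
print(merged == second)   # True -- the first dict's values are gone
```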
To fix, explicitly create the langchain object using RunnableParallel:
RunnableParallel({"context": retriever, "question": RunnablePassthrough()})
### Only line added to the example
| {'context': itemgetter('context'), "question": itemgetter('question')}
| prompt
| model
| StrOutputParser()
)
Then the first function in the sequence is a langchain object, which can be composed with dicts, runnables, etc. as intended.
I ran into a similar issue, and it took me a while to figure out that I needed to replace dicts with RunnableParallel.
I would suggest either:
Updating the docs and examples to make this behavior clear and explicit (particularly the "TIP" section on https://python.langchain.com/docs/expression_language/primitives/parallel/, which states that a dict and a RunnableParallel are equivalent. In this case they aren't.)
Finding a way to make this syntax work. It is easy to run into this as soon as you want to implement a somewhat complex chain, and it's not intuitive for a non-Python expert that a dict would work for one step but not for a second one.
|
2025-04-01T06:39:21.335936
| 2024-05-09T11:22:20
|
2287481626
|
{
"authors": [
"moneebullah25",
"wood001"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7761",
"repo": "langchain-ai/langchain",
"url": "https://github.com/langchain-ai/langchain/issues/21478"
}
|
gharchive/issue
|
DOC: No example of usage implementation is provided for the langchain.chains.query_constructor.base.load_query_constructor_runnable function
Checklist
[X] I added a very descriptive title to this issue.
[X] I included a link to the documentation page I am referring to (if applicable).
Issue with current documentation:
Description:
Currently, the load_query_constructor_runnable function documentation doesn't have usage examples or scenarios, making it challenging for developers to understand how to use it.
URL to the documentation: https://api.python.langchain.com/en/latest/chains/langchain.chains.query_constructor.base.load_query_constructor_runnable.html#langchain.chains.query_constructor.base.load_query_constructor_runnable
Idea or request for content:
I tried running the function and below is the complete code and output:
from langchain.chains.query_constructor.base import load_query_constructor_runnable
from langchain.chains.query_constructor.schema import AttributeInfo
from langchain_openai import ChatOpenAI
from langchain.chains.query_constructor.ir import (
Comparator,
Comparison,
Operation,
Operator,
StructuredQuery,
)
# Define your document contents and attribute information
document_contents = """
product_name: Widget, price: $20
product_name: Gadget, price: $35
product_name: Gizmo, price: $50
"""
attribute_info: AttributeInfo = [
{"name": "product_name", "type": "string"},
{"name": "price", "type": "number"},
]
model = ChatOpenAI(model="gpt-3.5-turbo", temperature=0.5)
# Create a runnable for constructing queries
runnable = load_query_constructor_runnable(
llm=model,
document_contents=document_contents,
attribute_info=attribute_info,
allowed_comparators=[Comparator.EQ, Comparator.LT, Comparator.GT],
allowed_operators=[Operator.AND, Operator.NOT, Operator.OR],
enable_limit=True,
schema_prompt="Describe the query schema using allowed comparators and operators.",
fix_invalid=True,
)
# Now you can use the runnable to construct queries based on user input
user_input = "Show me products with price less than 30"
query = runnable.middle[0].invoke(user_input).content
print(f"Constructed query: {query}")
Output:
Constructed query: 1. Wireless Bluetooth Earbuds - $29.99
2. Portable Phone Charger - $24.99
3. Travel Makeup Bag - $19.99
4. Insulated Water Bottle - $15.99
5. LED Desk Lamp - $27.99
6. Resistance Bands Set - $12.99
7. Stainless Steel Mixing Bowls - $19.99
8. Yoga Mat - $24.99
9. Essential Oil Diffuser - $28.99
10. Electric Handheld Milk Frother - $14.99
However, the output is wrong and does not reference the original documents provided. A usage example is needed.
Run your code, and the client will send a prompt like the following:
Your goal is to structure the user\'s query to match the request schema provided below.
Describe the query schema using allowed comparators and operators.
<< Example 1. >>
Data Source:
'''json
{
"content": "Lyrics of a song",
"attributes": {
"artist": {
"type": "string",
"description": "Name of the song artist"
},
"length": {
"type": "integer",
"description": "Length of the song in seconds"
},
"genre": {
"type": "string",
"description": "The song genre, one of "pop", "rock" or "rap""
}
}
}
'''
User Query:
What are songs by Taylor Swift or Katy Perry about teenage romance under 3 minutes long in the dance pop genre
Structured Request:
'''json
{
"query": "teenager love",
"filter": "and(or(eq(\\"artist\\", \\"Taylor Swift\\"), eq(\\"artist\\", \\"Katy Perry\\")), lt(\\"length\\", 180), eq(\\"genre\\", \\"pop\\"))"
}
'''
<< Example 2. >>
Data Source:
'''json
{
"content": "Lyrics of a song",
"attributes": {
"artist": {
"type": "string",
"description": "Name of the song artist"
},
"length": {
"type": "integer",
"description": "Length of the song in seconds"
},
"genre": {
"type": "string",
"description": "The song genre, one of "pop", "rock" or "rap""
}
}
}
'''
User Query:
What are songs that were not published on Spotify
Structured Request:
'''json
{
"query": "",
"filter": "NO_FILTER"
}
'''
<< Example 3. >>
Data Source:
'''json
{
"content": "Lyrics of a song",
"attributes": {
"artist": {
"type": "string",
"description": "Name of the song artist"
},
"length": {
"type": "integer",
"description": "Length of the song in seconds"
},
"genre": {
"type": "string",
"description": "The song genre, one of "pop", "rock" or "rap""
}
}
}
'''
User Query:
What are three songs about love
Structured Request:
'''json
{
"query": "love",
"filter": "NO_FILTER",
"limit": 2
}
'''
<< Example 4. >>
Data Source:
'''json
{
"content": "Hardware Products Price List",
"attributes": {
"product_name": {
"type": "string"
},
"price": {
"type": "number"
}
}
}
'''
User Query:
Show me products with price less than 30
Structured Request:
So I changed the document_contents content and got the correct answer.
# Define your document contents and attribute information
document_contents = "Hardware Products Price List"
attribute_info: AttributeInfo = [
{"name": "product_name", "type": "string"},
{"name": "price", "type": "number"},
]
# Create a runnable for constructing queries
runnable = load_query_constructor_runnable(
llm=llm,
document_contents=document_contents,
attribute_info=attribute_info,
allowed_comparators=[Comparator.EQ, Comparator.LT, Comparator.GT],
allowed_operators=[Operator.AND, Operator.NOT, Operator.OR],
enable_limit=True,
schema_prompt="Describe the query schema using allowed comparators and operators.",
fix_invalid=True,
)
# Now you can use the runnable to construct queries based on user input
user_input = "What are products that price less than 30"
query = runnable.invoke(user_input)
print(f"Constructed query: {query}")
You can try it; query will be a StructuredQuery object.
|
2025-04-01T06:39:21.337315
| 2023-10-23T23:19:59
|
1958187003
|
{
"authors": [
"baskaryan",
"leo-gan"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7762",
"repo": "langchain-ai/langchain",
"url": "https://github.com/langchain-ai/langchain/pull/12177"
}
|
gharchive/pull-request
|
updated integrations/providers/microsoft
Added several missed tools, utilities, toolkits to the Microsoft page.
amazing, thanks @leo-gan!
|
2025-04-01T06:39:21.344168
| 2024-12-13T10:00:42
|
2737997302
|
{
"authors": [
"Bhargav2525",
"efriis"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7763",
"repo": "langchain-ai/langchain",
"url": "https://github.com/langchain-ai/langchain/pull/28706"
}
|
gharchive/pull-request
|
docs: added region parameter for awsBedrockParamsOrDefault in ChatModelTabs.js
Thank you for contributing to LangChain!
[X] PR title: "package: description"
Where "package" is whichever of langchain, community, core, etc. is being modified. Use "docs: ..." for CI changes.
Example: "community: add foobar LLM"
[X] PR message:
**Description:** This PR contains a docs change in ChatModelTabs.js. In the default parameters of awsBedrockParamsOrDefault, region should be mandatory for ChatBedrock;
without region, a validation error is raised, so region should be there.
Twitter handle: https://twitter.com/BhargavPrince18
Additional guidelines:
Make sure optional dependencies are imported within a function.
Please do not add dependencies to pyproject.toml files (even optional ones) unless they are required for unit tests.
Most PRs should not touch more than one package.
Changes should be backwards compatible.
If you are adding something to community, do not re-import it in langchain.
If no one reviews your PR within a few days, please @-mention one of baskaryan, efriis, eyurtsev, ccurme, vbarda, hwchase17.
in general the docs recommend setting this in an environment variable (similar to access creds), so will close this instead of adding to that block!
Everyone does that, but people need to know that they have to use the region parameter inside ChatBedrock. I followed the docs and got the error; it took me almost 20 minutes to understand that I had to give the region parameter inside awsBedrockParams. Let's at least keep the region parameter, remove the us-east-1 value, and users will give whatever they want.
Got it. We can consider linking the API ref for the overall classes in the tabs, but in general these tabs aren't for documenting the end-to-end use of the provider; they're just showing how the chat models are used in each one.
So can I link API references to the model classes in each tab?
|
2025-04-01T06:39:21.358598
| 2023-09-24T16:37:52
|
1910299017
|
{
"authors": [
"SagefulAI",
"adrienjoly",
"codenameakshay",
"dhruv-anand-aintech",
"drewB",
"getcreatr",
"icelic",
"jacoblee93",
"stevenmilstein",
"wcummings"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7764",
"repo": "langchain-ai/langchainjs",
"url": "https://github.com/langchain-ai/langchainjs/issues/2706"
}
|
gharchive/issue
|
Retry logic for OpenAI timeouts
I'm seeing the following error in prod:
Error [TimeoutError]: Request timed out.
at wrapOpenAIClientError (file:///app/node_modules/langchain/dist/util/openai.js:6:17)
at file:///app/node_modules/langchain/dist/chat_models/openai.js:518:31
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async RetryOperation._fn (/app/node_modules/p-retry/index.js:50:12) {
attemptNumber: 1,
retriesLeft: 6
}
It's getting captured in my catch block, so I'm fairly sure the retries aren't happening, unless the first attempt is the one that gets re-thrown or something confusing like that. Is it possible this doesn't meet the criteria for retryable? Could this be addressed using the FailedAttemptHandler interface?
I've not set a timeout for the LLM. I'm having a hard time figuring out the default value.
/**
* Custom handler to handle failed attempts. Takes the originally thrown
* error object as input, and should itself throw an error if the input
* error is not retryable.
*/
onFailedAttempt?: FailedAttemptHandler;
The default failure handler looks like the culprit:
const STATUS_NO_RETRY = [
  400, // Bad Request
  401, // Unauthorized
  402, // Payment Required
  403, // Forbidden
  404, // Not Found
  405, // Method Not Allowed
  406, // Not Acceptable
  407, // Proxy Authentication Required
  408, // Request Timeout // <<<<<<<<<<<<<<<<<<<<
  409, // Conflict
];

const defaultFailedAttemptHandler = (error: any) => {
  if (
    error.message.startsWith("Cancel") ||
    error.message.startsWith("TimeoutError") || // <<<<<<<<<<
    error.name === "TimeoutError" ||
    error.message.startsWith("AbortError") ||
    error.name === "AbortError"
  ) {
    throw error;
  }
  // eslint-disable-next-line @typescript-eslint/no-explicit-any
  if ((error as any)?.code === "ECONNABORTED") {
    throw error;
  }
  const status =
    // eslint-disable-next-line @typescript-eslint/no-explicit-any
    (error as any)?.response?.status ?? (error as any)?.status;
  if (status && STATUS_NO_RETRY.includes(+status)) {
    throw error;
  }
  // eslint-disable-next-line @typescript-eslint/no-explicit-any
  if ((error as any)?.error?.code === "insufficient_quota") {
    const err = new Error(error?.message);
    err.name = "InsufficientQuotaError";
    throw err;
  }
};
However, reviewing OpenAI's documentation:
A `Timeout` error indicates that your request took too long to complete and our server closed the connection. This could be due to a network issue, a heavy load on our services, or a complex request that requires more processing time.
If you encounter a Timeout error, please try the following steps:
**Wait a few seconds and retry your request.** Sometimes, the network congestion or the load on our services may be reduced and your request may succeed on the second attempt.
Check your network settings and make sure you have a stable and fast internet connection. You may need to switch to a different network, use a wired connection, or reduce the number of devices or applications using your bandwidth.
If the issue persists, check out our persistent errors next steps section.
It sounds like this should be retryable to me.
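As an illustration of the policy being argued for here, a minimal self-contained sketch (a hypothetical helper, not langchain's actual implementation): timeouts are retried, while non-retryable client statuses such as 401 are rethrown immediately. It is written synchronously for brevity; the real handlers wrap async OpenAI calls.

```typescript
// Hypothetical sketch of the retry policy discussed above -- not langchain's
// actual code. TimeoutError falls through and is retried, while 4xx statuses
// from the STATUS_NO_RETRY list are rethrown immediately.
const STATUS_NO_RETRY = [400, 401, 402, 403, 404, 405, 406, 407, 409];

class TimeoutError extends Error {
  constructor(message = "Request timed out.") {
    super(message);
    this.name = "TimeoutError";
  }
}

function withTimeoutRetry<T>(fn: () => T, maxRetries = 6): T {
  let lastError: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt += 1) {
    try {
      return fn();
    } catch (error) {
      lastError = error;
      const status = (error as { status?: number }).status;
      // Non-retryable client errors propagate immediately.
      if (status !== undefined && STATUS_NO_RETRY.includes(status)) {
        throw error;
      }
      // TimeoutError is not rethrown here, so it gets retried,
      // matching OpenAI's own guidance quoted above.
    }
  }
  throw lastError;
}
```

The key difference from the default handler quoted above is simply that `TimeoutError` is absent from the rethrow conditions.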
Going to try this out in my service, will make a PR if it solves my problem ^
Some more details here: https://github.com/openai/openai-node/blob/master/README.md#retries
Certain errors will be automatically retried 2 times by default, with a short exponential backoff. Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict, 429 Rate Limit, and >=500 Internal errors will all be retried by default.
You can use the maxRetries option to configure or disable this:
Maybe this is all we need, to increase maxRetries on the OAI client?
@wcummings Thanks for hunting this down. Did that change solve the problem? If so, will you submit a PR?
I just found out that retries are not happening at all, despite setting maxRetries to a valid value in the langchain ChatOpenAI object.
This sort of bug remaining unsolved for so long makes me doubt whether anyone uses langchainjs at all.
Can the maintainers fix this?
+1 would like to see this implemented
I can confirm I am seeing this same issue where no retry is happening even when I pass a maxRetries param. Will have to switch to openAI's native lib which has this working until it is fixed.
For anyone else following this issue, the change made in https://github.com/langchain-ai/langchainjs/issues/2706#issuecomment-1734422202 was to remove "TimeoutError" from the list of things not to retry. I can confirm that does fix the issue. That said, we can't just make that change to fix this because this handler is used for more than just openai.
I notice there are many places where we explicitly set maxRetries to 0 in calls to the native openai lib. Perhaps the best route would be to change that to use the maxRetries value for the langchain openai model?
@codenameakshay what is the status of this PR that is supposed to fix maxRetries param not working?
@codenameakshay what is the status of this PR that is supposed to fix maxRetries param not working?
The PR doesn't actually fix the bug as I discussed with @jacoblee93. It is still an open issue.
See https://github.com/langchain-ai/langchainjs/pull/3370#discussion_r1402660699
I do still really want to get to the bottom of it 😕 but yeah we need to differentiate between user defined timeouts, which probably shouldn't be retried as the user expects some resolution in a timeframe, vs OpenAI default timeouts.
I am not following. Why would the timeouts be different? Seems like we are just dealing with a timeout value that has a default if the user doesn’t supply one. Since OpenAI already handles retrying timeouts why do we need langchain to try and handle retries on timeouts as well? Couldn’t we just pass the user timeout value to OpenAI?
Or maybe it is less about timeouts and more about retries. Since OpenAI now handles retries in the library natively, seems like we should just let it do its thing rather than use a separate mechanism outside the library.
If that was their desire, won't they just set retries to 0?
Yeah but now maxRetries param is not working as expected so I think this is necessary change.
I'm surprised to see that the bot's suggestion above (https://github.com/langchain-ai/langchainjs/issues/2706#issuecomment-1732617130) was disliked that much.
It actually inspired us to come up with a working solution: extend LangchainLLMChain into a class that calls withRetry() (owned by one of its ancestor class: Runnable) every time we call invoke.
class LLMChainWithRetry extends LangchainLLMChain {
  async call(values, config = undefined) {
    const runnableChain = super.withRetry({
      stopAfterAttempt: 3,
      onFailedAttempt: (error) => {
        if (error.name === 'TimeoutError') {
          console.log(`[LLMChainWithRetry] Attempt ${error.attemptNumber} failed. There are ${error.retriesLeft} retries left.`);
        } else {
          throw error;
        }
      }
    });
    return await runnableChain.invoke(values, config);
  }
}
Oh, you should also just pass onFailedAttempt where you'd be able to pass maxRetries and it would work as well. I don't think you'd need to subclass.
Closing for now given the above.
I don't understand why this is being closed. Seems to me that there is still a clear bug here even if there is a workaround.
Because it's not clear what should be happening IMO
Does anyone know how this is handled in the python version? Could that perhaps guide us to a solution?
IMHO it is fine to make a breaking change and make this retryable. The timeout is not the same as a deadline, and if a request is retried for any other reason the total request time will exceed the timeout anyway.
Here's a draft of what I did in my codebase: https://github.com/langchain-ai/langchainjs/pull/4633
Thanks for all your patience and especially @wcummings for the PR! New behavior will be live in the next core release (probably today).
|
2025-04-01T06:39:21.386868
| 2024-08-14T20:38:24
|
2466785888
|
{
"authors": [
"Yurlungur",
"jhp-lanl"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7765",
"repo": "lanl/singularity-eos",
"url": "https://github.com/lanl/singularity-eos/pull/406"
}
|
gharchive/pull-request
|
[trivial][bugfix] add include guard
PR Summary
In PR #330, I missed an include guard in the new tests. Here the guard is added. Resolves #405.
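For background, the fix is the standard C++ include-guard pattern. A minimal hypothetical sketch (the file and macro names here are made up, not the actual singularity-eos header):

```cpp
// Hypothetical header sketch illustrating the include-guard pattern this PR
// adds -- the real file and macro names in singularity-eos differ.
#ifndef SINGULARITY_EOS_TEST_UTILS_HPP_
#define SINGULARITY_EOS_TEST_UTILS_HPP_

// Without the guard, including this header twice in the same translation
// unit would redefine the symbols below and fail to compile.
inline int include_guard_demo() { return 1; }

#endif // SINGULARITY_EOS_TEST_UTILS_HPP_
```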
PR Checklist
[ ] Adds a test for any bugs fixed. Adds tests for new features.
[x] Format your changes by using the make format command after configuring with cmake.
[ ] Document any new features, update documentation for changes made.
[ ] Make sure the copyright notice on any files you modified is up to date.
[ ] After creating a pull request, note it in the CHANGELOG.md file.
[ ] LANL employees: make sure tests pass both on the github CI and on the Darwin CI
If preparing for a new release, in addition please check the following:
[ ] Update the version in cmake.
[ ] Move the changes in the CHANGELOG.md file under a new header for the new release, and reset the categories.
[ ] Ensure that any when='@main' dependencies are updated to the release version in the package.py
As a note for the future (or an additional one line change to add to this MR) would we maybe want to consider removing the spiner build from the minimal tests on github?
https://github.com/lanl/singularity-eos/blob/09cf65cd06eb249ed6f7de736f8c1f4165020a78/.github/workflows/tests_minimal.yml#L31
As a note for the future (or an additional one line change to add to this MR) would we maybe want to consider removing the spiner build from the minimal tests on github?
https://github.com/lanl/singularity-eos/blob/09cf65cd06eb249ed6f7de736f8c1f4165020a78/.github/workflows/tests_minimal.yml#L31
Good suggestion. :+1: Done.
|
2025-04-01T06:39:21.409665
| 2022-05-29T12:30:15
|
1251876379
|
{
"authors": [
"alperenersoy",
"danharrin"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7766",
"repo": "laravel-filament/filament",
"url": "https://github.com/laravel-filament/filament/pull/2598"
}
|
gharchive/pull-request
|
Added 500ms debounce to key value field inputs
@danharrin As we discussed in the Discord channel, the key value field is now pretty hard to type into when used with ->reactive(). Until we can debounce the $entangle statement, this should do the trick.
Thanks
|
2025-04-01T06:39:21.423418
| 2017-03-27T13:46:40
|
217251213
|
{
"authors": [
"Dylan-DPC",
"SirriC",
"austinjherman",
"marathonstudios",
"ntzm",
"olssonm",
"themsaid",
"websanova"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7767",
"repo": "laravel/framework",
"url": "https://github.com/laravel/framework/issues/18515"
}
|
gharchive/issue
|
Artisan commands will not show line numbers of errors on php7+
Laravel Version: 5.4.16 (and previous)
PHP Version: 7+, 7.1+
Description:
When running an artisan command on Windows that contains an error, the error message is shown without a line number on PHP 7+.
On PHP 5.6 the error is shown with the line number.
This does not happen on all files, for instance adding an error to the actual artisan file will show the line number but adding an error to MigrateCommand.php will not.
Steps To Reproduce:
Windows with php 7+
Create a fresh install of Laravel.
Add an error to \artisan and run php artisan - the line number will be shown with the error
Remove above error and add one to \vendor\laravel\framework\src\Illuminate\Database\Console\Migrations\MigrateCommand.php - the error will be shown without the line number
PHP 5.6
PHP 7
@SirriC can you try running a PHP script that throws an error? Do you see line numbers?
@Dylan-DPC Yes, if I have a simple php file with an error it’ll show the line number.
I also get line numbers up to a point in Laravel, so I can edit some files and see them but then they stop showing. I think it might be when the errors are handled by Symfony - see the bottom screenshot in my original post. The first time I run artisan it finds the error on line 16. On the second run the error is much deeper into the application and no line number is shown.
It turns out this is not just on Windows. My colleague was actually running php 5.6. I have also tried php 7.1 on two linux machines and neither show line numbers for errors.
Try running with the -v option
Use --verbose to get more information about the error if you want.
This is still a serious issue, and -v or --verbose has no effect in my Install. Why is this closed?
Agreed, having same issue here....
I'm having the same issue. Using Windows 10, PHP 5.6, Laravel 5.4. --verbose flag doesn't work:
For those experiencing a similar issue, I found that it's more verbosely logging the error in storage/logs/laravel.log
Seems to be an issue with the php.ini being shipped with PHP >= 7.0. Since Xdebug 2.4 there's a new option available, xdebug.show_error_trace; this should be set to 1 in your Xdebug configuration (or just your php.ini). Note that this should be the php.ini for CLI, not FPM.
I.e., if using Homestead just put xdebug.show_error_trace=1 somewhere in /etc/php/7.1/cli/php.ini and it should work.
Discussion on Twitter about this: https://twitter.com/mattiasgeniar/status/905450118152953857
|
2025-04-01T06:39:21.429709
| 2017-11-24T12:13:13
|
276600169
|
{
"authors": [
"royvanv",
"themsaid"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7768",
"repo": "laravel/framework",
"url": "https://github.com/laravel/framework/issues/22196"
}
|
gharchive/issue
|
Translator case insensitivity on Windows
Laravel Version: 5.5.21
PHP Version: 7.0.10
Database Driver & Version: Irrelevant
Description:
The translator class looks for the translation file in a case-insensitive manner on Windows, causing it to look up a PHP translation file instead of looking for a string in the JSON file.
**Expected result:** a string from the JSON translation file (e.g. resources/lang/en.json)
**Actual result:** the content from a PHP translation file (e.g. resources/lang/en/faq.php)
__('faq') should return the contents of the file resources/lang/en/faq.php.
__('FAQ') (or any other non-lowercase variant) should return a string from resources/lang/en.json.
Steps To Reproduce:
Create a file named faq.php in resources/lang/en and make it return an array:
<?php

return [
    'key' => 'value',
];
Make sure resources/lang/en.json does not exist or does not contain the key FAQ.
Call __('FAQ') or app('translator')->getFromJson('FAQ') from a controller.
The quick fix for this issue is to add the keys causing problems to resources/lang/en.json.
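As a sketch (the value here is illustrative), resources/lang/en.json would gain an explicit entry for the conflicting key:

```json
{
    "FAQ": "Frequently Asked Questions"
}
```

With this entry present, __('FAQ') resolves from the JSON file, since Laravel checks JSON translation strings before falling back to the PHP group files.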
Yeah you need to watch out from this edge case if you're using both translation methods in your project.
@themsaid This is not very clear from reading the documentation page.
Also, I assume this is not an issue on Unix-like systems where file names are case-sensitive. I haven't tested it, yet.
Laravel will first check for a JSON translation string before trying to find a PHP file, so yes including a JSON translation line for that key would fix the issue.
|
2025-04-01T06:39:21.438394
| 2021-01-03T12:08:48
|
777625440
|
{
"authors": [
"danikp",
"taylorotwell"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7769",
"repo": "laravel/framework",
"url": "https://github.com/laravel/framework/issues/35767"
}
|
gharchive/issue
|
routes with 'group' as binding does not work
Laravel Version: 8.20.1
PHP Version: 7.4
Database Driver & Version:
Description:
After upgrading from version 6 to the latest, all routes having a 'group' binding fail to work. Changing it to anything else works perfectly. The behavior is reproducible with and without explicit binding. All other 100+ bindings are totally fine.
Steps To Reproduce:
create route in following format
Route::get('groups/{group}', 'GroupsController@show');
1.1 update RouteServiceProvider to use default namespace for above to work
create controller and model
php artisan make:controller --api --model=Group GroupsController
create new table and fill with few rows or set any existing table to be used with model
make a call to route above with id existing in table above
exception "Target class [Group] does not exist." thrown even with explicit binding being set like this
Route::model('group', \App\Models\Group::class);
Unable to recreate. Works for me.
|
2025-04-01T06:39:21.445840
| 2015-02-08T04:29:26
|
56938545
|
{
"authors": [
"Arrilot",
"GrahamCampbell",
"JosephSilber",
"barryvdh"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7770",
"repo": "laravel/framework",
"url": "https://github.com/laravel/framework/issues/7331"
}
|
gharchive/issue
|
Registering middleware conditionally
In L4, we used to be able to load service providers based on the environment using append_config.
The recommended way now in L5 is to instead load it conditionally in AppServiceProvider:
if ($this->app->environment('local'))
{
    $this->app->register('LocalOnlyServiceProvider');
}
The same can't be said for middleware. There seems to currently be no way to do the same for middleware. The only way I was able to accomplish conditional middleware loading was by extending the app kernel's constructor and loading it there. Ugh :cry:
Another possible solution might be to create my own ConditionalMiddleware that always runs, and then pass the request through to various additional middleware conditionally. Again, ugh!
For reference: slack discussion
I think there should be a built-in way to do this, but I'm not sure the exact approach to take.
Ideas?
Perhaps add the middleware option back to App, like before? That was also proposed to simplify middleware in packages..
$this->app->middleware('MyConditionalMiddleware');
Doesn't the order of middlewares matter?
This partially duplicates https://github.com/laravel/framework/issues/6211.
Closing this.
Taylor says to just do it in the kernel's handle method, like this.
|
2025-04-01T06:39:21.446856
| 2017-11-16T18:39:51
|
274620450
|
{
"authors": [
"mateusjatenee",
"taylorotwell"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7771",
"repo": "laravel/framework",
"url": "https://github.com/laravel/framework/pull/22104"
}
|
gharchive/pull-request
|
[5.6] Allow different collection classes to be returned
Just an idea -- this PR would allow different collection classes to be returned by setting a property instead of the newCollection method.
Would rather people just override the method.
|
2025-04-01T06:39:21.448065
| 2020-09-23T13:23:39
|
707363632
|
{
"authors": [
"btaskew",
"taylorotwell"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7772",
"repo": "laravel/framework",
"url": "https://github.com/laravel/framework/pull/34492"
}
|
gharchive/pull-request
|
[8.x] Allow dynamic factory methods to obey newFactory method on model
PR is for issue #34490
Benefit is that models that have to define a factory via the newFactory method can utilise the magic factory methods for has and for.
What if the class isn't using that trait?
|
2025-04-01T06:39:21.449388
| 2022-05-02T09:08:44
|
1222656101
|
{
"authors": [
"taylorotwell",
"usernotnull"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7773",
"repo": "laravel/framework",
"url": "https://github.com/laravel/framework/pull/42214"
}
|
gharchive/pull-request
|
[9.x] Add to_action helper
After replacing the route redirects with the to_route helper, it makes sense to use the same naming for action redirects.
I appreciate the consistency, but I'm not a huge fan of action based routing. Controllers can be moved / renamed, etc. which makes action routing a bit brittle compared to named routes.
|
2025-04-01T06:39:21.451532
| 2023-11-09T11:03:35
|
1985382793
|
{
"authors": [
"driesvints",
"jcergolj"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7774",
"repo": "laravel/framework",
"url": "https://github.com/laravel/framework/pull/48955"
}
|
gharchive/pull-request
|
[10.x] ExpectsTable fails if new table prompt method is used
This PR tries to fix the following bug:
<?php

namespace App\Console\Commands;

use Illuminate\Console\Command;

use function Laravel\Prompts\table;

class DemoCommand extends Command
{
    protected $signature = 'run-demo-command';

    protected $description = 'Command description';

    public function handle(): void
    {
        table(['name', 'email'], [['joe doe', 'joe.doe@example.com']]);
    }
}

// test file
namespace Tests\Feature;

use Tests\TestCase;

class ExampleTest extends TestCase
{
    /**
     * A basic test example.
     */
    public function test_demo_command(): void
    {
        $this->artisan('run-demo-command')
            ->expectsTable(['name', 'email'], [['joe doe', 'joe.doe@example.com']]);
    }
}
// There was 1 failure:
// 1) Tests\Feature\ExampleTest::test_demo_command
// Output "+---------+---------------------+" was not printed
Based on my understanding, when a table is rendered, the method 'write' from the OutputStyle class is called and not 'doWrite' from BufferedOutput. That's why the expectation fails.
I tried to fix it, but I don't know what to pass to the mock here for the $table variable:
$mock->shouldReceive('write')
    ->once()
    ->ordered()
    ->with($table, Mockery::any())
    ->andReturnUsing(function () use ($i) {
        unset($this->test->expectedTables[$i]);
    });
Feel free to resend this once you have time.
|
2025-04-01T06:39:21.469458
| 2023-04-03T10:15:44
|
1651901908
|
{
"authors": [
"nunomaduro",
"tuannpa"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7775",
"repo": "laravel/sail",
"url": "https://github.com/laravel/sail/issues/570"
}
|
gharchive/issue
|
Unable to install new dependencies when using Sail
Laravel Version: 10.5.1
PHP Version: 8.2.4
Database Driver & Version: mysql 8.0
OS: Ubuntu 20.04, 22.04
Description:
I installed Laravel with Sail using WSL2 and whenever I tried to install new packages, it gave the following error from ZipDownloader.php:
The archive may contain identical file names with different capitalization (which fails on case insensitive filesystems): ZipArchive::extractTo(/var/www/html/vendor/composer/e0174f41/symfony-psr-http-message-bridge-a125b93/ArgumentValueResolver/PsrServerRequestResolver.php): Operation failed: Operation not permitted
ZipArchive::extractTo(/var/www/html/vendor/composer/e0174f41/symfony-psr-http-message-bridge-a125b93/ArgumentValueResolver/PsrServerRequestResolver.php): Operation failed: Operation not permitted
Steps To Reproduce:
Download Ubuntu (any version, I used 20.04, 22.04) from Microsoft Store.
Open Ubuntu and start a fresh installation of Laravel Sail.
Execute up command with: sail up.
Install specific package like laravel/passport: sail composer require laravel/passport.
Observe the error as per screenshot.
I've made a search in Laracasts; a few people reporting this issue ran "composer clearcache" to address it. Can you try it and let me know how it goes? No worries, if the issue persists, I will re-open this issue.
I think this can be closed @nunomaduro since I have found the issue. It is probably related to the mounting operation between Windows and WSL2, not to Laravel Sail itself. I added a wsl.conf like this and my problem is solved:

```ini
[boot]
systemd=true

[automount]
enabled = true
options = "metadata"
mountFsTab = false

[user]
default=tuannpa
```
I found some helpful info with below links:
https://stackoverflow.com/questions/66620301/laravel-sail-on-wsl2-wrong-permissions
Fstab: https://superuser.com/questions/1710001/how-do-you-configure-windows-subsystem-for-linux-2-wsl2-to-use-fstab-to-automa
|