| added (string, date) | created (timestamp[us]) | id (string) | metadata (dict) | source (string, 2 classes) | text (string) |
|---|---|---|---|---|---|
2025-04-01T06:39:06.551250
| 2023-05-03T14:38:34
|
1694209486
|
{
"authors": [
"isaacs",
"marcbachmann"
],
"license": "ISC",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7101",
"repo": "isaacs/ttlcache",
"url": "https://github.com/isaacs/ttlcache/pull/25"
}
|
gharchive/pull-request
|
Reduce memory usage by only creating one timer
This reduces memory usage and also speeds up the .set calls by a factor of 2.6.
Thanks, this is a very elegant improvement. Published on 1.3.0, also went ahead and made cache.cancelTimer() a first-class public method.
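For context, a minimal Go sketch of the single-timer technique (illustrative only - ttlcache itself is JavaScript, and this is not its source): rather than arming one timer per entry, expirations go into a min-heap and a single shared timer is re-armed for the soonest deadline, so most Set calls are just a map write plus a heap push.

```go
package main

import (
	"container/heap"
	"fmt"
	"sync"
	"time"
)

type item struct {
	key string
	exp time.Time
}

// expHeap orders pending expirations, soonest first.
type expHeap []item

func (h expHeap) Len() int           { return len(h) }
func (h expHeap) Less(i, j int) bool { return h[i].exp.Before(h[j].exp) }
func (h expHeap) Swap(i, j int)      { h[i], h[j] = h[j], h[i] }
func (h *expHeap) Push(x any)        { *h = append(*h, x.(item)) }
func (h *expHeap) Pop() any {
	old := *h
	x := old[len(old)-1]
	*h = old[:len(old)-1]
	return x
}

type Cache struct {
	mu    sync.Mutex
	data  map[string]string
	exps  expHeap
	timer *time.Timer // the one shared timer for the whole cache
}

func New() *Cache { return &Cache{data: map[string]string{}} }

// Set is cheap: a map write plus a heap push. The shared timer is only
// re-armed when the new entry becomes the soonest expiration.
func (c *Cache) Set(key, val string, ttl time.Duration) {
	c.mu.Lock()
	defer c.mu.Unlock()
	exp := time.Now().Add(ttl)
	c.data[key] = val
	heap.Push(&c.exps, item{key, exp})
	if c.exps[0].exp.Equal(exp) {
		c.rearm()
	}
}

func (c *Cache) Get(key string) (string, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	v, ok := c.data[key]
	return v, ok
}

// rearm points the shared timer at the soonest deadline (mu must be held).
func (c *Cache) rearm() {
	if c.timer != nil {
		c.timer.Stop()
	}
	if len(c.exps) > 0 {
		c.timer = time.AfterFunc(time.Until(c.exps[0].exp), c.expire)
	}
}

// expire evicts everything that is due, then re-arms for the next deadline.
// (A fuller version would skip heap entries made stale by overwrites.)
func (c *Cache) expire() {
	c.mu.Lock()
	defer c.mu.Unlock()
	now := time.Now()
	for len(c.exps) > 0 && !c.exps[0].exp.After(now) {
		it := heap.Pop(&c.exps).(item)
		delete(c.data, it.key)
	}
	c.rearm()
}

func main() {
	c := New()
	c.Set("a", "1", 50*time.Millisecond)
	c.Set("b", "2", time.Second)
	time.Sleep(100 * time.Millisecond)
	_, okA := c.Get("a")
	_, okB := c.Get("b")
	fmt.Println(okA, okB) // false true
}
```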
|
2025-04-01T06:39:06.617225
| 2020-10-14T09:21:20
|
721291428
|
{
"authors": [
"mayankmusaddi"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7102",
"repo": "ismms-himc/clustergrammer2",
"url": "https://github.com/ismms-himc/clustergrammer2/issues/83"
}
|
gharchive/issue
|
Unable to set any other color other than white to denote a 0 value in a heatmap
I wish to have a heatmap with a 0 value corresponding to red and a 1 value corresponding to white in the Clustergrammer JS module. However, this seems unachievable through any parameter changes. The only option provided in Clustergrammer JS is changing the color of the positive and the negative end.
This feature is especially needed for p-value heatmaps, where a lower value denotes higher significance, so a darker color corresponding to 0 is preferred.
Thanks for the suggestion @cornhundred . It helped me indeed.
This issue does not pertain to this repository, so I'm closing it here and will reopen it in the Clustergrammer JS repository belonging to the MaayanLab.
Apologies.
|
2025-04-01T06:39:06.638110
| 2017-05-02T06:16:51
|
225599922
|
{
"authors": [
"ayj",
"geeknoid"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7105",
"repo": "istio/istio.github.io",
"url": "https://github.com/istio/istio.github.io/issues/90"
}
|
gharchive/issue
|
Right column index overlaps main body content.
The right column index on many pages overlaps the main body content. This is particularly bad if the web browser isn't maximized on a large screen. The example below is at half-screen width on a 15" MBP (the other half is a terminal to run exercises) with a reasonably small font.
https://istio.io/docs/reference/contribute/style-guide.html.
Dup of #77.
Dave's got a fix for this coming up soon.
|
2025-04-01T06:39:06.732010
| 2020-01-31T16:25:57
|
558232237
|
{
"authors": [
"howardjohn",
"infa-rbliznet",
"jasonwoodman-ascend",
"pmoncadaisla"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7106",
"repo": "istio/istio",
"url": "https://github.com/istio/istio/issues/20733"
}
|
gharchive/issue
|
Pilot configures proxy with wrong routes/listeners
Bug description
Pilot adds incorrect listeners/routes to envoy proxies, resulting in 503s. This is the output of proxy-config routes on two identical pods created from the same deployment:
This is the proxy that received the correct config:
$ istioctl pc routes pod1.our-namespace
NOTE: This output only contains routes loaded via RDS.
NAME VIRTUAL HOSTS
8080 4
9091 2
istio-telemetry.istio-system.svc.cluster.local:42422 1
15004 2
15014 2
inbound|8080|http|correct-service.our-namespace.svc.cluster.local 1
inbound|8080|http|correct-service.our-namespace.svc.cluster.local 1
inbound|8080|http|correct-service.our-namespace.svc.cluster.local 1
inbound|8080|http|correct-service.our-namespace.svc.cluster.local 1
Here is the output from an incorrectly configured proxy on the other pod from the same deployment as above:
$ istioctl pc routes pod2.our-namespace
NOTE: This output only contains routes loaded via RDS.
NAME VIRTUAL HOSTS
8080 4
istio-telemetry.istio-system.svc.cluster.local:42422 1
9091 2
15004 2
15014 2
inbound|8800|http|service-from-another-namespace.wrong-namespace.svc.cluster.local 1
inbound|8800|http|service-from-another-namespace.wrong-namespace.svc.cluster.local 1
inbound|8800|http|service-from-another-namespace.wrong-namespace.svc.cluster.local 1
inbound|8800|http|service-from-another-namespace.wrong-namespace.svc.cluster.local 1
Also notice that the port number (8800 instead of 8080) is not right either. Restarting the second pod results in a proxy with the same config as the first, which is correct.
Expected behavior
Pilot configures proxy with correct routes/listeners for the service.
Steps to reproduce the bug
It seems to only happen on a deployment when new pods are created and not every time. Restarting the pods seems to yield correct listeners every time.
Version (include the output of istioctl version --remote and kubectl version and helm version if you used Helm)
$ istioctl version --remote
client version: 1.4.3
citadel version: 1.4.3
galley version: 1.4.3
galley version: 1.4.3
istio-ig-custom version:
istio-ig-custom version:
istio-ig-custom2 version:
istio-ig-custom2 version:
istio-ig-custom3 version:
istio-ig-custom3 version:
pilot version: 1.4.3
pilot version: 1.4.3
sidecar-injector version: 1.4.3
sidecar-injector version: 1.4.3
telemetry version: 1.4.3
telemetry version: 1.4.3
data plane version: 1.4.0 (1 proxies), 1.4.3 (229 proxies)
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-14T04:24:34Z", GoVersion:"go1.12.13", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.5", GitCommit:"20c265fef0741dd71a66480e35bd69f18351daea", GitTreeState:"clean", BuildDate:"2019-10-15T19:07:57Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
How was Istio installed?
Helm 2 with Tiller
Environment where bug was observed (cloud vendor, OS, etc)
On premise, centos
We are seeing the same or a similar issue randomly in our pods after upgrading from Istio 1.3.3 to 1.4.3.
Steps to reproduce the bug
It seems to only happen on a deployment when new pods are created and not every time. Restarting the pods seems to yield correct listeners every time.
Version (include the output of istioctl version --remote and kubectl version and helm version if you used Helm)
$ istioctl version --remote
client version: 1.4.3
citadel version: 1.4.3
egressgateway version: 1.4.3
egressgateway version: 1.4.3
egressgateway version: 1.4.3
egressgateway version: 1.4.3
egressgateway version: 1.4.3
galley version: 1.4.3
ilbgateway version: 1.4.3
ilbgateway version: 1.4.3
ingressgateway version: 1.4.3
ingressgateway version: 1.4.3
ingressgateway version: 1.4.3
ingressgateway version: 1.4.3
ingressgateway version: 1.4.3
nodeagent version:
[... nodeagent version ...]
nodeagent version:
pilot version: 1.4.3
pilot version: 1.4.3
policy version: 1.4.3
sidecar-injector version: 1.4.3
telemetry version: 1.4.3
telemetry version: 1.4.3
data plane version: 1.4.3 (838 proxies)
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.2", GitCommit:"59603c6e503c87169aea6106f57b9f242f64df89", GitTreeState:"clean", BuildDate:"2020-01-18T23:30:10Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.7-gke.23", GitCommit:"81c87c699557fed991e292cd328b2129c2f242a2", GitTreeState:"clean", BuildDate:"2019-11-07T19:23:23Z", GoVersion:"go1.12.11b4", Compiler:"gc", Platform:"linux/amd64"}
How was Istio installed?
Helm 2 with Tiller
Environment where bug was observed (cloud vendor, OS, etc)
Google Cloud
GKE
$ gcloud container clusters list
NAME LOCATION MASTER_VERSION MASTER_IP MACHINE_TYPE NODE_VERSION NUM_NODES STATUS
a-cluster europe-west1 1.14.7-gke.23 x.x.x.x.x n1-standard-4 1.14.7-gke.23 108 RUNNING
Additional information about this issue:
These are the routes in a normal pod (I edited the names; I can provide the output without edits via a private channel):
$ istioctl pc routes a-deployment-794f446794-zzwvm.correct-namespace
NOTE: This output only contains routes loaded via RDS.
NAME VIRTUAL HOSTS
80 33
96 2
5556 2
8060 2
8080 10
8081 3
9090 3
9091 30
9901 2
10901 2
15004 3
15010 2
15014 7
15030 2
15031 2
jaeger-istio-collector.istio-system.svc.cluster.local:9411 1
jaeger-istio-collector.istio-system.svc.cluster.local:14267 1
inbound|9091|http-prom-a-deployment-correct-namespace|a-deployment.correct-namespace.svc.cluster.local 1
jaeger-istio-collector.istio-system.svc.cluster.local:14268 1
istio-telemetry.istio-system.svc.cluster.local:42422 1
inbound|9091|http-prom-a-deployment-correct-namespace|a-deployment.correct-namespace.svc.cluster.local 1
inbound|80|http-a-deployment-correct-namespace|a-deployment.correct-namespace.svc.cluster.local 1
inbound|80|http-a-deployment-correct-namespace|a-deployment.correct-namespace.svc.cluster.local 1
20001 2
1
These are the routes from a faulty pod:
istioctl pc routes a-deployment-794f446794-clvbg.correct-namespace
NOTE: This output only contains routes loaded via RDS.
NAME VIRTUAL HOSTS
80 33
96 2
5556 2
8060 2
istio-telemetry.istio-system.svc.cluster.local:42422 1
jaeger-istio-collector.istio-system.svc.cluster.local:14268 1
inbound|80|http-other-deployment-wrong-namespace|other-deployment.wrong-namespace.svc.cluster.local 1
jaeger-istio-collector.istio-system.svc.cluster.local:9411 1
jaeger-istio-collector.istio-system.svc.cluster.local:14267 1
inbound|80|http-other-deployment-wrong-namespace|other-deployment.wrong-namespace.svc.cluster.local 1
8080 10
8081 3
9090 3
9091 30
9901 2
10901 2
15004 3
15010 2
15014 7
15030 2
15031 2
20001 2
1
When executing an HTTP request to a pod with the correct configuration, we can see the header x-envoy-decorator-operation set to the correct value:
curl -I http://a-deployment-794f446794-zzwvm.correct-namespace:8080
< server: istio-envoy
< x-envoy-decorator-operation: a-deployment.correct-namespace:80/*
If we execute the same request to a POD with a wrong configuration we see:
curl -I http://a-deployment-794f446794-clvbg.correct-namespace:8080
< server: istio-envoy
< x-envoy-decorator-operation: other-deployment.wrong-namespace:80/*
Notice that the x-envoy-decorator-operation header indicates other-deployment.wrong-namespace:80, but we expected to see a-deployment-794f446794-clvbg.correct-namespace:80.
Additionally, we have detected that the response content is correct (it is served by a-deployment-794f446794-clvbg.correct-namespace), but the policies applied are wrong (in our case, the JWT Policy).
Kubernetes and Istio resources for a-deployment in correct-namespace:
VirtualService:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
creationTimestamp: "2019-08-07T06:15:32Z"
generation: 3
labels:
app: a-deployment
chart: base-v1.2.17
fullapp: correct-namespace
heritage: Tiller
release: a-deployment-correct-namespace
version: v1.2.17
name: a-deployment
namespace: correct-namespace
resourceVersion: "281486396"
selfLink: /apis/networking.istio.io/v1alpha3/namespaces/correct-namespace/virtualservices/a-deployment
uid: c3360aef-b8da-11e9-b4e4-42010ac50709
spec:
gateways:
- default/ingressgateway
hosts:
- correct-namespace.dev-01.my.domain.com
- other.dev.my.domain.com
http:
- corsPolicy:
allowCredentials: false
allowHeaders:
- origin
- x-requested-with
- accept
- content-type
- x-application-id
- x-correlation-id
- x-request-id
- authorization
- x-authorization
- X-Customer-ID
- X-SFID
- X-MSISDN
- x-front-client
- scoring-token
- x-target
- cache-control
allowOrigin:
- '*'
maxAge: 600s
match:
- uri:
prefix: /v1/a-deployment/
rewrite:
uri: /
route:
- destination:
host: a-deployment.correct-namespace.svc.cluster.local
port:
number: 80
Service:
apiVersion: v1
kind: Service
metadata:
annotations:
prometheus.io/path: /metrics
prometheus.io/port: "9091"
prometheus.io/probe: "true"
creationTimestamp: "2019-08-07T06:15:32Z"
labels:
app: a-deployment
chart: base-v1.2.17
fullapp: correct-namespace
heritage: Tiller
release: a-deployment-correct-namespace
version: v1.2.17
name: a-deployment
namespace: correct-namespace
resourceVersion: "435659168"
selfLink: /api/v1/namespaces/correct-namespace/services/a-deployment
uid: c3314118-b8da-11e9-b4e4-42010ac50709
spec:
clusterIP: <IP_ADDRESS>
ports:
- name: http-a-deployment-correct-namespace
port: 80
protocol: TCP
targetPort: 8080
- name: http-prom-a-deployment-correct-namespace
port: 9091
protocol: TCP
targetPort: 9091
selector:
app: a-deployment
release: a-deployment-correct-namespace
sessionAffinity: None
type: ClusterIP
Policy
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
creationTimestamp: "2019-10-21T14:11:59Z"
generation: 1
labels:
app: a-deployment
chart: base-v1.2.17
fullapp: correct-namespace
heritage: Tiller
release: a-deployment-correct-namespace
tier: backend
version: v1.2.17
name: a-deployment-authn
namespace: correct-namespace
resourceVersion: "281486392"
selfLink: /apis/authentication.istio.io/v1alpha1/namespaces/correct-namespace/policies/a-deployment-authn
uid: bf1eaa37-f40c-11e9-8d86-42010ac50709
spec:
origins:
- jwt:
issuer: issuer1
jwksUri: https://auth-provider-jwks.example.com/.well-known/jwks.json
trigger_rules:
- excluded_paths:
- exact: /health
- prefix: /docs
- exact: /health
- exact: /metrics
included_paths:
- prefix: /
- jwt:
issuer: issuer2
jwksUri: https://auth-provider-jwks.example.com/.well-known/jwks.json
trigger_rules:
- excluded_paths:
- exact: /health
- prefix: /docs
- exact: /health
- exact: /metrics
included_paths:
- prefix: /
principalBinding: USE_ORIGIN
targets:
- name: a-deployment
Kubernetes and Istio resources for other-deployment in wrong-namespace:
VirtualService
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
creationTimestamp: "2019-08-14T10:09:51Z"
generation: 3
labels:
app: other-deployment
chart: base-v1.2.17
fullapp: wrong-namespace
heritage: Tiller
release: other-deployment-wrong-namespace
version: v1.2.17
name: other-deployment
namespace: wrong-namespace
resourceVersion: "294862447"
selfLink: /apis/networking.istio.io/v1alpha3/namespaces/wrong-namespace/virtualservices/other-deployment
uid: a7b23296-be7b-11e9-bf65-42010ac5070b
spec:
gateways:
- default/ingressgateway
hosts:
- wrong-namespace.dev.my.domain.com
- other-url.dev.my.domain.com
http:
- corsPolicy:
allowCredentials: false
allowHeaders:
- origin
- x-requested-with
- accept
- content-type
- x-requestId
- x-requestSessionId
- authorization
- cache-control
allowOrigin:
- '*'
maxAge: 600s
match:
- uri:
prefix: /other-deployment/
rewrite:
uri: /
route:
- destination:
host: other-deployment.wrong-namespace.svc.cluster.local
port:
number: 80
Service
apiVersion: v1
kind: Service
metadata:
annotations:
creationTimestamp: "2019-08-14T10:09:51Z"
labels:
app: other-deployment
chart: base-v1.2.17
fullapp: wrong-namespace
heritage: Tiller
release: other-deployment-wrong-namespace
version: v1.2.17
name: other-deployment
namespace: wrong-namespace
resourceVersion: "435658730"
selfLink: /api/v1/namespaces/wrong-namespace/services/other-deployment
uid: a7afea06-be7b-11e9-bf65-42010ac5070b
spec:
clusterIP: <IP_ADDRESS>
ports:
- name: http-other-deployment-wrong-namespace
port: 80
protocol: TCP
targetPort: 8080
selector:
app: other-deployment
release: other-deployment-wrong-namespace
sessionAffinity: None
type: ClusterIP
Policy
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
creationTimestamp: "2019-09-11T10:26:43Z"
generation: 1
labels:
app: other-deployment
chart: base-v1.2.17
fullapp: wrong-namespace
heritage: Tiller
release: other-deployment-wrong-namespace
tier: backend
version: v1.2.17
name: other-deployment-authn
namespace: wrong-namespace
resourceVersion: "294862440"
selfLink: /apis/authentication.istio.io/v1alpha1/namespaces/wrong-namespace/policies/other-deployment-authn
uid: a64ba2ce-d47e-11e9-bf65-42010ac5070b
spec:
origins:
- jwt:
issuer: issuer3
jwksUri: https://auth-provider-jwks.example.com/.well-known/jwks.json
trigger_rules:
- excluded_paths:
- exact: /monitoring/health
- prefix: /doc/
principalBinding: USE_ORIGIN
targets:
- name: other-deployment
As for the config dump file, it is attached.
istio-config-dump-edited.json.gz
I hope this information is useful.
Thanks!
This may just be some recency bias, but this feels a lot like https://github.com/istio/istio/issues/20676. tl;dr is a pod IP gets re-used, causing pilot to generate config for the old pod instead of the new pod. So presumably, if this was the case, there was a pod with some ip <IP_ADDRESS> in service-from-another-namespace.wrong-namespace, then later your new pod in service correct-service.our-namespace comes up with pod ip <IP_ADDRESS> and gets the wrong config. I think this can only happen with pre-emptible nodes though, are you using those?
The tell-tale sign of this is if you see this pattern in the logs:
Handling event update for pod pod1-sha123 in namespace foo -> <IP_ADDRESS>
Handling event update for pod pod1-sha123 in namespace foo -> <IP_ADDRESS> # same pod name, new IP
Handling event update for pod new-pod-sha345 in namespace foo -> <IP_ADDRESS> # re-use the old pod
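As a toy illustration of that event pattern (hypothetical data structures, not pilot's actual code), a registry keyed only by pod IP mishandles a re-used IP when the old pod's delete event arrives after the new pod's update:

```go
package main

import "fmt"

// Toy sketch of the failure mode described above: a workload registry keyed
// only by pod IP goes stale when events for old and new pods arrive out of order.
func main() {
	byIP := map[string]string{} // pod IP -> pod currently believed to own it

	events := []struct{ kind, pod, ip string }{
		{"update", "old-pod-in-wrong-namespace", "10.20.0.5"}, // old pod owns the IP
		{"update", "new-pod-in-our-namespace", "10.20.0.5"},   // node pre-empted, IP re-used
		{"delete", "old-pod-in-wrong-namespace", "10.20.0.5"}, // old pod's delete arrives late
	}
	for _, e := range events {
		switch e.kind {
		case "update":
			byIP[e.ip] = e.pod
		case "delete":
			// Deleting by IP alone, without checking which pod the entry
			// belongs to, clobbers the new pod's registration.
			delete(byIP, e.ip)
		}
	}
	fmt.Printf("owner of 10.20.0.5: %q\n", byIP["10.20.0.5"]) // "" instead of new-pod
}
```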
@howardjohn In our case the nodes are long-lived; they are on-prem and have been running for nearly a year.
Hi @howardjohn , we are indeed using preemptible nodes (alongside non-preemptible ones).
I've searched for "Handling event update for pod" in the Stackdriver logs (whole cluster) and couldn't find a single match. (I also tested substrings.)
@pmoncadaisla if you are using pre-emptible nodes, that seems likely to be the root cause. It's weird that it doesn't match, though; that log happens any time a pod is changed at all. Do you not have info-level logging enabled?
@howardjohn I can see this in the istio-proxy container of any POD: --proxyLogLevel="warning" and --log_output_level="default:info"
Pre-emptible nodes seem to me to be the root cause, but why is this happening after the upgrade from 1.3.3 to 1.4.3 and not before? We've seen this exact behaviour in 2 different Kubernetes clusters, upgraded from and to the same Istio versions.
My understanding was that this code path is entirely unchanged, so I am surprised it changed during the upgrade - it's very possible it's a different root cause. The logs I am referring to are on pilot, not the proxy.
You are right @howardjohn , I was looking for logs in a cluster where logs from pilot are excluded from ingestion at Stackdriver.
I've checked for the same one I reported before, and I can see what you described:
2020-02-05T12:10:59.408811Z info Handling event update for pod other-deployment-6d94d5f9db-8mt4d in namespace wrong-namespace -> <IP_ADDRESS>
2020-02-05T12:12:50.781199Z info Handling event update for pod other-deploymenti-6d94d5f9db-8mt4d in namespace wrong-namespace -> <IP_ADDRESS>
2020-02-05T15:24:21.009595Z info Handling event update for pod a-deployment-794f446794-clvbg in namespace correct-namespace -> <IP_ADDRESS>
In this case IP address <IP_ADDRESS> is being reused as you described.
I guess our options, if we are affected by this issue, are:
Shut down preemptible nodes
Wait for 1.5 and expect #20676 to be included
@pmoncadaisla this will be backported to 1.4 as well. If you want, you can run the dev builds that include this commit: https://github.com/istio/istio/wiki/Dev-Builds. You can see all commits at https://github.com/istio/istio/commits/release-1.4. Note that we don't validate these much, so it would be at your own risk.
Hi @howardjohn , we have removed our preemptible nodes until the fix is included in a release.
We will be monitoring and we will keep you informed if we see the same behaviour after removing preemptible nodes.
If we don't have problems, then our issue is not related to @jasonwoodman-ascend 's.
It's possible @jasonwoodman-ascend 's is the same issue without pre-emptible nodes. The symptom here is a running pod having its IP address changed. I've only seen it on node restarts, but perhaps it could happen in other cases?
@howardjohn @pmoncadaisla I think my issue can be explained by the same root cause. In our case, though the nodes are long-lived, we do restart them for patching. I am guessing our issue started in our last patching cycle; the timing seems about right.
At this point I am waiting for the fix to be released and will re-evaluate then as I think it will most likely fix it.
I think we have the same or a very similar issue, which is why I'm not filing a new bug. In our case as well, Istio picks up listeners from a completely different namespace (kube-system) instead of the namespace where the service is deployed. However, it's an intermittent issue; we have to run a delete-create loop for the deployment/service to reproduce it. The only thing that helped us is to manually define a Sidecar object and repeat all the ports defined in the Service object.
This should be fixed in 1.4+ now. If you still see this issue on these versions, please reopen/open another issue. Thanks!
|
2025-04-01T06:39:06.736335
| 2020-05-04T10:44:19
|
611775848
|
{
"authors": [
"howardjohn",
"jsalgado78"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7107",
"repo": "istio/istio",
"url": "https://github.com/istio/istio/issues/23493"
}
|
gharchive/issue
|
Istio CNI plugin install with SELinux enabled not documented
Describe the feature request
Installing and repairing the Istio CNI plugin when SELinux is enabled and the Docker daemon is running with the --selinux-enabled flag isn't documented.
The install-cni container fails when it runs with SELinux enabled, but the SELinux requirements for getting the Istio CNI plugin running without issues aren't documented.
Describe alternatives you've considered
Add the SELinux requirements to the Istio CNI plugin install documentation, so the plugin can be set up without issues.
Additional context
Closing as a duplicate of https://github.com/istio/istio/issues/23605. Still valid though. Thanks!
|
2025-04-01T06:39:06.775417
| 2021-02-23T07:14:59
|
814187220
|
{
"authors": [
"esnible",
"howardjohn",
"hzxuzhonghu",
"jasonwzm",
"psanzm",
"samirmajen",
"xichengliudui",
"zhengqisong"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7108",
"repo": "istio/istio",
"url": "https://github.com/istio/istio/issues/31014"
}
|
gharchive/issue
|
When VirtualService status is Error, istiod cannot start
istio version: 1.8.3
kubernetes:1.18.10
Bug description
When the VirtualService config has an error, istiod cannot start.
istiod error log:
{"level":"error","time":"2021-02-23T06:40:18.954868Z","scope":"klog","msg":"k8s.io/client-go@v0.19.3/tools/cache/reflector.go:156: Failed to watch *v1alpha3.VirtualService: failed to list *v1alpha3.VirtualService: v1alpha3.VirtualServiceList.Items: []v1alpha3.VirtualService: v1alpha3.VirtualService.v1alpha3.VirtualService.Status: unmarshalerDecoder: unknown value \"Error\" for enum istio.analysis.v1alpha1.AnalysisMessageBase_Level, error found in #10 byte of ...|l+v1\\\"\"}]}},{\"apiVer|..., bigger context ...|\"map-nginx.project-1925.svc.cluster.local+v1\\\"\"}]}},{\"apiVersion\":\"networking.istio.io/v1alpha3\",\"ki|...[]"}
The erroneous VirtualService config:
spec:
exportTo:
- .
gateways:
- gateway-122-71
hosts:
- nginx-map.gw-wso2t-sy-in.earth.xcloud.lenovo.com
http:
- match:
- uri:
prefix: /
rewrite:
uri: /
route:
- destination:
host: map-nginx.project-1925.svc.cluster.local
port:
number: 80
subset: v1
weight: 100
status:
validationMessages:
- code: IST0101
documentation_url: https://istio.io/docs/reference/config/analysis/ist0101/?ref=status-controller
level: Error
message: 'Referenced host+subset in destinationrule not found: "map-nginx.project-1925.svc.cluster.local+v1"'
[ ] Docs
[ ] Installation
[ ] Networking
[ ] Performance and Scalability
[ ] Extensions and Telemetry
[ ] Security
[X] Test and Release
[ ] User Experience
[ ] Developer Infrastructure
[ ] Upgrade
Expected behavior
@zhengqisong can you provide a complete, well-indented VirtualService to help us understand what happened?
+1
yikes. We need to fix this. One issue - I don't know how :slightly_smiling_face:
Just ran into this exact problem when upgrading istio - had to delete the offending resources for the upgrade to work correctly
@jasonwzm do you know if it's possible to discard bad configs instead of blocking?
I removed the strict deserialization from CLI things and expect it will be easy to fix for istiod.
@howardjohn To test any fixes, I need to be able to reproduce. Two years ago it was easy to put bogus stuff in status using commands like kubectl patch vs bookinfo --type=json --patch='[{"op": "replace", "path": "/status", "value": {"startTime": "2019-06-05T17:52:46Z", "state": "ACTIVE"}}]'. Kubernetes no longer allows it. I made two guesses as to how to enable adding bogus stuff to /status and failed:
// Doesn't work, schema required
kubectl patch crd virtualservices.networking.istio.io --type='json' -p='[{"op": "remove", "path": "/spec/versions/0/schema"}]'
// Doesn't work "no change"
kubectl patch crd virtualservices.networking.istio.io --type='json' -p='[{"op": "remove", "path": "/metadata/managedFields"}]'
@therealmitchconnors
Any idea what protects /status? I know it is something in the API server, because if I do --dry-run=client I can set the status, but if I do --dry-run=server the status does not get patched.
You might need a raw API call. When you do a patch it probably sends a PUT call to /crds/virtualservices/foo or similar - we need it to go to /crds/virtualservices/foo/status. You might be able to add -v9 to the kubectl command to see the API it's calling, add /status, then call it with kubectl patch --raw?
There is no patch --raw in kubectl (I checked 1.19 and 1.20).
I tried
MYAPISERVER=c7.us-south.containers.cloud.ibm.com:28514
echo '{"status": "banana"}' | kubectl create --raw https://${MYAPISERVER}/apis/networking.istio.io/v1beta1/namespaces/default/virtualservices/httpbin/status -f -
... but it fails with 'create is not supported on resources of kind "virtualservices.networking.istio.io"'
https://github.com/kubernetes/kubectl/issues/564
It seems to be an issue when analysis is enabled during startup and the unmarshaller encounters a problem. I vaguely remember that we changed the format of the message level we put in the status of CRDs?
To manually put garbage into status, first install Istio WITHOUT values.global.istiod.enableAnalysis=true and WITHOUT values.pilot.env.PILOT_ENABLE_STATUS=true. Then:
kubectl proxy &
curl -k -s -X PATCH -H "Accept: application/json, */*" \
-H "Content-Type: application/merge-patch+json" \
<IP_ADDRESS>:8001/apis/networking.istio.io/v1beta1/namespaces/default/virtualservices/httpbin/status \
--data '{"status":{"monkey":"banana"}}'
At this point, for master, I have no problem restarting or re-installing Istio with or without analysis. It is possible this was fixed in an earlier version. Checking...
I have a backtrace. istio.io/istio is not in the path. istio.io/api is generated code. Not sure what to do.
A user with authority to do kubectl proxy and modify any VirtualService can use the technique in https://github.com/istio/istio/issues/31014#issuecomment-802810868 to cause trouble for the control plane.
istio.io/api/meta/v1alpha1.(*IstioStatus).UnmarshalJSON(0xc000df93c8, 0xc002c300f0, 0x45, 0x50, 0xc000df93c8, 0x6a48b38)
/Users/snible/go/pkg/mod/istio.io/api@v0.0.0-20210318104759-fbefbc937cef/meta/v1alpha1/status_json.gen.go:35 +0x18d
github.com/json-iterator/go.(*unmarshalerDecoder).Decode(0xc001f6dc40, 0xc002abf1b0, 0xc000f03050)
/Users/snible/go/pkg/mod/github.com/json-iterator/go@v1.1.10/reflect_marshaler.go:200 +0xdb
github.com/json-iterator/go.(*referenceDecoder).Decode(0xc001f6dc50, 0xc000df93c8, 0xc000f03050)
/Users/snible/go/pkg/mod/github.com/json-iterator/go@v1.1.10/reflect_optional.go:128 +0x68
github.com/json-iterator/go.(*structFieldDecoder).Decode(0xc001f958e0, 0xc000df9200, 0xc000f03050)
/Users/snible/go/pkg/mod/github.com/json-iterator/go@v1.1.10/reflect_struct_decoder.go:1054 +0x78
github.com/json-iterator/go.(*fiveFieldsStructDecoder).Decode(0xc000b38e40, 0xc000df9200, 0xc000f03050)
/Users/snible/go/pkg/mod/github.com/json-iterator/go@v1.1.10/reflect_struct_decoder.go:739 +0x311
github.com/json-iterator/go.(*sliceDecoder).doDecode(0xc00106bd70, 0xc0002547c8, 0xc000f03050)
/Users/snible/go/pkg/mod/github.com/json-iterator/go@v1.1.10/reflect_slice.go:86 +0xdd
github.com/json-iterator/go.(*sliceDecoder).Decode(0xc00106bd70, 0xc0002547c8, 0xc000f03050)
/Users/snible/go/pkg/mod/github.com/json-iterator/go@v1.1.10/reflect_slice.go:60 +0x45
github.com/json-iterator/go.(*structFieldDecoder).Decode(0xc001f95c20, 0xc000254770, 0xc000f03050)
/Users/snible/go/pkg/mod/github.com/json-iterator/go@v1.1.10/reflect_struct_decoder.go:1054 +0x78
github.com/json-iterator/go.(*fourFieldsStructDecoder).Decode(0xc0010587d0, 0xc000254770, 0xc000f03050)
/Users/snible/go/pkg/mod/github.com/json-iterator/go@v1.1.10/reflect_struct_decoder.go:697 +0xbf
github.com/json-iterator/go.(*Iterator).ReadVal(0xc000f03050, 0x3ae97c0, 0xc000254770)
/Users/snible/go/pkg/mod/github.com/json-iterator/go@v1.1.10/reflect.go:79 +0xc2
github.com/json-iterator/go.(*frozenConfig).Unmarshal(0xc000374c80, 0xc002b57000, 0xc49, 0x1000, 0x3ae97c0, 0xc000254770, 0x0, 0x0)
/Users/snible/go/pkg/mod/github.com/json-iterator/go@v1.1.10/config.go:348 +0xb7
k8s.io/apimachinery/pkg/runtime/serializer/json.(*Serializer).Decode(0xc000226550, 0xc002b57000, 0xc49, 0x1000, 0x0, 0x4033d98, 0xc000254770, 0xc00d5a63302ad750, 0xc059add2, 0x5cdda00, ...)
/Users/snible/go/pkg/mod/k8s.io/apimachinery@v0.20.4/pkg/runtime/serializer/json/json.go:264 +0x5be
k8s.io/apimachinery/pkg/runtime.WithoutVersionDecoder.Decode(0x3ffcc00, 0xc000226550, 0xc002b57000, 0xc49, 0x1000, 0x0, 0x4033d98, 0xc000254770, 0xc002c24160, 0xc000bc5770, ...)
/Users/snible/go/pkg/mod/k8s.io/apimachinery@v0.20.4/pkg/runtime/helper.go:252 +0x97
k8s.io/client-go/rest.Result.Into(0xc002b57000, 0xc49, 0x1000, 0x0, 0x0, 0x0, 0xc001337c90, 0x10, 0x0, 0x0, ...)
/Users/snible/go/pkg/mod/k8s.io/client-go@v0.20.4/rest/request.go:1273 +0xb4
istio.io/client-go/pkg/clientset/versioned/typed/networking/v1alpha3.(*virtualServices).List(0xc002c181c0, 0x407ce28, 0xc000076098, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/Users/snible/go/pkg/mod/istio.io/client-go@v0.0.0-20210218000043-b598dd019200/pkg/clientset/versioned/typed/networking/v1alpha3/virtualservice.gen.go:93 +0x29d
istio.io/client-go/pkg/informers/externalversions/networking/v1alpha3.NewFilteredVirtualServiceInformer.func1(0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x3c28a4f, ...)
/Users/snible/go/pkg/mod/istio.io/client-go@v0.0.0-20210218000043-b598dd019200/pkg/informers/externalversions/networking/v1alpha3/virtualservice.gen.go:63 +0x1bc
k8s.io/client-go/tools/cache.(*ListWatch).List(0xc000e16888, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/Users/snible/go/pkg/mod/k8s.io/client-go@v0.20.4/tools/cache/listwatch.go:106 +0x75
k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch.func1.1.2(0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x3c28a4f, ...)
/Users/snible/go/pkg/mod/k8s.io/client-go@v0.20.4/tools/cache/reflector.go:283 +0x75
k8s.io/client-go/tools/pager.SimplePageFunc.func1(0x407ce28, 0xc000076090, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/Users/snible/go/pkg/mod/k8s.io/client-go@v0.20.4/tools/pager/pager.go:40 +0x75
k8s.io/client-go/tools/pager.(*ListPager).List(0xc002c39e60, 0x407ce28, 0xc000076090, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/Users/snible/go/pkg/mod/k8s.io/client-go@v0.20.4/tools/pager/pager.go:91 +0x175
k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch.func1.1(0xc002b7f4a0, 0xc000dfe1c0, 0xc002260870, 0xc002c06860, 0xc002c1a009, 0xc002c06870, 0xc001b83bc0)
/Users/snible/go/pkg/mod/k8s.io/client-go@v0.20.4/tools/cache/reflector.go:309 +0x1e5
created by k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch.func1
/Users/snible/go/pkg/mod/k8s.io/client-go@v0.20.4/tools/cache/reflector.go:271 +0x298
I tested on Istio 1.6.8, 1.6.13, 1.7.4, 1.7.6, and 1.8.3.
public-system (the namespace that publishes services externally) has a gateway and a virtualservice (exportTo: .)
project-1925 (the namespace that publishes services to the namespace's own services) has a service, deployment, destinationrule, and virtualservice (exportTo: .)
In <=1.7.4, public-system's virtualservice has the error below, but istiod can restart:
validationMessages:
- code: IST0101
documentation_url: https://istio.io/docs/reference/config/analysis/ist0101/?ref=status-controller
level: Error
message: 'Referenced host+subset in destinationrule not found: "map-nginx.project-1925.svc.cluster.local+v1"'
In 1.7.6, public-system's virtualservice has no error.
Upgrading from 1.7.4 to 1.8.3: istiod (1.8.3) cannot start.
Upgrading from 1.7.4 to 1.7.6: the public-system virtualservice error remains, but istiod (1.7.6) can start, and services can be connected.
Upgrading from 1.7.4 to 1.7.6, then from 1.7.6 to 1.8.3: the public-system virtualservice error remains, and istiod (1.8.3) cannot start:
{"level":"error","time":"2021-03-22T10:39:08.252656Z","scope":"klog","msg":"k8s.io/client-go@v0.19.3/tools/cache/reflector.go:156: Failed to watch *v1alpha3.VirtualService: failed to list
*v1alpha3.VirtualService: v1alpha3.VirtualServiceList.Items: []v1alpha3.VirtualService: v1alpha3.VirtualService.v1alpha3.VirtualService.Status: unmarshalerDecoder: unknown value \"Error\"
for enum istio.analysis.v1alpha1.AnalysisMessageBase_Level, error found in #10 byte of ...|l+v1\\\"\"}]}},{\"apiVer|..., bigger context ...|esh-b.project-1896.svc.frankfurtinp.local+v1\\\"
\"}]}},{\"apiVersion\":\"networking.istio.io/v1alpha3\",\"ki|...[]"}
Installing 1.7.6, then upgrading from 1.7.6 to 1.8.3: the public-system virtualservice has no error and istiod (1.8.3) can start.
I think this issue involves two bugs:
The VirtualService check has a bug: it does not support destinationrules from other namespaces for the destination.
The vs in public-system:
http:
- match:
- uri:
prefix: /
rewrite:
uri: /
route:
- destination:
host: map-nginx.project-1896.svc.cluster.local
port:
number: 80
subset: v1
weight: 10
istiod (1.8.3) has a bug parsing a VirtualService whose status contains level: Error:
status:
validationMessages:
- code: IST0101
documentation_url: https://istio.io/docs/reference/config/analysis/ist0101/?ref=status-controller
level: Error # this line fails to parse
message: 'Referenced host+subset in destinationrule not found: "map-nginx.project-1896.svc.cluster.local+v1"'
@zhengqisong Sorry for the late reply. Thank you for providing many details on reproducing the issue.
This is caused by https://github.com/istio/istio/pull/25960 where we were updating the validation output message to align with our status API. This changed the API output from 1.7 to 1.8 and therefore you see the unmarshalling error during istiod startup.
We are working on a refactoring of analysis and hoping to move it (including the API) out of alpha before 1.11. Before that, the upgrade experience may not be ideal when you have analysis enabled.
Does the current behavior cause additional trouble using Istio?
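To make the failure mode concrete, here is a toy Go sketch (hypothetical enum names and numbering, not the istio.io/api generated code) of why strict enum deserialization turns one stale status field into a fatal list/watch error during istiod startup, while a lenient parser would degrade gracefully:

```go
package main

import (
	"encoding/json"
	"fmt"
)

type Level int

// Hypothetical numbering, for illustration only.
var levelNames = map[string]Level{"UNKNOWN": 0, "ERROR": 1, "WARNING": 2, "INFO": 3}

// strictParse mirrors the failing behavior: any unrecognized string is a hard
// error, which aborts the whole VirtualService list during istiod startup.
func strictParse(s string) (Level, error) {
	if l, ok := levelNames[s]; ok {
		return l, nil
	}
	return 0, fmt.Errorf("unknown value %q for enum Level", s)
}

// lenientParse is the resilient alternative: unknown values fall back to
// UNKNOWN instead of blocking the control plane.
func lenientParse(s string) Level {
	if l, ok := levelNames[s]; ok {
		return l
	}
	return levelNames["UNKNOWN"]
}

func main() {
	var msg struct {
		Level string `json:"level"`
	}
	// Status written by an older release uses "Error", not "ERROR".
	_ = json.Unmarshal([]byte(`{"level":"Error"}`), &msg)
	if _, err := strictParse(msg.Level); err != nil {
		fmt.Println("strict:", err) // the istiod startup failure mode
	}
	fmt.Println("lenient:", lenientParse(msg.Level)) // degrades to UNKNOWN
}
```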
Jason, I think the problem extends beyond that? There is a potential DOS vector by generating invalid configuration in etcd/api-server. istiod should be resilient to bad configs.
@howardjohn Yes, that is the bigger concern here. This applies to any istio config that can be patched by users in etcd, though patching etcd is a highly privileged action. We should look into a more general way of handling that.
In this particular case, it is our own code that produced output in an enum field that differed from what was expected.
I modified the etcd data and successfully upgraded istiod.
First, get the virtualservice-63-66 JSON from etcd:
ETCDCTL_API=3 ./etcdctl --endpoints="https://<IP_ADDRESS>:2379" get /registry/networking.istio.io/virtualservices/mesh-system/virtualservice-63-66
scale deploy istiod --replicas=0
Then update virtualservice-63-66 in etcd, dropping the status info:
ETCDCTL_API=3 ./etcdctl --endpoints="https://<IP_ADDRESS>:2379" put /registry/networking.istio.io/virtualservices/mesh-system/virtualservice-152-75 '{"apiVersion":"networking.istio.io/v1alpha3","kind":"VirtualService","metadata":{"creationTimestamp":"2021-04-06T03:28:39Z","generation":1,"labels":{"application":"applicatiion-4089","project":"project-1927"},"managedFields":[{"apiVersion":"networking.istio.io/v1alpha3","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:application":{},"f:project":{}}},"f:spec":{".":{},"f:exportTo":{},"f:gateways":{},"f:hosts":{},"f:http":{}}},"manager":"okhttp","operation":"Update","time":"2021-04-06T03:28:39Z"},{"apiVersion":"networking.istio.io/v1alpha3","fieldsType":"FieldsV1","fieldsV1":{"f:status":{".":{},"f:validationMessages":{}}},"manager":"pilot-discovery","operation":"Update","time":"2021-04-06T03:28:39Z"}],"name":"virtualservice-152-75","namespace":"mesh-system","uid":"899e3b64-fe55-4b4f-825b-d5e058842281"},"spec":{"exportTo":["."],"gateways":["gateway-152-75"],"hosts":["sdf4gdsfg.demo.com"],"http":[{"match":[{"uri":{"prefix":"/"}}],"rewrite":{"uri":"/"},"route":[{"destination":{"host":"mesh-01-nginx.project-1927.svc.cluster.local","port":{"number":80},"subset":"v1"},"weight":100}]}]}}'
scale deploy istiod --replicas=2
Only the new version of istiod started.
Hi everyone! First of all, thanks for your work and contributions!
I ran into the same problem upgrading the Istio control plane, from v1.7.4 to v1.8.5 and from v1.6.14 to v1.8.5.
We have a lot of Istio resources, so we cannot patch them all to remove the status. Is there any workaround to fix this?
Also, the problem is not only in the VirtualService CRD; it seems the Sidecar is affected too:
{"level":"error","time":"2021-05-04T16:03:13.205337Z","scope":"klog","msg":"k8s.io/client-go@v0.19.3/tools/cache/reflector.go:156: Failed to watch *v1alpha3.Sidecar: failed to list *v1alpha3.Sidecar: v1alpha3.SidecarList.Items: []v1alpha3.Sidecar: v1alpha3.Sidecar.v1alpha3.Sidecar.Status: unmarshalerDecoder: unknown value \"Error\" for enum istio.analysis.v1alpha1.AnalysisMessageBase_Level, error found in #10 byte of ...|avior.\"}]}},{\"apiVer|..., bigger context ...|elector, which can lead to undefined behavior.\"}]}},{\"apiVersion\":\"networking.istio.io/v1alpha3\",\"ki|...[]"}
Thanks in advance, regards!
|
2025-04-01T06:39:06.786249
| 2021-09-30T17:25:08
|
1012435670
|
{
"authors": [
"bianpengyuan",
"eliavem",
"linsun",
"ricosega"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7109",
"repo": "istio/istio",
"url": "https://github.com/istio/istio/issues/35429"
}
|
gharchive/issue
|
service with externalname overrides istio-proxy 443 rules
Bug Description
I have some services without selector pointing to endpoints for external services to the cluster in order to monitor them.
At the moment I create the following object:
apiVersion: v1
kind: Service
metadata:
name: nexus-metrics-monitoring
namespace: external-monitoring
labels:
app: nexus-metrics-monitoring
spec:
externalName: nexus.internal
ports:
- name: nexus
port: 4443
protocol: TCP
targetPort: 443
- name: node
port: 9100
protocol: TCP
targetPort: 9100
- name: containers
port: 8080
protocol: TCP
targetPort: 8080
type: ExternalName
TLS connections originating from inside the cluster, in a pod with the istio-proxy sidecar, start to fail, returning 404 errors.
One more thing: shouldn't this be surfaced somehow?
I mean, when you run "istioctl analyze --all-namespaces" these errors are shown as INFO [IST0118], but in exactly this case an https port completely changed the behavior of all connections from inside.
Yeah, good point. It is an INFO because we have protocol sniffing and it is not necessary to name the port. But in the case of an ExternalName service, an unnamed port can be disruptive. We should probably add a stronger warning message when the service is of ExternalName type.
@bianpengyuan I'd like to work on this issue. Can you assign it to me please?
@eliavem done. thanks for taking this!
Hi @bianpengyuan, I can see two possibilities for making users aware of this issue:
1. We can extend the diag.MessageType for the message "PortNameIsNotUnderNamingConvention" - Info [IST0118]. We can add something like "Warning: an unnamed port can affect [block] routing to external services for the ExternalName service type".
2. If we want to display this warning only when we have an ExternalName service type, then we could create a separate diag.MessageType of type WARNING. In PortNameAnalyzer, we can add a check for the ExternalName service type and display this message instead of [IST0118].
I would like to understand if your suggestion above - "We probably should add a bit more warn message when service is externalname type" - can be covered by point #1.
Hi @eliavem, thank you for picking up this work item. I'd vote for only displaying this warning (perhaps even an error?) for services that have ExternalName, because for normal services (not having ExternalName) auto protocol detection should work fine without causing users any issues.
I agree we should create a separate warning message for services of type ExternalName.
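A rough sketch of what option 2 could look like, using standard Kubernetes API types (this is illustrative Go, not the analyzer change that was eventually merged; the protocol list and message strings are assumptions):

```go
package main

import (
	"fmt"
	"strings"

	corev1 "k8s.io/api/core/v1"
)

// Istio's port-naming convention: the name is "<protocol>" or "<protocol>-<suffix>".
var protocols = []string{"grpc", "grpc-web", "http", "http2", "https", "mongo", "mysql", "redis", "tcp", "tls", "udp"}

func followsConvention(name string) bool {
	for _, p := range protocols {
		if name == p || strings.HasPrefix(name, p+"-") {
			return true
		}
	}
	return false
}

// analyzePortNames keeps the Info-level message for regular services but
// escalates to a Warning for ExternalName services, where protocol sniffing
// on an unnamed port can silently change routing behavior.
func analyzePortNames(svc *corev1.Service) []string {
	var msgs []string
	for _, port := range svc.Spec.Ports {
		if followsConvention(port.Name) {
			continue
		}
		if svc.Spec.Type == corev1.ServiceTypeExternalName {
			msgs = append(msgs, fmt.Sprintf(
				"Warning: port %d on ExternalName service %s/%s does not follow the naming convention and may disrupt routing",
				port.Port, svc.Namespace, svc.Name))
		} else {
			msgs = append(msgs, fmt.Sprintf(
				"Info [IST0118]: port %d on service %s/%s does not follow the naming convention",
				port.Port, svc.Namespace, svc.Name))
		}
	}
	return msgs
}

func main() {
	svc := &corev1.Service{}
	svc.Name, svc.Namespace = "nexus-metrics-monitoring", "external-monitoring"
	svc.Spec.Type = corev1.ServiceTypeExternalName
	svc.Spec.Ports = []corev1.ServicePort{{Name: "nexus", Port: 4443}}
	for _, m := range analyzePortNames(svc) {
		fmt.Println(m) // Warning: port 4443 ...
	}
}
```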
@bianpengyuan I'm closing this as you already fixed and merged it.
Thank you.
|
2025-04-01T06:39:06.802961
| 2021-12-07T22:00:42
|
1073789025
|
{
"authors": [
"bianpengyuan",
"movergan",
"zirain"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7110",
"repo": "istio/istio",
"url": "https://github.com/istio/istio/issues/36419"
}
|
gharchive/issue
|
Tracing: Istio HTTP->gRPC span doesn't appear in trace.
Inspired by investigations done in https://github.com/istio/istio/issues/11696
Environment:
IstioGateway->Service-A(http)->Service-B(gRPC)
Ports in services are named properly.
Service A takes headers from the incoming HTTP request and adds them to the outgoing gRPC requests to Service B:
XRequestID: requestID,
XB3TraceID: getHeader(xB3TraceID),
XB3SpanID: getHeader(xB3SpanID),
XB3ParentSpanID: getHeader(xB3ParentSpanID),
XB3Sampled: getHeader(xB3Sampled),
XB3Flag: getHeader(xB3Flags),
B3: getHeader(b3),
In Jaeger, I only see the trace from IstioGateway to Service A but no span for ServiceA to ServiceB.
Proxy log format is:
accessLogFormat: "[%START_TIME%] \"%REQ(:METHOD)% %REQ(X-ENVOY-ORIGINAL-PATH?:PATH)% %PROTOCOL%\" %RESPONSE_CODE% %RESPONSE_FLAGS% %RESPONSE_CODE_DETAILS% %CONNECTION_TERMINATION_DETAILS% \"%UPSTREAM_TRANSPORT_FAILURE_REASON%\" %BYTES_RECEIVED% %BYTES_SENT% %DURATION% %RESP(X-ENVOY-UPSTREAM-SERVICE-TIME)% \"%REQ(X-FORWARDED-FOR)%\" \"%REQ(USER-AGENT)%\" \"%REQ(X-B3-SAMPLED)%\" \"%REQ(X-REQUEST-ID)%\" \"%REQ(X-B3-TRACEID)%\" \"%REQ(X-B3-SPANID)%\" \"%REQ(X-B3-PARENTSPANID)%\" \"%REQ(:AUTHORITY)%\" \"%UPSTREAM_HOST%\" %UPSTREAM_CLUSTER% %UPSTREAM_LOCAL_ADDRESS% %DOWNSTREAM_LOCAL_ADDRESS% %DOWNSTREAM_REMOTE_ADDRESS% %REQUESTED_SERVER_NAME% %ROUTE_NAME%\n"
The logs from the proxies:
InressGateway:
[2021-12-07T21:26:50.090Z] "POST /init HTTP/1.1" 200 - via_upstream - "-" 828 755 47 47 "<IP_ADDRESS>, <IP_ADDRESS>,<IP_ADDRESS>" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.45 Safari/537.36" "1" "303cef67-e3cb-9ff9-80aa-614c19d502b3" "b8b2896792ea92498a84596ac59bb845" "8a84596ac59bb845" "-" "gs1-gs-x.domain.tld" "<IP_ADDRESS>:8005" outbound|80||gs1-api-game-server.d01.svc.cluster.local <IP_ADDRESS>:46938 <IP_ADDRESS>:80 <IP_ADDRESS>:49843 - play-for-real
ServiceA Incoming
[2021-12-07T21:26:50.091Z] "POST /init HTTP/1.1" 200 - via_upstream - "-" 828 755 46 45 "<IP_ADDRESS>, <IP_ADDRESS>,<IP_ADDRESS>" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.45 Safari/537.36" "1" "303cef67-e3cb-9ff9-80aa-614c19d502b3" "b8b2896792ea92498a84596ac59bb845" "7845ffbbf6266607" "8a84596ac59bb845" "gs1-gs-x.domain.tld" "<IP_ADDRESS>:8005" inbound|8005|| <IP_ADDRESS>:52327 <IP_ADDRESS>:8005 <IP_ADDRESS>:0 outbound_.80_._.gs1-api-game-server.d01.svc.cluster.local default
ServiceA Outgoing
[2021-12-07T21:26:50.095Z] "POST /core_service_v1.CoreService/GetBalance HTTP/2" 200 - via_upstream - "-" 55 17 18 18 "-" "grpc-go/1.36.1" "1" "303cef67-e3cb-9ff9-80aa-614c19d502b3" "b8b2896792ea92498a84596ac59bb845" "7845ffbbf6266607" "8a84596ac59bb845" "gs1-api-core:50101" "<IP_ADDRESS>:8007" outbound|50101||gs1-api-core.d01.svc.cluster.local <IP_ADDRESS>:49578 <IP_ADDRESS>:50101 <IP_ADDRESS>:45616 - default
ServiceB Incoming
[2021-12-07T21:26:50.096Z] "POST /core_service_v1.CoreService/GetBalance HTTP/2" 200 - via_upstream - "-" 55 17 15 14 "-" "grpc-go/1.36.1" "1" "303cef67-e3cb-9ff9-80aa-614c19d502b3" "b8b2896792ea92498a84596ac59bb845" "7845ffbbf6266607" "8a84596ac59bb845" "gs1-api-core:50101" "<IP_ADDRESS>:8007" inbound|8007|| <IP_ADDRESS>:53567 <IP_ADDRESS>:8007 <IP_ADDRESS>:49578 outbound_.50101_._.gs1-api-core.d01.svc.cluster.local default
Based on those proxy logs, it seems that spanId and parentSpanId are not replaced by Envoy and are propagated "as is" down to ServiceB.
As an alternative experiment, I called Service B via grpcurl from the ServiceA pod:
./grpcurl -rpc-header 'x-b3-sampled: 1' -rpc-header 'x-b3-traceid: ebf7561051ac6433952010cd9cac40d3' -rpc-header 'x-b3-parentspanid: 297e3236182ca725' -rpc-header 'x-b3-spanid: 952010cd9cacdddd' -rpc-header 'x-request-id: a48ace99-0d2f-9fa9-9aad-83d5fe03c579' -plaintext -proto core_service_v1.proto gs1-api-core:50101 core_service_v1.CoreService/GetBalance
ServiceA outgoing:
[2021-12-07T21:37:57.691Z] "POST /core_service_v1.CoreService/GetBalance HTTP/2" 200 - via_upstream - "-" 63 17 29 29 "-" "grpcurl/v1.8.5 grpc-go/1.37.0" "1" "a48ace99-0d2f-9fa9-9aad-83d5fe03c579" "ebf7561051ac6433952010cd9cac40d3" "7fc4a980a41d3edf" "952010cd9cacdddd" "gs1-api-core:50101" "<IP_ADDRESS>:8007" outbound|50101||gs1-api-core.d01.svc.cluster.local <IP_ADDRESS>:54452 <IP_ADDRESS>:50101 <IP_ADDRESS>:38614 - default
ServiceB Incoming:
[2021-12-07T21:37:57.692Z] "POST /core_service_v1.CoreService/GetBalance HTTP/2" 200 - via_upstream - "-" 63 17 28 27 "-" "grpcurl/v1.8.5 grpc-go/1.37.0" "1" "a48ace99-0d2f-9fa9-9aad-83d5fe03c579" "ebf7561051ac6433952010cd9cac40d3" "cbc05a179e71686a" "7fc4a980a41d3edf" "gs1-api-core:50101" "<IP_ADDRESS>:8007" inbound|8007|| <IP_ADDRESS>:42243 <IP_ADDRESS>:8007 <IP_ADDRESS>:54452 outbound_.50101_._.gs1-api-core.d01.svc.cluster.local default
As you can see, in this case, spans were replaced properly.
Debug log with grpc metadata from the application ServiceB when called by ServiceA:
2021/12/07 21:57:25 gRPC server metadata x-b3-flags:[];x-forwarded-proto:[http];:authority:[gs1-api-core:50101];user-agent:[grpc-go/1.36.1];x-b3-traceid:[3ffc9fb9cb04232b3a0ec68e6cc286ff];x-b3-spanid:[ffa5bc3f75abc782];x-b3-parentspanid:[3a0ec68e6cc286ff];x-forwarded-client-cert:[By=spiffe://cluster.local/ns/d01/sa/gs1-api-core;Hash=c0306f7205d9ac84e9e5b312c942b9a3465da206292ff0c7c6a4f635ce15ae91;Subject="";URI=spiffe://cluster.local/ns/d01/sa/gs1-api-game-server];content-type:[application/grpc];x-request-id:[d9d13ed6-bb8b-9123-a75a-fe0e456a350c];x-b3-sampled:[1];b3:[];x-envoy-attempt-count:[1];
Debug log with grpc metadata from the application ServiceB when called by GRPCcurl:
2021/12/07 21:58:40 gRPC server metadata content-type:[application/grpc];x-envoy-attempt-count:[1];x-b3-traceid:[ebf7561051ac6433952010cd9cac40d3];x-b3-parentspanid:[c846478c2b88b686];x-b3-sampled:[1];:authority:[gs1-api-core:50101];user-agent:[grpcurl/v1.8.5 grpc-go/1.37.0];x-request-id:[a48ace99-0d2f-9fa9-9aad-83d5fe03c579];x-forwarded-proto:[http];x-forwarded-client-cert:[By=spiffe://cluster.local/ns/d01/sa/gs1-api-core;Hash=c0306f7205d9ac84e9e5b312c942b9a3465da206292ff0c7c6a4f635ce15ae91;Subject="";URI=spiffe://cluster.local/ns/d01/sa/gs1-api-game-server];x-b3-spanid:[817e1c7f84e7f9ec];
Istio installed via helm version 1.12.0 with the following settings:
global:
tracer:
zipkin:
address: jaeger-collector.istio-system:9411
meshConfig:
accessLogFile: /dev/stdout
accessLogFormat: |
[%START_TIME%] "%REQ(:METHOD)% %REQ(X-ENVOY-ORIGINAL-PATH?:PATH)% %PROTOCOL%" %RESPONSE_CODE% %RESPONSE_FLAGS% %RESPONSE_CODE_DETAILS% %CONNECTION_TERMINATION_DETAILS% "%UPSTREAM_TRANSPORT_FAILURE_REASON%" %BYTES_RECEIVED% %BYTES_SENT% %DURATION% %RESP(X-ENVOY-UPSTREAM-SERVICE-TIME)% "%REQ(X-FORWARDED-FOR)%" "%REQ(USER-AGENT)%" "%REQ(X-B3-SAMPLED)%" "%REQ(X-REQUEST-ID)%" "%REQ(X-B3-TRACEID)%" "%REQ(X-B3-SPANID)%" "%REQ(X-B3-PARENTSPANID)%" "%REQ(:AUTHORITY)%" "%UPSTREAM_HOST%" %UPSTREAM_CLUSTER% %UPSTREAM_LOCAL_ADDRESS% %DOWNSTREAM_LOCAL_ADDRESS% %DOWNSTREAM_REMOTE_ADDRESS% %REQUESTED_SERVER_NAME% %ROUTE_NAME%
pilot:
traceSampling: 100
docker.io/istio/proxyv2:1.12.0
docker.io/istio/pilot:1.12.0
jaegertracing/all-in-one:1.28.0
At this point I am lost, and I am not sure why Istio is not taking care of the gRPC part of the trace and doesn't replace the headers.
This looks weird. Can you enable the service B proxy trace log with istioctl proxy-config log POD --level=trace, and capture the log of the request from ingress and of the grpcurl directly from service A? I want to compare the difference between those two requests, which might explain why the service B proxy does not initiate a span.
@bianpengyuan thanks for a quick response. Here we go:
Ingress Log:
[2021-12-08T08:53:58.386Z] "POST /init HTTP/1.1" 200 - via_upstream - "-"<PHONE_NUMBER> 97 "<IP_ADDRESS>, <IP_ADDRESS>,<IP_ADDRESS>" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.45 Safari/537.36" "1" "b5c688cd-0846-9cab-9432-fe6704aa7a3c" "29639b27a30880e5ff53863e6d6b6bab" "ff53863e6d6b6bab" "-" "gs1-gs-x.domain.tld" "<IP_ADDRESS>:8005" outbound|80||gs1-api-game-server.d01.svc.cluster.local <IP_ADDRESS>:59276 <IP_ADDRESS>:80 <IP_ADDRESS>:51409 - play-for-real
ServiceA:
[2021-12-08T08:51:12.838Z] "POST /init HTTP/1.1" 200 - via_upstream - "-" 828 754 708 694 "<IP_ADDRESS>, <IP_ADDRESS>,<IP_ADDRESS>" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.45 Safari/537.36" "1" "86dbe583-64d9-982f-abdf-90ea5b0411fa" "e70ae8c0491b8e46cb7909086c8a732a" "369bafcd44cd96f7" "cb7909086c8a732a" "gs1-gs-x.domain.tld" "<IP_ADDRESS>:8005" inbound|8005|| <IP_ADDRESS>:36657 <IP_ADDRESS>:8005 <IP_ADDRESS>:0 outbound_.80_._.gs1-api-game-server.d01.svc.cluster.local default
[2021-12-08T08:51:13.036Z] "POST /core_service_v1.CoreService/GetBalance HTTP/2" 200 - via_upstream - "-" 55 17 58 58 "-" "grpc-go/1.36.1" "1" "86dbe583-64d9-982f-abdf-90ea5b0411fa" "e70ae8c0491b8e46cb7909086c8a732a" "369bafcd44cd96f7" "cb7909086c8a732a" "gs1-api-core:50101" "<IP_ADDRESS>:8007" outbound|50101||gs1-api-core.d01.svc.cluster.local <IP_ADDRESS>:37150 <IP_ADDRESS>:50101 <IP_ADDRESS>:50170 - default
ServiceB log attached.
ServiceB-proxy.log
Same with CurlgRPC:
ServiceA log:
[2021-12-08T09:07:21.369Z] "POST /core_service_v1.CoreService/GetBalance HTTP/2" 200 - via_upstream - "-" 63 17 34 30 "-" "grpcurl/v1.8.5 grpc-go/1.37.0" "1" "a48ace99-0d2f-9fa9-9aad-83d5fe03c579" "ebf7561051ac6433952010cd9cac40d3" "222d8415b7cc1280" "952010cd9cacdddd" "gs1-api-core:50101" "<IP_ADDRESS>:8007" outbound|50101||gs1-api-core.d01.svc.cluster.local <IP_ADDRESS>:36550 <IP_ADDRESS>:50101 <IP_ADDRESS>:53974 - default
Service B log attached.
ServiceB-curlgrpc.log
It seems it is fixed by removing the empty b3 and b3-flags headers. If I don't propagate them from ServiceA to ServiceB, the trace works and glues together. Any idea why? I remember advice from #11696 not to propagate empty headers.
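A minimal sketch of that fix on the Service A side, assuming a grpc-go client (the helper name is hypothetical): copy trace headers from the incoming context to the outgoing one, but drop empty values, since an empty b3 or x-b3-flags header breaks the trace context:

```go
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc/metadata"
)

var traceHeaders = []string{
	"x-request-id", "x-b3-traceid", "x-b3-spanid",
	"x-b3-parentspanid", "x-b3-sampled", "x-b3-flags", "b3",
}

// propagateTrace copies B3 trace headers from the incoming request context to
// the outgoing gRPC context, skipping empty values entirely.
func propagateTrace(ctx context.Context) context.Context {
	in, ok := metadata.FromIncomingContext(ctx)
	if !ok {
		return ctx
	}
	out := metadata.MD{}
	for _, h := range traceHeaders {
		if vals := in.Get(h); len(vals) > 0 && vals[0] != "" { // drop empty headers
			out.Set(h, vals[0])
		}
	}
	return metadata.NewOutgoingContext(ctx, out)
}

func main() {
	// The incoming request carried an empty "b3" header, as in the logs above.
	ctx := metadata.NewIncomingContext(context.Background(),
		metadata.Pairs("x-b3-traceid", "ebf7561051ac6433952010cd9cac40d3", "b3", ""))
	out, _ := metadata.FromOutgoingContext(propagateTrace(ctx))
	fmt.Println(out) // only x-b3-traceid survives; the empty b3 is dropped
}
```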
Ah, b3 and b3-flags actually imply a sampling decision: https://github.com/openzipkin/b3-propagation#debug-flag and the Envoy docs: https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_conn_man/headers#config-http-conn-man-headers-x-b3-flags
So the root cause is that empty b3 headers break the trace context?
Yes, it seems so. Is it expected behaviour?
So the root cause is that empty b3 headers break the trace context?
Yes.
Yes, it seems so. Is it expected behaviour?
Yes. Close since this is expected. Thanks!
|
2025-04-01T06:39:06.805724
| 2023-06-16T19:26:28
|
1761202322
|
{
"authors": [
"istio-testing",
"lei-tang",
"zirain"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7111",
"repo": "istio/istio",
"url": "https://github.com/istio/istio/issues/45527"
}
|
gharchive/issue
|
[release-1.17] improve accesslog mode e2e tests
Manual cherrypick required.
#45504 failed to apply on top of branch "release-1.17":
Applying: improve accesslog mode e2e tests
Applying: retry
Using index info to reconstruct a base tree...
M tests/integration/telemetry/api/accesslogs_test.go
Falling back to patching base and 3-way merge...
Auto-merging tests/integration/telemetry/api/accesslogs_test.go
CONFLICT (content): Merge conflict in tests/integration/telemetry/api/accesslogs_test.go
error: Failed to merge in the changes.
hint: Use 'git am --show-current-patch=diff' to see the failed patch
Patch failed at 0002 retry
When you have resolved this problem, run "git am --continue".
If you prefer to skip this patch, run "git am --skip" instead.
To restore the original branch and stop patching, run "git am --abort".
https://github.com/istio/istio/pull/45532 fixes this issue.
duplicated
|
2025-04-01T06:39:06.810266
| 2023-11-09T12:29:22
|
1985522629
|
{
"authors": [
"Killgore87",
"hzxuzhonghu"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7112",
"repo": "istio/istio",
"url": "https://github.com/istio/istio/issues/47802"
}
|
gharchive/issue
|
VM Sidecar drop connections on large number of them
Is this the right place to submit this?
[X] This is not a security vulnerability or a crashing bug
[X] This is not a question about how to use Istio
Bug Description
We installed sidecar on a virtual machine with MySQL. Our architecture requires a large number of connections to the database (up to 10,000)
But when we reach 1700-2000 connections, we can no longer connect and get errors like this
2023-11-03T10:23:44.119694Z critical envoy backtrace external/envoy/source/server/backtrace.h:104 Caught Aborted, suspect faulting address 0x3dd000368cc thread=223451
2023-11-03T10:23:44.119762Z critical envoy backtrace external/envoy/source/server/backtrace.h:91 Backtrace (use tools/stack_decode.py to get line numbers): thread=223451
2023-11-03T10:23:44.119802Z critical envoy backtrace external/envoy/source/server/backtrace.h:92 Envoy version: a1ff538a63890e27dd2add4b2680ba8dc49293ca/1.27.1-dev/Clean/RELEASE/BoringSSL thread=223451
2023-11-03T10:23:44.119551Z critical envoy assert external/envoy/source/common/network/socket_interface_impl.cc:72 assert failure: SOCKET_VALID(result.return_value_). Details: socket(2) failed, got error: Too many open filesthread=223446
ulimit -n
40000
Version
version 1.19.3
OS rocky linux 9
Additional Information
No response
There is a limit in your os, try to check ulimit -a
open files (-n) 40000
40000 limits all the processes in the OS
I increased the limit and the error disappeared, but the problem did not
ulimit -a
real-time non-blocking time (microseconds, -R) unlimited
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 58746
max locked memory (kbytes, -l) 8192
max memory size (kbytes, -m) unlimited
open files (-n) 524288
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 58746
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
[root@db1-test ~]# lsof | wc -l
347505
[root@db1-test ~]# netstat -atn|wc -l
1613
IIRC, in Istio we set the connection pool max connections to MaxUint32, so there is no connection number limit. But there is a max idle time per idle connection. So check whether this idle timeout causes connections to close, or whether there is a connection limit in the MySQL settings.
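As a side note, it can help to check the limit the proxy process actually sees rather than the shell's ulimit, since systemd units and container runtimes can override it. A minimal Go sketch for Linux using only the standard library:

package main

import (
	"fmt"
	"syscall"
)

func main() {
	// RLIMIT_NOFILE is the per-process open file descriptor limit that
	// socket(2) hits when Envoy logs "Too many open files".
	var rl syscall.Rlimit
	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rl); err != nil {
		panic(err)
	}
	fmt.Printf("open files: soft=%d hard=%d\n", rl.Cur, rl.Max)
}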
|
2025-04-01T06:39:06.831144
| 2017-11-15T00:38:24
|
273994091
|
{
"authors": [
"apicl",
"douglas-reid",
"geeknoid",
"istio-testing",
"manlinl"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7113",
"repo": "istio/istio",
"url": "https://github.com/istio/istio/pull/1726"
}
|
gharchive/pull-request
|
Add api key
What this PR does / why we need it:
Which issue this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close that issue when PR gets merged): fixes #
Special notes for your reviewer:
First add the api attributes into install/kubernetes/templates/istio-mixer.yaml.tmpl, then run ./install/updateVersion.sh
Release note:
NONE
@apicl: The following test failed, say /retest to rerun them all:
Test name: prow/istio-presubmit.sh; Commit: 43671b10b3e54f9d9647af6959a728331856dabc; Details: link; Rerun command: /test istio-presubmit
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
I don't know the purpose of any of these files, so I'll pass on reviewing this :-)
/lgtm
PTAL
/lgtm
@apicl: The following test failed, say /retest to rerun them all:
Test name: prow/istio-presubmit.sh; Commit: 65b5d1e2c32e830151e0cdf3af46d1adb189f445; Details: link; Rerun command: /test istio-presubmit
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
I'd like someone familiar with all these files to comment first before approving...
/assign @costinm
/lgtm
/retest
/lgtm
@apicl: The following test failed, say /retest to rerun them all:
Test name: prow/istio-presubmit.sh; Commit: e6a10b2acad1c99a847b07c04c8fdab92e699266; Details: link; Rerun command: /test istio-presubmit
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
/retest
@apicl: The following test failed, say /retest to rerun them all:
Test name: prow/istio-presubmit.sh; Commit: e6a10b2acad1c99a847b07c04c8fdab92e699266; Details: link; Rerun command: /test istio-presubmit
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
/retest
@apicl: The following test failed, say /retest to rerun them all:
Test name: prow/istio-presubmit.sh; Commit: e6a10b2acad1c99a847b07c04c8fdab92e699266; Details: link; Rerun command: /test istio-presubmit
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
/lgtm
|
2025-04-01T06:39:06.838797
| 2020-04-29T01:01:14
|
608707848
|
{
"authors": [
"brian-avery",
"istio-testing",
"jacob-delgado",
"rshriram"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7114",
"repo": "istio/istio",
"url": "https://github.com/istio/istio/pull/23361"
}
|
gharchive/pull-request
|
[release-1.6] unit tests for ServiceEntry update fix
This is an automated cherry-pick of #23354
@googlebot I consent
@istio-testing: The following test failed, say /retest to rerun all failed tests:
Test name: unit-tests_istio_release-1.6; Commit: b2da85f3a9f1f16e78a094331c4f4af008b1e474; Details: link; Rerun command: /test unit-tests_istio_release-1.6
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
/retest
@istio-testing: The following test failed, say /retest to rerun all failed tests:
Test name: unit-tests_istio_release-1.6; Commit: b2da85f3a9f1f16e78a094331c4f4af008b1e474; Details: link; Rerun command: /test unit-tests_istio_release-1.6
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
/retest
|
2025-04-01T06:39:06.840566
| 2020-05-09T18:57:56
|
615239652
|
{
"authors": [
"istio-testing",
"jmazzitelli"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7115",
"repo": "istio/istio",
"url": "https://github.com/istio/istio/pull/23678"
}
|
gharchive/pull-request
|
[kiali] pilot version endpoint is gone, use the new istiod replacement
fixes: https://github.com/istio/istio/issues/23677
pilot service version endpoint is gone in 1.6 - replaced with analogous version endpoint in the istiod service. Kiali needs to switch to using the new istiod endpoint.
In response to a cherrypick label: new pull request created: #23679
|
2025-04-01T06:39:06.850487
| 2020-09-24T23:23:45
|
708532760
|
{
"authors": [
"howardjohn",
"istio-testing",
"stevenctl"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7116",
"repo": "istio/istio",
"url": "https://github.com/istio/istio/pull/27531"
}
|
gharchive/pull-request
|
multicluster registries respect endpointmode
Currently only the "main" registry has the endpoint mode set. This adds that for multicluster remote registries.
[ ] Configuration Infrastructure
[ ] Docs
[ ] Installation
[X] Networking
[ ] Performance and Scalability
[ ] Policies and Telemetry
[ ] Security
[ ] Test and Release
[ ] User Experience
[ ] Developer Infrastructure
Pull Request Attributes
Please check any characteristics that apply to this pull request.
[ ] Does not have any changes that may affect Istio users.
@stevenctl: The following test failed, say /retest to rerun all failed tests:
Test name: release-notes_istio; Commit: 88e9071cedfa2e10ef923be70372e21abe041875; Details: link; Rerun command: /test release-notes_istio
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
/retest
@howardjohn I actually don't want this to merge yet. I have a strong feeling the endpointslice tests will fail. I saw that happening in https://github.com/istio/istio/pull/27493 even with this change. That PR is just to get the pilot/mc tests green so we get to the point where that's blocking. I've had it passing a few times now consistently to have it break later when incompatible thing are added.
Looks like ingress gateways in remote clusters fail to get discovery:
https://prow.istio.io/view/gs/istio-prow/pr-logs/pull/istio_istio/27531/integ-pilot-multicluster-tests_istio/1605
@stevenctl: The following test failed, say /retest to rerun all failed tests:
Test name: gencheck_istio; Commit: 8d64296bee64befb70fdd4366186d8aa470ac273; Details: link; Rerun command: /test gencheck_istio
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
|
2025-04-01T06:39:06.858666
| 2021-02-17T01:59:26
|
809788022
|
{
"authors": [
"incfly",
"istio-testing",
"yangminzhu"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7117",
"repo": "istio/istio",
"url": "https://github.com/istio/istio/pull/30878"
}
|
gharchive/pull-request
|
ext-authz: return error for invalid extension provider only when it is being used
Fixes #30824
If some extension provider is not used by any authz policy, it should not cause errors even if it might be invalid.
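The idea, roughly: collect the provider names actually referenced by authorization policies and validate only those. A minimal sketch of that shape; the AuthzPolicy and Provider types below are illustrative stand-ins, not Istio's actual types:

package authz

import "fmt"

// AuthzPolicy and Provider are illustrative stand-ins for the real types.
type AuthzPolicy struct{ ProviderName string }

type Provider interface{ Validate() error }

// validateUsedProviders validates only extension providers referenced by at
// least one authorization policy, so an unused but invalid provider no
// longer emits errors on every push.
func validateUsedProviders(policies []AuthzPolicy, providers map[string]Provider) error {
	used := map[string]bool{}
	for _, p := range policies {
		if p.ProviderName != "" {
			used[p.ProviderName] = true
		}
	}
	for name := range used {
		prov, ok := providers[name]
		if !ok {
			return fmt.Errorf("authorization policy references unknown provider %q", name)
		}
		if err := prov.Validate(); err != nil {
			return fmt.Errorf("provider %q: %w", name, err)
		}
	}
	return nil
}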
[ ] Configuration Infrastructure
[ ] Docs
[ ] Installation
[ ] Networking
[ ] Performance and Scalability
[ ] Policies and Telemetry
[x] Security
[ ] Test and Release
[ ] User Experience
[ ] Developer Infrastructure
Pull Request Attributes
Please check any characteristics that apply to this pull request.
[x] Does not have any changes that may affect Istio users.
@yangminzhu: The following test failed, say /retest to rerun all failed tests:
Test name: integ-pilot-k8s-tests_istio; Commit: b19a0db87b840a6e6b4e590eaddd346d079a26c9; Details: link; Rerun command: /test integ-pilot-k8s-tests_istio
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
/test integ-pilot-k8s-tests_istio
is the error in the original issue coming from the validation webhook when applying the meshconfig updates? I didn't see other unit test updates other than validation-test.go
is the error in the original issue coming from the validation webhook when applying the meshconfig updates? I didn't see other unit test updates other than validation-test.go
@incfly not from the validation webhook but directly from the authz plugin in Pilot; it emits the error every time it is called when pushing xDS configs to the proxy.
In response to a cherrypick label: new pull request created: #30959
|
2025-04-01T06:39:06.866245
| 2021-08-06T20:48:29
|
963036833
|
{
"authors": [
"AdamKorcz",
"hzxuzhonghu",
"istio-testing"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7118",
"repo": "istio/istio",
"url": "https://github.com/istio/istio/pull/34567"
}
|
gharchive/pull-request
|
Fuzzing: Add bootstrap fuzzer
Adds fuzzer that creates a fuzzed bootstrap.PilotArgs{} and passes it on to bootstrap.NewServer()
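For reviewers unfamiliar with the go-fuzz-headers flow, the fuzzer is roughly of this shape (a simplified sketch; the exact NewServer signature in pilot/pkg/bootstrap may differ):

//go:build gofuzz

package bootstrap

import fuzz "github.com/AdaLogics/go-fuzz-headers"

// Fuzz fills a PilotArgs with fuzzer-provided bytes and hands it to
// NewServer. Returning 1 tells the engine the input was interesting.
func Fuzz(data []byte) int {
	f := fuzz.NewConsumer(data)
	args := &PilotArgs{}
	if err := f.GenerateStruct(args); err != nil {
		return 0
	}
	if _, err := NewServer(args); err != nil {
		return 0
	}
	return 1
}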
To help us figure out who should review this PR, please put an X in all the areas that this PR affects.
[ ] Configuration Infrastructure
[ ] Docs
[ ] Installation
[ ] Networking
[x] Performance and Scalability
[ ] Policies and Telemetry
[ ] Security
[x] Test and Release
[ ] User Experience
[ ] Developer Infrastructure
Please check any characteristics that apply to this pull request.
[x] Does not have any user-facing changes. This may include CLI changes, API changes, behavior changes, performance improvements, etc.
Hi @AdamKorcz. Thanks for your PR.
I'm waiting for a istio member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.
Once the patch is verified, the new status will be reflected by the ok-to-test label.
I understand the commands that are listed here.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/ok-to-test
@AdamKorcz: PR needs rebase.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
|
2025-04-01T06:39:09.007335
| 2023-07-31T17:45:43
|
1829733284
|
{
"authors": [
"ericvn",
"istio-testing"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7119",
"repo": "istio/istio",
"url": "https://github.com/istio/istio/pull/46248"
}
|
gharchive/pull-request
|
Automator: update proxy@master in istio/istio@master
Generated by Automator - 2023-07-31T17:20:18+00:00
@istio-testing: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:
Test name: unit-tests_istio; Commit: 79bcba59f9e34055746917337b9ab69ffce4b646; Details: link; Required: true; Rerun command: /test unit-tests
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
/test unit-tests
|
2025-04-01T06:39:09.036383
| 2023-10-16T16:24:05
|
1945635275
|
{
"authors": [
"costinm",
"hzxuzhonghu",
"istio-testing",
"ramaraochavali"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7120",
"repo": "istio/istio",
"url": "https://github.com/istio/istio/pull/47376"
}
|
gharchive/pull-request
|
move stateful set filter to per route filter
This PR
Moves the StatefulSession filter to per-route instead of per-virtual-host
Decides based on the route destination, i.e. if the route destination service has labels attached, this filter will be added. Without this, the stateful session filter won't be generated for virtual hosts that are generated based on virtual service hosts.
Added tests to cover these flows
[ ] Ambient
[ ] Configuration Infrastructure
[ ] Docs
[ ] Dual Stack
[ ] Installation
[ ] Networking
[ ] Performance and Scalability
[ ] Policies and Telemetry
[ ] Security
[ ] Test and Release
[ ] User Experience
[ ] Developer Infrastructure
@ramaraochavali: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:
Test name: release-notes_istio; Commit: 511ef4384c927680edce24053bd559a8cc2b9f61; Details: link; Required: true; Rerun command: /test release-notes
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
@ramaraochavali: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:
Test name: release-notes_istio; Commit: 511ef4384c927680edce24053bd559a8cc2b9f61; Details: link; Required: true; Rerun command: /test release-notes
Test name: unit-tests-arm64_istio; Commit: 511ef4384c927680edce24053bd559a8cc2b9f61; Details: link; Required: true; Rerun command: /test unit-tests-arm64
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
@ramaraochavali: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:
Test name: release-notes_istio; Commit: 511ef4384c927680edce24053bd559a8cc2b9f61; Details: link; Required: true; Rerun command: /test release-notes
Test name: unit-tests-arm64_istio; Commit: 511ef4384c927680edce24053bd559a8cc2b9f61; Details: link; Required: true; Rerun command: /test unit-tests-arm64
Test name: unit-tests_istio; Commit: 511ef4384c927680edce24053bd559a8cc2b9f61; Details: link; Required: true; Rerun command: /test unit-tests
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
@ramaraochavali: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:
Test name: unit-tests-arm64_istio; Commit: 511ef4384c927680edce24053bd559a8cc2b9f61; Details: link; Required: true; Rerun command: /test unit-tests-arm64
Test name: unit-tests_istio; Commit: 511ef4384c927680edce24053bd559a8cc2b9f61; Details: link; Required: true; Rerun command: /test unit-tests
Test name: release-notes_istio; Commit: 96acff9e07bd4bfbbced0a4fb3f1dcfefdf33ad3; Details: link; Required: true; Rerun command: /test release-notes
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
/test release-notes
OK to approve it after we clarify the semantics and backward-compat impact; may be best to review alongside the release notes.
@costinm retained the old behaviour for BC and added comments. PTAL
@costinm Can you PTAL?
@costinm @hzxuzhonghu @howardjohn Can you PTAL when you get chance?
When setting all route filters with the same stateful session, and also the same stateful session at the virtualHost level, isn't it equal to the VirtualHost-level config before?
The earlier solution does not add the stateful session filter for all "hosts" defined on a virtual service, and hence it does not apply to all routes. Please see the test case
I know that; what I mean is that when the vs hosts equal the service hostname, it will apply the same filter at both the virtualHost and per-route level. Is that redundant?
I know that; what I mean is that when the vs hosts equal the service hostname, it will apply the same filter at both the virtualHost and per-route level. Is that redundant?
It is but does not make any behavioural difference. Long term, for us it would be better to move to per route.
@hzxuzhonghu Can you PTAL when you get chance?
Last question will per route filter take precedence over virtualhost one?
I am asking because I think if a user sets up a similar case, like different matchings applying different headers but all routing to the same service, then maybe they only need to apply the filter per virtualHost
Yes, per-route takes precedence over per-virtual-host. I think those are cases which cannot easily be solved with a simple env var. We need a proper API
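To make the precedence concrete, this is conceptually how the effective config is resolved (a sketch, not the actual Envoy code; filter configs are anypb values keyed by filter name):

package example

import "google.golang.org/protobuf/types/known/anypb"

// effectiveFilterConfig mirrors how Envoy resolves typed_per_filter_config:
// a route-level entry overrides the virtual-host-level entry for the same
// filter name; otherwise the virtual-host entry (possibly nil) applies.
func effectiveFilterConfig(route, vhost map[string]*anypb.Any, filterName string) *anypb.Any {
	if cfg, ok := route[filterName]; ok {
		return cfg
	}
	return vhost[filterName]
}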
@hzxuzhonghu @costinm gentle ping Can you PTAL?
I am concerned that this may not be the eventual ideal prototype.
SVC A -> Destination with SVCB
-> Destination with SVCC
-> Destination with SVCD; here we would choose one of B's, C's, or D's affinity policies to apply.
here we would choose one of B's, C's, or D's affinity policies to apply.
Not exactly. It is based on your destination. If your destination is B - we use B's affinity policy for that route which makes sense to me.
SVC A's route matching xxx -> SVCB (40%)
-> SVCC (30%)
-> SVCD (30%)
In this example, it may choose B's, C's, or D's affinity policy to apply.
In this example, it may choose B's, C's, or D's affinity policy to apply.
Correct. It is tricky to configure for weighted destinations pointing to different destinations. As I said, the most common use case is the same destination with different versions.
Yes, it is not common. One case would probably be splitting some traffic to a canary release.
@hzxuzhonghu What do you suggest? IMO this is a decent fix that would solve some important use cases for us (except the weighted confusion, which we agree is a rare use case)
@hzxuzhonghu Ok. Changed. PTAL
Thanks
/lgtm
|
2025-04-01T06:39:09.044814
| 2024-12-28T02:57:09
|
2761407658
|
{
"authors": [
"istio-testing",
"my-git9"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7121",
"repo": "istio/istio",
"url": "https://github.com/istio/istio/pull/54489"
}
|
gharchive/pull-request
|
removed unused const
Please provide a description of this PR:
Removed some unused const
@my-git9: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:
Test name: lint_istio; Commit: fc127385cd3575f022432c5264299b8e094e1c4e; Details: link; Required: true; Rerun command: /test lint
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
@my-git9: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:
Test name: lint_istio; Commit: fc127385cd3575f022432c5264299b8e094e1c4e; Details: link; Required: true; Rerun command: /test lint
Test name: unit-tests-arm64_istio; Commit: 1cb15980e1d58fbb829da0594ed612444ba3f3d5; Details: link; Required: true; Rerun command: /test unit-tests-arm64
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
|
2025-04-01T06:39:09.046094
| 2017-08-03T17:35:58
|
247781298
|
{
"authors": [
"douglas-reid",
"guptasu"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7122",
"repo": "istio/mixer",
"url": "https://github.com/istio/mixer/issues/1002"
}
|
gharchive/issue
|
Port the existing e2e smoke test (may be others) to the new mixer adapter 0.2 model
Before we can delete existing code, we need to port the existing e2e tests (inside and outside the mixer repo) to use the new adapter model.
This can be closed. We should track the move of APAs in a 0.3 issue.
|
2025-04-01T06:39:09.048090
| 2017-07-12T00:35:21
|
242223365
|
{
"authors": [
"kyessenov",
"ldemailly"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7123",
"repo": "istio/pilot",
"url": "https://github.com/istio/pilot/issues/924"
}
|
gharchive/issue
|
File system config adapter
Provide an alternative file system based backend as a temporary solution for VM work.
This work will likely be superseded by one Galley adaptor to replace both FS and TPR adapters.
File system store.
Implement route rule representation as files in a directory hierarchy.
inotify watcher.
Implement config change watcher using inotify.
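A minimal sketch of the watcher half using fsnotify, which wraps inotify on Linux; the directory layout and callback shape are placeholders:

package filesystem

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

// Watch calls onChange with the affected path whenever a file under dir is
// created, written, or removed. It returns once the watch is registered.
func Watch(dir string, onChange func(path string)) error {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		return err
	}
	go func() {
		defer w.Close()
		for {
			select {
			case ev, ok := <-w.Events:
				if !ok {
					return
				}
				if ev.Op&(fsnotify.Create|fsnotify.Write|fsnotify.Remove) != 0 {
					onChange(ev.Name)
				}
			case err, ok := <-w.Errors:
				if !ok {
					return
				}
				log.Printf("config watch error: %v", err)
			}
		}
	}()
	return w.Add(dir)
}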
cc @gnirodi @costinm @rshriram @mandarjog
subscribe
closing as we're focusing on CRD backend first.
|
2025-04-01T06:39:09.079863
| 2024-08-14T07:20:47
|
2465089112
|
{
"authors": [
"devtobi"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7124",
"repo": "it-at-m/refarch-templates",
"url": "https://github.com/it-at-m/refarch-templates/issues/190"
}
|
gharchive/issue
|
[Documentation] Add README
Relevant documentation
Other
Problem description (optional)
Currently this project has no useful content in its README file
Desired solution
Add a README file with useful information.
Keep in mind our central documentation tool will be Vitepress; this means the README should only contain the most important information and link to the Vitepress documentation where useful.
Additional context (optional)
No response
No duplicate
[X] I confirm that this issue is not a duplicate
Code of Conduct
[X] I agree to follow this project's Code of Conduct
Maybe https://readme.so can help here? However, we need to discuss which parts of the documentation should only live in docs/ and which are also worth mentioning in the README.
I'm thinking, e.g., of things like LICENSE, contribution information, ...
|
2025-04-01T06:39:09.086263
| 2024-06-29T06:50:48
|
2381539413
|
{
"authors": [
"borntoshine1",
"liliaruda"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7125",
"repo": "ita-social-projects/GreenCity",
"url": "https://github.com/ita-social-projects/GreenCity/issues/7238"
}
|
gharchive/issue
|
[UBS AutoNotification] Links in autonotifications are not clickable in 'Messages' tab
Environment: Chrome 126.0.6478.127 (64-bit)
Reproducible: always
Build found: 29.06.24
Preconditions
Go to https://www.pick-up.city/ as client
Open 'My profile'
Click on 'Messages' tab
Client has notifications with links (such as The courier route formed, Violation of the rules, Pay for the change in the order, Unpaid Order) in 'Messages' tab
Steps to reproduce
Expand notification (by clicking on the up arrow)
Pay attention to the links in the message
Actual result
Links in notifications are not clickable in 'Messages' tab
Expected result
Links in notifications are clickable and active in 'Messages' tab
User story #2796
The link to the messenger is not clickable
https://github.com/user-attachments/assets/8a6697be-8a1a-4e7e-b229-074c2ff75d0c
|
2025-04-01T06:39:09.097802
| 2021-03-03T14:15:34
|
821149389
|
{
"authors": [
"go-ann",
"vitaliyhere"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7126",
"repo": "ita-social-projects/dokazovi-be",
"url": "https://github.com/ita-social-projects/dokazovi-be/issues/193"
}
|
gharchive/issue
|
As an Admin I want to see Edit button on Material page so that I could change material
As an Admin I want to see Edit button on Material page so that I could change material
Preconditions:
Registered User is logged in
Registered User has Admin role
Admin opened Material page
Button is displayed on every Material Page (Article, Post, Video etc.)
Pressing the button opens the Editing page with prefilled Material content.
Editing page contains all elements of the Creation page and the button «Відмінити редагування» ("Cancel editing").
Instead of «Опублікувати» ("Publish") the user sees «Зберегти і опублікувати» ("Save and publish"). The button functionality does not change.
For Article use page from US #122
For Note use page from US #123
For Video use page from US #124
After pressing the «Зберегти» ("Save") button, the user should confirm the decision in a modal window. Pressing "Так" ("Yes") saves the changes and displays the Material page. Pressing "Ні" ("No") closes the modal window; the user stays on the editing page.
After pressing the «Відмінити редагування» ("Cancel editing") button, the user should confirm the decision in a modal window. Pressing "Так" cancels the edit and displays the original Material page. Pressing "Ні" closes the modal window; the user stays on the editing page.
Assumption:
The Author and the creation date stay unchanged after saving.
AC:
The User has Admin role
Pressed button opens prefilled Editing Page
The Admin can edit the fields «Теми» (Напрямки) ("Topics (Directions)"), «Заголовок» ("Title"), «Текст статті» ("Article text"), «Текст картки матеріалу» ("Material card text").
Not accepted.
The editing page doesn't have the «Зберегти і опублікувати» ("Save and publish") button.
The UI of the editing page is outdated.
A modal window doesn't pop up after pressing the «Зберегти» button; the user is redirected to their Material page immediately.
The design of the modal window is outdated.
21/07/21
BA check.
Not accepted.
The editing page doesn't have the «Зберегти і опублікувати» button.
A modal window doesn't pop up after pressing the «Зберегти» button; the user is redirected to their Material page immediately.
The UI of the editing page and of the modal window were designed from our own mockups, so some visual details can still be changed or polished.
|
2025-04-01T06:39:09.110201
| 2023-10-26T11:18:09
|
1963289980
|
{
"authors": [
"JAPHETHNYARANGA",
"paul-nadola"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7127",
"repo": "italanta/elewa-website-html",
"url": "https://github.com/italanta/elewa-website-html/pull/332"
}
|
gharchive/pull-request
|
Project carousel#71
Description
Created the Carousel section of the home page
Fixes #71
Worked on this issue with @Schola-droid and @https://github.com/AzharAhmed-bot
Type of change
Please delete options that are not relevant.
[x] Bug fix (non-breaking change which fixes an issue)
[x] New feature (non-breaking change which adds functionality)
[x] Breaking change (fix or feature that would cause existing functionality to not work as expected)
[x] This change requires a documentation update
Screenshot (optional)
Desktop
Tablet
Phone
Checklist
[x] My code follows the style guidelines of this project
[x] I have performed a self-review of my code
[x] I have commented my code, particularly in hard-to-understand areas
[x] I have made corresponding changes to the documentation
[x] My changes generate no new warnings
Closing the branch as it is already done. This is really good work.
|
2025-04-01T06:39:09.128607
| 2022-02-13T17:58:20
|
1136214477
|
{
"authors": [
"zanardigit"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7128",
"repo": "italiaremote/awesome-italia-remote",
"url": "https://github.com/italiaremote/awesome-italia-remote/pull/2"
}
|
gharchive/pull-request
|
Add Refactory
Refactory is also fully remote, so here's my PR to add it. Thank you!
Thank you for this project!
|
2025-04-01T06:39:09.142685
| 2021-03-20T21:45:46
|
836917688
|
{
"authors": [
"itavero",
"jhm47"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7129",
"repo": "itavero/homebridge-z2m",
"url": "https://github.com/itavero/homebridge-z2m/issues/117"
}
|
gharchive/issue
|
Ubisys S2 (also applies to S1)
Device description
I'd like to have the Ubisys S2 switch supported. It is a mains-powered, wall-mounted, two-channel switch for loads, e.g. lights. The same goes for the S1 (difference: the S2 is two-channel, the S1 is one-channel; they produce the same error message).
Manufacturer website
Supported in Zigbee2MQTT?
The device is supported in Zigbee2MQTT. Same goes for S1. Interestingly the Ubisys D1 works flawlessly.
I am on the latest dev version, but I assume support for these has been around a while.
Device model / Exposes information
{
"date_code": "20191127-DE-FB0",
"definition": {
"description": "Power switch S2",
"exposes": [
{
"endpoint": "l1",
"features": [
{
"access": 7,
"description": "On/off state of the switch",
"endpoint": "l1",
"name": "state",
"property": "state_l1",
"type": "binary",
"value_off": "OFF",
"value_on": "ON",
"value_toggle": "TOGGLE"
}
],
"type": "switch"
},
{
"endpoint": "l2",
"features": [
{
"access": 7,
"description": "On/off state of the switch",
"endpoint": "l2",
"name": "state",
"property": "state_l2",
"type": "binary",
"value_off": "OFF",
"value_on": "ON",
"value_toggle": "TOGGLE"
}
],
"type": "switch"
},
{
"access": 5,
"description": "Instantaneous measured power",
"endpoint": "meter",
"name": "power",
"property": "power",
"type": "numeric",
"unit": "W"
},
{
"access": 1,
"description": "Triggered action (e.g. a button click)",
"name": "action",
"property": "action",
"type": "enum",
"values": [
"toggle_s1",
"toggle_s2",
"on_s1",
"on_s2",
"off_s1",
"off_s2",
"recall_*_s1",
"recal_*_s2",
"brightness_move_up_s1",
"brightness_move_up_s2",
"brightness_move_down_s1",
"brightness_move_down_s2",
"brightness_stop_s1",
"brightness_stop_s2"
]
},
{
"access": 1,
"description": "Link quality (signal strength)",
"name": "linkquality",
"property": "linkquality",
"type": "numeric",
"unit": "lqi",
"value_max": 255,
"value_min": 0
}
],
"model": "S2",
"supports_ota": true,
"vendor": "Ubisys"
},
"endpoints": {
"1": {
"bindings": [
{
"cluster": "genOnOff",
"target": {
"endpoint": 1,
"ieee_address": "0x00124b0021b7788c",
"type": "endpoint"
}
}
],
"clusters": {
"input": [
"genBasic",
"genIdentify",
"genGroups",
"genScenes",
"genOnOff"
],
"output": []
},
"configured_reportings": [
{
"attribute": "onOff",
"cluster": "genOnOff",
"maximum_report_interval": 300,
"minimum_report_interval": 0,
"reportable_change": 0
}
]
},
"2": {
"bindings": [
{
"cluster": "genOnOff",
"target": {
"endpoint": 1,
"ieee_address": "0x00124b0021b7788c",
"type": "endpoint"
}
}
],
"clusters": {
"input": [
"genBasic",
"genIdentify",
"genGroups",
"genScenes",
"genOnOff"
],
"output": []
},
"configured_reportings": [
{
"attribute": "onOff",
"cluster": "genOnOff",
"maximum_report_interval": 300,
"minimum_report_interval": 0,
"reportable_change": 0
}
]
},
"3": {
"bindings": [
{
"cluster": "genOnOff",
"target": {
"endpoint": 1,
"ieee_address": "0x001fee00000058d9",
"type": "endpoint"
}
}
],
"clusters": {
"input": [
"genBasic",
"genIdentify"
],
"output": [
"genScenes",
"genOnOff",
"genLevelCtrl"
]
},
"configured_reportings": []
},
"4": {
"bindings": [
{
"cluster": "genOnOff",
"target": {
"endpoint": 2,
"ieee_address": "0x001fee00000058d9",
"type": "endpoint"
}
}
],
"clusters": {
"input": [
"genBasic",
"genIdentify"
],
"output": [
"genScenes",
"genOnOff",
"genLevelCtrl"
]
},
"configured_reportings": []
},
"5": {
"bindings": [
{
"cluster": "seMetering",
"target": {
"endpoint": 1,
"ieee_address": "0x00124b0021b7788c",
"type": "endpoint"
}
}
],
"clusters": {
"input": [
"genBasic",
"seMetering",
"haElectricalMeasurement"
],
"output": []
},
"configured_reportings": [
{
"attribute": "instantaneousDemand",
"cluster": "seMetering",
"maximum_report_interval": 3600,
"minimum_report_interval": 5,
"reportable_change": 1
}
]
},
"200": {
"bindings": [],
"clusters": {
"input": [],
"output": []
},
"configured_reportings": []
},
"232": {
"bindings": [],
"clusters": {
"input": [
"genBasic",
"genCommissioning",
"manuSpecificUbisysDeviceSetup"
],
"output": [
"genIdentify",
"genOta"
]
},
"configured_reportings": []
},
"242": {
"bindings": [],
"clusters": {
"input": [
"greenPower"
],
"output": [
"greenPower"
]
},
"configured_reportings": []
}
},
"friendly_name": "KuecheWand",
"ieee_address": "0x001fee00000058d9",
"interview_completed": true,
"interviewing": false,
"model_id": "S2 (5502)",
"network_address": 38227,
"power_source": "Mains (single phase)",
"supported": true,
"type": "Router"
}
Missing features/functionality
This device is currently not exposed at all, as it throws an error at startup
[3/20/2021, 10:17:03 PM] [zigbee2mqtt] Failed to setup stateless programmable switch for accessory KuecheWand from expose "{"access":1,"description":"Triggered action (e.g. a button click)","name":"action","property":"action","type":"enum","values":["toggle_s1","toggle_s2","on_s1","on_s2","off_s1","off_s2","recall_*_s1","recal_*_s2","brightness_move_up_s1","brightness_move_up_s2","brightness_move_down_s1","brightness_move_down_s2","brightness_stop_s1","brightness_stop_s2"]}", error: Error: Device found with a wildcard in the exposed possible values for the action, which cannot be mapped: recall_*_s1
Expected functionality: a simple lamp on/off (homekit Lightbulb)
I guess the problem stems from the "recall_*_s1" attribute, which I don't think is needed and can be ignored
Suggested services and characteristics
For endpoints l1 and l2, I would expect a HomeKit Lightbulb
As it's a switch, it should show up as a Switch in HomeKit (which is also what the plugin website indicates).
This Switch service can be configured in HomeKit to be interpreted as a light, if that is what you have connected to it.
The error you are seeing is because it also exposes an action, but it contains wildcard values which the plugin can't handle.
This should however not influence the creation of the Switch service, but only means that it will not have a Stateless Programmable Switch service to expose the action enum to HomeKit.
If you also want the stateless programmable switch to be present, you can either add the wildcard values to the ignore list in the configuration of the homebridge-z2m plugin (see this page for some more info), or you can figure out all the valid values and open a PR for Koenkk/zigbee-herdsman-converters to improve the exposes information.
I'll try to create a unit test from the information you've provided, if you are sure that the Switch services are not being exposed.
I've just added some automated tests and as far as I can tell, two Switch services are being created for this device, as I would expect based on the exposes information you've shared.
Can you please provide the entire log of the startup of Homebridge (with the debug option) enabled?
Thanks a lot!
I have now deleted all services in HomeKit and set it up again ... and now it shows the switch!
(I don't know what happened -- I don't think I overlooked it the first time -- but no guarantee).
Thanks anyways ... it now works!
|
2025-04-01T06:39:09.162543
| 2022-12-05T13:43:02
|
1476597358
|
{
"authors": [
"JulianoLagana",
"shcheklein"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7130",
"repo": "iterative/dvc.org",
"url": "https://github.com/iterative/dvc.org/pull/4160"
}
|
gharchive/pull-request
|
Adds missing word in the metrics Get Started tutorial
You may disregard these recommendations if you used the Edit on GitHub button from dvc.org to improve a doc in place.
Please read the guidelines in the Contributing to the Documentation list if you make any substantial changes to the documentation or JS engine.
Please make sure to mention Fix #issue (if applicable) in the description of the PR. This causes GitHub to close it automatically when the PR is merged.
Please choose to allow us to edit your branch when creating the PR.
Thank you for the contribution - we'll try to review it as soon as possible. π
thanks @JulianoLagana !
|
2025-04-01T06:39:09.198492
| 2023-04-12T04:27:38
|
1663729487
|
{
"authors": [
"aguschin",
"aminalaee",
"wiswisbus"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7131",
"repo": "iterative/mlem",
"url": "https://github.com/iterative/mlem/issues/650"
}
|
gharchive/issue
|
Error with relative imports during saving model with mlem
I'm trying to save wav2vec2 model from fairseq, but it seems that there's a bug in handling requirements with relative imports.
Versions
mlem==0.4.10
fairseq==0.12.2
The following are the code and the error log.
# Clone and install fairseq
git clone git@github.com:facebookresearch/fairseq.git
cd fairseq
pip install -e .
You can download checkpoints from here
Python script for saving wav2vec2
from mlem.api import load, save
import fairseq
if __name__ == "__main__":
wav2vec_ckpt_path = (
"/Downloads/ckpts/wav2vec/xlsr2_300m.pt"
)
# Load Wav2Vec
wav2vec, _, _ = fairseq.checkpoint_utils.load_model_ensemble_and_task(
[wav2vec_ckpt_path]
)
wav2vec = wav2vec[0]
# Save
save(wav2vec, "wav2vec")
Error log
Traceback (most recent call last):
File "debug.py", line 18, in <module>
save(wav2vec, "wav2vec")
File "/home/wis/anaconda3/envs/superhub_debug/lib/python3.8/site-packages/mlem/telemetry.py", line 50, in inner
return f(*args, **kwargs)
File "/home/wis/anaconda3/envs/superhub_debug/lib/python3.8/site-packages/mlem/core/metadata.py", line 122, in save
meta = get_object_metadata(
File "/home/wis/anaconda3/envs/superhub_debug/lib/python3.8/site-packages/mlem/core/metadata.py", line 53, in get_object_metadata
return MlemModel.from_obj(
File "/home/wis/anaconda3/envs/superhub_debug/lib/python3.8/site-packages/mlem/core/objects.py", line 792, in from_obj
mlem_model.add_processor(MAIN_PROCESSOR_NAME, mt)
File "/home/wis/anaconda3/envs/superhub_debug/lib/python3.8/site-packages/mlem/core/objects.py", line 816, in add_processor
self.requirements += model_type.get_requirements().expanded
File "/home/wis/anaconda3/envs/superhub_debug/lib/python3.8/site-packages/mlem/contrib/torch.py", line 229, in get_requirements
return super().get_requirements() + InstallableRequirement.from_module(
File "/home/wis/anaconda3/envs/superhub_debug/lib/python3.8/site-packages/mlem/core/model.py", line 341, in get_requirements
) + get_object_requirements(self.model)
File "/home/wis/anaconda3/envs/superhub_debug/lib/python3.8/site-packages/mlem/utils/module.py", line 643, in get_object_requirements
a.dump(obj)
File "/home/wis/anaconda3/envs/superhub_debug/lib/python3.8/site-packages/dill/_dill.py", line 394, in dump
StockPickler.dump(self, obj)
File "/home/wis/anaconda3/envs/superhub_debug/lib/python3.8/pickle.py", line 487, in dump
self.save(obj)
File "/home/wis/anaconda3/envs/superhub_debug/lib/python3.8/site-packages/mlem/utils/module.py", line 612, in save
self.add_requirement(obj)
File "/home/wis/anaconda3/envs/superhub_debug/lib/python3.8/site-packages/mlem/utils/module.py", line 591, in add_requirement
self.add_requirement(local_req)
File "/home/wis/anaconda3/envs/superhub_debug/lib/python3.8/site-packages/mlem/utils/module.py", line 591, in add_requirement
self.add_requirement(local_req)
File "/home/wis/anaconda3/envs/superhub_debug/lib/python3.8/site-packages/mlem/utils/module.py", line 591, in add_requirement
self.add_requirement(local_req)
[Previous line repeated 1 more time]
File "/home/wis/anaconda3/envs/superhub_debug/lib/python3.8/site-packages/mlem/utils/module.py", line 599, in add_requirement
self.add_requirement(parent_package)
File "/home/wis/anaconda3/envs/superhub_debug/lib/python3.8/site-packages/mlem/utils/module.py", line 591, in add_requirement
self.add_requirement(local_req)
File "/home/wis/anaconda3/envs/superhub_debug/lib/python3.8/site-packages/mlem/utils/module.py", line 588, in add_requirement
for local_req in get_local_module_reqs(module):
File "/home/wis/anaconda3/envs/superhub_debug/lib/python3.8/site-packages/mlem/utils/module.py", line 355, in get_local_module_reqs
result = [importing.import_module(i, p) for i, p in imports]
File "/home/wis/anaconda3/envs/superhub_debug/lib/python3.8/site-packages/mlem/utils/module.py", line 355, in <listcomp>
result = [importing.import_module(i, p) for i, p in imports]
File "/home/wis/anaconda3/envs/superhub_debug/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 655, in _load_unlocked
File "<frozen importlib._bootstrap>", line 618, in _load_backward_compatible
File "/home/wis/anaconda3/envs/superhub_debug/lib/python3.8/site-packages/mlem/ext.py", line 265, in load_module
module = importlib.import_module(fullname)
File "/home/wis/anaconda3/envs/superhub_debug/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 973, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'fairseq.data.audio.data_utils'
As shown in the error log, mlem tries to find fairseq.data.audio.data_utils. But as you can see here there's no data_utils under fairseq.data.audio.
I'm assuming that mlem is not handling relative imports properly in this part.
It should import the module fairseq.data.data_utils, not fairseq.data.audio.data_utils.
Nice catch @wiswisbus! Would you like to investigate this and make a PR fixing this issue? I can help with reviewing and merging :)
I also wonder if this happens with pip install -e . only, or whether pip install github.com:facebookresearch/fairseq.git (or whatever is the correct way to install it right from git) would work correctly.
I was taking a look into this and there are two points:
Doing pip install . (removing the -e editable flag), pip install git+https, or pip install fairseq solves the issue.
I have prepared a fix, but specifically for fairseq it won't solve the issue because the project is using a vendored submodule megatron: https://github.com/facebookresearch/fairseq/tree/main/fairseq/model_parallel.
So is the fix worth it?
@aminalaee I think the fix is always worth it, don't hesitate to make a PR - especially if you already implemented it!
Re megatron - I hope we can resolve that as well, since the version (commit) is pinned there. Once you make a PR I think I can TAL and see if I have any good ideas.
@aminalaee, I'm having problems reproducing the issue with megatron. Could you please post a traceback so I could see what fails exactly?
@aguschin
Here it is:
Traceback (most recent call last):
File "/path//mlem/example.py", line 18, in <module>
save(wav2vec, "wav2vec")
File "/path//mlem/mlem/telemetry.py", line 50, in inner
return f(*args, **kwargs)
File "/path//mlem/mlem/core/metadata.py", line 122, in save
meta = get_object_metadata(
File "/path//mlem/mlem/core/metadata.py", line 53, in get_object_metadata
return MlemModel.from_obj(
File "/path//mlem/mlem/core/objects.py", line 792, in from_obj
mlem_model.add_processor(MAIN_PROCESSOR_NAME, mt)
File "/path//mlem/mlem/core/objects.py", line 816, in add_processor
self.requirements += model_type.get_requirements().expanded
File "/path//mlem/mlem/contrib/torch.py", line 229, in get_requirements
return super().get_requirements() + InstallableRequirement.from_module(
File "/path//mlem/mlem/core/model.py", line 341, in get_requirements
) + get_object_requirements(self.model)
File "/path//mlem/mlem/utils/module.py", line 648, in get_object_requirements
return a.to_requirements()
File "/path//mlem/mlem/utils/module.py", line 553, in to_requirements
r.add(CustomRequirement.from_module(mod))
File "/path//mlem/mlem/core/requirements.py", line 236, in from_module
raise ValueError(f"{mod} does not have __file__ attr")
ValueError: <module 'fairseq.model_parallel.megatron' (<_frozen_importlib_external._NamespaceLoader object at 0x7f3695152830>)> does not have __file__ attr
This was result of installing fairseq with pip install -e . and then running the save method in the issue example.
Hmm. I assume this doesn't happen when doing pip install fairseq?
Yeah, it works well with pip install fairseq
Ok, not sure how this can be fixed right now. @aminalaee, do you have any workaround in mind?
I also have 2 questions:
@wiswisbus is this something that blocks your workflow? Do you absolutely need to install it with -e?
@aminalaee would you like to contribute to this particular issue? Or you're just looking for something to contribute? There are a lot of good issues we can check out.
Yeah @aguschin I'll be happy to contribute to any issue that can help the project. Nothing specific really.
Thank you! @aguschin @aminalaee
I can work without installing the package in editable mode. It might be better if it worked in editable mode, but it's not mandatory. I just didn't know that it only happens with editable mode. Thank you all again for the rapid response!
Good to hear! Thank you @wiswisbus :)
|
2025-04-01T06:39:09.201493
| 2021-06-17T18:32:06
|
924219814
|
{
"authors": [
"0x2b3bfa0"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7132",
"repo": "iterative/terraform-provider-iterative",
"url": "https://github.com/iterative/terraform-provider-iterative/issues/147"
}
|
gharchive/issue
|
Create a common internal abstraction over every vendor
Follow-up of https://github.com/iterative/terraform-provider-iterative/pull/143#discussion_r653823205
Ideally we should write something like:
var provider Provider
switch cloud {
case "aws":
provider = aws
case "azure":
provider = azure
}
regions := provider.ImageRegions
cloudRegion := provider.GetRegion(region)
for _, item := range regions {
if item == cloudRegion {
return true
}
}
return false
The actionable requirement would be creating an interface over vendors.
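One possible shape for that interface, derived from the snippet above; the method names are illustrative:

package iterative

// Provider abstracts the per-vendor details the resource code needs.
type Provider interface {
	// ImageRegions lists the regions machine images are published to.
	ImageRegions() []string
	// GetRegion normalizes a generic region name to the vendor's naming.
	GetRegion(region string) string
}

// regionSupported reports whether the requested region has an image,
// replacing the per-vendor switch with a single code path.
func regionSupported(p Provider, region string) bool {
	target := p.GetRegion(region)
	for _, r := range p.ImageRegions() {
		if r == target {
			return true
		}
	}
	return false
}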
Closed with #237
|
2025-04-01T06:39:09.203374
| 2022-10-30T20:00:37
|
1428999404
|
{
"authors": [
"shcheklein"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7133",
"repo": "iterative/vscode-dvc",
"url": "https://github.com/iterative/vscode-dvc/issues/2700"
}
|
gharchive/issue
|
Share experiment fails if no remote specified in repo
If remote storage is not specified:
[version: 0.5.7, 2022-10-30T19:57:04.192Z, pid: 65281] > /Users/ivan/Projects/ensemble-dvc-template/.venv/bin/python -m dvc push - FAILED with code 1 (236ms)
ERROR: failed to push data to the cloud - config file error: no remote specified. Create a default remote with
dvc remote add -d <remote name> <remote url>
while trying to "Share as Branch"
Extension: v0.5.7
DVC:
DVC version: 2.32.0 (pip)
---------------------------------
Platform: Python 3.9.13 on macOS-12.6-arm64-arm-64bit
Subprojects:
dvc_data = 0.22.0
dvc_objects = 0.11.0
dvc_render = 0.0.12
dvc_task = 0.1.4
dvclive = 0.12.1.dev8+g243ae28.d20221030
scmrepo = 0.1.2
Supports:
http (aiohttp = 3.8.3, aiohttp-retry = 2.8.3),
https (aiohttp = 3.8.3, aiohttp-retry = 2.8.3)
Cache types: reflink, hardlink, symlink
Cache directory: apfs on /dev/disk3s1s1
Caches: local
Remotes: None
Workspace directory: apfs on /dev/disk3s1s1
Repo: dvc, git
We can also include better handling of any remote / Git errors during the process. Dmitry's feedback was about this as well.
|
2025-04-01T06:39:09.205747
| 2021-06-23T09:02:27
|
928026975
|
{
"authors": [
"freehere107"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7134",
"repo": "itering/scale.go",
"url": "https://github.com/itering/scale.go/issues/32"
}
|
gharchive/issue
|
metadata v14
https://github.com/paritytech/substrate/pull/8615
https://github.com/polkadot-js/api/pull/3827/files
https://github.com/polkadot-js/api/pull/3899
https://github.com/polkadot-js/api/pull/3920/files
https://github.com/polkadot-js/api/pull/3987#issuecomment-926489040
|
2025-04-01T06:39:09.285044
| 2018-08-17T03:23:20
|
351444010
|
{
"authors": [
"Casper-ThinkGeo",
"xivk"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7135",
"repo": "itinero/routing",
"url": "https://github.com/itinero/routing/issues/214"
}
|
gharchive/issue
|
An index out of range exception is thrown when I try to build the whole US router db.
Hi,
I'm working on building the whole US router db. I try to get the router db file with two approaches but it works fail:
Use the whole US data with one North America .pbf (7.8G).
I have download the North America data from https://download.geofabrik.de/ . I try to build the router db with following statements:
var usPbfFile = @"C:\Test\north-america-latest.osm.pbf";
using (var stream = File.OpenRead(usPbfFile))
{
var osmStream = new OsmSharp.Streams.PBFOsmStreamSource(stream);
var routerDb = new RouterDb();
routerDb.LoadOsmData(osmStream);
if (!routerDb.HasContractedFor(vehicle.Fastest()))
{
Itinero.Logging.Logger.Log("RouterDbBuilder", Itinero.Logging.TraceEventType.Information, "No contracted graph found for the 'car' profile, building now...");
routerDb.AddContracted(vehicle.Fastest(), true);
}
using (var writeStream = File.Open(routerDbFileName, FileMode.Create))
{
routerDb.Serialize(writeStream);
}
}
It fails and throws an index out of range exception when the router db build progress reaches 97%, as attached
Use the divided state .pbf files.
I have downloaded all of the .pbf files in America, and built them into one router db file with the following statements:
public static RouterDb Build(string sourceDbFolder, string targetDbFilename, Vehicle vehicle)
{
RouterDb routerDb = null;
var routerDbFileName = targetDbFilename;
if (File.Exists(routerDbFileName))
{
try
{
using (var stream = File.OpenRead(routerDbFileName))
{
routerDb = RouterDb.Deserialize(stream);
}
}
catch
{
routerDb = null;
}
}
if (routerDb == null)
{
// check if OSM pbf file is there.
var files = Directory.GetFiles(sourceDbFolder);
if (files == null || files.Length <= 0)
{ // check if OSM file is there, otherwise attempt to download from overpass.
throw new Exception("The .pbf file is not exist.");
}
// build routerdb.
Itinero.Logging.Logger.Log("RouterDbBuilder", Itinero.Logging.TraceEventType.Information, "No existing RouterDb file found, creating now.");
var osmFiles = new List<OsmStreamSource>();
var fileStreams = new List<Stream>();
foreach (var file in files)
{
OsmStreamSource osmStream = null;
var stream = File.OpenRead(file);
fileStreams.Add(stream);
if (file.EndsWith(".osm.pbf"))
{
osmStream = new OsmSharp.Streams.PBFOsmStreamSource(stream);
}
else
{
osmStream = new OsmSharp.Streams.XmlOsmStreamSource(stream);
}
osmFiles.Add(osmStream);
}
routerDb = new RouterDb();
routerDb.LoadOsmData(osmFiles.ToArray(), vehicle);
foreach (var stream in fileStreams)
{
stream.Close();
stream.Dispose();
}
Itinero.Logging.Logger.Log("RouterDbBuilder", Itinero.Logging.TraceEventType.Information, "RouterDb file created.");
if (!routerDb.HasContractedFor(vehicle.Fastest()))
{
Itinero.Logging.Logger.Log("RouterDbBuilder", Itinero.Logging.TraceEventType.Information, "No contracted graph found for the 'car' profile, building now...");
routerDb.AddContracted(vehicle.Fastest(), true);
}
using (var stream = File.Open(routerDbFileName, FileMode.Create))
{
routerDb.Serialize(stream);
}
}
return routerDb;
}
It fails and throws an index out of range exception when the router db is being written, with resulting data as in the attached file:
I don't know how to fix this issue. I would appreciate any help.
My computer configuration:
RAM: 32G,
CPU: Core i7 4790
OS: Win10 X64
Application: 64 bit.
Did you try this using the latest prerelease? We're focusing on getting that released.
I tested building the north-america routerdb and that worked, now testing the contraction...
Hi xivk,
Thanks for your help; I'm working with version v1.4.0-pre67. As in the statements above, a "Car fastest" contraction has been added to the RouterDb:
routerDb.AddContracted(vehicle.Fastest(), true);
The exception is thrown when the router db is written to file with the "Car fastest" contraction. The throw point is located in the "Compress" method in "DirectedGraph.cs".
If you have any updates, please let me know. I appreciate your help.
Hi xivk,
I'm testing routing contraction with the latest development branch source code. As in the above issue, I have tried to build the whole US routing data from all of the state .pbf files. It seems the out of range exception was thrown during the router db serialize progress. You can see the exception is thrown when the progress is at 99%, while the router db is being written to disk.
|
2025-04-01T06:39:09.324552
| 2022-08-14T16:42:58
|
1338295874
|
{
"authors": [
"itm4n"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7136",
"repo": "itm4n/PrivescCheck",
"url": "https://github.com/itm4n/PrivescCheck/issues/37"
}
|
gharchive/issue
|
Helper script detected by AMSI when building
When building the script, the file src\02_Helpers.ps1 is blocked by AMSI.
C:\PATH\TO\PrivescCheck>powershell -ep bypass -c ".\Build.ps1"
[OK] Loaded module file 00_Main.ps1
[OK] Loaded module file 01_Win32.ps1
[KO] Failed to load module file 02_Helpers.ps1
[ERROR] At C:\_WORKSPACE\PrivescCheck\src\02_Helpers.ps1:1 char:1
+ function Test-IsRunningInConsole {
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This script contains malicious content and has been blocked by your antivirus software.
[OK] Loaded module file 03_User.ps1
[OK] Loaded module file 04_Services.ps1
[OK] Loaded module file 05_Applications.ps1
[OK] Loaded module file 06_ScheduledTasks.ps1
[OK] Loaded module file 07_Hardening.ps1
[OK] Loaded module file 08_Config.ps1
[OK] Loaded module file 09_Network.ps1
[OK] Loaded module file 10_Updates.ps1
[OK] Loaded module file 11_Credentials.ps1
[OK] Loaded module file 99_Misc.ps1
This can be worked around by disabling "Windows Security" during build, but it would be nice to improve the Builder script in order to bypass detection earlier in the process.
Slightly modified the Builder script.
Instead of trying to load each script and then removing the comments, I now remove the comments first, and then try to load the resulting code block.
In addition to the comment blocks, I also remove comment lines. Another benefit is that this reduces the size of the final file even more.
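For illustration, a minimal Python sketch of the strip-then-load idea (the actual Build.ps1 is PowerShell; the regexes below are naive and don't handle # inside strings):
import re

def strip_powershell_comments(source: str) -> str:
    # Remove <# ... #> block comments first, then whole-line # comments.
    source = re.sub(r"<#.*?#>", "", source, flags=re.DOTALL)
    source = re.sub(r"(?m)^\s*#.*$", "", source)
    return source

# The comment-free text is what gets handed to the loader, so it is
# also what AMSI ends up scanning.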
That's it. Apparently, this does the trick because the script is no longer caught by AMSI. :partying_face:
|
2025-04-01T06:39:09.375937
| 2021-11-21T21:00:04
|
1059485512
|
{
"authors": [
"hugalafutro",
"itzg"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7137",
"repo": "itzg/docker-minecraft-server",
"url": "https://github.com/itzg/docker-minecraft-server/issues/1127"
}
|
gharchive/issue
|
minecraft-server:java8 tag no longer starts with forge modpack
Describe the problem
Haven't started this server up in a few weeks; today I pulled the latest :java8 image and it won't extract the modpack. It prints out one error on the first launch, and then more on the second launch. Neither the modpack .zip nor docker-compose.yml has changed since the last successful run of the server.
edit: For what it's worth, a similarly set up fabric modpack running on the latest itzg/minecraft-server tag runs OK.
Container definition
version: '3'
services:
mc_rlcraft:
ports:
- "25565:25565"
environment:
EULA: "true"
TZ: "Europe/London"
MAX_MEMORY: "8G"
VERSION: "1.12.2"
TYPE: "FORGE"
FORGEVERSION: "<IP_ADDRESS>55"
OPS: ${OPLIST}
WHITELIST: ${WLIST}
GENERIC_PACK: "/modpacks/RLCraft_Server_Pack_Beta_v2.8.2.zip"
USE_MODPACK_START_SCRIPT: "false"
ALLOW_FLIGHT: "true"
MAX_TICK_TIME: "-1"
VIEW_DISTANCE: 10
MAX_PLAYERS: 5
PVP: "false"
OVERRIDE_SERVER_PROPERTIES: "true"
LEVEL_TYPE: "BIOMESOP"
ENABLE_ROLLING_LOGS: "true"
USE_AIKAR_FLAGS: "true"
DIFFICULTY: "normal"
image: itzg/minecraft-server:java8
container_name: mc-rlcraft
restart: unless-stopped
volumes:
- ./data:/data
- ./modpacks:/modpacks
Container logs
1st launch
mc-rlcraft | [init] Running as uid=1000 gid=1000 with /data as 'drwxrwxr-x 12 1000 1000 4096 Nov 21 20:53 /data'
mc-rlcraft | [init] Resolved version given 1.12.2 into 1.12.2
mc-rlcraft | [init] Resolving type given FORGE
mc-rlcraft | [init] Checking Forge version information.
mc-rlcraft | find: unrecognized: -printf
mc-rlcraft | BusyBox v1.29.3 (2019-01-24 07:45:07 UTC) multi-call binary.
mc-rlcraft |
mc-rlcraft | Usage: find [-HL] [PATH]... [OPTIONS] [ACTIONS]
mc-rlcraft |
mc-rlcraft | Search for files and perform actions on them.
mc-rlcraft | First failed action stops processing of current file.
mc-rlcraft | Defaults: PATH is current directory, action is '-print'
mc-rlcraft |
mc-rlcraft | -L,-follow Follow symlinks
mc-rlcraft | -H ...on command line only
mc-rlcraft | -xdev Don't descend directories on other filesystems
mc-rlcraft | -maxdepth N Descend at most N levels. -maxdepth 0 applies
mc-rlcraft | actions to command line arguments only
mc-rlcraft | -mindepth N Don't act on first N levels
mc-rlcraft | -depth Act on directory *after* traversing it
mc-rlcraft |
mc-rlcraft | Actions:
mc-rlcraft | ( ACTIONS ) Group actions for -o / -a
mc-rlcraft | ! ACT Invert ACT's success/failure
mc-rlcraft | ACT1 [-a] ACT2 If ACT1 fails, stop, else do ACT2
mc-rlcraft | ACT1 -o ACT2 If ACT1 succeeds, stop, else do ACT2
mc-rlcraft | Note: -a has higher priority than -o
mc-rlcraft | -name PATTERN Match file name (w/o directory name) to PATTERN
mc-rlcraft | -iname PATTERN Case insensitive -name
mc-rlcraft | -path PATTERN Match path to PATTERN
mc-rlcraft | -ipath PATTERN Case insensitive -path
mc-rlcraft | -regex PATTERN Match path to regex PATTERN
mc-rlcraft | -type X File type is X (one of: f,d,l,b,c,s,p)
mc-rlcraft | -perm MASK At least one mask bit (+MASK), all bits (-MASK),
mc-rlcraft | or exactly MASK bits are set in file's mode
mc-rlcraft | -mtime DAYS mtime is greater than (+N), less than (-N),
mc-rlcraft | or exactly N days in the past
mc-rlcraft | -mmin MINS mtime is greater than (+N), less than (-N),
mc-rlcraft | or exactly N minutes in the past
mc-rlcraft | -newer FILE mtime is more recent than FILE's
mc-rlcraft | -inum N File has inode number N
mc-rlcraft | -user NAME/ID File is owned by given user
mc-rlcraft | -group NAME/ID File is owned by given group
mc-rlcraft | -size N[bck] File size is N (c:bytes,k:kbytes,b:512 bytes(def.))
mc-rlcraft | +/-N: file size is bigger/smaller than N
mc-rlcraft | -links N Number of links is greater than (+N), less than (-N),
mc-rlcraft | or exactly N
mc-rlcraft | -prune If current file is directory, don't descend into it
mc-rlcraft | If none of the following actions is specified, -print is assumed
mc-rlcraft | -print Print file name
mc-rlcraft | -print0 Print file name, NUL terminated
mc-rlcraft | -exec CMD ARG ; Run CMD with all instances of {} replaced by
mc-rlcraft | file name. Fails if CMD exits with nonzero
mc-rlcraft | -exec CMD ARG + Run CMD with {} replaced by list of file names
mc-rlcraft | -delete Delete current file/directory. Turns on -depth option
2nd launch
mc-rlcraft | [init] Running as uid=1000 gid=1000 with /data as 'drwxrwxr-x 12 1000 1000 4096 Nov 21 20:53 /data'
mc-rlcraft | [init] Resolved version given 1.12.2 into 1.12.2
mc-rlcraft | [init] Resolving type given FORGE
mc-rlcraft | [init] Checking Forge version information.
mc-rlcraft | unzip: can't read standard input
mc-rlcraft | replace config/adhooks/adhooks.cfg? [y]es, [n]o, [A]ll, [N]one, [r]ename: [init] Running as uid=1000 gid=1000 with /data as 'drwxrwxr-x 12 1000 1000 4096 Nov 21 20:53 /data'
mc-rlcraft | [init] Resolved version given 1.12.2 into 1.12.2
mc-rlcraft | [init] Resolving type given FORGE
mc-rlcraft | [init] Checking Forge version information.
mc-rlcraft | unzip: can't read standard input
mc-rlcraft | replace config/adhooks/adhooks.cfg? [y]es, [n]o, [A]ll, [N]one, [r]ename: [init] Running as uid=1000 gid=1000 with /data as 'drwxrwxr-x 12 1000 1000 4096 Nov 21 20:53 /data'
mc-rlcraft | [init] Resolved version given 1.12.2 into 1.12.2
mc-rlcraft | [init] Resolving type given FORGE
mc-rlcraft | [init] Checking Forge version information.
mc-rlcraft | unzip: can't read standard input
mc-rlcraft | replace config/adhooks/adhooks.cfg? [y]es, [n]o, [A]ll, [N]one, [r]ename: [init] Running as uid=1000 gid=1000 with /data as 'drwxrwxr-x 12 1000 1000 4096 Nov 21 20:53 /data'
mc-rlcraft | [init] Resolved version given 1.12.2 into 1.12.2
mc-rlcraft | [init] Resolving type given FORGE
mc-rlcraft | [init] Checking Forge version information.
mc-rlcraft | unzip: can't read standard input
mc-rlcraft | replace config/adhooks/adhooks.cfg? [y]es, [n]o, [A]ll, [N]one, [r]ename: [init] Running as uid=1000 gid=1000 with /data as 'drwxrwxr-x 12 1000 1000 4096 Nov 21 20:53 /data'
mc-rlcraft | [init] Resolved version given 1.12.2 into 1.12.2
mc-rlcraft | [init] Resolving type given FORGE
mc-rlcraft | [init] Checking Forge version information.
mc-rlcraft | unzip: can't read standard input
mc-rlcraft | replace config/adhooks/adhooks.cfg? [y]es, [n]o, [A]ll, [N]one, [r]ename: [init] Running as uid=1000 gid=1000 with /data as 'drwxrwxr-x 12 1000 1000 4096 Nov 21 20:53 /data'
mc-rlcraft | [init] Resolved version given 1.12.2 into 1.12.2
mc-rlcraft | [init] Resolving type given FORGE
mc-rlcraft | [init] Checking Forge version information.
mc-rlcraft | unzip: can't read standard input
mc-rlcraft | replace config/adhooks/adhooks.cfg? [y]es, [n]o, [A]ll, [N]one, [r]ename: [init] Running as uid=1000 gid=1000 with /data as 'drwxrwxr-x 12 1000 1000 4096 Nov 21 20:53 /data'
mc-rlcraft | [init] Resolved version given 1.12.2 into 1.12.2
mc-rlcraft | [init] Resolving type given FORGE
mc-rlcraft | [init] Checking Forge version information.
mc-rlcraft | unzip: can't read standard input
mc-rlcraft | replace config/adhooks/adhooks.cfg? [y]es, [n]o, [A]ll, [N]one, [r]ename: [init] Running as uid=1000 gid=1000 with /data as 'drwxrwxr-x 12 1000 1000 4096 Nov 21 20:53 /data'
mc-rlcraft | [init] Resolved version given 1.12.2 into 1.12.2
mc-rlcraft | [init] Resolving type given FORGE
mc-rlcraft | [init] Checking Forge version information.
mc-rlcraft | unzip: can't read standard input
mc-rlcraft | replace config/adhooks/adhooks.cfg? [y]es, [n]o, [A]ll, [N]one, [r]ename: [init] Running as uid=1000 gid=1000 with /data as 'drwxrwxr-x 12 1000 1000 4096 Nov 21 20:53 /data'
mc-rlcraft | [init] Resolved version given 1.12.2 into 1.12.2
mc-rlcraft | [init] Resolving type given FORGE
mc-rlcraft | [init] Checking Forge version information.
mc-rlcraft | unzip: can't read standard input
mc-rlcraft | replace config/adhooks/adhooks.cfg? [y]es, [n]o, [A]ll, [N]one, [r]ename: [init] Running as uid=1000 gid=1000 with /data as 'drwxrwxr-x 12 1000 1000 4096 Nov 21 20:53 /data'
mc-rlcraft | [init] Resolved version given 1.12.2 into 1.12.2
mc-rlcraft | [init] Resolving type given FORGE
mc-rlcraft | [init] Checking Forge version information.
mc-rlcraft | unzip: can't read standard input
mc-rlcraft | replace config/adhooks/adhooks.cfg? [y]es, [n]o, [A]ll, [N]one, [r]ename: [init] Running as uid=1000 gid=1000 with /data as 'drwxrwxr-x 12 1000 1000 4096 Nov 21 20:53 /data'
mc-rlcraft | [init] Resolved version given 1.12.2 into 1.12.2
mc-rlcraft | [init] Resolving type given FORGE
mc-rlcraft | [init] Checking Forge version information.
mc-rlcraft | unzip: can't read standard input
mc-rlcraft | replace config/adhooks/adhooks.cfg? [y]es, [n]o, [A]ll, [N]one, [r]ename: [init] Running as uid=1000 gid=1000 with /data as 'drwxrwxr-x 12 1000 1000 4096 Nov 21 20:53 /data'
mc-rlcraft | [init] Resolved version given 1.12.2 into 1.12.2
mc-rlcraft | [init] Resolving type given FORGE
mc-rlcraft | [init] Checking Forge version information.
mc-rlcraft | unzip: can't read standard input
mc-rlcraft | replace config/adhooks/adhooks.cfg? [y]es, [n]o, [A]ll, [N]one, [r]ename: [init] Running as uid=1000 gid=1000 with /data as 'drwxrwxr-x 12 1000 1000 4096 Nov 21 20:53 /data'
mc-rlcraft | [init] Resolved version given 1.12.2 into 1.12.2
mc-rlcraft | [init] Resolving type given FORGE
mc-rlcraft | [init] Checking Forge version information.
mc-rlcraft | unzip: can't read standard input
Please switch to itzg/minecraft-server:java8-multiarch.
Many thanks that solved it!
I found a compatibility package that will resolve the original issue.
itzg/minecraft-server:java8 image is now fixed.
|
2025-04-01T06:39:09.379201
| 2022-01-01T10:55:03
|
1091792375
|
{
"authors": [
"Extensivity",
"itzg"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7138",
"repo": "itzg/docker-minecraft-server",
"url": "https://github.com/itzg/docker-minecraft-server/issues/1238"
}
|
gharchive/issue
|
Forge and Packwiz
Enhancement Type
Improve an existing feature
Describe the enhancement
Allow forge to also implement packwiz, instead of just fabric.
Just figured out the issue: I thought I could just git clone and point the URL to the file instead.
PS: Should add in the readme that it does work with forge and fabric, instead of just fabric.
I don't see where it says only Fabric. Or was it the example that implied that?
Pretty much, plus I was still kinda new to packwiz and was trying to get used to it. Some completely my bad.
|
2025-04-01T06:39:09.432396
| 2024-11-01T18:48:02
|
2629666162
|
{
"authors": [
"plbenveniste"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7139",
"repo": "ivadomed/ms-lesion-agnostic",
"url": "https://github.com/ivadomed/ms-lesion-agnostic/pull/35"
}
|
gharchive/pull-request
|
Evaluation of existing SCT methods
In this PR, we evaluate the SCT methods for MS lesion segmentation:
sct_deepseg_lesion
sct_deepseg PSIR/STIR
sct_deepseg MP2RAGE
We perform evaluation on the test split of the MSD dataset and on the external dataset.
Reviewed! Ready to be merged!
|
2025-04-01T06:39:09.433756
| 2023-05-28T08:53:23
|
1729314711
|
{
"authors": [
"yan42685"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7140",
"repo": "ivan-lednev/obsidian-persistent-links",
"url": "https://github.com/ivan-lednev/obsidian-persistent-links/issues/2"
}
|
gharchive/issue
|
[FR] Support for vim mode commands "x", "p"
Currently, it doesn't work with vim mode.
I have `set clipboard=unnamed " Use system clipboard` in my .obsidian.vimrc
|
2025-04-01T06:39:09.434645
| 2024-03-06T16:18:27
|
2171905359
|
{
"authors": [
"abdousfayhi",
"odufuwa-segun"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7141",
"repo": "ivan-topp/y-socket.io",
"url": "https://github.com/ivan-topp/y-socket.io/issues/13"
}
|
gharchive/issue
|
Is there a way to access the socket that emitted event?
I need to broadcast whatever changes happened from user1 to user2 (working on the same document), but I couldn't access the socket that emitted the event. I'm pretty sure I'm missing something. Thanks in advance.
+1 here
|
2025-04-01T06:39:09.439375
| 2022-02-15T07:10:59
|
1138277610
|
{
"authors": [
"ivangabriele"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7142",
"repo": "ivangabriele/bhala",
"url": "https://github.com/ivangabriele/bhala/pull/38"
}
|
gharchive/pull-request
|
ci(release): 3.0.1 [skip ci]
Automated changes by create-pull-request GitHub action
:tada: This PR is included in version 3.0.2 :tada:
The release is available on:
GitHub release
npm package (@latest dist-tag)
Your semantic-release bot :package::rocket:
|
2025-04-01T06:39:09.444598
| 2023-03-01T19:24:14
|
1605536065
|
{
"authors": [
"Cluster2a",
"ivanhofer"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7143",
"repo": "ivanhofer/typesafe-i18n",
"url": "https://github.com/ivanhofer/typesafe-i18n/issues/618"
}
|
gharchive/issue
|
extendDictionary is not switching translations properly
Version
5.24.1
Describe the bug
If creating en-US as an extension of en, via extendDictionary, the locale switcher is not switching between en and en-US:
https://user-images.githubusercontent.com/84905165/222242546-a487a4a9-519e-436e-a56b-ded3c88a1299.mp4
Reproduction
Use the svelteKit example app (https://github.com/ivanhofer/typesafe-i18n-demo-sveltekit) and update all dependencies (ncu -u) and add en-US:
Logs
No response
Config
No response
Additional information
No response
My bad. I assumed lodash.merge works in an immutable way. But instead it alters the original object. I will need to find a replacement for this line: https://github.com/ivanhofer/typesafe-i18n/blob/main/packages/utils/src/extendDictionary.mts#L24
It seems merge({}, obj1, obj2) is the solution.
Should be fixed in version 5.24.3. Thanks for reporting this issue.
Thanks for the quick fix!
|
2025-04-01T06:39:09.472227
| 2016-02-15T01:29:39
|
133601054
|
{
"authors": [
"ivogabe",
"ntrrgc"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7144",
"repo": "ivogabe/gulp-typescript",
"url": "https://github.com/ivogabe/gulp-typescript/pull/284"
}
|
gharchive/pull-request
|
Set typescript as peerDependency
This way we don't depend on gulp-typescript being updated each time a new TypeScript version is released; we can choose the exact version we want, and we don't get it installed twice (which causes awkward moments like "why does it work in the console but not in gulp-typescript?").
Cheers. :smiley:
I agree that it's better to have typescript as a peerDependency, though that would be a breaking change. I plan to change this in the next major release, at the same time when TypeScript 2.0 is released. Closing for now, I will revisit this later.
|
2025-04-01T06:39:09.507040
| 2023-07-25T05:55:29
|
1819603317
|
{
"authors": [
"Brano5",
"IX-BOT"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7154",
"repo": "ix-ax/axsharp",
"url": "https://github.com/ix-ax/axsharp/issues/199"
}
|
gharchive/issue
|
[BUG] Some bugs in shadow presentation in RenderableContentControl
The default value for CHAR in ShadowDisplay seems problematic.
DATE and LDATE are shown as format strings in ShadowDisplay.
DATE_AND_TIME and DATE cannot be changed in ShadowControl.
Some LONG types don't have a type in the render in ShadowControl.
/cib
Branch 199-_BUG_Some_bugs_in_shadow_presentation_in_RenderableContentControl created!
|
2025-04-01T06:39:09.511216
| 2022-07-12T09:18:34
|
1301783178
|
{
"authors": [
"KimDegnJensen",
"ScottSoren"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7155",
"repo": "ixdat/ixdat",
"url": "https://github.com/ixdat/ixdat/issues/91"
}
|
gharchive/issue
|
Non-intuitive plotting range when combining EC and MS data
When combining .tsv and .mpt files into an ECMSMeasurement object, it will plot from when the EC data starts (when the potential column starts), contrary to what is usually the case when reading in pure .tsv data (all data from time=0 until the .tsv file's timestamps run out). It would be nice if the plotting range vs. time of the ECMS object had the same time range as the pure MS object. This would also make sense as backgrounds are somewhat easier to define prior to any EC data collection, i.e. not having to define a new time range for the plot (this can of course be done using tspan=[0, np.max(full_ECMS_data.grab("time/s"))] or similar in the ECMS plotting command).
This depends on the workflow.
When doing EC-MS, I (and, I think, my colleagues here at ICL) start recording MS data while setting up and leave MS data acquisition running until we go home. During the day we then run one or more EC measurements, which we think of as the actual experiments. When analyzing data, we'll often have something like the following:
ms = Measurement.read(...) # all the MS data
ec_1 = Measurement.read(...) # experiment 1, with sample A
ec_2 = Measurement.read(...) # experiment 2, with sample B
ecms_1 = ec_1 + ms
ecms_2 = ec_2 + ms
ecms_1.plot() # plot 1
ecms_2.plot() # plot 2
If we used the tspan of the full data, then plot 1 and plot 2 have the same range and identical top panels. Yes, they show different ec data, but the part of the experiment I actually want to look at might be too small to resolve.
I've now updated the plotting function to accept the following strings as tspan specification:
"ec": use the timespan of the EC data (this will remain the default)
"ms": use the timespan of the MS data
"all": use a timespan containing all of the data.
So if you want the whole MS data plotted, you'll be able to just call ecms_1.plot(tspan="all"). Hope this is satisfactory?
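In usage terms (only the tspan strings are new; plot() itself is unchanged):
ecms_1.plot(tspan="ec")   # timespan of the EC data (remains the default)
ecms_1.plot(tspan="ms")   # timespan of the MS data
ecms_1.plot(tspan="all")  # a timespan containing all of the data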
The change is here: https://github.com/ixdat/ixdat/commit/8ced531ae42ecd844c28260b2fd5681ff3184b2d
I'll open the PR with it now, and hopefully soon distribute in ixdat 0.2.4
|
2025-04-01T06:39:09.515219
| 2024-02-06T18:48:38
|
2121460288
|
{
"authors": [
"masterwendu",
"nlvw"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7156",
"repo": "ixkaito/astro-relative-links",
"url": "https://github.com/ixkaito/astro-relative-links/issues/13"
}
|
gharchive/issue
|
astro dev support
Is it possible to get this to work with astro dev?
I use a project called code-server which proxies ports through a '/proxy/portnumber' URL. So when running, the URL of the homepage looks something like http://localhost:4080/proxy/4321/. Since the proxy port can change, relative paths are needed.
This project works fine when doing astro build then astro preview; however, astro dev doesn't work at all. All of the assets ignore the relative paths and use absolute paths such as http://localhost:4080/node_modules when it should be http://localhost:4080/proxy/4321/node_modules
Hello,
I think a base path should help you:
https://docs.astro.build/en/reference/configuration-reference/#base
|
2025-04-01T06:39:09.557298
| 2022-05-25T18:20:41
|
1248501115
|
{
"authors": [
"j-andrews7",
"jake-steele"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7158",
"repo": "j-andrews7/CRISPRball",
"url": "https://github.com/j-andrews7/CRISPRball/pull/1"
}
|
gharchive/pull-request
|
File Uploading - First Steps
Not fully functional, but first steps toward implementing a file upload functionality.
Great points, @j-andrews7! These are all resolved in the new commit, c8f845c.
File uploads working properly, hurray! Thanks for the help!
|
2025-04-01T06:39:09.568895
| 2024-09-23T14:32:52
|
2542858184
|
{
"authors": [
"NextGenerationHackers",
"j4k0xb"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7159",
"repo": "j4k0xb/webcrack",
"url": "https://github.com/j4k0xb/webcrack/issues/116"
}
|
gharchive/issue
|
BABEL_PARSER_SYNTAX_ERROR / REASONCODE: UNEXPECTEDTOKEN
Describe the bug
Expected Behaviour
To Start the Process
Code
https://raw.githubusercontent.com/NextGenerationHackers/WCJSDECODER/refs/heads/main/Game%20Files/Decoded%20Game%20File%20V1/VIPDecoded.js
Logs
No response
It is invalid syntax; there are many `? .` sequences (a space between ? and .) in the script, e.g.:
[__w9_L3y[_0xd0709e(0x1b0)]]] ? . [__w9_L3y[_0xd0709e(0x18e)]]
|
2025-04-01T06:39:09.605978
| 2017-09-16T21:14:31
|
258259129
|
{
"authors": [
"NBonaparte",
"TonCherAmi",
"flameshikari",
"patrick96"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7160",
"repo": "jaagr/polybar",
"url": "https://github.com/jaagr/polybar/issues/759"
}
|
gharchive/issue
|
Modules gradient issue
My bar has a gradient with two colors. When I used version 3.0.5 the bar looked like this:
But after an aur-git update to version 3.0.5-44-ga682d2a I got this:
Modules have no gradients now. I read the wiki, but there is nothing about it, and I tried some combinations with format-background etc. Any help, please? Should I use one color for the bar?
P. S. Sorry about my English, it isn't my primary language.
There is some color weirdness in the latest commits, the issue is tracked in #639.
To confirm that this is actually the issue, could you post the output, when you run polybar with the -s option (for both versions)
Version 3.0.5:
%{l}%{A5:i3wm-wsprev: A4:i3wm-wsnext: A1:i3wm-wsfocus-1:} ξ
%{A A A} %{A1:thunar / &:A3:terminator --working-directory / &: F#38A1E3}ξ― %{F#252229}140%{F- A A F-} %{A1:thunar ~ &:A3:terminator --working-directory ~ &: F#38A1E3}ξΏ %{F#252229}343%{F- A A F-} %{F#38A1E3}ξ %{F- A1:terminator -x htop &:}42%{A} %{F#38A1E3}ξ %{F- A1:terminator -x htop &:}19%{A} %{F#38A1E3}ξΈ %{F- A1:~/.config/polybar/output.sh --battery:}100%{A}%{c}%{A1:~/.config/polybar/output.sh --date:}03:07:16%{A}%{r}%{A1:volmute: A4:volup: A5:voldown: F#38A1E3}ξ %{F-}61%{A A A} %{F#38A1E3}ξ
%{F-} %{A1:~/.config/polybar/output.sh --interfaces &:A3:terminator -x nmtui &:}<IP_ADDRESS>%{A A} %{A1:xkblayout-state set +1 &: F#38A1E3}ξͺ %{F#252229} 0%{A F-} %{A1:~/.config/polybar/output.sh --dropbox-notification &:A3:thunar ~/Downloads/Dropbox &: F#38A1E3}ξ %{F#252229}R%{A A F-} %{F#38A1E3}ξ² %{F#252229}1%{F-} %{A1:xkblayout-state set +1 &: F#252229}EN%{F#D81A1C A F-}
Version 3.0.5-44-ga682d2a:
%{l}%{A5:i3wm-wsprev: A4:i3wm-wsnext: A1:i3wm-wsfocus-1:} ξ
%{A A A} %{A1:thunar / &:A3:terminator --working-directory / &: F#38A1E3}ξ― %{F#252229}140%{F- A A F-} %{A1:thunar ~ &:A3:terminator --working-directory ~ &: F#38A1E3}ξΏ %{F#252229}343%{F- A A F-} %{F#38A1E3}ξ %{F- A1:terminator -x htop &:}41%{A} %{F#38A1E3}ξ %{F- A1:terminator -x htop &:}22%{A} %{F#38A1E3}ξΈ %{F- A1:~/.config/polybar/output.sh --battery:}100%{A}%{c}%{A1:~/.config/polybar/output.sh --date:}03:05:38%{A}%{r}%{A1:volmute: A4:volup: A5:voldown: F#38A1E3}ξ %{F-}61%{A A A} %{F#38A1E3}ξ
%{F-} %{A1:~/.config/polybar/output.sh --interfaces &:A3:terminator -x nmtui &:}<IP_ADDRESS>%{A A} %{A1:xkblayout-state set +1 &: F#38A1E3}ξͺ %{F#252229} 0%{A F-} %{A1:~/.config/polybar/output.sh --dropbox-notification &:A3:thunar ~/Downloads/Dropbox &: F#38A1E3}ξ %{F#252229}R%{A A F-} %{F#38A1E3}ξ² %{F#252229}1%{F-} %{A1:xkblayout-state set +1 &: F#252229}EN%{F#D81A1C A F-}
See no difference.
Yeah I don't see any either.
I think I have tracked down the issue. Can you locally revert commit https://github.com/jaagr/polybar/commit/0bd8f1f69a8dfdb2f2800a6612c8acb7e6c86ed2 and recompile and test if that fixes it.
Sorry for the delay. I just read some manuals about git, because I'm not so familiar with it. I reverted the commit as you said, recompiled, and it works fine now:
What now?
Now we have figured out what the problem is: the commit you reverted sets the background color to the bar background when rendering a module, and doesn't consider a gradient there.
I will try to fix this as soon as we merge #729 as it also messes with the colors.
OK! Thanks for help by the way!
@patrick96 Any updates on a fix? Just wanted to know if you've started working on it, because as it stands, this is the only issue left on the 3.1.0 milestone.
Sorry, I forgot about this. I think we can release version 3.1.0 without this fix and just schedule it for 3.2.0, it isn't really a critical bug
@patrick96 hey, I seem to have managed to fix this, do you mind if I submit a PR?
@TonCherAmi Yes! Please do.
|
2025-04-01T06:39:09.621030
| 2021-11-18T12:55:33
|
1057324846
|
{
"authors": [
"frenck",
"jabesq"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7161",
"repo": "jabesq/pyatmo",
"url": "https://github.com/jabesq/pyatmo/issues/163"
}
|
gharchive/issue
|
Release v6.2.0?
Can we get a release of this package so we get the latest aiohttp pinning into Home Assistant?
Currently, this package (on the latest release) conflicts with the aiohttp version used by Home Assistant. Releasing a newer version would solve that.
Yes, no problem. I'll do that tonight
v6.2.0 was just released https://pypi.org/project/pyatmo/6.2.0/
|
2025-04-01T06:39:09.630299
| 2018-12-13T00:43:01
|
390470343
|
{
"authors": [
"jamesadevine",
"pelikhan",
"teddyseyed"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7162",
"repo": "jacdac/jacdac",
"url": "https://github.com/jacdac/jacdac/issues/3"
}
|
gharchive/issue
|
Bus death method #1
Summary:
Unplugging a usb connected device while still connected to the jacdac bus seems to kill the bus entirely, until it is replugged, at which point anything else that was connected to the bus no longer connects except for the replugged device.
Hardware Setup:
CPX running a Button service (sample code here).
CPX running an Accelerometer service connected to USB battery, with only ground and tx connected to audio jack (sample code here).
Arcade running logging service.
Reproduction Steps:
Leave services / clients running and connected to the bus.
Unplug CPX Accelerometer from USB, but leave audio jack connected.
The bus has died!
Replug the USB battery into the CPX Accelerometer .
CPX acclerometer is connected to bus, but CPX Button no longer connected to bus.
Thanks Teddy.
Just for my info, are we filing JACDAC issues here @pelikhan @teddyseyed ?
I think this is samd specific from the sounds of it? I certainly didn't robustly test CPX...
I assume there is more than one bus death method, judging by the title? :smile:
Where do you want them?
It's probably best to unify all jacdac issues here, rather than across the billion codal repos! :smile:
V-1 is not supported :smile: I'm sure V0 will bring a whole host of issues.
|
2025-04-01T06:39:09.646097
| 2022-09-08T12:13:11
|
1366193128
|
{
"authors": [
"jackc",
"lafriks"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7163",
"repo": "jackc/pgx-zap",
"url": "https://github.com/jackc/pgx-zap/pull/1"
}
|
gharchive/pull-request
|
Fix zap logging adapter to work with latest pgx v5
Also update zap dependency to latest version
Thanks!
|
2025-04-01T06:39:09.647997
| 2015-03-08T21:39:41
|
60277717
|
{
"authors": [
"DanGoldbach"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7164",
"repo": "jackhou-chromium/bitmapper",
"url": "https://github.com/jackhou-chromium/bitmapper/issues/91"
}
|
gharchive/issue
|
Cursor isn't properly aligned with canvas
The black box around the drawing cursor isn't aligned properly with what's drawn. I think the cursor should be shifted one pixel to the left, so the black box is centered around the area that gets painted. See attached image.
...the image that I meant to upload was
|
2025-04-01T06:39:09.769963
| 2022-08-29T01:04:45
|
1353538055
|
{
"authors": [
"jacksund",
"scott-materials"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7165",
"repo": "jacksund/simmate",
"url": "https://github.com/jacksund/simmate/issues/262"
}
|
gharchive/issue
|
Merge output from evolutionary search into one document
Describe the desired feature
As an evolutionary search :slot_machine: progresses, a series of summary files are produced, including:
convergence__staged_relaxations.html
convergence__time_vs_energy_per_atom.html
distribution_of_energy_per_atom.html
distribution_of_subworkflow_times.html
history_of_the_best_individuals.md
Rather than writing these to five different files, I think it'd be useful if they were all written to a single html file. That way I only have to bother to open/close one file, not five.
Additional context
No response
To-do items
notes by @jacksund :
[x] condense workflow outputs to database objects
[x] streamline Figure registration from a database table/object
[x] allow figures to be written to file and html div with single implementation
[x] add default method for writing summary output from db object
[x] link to website URL for viewing all results on a single page
[ ] add templates and views for structure-prediction flows
Agreed, I'd like to move in this direction as well.
I also think the same can be requested for the other summary files too (i.e. merge all outputs into a single "report" file). If we're building a larger html file, we might as well include everything we need. Keep running with this idea, and we end up with a website/django view.
Ideally, I think the only "output" files should be (1) a URL link to the website view (even to a local server link) and (2) a "static html" of the website view. This addresses many output files and moves toward a single "report" html (which is also a view in the website).
I've planned on doing this for all workflows (not just the evo search) but haven't had much time to put towards website templates. However, I can set up the basics to address this issue. Since you've built django templates before, you'll easily be able to update things once I have the main template set up.
Below are just notes on how I would write a static html locally for the view
from django.template.loader import get_template
template = get_template(template_src)
# backend templates returned by get_template() take a plain dict, not a Context
html = template.render(context_dict)
https://docs.djangoproject.com/en/4.1/ref/templates/api/#loading-a-template
https://docs.djangoproject.com/en/4.1/ref/templates/api/#rendering-a-context
Also see the render_to_string method:
from django.template.loader import render_to_string
rendered = render_to_string('my_template.html', {'foo': 'bar'})
https://docs.djangoproject.com/en/4.1/topics/templates/#module-django.template.loader
https://docs.djangoproject.com/en/4.1/topics/templates/#django.template.loader.render_to_string
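Putting those notes together, a minimal sketch for dumping a static html report (assuming Django settings are already configured; the template name and context key below are placeholders, not actual Simmate names):
from django.template.loader import render_to_string

html = render_to_string("report_template.html", {"calculation": calculation})
with open("simmate_report.html", "w") as report_file:
    report_file.write(html)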
@scott-materials
I've streamlined how results are used when writing output files + making the web UI.
While the last few PRs might seem like overkill, they're pretty important for streamlining our code -- and not having to implement things multiple times & in multiple places.
Before these PRs, the processes were isolated for results+output files and database+webUI. This was easier when starting out Simmate, but it's now reached the point where maintaining two implementations was too difficult.
Here's how it looked before:
graph LR
A[Calculation run] --> B[Results];
B --> C[Save to output files];
B --> D[Convert to database object];
D --> E[Save to database];
E --> F[Build into html template];
Here's how things work now:
graph LR
A[Calculation run] --> B[Results];
B --> C[Convert to database object];
C --> D[Save to database];
D --> E[Save to output files];
D --> F[Build into html template];
adding new plots
Now if you have a script that builds a plot from a database table, I can immediately add it to both output files AND the website UI. This will be very important as Simmate grows. How to build a new plot:
from simmate.visualization.plotting import PlotlyFigure
class MyNewPlot(PlotlyFigure):
def get_plot(table, chemical_system,):
data = table.filter(...).all() # grab data from the table
# .... make a plotly figure and return the object ....
return plot
end goal
Ultimately, there will be a link to the website pages, but no local "report" file. In the simmary_summary.yaml file will be everything you need -- URL link, database table, database id.
Saving a local html has complications due to...
links and static assets will not work
building the html will be slow and affect workflow speed
building a minimal template falls back to the original issue (maintaining two implementations of the same thing)
So this issue will be closed once there is a website view for structure-prediction workflows. There won't be a single file as you originally requested, but there will be the (even better) option of viewing things in the website UI.
Impressive! I like it!
@scott-materials I haven't done any html formatting, but everything is in a single page now. You can check out the html here. In the template, calculation is the database table object (so FixedCompositionSearch.objects.get(id=123))
|
2025-04-01T06:39:09.786860
| 2022-08-13T16:39:42
|
1338005028
|
{
"authors": [
"demicuz",
"jackyzha0",
"sspaeti"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7166",
"repo": "jackyzha0/quartz",
"url": "https://github.com/jackyzha0/quartz/issues/176"
}
|
gharchive/issue
|
Links to notes with apostrophes ' in the name are inactive
Describe the bug
If you (wiki)link a note that has a ' in the name, it will be shown as inactive. It can be searched for, but the note's address doesn't reflect the ' either (http://localhost:1313/Links-inactive/).
To Reproduce
Create a note with ' in its name
Link it from another note (e.g. [[what's happening?]])
Screenshots
You can also see doubled letters in the search. This happens sometimes, but I haven't figured out when exactly.
Desktop:
OS: happens when I build locally (HUGO v0.101.0, latest hugo-obsidian) and through Quartz GitHub action.
Browser: tested on Brave (mobile) and Firefox (desktop)
Suspect this is due to ' being converted to ’ on Hugo render, will look into it
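A tiny sketch of the suspected mismatch, assuming slugs are derived naively from note names (the real resolution logic lives in hugo-obsidian, which is Go):
def slug(name: str) -> str:
    return name.lower().replace(" ", "-")

# The wikilink target keeps the straight apostrophe, while Hugo's smart
# quotes render a curly one, so the two slugs no longer match.
assert slug("what's happening?") != slug("what\u2019s happening?")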
The same goes for `.`: I had notes that were called plural.sh or restack.io, and these were not working as file names. The workaround was renaming them.
|
2025-04-01T06:39:09.792409
| 2023-03-21T16:01:36
|
1634231927
|
{
"authors": [
"hubtub2",
"jacobgil"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7167",
"repo": "jacobgil/confidenceinterval",
"url": "https://github.com/jacobgil/confidenceinterval/issues/1"
}
|
gharchive/issue
|
Validation
Great work!
How can I trust the confidence intervals it returns - did you do any kind of validation - maybe comparing results with the standard R implementations?
Would be cool to have like a (small) test report for some examples - in case you are in a regulated environment.
Hi,
The validation I did do was comparing the results of the bootstrap method with the analytical implementation.
I didn't compare with R (i'm also not an R user).
A test against R is a very good idea!
What R functions/packages do you recommend testing this against?
I guess it would be cool to have a unit test, where the results are compared against R,
maybe using https://rpy2.github.io
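Something like this, perhaps: a hedged sketch where ci_func stands in for whichever confidenceinterval function is under test (a placeholder, not the package's actual API):
import rpy2.robjects as ro

def r_binom_ci(successes: int, trials: int):
    # Ask R for the Clopper-Pearson interval via binom.test().
    result = ro.r(f"binom.test({successes}, {trials})")
    low, high = list(result.rx2("conf.int"))
    return low, high

def check_against_r(ci_func, successes=85, trials=100, tol=1e-3):
    r_low, r_high = r_binom_ci(successes, trials)
    py_low, py_high = ci_func(successes, trials)
    assert abs(py_low - r_low) < tol
    assert abs(py_high - r_high) < tol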
(Btw, Any contribution around this would be much appreciated!)
|
2025-04-01T06:39:09.796986
| 2020-08-19T02:43:02
|
681503588
|
{
"authors": [
"Dominique-github",
"jacobkrantz"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7168",
"repo": "jacobkrantz/VLN-CE",
"url": "https://github.com/jacobkrantz/VLN-CE/issues/2"
}
|
gharchive/issue
|
Hope to provide more detailed content about embeddings.json.gz
Hello, I am very interested in your research.
I hope to get the details about embeddings.json.gz: the correspondence among words - word embedding - instruction_tokens.
I would be very grateful if I could get your reply.
Hi @Dominique-github,
Thank you for your interest! In the data format displayed here https://jacobkrantz.github.io/vlnce/data, instruction_text is the raw R2R instruction string. We then performed some preprocessing to derive instruction_tokens:
We used this tokenization function https://github.com/facebookresearch/habitat-lab/blob/v0.1.5/habitat/datasets/utils.py#L25
to get a list of words for each instruction. We then checked if each word was present in the pre-trained GloVe embedding file glove.6B.50d.txt available here https://nlp.stanford.edu/projects/glove/. If the word was present in the GloVe file, we mapped it to a unique integer >= 2. If it was not, we mapped that word to a value of 1 (representing unknown). We padded each instruction with values of 0 to reach a length of 200. The mapping between word and integer can be found in the instruction_vocab accompanying each dataset split. We computed this vocab for all splits together (train, val_seen, val_unseen), so instruction_vocab is identical for each split.
The embeddings file is a list of word embeddings. Indices in the list correspond to mapped word integer tokens in instruction_tokens and instruction_vocab. For example, embeddings[0] is the 50-dimensional zero vector for the pad token. We set the embedding for unknown words (index 1) to the mean of the other word embeddings that exist in R2R. All the rest of the embeddings are from glove.6B.50d.txt.
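For anyone following along, a rough sketch of that preprocessing (helper names are hypothetical, not the actual VLN-CE scripts):
import numpy as np

def tokenize_instruction(words, word2int, max_len=200):
    # 0 = pad, 1 = unknown; words present in GloVe map to integers >= 2.
    tokens = [word2int.get(word, 1) for word in words]
    return tokens + [0] * (max_len - len(tokens))

def build_embeddings(word2int, glove, dim=50):
    # Row 0 stays the zero vector (pad); row 1 is the mean of the known
    # R2R embeddings (unknown); the rest come straight from GloVe.
    table = np.zeros((max(word2int.values()) + 1, dim), dtype=np.float32)
    known = [glove[w] for w in word2int if w in glove]
    table[1] = np.mean(known, axis=0)
    for word, idx in word2int.items():
        if word in glove:
            table[idx] = glove[word]
    return table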
Thank you for your detailed explanation, it is of great help to me.
|
2025-04-01T06:39:09.799399
| 2017-10-02T17:11:37
|
262156407
|
{
"authors": [
"jacobpalm"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7169",
"repo": "jacobpalm/costa",
"url": "https://github.com/jacobpalm/costa/issues/3"
}
|
gharchive/issue
|
DESKLINK.DAT is not created if missing
If DESKLINK.DAT is missing, it should be created automatically with the default icons for built-in apps, since there is no way to add these from within the UI.
Possible solution:
Get the data of the current standard icons in DESKLINK.DAT.
Make a SUB that resets the icon arrays, then adds the standard elements to the array. At the end of the SUB, call the SaveLink SUB. Make sure DESKLINK.DAT will be deleted - if this is not already in SaveLink, do it in the new SUB before SaveLink is called.
The LoadLink SUB already checks for errors. Simply call the new SUB when errors are found.
Implemented in commit d5b8672
|
2025-04-01T06:39:09.818597
| 2024-10-03T17:56:53
|
2564664048
|
{
"authors": [
"aidenszolosi",
"jacquesh"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7170",
"repo": "jacquesh/foo_openlyrics",
"url": "https://github.com/jacquesh/foo_openlyrics/issues/411"
}
|
gharchive/issue
|
LRCLIB causing crash despite being disabled
Steps to reproduce
Manually edit lyrics for a song
Wait for the backend to attempt to make a search request to LRCLIB
Experience crash
Expected behavior
foobar2000 version: 2.2
OpenLyrics version: 1.11
Debug logs
Illegal operation:
Code: C0000005h, flags: 00000000h, address: 00007FFE5A536107h
Access violation, operation: read, address:<PHONE_NUMBER>000090h
Call path not available.
Code bytes (00007FFE5A536107h):
00007FFE5A5360C7h: E6 FF FF E9 10 FD FF FF 90 68 5F 00 00 1A 5D 00
00007FFE5A5360D7h: 00 1A 5D 00 00 1A 5D 00 00 20 5D 00 00 1A 5D 00
00007FFE5A5360E7h: 00 CC CC CC CC CC CC CC CC 40 53 48 83 EC 30 48
00007FFE5A5360F7h: 8B D9 48 85 C9 74 1E 0F B6 C3 24 0F 3C 01 74 1E
00007FFE5A536107h: 81 39 B0 01 00 00 72 0D 81 79 04 51 55 55 55 0F
00007FFE5A536117h: 84 E0 00 00 00 33 C0 48 83 C4 30 5B C3 CC 48 C1
00007FFE5A536127h: EB 04 48 83 FB 46 73 ED 48 89 6C 24 48 48 89 74
00007FFE5A536137h: 24 50 48 89 7C 24 58 4C 89 7C 24 20 4C 89 74 24
Registers:
RAX:<PHONE_NUMBER>000000, RBX:<PHONE_NUMBER>000090, RCX:<PHONE_NUMBER>000090, RDX:<PHONE_NUMBER>000000
RSI: 00000010E2CFF4B0, RDI: 00007FFE5A551000, RBP: 00000010E2CFF4E0, RSP: 00000010E2CFF350
Timestamp:
232141ms
Crash location:
Module: bcrypt
Offset: 6107h
Symbol: "BCryptGetProperty" (+507h)
Loaded modules:
foobar2000 loaded at 00007FF62F130000h - 00007FF62F57D000h
ntdll loaded at 00007FFE5D7E0000h - 00007FFE5DA43000h
KERNEL32 loaded at 00007FFE5C090000h - 00007FFE5C157000h
KERNELBASE loaded at 00007FFE5AC90000h - 00007FFE5B041000h
SHLWAPI loaded at 00007FFE5CB30000h - 00007FFE5CB8D000h
msvcrt loaded at 00007FFE5B6E0000h - 00007FFE5B789000h
COMCTL32 loaded at 00007FFE41ED0000h - 00007FFE42160000h
WINMM loaded at 00007FFE49D50000h - 00007FFE49D86000h
GDI32 loaded at 00007FFE5B790000h - 00007FFE5B7BA000h
USER32 loaded at 00007FFE5CD60000h - 00007FFE5CF23000h
ucrtbase loaded at 00007FFE5B390000h - 00007FFE5B4DB000h
win32u loaded at 00007FFE5B050000h - 00007FFE5B077000h
gdi32full loaded at 00007FFE5B260000h - 00007FFE5B385000h
ADVAPI32 loaded at 00007FFE5C930000h - 00007FFE5C9E2000h
msvcp_win loaded at 00007FFE5ABE0000h - 00007FFE5AC83000h
UxTheme loaded at 00007FFE57FD0000h - 00007FFE5807D000h
sechost loaded at 00007FFE5BD60000h - 00007FFE5BE06000h
combase loaded at 00007FFE5B870000h - 00007FFE5BBEC000h
RPCRT4 loaded at 00007FFE5D0B0000h - 00007FFE5D1C6000h
SHELL32 loaded at 00007FFE5C1B0000h - 00007FFE5C8A8000h
ole32 loaded at 00007FFE5BED0000h - 00007FFE5C069000h
OLEAUT32 loaded at 00007FFE5BC80000h - 00007FFE5BD56000h
CRYPT32 loaded at 00007FFE5B4E0000h - 00007FFE5B656000h
zlib1 loaded at 00007FFE57A70000h - 00007FFE57A8D000h
sqlite3 loaded at 00007FFE1DB70000h - 00007FFE1DC64000h
shared loaded at 00007FFE3C260000h - 00007FFE3C288000h
MSVCP140 loaded at 00007FFE3C1D0000h - 00007FFE3C25D000h
MSVCP140_ATOMIC_WAIT loaded at 00007FFE3C1B0000h - 00007FFE3C1C4000h
MSIMG32 loaded at 00007FFE42340000h - 00007FFE42348000h
OLEACC loaded at 00007FFE41C30000h - 00007FFE41CAC000h
imagehlp loaded at 00007FFE5BEA0000h - 00007FFE5BEC0000h
AVRT loaded at 00007FFE54D00000h - 00007FFE54D0B000h
COMDLG32 loaded at 00007FFE5D1D0000h - 00007FFE5D2BD000h
shcore loaded at 00007FFE5C9F0000h - 00007FFE5CAC3000h
gdiplus loaded at 00007FFE42170000h - 00007FFE4233A000h
WINHTTP loaded at 00007FFE54750000h - 00007FFE54876000h
VCRUNTIME140_1 loaded at 00007FFE535C0000h - 00007FFE535CC000h
Secur32 loaded at 00007FFE57BE0000h - 00007FFE57BED000h
VCRUNTIME140 loaded at 00007FFE3C170000h - 00007FFE3C18E000h
dbghelp loaded at 00007FFE47740000h - 00007FFE47981000h
dbgcore loaded at 00007FFE28E60000h - 00007FFE28E99000h
SSPICLI loaded at 00007FFE59DD0000h - 00007FFE59E18000h
IMM32 loaded at 00007FFE5D760000h - 00007FFE5D78F000h
kernel.appcore loaded at 00007FFE59B20000h - 00007FFE59B3A000h
bcryptPrimitives loaded at 00007FFE5B140000h - 00007FFE5B1D9000h
windows.storage loaded at 00007FFE58910000h - 00007FFE5913B000h
MSCTF loaded at 00007FFE5CF30000h - 00007FFE5D08A000h
atlthunk loaded at 00007FFE19630000h - 00007FFE1963D000h
textinputframework loaded at 00007FFE49300000h - 00007FFE49444000h
TextShaping loaded at 00007FFE25540000h - 00007FFE255EC000h
CoreMessaging loaded at 00007FFE57620000h - 00007FFE57745000h
CoreUIComponents loaded at 00007FFE53130000h - 00007FFE53413000h
wintypes loaded at 00007FFE53C70000h - 00007FFE53DD8000h
CRYPTBASE loaded at 00007FFE5A310000h - 00007FFE5A31C000h
foo_ui_std loaded at 00007FFDDB730000h - 00007FFDDB96F000h
dwmapi loaded at 00007FFE584A0000h - 00007FFE584CE000h
foo_fileops loaded at 00007FFDF2E90000h - 00007FFDF2F18000h
foo_quicksearch loaded at<PHONE_NUMBER>000000h -<PHONE_NUMBER>08E000h
WindowsCodecs loaded at 00007FFE57DA0000h - 00007FFE57FCA000h
foo_metronome loaded at 00007FFE21E90000h - 00007FFE21E9E000h
foo_play_next loaded at 00007FFE1FA70000h - 00007FFE1FA7F000h
foo_dsp_utility loaded at 00007FFE14370000h - 00007FFE14398000h
foo_uie_lyrics3 loaded at 00000123AB100000h - 00000123AB1B0000h
WININET loaded at 00007FFE40490000h - 00007FFE40713000h
foo_quicktag loaded at 00007FFE13530000h - 00007FFE13555000h
foo_dsp_std loaded at 00007FFDF99A0000h - 00007FFDF99E2000h
foo_openlyrics loaded at 00007FFDDAC10000h - 00007FFDDADFF000h
d2d1 loaded at 00007FFE565B0000h - 00007FFE56BEA000h
bcrypt loaded at 00007FFE5A530000h - 00007FFE5A556000h
d3d11 loaded at 00007FFE56BF0000h - 00007FFE56E4D000h
DWrite loaded at 00007FFE56340000h - 00007FFE565A5000h
dxgi loaded at 00007FFE58250000h - 00007FFE58371000h
directxdatabasehelper loaded at 00007FFE58100000h - 00007FFE58157000h
foo_input_std loaded at 00007FFDDA9C0000h - 00007FFDDAC0A000h
MSACM32 loaded at 00007FFDF9B80000h - 00007FFDF9BA1000h
avformat-fb2k-60 loaded at 00007FFDF9B40000h - 00007FFDF9B71000h
avcodec-fb2k-60 loaded at 00007FFDDA600000h - 00007FFDDA828000h
avutil-fb2k-58 loaded at 00007FFDDA400000h - 00007FFDDA5F7000h
foo_audioscrobbler loaded at 00000123AB1B0000h - 00000123AB1D6000h
foo_run_main loaded at 00007FFE1F6B0000h - 00007FFE1F6BE000h
foo_freedb2 loaded at 00007FFDF9910000h - 00007FFDF995B000h
foo_beefweb loaded at 00007FFDDA2C0000h - 00007FFDDA3FF000h
WS2_32 loaded at 00007FFE5C8B0000h - 00007FFE5C924000h
MSWSOCK loaded at 00007FFE5A030000h - 00007FFE5A098000h
foo_ui_columns loaded at 00007FFDC2F20000h - 00007FFDC335B000h
USP10 loaded at 00007FFDF3780000h - 00007FFDF3799000h
urlmon loaded at 00007FFE40A40000h - 00007FFE40C18000h
iertutil loaded at 00007FFE40750000h - 00007FFE40A12000h
srvcli loaded at 00007FFE40720000h - 00007FFE40749000h
netutils loaded at 00007FFE593E0000h - 00007FFE593ED000h
foo_dsp_eq loaded at 00007FFDF1D90000h - 00007FFDF1E18000h
foo_vis_spectrum_analyzer loaded at 00007FFDF1B80000h - 00007FFDF1BF5000h
foo_dsp_effect loaded at 00000123AB250000h - 00000123AB2C3000h
foo_unpack loaded at 00007FFDEFAF0000h - 00007FFDEFB87000h
foo_vis_milk2 loaded at 00007FFDB9C10000h - 00007FFDBA064000h
D3DCOMPILER_47 loaded at 00007FFE55E90000h - 00007FFE5630F000h
cryptsp loaded at 00007FFE5A2F0000h - 00007FFE5A30C000h
foo_queue_viewer loaded at 00007FFDF36D0000h - 00007FFDF3729000h
foo_whatsnew loaded at 00007FFDF31F0000h - 00007FFDF3226000h
foo_uie_typefind loaded at 00000123AB420000h - 00000123AB463000h
foo_taskbar_playback_progress_b loaded at 00007FFE1DAD0000h - 00007FFE1DADC000h
clbcatq loaded at 00007FFE5B7C0000h - 00007FFE5B868000h
explorerframe loaded at 00007FFE19710000h - 00007FFE19998000h
foo_uie_console loaded at 00007FFDF1D40000h - 00007FFDF1D85000h
foo_uie_tagger_mod loaded at 00000123AB4B0000h - 00000123AB52E000h
foo_uie_eslyric loaded at 00007FFDB95F0000h - 00007FFDB9C04000h
VERSION loaded at 00007FFE54C90000h - 00007FFE54C9B000h
foo_httpcontrol loaded at 00007FFDEB790000h - 00007FFDEB803000h
MPR loaded at 00007FFE3BA90000h - 00007FFE3BAB1000h
foo_converter loaded at 00007FFDDA8E0000h - 00007FFDDA9B7000h
foo_wave_minibar_mod loaded at 00007FFDEB880000h - 00007FFDEB8D2000h
foo_out_upnp loaded at 00007FFDEB640000h - 00007FFDEB6E2000h
IPHLPAPI loaded at 00007FFE59420000h - 00007FFE59451000h
foo_playcount loaded at 00007FFDF1B40000h - 00007FFDF1B80000h
foo_dsp_audiostretch loaded at 00007FFE1BD20000h - 00007FFE1BD36000h
foo_loop loaded at 00007FFE1D9C0000h - 00007FFE1D9CE000h
foo_scrobble loaded at 00007FFDD9020000h - 00007FFDD9145000h
CONCRT140 loaded at 00007FFDE4650000h - 00007FFDE469D000h
DPAPI loaded at 00007FFE5A920000h - 00007FFE5A92A000h
Windows.UI loaded at 00007FFE418C0000h - 00007FFE41A14000h
Windows.UI.Immersive loaded at 00007FFE27330000h - 00007FFE2747C000h
twinapi.appcore loaded at 00007FFE4EB40000h - 00007FFE4ED77000h
profapi loaded at 00007FFE5AB00000h - 00007FFE5AB29000h
dataexchange loaded at 00007FFE263A0000h - 00007FFE263FA000h
MMDevApi loaded at 00007FFE4F2C0000h - 00007FFE4F350000h
DEVOBJ loaded at 00007FFE5A860000h - 00007FFE5A88D000h
cfgmgr32 loaded at 00007FFE5A8B0000h - 00007FFE5A90F000h
NSI loaded at 00007FFE5D0A0000h - 00007FFE5D0AA000h
dhcpcsvc6 loaded at 00007FFE54D10000h - 00007FFE54D2E000h
dhcpcsvc loaded at 00007FFE54CD0000h - 00007FFE54CF5000h
DNSAPI loaded at 00007FFE594B0000h - 00007FFE595D1000h
tiptsf loaded at 00007FFDE1FA0000h - 00007FFDE203F000h
msiltcfg loaded at 00007FFE3FC20000h - 00007FFE3FC2B000h
msi loaded at 00007FFE3B650000h - 00007FFE3B997000h
sxs loaded at 00007FFE5AA30000h - 00007FFE5AAD1000h
UIAutomationCore loaded at 00007FFE29770000h - 00007FFE29BA2000h
dxcore loaded at 00007FFE580B0000h - 00007FFE580F0000h
nvldumdx loaded at 00007FFE53420000h - 00007FFE534E4000h
msasn1 loaded at 00007FFE5A360000h - 00007FFE5A373000h
cryptnet loaded at 00007FFE52FC0000h - 00007FFE52FFB000h
wldp loaded at 00007FFE5A3D0000h - 00007FFE5A42D000h
drvstore loaded at 00007FFE52E40000h - 00007FFE52FB1000h
wintrust loaded at 00007FFE5B1E0000h - 00007FFE5B25A000h
PP-UWP-Interop loaded at 00007FFE1CFC0000h - 00007FFE1CFCB000h
vccorlib140 loaded at 00007FFDE2420000h - 00007FFDE2475000h
Windows.Media.Playback.Backgrou loaded at 00007FFDC3640000h - 00007FFDC3716000h
MFPlat loaded at 00007FFE54FF0000h - 00007FFE551F6000h
RTWorkQ loaded at 00007FFE54DB0000h - 00007FFE54DE6000h
Windows.Media.MediaControl loaded at 00007FFDD9150000h - 00007FFDD91DD000h
MFMediaEngine loaded at 00007FFDB8B80000h - 00007FFDB8FA4000h
powrprof loaded at 00007FFE599D0000h - 00007FFE59A1E000h
XmlLite loaded at 00007FFE55200000h - 00007FFE5523B000h
UMPDC loaded at 00007FFE599B0000h - 00007FFE599C4000h
AUDIOSES loaded at 00007FFE49550000h - 00007FFE49707000h
Windows.Media.Devices loaded at 00007FFE472D0000h - 00007FFE473BE000h
rsaenh loaded at 00007FFE59A80000h - 00007FFE59AB8000h
Windows.Media.Playback.ProxyStu loaded at 00007FFE13890000h - 00007FFE138AA000h
nvgpucomp64 loaded at 00007FFE4F730000h - 00007FFE52440000h
OneCoreUAPCommonProxyStub loaded at 00007FFE4E280000h - 00007FFE4E8BE000h
nvwgf2umx loaded at 00007FFE49D90000h - 00007FFE4E20C000h
nvspcap64 loaded at 00007FFE25050000h - 00007FFE2534D000h
ntmarta loaded at 00007FFE59C40000h - 00007FFE59C75000h
nvppex loaded at 00007FFE24EF0000h - 00007FFE25045000h
resourcepolicyclient loaded at 00007FFE58530000h - 00007FFE58544000h
PROPSYS loaded at 00007FFE55750000h - 00007FFE55844000h
rasadhlp loaded at 00007FFE4F5F0000h - 00007FFE4F5FB000h
fwpuclnt loaded at 00007FFE53890000h - 00007FFE53916000h
WINNSI loaded at 00007FFE57D10000h - 00007FFE57D1E000h
webio loaded at 00007FFE39600000h - 00007FFE396C3000h
schannel loaded at 00007FFE598E0000h - 00007FFE599A4000h
ncrypt loaded at 00007FFE5A4F0000h - 00007FFE5A520000h
NTASN1 loaded at 00007FFE5A4A0000h - 00007FFE5A4DF000h
ncryptsslp loaded at 00007FFE39FA0000h - 00007FFE39FCD000h
Stack dump analysis:
Address: 00007FFE3C267F1Eh (shared+7F1Eh), symbol: "uCallStackTracker::uCallStackTracker" (+13Eh)
Address: 00007FF62F4DCFA8h (foobar2000+3ACFA8h)
Address: 00007FFE5A534E40h (bcrypt+4E40h), symbol: "BCryptCloseAlgorithmProvider" (+30h)
Address: 00007FFE5B39DDABh (ucrtbase+DDABh), symbol: "free_base" (+1Bh)
Address: 00007FF62F4DCFA8h (foobar2000+3ACFA8h)
Address: 00007FFDDAC59ED2h (foo_openlyrics+49ED2h), symbol: "run_mvtf_tests" (+92B2h)
Address: 00007FFDDAC5B000h (foo_openlyrics+4B000h), symbol: "run_mvtf_tests" (+A3E0h)
Address: 00007FF62F4DCFA8h (foobar2000+3ACFA8h)
Address: 00007FFDDAC5B066h (foo_openlyrics+4B066h), symbol: "run_mvtf_tests" (+A446h)
Address: 00007FFDDADD7DC8h (foo_openlyrics+1C7DC8h), symbol: "foobar2000_get_interface" (+14EE98h)
Address: 00007FFE5B4C88B0h (ucrtbase+1388B0h), symbol: "mbcasemap" (+D0h)
Address: 00007FFE5B3B7F19h (ucrtbase+27F19h), symbol: "malloc_base" (+39h)
Address: 00007FFE5B4C88B0h (ucrtbase+1388B0h), symbol: "mbcasemap" (+D0h)
Address: 00007FFDDACD1F7Bh (foo_openlyrics+C1F7Bh), symbol: "foobar2000_get_interface" (+4904Bh)
Address: 00007FFDDACD1F7Bh (foo_openlyrics+C1F7Bh), symbol: "foobar2000_get_interface" (+4904Bh)
Address: 00007FFDDAC2CF5Eh (foo_openlyrics+1CF5Eh), symbol: "cJSON_free" (+1406Eh)
Address: 00007FFDDAC2C86Dh (foo_openlyrics+1C86Dh), symbol: "cJSON_free" (+1397Dh)
Address: 00007FFDDAC2C2EDh (foo_openlyrics+1C2EDh), symbol: "cJSON_free" (+133FDh)
Address: 00007FF62F4DCFA8h (foobar2000+3ACFA8h)
Address: 00007FF62F4DD0C8h (foobar2000+3AD0C8h)
Address: 00007FFDDADD7DC8h (foo_openlyrics+1C7DC8h), symbol: "foobar2000_get_interface" (+14EE98h)
Address: 00007FFDDADD7DC8h (foo_openlyrics+1C7DC8h), symbol: "foobar2000_get_interface" (+14EE98h)
Address: 00007FFDDAC5B000h (foo_openlyrics+4B000h), symbol: "run_mvtf_tests" (+A3E0h)
Address: 00007FFDDADD7DC8h (foo_openlyrics+1C7DC8h), symbol: "foobar2000_get_interface" (+14EE98h)
Address: 00007FFDDAC3168Eh (foo_openlyrics+2168Eh), symbol: "cJSON_free" (+1879Eh)
Address: 00007FF62F4DCFA8h (foobar2000+3ACFA8h)
Address: 00007FF62F4DCFA8h (foobar2000+3ACFA8h)
Address: 00007FF62F4DD0C8h (foobar2000+3AD0C8h)
Address: 00007FFE5B3B56EFh (ucrtbase+256EFh), symbol: "towlower_l" (+E4Fh)
Address: 00007FFDDAC9206Fh (foo_openlyrics+8206Fh), symbol: "foobar2000_get_interface" (+913Fh)
Address: 00007FFE5B3A4EA0h (ucrtbase+14EA0h), symbol: "wcsrchr" (+1F0h)
Address: 00007FFE5C0BDBE7h (KERNEL32+2DBE7h), symbol: "BaseThreadInitThunk" (+17h)
Address: 00007FFE5D865A4Ch (ntdll+85A4Ch), symbol: "RtlUserThreadStart" (+2Ch)
Address: 00007FFE5ADAD360h (KERNELBASE+11D360h), symbol: "UnhandledExceptionFilter" (+0h)
Environment:
App: foobar2000 v2.2 preview 2024-09-11
Arch: x64
UI: Columns UI 2.1.0
Components:
Core (2024-09-11 14:11:10 UTC)
foobar2000 core 2.2 preview 2024-09-11
foo_audioscrobbler (2022-09-06 23:33:52 UTC)
Audioscrobbler 1.5.0
foo_beefweb (2023-09-03 13:10:02 UTC)
Beefweb Remote Control 0.8
foo_converter (2024-09-11 14:11:42 UTC)
Converter 2.2 preview 2024-09-11
foo_dsp_audiostretch (2023-04-21 17:50:58 UTC)
Audio-Stretch 0.2
foo_dsp_effect (2024-05-03 06:52:38 UTC)
Effect DSP 0.51
foo_dsp_eq (2024-09-11 14:11:46 UTC)
Equalizer 1.2.3
foo_dsp_std (2024-09-11 14:11:48 UTC)
Standard DSP Array 2.2 preview 2024-09-11
foo_dsp_utility (2023-02-23 23:27:50 UTC)
Utility DSP Array 1.3.2
foo_fileops (2024-09-11 14:11:52 UTC)
File Operations 2.2 preview 2024-09-11
foo_freedb2 (2024-09-11 14:11:56 UTC)
Online Tagger 0.10
foo_httpcontrol (2023-05-21 12:54:40 UTC)
HTTP Control 0.97.28
foo_input_std (2024-09-11 14:11:38 UTC)
CD Audio Decoder 2.2 preview 2024-09-11
FFmpeg Decoders 6.0
FLAC Decoder 1.4.3
Monkey's Audio Decoder 10.61
Opus Decoder 1.5.2
Standard Input Array 2.2 preview 2024-09-11
WavPack Decoder 5.7.0
foo_loop (2023-04-30 20:11:30 UTC)
Loop 1.6
foo_metronome (2022-10-22 21:06:24 UTC)
Metronome 1.2
foo_openlyrics (2024-09-06 00:28:24 UTC)
OpenLyrics 1.11
foo_out_upnp (2022-08-29 21:33:36 UTC)
UPnP MediaRenderer Output 1.4
foo_play_next (2023-03-16 19:06:30 UTC)
Play Next 0.2.3
foo_playcount (2023-03-14 19:04:18 UTC)
Playback Statistics 3.1.5
foo_queue_viewer (2023-04-28 03:43:14 UTC)
Queue Viewer 1.0.22
foo_quicksearch (2024-06-17 16:09:24 UTC)
Quick Search Toolbar 3.9
foo_quicktag (2022-09-22 23:42:30 UTC)
Quick Tagger 1.1.1
foo_run_main (2022-11-14 00:16:56 UTC)
Run Main 1.0.2
foo_scrobble (2022-09-06 04:43:00 UTC)
Scrobble <IP_ADDRESS>56
foo_taskbar_playback_progress_bar (2022-09-25 14:01:16 UTC)
Taskbar Playback Progress Bar 1.1.3
foo_ui_columns (2023-09-27 02:19:32 UTC)
Columns UI 2.1.0
foo_ui_std (2024-09-11 14:11:24 UTC)
Album List 2.2 preview 2024-09-11
Decoding Speed Test 2.2 preview 2024-09-11
Default User Interface 2.2 preview 2024-09-11
File Integrity Verifier 2.2 preview 2024-09-11
foo_uie_console (2023-05-07 01:46:38 UTC)
Console panel 3.0.0
foo_uie_eslyric (2023-12-25 02:32:36 UTC)
ESLyric <IP_ADDRESS>8 (Beta)
foo_uie_lyrics3 (2023-11-18 15:53:36 UTC)
Lyric Show Panel 3 0.6
foo_uie_tagger_mod (2023-04-22 16:35:44 UTC)
Tagger Panel 2.0.0
foo_uie_typefind (2023-11-03 00:43:42 UTC)
Typefind 0.4.0
foo_unpack (2024-09-11 14:12:04 UTC)
ZIP/GZIP/RAR/7-Zip Reader 2.2 preview 2024-09-11
foo_vis_milk2 (2024-09-15 01:09:10 UTC)
MilkDrop 2 Visualisation 0.1.0-beta
foo_vis_spectrum_analyzer (2024-04-18 01:50:24 UTC)
Spectrum Analyzer <IP_ADDRESS>
foo_wave_minibar_mod (2024-01-16 17:24:36 UTC)
Waveform Minibar (mod) 1.2.58
foo_whatsnew (2023-05-04 18:05:24 UTC)
Feature Watcher 1.1.2
Recent events:
[36000ms] INFO-OpenLyrics: Save file name format '[%artist% - ][%title%]' with directory class 'ConfigDirectory' evaluated to 'file://C:\Users\abudd\AppData\Roaming\foobar2000-v2\lyrics\sewerperson - CHAPTER9_HOMINUS NOCTURNA (SKELLINGTON)'
[36000ms] INFO-OpenLyrics: Querying for lyrics in file://C:\Users\abudd\AppData\Roaming\foobar2000-v2\lyrics\sewerperson - CHAPTER9_HOMINUS NOCTURNA (SKELLINGTON).lrc...
[36000ms] INFO-OpenLyrics: Querying for lyrics in file://C:\Users\abudd\AppData\Roaming\foobar2000-v2\lyrics\sewerperson - CHAPTER9_HOMINUS NOCTURNA (SKELLINGTON).txt...
[36000ms] INFO-OpenLyrics: Found 2 lyrics in local files: file://C:\Users\abudd\AppData\Roaming\foobar2000-v2\lyrics\sewerperson - CHAPTER9_HOMINUS NOCTURNA (SKELLINGTON)
[36000ms] INFO-OpenLyrics: Lookup local-file file://C:\Users\abudd\AppData\Roaming\foobar2000-v2\lyrics\sewerperson - CHAPTER9_HOMINUS NOCTURNA (SKELLINGTON).lrc for lyrics...
[36000ms] INFO-OpenLyrics: Successfully retrieved lyrics from file://C:\Users\abudd\AppData\Roaming\foobar2000-v2\lyrics\sewerperson - CHAPTER9_HOMINUS NOCTURNA (SKELLINGTON).lrc
[36000ms] INFO-OpenLyrics: Successfully looked-up lyrics from source: Local files
[36000ms] INFO-OpenLyrics: Parsing lyrics text...
[36000ms] INFO-OpenLyrics: Loaded lyrics already form a valid UTF-8 sequence
[36000ms] INFO-OpenLyrics: Parsing LRC lyric text...
[36000ms] setConfigFloat(core.totalTimePlayed,530886.1286268)
[36000ms] Automatic resampling: using Resampler (dBpoweramp/SSRC): 192000 Hz, Resampler (RetroArch): 192000 Hz
[36000ms] INFO-OpenLyrics: Lyric loading complete
[36000ms] INFO-OpenLyrics: New album art data retrieved
[36016ms] Device: Realtek Digital Output (Realtek(R) Audio)
Mix format: 192000 Hz / 32-bit float / 2 channels (0x3)
[36063ms] Sending stream: 192000 Hz / 32-bit float / 2 channels (0x3)
[36125ms] Audioscrobbler: Handshake successful.
[36188ms] INFO-OpenLyrics: LyricPanel::compute_background_image took 101539us
[36188ms] INFO-OpenLyrics: LyricPanel::on_album_art_retrieved took 183747us
[36297ms] INFO-OpenLyrics: Skipping lyric save. Type: 1, Local: yes, Timestamped: yes, Autosave: 1
[38578ms] INFO-OpenLyrics: Spawning editor window...
[38578ms] INFO-OpenLyrics: Expanding lyric text...
[38594ms] INFO-OpenLyrics: Initializing editor window...
[66156ms] INFO-OpenLyrics: Saving lyrics from editor...
[66156ms] INFO-OpenLyrics: Parsing LRC lyric text...
[66156ms] INFO-OpenLyrics: Expanding lyric text...
[66156ms] INFO-OpenLyrics: Saving lyrics to a local file...
[66156ms] INFO-OpenLyrics: Save file name format '[%artist% - ][%title%]' with directory class 'ConfigDirectory' evaluated to 'file://C:\Users\abudd\AppData\Roaming\foobar2000-v2\lyrics\sewerperson - CHAPTER9_HOMINUS NOCTURNA (SKELLINGTON)'
[66156ms] INFO-OpenLyrics: Saving lyrics to file://C:\Users\abudd\AppData\Roaming\foobar2000-v2\lyrics\sewerperson - CHAPTER9_HOMINUS NOCTURNA (SKELLINGTON).lrc...
[66156ms] INFO-OpenLyrics: Successfully saved lyrics to file://C:\Users\abudd\AppData\Roaming\foobar2000-v2\lyrics\sewerperson - CHAPTER9_HOMINUS NOCTURNA (SKELLINGTON).lrc
[73469ms] INFO-OpenLyrics: Synchronising editor line...
[73469ms] INFO-OpenLyrics: Parsing LRC lyric text...
[82016ms] INFO-OpenLyrics: Synchronising editor line...
[82016ms] INFO-OpenLyrics: Parsing LRC lyric text...
[83688ms] INFO-OpenLyrics: Synchronising editor line...
[83688ms] INFO-OpenLyrics: Parsing LRC lyric text...
[83922ms] INFO-OpenLyrics: Synchronising editor line...
[83922ms] INFO-OpenLyrics: Parsing LRC lyric text...
[89625ms] INFO-OpenLyrics: Synchronising editor line...
[89625ms] INFO-OpenLyrics: Parsing LRC lyric text...
[92063ms] INFO-OpenLyrics: Synchronising editor line...
[92063ms] INFO-OpenLyrics: Parsing LRC lyric text...
[97250ms] INFO-OpenLyrics: Synchronising editor line...
[97250ms] INFO-OpenLyrics: Parsing LRC lyric text...
[99531ms] INFO-OpenLyrics: Synchronising editor line...
[99531ms] INFO-OpenLyrics: Parsing LRC lyric text...
[100438ms] INFO-OpenLyrics: Saving lyrics from editor...
[100453ms] INFO-OpenLyrics: Parsing LRC lyric text...
[100453ms] INFO-OpenLyrics: Expanding lyric text...
[100453ms] INFO-OpenLyrics: Saving lyrics to a local file...
[100453ms] INFO-OpenLyrics: Save file name format '[%artist% - ][%title%]' with directory class 'ConfigDirectory' evaluated to 'file://C:\Users\abudd\AppData\Roaming\foobar2000-v2\lyrics\sewerperson - CHAPTER9_HOMINUS NOCTURNA (SKELLINGTON)'
[100453ms] INFO-OpenLyrics: Saving lyrics to file://C:\Users\abudd\AppData\Roaming\foobar2000-v2\lyrics\sewerperson - CHAPTER9_HOMINUS NOCTURNA (SKELLINGTON).lrc...
[100453ms] INFO-OpenLyrics: Successfully saved lyrics to file://C:\Users\abudd\AppData\Roaming\foobar2000-v2\lyrics\sewerperson - CHAPTER9_HOMINUS NOCTURNA (SKELLINGTON).lrc
[101938ms] INFO-OpenLyrics: Synchronising editor line...
[101938ms] INFO-OpenLyrics: Parsing LRC lyric text...
[102797ms] INFO-OpenLyrics: Synchronising editor line...
[102797ms] INFO-OpenLyrics: Parsing LRC lyric text...
[104031ms] INFO-OpenLyrics: Saving lyrics from editor...
[104031ms] INFO-OpenLyrics: Parsing LRC lyric text...
[104031ms] INFO-OpenLyrics: Expanding lyric text...
[104031ms] INFO-OpenLyrics: Saving lyrics to a local file...
[104031ms] INFO-OpenLyrics: Save file name format '[%artist% - ][%title%]' with directory class 'ConfigDirectory' evaluated to 'file://C:\Users\abudd\AppData\Roaming\foobar2000-v2\lyrics\sewerperson - CHAPTER9_HOMINUS NOCTURNA (SKELLINGTON)'
[104031ms] INFO-OpenLyrics: Saving lyrics to file://C:\Users\abudd\AppData\Roaming\foobar2000-v2\lyrics\sewerperson - CHAPTER9_HOMINUS NOCTURNA (SKELLINGTON).lrc...
[104031ms] INFO-OpenLyrics: Successfully saved lyrics to file://C:\Users\abudd\AppData\Roaming\foobar2000-v2\lyrics\sewerperson - CHAPTER9_HOMINUS NOCTURNA (SKELLINGTON).lrc
[104906ms] INFO-OpenLyrics: Synchronising editor line...
[104906ms] INFO-OpenLyrics: Parsing LRC lyric text...
[105750ms] INFO-OpenLyrics: Saving lyrics from editor...
[105750ms] INFO-OpenLyrics: Parsing LRC lyric text...
[105750ms] INFO-OpenLyrics: Expanding lyric text...
[105750ms] INFO-OpenLyrics: Saving lyrics to a local file...
[105750ms] INFO-OpenLyrics: Save file name format '[%artist% - ][%title%]' with directory class 'ConfigDirectory' evaluated to 'file://C:\Users\abudd\AppData\Roaming\foobar2000-v2\lyrics\sewerperson - CHAPTER9_HOMINUS NOCTURNA (SKELLINGTON)'
[105750ms] INFO-OpenLyrics: Saving lyrics to file://C:\Users\abudd\AppData\Roaming\foobar2000-v2\lyrics\sewerperson - CHAPTER9_HOMINUS NOCTURNA (SKELLINGTON).lrc...
[105750ms] INFO-OpenLyrics: Successfully saved lyrics to file://C:\Users\abudd\AppData\Roaming\foobar2000-v2\lyrics\sewerperson - CHAPTER9_HOMINUS NOCTURNA (SKELLINGTON).lrc
[106922ms] INFO-OpenLyrics: Synchronising editor line...
[106922ms] INFO-OpenLyrics: Parsing LRC lyric text...
[107656ms] INFO-OpenLyrics: Saving lyrics from editor...
[107656ms] INFO-OpenLyrics: Parsing LRC lyric text...
[107656ms] INFO-OpenLyrics: Expanding lyric text...
[107656ms] INFO-OpenLyrics: Saving lyrics to a local file...
[107656ms] INFO-OpenLyrics: Save file name format '[%artist% - ][%title%]' with directory class 'ConfigDirectory' evaluated to 'file://C:\Users\abudd\AppData\Roaming\foobar2000-v2\lyrics\sewerperson - CHAPTER9_HOMINUS NOCTURNA (SKELLINGTON)'
[107656ms] INFO-OpenLyrics: Saving lyrics to file://C:\Users\abudd\AppData\Roaming\foobar2000-v2\lyrics\sewerperson - CHAPTER9_HOMINUS NOCTURNA (SKELLINGTON).lrc...
[107656ms] INFO-OpenLyrics: Successfully saved lyrics to file://C:\Users\abudd\AppData\Roaming\foobar2000-v2\lyrics\sewerperson - CHAPTER9_HOMINUS NOCTURNA (SKELLINGTON).lrc
[108985ms] INFO-OpenLyrics: Synchronising editor line...
[108985ms] INFO-OpenLyrics: Parsing LRC lyric text...
[109688ms] INFO-OpenLyrics: Saving lyrics from editor...
[109688ms] INFO-OpenLyrics: Parsing LRC lyric text...
[109688ms] INFO-OpenLyrics: Expanding lyric text...
[109688ms] INFO-OpenLyrics: Saving lyrics to a local file...
[109688ms] INFO-OpenLyrics: Save file name format '[%artist% - ][%title%]' with directory class 'ConfigDirectory' evaluated to 'file://C:\Users\abudd\AppData\Roaming\foobar2000-v2\lyrics\sewerperson - CHAPTER9_HOMINUS NOCTURNA (SKELLINGTON)'
[109688ms] INFO-OpenLyrics: Saving lyrics to file://C:\Users\abudd\AppData\Roaming\foobar2000-v2\lyrics\sewerperson - CHAPTER9_HOMINUS NOCTURNA (SKELLINGTON).lrc...
[109688ms] INFO-OpenLyrics: Successfully saved lyrics to file://C:\Users\abudd\AppData\Roaming\foobar2000-v2\lyrics\sewerperson - CHAPTER9_HOMINUS NOCTURNA (SKELLINGTON).lrc
[111594ms] INFO-OpenLyrics: Synchronising editor line...
[111594ms] INFO-OpenLyrics: Parsing LRC lyric text...
[112406ms] INFO-OpenLyrics: Saving lyrics from editor...
[112406ms] INFO-OpenLyrics: Parsing LRC lyric text...
[112406ms] INFO-OpenLyrics: Expanding lyric text...
[112406ms] INFO-OpenLyrics: Saving lyrics to a local file...
[112406ms] INFO-OpenLyrics: Save file name format '[%artist% - ][%title%]' with directory class 'ConfigDirectory' evaluated to 'file://C:\Users\abudd\AppData\Roaming\foobar2000-v2\lyrics\sewerperson - CHAPTER9_HOMINUS NOCTURNA (SKELLINGTON)'
[112406ms] INFO-OpenLyrics: Saving lyrics to file://C:\Users\abudd\AppData\Roaming\foobar2000-v2\lyrics\sewerperson - CHAPTER9_HOMINUS NOCTURNA (SKELLINGTON).lrc...
[112406ms] INFO-OpenLyrics: Successfully saved lyrics to file://C:\Users\abudd\AppData\Roaming\foobar2000-v2\lyrics\sewerperson - CHAPTER9_HOMINUS NOCTURNA (SKELLINGTON).lrc
[114422ms] INFO-OpenLyrics: Synchronising editor line...
[114422ms] INFO-OpenLyrics: Parsing LRC lyric text...
[115078ms] INFO-OpenLyrics: Saving lyrics from editor...
[115078ms] INFO-OpenLyrics: Parsing LRC lyric text...
[115078ms] INFO-OpenLyrics: Expanding lyric text...
[115078ms] INFO-OpenLyrics: Saving lyrics to a local file...
[115078ms] INFO-OpenLyrics: Save file name format '[%artist% - ][%title%]' with directory class 'ConfigDirectory' evaluated to 'file://C:\Users\abudd\AppData\Roaming\foobar2000-v2\lyrics\sewerperson - CHAPTER9_HOMINUS NOCTURNA (SKELLINGTON)'
[115078ms] INFO-OpenLyrics: Saving lyrics to file://C:\Users\abudd\AppData\Roaming\foobar2000-v2\lyrics\sewerperson - CHAPTER9_HOMINUS NOCTURNA (SKELLINGTON).lrc...
[115094ms] INFO-OpenLyrics: Successfully saved lyrics to file://C:\Users\abudd\AppData\Roaming\foobar2000-v2\lyrics\sewerperson - CHAPTER9_HOMINUS NOCTURNA (SKELLINGTON).lrc
[116563ms] INFO-OpenLyrics: Synchronising editor line...
[116563ms] INFO-OpenLyrics: Parsing LRC lyric text...
[118281ms] INFO-OpenLyrics: Saving lyrics from editor...
[118281ms] INFO-OpenLyrics: Parsing LRC lyric text...
[118281ms] INFO-OpenLyrics: Expanding lyric text...
[118281ms] INFO-OpenLyrics: Saving lyrics to a local file...
[118281ms] INFO-OpenLyrics: Save file name format '[%artist% - ][%title%]' with directory class 'ConfigDirectory' evaluated to 'file://C:\Users\abudd\AppData\Roaming\foobar2000-v2\lyrics\sewerperson - CHAPTER9_HOMINUS NOCTURNA (SKELLINGTON)'
[118281ms] INFO-OpenLyrics: Saving lyrics to file://C:\Users\abudd\AppData\Roaming\foobar2000-v2\lyrics\sewerperson - CHAPTER9_HOMINUS NOCTURNA (SKELLINGTON).lrc...
[118281ms] INFO-OpenLyrics: Successfully saved lyrics to file://C:\Users\abudd\AppData\Roaming\foobar2000-v2\lyrics\sewerperson - CHAPTER9_HOMINUS NOCTURNA (SKELLINGTON).lrc
[118938ms] INFO-OpenLyrics: Synchronising editor line...
[118938ms] INFO-OpenLyrics: Parsing LRC lyric text...
[121313ms] INFO-OpenLyrics: Saving lyrics from editor...
[121313ms] INFO-OpenLyrics: Parsing LRC lyric text...
[121313ms] INFO-OpenLyrics: Expanding lyric text...
[121313ms] INFO-OpenLyrics: Saving lyrics to a local file...
[121313ms] INFO-OpenLyrics: Save file name format '[%artist% - ][%title%]' with directory class 'ConfigDirectory' evaluated to 'file://C:\Users\abudd\AppData\Roaming\foobar2000-v2\lyrics\sewerperson - CHAPTER9_HOMINUS NOCTURNA (SKELLINGTON)'
[121313ms] INFO-OpenLyrics: Saving lyrics to file://C:\Users\abudd\AppData\Roaming\foobar2000-v2\lyrics\sewerperson - CHAPTER9_HOMINUS NOCTURNA (SKELLINGTON).lrc...
[121313ms] INFO-OpenLyrics: Successfully saved lyrics to file://C:\Users\abudd\AppData\Roaming\foobar2000-v2\lyrics\sewerperson - CHAPTER9_HOMINUS NOCTURNA (SKELLINGTON).lrc
[121953ms] INFO-OpenLyrics: Synchronising editor line...
[121953ms] INFO-OpenLyrics: Parsing LRC lyric text...
[124016ms] INFO-OpenLyrics: Synchronising editor line...
[124016ms] INFO-OpenLyrics: Parsing LRC lyric text...
[126172ms] INFO-OpenLyrics: Skipping lyric upload for sewerperson//sewerperson because there is a more recent upload pending for that track
[126688ms] INFO-OpenLyrics: Synchronising editor line...
[126688ms] INFO-OpenLyrics: Parsing LRC lyric text...
[128985ms] INFO-OpenLyrics: Synchronising editor line...
[128985ms] INFO-OpenLyrics: Parsing LRC lyric text...
[131313ms] INFO-OpenLyrics: Synchronising editor line...
[131313ms] INFO-OpenLyrics: Parsing LRC lyric text...
[133625ms] INFO-OpenLyrics: Synchronising editor line...
[133625ms] INFO-OpenLyrics: Parsing LRC lyric text...
[137141ms] INFO-OpenLyrics: Synchronising editor line...
[137141ms] INFO-OpenLyrics: Parsing LRC lyric text...
[145422ms] INFO-OpenLyrics: Synchronising editor line...
[145422ms] INFO-OpenLyrics: Parsing LRC lyric text...
[149906ms] INFO-OpenLyrics: Synchronising editor line...
[149906ms] INFO-OpenLyrics: Parsing LRC lyric text...
[154813ms] INFO-OpenLyrics: Synchronising editor line...
[154813ms] INFO-OpenLyrics: Parsing LRC lyric text...
[159360ms] INFO-OpenLyrics: Synchronising editor line...
[159360ms] INFO-OpenLyrics: Parsing LRC lyric text...
[160453ms] INFO-OpenLyrics: Skipping lyric upload for sewerperson//sewerperson because there is a more recent upload pending for that track
[164047ms] INFO-OpenLyrics: Skipping lyric upload for sewerperson//sewerperson because there is a more recent upload pending for that track
[165766ms] INFO-OpenLyrics: Skipping lyric upload for sewerperson//sewerperson because there is a more recent upload pending for that track
[167672ms] INFO-OpenLyrics: Skipping lyric upload for sewerperson//sewerperson because there is a more recent upload pending for that track
[169688ms] INFO-OpenLyrics: Skipping lyric upload for sewerperson//sewerperson because there is a more recent upload pending for that track
[170625ms] INFO-OpenLyrics: Saving lyrics from editor...
[170625ms] INFO-OpenLyrics: Parsing LRC lyric text...
[170625ms] INFO-OpenLyrics: Expanding lyric text...
[170625ms] INFO-OpenLyrics: Saving lyrics to a local file...
[170625ms] INFO-OpenLyrics: Save file name format '[%artist% - ][%title%]' with directory class 'ConfigDirectory' evaluated to 'file://C:\Users\abudd\AppData\Roaming\foobar2000-v2\lyrics\sewerperson - CHAPTER9_HOMINUS NOCTURNA (SKELLINGTON)'
[170625ms] INFO-OpenLyrics: Saving lyrics to file://C:\Users\abudd\AppData\Roaming\foobar2000-v2\lyrics\sewerperson - CHAPTER9_HOMINUS NOCTURNA (SKELLINGTON).lrc...
[170625ms] INFO-OpenLyrics: Successfully saved lyrics to file://C:\Users\abudd\AppData\Roaming\foobar2000-v2\lyrics\sewerperson - CHAPTER9_HOMINUS NOCTURNA (SKELLINGTON).lrc
[172422ms] INFO-OpenLyrics: Skipping lyric upload for sewerperson//sewerperson because there is a more recent upload pending for that track
[175094ms] INFO-OpenLyrics: Skipping lyric upload for sewerperson//sewerperson because there is a more recent upload pending for that track
[178297ms] INFO-OpenLyrics: Skipping lyric upload for sewerperson//sewerperson because there is a more recent upload pending for that track
[181328ms] INFO-OpenLyrics: Skipping lyric upload for sewerperson//sewerperson because there is a more recent upload pending for that track
[195735ms] INFO-OpenLyrics: Synchronising editor line...
[195735ms] INFO-OpenLyrics: Parsing LRC lyric text...
[196953ms] INFO-OpenLyrics: Synchronising editor line...
[196953ms] INFO-OpenLyrics: Parsing LRC lyric text...
[198110ms] INFO-OpenLyrics: Synchronising editor line...
[198110ms] INFO-OpenLyrics: Parsing LRC lyric text...
[198953ms] INFO-OpenLyrics: Synchronising editor line...
[198953ms] INFO-OpenLyrics: Parsing LRC lyric text...
[203547ms] INFO-OpenLyrics: Synchronising editor line...
[203547ms] INFO-OpenLyrics: Parsing LRC lyric text...
[204078ms] INFO-OpenLyrics: Synchronising editor line...
[204078ms] INFO-OpenLyrics: Parsing LRC lyric text...
[204797ms] INFO-OpenLyrics: Synchronising editor line...
[204797ms] INFO-OpenLyrics: Parsing LRC lyric text...
[207031ms] INFO-OpenLyrics: Synchronising editor line...
[207031ms] INFO-OpenLyrics: Parsing LRC lyric text...
[209469ms] INFO-OpenLyrics: Synchronising editor line...
[209469ms] INFO-OpenLyrics: Parsing LRC lyric text...
[214063ms] INFO-OpenLyrics: Synchronising editor line...
[214063ms] INFO-OpenLyrics: Parsing LRC lyric text...
[216422ms] INFO-OpenLyrics: Synchronising editor line...
[216422ms] INFO-OpenLyrics: Parsing LRC lyric text...
[219078ms] INFO-OpenLyrics: Synchronising editor line...
[219078ms] INFO-OpenLyrics: Parsing LRC lyric text...
[224000ms] INFO-OpenLyrics: Synchronising editor line...
[224000ms] INFO-OpenLyrics: Parsing LRC lyric text...
[228406ms] INFO-OpenLyrics: Synchronising editor line...
[228406ms] INFO-OpenLyrics: Parsing LRC lyric text...
[230641ms] INFO-OpenLyrics: Retrieving lyrics from https://lrclib.net/api/get?artist_name=sewerperson&album_name=&track_name=CHAPTER9_HOMINUS%20NOCTURNA&duration=159
[230969ms] INFO-OpenLyrics: Synchronising editor line...
[230969ms] INFO-OpenLyrics: Parsing LRC lyric text...
[231219ms] WARN-OpenLyrics: Failed to make LRCLIB search request to https://lrclib.net/api/get?artist_name=sewerperson&album_name=&track_name=CHAPTER9_HOMINUS%20NOCTURNA&duration=159: Object not found
[231219ms] INFO-OpenLyrics: Requesting a challenge for LRCLIB upload...
[231969ms] INFO-OpenLyrics: Solved challenge SHA256(i3qIWoCl5p88EjCbpdABlQyYT05sWj8C) < 000000FF00000000000000000000000000000000000000000000000000000000 with nonce 0 in 0.00s
Machine specifications:
OS: Windows 10.0.26120 x64
CPU: 11th Gen Intel(R) Core(TM) i5-11400F @ 2.60GHz, features: MMX SSE SSE2 SSE3 SSE4.1 SSE4.2 AVX LZCNT
CPU threads: 12
Audio: Realtek Digital Output (Realtek(R) Audio)
--
Additional information
To me it appears that the search request to LRCLIB, despite LRCLIB being disabled, is causing a crash. I have messed with loads of settings and still experience this within minutes of using the plugin. Thanks.
Here are the raw crash reports
failure_00000026.dmp
failure_00000026.txt
failure_00000027.dmp
failure_00000027.txt
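(Aside: the last log line before the crash shows OpenLyrics solving LRCLIB's proof-of-work upload challenge. Conceptually that step looks like the sketch below; the exact prefix+nonce concatenation is an assumption read off the log line, not confirmed against the plugin's source.)

import hashlib

def solve_challenge(prefix: str, target_hex: str) -> int:
    # Brute-force the smallest nonce whose SHA-256 digest of
    # prefix + nonce compares below the target (big-endian bytes).
    target = bytes.fromhex(target_hex)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{prefix}{nonce}".encode()).digest()
        if digest < target:
            return nonce
        nonce += 1

# Deliberately easy target, purely for illustration:
print(solve_challenge("demo", "0f" + "00" * 31))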
Hmmm. The code is fairly simple and just checks that it's not set to "Never" so I'm surprised this is happening even with it disabled.
I know you said you'd tried various options, but just to check the obvious: can you confirm that this persists even when you have uploads disabled in Preferences -> OpenLyrics -> Uploading (i.e., "Upload lyrics to LRCLIB" set to "Never")?
Also I don't see any crashes like this at all in the crash tracker. When it crashes and fb2k asks if you want to submit a crash report, have you ever said "yes"?
Yes, I can confirm that it is set to Never. I have also hit yes when it crashes. Thanks for your help.
|
2025-04-01T06:39:09.840374
| 2017-12-05T15:29:48
|
279417303
|
{
"authors": [
"jpkrohling",
"pohly",
"tbarbugli",
"yurishkuro"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7171",
"repo": "jaegertracing/jaeger-lib",
"url": "https://github.com/jaegertracing/jaeger-lib/issues/32"
}
|
gharchive/issue
|
codahale/hdrhistogram github repository is archived by the owner
I just noticed this while looking at the dependencies added by this lib:
We shouldn't have it as a direct dependency
This could be labeled as "good first issue".
@yurishkuro, @jpkrohling: what solution do you have in mind? Copy just the necessary code from codahale/hdrhistogram into jaeger-lib/metrics (which is where it is used), or copy the entire repo and continue maintaining it - but where and how?
Isn't it just a matter of removing the dependency from glide.lock? I think it was removed from the YAML but not from the lock...
@pohly, would you be willing to give it a shot?
@jpkrohling no, the package still gets used here:
https://github.com/jaegertracing/jaeger-lib/blob/master/metrics/local.go#L23
So the solution isn't just a simple change to glide.lock.
Sorry, I thought this was the main repo :-)
@jpkrohling do you still think that this is a "good first issue"?
codahale/hdrhistogram has some unfixed issues open, so whoever does something probably also needs to have a good understanding of whether those bugs are relevant when copying code. Doesn't look trivial to me.
I'd also like to add that this is a show-stopper for me for using Jaeger - depending on unmaintained, potentially buggy components just isn't good and won't pass a closer review.
> do you still think that this is a "good first issue"?

For someone who knows Go, I think this might still be a good first issue.

> codahale/hdrhistogram has some unfixed issues open, so whoever does something probably also needs to have a good understanding of whether those bugs are relevant when copying code. Doesn't look trivial to me.

@yurishkuro can provide more details about this, but I think it would be acceptable to switch to a more modern backend, like Prometheus. Pretty much all providers nowadays are able to scrape Prometheus data. For the main backend, we are using expvar, Prometheus and a noop implementation:
https://github.com/jaegertracing/jaeger/blob/master/pkg/metrics/builder.go#L52
So, the real solution is to not depend on this unmaintained library.
> depending on unmaintained, potentially buggy components just isn't good and won't pass a closer review
+1, I don't think anyone would disagree with that, but different issues have different priorities to different people.
The local backend is only used in unit tests. We can probably simulate it via the Prometheus client if we don't use the default registrar (which is global and will persist across tests).
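As an illustration of that idea - a per-test registry instead of the process-global default - here is a minimal sketch using Python's prometheus_client (jaeger-lib itself is Go, and the metric name below is hypothetical):

from prometheus_client import CollectorRegistry, Counter, generate_latest

def make_metrics():
    # A dedicated registry rather than prometheus_client.REGISTRY (the
    # process-global default), so each test gets isolated metric state.
    registry = CollectorRegistry()
    spans_received = Counter(
        "spans_received", "Number of spans received.",
        registry=registry,
    )
    return registry, spans_received

registry, spans_received = make_metrics()
spans_received.inc()
print(generate_latest(registry).decode())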
If the local backend is only used for testing, why does it get pulled into production clients? That probably should be changed first. Once that's resolved, depending on an unmaintained component becomes less of a problem.
It should be moved to a package
As of v2.3.0 we're not using codahale (#82).
|
2025-04-01T06:39:09.856212
| 2019-03-13T14:57:10
|
420546907
|
{
"authors": [
"jcantrill",
"jkandasa",
"jpkrohling",
"pavolloffay"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7172",
"repo": "jaegertracing/jaeger-operator",
"url": "https://github.com/jaegertracing/jaeger-operator/issues/310"
}
|
gharchive/issue
|
jaeger-query service throws "no Elasticsearch node available" very often when we use ES operator provided ES cluster
Steps to reproduce:
Install Jaeger services with the following CR:
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: jaegerqe
spec:
  ingress:
    security: none
  strategy: production
  collector:
    replicas: 1
    image: jaegertracing/jaeger-collector:1.11
    options:
      metrics-backend: prometheus
      collector:
        num-workers: "50"
        queue-size: "2000"
      es:
        bulk:
          size: "5000000"
          workers: "1"
          flush-interval: "200ms"
  query:
    replicas: 1
    image: jaegertracing/jaeger-query:1.11
    options:
      metrics-backend: prometheus
      query:
        port: 16686
  agent:
    strategy: sidecar
    image: jaegertracing/jaeger-agent:1.11
    options:
      metrics-backend: prometheus
  storage:
    type: elasticsearch
    esIndexCleaner:
      enabled: false
    sparkDependencies:
      enabled: false
    elasticsearch:
      image: quay.io/openshift/origin-logging-elasticsearch5:latest
      nodeCount: 3
      resources:
Launch the Jaeger UI and select a service (example: jaeger-query).
Click on Find Traces; it often throws "HTTP Error: Search service failed: no available connection: no Elasticsearch node available".
Log files:
jaeger-query: jaegerqe-query-84ddcc9654-kph2f_jaeger-query.log
jaeger-agent: jaegerqe-query-84ddcc9654-kph2f_jaeger-agent.log
jaeger-collector: jaegerqe-collector-6976478cc5-54ngv.log
elasticsearch-clientdatamaster-0-1: elasticsearch-clientdatamaster-0-1-9b575665-pfrgb_elasticsearch.log
elasticsearch-clientdatamaster-0-1 proxy: elasticsearch-clientdatamaster-0-1-9b575665-pfrgb_proxy.log
Other files: other files.zip
I have seen this before.
I am wondering whether this is related to resource limits. What are the resource limits for ES? Could you please give it more juice and try if the issue still happens?
@pavolloffay ES memory and CPU resource limits? I do not specify anything specifically. Just went with default settings.
Looks like ES is using up to 4 GiB and up to 4 cores
First, I tried port-forwarding the query port and never got this error.
Increasing es.timeout (default is 0s) to 10s solved the issue. I think we should increase the default timeout to, say, 5s?
@jkandasa are you modifying es.timeout when running perf tests on OCP?
There is also route timeout https://docs.openshift.com/container-platform/3.5/install_config/configuring_routing.html. I didn't get a timeout on the route so far.
@jkandasa if you run into this in tests just increase the timeout high enough.
Summary:
I don't get any timeouts when jaeger is connected to ECL ES via certs (note that we use token auth by default)
I don't get any timeouts when using the ES deployment from tests (make es)
I think the root cause of the problem is that we are using token auth with ES - specifically https://github.com/fabric8io/openshift-elasticsearch-plugin. It might do a request to k8s API ( apis/authorization.k8s.io/v1/selfsubjectaccessreviews) per Jaeger request to Elasticsearch whereas when using certs all information is already present in ES container.
cc @jcantrill @ewolinetz
> Summary:
> I don't get any timeouts when jaeger is connected to ECL ES via certs (note that we use token auth by default)
> I don't get any timeouts when using the ES deployment from tests (make es)
> I think the root cause of the problem is that we are using token auth with ES - specifically https://github.com/fabric8io/openshift-elasticsearch-plugin. It might do a request to k8s API ( apis/authorization.k8s.io/v1/selfsubjectaccessreviews) per Jaeger request to Elasticsearch

This is correct. There is no caching mechanism.

> whereas when using certs all information is already present in ES container.

Certs would be faster, especially if you do not additionally send a token on the request. It will skip all of the token auth logic when the token is not there.

> cc @jcantrill @ewolinetz
How do you suggest moving forward? Implement some caching or switch to client certs? Either way we will need a change to the ES image.
@jkandasa if you run into this in tests just increase the timeout high enough.
@pavolloffay Sure, I will modify it and re-check. When I faced this issue I had only ~10 traces in ES. It was a fresh installation.
@jkandasa the es.timeout is just a workaround. The query is still very slow, we will need a faster authorization mechanism.
Just to confirm: does this affect the whole query API, or just the UI? Bearer tokens come only when the user is authenticated via the UI, right?
@jpkrohling It affects the whole query API. The query-service backend failed to get data from the ES cluster.
I have disabled UI authentication,
ingress:
  security: none
|
2025-04-01T06:39:09.858600
| 2019-02-20T09:13:59
|
412319546
|
{
"authors": [
"jpkrohling",
"pavolloffay"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7173",
"repo": "jaegertracing/jaeger-operator",
"url": "https://github.com/jaegertracing/jaeger-operator/pull/212"
}
|
gharchive/pull-request
|
Bump Jaeger to 1.10
Similar to https://github.com/jaegertracing/jaeger-operator/pull/176/files
Signed-off-by: Pavol Loffay <EMAIL_ADDRESS>
This change is …
|
2025-04-01T06:39:09.871985
| 2019-12-02T20:02:39
|
531506203
|
{
"authors": [
"apiszcz",
"jagin"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7174",
"repo": "jagin/detectron2-pipeline",
"url": "https://github.com/jagin/detectron2-pipeline/issues/1"
}
|
gharchive/issue
|
Initial test: multiprocess mode not working, single process works
I am getting the following error. This may be due to my setup; the --single-process flag works fine.
Without --single-process, the following error is emitted:
RuntimeError: cuda runtime error (801) : operation not supported at C:\w\1\s\tmp_conda_3.7_104508\conda\conda-bld\pytorch_1572950778684\work\torch/csrc/generic/StorageSharing.cpp:245
Traceback (most recent call last):
File "C:\test\lib\multiprocessing\queues.py", line 236, in _feed
obj = _ForkingPickler.dumps(obj)
File "C:\test\lib\multiprocessing\reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
File "C:\test\lib\site-packages\torch\multiprocessing\reductions.py", line 242, in reduce_tensor
event_sync_required) = storage.share_cuda()
RuntimeError: cuda runtime error (801) : operation not supported at C:\w\1\s\tmp_conda_3.7_104508\conda\conda-bld\pytorch_1572950778684\work\torch/csrc/generic/StorageSharing.cpp:245
Traceback (most recent call last):
File "C:\test\lib\multiprocessing\queues.py", line 236, in _feed
obj = _ForkingPickler.dumps(obj)
File "C:\test\lib\multiprocessing\reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
File "C:\test\lib\site-packages\torch\multiprocessing\reductions.py", line 242, in reduce_tensor
event_sync_required) = storage.share_cuda()
RuntimeError: cuda runtime error (801) : operation not supported at C:\w\1\s\tmp_conda_3.7_104508\conda\conda-bld\pytorch_1572950778684\work\torch/csrc/generic/StorageSharing.cpp:245
Process _PredictWorker-1:
Traceback (most recent call last):
File "C:\test\lib\multiprocessing\process.py", line 297, in _bootstrap
self.run()
File "C:\g\vn\lc\detectron2-pipeline\pipeline\libs\async_predictor.py", line 32, in run
task = self.task_queue.get()
File "C:\test\lib\multiprocessing\queues.py", line 94, in get
res = self._recv_bytes()
File "C:\test\lib\multiprocessing\connection.py", line 216, in recv_bytes
buf = self._recv_bytes(maxlength)
File "C:\test\lib\multiprocessing\connection.py", line 306, in _recv_bytes
[ov.event], False, INFINITE)
It's hard to say what's going on. It looks like there is something wrong with your CUDA setup or your Detectron2 GPU setup. To be sure, you can switch off the GPU by adding the --gpus 0 --cpus 1 options.
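For what it's worth, CUDA runtime error 801 at StorageSharing typically means CUDA IPC (sharing GPU tensors between processes) is not supported on Windows. A commonly cited workaround for this class of error is to move tensors to host memory before putting them on a multiprocessing queue; here is a minimal sketch with hypothetical names (the worker and queue are assumptions, not this repo's API):

import torch

def to_cpu(obj):
    # Recursively detach and move any CUDA tensors to host memory so
    # the object can be pickled across process boundaries on Windows.
    if torch.is_tensor(obj):
        return obj.detach().cpu()
    if isinstance(obj, dict):
        return {k: to_cpu(v) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return type(obj)(to_cpu(v) for v in obj)
    return obj

# Hypothetical usage inside a prediction worker:
# result_queue.put(to_cpu(predictions))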
|
2025-04-01T06:39:09.892806
| 2023-02-14T19:46:43
|
1584731207
|
{
"authors": [
"nickelnine",
"trinhloivn"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7175",
"repo": "jairaj08/SystemUI-Patcher",
"url": "https://github.com/jairaj08/SystemUI-Patcher/issues/1"
}
|
gharchive/issue
|
Can the Patcher patch framework-res, framework-ext-res?
Same as above: can I edit framework-res and framework-ext-res?
Sorry if this is a bad question.
Thanks for your work...
Curious about this as well...
|
2025-04-01T06:39:10.042799
| 2015-11-26T15:08:34
|
119061245
|
{
"authors": [
"jakob101",
"jrieken"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7176",
"repo": "jakob101/RelativePath",
"url": "https://github.com/jakob101/RelativePath/issues/1"
}
|
gharchive/issue
|
Also participate in intellisense
Using the completion provider API, this extension could be part of IntelliSense, allowing me to complete CommonJS module names, HTML script tags, etc.
I'll try it out. I don't want performance issues while writing code because of a large workspace, though. Maybe there's a way around it.
|
2025-04-01T06:39:10.049812
| 2018-04-09T23:56:34
|
312727451
|
{
"authors": [
"expobrain",
"jakubknejzlik"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7177",
"repo": "jakubknejzlik/sql-condition-builder",
"url": "https://github.com/jakubknejzlik/sql-condition-builder/issues/7"
}
|
gharchive/issue
|
Is this project still maintained?
Hi @jakubknejzlik,
are you still maintaining this project? I don't see any more activity and my PRs are still pending.
Hi @expobrain , sorry for the delay. I missed any notification and didn't noticed the PR's. I'll check them ASAP.
|
2025-04-01T06:39:10.060815
| 2015-10-21T07:40:52
|
112528759
|
{
"authors": [
"james91b",
"tmr232"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7178",
"repo": "james91b/ida_ipython",
"url": "https://github.com/james91b/ida_ipython/pull/17"
}
|
gharchive/pull-request
|
Updated installation and a new example
Update the readme to use the jupyter-kernelspec command instead of the deprecated ipython kernelspec command.
Add an example to show off screenshot capabilities of Sark + IDA IPython
Nice work.
|
2025-04-01T06:39:10.067431
| 2020-03-26T16:58:48
|
588575206
|
{
"authors": [
"RadixSeven",
"jamesagnew"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7179",
"repo": "jamesagnew/hapi-fhir",
"url": "https://github.com/jamesagnew/hapi-fhir/issues/1779"
}
|
gharchive/issue
|
The Spring version used by HAPI FHIR has a security vulnerability
Describe the bug
The Spring version (5.2.1) used by HAPI FHIR (4.2.0) has a security vulnerability -- it needs to be updated to 5.2.3
An edited version of our automated report
Spring Framework Reflected File Download Vulnerability. (CVE-2020-5398)
Path : server/target/hapi-fhir-jpaserver/WEB-INF/lib/spring-core-5.2.1.RELEASE.jar
Installed version : 5.2.1.RELEASE
Fixed version : 5.2.3 Feb 26, 2020 05:23:27 EST
The remote Windows host contains a web application framework library that is affected by a reflected file download vulnerability. The remote host contains a Spring Framework library version that is 5.0.x prior to 5.0.16 or 5.1.x prior to 5.1.13 or 5.2.x prior to 5.2.3. It is, therefore, affected by a reflected file download vulnerability. An attacker can exploit this by tricking a user into clicking on a URL for a trusted domain. Upon clicking on the malicious link, the victim will be presented with a download which appears to have originated from a trusted domain. Once downloaded, the malicious payload can execute arbitrary code and potentially completely take over a system. Upgrade to Spring Framework version 5.0.16 or 5.1.13 or 5.2.3 or later.
CVE-2020-5398
Jan 16, 2020 12:00:00 EST
Environment (please complete the following information):
HAPI FHIR Version: 4.2.0
OS: CentOS (but doesn't matter)
There is no part of HAPI FHIR that uses spring's ContentDisposition class to generate a filename, so I don't see this CVE as having any direct risk in HAPI FHIR (please do comment if you feel that this assessment is incorrect though).
Nonetheless, there's no reason not to bump to a version without known vulnerabilities. Will fix.
Actually - we've already bumped to this version. See: https://github.com/jamesagnew/hapi-fhir/blob/master/pom.xml#L670
Closing the ticket, please comment if you disagree.
Thank you.
|
2025-04-01T06:39:10.144403
| 2023-05-21T10:00:16
|
1718443480
|
{
"authors": [
"jamesdolezal",
"jinnyjuice"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7180",
"repo": "jamesdolezal/slideflow",
"url": "https://github.com/jamesdolezal/slideflow/issues/282"
}
|
gharchive/issue
|
Concatenate error with multimodal input
Description
Hello,
I have a couple of clinical variables (categorical and float) that I want to use as additional input.
I list them as: multi_input = ["age", "sex", "height", "weight"].
Anyway, when I try to train a (very basic) model, I receive an error saying that tensor shapes don't match:
ConcatOp : Ranks of all input tensors should match: shape[0] = [16,4,1] vs. shape[1] = [16,2048]
[[{{node model/input_merge/concat}}]] [Op:__inference_train_function_35947]
I believe there's something wrong with the way the clinical variables are introduced into the other layers (slide_feature_input)?
How can I change it? Or am I missing something trivial?
Would be great if anybody could help!
Cheers.
To Reproduce
Steps to reproduce the behavior:
1. Commands
import slideflow as sf

P = sf.load_project('project')
hp = sf.ModelParams(
    tile_px=299,
    tile_um=100,
)
multi_input = ["age", "sex", "height", "weight"]
P.train(
    'category',
    params=hp,
    val_strategy='none',
    input_header=multi_input,
)
2. Output
[11:45:54] INFO Training model category-HP0...
INFO Hyperparameters: {
"augment": "xyrj",
"batch_size": 16,
"drop_images": false,
"dropout": 0,
"early_stop": false,
"early_stop_method": "loss",
"early_stop_patience": 0,
"epochs": [
3
],
"hidden_layer_width": 500,
"hidden_layers": 0,
"include_top": true,
"l1": 0.0,
"l1_dense": 0.0,
"l2": 0.0,
"l2_dense": 0.0,
"learning_rate": 0.0001,
"learning_rate_decay": 0,
"learning_rate_decay_steps": 100000,
"loss": "sparse_categorical_crossentropy",
"manual_early_stop_batch": null,
"manual_early_stop_epoch": null,
"model": "xception",
"normalizer": null,
"normalizer_source": null,
"optimizer": "Adam",
"pooling": "max",
"tile_px": 299,
"tile_um": 100,
"toplayer_epochs": 0,
"trainable_layers": 0,
"training_balance": "category",
"uq": false,
"validation_balance": "none"
}
INFO Val settings: {
"strategy": "none",
"k_fold": 3,
"k": null,
"k_fold_header": null,
"fraction": null,
"source": null,
"annotations": null,
"filters": null,
"dataset": null
}
INFO Using 687 training TFRecords, 0 validation
INFO Adding input variable age as float
INFO Adding input variable sex as float
INFO Adding input variable height as float
INFO Adding input variable weight as float
[11:46:28] INFO Training with both images and 4 categories of slide-level input
2023-05-21 11:46:28.822013: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1613] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 30948 MB memory: -> device: 0, nam
e: Tesla V100-SXM2-32GB, pci bus id: 0000:61:00.0, compute capability: 7.0
[11:46:29] INFO Using pretraining from imagenet
Model: "model"
Layer (type) Output Shape Param # Connected to
tile_image (InputLayer) [(None, 299, 299, 3 0 []
)]
xception (Functional) (None, 2048) 20861480 ['tile_image[0][0]']
slide_feature_input (InputLaye [(None, 4)] 0 []
r)
post_convolution (Activation) (None, 2048) 0 ['xception[0][0]']
input_merge (Concatenate) (None, 2052) 0 ['slide_feature_input[0][0]',
'post_convolution[0][0]']
logits-0 (Dense) (None, 2) 4106 ['input_merge[0][0]']
out-0 (Activation) (None, 2) 0 ['logits-0[0][0]']
Total params: 20,865,586
Trainable params: 20,811,058
Non-trainable params: 54,528
[11:46:37] INFO Beginning training
Epoch 1/3
2023-05-21 11:46:51.242360: I tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:428] Loaded cuDNN version 8201
Traceback (most recent call last):
File "", line 1, in
File "/data/jjung23/miniconda3/envs/sf_2/lib/python3.9/site-packages/slideflow/project.py", line 3378, in train
self._train_hp(
File "/data/jjung23/miniconda3/envs/sf_2/lib/python3.9/site-packages/slideflow/project.py", line 709, in _train_hp
self._train_split(dataset, hp, val_settings, s_args)
File "/data/jjung23/miniconda3/envs/sf_2/lib/python3.9/site-packages/slideflow/project.py", line 933, in _train_split
project_utils._train_worker(
File "/data/jjung23/miniconda3/envs/sf_2/lib/python3.9/site-packages/slideflow/project_utils.py", line 147, in _train_worker
results = trainer.train(train_dts, val_dts, **training_kw)
File "/data/jjung23/miniconda3/envs/sf_2/lib/python3.9/site-packages/slideflow/model/tensorflow.py", line 1925, in train
self.model.fit(
File "/data/jjung23/miniconda3/envs/sf_2/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 70, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/data/jjung23/miniconda3/envs/sf_2/lib/python3.9/site-packages/tensorflow/python/eager/execute.py", line 52, in quick_execute
tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
tensorflow.python.framework.errors_impl.InvalidArgumentError: Graph execution error:
Detected at node 'model/input_merge/concat' defined at (most recent call last):
File "", line 1, in
File "/data/jjung23/miniconda3/envs/sf_2/lib/python3.9/site-packages/slideflow/project.py", line 3378, in train
self._train_hp(
File "/data/jjung23/miniconda3/envs/sf_2/lib/python3.9/site-packages/slideflow/project.py", line 709, in _train_hp
self._train_split(dataset, hp, val_settings, s_args)
File "/data/jjung23/miniconda3/envs/sf_2/lib/python3.9/site-packages/slideflow/project.py", line 933, in _train_split
project_utils._train_worker(
File "/data/jjung23/miniconda3/envs/sf_2/lib/python3.9/site-packages/slideflow/project_utils.py", line 147, in _train_worker
results = trainer.train(train_dts, val_dts, **training_kw)
File "/data/jjung23/miniconda3/envs/sf_2/lib/python3.9/site-packages/slideflow/model/tensorflow.py", line 1925, in train
self.model.fit(
File "/data/jjung23/miniconda3/envs/sf_2/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 65, in error_handler
return fn(*args, **kwargs)
File "/data/jjung23/miniconda3/envs/sf_2/lib/python3.9/site-packages/keras/engine/training.py", line 1650, in fit
tmp_logs = self.train_function(iterator)
File "/data/jjung23/miniconda3/envs/sf_2/lib/python3.9/site-packages/keras/engine/training.py", line 1249, in train_function
return step_function(self, iterator)
File "/data/jjung23/miniconda3/envs/sf_2/lib/python3.9/site-packages/keras/engine/training.py", line 1233, in step_function
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/data/jjung23/miniconda3/envs/sf_2/lib/python3.9/site-packages/keras/engine/training.py", line 1222, in run_step
outputs = model.train_step(data)
File "/data/jjung23/miniconda3/envs/sf_2/lib/python3.9/site-packages/keras/engine/training.py", line 1023, in train_step
y_pred = self(x, training=True)
File "/data/jjung23/miniconda3/envs/sf_2/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 65, in error_handler
return fn(*args, **kwargs)
File "/data/jjung23/miniconda3/envs/sf_2/lib/python3.9/site-packages/keras/engine/training.py", line 561, in call
return super().call(*args, **kwargs)
File "/data/jjung23/miniconda3/envs/sf_2/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 65, in error_handler
return fn(*args, **kwargs)
File "/data/jjung23/miniconda3/envs/sf_2/lib/python3.9/site-packages/keras/engine/base_layer.py", line 1132, in call
outputs = call_fn(inputs, *args, **kwargs)
File "/data/jjung23/miniconda3/envs/sf_2/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 96, in error_handler
return fn(*args, **kwargs)
File "/data/jjung23/miniconda3/envs/sf_2/lib/python3.9/site-packages/keras/engine/functional.py", line 511, in call
return self._run_internal_graph(inputs, training=training, mask=mask)
File "/data/jjung23/miniconda3/envs/sf_2/lib/python3.9/site-packages/keras/engine/functional.py", line 668, in _run_internal_graph
outputs = node.layer(*args, **kwargs)
File "/data/jjung23/miniconda3/envs/sf_2/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 65, in error_handler
return fn(*args, **kwargs)
File "/data/jjung23/miniconda3/envs/sf_2/lib/python3.9/site-packages/keras/engine/base_layer.py", line 1132, in call
outputs = call_fn(inputs, *args, **kwargs)
File "/data/jjung23/miniconda3/envs/sf_2/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 96, in error_handler
return fn(*args, **kwargs)
File "/data/jjung23/miniconda3/envs/sf_2/lib/python3.9/site-packages/keras/layers/merging/base_merge.py", line 196, in call
return self._merge_function(inputs)
File "/data/jjung23/miniconda3/envs/sf_2/lib/python3.9/site-packages/keras/layers/merging/concatenate.py", line 134, in _merge_function
return backend.concatenate(inputs, axis=self.axis)
File "/data/jjung23/miniconda3/envs/sf_2/lib/python3.9/site-packages/keras/backend.py", line 3572, in concatenate
return tf.concat([to_dense(x) for x in tensors], axis)
Node: 'model/input_merge/concat'
ConcatOp : Ranks of all input tensors should match: shape[0] = [16,4,1] vs. shape[1] = [16,2048]
[[{{node model/input_merge/concat}}]] [Op:__inference_train_function_35947]
Expected behavior
Successful training of a multimodal model.
Environment:
Slideflow Version (e.g., 1.0): 2.0.3-post1
OS (e.g., Ubuntu): Ubuntu 20.04.5
How you installed Slideflow (pip, source): pip install slideflow[tf] cucim cupy-cuda11x
Python version: 3.9
CUDA/cuDNN version: 11.6
GPU models and configuration: Tesla V100-SXM2-32GB
Any other relevant information:
Additional context
Thanks for raising this issue - we'll build a test dataset over the next day or so and work on reproducing the error, so we can find the source of the problem.
In the meantime, do you see the same error when you train with only a single additional clinical variable? Try training 4 different models, one with each clinical variable as a single additional input, to see if the problem can be isolated to one of the variables.
Thanks for replying!
I tried to feed the model only one clinical variable, for example "age".
What I get is:
/data/jjung23/miniconda3/envs/sf_tensfl/lib/python3.10/site-packages/keras/engine/functional.py:638: UserWarning: Input dict contained keys ['slide_feature_input'] which did not match any model input. They will be ignored by the model.
After that it looks like it's training normally based on the slides but without the clinical variable.
Quick update - I was able to reproduce the problem when using continuous input variables (like the ones you're using here). Categorical slide-level inputs (either single or multiple) are working as expected, but there seems to be an issue with continuous inputs. Our automated testing only covered categorical clinical variables as additional input, which is why this wasn't caught by our testing protocol.
I should have a patch out shortly that fixes the problem, and I'll expand our testing to include continuous input variables, as well.
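For reference, the error in the original report is a pure rank mismatch at the merge layer: the slide-level input reaches concat as (batch, 4, 1) while the image features are (batch, 2048). A minimal TensorFlow reproduction of the failure mode and the shape fix (a sketch only - not slideflow's actual patch):

import tensorflow as tf

img_features = tf.zeros([16, 2048])   # like the post_convolution output
slide_input = tf.zeros([16, 4, 1])    # continuous inputs with a stray trailing axis

# tf.concat([slide_input, img_features], axis=-1) fails here with
# "Ranks of all input tensors should match: shape[0] = [16,4,1] vs. shape[1] = [16,2048]"

slide_input = tf.squeeze(slide_input, axis=-1)         # -> (16, 4)
merged = tf.concat([slide_input, img_features], axis=-1)
print(merged.shape)                   # (16, 2052)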
Ok - patch has been applied for the Tensorflow backend. If you have the ability to run from source, let me know if it works on your end, as well. Still working on a fix for the PyTorch backend.
If this resolves the issue, I'll incorporate it into the next patch release.
Patch has been released as version 2.0.5.
Hi, sorry for the late reply. Thank you so much for the patch and the messages!
However, I receive the following error:
[22:07:38] INFO Beginning training
Epoch 1/3
Traceback (most recent call last):
File "/data/jjung23/23_04_30/3_train_1.py", line 29, in <module>
P.train(
File "/data/jjung23/miniconda3/envs/sf_tensorflow/lib/python3.9/site-packages/slideflow/project.py", line 3426, in train
self._train_hp(
File "/data/jjung23/miniconda3/envs/sf_tensorflow/lib/python3.9/site-packages/slideflow/project.py", line 713, in _train_hp
self._train_split(dataset, hp, val_settings, s_args)
File "/data/jjung23/miniconda3/envs/sf_tensorflow/lib/python3.9/site-packages/slideflow/project.py", line 937, in _train_split
project_utils._train_worker(
File "/data/jjung23/miniconda3/envs/sf_tensorflow/lib/python3.9/site-packages/slideflow/project_utils.py", line 147, in _train_worker
results = trainer.train(train_dts, val_dts, **training_kw)
File "/data/jjung23/miniconda3/envs/sf_tensorflow/lib/python3.9/site-packages/slideflow/model/tensorflow.py", line 1924, in train
self.model.fit(
File "/data/jjung23/miniconda3/envs/sf_tensorflow/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 70, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/tmp/__autograph_generated_filevedfejsj.py", line 15, in tf__train_function
retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
File "/data/jjung23/miniconda3/envs/sf_tensorflow/lib/python3.9/site-packages/tensorflow/core/function/trace_type/trace_type_builder.py", line 129, in from_value
return default_types.Tuple(*(from_value(c, context) for c in value))
File "/data/jjung23/miniconda3/envs/sf_tensorflow/lib/python3.9/site-packages/tensorflow/core/function/trace_type/trace_type_builder.py", line 129, in <genexpr>
return default_types.Tuple(*(from_value(c, context) for c in value))
File "/data/jjung23/miniconda3/envs/sf_tensorflow/lib/python3.9/site-packages/tensorflow/core/function/trace_type/trace_type_builder.py", line 129, in from_value
return default_types.Tuple(*(from_value(c, context) for c in value))
File "/data/jjung23/miniconda3/envs/sf_tensorflow/lib/python3.9/site-packages/tensorflow/core/function/trace_type/trace_type_builder.py", line 129, in <genexpr>
return default_types.Tuple(*(from_value(c, context) for c in value))
File "/data/jjung23/miniconda3/envs/sf_tensorflow/lib/python3.9/site-packages/tensorflow/core/function/trace_type/trace_type_builder.py", line 152, in from_value
raise TypeError(
TypeError: in user code:
File "/data/jjung23/miniconda3/envs/sf_tensorflow/lib/python3.9/site-packages/keras/engine/training.py", line 1249, in train_function *
return step_function(self, iterator)
File "/data/jjung23/miniconda3/envs/sf_tensorflow/lib/python3.9/site-packages/keras/engine/training.py", line 1233, in step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/data/jjung23/miniconda3/envs/sf_tensorflow/lib/python3.9/site-packages/keras/engine/training.py", line 1222, in run_step **
outputs = model.train_step(data)
File "/data/jjung23/miniconda3/envs/sf_tensorflow/lib/python3.9/site-packages/keras/engine/training.py", line 1027, in train_step
self.optimizer.minimize(loss, self.trainable_variables, tape=tape)
File "/data/jjung23/miniconda3/envs/sf_tensorflow/lib/python3.9/site-packages/keras/optimizers/optimizer_experimental/optimizer.py", line 527, in minimize
self.apply_gradients(grads_and_vars)
File "/data/jjung23/miniconda3/envs/sf_tensorflow/lib/python3.9/site-packages/keras/mixed_precision/loss_scale_optimizer.py", line 1331, in apply_gradients
tf.__internal__.smart_cond.smart_cond(
File "/data/jjung23/miniconda3/envs/sf_tensorflow/lib/python3.9/site-packages/keras/mixed_precision/loss_scale_optimizer.py", line 1329, in apply_fn
return self._apply_gradients(grads, wrapped_vars)
File "/data/jjung23/miniconda3/envs/sf_tensorflow/lib/python3.9/site-packages/keras/mixed_precision/loss_scale_optimizer.py", line 1361, in _apply_gradients
self._optimizer.apply_gradients(
File "/data/jjung23/miniconda3/envs/sf_tensorflow/lib/python3.9/site-packages/keras/optimizers/optimizer_experimental/optimizer.py", line 1140, in apply_gradients
return super().apply_gradients(grads_and_vars, name=name)
File "/data/jjung23/miniconda3/envs/sf_tensorflow/lib/python3.9/site-packages/keras/optimizers/optimizer_experimental/optimizer.py", line 634, in apply_gradients
iteration = self._internal_apply_gradients(grads_and_vars)
File "/data/jjung23/miniconda3/envs/sf_tensorflow/lib/python3.9/site-packages/keras/optimizers/optimizer_experimental/optimizer.py", line 1166, in _internal_apply_gradients
return tf.__internal__.distribute.interim.maybe_merge_call(
File "/data/jjung23/miniconda3/envs/sf_tensorflow/lib/python3.9/site-packages/keras/optimizers/optimizer_experimental/optimizer.py", line 1216, in _distributed_apply_gradients_fn
distribution.extended.update(
File "/data/jjung23/miniconda3/envs/sf_tensorflow/lib/python3.9/site-packages/keras/optimizers/optimizer_experimental/optimizer.py", line 1211, in apply_grad_to_update_var
return self._update_step_xla(grad, var, id(self._var_key(var)))
File "/data/jjung23/miniconda3/envs/sf_tensorflow/lib/python3.9/site-packages/tensorflow/core/function/trace_type/trace_type_builder.py", line 129, in from_value
return default_types.Tuple(*(from_value(c, context) for c in value))
File "/data/jjung23/miniconda3/envs/sf_tensorflow/lib/python3.9/site-packages/tensorflow/core/function/trace_type/trace_type_builder.py", line 129, in <genexpr>
return default_types.Tuple(*(from_value(c, context) for c in value))
File "/data/jjung23/miniconda3/envs/sf_tensorflow/lib/python3.9/site-packages/tensorflow/core/function/trace_type/trace_type_builder.py", line 129, in from_value
return default_types.Tuple(*(from_value(c, context) for c in value))
File "/data/jjung23/miniconda3/envs/sf_tensorflow/lib/python3.9/site-packages/tensorflow/core/function/trace_type/trace_type_builder.py", line 129, in <genexpr>
return default_types.Tuple(*(from_value(c, context) for c in value))
File "/data/jjung23/miniconda3/envs/sf_tensorflow/lib/python3.9/site-packages/tensorflow/core/function/trace_type/trace_type_builder.py", line 152, in from_value
raise TypeError(
TypeError: Python object could not be represented through the generic tracing type. Consider implementing the Tracing Protocol for it: <AutoCastVariable 'block1_conv1/kernel:0' shape=(3, 3, 3, 32) dtype=float32 dtype_to_cast_to=float32>
I couldn't really find anything when I googled the error.
Maybe (or hopefully) it's something easy to fix?
Hmmm - let me investigate. This looks like a separate issue. Can you paste the contents of the model params.json here?
Yes of course:
{
"slideflow_version": "2.0.5",
"project": "MyProject",
"backend": "tensorflow",
"git_commit": "ae6ad0e8937207efe60d23a400e88bf12f5db719",
"model_name": "category-HP0",
"full_model_name": "category-HP0",
"stage": "training",
"img_format": "jpeg",
"tile_px": 299,
"tile_um": 100,
"max_tiles": 0,
"min_tiles": 0,
"model_type": "categorical",
"outcomes": [
"category"
],
"input_features": [
"age"
],
"input_feature_sizes": [
1
],
"input_feature_labels": {
"age": "float"
},
"outcome_labels": {
"0": "major",
"1": "minor"
},
"dataset_config": "project/datasets.json",
"sources": [
"MyProject"
],
"annotations": "project/annotations.csv",
"validation_strategy": "none",
"validation_fraction": null,
"validation_k_fold": 3,
"k_fold_i": null,
"filters": null,
"hp": {
"augment": "xyrj",
"batch_size": 16,
"drop_images": false,
"dropout": 0,
"early_stop": false,
"early_stop_method": "loss",
"early_stop_patience": 0,
"epochs": [
3
],
"hidden_layer_width": 500,
"hidden_layers": 0,
"include_top": true,
"l1": 0.0,
"l1_dense": 0.0,
"l2": 0.0,
"l2_dense": 0.0,
"learning_rate": 0.0001,
"learning_rate_decay": 0,
"learning_rate_decay_steps": 100000,
"loss": "sparse_categorical_crossentropy",
"manual_early_stop_batch": null,
"manual_early_stop_epoch": null,
"model": "xception",
"normalizer": null,
"normalizer_source": null,
"optimizer": "Adam",
"pooling": "max",
"tile_px": 299,
"tile_um": 100,
"toplayer_epochs": 0,
"trainable_layers": 0,
"training_balance": "category",
"uq": false,
"validation_balance": "none"
},
"training_kwargs": {
"save_predictions": "csv"
}
}
This is btw only with one clinical variable (age).
Thank you so much for taking care!
Never mind, I think I got it working and am currently training a model with multiple clinical variables.
Apparently it had nothing to do with the patch but rather with the conda environment that I re-installed (and obviously in some wrong way).
Thank you so much for your help! I will let you know how training and testing turns out.
BTW:
Is it possible to have both clinical variables and tfrecords as input, and train for a linear outcome? I know that the keyword argument "input_header" is available in things like Project.train or Project.evaluate... But is there a way to pass that input_header argument to sf.model.LinearTrainer? Or do I have to use the keyword argument "slide_input"? Apparently it's supposed to be a dictionary... can I then just pass a list of dictionaries? Such as:
csv = 'project/annotations.csv'
df = pd.read_csv(csv)
age_dict = df.set_index('slide').to_dict()['age']
sex_dict = df.set_index('slide').to_dict()['sex']
asa_dict = df.set_index('slide').to_dict()['asa']
height_dict = df.set_index('slide').to_dict()['height']
weight_dict = df.set_index('slide').to_dict()['weight']
multi_input = [age_dict, sex_dict, asa_dict, height_dict, weight_dict]
my_trainer = sf.model.LinearTrainer(
hp=hp,
slide_input=multi_input,
outdir='outputs',
labels=labels,
)
my_trainer.train(dataset1, None)
It looks like it's running, but i'm not sure if the clinical variables are really being processed... Do you know what I mean?
Glad to hear it!
Training to linear outcomes is super easy. All you have to do is choose a linear loss function in the hyperparameters (eg "mean_squared_error"), and an outcome that can be interpreted as a continuous variable, and it should just work. You can still use the same P.train() and P.evaluate() interface, and clinical variable input will still work, as well.
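For reference, here's a minimal sketch of what that could look like, based on the hyperparameters shown in your params.json (a sketch only: exact argument names can differ between slideflow versions, so check them against the docs; 'height' is assumed to be a continuous column in your annotations.csv):
import slideflow as sf

P = sf.Project('/path/to/project')

# A linear loss plus a continuous outcome column gives a linear model.
hp = sf.ModelParams(
    tile_px=299,
    tile_um=100,
    model='xception',
    loss='mean_squared_error',  # linear loss function
    epochs=[3],
)

# input_header feeds clinical variables in as additional model input.
P.train(
    'height',
    params=hp,
    input_header=['age', 'sex'],
)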
Alright, thanks for your help! Yeah, I didn't see that I don't really need the LinearTrainer for a linear outcome.
Quick question: is it possible to train the MIL and CLAM models for a linear outcome measure?
I've seen that there's this keyword argument bag_loss (primary loss function) which can be either 'ce' or 'svm'... it's not possible to change it to something like rmse / mean_squared_error, is it?
Training MIL models with linear outcomes is under development! (see PR https://github.com/jamesdolezal/slideflow/pull/287). The plan is to add this in version 2.1, which is still 1-2 months out.
I have another question:
Apparently, all clinical variables that are used as input for the neural network are treated as float variables.
For example, when I look into the params.json it looks like this:
"input_features": [
"age",
"sex",
"asa",
"bmi"
],
"input_feature_sizes": [
1,
1,
1,
1
],
"input_feature_labels": {
"age": "float",
"sex": "float",
"asa": "float",
"bmi": "float"
},
Would it make sense to tell the network that variables like sex or asa (unlike age or bmi) are actually categorical or ordinal rather than float? If so, how can I change it?
You can definitely mix float and categorical variables. Any variable that can be interpreted as a continuous variable (e.g. coded with 0 and 1) will be interpreted as float. Is this how "sex" and "asa" are encoded? If so, you can force categorical interpretation by changing "0" and "1" to "M" and "F", for example.
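For instance, a hypothetical recoding pass over the annotations file could look like this (column names are taken from the params.json above; the 0/1-to-M/F mapping is an assumption about how your data is coded):
import pandas as pd

df = pd.read_csv('project/annotations.csv')

# String labels force categorical interpretation; numeric 0/1 stays float.
df['sex'] = df['sex'].map({0: 'M', 1: 'F'})
df['asa'] = 'ASA-' + df['asa'].astype(str)  # ordinal grade -> category label

df.to_csv('project/annotations.csv', index=False)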
Hello James,
I have a very quick question:
So I trained a model with default 3-fold cross-validation. It automatically created the splits.json with data[0]['strategy'] stating the strategy method, data[0]['patients'] summing up all patients and data[0]['tfrecords']['k-fold-1'], ['k-fold-2'] and ['k-fold-3'].
There's no such information in the splits.json stating something like: first run trains on a+b, validates on c; second run trains on a+c, validates on b; third run trains on b+c, validates on a. It could be any order, and I couldn't figure it out from the documentation you provided.
Do you know what I mean?
The reason I want to know this is that I eventually want to generate heatmaps only for the validation group, because I want to understand what parts of the tissue were relevant to the validation results. For the first fold, for example, I would have to take the model that was trained on two thirds of the patients, and then locate exactly the remaining third that was not used for training. Does that make sense? Maybe you could comment on this as well.
Would greatly appreciate it! Thank you so much so far.
Cheers
Hi Jinny - thanks for the question, this could be better clarified in the documentation.
The best way to determine what data was used for training/validation is to view the slide_manifest.csv file created in the model folder during training. This is a CSV file with three columns - the slide name, the outcome label, and the dataset (training/validation).
You can quickly pull a list of slides that were used for model training or validation using sf.util.get_slides_from_model_manifest(), specifying whether you want to retrieve the training or validation slides using the parameter dataset:
import slideflow as sf
model_path = '/path/to/saved_model'
val_slides = sf.util.get_slides_from_model_manifest(model_path, dataset='validation')
You can then create a dataset from only those slides, and use that dataset for generating heatmaps or other downstream tasks:
P = sf.Project(...)
val_dataset = P.dataset(..., filters={'slide': val_slides})
To answer your question more directly though, the splits.json has the slides/tfrecords split into the number of groups equal to your cross-fold (in your case, 3). For k-fold 1, the first group (A) is validation, and the remainder is training (B+C). For k-fold 2, the second group (B) is validation, and the remainder (A+C) is training. And so on.
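As a small illustration, here's a sketch that lists the groups from splits.json (it assumes the structure you described above, with the file living wherever your project wrote it):
import json

with open('splits.json') as f:
    splits = json.load(f)

# Each 'k-fold-N' group is the validation set for fold N; the other
# groups together form that fold's training set.
for name, group in splits[0]['tfrecords'].items():
    print(f'{name}: {len(group)} slides')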
I appreciate you asking, I realize now that I failed to include this information in the documentation. I'll add a section in the documentation explaining this more clearly.
Let me know if that makes sense or if I can help clarify further!
Thank you, James, this was super fast!
Yes, it totally makes sense =)
|
2025-04-01T06:39:10.149478
| 2020-02-29T04:06:41
|
573154714
|
{
"authors": [
"jamesgeorge007",
"renjithgr"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7181",
"repo": "jamesgeorge007/csstox",
"url": "https://github.com/jamesgeorge007/csstox/pull/2"
}
|
gharchive/pull-request
|
chore: refactor
This commit does two things mainly
Simplifies components. For example, instead of Source and Target now there is CodeEditor
Uses effector for managing the state.
As per our discussion, I removed the use of effector but kept the other changes I made.
We have an initial CSS snippet that is intended to show up on load so that users can get an idea of the usage. It should also reappear if the textarea field is cleared (left empty).
This should be fixed now.
Thanks
|
2025-04-01T06:39:10.160977
| 2017-07-26T13:44:51
|
245727016
|
{
"authors": [
"jamesmontemagno",
"opcodewriter"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7182",
"repo": "jamesmontemagno/MediaPlugin",
"url": "https://github.com/jamesmontemagno/MediaPlugin/issues/312"
}
|
gharchive/issue
|
[Android 4.4.4 + NETStandard] TakePhotoAsync throws exception
Bug Information
Version Number of Plugin: 3.0.1 (latest stable)
Device Tested On: Google Nexus 10
Simulator Tested On: (tested it on real device, see above)
Version of VS: Visual Studio 2017
Version of Xamarin: Xamarin Android <IP_ADDRESS>
Versions of other things you are using:
Steps to reproduce the Behavior
call TakePhotoAsync
Expected Behavior
No app crash
Actual Behavior
App crashes with the following call stack. Before migrating my core project from PCL to .NET Standard 1.6, it worked fine, I think.
07-26 16:35:19.644 I/MonoDroid(13349): UNHANDLED EXCEPTION:
07-26 16:35:19.684 I/MonoDroid(13349): Java.Lang.NullPointerException: Exception of type 'Java.Lang.NullPointerException' was thrown.
07-26 16:35:19.684 I/MonoDroid(13349): at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw () [0x0000c] in <3fd174ff54b146228c505f23cf75ce71>:0
07-26 16:35:19.684 I/MonoDroid(13349): at Java.Interop.JniEnvironment+StaticMethods.CallStaticObjectMethod (Java.Interop.JniObjectReference type, Java.Interop.JniMethodInfo method, Java.Interop.JniArgumentValue* args) [0x00069] in <bd30a18775d94dc8b6263aecd1ca9077>:0
07-26 16:35:19.684 I/MonoDroid(13349): at Android.Runtime.JNIEnv.CallStaticObjectMethod (System.IntPtr jclass, System.IntPtr jmethod, Android.Runtime.JValue* parms) [0x0000e] in <d855bac285f44dda8a0d8510b679b1e2>:0
07-26 16:35:19.684 I/MonoDroid(13349): at Android.Support.V4.Content.FileProvider.GetUriForFile (Android.Content.Context context, System.String authority, Java.IO.File file) [0x00078] in <3e239b9681084d42bb949c1e01ef500e>:0
07-26 16:35:19.684 I/MonoDroid(13349): at Plugin.Media.MediaPickerActivity.OnCreate (Android.OS.Bundle savedInstanceState) [0x0023f] in C:\projects\mediaplugin\src\Media.Plugin.Android\MediaPickerActivity.cs:162
07-26 16:35:19.684 I/MonoDroid(13349): --- End of stack trace from previous location where exception was thrown ---
07-26 16:35:19.684 I/MonoDroid(13349): at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw () [0x0000c] in <3fd174ff54b146228c505f23cf75ce71>:0
07-26 16:35:19.684 I/MonoDroid(13349): at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess (System.Threading.Tasks.Task task) [0x0003e] in <3fd174ff54b146228c505f23cf75ce71>:0
07-26 16:35:19.684 I/MonoDroid(13349): at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification (System.Threading.Tasks.Task task) [0x00028] in <3fd174ff54b146228c505f23cf75ce71>:0
07-26 16:35:19.684 I/MonoDroid(13349): at System.Runtime.CompilerServices.TaskAwaiter.ValidateEnd (System.Threading.Tasks.Task task) [0x00008] in <3fd174ff54b146228c505f23cf75ce71>:0
07-26 16:35:19.684 I/MonoDroid(13349): at System.Runtime.CompilerServices.TaskAwaiter`1[TResult].GetResult () [0x00000] in <3fd174ff54b146228c505f23cf75ce71>:0
07-26 16:35:19.684 I/MonoDroid(13349): at Plugin.Media.MediaImplementation+<TakePhotoAsync>d__16.MoveNext () [0x000c7] in C:\projects\mediaplugin\src\Media.Plugin.Android\MediaImplementation.cs:119
07-26 16:35:19.684 I/MonoDroid(13349): --- End of stack trace from previous location where exception was thrown ---
07-26 16:35:19.684 I/MonoDroid(13349): at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw () [0x0000c] in <3fd174ff54b146228c505f23cf75ce71>:0
07-26 16:35:19.684 I/MonoDroid(13349): at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess (System.Threading.Tasks.Task task) [0x0003e] in <3fd174ff54b146228c505f23cf75ce71>:0
07-26 16:35:19.684 I/MonoDroid(13349): at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification (System.Threading.Tasks.Task task) [0x00028] in <3fd174ff54b146228c505f23cf75ce71>:0
07-26 16:35:19.684 I/MonoDroid(13349): at System.Runtime.CompilerServices.TaskAwaiter.ValidateEnd (System.Threading.Tasks.Task task) [0x00008] in <3fd174ff54b146228c505f23cf75ce71>:0
07-26 16:35:19.684 I/MonoDroid(13349): at System.Runtime.CompilerServices.TaskAwaiter`1[TResult].GetResult () [0x00000] in <3fd174ff54b146228c505f23cf75ce71>:0
07-26 16:35:19.684 I/MonoDroid(13349): at MyApp.Services.Impl.FilePicker+<TakePhotoAsync>d__1.MoveNext () [0x0005b] in XXX
Code snippet
Screenshots
Oh, I think I know what's going on.
In my Android project I have "Target Android Version" set to Android 7.1 (Level 25).
I need this because I am using another plugin of yours, Permissions, which requires it, as you also noted in the readme:
You MUST set your Target version to API 24+ and Compile against API 24+:
But why does the Media plugin crash when Target Android Version is set to Android 7.1?
It seems like it hits an API which is not available on Android 4.4
The sample app that I have included in this repo works just fine on my 4.4 device.
Ensure you follow all the setup with xml files on android for file permissions.
Thanks.
I'm 100% sure it's because "Target Android Version" is set to Android 7.1 (Level 25).
If I leave it to the default "Use Compile using SDK version", it doesn't crash.
I don't understand why this would have anything to do with Android file permissions, since it works when using "Use Compile using SDK version".
To be clear, right now I am not even using the Permissions plugin. I was actually preparing to use it.
I am only using the Media plugin.
To summarize:
I am only using the Media plugin. Everything works great on all Android versions.
If I set the Android project's "Target Android Version" to Android 7.1 (Level 25), it crashes on all Android versions below 7.0
Did you do all this: https://github.com/jamesmontemagno/mediaplugin#android-n ?
Arghhhh! Thanks! Sorry!!
But I have one question:
In step #2, the instructions say to create the file_paths.xml with the following content:
<?xml version="1.0" encoding="utf-8"?>
<paths xmlns:android="http://schemas.android.com/apk/res/android">
<external-files-path name="my_images" path="Pictures" />
<external-files-path name="my_movies" path="Movies" />
</paths>
and you mention
YOUR_APP_PACKAGE_NAME must be set to your app package name!
But note there's no YOUR_APP_PACKAGE_NAME in your XML content.
However, going to https://developer.android.com/training/camera/photobasics.html they suggest a different name attribute:
<?xml version="1.0" encoding="utf-8"?>
<paths xmlns:android="http://schemas.android.com/apk/res/android">
<external-path name="my_images" path="Android/data/**com.example.package.name**/files/Pictures" />
</paths>
Which one is correct?
That is their package name...
Whatever your package name is in the android manifest is what you should put in there.
Usually it is.... "com.business.company"
It creates a private area that can be shared between apps for security reasons.
|
2025-04-01T06:39:10.162495
| 2019-12-09T19:52:49
|
535226483
|
{
"authors": [
"BlackLine-maker",
"jamesmontemagno"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7183",
"repo": "jamesmontemagno/MediaPlugin",
"url": "https://github.com/jamesmontemagno/MediaPlugin/issues/782"
}
|
gharchive/issue
|
Just a question regarding focus
When using an iPad or iPhone, users can tap the screen and the camera will attempt to focus. We have some users saying their cameras aren't focusing well in our app, which uses this plugin, but outside the app the camera focuses well. Is there a way to show a focus slider or something that we're not aware of?
This library just pops up the native Camera itself, so it is out of our hands at that point and the OS takes full control. :(
|
2025-04-01T06:39:10.165692
| 2015-10-19T13:36:05
|
112145688
|
{
"authors": [
"gnola14",
"jamesmontemagno"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7184",
"repo": "jamesmontemagno/Xamarin.Plugins",
"url": "https://github.com/jamesmontemagno/Xamarin.Plugins/issues/119"
}
|
gharchive/issue
|
Geolocator: Crash on iOS 8.4 on simulator when Location is set to None
If I make the following call
await CrossGeolocator.Current.GetPositionAsync(10000).ConfigureAwait(false);
while running it on the simulator, setting the Location to None on iOS 8.4 makes it crash with a
-[CLLocationManager allowsBackgroundLocationUpdates]: unrecognized selector sent to instance <some hex number>
I haven't been able to create a minimal test case, but it does happen consistently in my app. However, the error seems to be gone if I modify the Failed method of GeolocationSingleUpdateDelegate
public override void Failed(CLLocationManager manager, NSError error)
{
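// Map both network and unknown-location failures to PositionUnavailable and stop listening.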
switch((CLError)(int)error.Code)
{
case CLError.Network:
StopListening();
this.tcs.TrySetException(new GeolocationException(GeolocationError.PositionUnavailable));
break;
case CLError.LocationUnknown:
StopListening();
this.tcs.TrySetException(new GeolocationException(GeolocationError.PositionUnavailable));
break;
}
}
I can add that case in there, but there is no possible way that allowsBackgroundLocationUpdates could be called below iOS 9: I am doing a system version check when setting it, and it would only be enabled if you set it to true too.
Committing and pushing today
Awesome, thanks James!
|
2025-04-01T06:39:10.167745
| 2024-06-26T04:04:43
|
2374152025
|
{
"authors": [
"jamesrochabrun",
"lzell"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7185",
"repo": "jamesrochabrun/SwiftOpenAI",
"url": "https://github.com/jamesrochabrun/SwiftOpenAI/pull/55"
}
|
gharchive/pull-request
|
Set AIProxy DeviceCheck bypass token as env variable
The DeviceCheck bypass token is used for AIProxy customers to make requests to AIProxy from the iOS simulator, where DeviceCheck is not available.
If the token leaks into a production build of the app, then attackers can use the bypass token themselves to skip one layer of security that AIProxy provides.
This patch adjusts the way that developers set the bypass token, removing it from the source code and setting it instead as an env variable. Env variables are not packaged up in distribution app bundles, and are therefore less likely to leak into a production release of your app.
Updated the README with new instructions for adding the AIPROXY_DEVICE_CHECK_BYPASS env variable
@lzell oops, this LGTM lets resolve the conflict and I can merge :)
|
2025-04-01T06:39:10.169523
| 2023-08-10T02:46:37
|
1844319395
|
{
"authors": [
"jameswynn",
"theeternalrat",
"xinmans"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7186",
"repo": "jameswynn/helm-charts",
"url": "https://github.com/jameswynn/helm-charts/issues/14"
}
|
gharchive/issue
|
Deployment does not have minimum availability
Pods "homepage-77cdf87bbc-" is forbidden: error looking up service account default/homepage: serviceaccount "homepage" not found:Deployment does not have minimum availability.
kubectl create serviceaccount homepage
serviceaccount/homepage created
Fix this bug
I also found this bug on a fresh deploy. Simply creating the SA fixed it like @xinmans said.
Since it seems like a misconfiguration, I'm closing this.
|
2025-04-01T06:39:10.183906
| 2022-10-03T14:58:51
|
1394884204
|
{
"authors": [
"aziham",
"jamiebrynes7"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7187",
"repo": "jamiebrynes7/obsidian-todoist-plugin",
"url": "https://github.com/jamiebrynes7/obsidian-todoist-plugin/issues/179"
}
|
gharchive/issue
|
The plugin will no longer work after Nov 1, 2022
Describe the bug
Today the plugin stopped working, I'm getting:
Oh no, something went wrong!
Error: Internal Server Error
Screenshots
Desktop (please complete the following information):
Plugin version: 1.9.0
Obsidian version: 0.15.9
Related to: #177 #157
@pprazzi You are right. It was related to the maintenance.
Any idea if this maintenance is related to #176 ?
As they stated that the v8 will no longer work after November 1
The earlier versions will no longer be available after November 1
Any idea if this maintenance is related to Todoist API v2 #176 ?
https://groups.google.com/a/doist.com/g/todoist-api/c/33g1sC_ov3Q
This confirms my assumption that the V1 API will no longer work, but it's actually after November 30.
There's an open PR #176 by @gnapse addressing the API migration to V2.
@jamiebrynes7 Any plans for migrating to V2 ?
Thanks to all the people who made this plugin possible.
Thanks for the heads up, I should probably join that google group!
I'll take a look at #176 on the weekend, or earlier if I find some time :)
Hey all, this was released in v1.10.0!
|
2025-04-01T06:39:10.187630
| 2018-08-26T16:28:45
|
354107966
|
{
"authors": [
"Minecrell",
"jamierocks"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7188",
"repo": "jamiemansfield/Lorenz",
"url": "https://github.com/jamiemansfield/Lorenz/pull/6"
}
|
gharchive/pull-request
|
Add TextMappingFormat
Extends MappingFormat and allows reading text mapping formats from readers/writers instead of just binary input/output streams.
Merged with https://github.com/jamiemansfield/Lorenz/commit/fa120163db7c81d06e1878ccc41644760a63577d π
|
2025-04-01T06:39:10.202234
| 2020-06-06T14:06:31
|
632484977
|
{
"authors": [
"codecov-commenter",
"jthomperoo"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7189",
"repo": "jamjarlabs/JamJar",
"url": "https://github.com/jamjarlabs/JamJar/pull/123"
}
|
gharchive/pull-request
|
Add audio loading and audio source playing
Resolves #25, resolves #22
Codecov Report
Merging #123 into master will increase coverage by 0.28%.
The diff coverage is 75.86%.
@@ Coverage Diff @@
## master #123 +/- ##
==========================================
+ Coverage 76.83% 77.11% +0.28%
==========================================
Files 78 87 +9
Lines 2357 2618 +261
Branches 213 233 +20
==========================================
+ Hits 1811 2019 +208
- Misses 373 410 +37
- Partials 173 189 +16
Flag | Coverage Δ
#unittests | 77.11% <75.86%> (+0.28%) :arrow_up:

Impacted Files | Coverage Δ
src/fake/audio_context.ts | 53.12% <53.12%> (ø)
src/fake/response.ts | 62.50% <62.50%> (ø)
src/fake/gain_node.ts | 71.42% <71.42%> (ø)
src/fake/audio_buffer_source_node.ts | 77.77% <77.77%> (ø)
src/standard/audio_source/audio_source_system.ts | 82.08% <82.08%> (ø)
src/standard/http_audio/http_audio_system.ts | 89.79% <89.79%> (ø)
src/standard/audio_source/audio_source.ts | 90.00% <90.00%> (ø)
src/audio/audio_asset.ts | 100.00% <100.00%> (ø)
src/audio/audio_request.ts | 100.00% <100.00%> (ø)
... and 9 more
|
2025-04-01T06:39:10.212753
| 2018-10-16T22:23:31
|
370824042
|
{
"authors": [
"Caemor",
"jamwaffles"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7190",
"repo": "jamwaffles/embedded-graphics",
"url": "https://github.com/jamwaffles/embedded-graphics/pull/51"
}
|
gharchive/pull-request
|
Make other background colors than (0/"black") possible for fonts
When I tested the embedded-graphics library today with my e-ink display, I realised that only a black background color for fonts was possible.
This PR makes different background colors for fonts possible and adds a default stroke_color (a default for fill_color was already set before, so it's just the opposite), so the panic can be removed.
That means the following changes (only shows the differences):
If no stroke and fill color is set: It doesn't panic anymore and uses 1u8 as stroke color
If stroke color is set but fill color is not: No change
If stroke and fill color are set: now fill color is used instead of the default 0u8 ("black")
Can you also add or edit an example in the simulator examples folder to demonstrate this behaviour?
Yes, I am gonna add an example and some tests.
In the font_builder we are currently using Style::default() which returns None for both of the colors. Should we change this so it's more visible that we use a default fill and stroke color for None values in the next iterator? (https://github.com/jamwaffles/embedded-graphics/blob/master/embedded-graphics/src/fonts/font_builder.rs#L61)
New:
fn render_str(text: &'a str) -> Self {
Self {
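// Proposed defaults: fill = 0u8, stroke = 1u8, so unset colors no longer panic.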
pos: Coord::new(0, 0),
text,
style: Style::default()
.with_fill(0u8.into())
.with_stroke(1u8.into()),
_conf: Default::default(),
}
}
When I was trying to make a test case for inverted text, I might have found a bug in the current implementation:
// produced result:
[0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
// what it should look like (at least that's what I thought)
[1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
That's a bug in master, thanks for surfacing it! Running the simulator with a font where the pixel colour is always set, I see this in master:
The black rectangle to the middle left has its first row shifted by one which obviously shouldn't happen. I'll look at getting a fix into master asap.
where the pixel colour is always set
What do you mean by that?
Can you maybe paste your code in here for that example? That looks like the old behaviour before this PR and shouldn't happen.
I've just merged #53. Can you updated from master? The off-by-one error should be gone now.
What do you mean by that?
Sorry for the confusion, it was just a quick temporary change to help debugging by always setting the pixel black :slightly_smiling_face:
Travis is also happy now :-)
0.4.2 released with these changes in it!
|
2025-04-01T06:39:10.218181
| 2021-01-19T17:35:02
|
789219831
|
{
"authors": [
"bugadani",
"jamwaffles",
"ostenning",
"quentinmit"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7191",
"repo": "jamwaffles/ssd1306",
"url": "https://github.com/jamwaffles/ssd1306/issues/146"
}
|
gharchive/issue
|
Nonblocking implementation
MCU/other hardware in use: STM32h7
Display resolution and interface: I2C, [128x64]
Nonblocking I2C support
Hello, I'm attempting to build an application that uses this library in conjunction with real-time DAC/ADC functions. I have noticed that my DAC output, which is driven from a Timer producing a simple sine wave, is blocked by the I2C communication that this library executes, which distorts the output of the DAC.
I know that the embedded-hal doesn't support non-blocking for various reasons. What I would like to do is write my own interface adapter and implement I2C interrupt handling within my app to unblock the communication to the display.
Looking at the test_helpers.rs, I see this:
#[allow(dead_code)]
#[derive(Debug, Clone, Copy)]
pub struct StubInterface;
impl WriteOnlyDataCommand for StubInterface {
fn send_commands(
&mut self,
_cmd: display_interface::DataFormat<'_>,
) -> Result<(), DisplayError> {
Ok(())
}
fn send_data(&mut self, _buf: display_interface::DataFormat<'_>) -> Result<(), DisplayError> {
Ok(())
}
}
I assume that I could use the WriteOnlyDataCommand trait to achieve this?
Kind regards
Oliver
Sorry for the delay! @therealprof maintains the crate that provides WriteOnlyDataCommand so he might be able to offer more insight, but yes, I think it's enough to add a custom impl of WriteOnlyDataCommand and pass that into Builder::new().connect(interface).into() instead of the provided blocking implementations.
There's an embedded-hal-async now that provides async/await-compatible bus interfaces. See https://github.com/jamwaffles/ssd1331/pull/13 for a PR adding support to the ssd1331 crate and follow https://github.com/embedded-graphics/embedded-graphics/issues/622 for an upstream embedded-graphics trait.
cc https://github.com/jamwaffles/ssd1306/pull/178
|
2025-04-01T06:39:10.234583
| 2022-10-10T14:39:29
|
1403286534
|
{
"authors": [
"alecharp",
"andham",
"basil",
"fabricat",
"jan-molak"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7192",
"repo": "jan-molak/jenkins-build-monitor-plugin",
"url": "https://github.com/jan-molak/jenkins-build-monitor-plugin/issues/635"
}
|
gharchive/issue
|
Latest release notes / Project visibility
Hello,
I have been asked to evaluate and install this Jenkins plugin, but I cannot find release notes for the latest version 1.13.
The Jenkins plugins directory entry does not provide any issue tracker, and I could only find an indirect link to this GH repo in the Documentation text.
Please, can you enrich the plugin metadata, and add a changelog for newer versions?
IMHO it should require a little effort, but will provide greater visibility into the plugin's development health status
I could only find an indirect link to this GH repo in the Documentation text.
The problem is that the update-center generation code is expecting the plugin source-code to be hosted in the jenkinsci organization in GitHub. This is described here: https://www.jenkins.io/doc/developer/publishing/requesting-hosting/#open-hosting-request. However, this plugin was not transferred to that organization.
In the update-center code (https://github.com/jenkins-infra/update-center2/blob/7d77cd45525fe5f9ddbc9ec11b1968c00952a2fa/src/main/java/io/jenkins/update_center/HPI.java#L461), the repository of the plugin is excluded.
The issue tracker problem is because none is documented in https://github.com/jenkins-infra/repository-permissions-updater/blob/master/permissions/plugin-build-monitor-plugin.yml.
add a changelog for newer versions?
The repository is not using release-drafter nor any manual release note file.
@jan-molak would it be ok to transfer the plugin to the jenkinsci organization? The plugin could benefit from dependabot, jep-229, release-drafter, better integration with plugins.jenkins.io...
@alecharp - last time I've spoken with CloudBees regarding transfer there were some challenges that prevented the transfer; happy to get back to this conversation over email though
last time I've spoken with CloudBees
To be specific, this hosting process has nothing to do with CloudBees but with the Jenkins project.
I was here speaking as a Jenkins community member and because I'm working on https://github.com/jenkins-infra/plugin-health-scoring/ which shows that the plugin is not hosted correctly.
Right, thanks for the context. There are several contributors from CloudBees helping out with Jenkins Build Monitor at the moment, so if there are PRs you'd like to propose to improve integration with Jenkins ecosystem we'll be happy to review them?
if there are PRs you'd like to propose to improve integration with Jenkins ecosystem
the transfer of the repository cannot be done with a pull request. For the release-drafter, cd etc., there configurations are easier once in the jenkinsci organization as they can simply extend basic configuration from https://github.com/jenkinsci/.github
Thanks for the explanation, I'll reach out to CloudBees and the Jenkins project to see what's changed since we last discussed this some (long) time ago #418
I'll reach out to CloudBees
I'm not sure why you need to reach out. If so, I can help you, as employee of CloudBees.
For the Jenkins Project, I can also help, as a long time contributor, and I'm now also part of the hosting process.
From https://github.com/jan-molak/jenkins-build-monitor-plugin/issues/418, I don't know what you need for your end-to-end tests. For the release process, it's still up to you to decide whether to use semantic versioning or not, but that was also the case back then.
To be clear, this issue would not exist if this repository followed the standard conventions for the Jenkins project:
Repository hosted in the jenkinsci GitHub organization
CI build done on https://ci.jenkins.io
CD done with JEP-229
if there are PRs you'd like to propose to improve integration with Jenkins ecosystem we'll be happy to incorporate them?
We will not be proposing PRs to improve integration with the Jenkins ecosystem for repositories not hosted in the jenkinsci GitHub organization, using GitHub Actions for CI, and using something other than JEP-229 for CD.
If you would like to transfer this repository to the jenkinsci GitHub organization, we will be happy to help move the CI to https://ci.jenkins.io and the CD to JEP-229, which will resolve issues like this.
@alecharp @basil - I appreciate your offering to help and support Jenkins Build Monitor. I'll reach out to people I've spoken with originally to discuss the details of any transfers.
Thanks @jan-molak. To transfer the repository from the jan-molak GitHub organization to the jenkinsci GitHub organization, you can file a ticket at https://github.com/jenkins-infra/helpdesk.
Thanks @basil I'm on holiday at the moment with limited access to the Internet. I'll look into it when I'm back home next week. Thanks for all the details!
Any update on this? Like @fabricat, I was also looking into this plugin to evaluate it and found the info lacking. It makes the process a bit more difficult, and the plugin also looks less attractive (it doesn't look maintained).
From the Jenkins project's perspective, we are still happy to help normalize this plugin's hosting and release process. The only step that is necessary to begin the process is for Jan Molak to transfer the plugin to the jenkinsci GitHub organization.
Build Monitor View is now hosted in the jenkinsci GitHub organization, built on ci.jenkins.io, and deployed with our standard CD process (including changelogs generated with Release Drafter).
|
2025-04-01T06:39:10.275296
| 2018-09-09T17:28:39
|
358395510
|
{
"authors": [
"aignas",
"codeinabox",
"jhonnyslpz"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7193",
"repo": "janko-m/vim-test",
"url": "https://github.com/janko-m/vim-test/issues/321"
}
|
gharchive/issue
|
Running :Test* using dotnettest in Windows adds a backslash \
Hello, first thanks for this vim plugin. It has been very useful.
I'm having the following issue in Windows:
OS:
Windows 10
_vimrc:
let test#strategy='dispatch'
let g:test#csharp#runner='dotnettest'
Running :Test* using dotnettest in Windows adds a backslash \:
Do not escape ~ on win32
https://github.com/janko-m/vim-test/blob/0941cfc91cdaa896f16f5e32d20940aab902f88c/autoload/test/csharp/dotnettest.vim#L22
https://github.com/janko-m/vim-test/blob/0941cfc91cdaa896f16f5e32d20940aab902f88c/autoload/test/csharp/dotnettest.vim#L24
https://github.com/janko-m/vim-test/blob/0941cfc91cdaa896f16f5e32d20940aab902f88c/autoload/test/csharp/dotnettest.vim#L27
Sorry, but I don't use Windows anymore and I can't help with this one, unfortunately. IIRC escaping ~ was indeed not working for me but I would not say that it is something vim-test specific (I am using neovim, however, so it could have been a neovim issue). @jhonnyslpz, can you confirm what the following command outputs?
echo expand('~')
I wonder if the PR #328 may provider a solution to this
@aignas The command output is C:\users\jhonnys.lopez, my home path.
@codeinabox Do you mean to implement a similar approach?
I believe the issue is in vim-test. The filter format is --filter FullyQualifiedName~xyz; on OSX/Linux it is necessary to escape ~, but this is not working on Windows because it just adds a \, which produces a wrong filter.
Windows CMD Screenshot:
|
2025-04-01T06:39:10.296781
| 2023-10-17T12:31:50
|
1947306712
|
{
"authors": [
"christophe-f",
"divyanshiGupta",
"invincibleJai"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7194",
"repo": "janus-idp/backstage-showcase",
"url": "https://github.com/janus-idp/backstage-showcase/issues/621"
}
|
gharchive/issue
|
CI tab: UXD improvement
What do you want to improve?
I would like to see all CI tools configured for a service presented as radio options when there are multiple, instead of showing them all on the same screen with scrolling
Screenshots:
I don't think a component will use several CI tools.
So if the component is using Jenkins or Tekton (or others), the corresponding plugin will be displayed based on the annotation in the catalog-info yaml file.
If we go that route, it will be a huge amount of work to have all the CI plugins having the same UI.
I don't think a component will use several CI tools. So if the component is using Jenkins or Tekton (or others), the corresponding plugin will be displayed based on the annotation in the catalog-info yaml file.
If we go that route, it will be a huge amount of work to have all the CI plugins having the same UI.
Hi @christophe-f, ideally there will be one, but if catalog-info.yaml has multiple, we currently show them one after another, as below
and thus UX came up with this suggestion to show them as radio options, i.e. one at a time if there are multiple. Let me know if this is not a valid use case or not a priority
cc @ShiranHi
Unassigned myself for now, as it is still being explored whether this should be implemented or not.
Closing it for now based on the above discussion; we'll revisit if there is more demand for it.
|
2025-04-01T06:39:10.300248
| 2024-05-27T16:43:41
|
2319509811
|
{
"authors": [
"Zaperex",
"rm3l"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7195",
"repo": "janus-idp/backstage-showcase",
"url": "https://github.com/janus-idp/backstage-showcase/pull/1277"
}
|
gharchive/pull-request
|
chore(docker): set the NODE_OPTIONS=--no-node-snapshot env variable
Description
Adds the NODE_OPTIONS=--no-node-snapshot env variable so that the scaffolder is usable in nodejs 20.
Which issue(s) does this PR fix
Fixes RHIDP-2436
PR acceptance criteria
Please make sure that the following steps are complete:
[ ] GitHub Actions are completed and successful
[ ] Unit Tests are updated and passing
[ ] E2E Tests are updated and passing
[ ] Documentation is updated if necessary (requirement for new features)
[ ] Add a screenshot if the change is UX/UI related
How to test changes / Special notes to the reviewer
/retest
/test e2e-tests
|
2025-04-01T06:39:10.304567
| 2023-02-12T18:07:50
|
1581355061
|
{
"authors": [
"schultzp2020",
"serenamarie125"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7196",
"repo": "janus-idp/software-templates",
"url": "https://github.com/janus-idp/software-templates/issues/42"
}
|
gharchive/issue
|
Platform engineer can either use this as is or use it as a sample to provide a starting point for their own GPT.
What problem does this solve?
As a Developer
I want a guided UI to create a GH repo for a Go application, which will be managed by Argo using GH Actions
So that I can get started quickly with a Go application
Scenario:
Given: A Developer and an existing Backstage application
When: The developer wants to add a new component to the application
Then: A Go GPT is available to select
Given: A Developer and an existing Backstage application
When The developer adds a new Go component to the application
Then: A starting point application is committed to a source repository
completed by https://github.com/janus-idp/software-templates/pull/65
|
2025-04-01T06:39:10.308068
| 2019-04-01T16:06:34
|
427790801
|
{
"authors": [
"kaihendry"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7197",
"repo": "janza/wl-clipboard-history",
"url": "https://github.com/janza/wl-clipboard-history/issues/1"
}
|
gharchive/issue
|
dmenu bind?
Hi! Do you have a way to get the paste in the clipboard like clipd menu?
https://mpov.timmorgan.org/clipboard-history-in-sway-window-manager/
Thanks!
|
2025-04-01T06:39:10.317940
| 2023-11-08T21:47:40
|
1984449752
|
{
"authors": [
"Julien-R44",
"mrmlnc",
"thetutlage"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7198",
"repo": "japa/snapshot",
"url": "https://github.com/japa/snapshot/issues/3"
}
|
gharchive/issue
|
Cannot open the discussion for the new feature
Package version
2.0.1
Describe the bug
I would like to suggest adding the ability to specify the path to the snapshots directory in options. I wanted to do this via a discussion, as the manual says.
However, the current page for creating issues does not allow me to do so (image), because it links to a repository (different from this one) that has no discussions.
https://github.com/japa/runner/issues/new?title=Discussion%20for%20a%20new%20feature%20-%20%3CYOUR%20FEATURE%20NAME%3E
Marked this as a bug, as I don't see any other way to communicate.
Reproduction repo
No response
@thetutlage will be able to tell what we should do here. Probably open a discussion forum like on the Adonis organisation?
In the meantime, feel free to open a feature request on this repo if needed.
The ability to specify a directory path is something we already have, via resolveSnapshotPath, see: https://japa.dev/docs/plugins/snapshot#configuration-options
I think we can use the AdonisJS discussions forum for the same. We even have a Japa category there for the same. https://github.com/adonisjs/core/discussions/categories/japa
|
2025-04-01T06:39:10.341759
| 2021-12-19T00:43:51
|
1083976320
|
{
"authors": [
"jaredhendrickson13"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7199",
"repo": "jaredhendrickson13/pfsense-api",
"url": "https://github.com/jaredhendrickson13/pfsense-api/pull/184"
}
|
gharchive/pull-request
|
v1.3.3 Fixes
Breaks up field validation on APIServicesDHCPdUpdate.inc to validate each field within its own method.
Addresses an issue that prevented DHCP configurations from being updated when the default interface DHCP configuration was not initialized. (#178)
Adds staticarp field to APIServicesDHCPdUpdate.inc and adds notes regarding issues with static ARP via API (#129).
Updates documentation for range_from and range_to fields on /api/v1/services/dhcpd to state their conditional requirement. (briefly mentioned in #178)
Updates copyright for 2022 year
Fixes typo in documentation for /api/v1/services/dhcpd endpoint that stated incorrect required privilege name
Beta builds are available:
pfSense 2.5:
pkg add https://github.com/jaredhendrickson13/pfsense-api/files/7747370/pfSense-2.5-pkg-API-1.3_3beta_1.zip && /etc/rc.restart_webgui
pfSense 2.6:
pkg add https://github.com/jaredhendrickson13/pfsense-api/files/7747371/pfSense-2.6-pkg-API-1.3_3beta_1.zip && /etc/rc.restart_webgui
pfSense-2.5-pkg-API-1.3_3beta_1.zip
pfSense-2.6-pkg-API-1.3_3beta_1.zip
|
2025-04-01T06:39:10.347613
| 2019-08-15T16:49:06
|
481239454
|
{
"authors": [
"ivan200",
"tianma8023"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7200",
"repo": "jaredrummler/Cyanea",
"url": "https://github.com/jaredrummler/Cyanea/issues/68"
}
|
gharchive/issue
|
?actionBarTheme for Toolbar seems incorrect before changing the theme
In the demo-main module, we can see that the DrawerActivity includes a Toolbar.
But on first run (without changing the theme), the toolbar's text color is incorrect. Here are the screenshots:
It's okay in MainActivity, which doesn't use a Toolbar.
The Toolbar's title text color and the other menu icons' color are black rather than white.
Devices:
Genymotion Android 8.0
Xiaomi mix2s MIUI 10 Android 9.0
Have same problem.
Launch demo-simple-java. Go to settings. Choose primary color - yellow. Go Back.
And here we go:
How can I fix this?
|
2025-04-01T06:39:10.365943
| 2017-11-25T11:18:30
|
276743727
|
{
"authors": [
"Malvineous",
"jarro2783"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7201",
"repo": "jarro2783/cxxopts",
"url": "https://github.com/jarro2783/cxxopts/issues/84"
}
|
gharchive/issue
|
Positional parameters cannot follow a no-parameter option
If you have the following scenario:
Option -c that takes no parameters
Positional parameters
The the following command line works:
./example blah -c
However this command line fails:
./example -c blah
Argument 'blah' failed to parse
Since the -c option takes no parameters, it would be nice if it could be used anywhere in the command line and not just at the end.
In case it's relevant, if you have another option that takes a parameter then that does work:
./example -c -t test blah
So if a no-parameter option is followed by another dashed option then it works, but when it's followed by a potential positional parameter then it fails.
Just so you know what's going on: this was caused by 6c9bae4a071d6892069e6bd998fbae47193df0a8, which added parsing for boolean values, so that you can write things like --foo=false to explicitly disable an option. The problem is that it treated booleans as taking a parameter now, so -c blah tries to parse blah as a boolean value.
It works in a lot of use cases because you might write something like -c -f file, and the -f looks like an option, so it skips it.
The short story is that the handling of implicit values is a bit broken, and the entire parser loop is a bit of a mess. So I'll have to work out the best way to patch or rewrite this so that it behaves in a sane way.
No worries at all, thanks for the update!
Sounds like it could be tricky if you consider -c true as equivalent to -c. I guess if only the long variant is allowed and only with an equals sign (--foo=false) then you could get away with it, but as soon as you allow a space I think you'll end up with ambiguity. Take this for example:
(1) $ ./power-on true true # Turn on devices 1 and 2
(2) $ ./power-on --delay=true true # Turn on only device 1 with a delay
(3) $ ./power-on --delay true true # Turn on devices 1 and 2 with a delay (or maybe only device 1 with a delay?)
I think (3) is always going to be ambiguous unless you require the equals as in (2).
Yes I think that's how I'll fix it. It's too ambiguous otherwise, especially when --delay --foo is parsed as two arguments because of the - at the start of --foo.
|
2025-04-01T06:39:10.454937
| 2019-11-08T03:56:43
|
519674926
|
{
"authors": [
"JohnRoesler",
"Streppel"
],
"license": "BSD-2-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7202",
"repo": "jasonlvhit/gocron",
"url": "https://github.com/jasonlvhit/gocron/pull/125"
}
|
gharchive/pull-request
|
reduce test time, fix spelling, add Week()
Reduced the testing time by lowering some of the iterations and sleeps
Spelling fixes
Added Week() it was missing
Use the interval constants named functions
@Streppel
Cool @JohnRoesler! Ideally, working on #88 would fix the long testing time as we'd to wrap the clock and use a stub instead. I'm having some thoughts on the best way to implement it.
|
2025-04-01T06:39:10.463186
| 2015-01-20T12:14:40
|
54876135
|
{
"authors": [
"Belkar",
"aivascu",
"jasonsanjose"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7203",
"repo": "jasonsanjose/brackets-sass",
"url": "https://github.com/jasonsanjose/brackets-sass/issues/79"
}
|
gharchive/issue
|
@import '_file.scss'; not working
Might be a noobish misunderstanding, but importing a dependency file doesn't work for me.
When trying to import a partial scss file I'm getting the 'file to import not found or unreadable' error.
The main scss file and the partial file are in the same directory.
No need to add the underscore or the file extension for partials. Try @import "file".
Thanks Jason!
It still gives me the error but at least it generates the *.css file correctly.
Also, in live preview the changes in the imported *.scss file don't apply until the importing file is changed (re-saved).
Also, in live preview the changes in the imported *.scss file don't apply until the importing file is changed (re-saved).
Same problem here. This makes developing and styling tedious
Could you try syncing to the latest https://github.com/jasonsanjose/bourbon-example. I updated it to include a local partial file for importing.
@Belkar also please try the latest 1.0.4-83 update. Are you on mac or win?
Updated the extension, still getting the same behavior.
Bourbon has the same behavior too.
I am running Windows 8.1.
Hmm. I'm on windows and I'm still not seeing an issue. Have you tried quitting and restarting Brackets?
@Belkar also please try the latest 1.0.4-83 update. Are you on mac or win?
Thanks it works now :smile:
@jasonsanjose yes I restarted Brackets several times. Haven't tried a reinstall though. That usually cuts it with programs on Windows.
Here is a demo of my problem.
@Belkar what OS are you running?
I'm using Windows 8.1.
So I tried several things. I uninstalled Brackets; removed the installation folder from Program Files; cleaned the registries, the temp files and the Roaming folder; Installed Brackets; Installed brackets-sass.
Not sure which one of these things helped, but it works for me too. I'm still getting the false error, but at least it updates the partial styles in live preview.
@Belkar it worked for me with the default config.
Thanks for the help guys.
Glad it's working for you. Sorry for the extra trouble. Closing.
|
2025-04-01T06:39:10.586339
| 2022-11-29T08:09:38
|
1467629638
|
{
"authors": [
"csviri",
"wangchenglonggithub"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7204",
"repo": "java-operator-sdk/java-operator-sdk",
"url": "https://github.com/java-operator-sdk/java-operator-sdk/issues/1634"
}
|
gharchive/issue
|
errors: trying to send message larger than max (2179626 vs. 2097152)
I have an error message:
i.joperator.processing.EventDispatcher: Kubernetes exception 500: failure executing: PUT at http://xxx. Message: rpc error: code = trying to send message larger than max (2179622 vs. 2097152)
I could not find a configuration option to solve this problem.
Please help me
Hi @wangchenglonggithub , this seems to me to be a generic issue with the Kubernetes API server.
probably the best place to ask is the sig-api-machinery channel on Kubernetes Slack.
see:
https://kubernetes.slack.com/archives/C0EG7JC6T/p1669669822416379
Will close this for now. @wangchenglonggithub, please let us know if you find out the problem.
|
2025-04-01T06:39:10.589363
| 2015-04-29T23:53:59
|
72020097
|
{
"authors": [
"arreche",
"jayseejc",
"jephillips"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7205",
"repo": "javabrewery/brew-chat",
"url": "https://github.com/javabrewery/brew-chat/pull/27"
}
|
gharchive/pull-request
|
Little code clean up
Reviewed code style.
Sorted methods. Android methods go first, then user actions and finally bus events.
Added static ChatService assessor to Application.
Started ChatService in the background to pass StrictMode checks.
Ensured to handle UI events in the main thread.
Refactored common stuff into a BaseActivity.
Looks fantastic! +1
Good work +1
Perhaps we should create some style rules at some point to get everyone on the same page
Maybe we can use Checkstyle
|
2025-04-01T06:39:10.633124
| 2020-02-13T10:10:22
|
564573734
|
{
"authors": [
"bookyo",
"sanex3339"
],
"license": "bsd-2-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7206",
"repo": "javascript-obfuscator/javascript-obfuscator",
"url": "https://github.com/javascript-obfuscator/javascript-obfuscator/issues/556"
}
|
gharchive/issue
|
When I use <EMAIL_ADDRESS> it causes a very strange error.
I have about 4000 lines of Node code. Version 0.18.7 works fine, but 0.24.5 causes a very strange error.
like:
const { title, permission, group} = req.body;
Sometimes group, or some other property from req.body, becomes undefined, even though the body I post is fine.
I have hundreds of const { arg1, arg2, arg3, ... } = req.body; statements,
but when I submit, some args become undefined.
my config:
compact code: true
identifier Names Generator: hexadecimal
self defending: true
control flow flatteing: true
control flow flattening threshold: 0.8
dead code injection: true
dead code injection threshold: 0.4
string array: true
rotate string array: true
string array encoding: rc4
string array threshold: 0.8
unicode escape sequence: true
disable console output: true
debug protection: true
debug protection interval: true
seed:0
target: node
const vipbuys = await Vipbuy.find().populate('group');
will cause TypeError: Cannot read property 'refPath' of undefined.
It's as if 'group' were undefined.
Hmm. Transform object keys is disabled?
Try removing all options; does the error still happen?
I need an example of the code
like:
exports.postadddownload = async (req, res) => {
const { name, url, type } = req.body;
await Download.create({
name,
url,
path: './download/' + name + '.mp4',
status: '等待下载', // "waiting to download"
type,
})
res.redirect('/admin/download');
}
After posting, the type property is lost. I don't know what's happening; maybe somewhere another property clashes with this 'type'.
Can you debug:
const { title, permission, group} = req.body;
after this line, group already undefined?
I can fix it by running the obfuscator again. I think this is because I have more functions that include a type property.
like:
exports.posteditad = async (req, res) => {
const id = req.body.id;
const { title, type } = req.body;
let types = [];
types = types.concat(type);
console.log(types);
await Ad.updateOne({ _id: id }, { $set: { title, type: types.join(',') } });
res.redirect('/admin/ad');
};
exports.postaddapp = async (req, res) => {
const { title, theimg, link, duration, type } = req.body;
await App.create({ title, img: theimg, link, duration, type });
res.redirect('/admin/app');
};
I can't reproduce it, sorry. I already fixed it by running the obfuscator again.
|
2025-04-01T06:39:10.671489
| 2013-10-13T05:01:34
|
20921951
|
{
"authors": [
"ermagana",
"javve"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7214",
"repo": "javve/list.js",
"url": "https://github.com/javve/list.js/pull/157"
}
|
gharchive/pull-request
|
Added support for flat data structures
I added support for key-value instances, where the li element's text is the value and no complex data structure is used.
Sorry, but this is not a feature I want to add. Thanks anyways!
|
2025-04-01T06:39:10.699507
| 2024-01-03T10:20:17
|
2063701034
|
{
"authors": [
"Ousret",
"dwt"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7215",
"repo": "jawah/niquests",
"url": "https://github.com/jawah/niquests/issues/60"
}
|
gharchive/issue
|
Unable to get http3 request against pie.dev
Hi there, I was directed here by @Ousret from a merge request on httpie - hopefully this is the right location to file this bug. Please note, I would be happy to provide any additional information and try out experiments if that helps debug this issue.
Summary
I expected to get a http3 connection to pie.dev
❯ python
Python 3.12.1 (main, Dec 8 2023, 18:57:37) [Clang 14.0.3 (clang-14<IP_ADDRESS>.1)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from niquests import Session
>>>
>>> with Session() as s:
... print(s.get("https://pie.dev/get"))
...
<Response HTTP/2 [200]>
>>> from urllib3.contrib.hface import HTTPProtocolFactory, HTTP3Protocol
>>>
>>> print(HTTPProtocolFactory.new(HTTP3Protocol))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/dwt/.virtualenvs/tempenv-714d1721928c9/lib/python3.12/site-packages/urllib3/contrib/hface/protocols/_factories.py", line 126, in new
return implementation_target(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: HTTP3ProtocolAioQuicImpl.__init__() missing 3 required keyword-only arguments: 'remote_address', 'server_name', and 'tls_config'
Reproduction Steps
See above
System Information
$ python -m niquests.help
❯ python --version
Python 3.12.1
~/C/P/httpie 🐍 tempenv-714d1721928c9 🌱 feature-tryout-niquests
❯ macosver
10.16
~/C/P/httpie 🐍 tempenv-714d1721928c9 🌱 feature-tryout-niquests
❯ pip list
Package Version Editable project location
------------------ ---------- -------------------------------
certifi 2023.11.17
cffi 1.16.0
charset-normalizer 3.3.2
cryptography 41.0.7
defusedxml 0.7.1
h11 0.14.0
h2 4.1.0
hpack 4.0.0
httpie 4.0.0b1 /Users/dwt/Code/Projekte/httpie
hyperframe 6.0.1
idna 3.6
kiss-headers 2.4.3
markdown-it-py 3.0.0
mdurl 0.1.2
multidict 6.0.4
niquests 3.4.0
pip 23.3.2
pycparser 2.21
Pygments 2.17.2
python-socks 2.4.4
qh3 0.14.0
requests 2.31.0
requests-toolbelt 1.0.0
rich 13.7.0
setuptools 69.0.3
urllib3 2.1.0
urllib3-future 2.4.902
wassima 1.0.3
~/C/P/httpie 🐍 tempenv-714d1721928c9 🌱 feature-tryout-niquests
❯ python -m niquests.help
{
"charset_normalizer": {
"version": "3.3.2"
},
"cryptography": {
"version": "41.0.7"
},
"http1": {
"h11": "0.14.0"
},
"http2": {
"h2": "4.1.0"
},
"http3": {
"enabled": true,
"qh3": "0.14.0"
},
"idna": {
"version": "3.6"
},
"implementation": {
"name": "CPython",
"version": "3.12.1"
},
"niquests": {
"version": "3.4.0"
},
"ocsp": {
"enabled": true
},
"platform": {
"release": "22.6.0",
"system": "Darwin"
},
"system_ssl": {
"version": "30200000"
},
"urllib3.future": {
"version": "2.4.902"
},
"wassima": {
"certifi_fallback": false,
"enabled": true,
"version": "1.0.3"
}
}
OK, let's try to get more intel.
import logging
from niquests import Session
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
explain_handler = logging.StreamHandler()
explain_handler.setFormatter(
logging.Formatter("%(asctime)s | %(levelname)s | %(message)s")
)
logger.addHandler(explain_handler)
with Session() as s:
print(s.get("https://pie.dev/get"))
❯ python
Python 3.12.1 (main, Dec 8 2023, 18:57:37) [Clang 14.0.3 (clang-14<IP_ADDRESS>.1)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import logging
>>> from niquests import Session
>>>
>>> logger = logging.getLogger()
>>> logger.setLevel(logging.DEBUG)
>>> explain_handler = logging.StreamHandler()
>>> explain_handler.setFormatter(
... logging.Formatter("%(asctime)s | %(levelname)s | %(message)s")
... )
>>> logger.addHandler(explain_handler)
>>>
>>> with Session() as s:
... print(s.get("https://pie.dev/get"))
...
2024-01-03 12:07:11,846 | DEBUG | Converted retries value: 0 -> Retry(total=0, connect=None, read=None, redirect=None, status=None)
2024-01-03 12:07:11,847 | DEBUG | Converted retries value: 0 -> Retry(total=0, connect=None, read=None, redirect=None, status=None)
2024-01-03 12:07:11,859 | DEBUG | Starting new HTTPS connection (1): pie.dev:443
2024-01-03 12:07:12,136 | DEBUG | Converted retries value: 0 -> Retry(total=0, connect=None, read=None, redirect=None, status=None)
2024-01-03 12:07:12,136 | DEBUG | Converted retries value: 0 -> Retry(total=0, connect=None, read=None, redirect=None, status=None)
2024-01-03 12:07:12,138 | DEBUG | Starting new HTTP connection (1): e1.o.lencr.org:80
2024-01-03 12:07:12,250 | DEBUG | http://e1.o.lencr.org:80 "POST / HTTP/1.1" 200 344
2024-01-03 12:07:12,253 | DEBUG | Adding (b':method', b'GET') to the header table, sensitive:False, huffman:True
2024-01-03 12:07:12,253 | DEBUG | Encoding 2 with 7 bits
2024-01-03 12:07:12,253 | DEBUG | Adding (b':scheme', b'https') to the header table, sensitive:False, huffman:True
2024-01-03 12:07:12,253 | DEBUG | Encoding 7 with 7 bits
2024-01-03 12:07:12,253 | DEBUG | Adding (b':path', b'/get') to the header table, sensitive:False, huffman:True
2024-01-03 12:07:12,253 | DEBUG | Encoding 4 with 6 bits
2024-01-03 12:07:12,253 | DEBUG | Encoding 3 with 7 bits
2024-01-03 12:07:12,253 | DEBUG | Adding (b':authority', b'pie.dev') to the header table, sensitive:False, huffman:True
2024-01-03 12:07:12,253 | DEBUG | Encoding 1 with 6 bits
2024-01-03 12:07:12,253 | DEBUG | Encoding 5 with 7 bits
2024-01-03 12:07:12,253 | DEBUG | Adding (b'user-agent', b'niquests/3.4.0') to the header table, sensitive:False, huffman:True
2024-01-03 12:07:12,253 | DEBUG | Encoding 58 with 6 bits
2024-01-03 12:07:12,253 | DEBUG | Encoding 10 with 7 bits
2024-01-03 12:07:12,253 | DEBUG | Adding (b'accept-encoding', b'gzip, deflate') to the header table, sensitive:False, huffman:True
2024-01-03 12:07:12,254 | DEBUG | Encoding 16 with 7 bits
2024-01-03 12:07:12,254 | DEBUG | Adding (b'accept', b'*/*') to the header table, sensitive:False, huffman:True
2024-01-03 12:07:12,254 | DEBUG | Encoding 19 with 6 bits
2024-01-03 12:07:12,254 | DEBUG | Encoding 3 with 7 bits
2024-01-03 12:07:12,254 | DEBUG | Encoded header block to b'\x82\x87D\x83bb\xa7A\x85\xac\xc5^B\xf7z\x8a\xa8\xdd\xad*\x12\x86\x19]\xa5\xc1\x90S\x83\xf9c\xe7'
2024-01-03 12:07:12,305 | DEBUG | Decoding b'?\xe1\x1f\x88a\x96\xe4Y>\x94\x03*e\x1dJ\x08\x02i@\x86\xe0\x1d\xb8\x11)\x8bF\xff_\x8b\x1du\xd0b\r&=LtA\xeaT\x01*@\x96\x19\x08T!b\x1e\xa4\xd8z\x16\x1d\x14\x1f\xc2\xc4\xb0\xb2\x16\xa4\x98t#\x83M\x96\x97@\x8a$\xab\x10d\x9c\xab!#M\xa8\x86\xbf\xcfL:2^@\x87\xb0\xb5\x9e\xc4\xac\x93\xff\xffH\xff\xfd\xfc\x96\xa9+9\xaaJ?\x9b\x9f\xfb\xff\xfd\xfc\xdbe\x1f\xcd\xcf\xe6t\xa6\xb4\\\xff\xfe\x0c\x7f\xff\x06\x06\xbdE\xa1rP{d\x96\x81\xd8U\xc8z\x7f\xff\x83\x16\x16\xb3\xd8\x9f\xff\xe0\xc7v\x7f\xc4A\x9f\x81U\x15\xd1\x8b\x898\xee\xbe\x87\xf0\xe2\xd6a\xdb\xd02*+\xbbp\xf1\xd5+\x93TuE\x86\x95Eu\x00\x9e\x91\xef\xcaX\xe6\xd7\xeb~\x9f\n\x8b\x0e\t\xaf\xedg\x9b\xdc3\xf1\xf5\x82\xee\xc0>4\xfe~\xf6j\xfe\xb9\xe3\x7f\xd8=\x14\x98a\x0c\xc3\xb3\x87z\xc8\x8f^\xf68\xe7EE\x87Sx\x80\xdf\xefLCCro4\x8e\xbe\xdf\x1f\xe7\xff\xdf\xfe}\x7f3X{k\xfen\x7f$\x95j\x8bG\xf3\xf5\xfc\xd2?1\x0eb\xff7\x1c\x03O\x00\x1f\xfe\xff@\x03nel\xb1\xff\xfd\xfc\xa2\xd2\x10\xa8DR\xd82$\xc7\xab\xf9\xb8\x0f\xaf\xe6\xc2\xd6{\x13\x12O\xfc\xdc\xfeI*\xd5\x16\x8f\xe7\xeb\xf9\xa4~b\x1c\xc5\xfen8\x06\x9e\x00?\xfdv\x87%\x07\xb6Ih\x1d\x85@\x85$\xabX?_\x8fy\x99FG$|\xa5}\xd8\xdfm\xa5\xa1\xd1\xbbZ\x83\x9b\xd9\xab@\x85\x1d\tY\x1d\xc9\x90\x9d\x98?\x9b\x8d4\xcf\xf3\xf6\xa5#\x81\xe7\x1a\x00?'
2024-01-03 12:07:12,305 | DEBUG | Decoded 4096, consumed 3 bytes
2024-01-03 12:07:12,305 | DEBUG | Resizing header table to 4096 from 4096
2024-01-03 12:07:12,305 | DEBUG | Decoded 8, consumed 1 bytes
2024-01-03 12:07:12,305 | DEBUG | Decoded (b':status', b'200'), consumed 1
2024-01-03 12:07:12,305 | DEBUG | Decoded 33, consumed 1 bytes
2024-01-03 12:07:12,305 | DEBUG | Decoded 22, consumed 1 bytes
2024-01-03 12:07:12,305 | DEBUG | Decoded (b'date', b'Wed, 03 Jan 2024 11:07:12 GMT'), total consumed 24 bytes, indexed True
2024-01-03 12:07:12,305 | DEBUG | Decoded 31, consumed 1 bytes
2024-01-03 12:07:12,305 | DEBUG | Decoded 11, consumed 1 bytes
2024-01-03 12:07:12,305 | DEBUG | Decoded (b'content-type', b'application/json'), total consumed 13 bytes, indexed True
2024-01-03 12:07:12,305 | DEBUG | Decoded 20, consumed 1 bytes
2024-01-03 12:07:12,305 | DEBUG | Decoded 1, consumed 1 bytes
2024-01-03 12:07:12,305 | DEBUG | Decoded (b'access-control-allow-origin', <memory at 0x102dfb700>), total consumed 3 bytes, indexed True
2024-01-03 12:07:12,305 | DEBUG | Decoded 22, consumed 1 bytes
2024-01-03 12:07:12,305 | DEBUG | Decoded 3, consumed 1 bytes
2024-01-03 12:07:12,305 | DEBUG | Decoded (b'access-control-allow-credentials', b'true'), total consumed 28 bytes, indexed True
2024-01-03 12:07:12,305 | DEBUG | Decoded 10, consumed 1 bytes
2024-01-03 12:07:12,306 | DEBUG | Decoded 6, consumed 1 bytes
2024-01-03 12:07:12,306 | DEBUG | Decoded (b'cf-cache-status', b'DYNAMIC'), total consumed 19 bytes, indexed True
2024-01-03 12:07:12,306 | DEBUG | Decoded 7, consumed 1 bytes
2024-01-03 12:07:12,306 | DEBUG | Decoded 199, consumed 2 bytes
2024-01-03 12:07:12,306 | DEBUG | Decoded (b'report-to', b'{"endpoints":[{"url":"https:\\/\\/a.nel.cloudflare.com\\/report\\/v3?s=LUe%2Ba2VcVSDs9FGPiauj1d%2BRFVOf6gno%2Fm%2Bs0hmaTJebgPyTNw%2FEgDR3Y8ULVyEBQ09atXZq4DPhb9z0yecFA1garUvpcsyzQ66j%2FO5G05ZjGas5dTid795V"}],"group":"cf-nel","max_age":604800}'), total consumed 210 bytes, indexed True
2024-01-03 12:07:12,306 | DEBUG | Decoded 3, consumed 1 bytes
2024-01-03 12:07:12,306 | DEBUG | Decoded 49, consumed 1 bytes
2024-01-03 12:07:12,306 | DEBUG | Decoded (<memory at 0x102dfb1c0>, b'{"success_fraction":0,"report_to":"cf-nel","max_age":604800}'), total consumed 55 bytes, indexed True
2024-01-03 12:07:12,306 | DEBUG | Decoded 54, consumed 1 bytes
2024-01-03 12:07:12,306 | DEBUG | Decoded 7, consumed 1 bytes
2024-01-03 12:07:12,306 | DEBUG | Decoded (b'server', b'cloudflare'), total consumed 9 bytes, indexed True
2024-01-03 12:07:12,306 | DEBUG | Decoded 5, consumed 1 bytes
2024-01-03 12:07:12,306 | DEBUG | Decoded 15, consumed 1 bytes
2024-01-03 12:07:12,306 | DEBUG | Decoded (b'cf-ray', b'83fac6d9ee97b954-AMS'), total consumed 23 bytes, indexed True
2024-01-03 12:07:12,306 | DEBUG | Decoded 26, consumed 1 bytes
2024-01-03 12:07:12,306 | DEBUG | Decoded 3, consumed 1 bytes
2024-01-03 12:07:12,306 | DEBUG | Decoded (b'content-encoding', b'gzip'), total consumed 5 bytes, indexed True
2024-01-03 12:07:12,306 | DEBUG | Decoded 5, consumed 1 bytes
2024-01-03 12:07:12,306 | DEBUG | Decoded 16, consumed 1 bytes
2024-01-03 12:07:12,306 | DEBUG | Decoded (b'alt-svc', b'h3=":443"; ma=86400'), total consumed 24 bytes, indexed True
2024-01-03 12:07:12,306 | DEBUG | https://pie.dev:443 "GET /get HTTP/2.0" 200 None
<Response HTTP/2 [200]>
>>>
Everything seems fine on the Niquests side.
I have a reasonable explanation as to why. Forget about the HTTP/1.1 request: it is an OCSP request made to ensure you're not being MITM-attacked.
from niquests import Session
with Session() as s:
print(s.get("https://pie.dev/get"))
print(s.get("https://pie.dev/get"))
Then run
from niquests import Session
with Session(resolver="doh+google://") as s:
print(s.get("https://pie.dev/get"))
On the HTTPie side, verify your config directory is actually writable.
Run (twice):
https pie.dev/get
and
https --resolver "doh+google://" pie.dev/get
You should have a file in /Users/dwt/.config/httpie named quic.json.
Finally, I did find a bug in the PR. The caching layer wasn't properly injected in the custom HTTPSAdapter, which made the --http3 flag useless.
I fixed it. Thank you for the report and the debugging.
Tell me if this (recent push) fixed the problem.
Do you confirm it's fixed? If so, feel free to close this issue.
Re:
I am going to suppose everything is OK on your side.
If not, do not hesitate to let me know.
Regards,
Sorry, I was a bit away the last few days - I have retested now and had these findings:
❯ python
Python 3.12.1 (main, Dec 8 2023, 18:57:37) [Clang 14.0.3 (clang-14<IP_ADDRESS>.1)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from niquests import Session
with Session() as s:
print(s.get("https://pie.dev/get"))
print(s.get("https://pie.dev/get"))
>>>
>>> with Session() as s:
... print(s.get("https://pie.dev/get"))
... print(s.get("https://pie.dev/get"))
...
<Response HTTP/2 [200]>
<Response HTTP/3 [200]>
>>> ^D
~/C/P/httpie 🐍 tempenv-714d1721928c9 🌱 feature-tryout-niquests 6s
❯ python
Python 3.12.1 (main, Dec 8 2023, 18:57:37) [Clang 14.0.3 (clang-14<IP_ADDRESS>.1)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from niquests import Session
with Session(resolver="doh+google://") as s:
print(s.get("https://pie.dev/get"))
>>>
>>> with Session(resolver="doh+google://") as s:
... print(s.get("https://pie.dev/get"))
...
<Response HTTP/3 [200]>
>>> ^D
This seems to work - could you perhaps elaborate why the first request didn't work but the second did? Does the session need to collect the information that the host is http3 capable before trying it?
On the shell I was not so lucky:
❯ https pie.dev/get
HTTP/2 200 OK
Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: *
Alt-Svc: h3=":443"; ma=86400
Cf-Cache-Status: DYNAMIC
Cf-Ray: 842cb3d0b8d70baa-AMS
Content-Encoding: gzip
Content-Type: application/json
Date: Tue, 09 Jan 2024 12:32:20 GMT
Nel: {"success_fraction":0,"report_to":"cf-nel","max_age":604800}
Report-To: {"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v3?s=2oJxp%2FRmNZ11y7hhOr9bkcI3MRmBHh0wyDGi5ZbvJbFo7tkxRJ7ULTPR%2F4EJslQGJhR4B2jh2zwO9BkNA8Jl55t2n%2FAHrW99AC7QjOTWDWqvQhgGvSdkuZhV"}],"group":"cf-nel","max_age":604800}
Server: cloudflare
{
"args": {},
"headers": {
"Accept": "*/*",
"Accept-Encoding": "gzip",
"Cdn-Loop": "cloudflare",
"Cf-Connecting-Ip": "<IP_ADDRESS>",
"Cf-Ipcountry": "DE",
"Cf-Ray": "842cb3d0b8d70baa-FRA",
"Cf-Visitor": "{\"scheme\":\"https\"}",
"Connection": "Keep-Alive",
"Host": "pie.dev",
"User-Agent": "HTTPie/4.0.0.b1"
},
"origin": "<IP_ADDRESS>",
"url": "https://pie.dev/get"
}
~/C/P/httpie 🐍 tempenv-714d1721928c9 🌱 feature-tryout-niquests
❯ https pie.dev/get
HTTP/2 200 OK
Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: *
Alt-Svc: h3=":443"; ma=86400
Cf-Cache-Status: DYNAMIC
Cf-Ray: 842cb3e7be386fab-CDG
Content-Encoding: gzip
Content-Type: application/json
Date: Tue, 09 Jan 2024 12:32:24 GMT
Nel: {"success_fraction":0,"report_to":"cf-nel","max_age":604800}
Report-To: {"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v3?s=x7%2BYUHaNka9LBnCLM6ndKctXAAokJ1BGvBYvJF9pdbc0yjLZowYIT68ULkJ2sQyw50STbTjTWS8uE2wWAb0%2BmMkbE4JWrCczanMqBw%2BN8Mw0EH24c7eG1RVY"}],"group":"cf-nel","max_age":604800}
Server: cloudflare
{
"args": {},
"headers": {
"Accept": "*/*",
"Accept-Encoding": "gzip",
"Cdn-Loop": "cloudflare",
"Cf-Connecting-Ip": "<IP_ADDRESS>",
"Cf-Ipcountry": "DE",
"Cf-Ray": "842cb3e7be386fab-FRA",
"Cf-Visitor": "{\"scheme\":\"https\"}",
"Connection": "Keep-Alive",
"Host": "pie.dev",
"User-Agent": "HTTPie/4.0.0.b1"
},
"origin": "<IP_ADDRESS>",
"url": "https://pie.dev/get"
}
~/C/P/httpie 🐍 tempenv-714d1721928c9 🌱 feature-tryout-niquests
❯ https --resolver "doh+google://" pie.dev/get
HTTP/2 200 OK
Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: *
Alt-Svc: h3=":443"; ma=86400
Cf-Cache-Status: DYNAMIC
Cf-Ray: 842cb4360fa76627-AMS
Content-Encoding: gzip
Content-Type: application/json
Date: Tue, 09 Jan 2024 12:32:36 GMT
Nel: {"success_fraction":0,"report_to":"cf-nel","max_age":604800}
Report-To: {"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v3?s=Fp5K55yzrboR%2FvkSr%2FPSGCQDM4Ghoo0SrBN%2B6c4aGvLtcMZ%2Byb4YFsyN5cXTPwzXXNQ4pQuOSO95TJcA%2FscaiqVFzaZhOREMhbW4n3atnpWkDsFXVNXKsK1B"}],"group":"cf-nel","max_age":604800}
Server: cloudflare
{
"args": {},
"headers": {
"Accept": "*/*",
"Accept-Encoding": "gzip",
"Cdn-Loop": "cloudflare",
"Cf-Connecting-Ip": "<IP_ADDRESS>",
"Cf-Ipcountry": "DE",
"Cf-Ray": "842cb4360fa76627-FRA",
"Cf-Visitor": "{\"scheme\":\"https\"}",
"Connection": "Keep-Alive",
"Host": "pie.dev",
"User-Agent": "HTTPie/4.0.0.b1"
},
"origin": "<IP_ADDRESS>",
"url": "https://pie.dev/get"
}
~/C/P/httpie 🐍 tempenv-714d1721928c9 🌱 feature-tryout-niquests
❯ https --resolver "doh+google://" pie.dev/get
HTTP/2 200 OK
Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: *
Alt-Svc: h3=":443"; ma=86400
Cf-Cache-Status: DYNAMIC
Cf-Ray: 842cb44ccf350a6f-AMS
Content-Encoding: gzip
Content-Type: application/json
Date: Tue, 09 Jan 2024 12:32:40 GMT
Nel: {"success_fraction":0,"report_to":"cf-nel","max_age":604800}
Report-To: {"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v3?s=23Q18VEt4d3YuApMUBS0NTExzN7TV5dpwqZTzifTgpck9dNsMyCCsyur8MPNw%2BIJy361tQXa3hGY%2F%2FzQzq3%2Fx1kF2FooPIdblwTZQLSIeDDSxchRi6mKuwmO"}],"group":"cf-nel","max_age":604800}
Server: cloudflare
{
"args": {},
"headers": {
"Accept": "*/*",
"Accept-Encoding": "gzip",
"Cdn-Loop": "cloudflare",
"Cf-Connecting-Ip": "<IP_ADDRESS>",
"Cf-Ipcountry": "DE",
"Cf-Ray": "842cb44ccf350a6f-FRA",
"Cf-Visitor": "{\"scheme\":\"https\"}",
"Connection": "Keep-Alive",
"Host": "pie.dev",
"User-Agent": "HTTPie/4.0.0.b1"
},
"origin": "<IP_ADDRESS>",
"url": "https://pie.dev/get"
}
~/C/P/httpie 🐍 tempenv-714d1721928c9 🌱 feature-tryout-niquests
❯ https --http3 --resolver "doh+google://" pie.dev/get
HTTP/2 200 OK
Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: *
Alt-Svc: h3=":443"; ma=86400
Cf-Cache-Status: DYNAMIC
Cf-Ray: 842cb5b37abe6f04-CDG
Content-Encoding: gzip
Content-Type: application/json
Date: Tue, 09 Jan 2024 12:33:37 GMT
Nel: {"success_fraction":0,"report_to":"cf-nel","max_age":604800}
Report-To: {"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v3?s=6R7RN875gKI2cArgg3Mw9EHgmOQdoP5FgQUhkDgs%2F4A6QPxZjnzvdei1aMvyxovWIxvVNdYjScTDrYr%2BtTnHdMKOTePN6X0O4bqBgDplqFoLr1L3%2BRV54SZp"}],"group":"cf-nel","max_age":604800}
Server: cloudflare
{
"args": {},
"headers": {
"Accept": "*/*",
"Accept-Encoding": "gzip",
"Cdn-Loop": "cloudflare",
"Cf-Connecting-Ip": "<IP_ADDRESS>",
"Cf-Ipcountry": "DE",
"Cf-Ray": "842cb5b37abe6f04-FRA",
"Cf-Visitor": "{\"scheme\":\"https\"}",
"Connection": "Keep-Alive",
"Host": "pie.dev",
"User-Agent": "HTTPie/4.0.0.b1"
},
"origin": "<IP_ADDRESS>",
"url": "https://pie.dev/get"
}
~/C/P/httpie 🐍 tempenv-714d1721928c9 🌱 feature-tryout-niquests
❯ https --http3 pie.dev/get
HTTP/2 200 OK
Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: *
Alt-Svc: h3=":443"; ma=86400
Cf-Cache-Status: DYNAMIC
Cf-Ray: 842cb5db7d00f110-CDG
Content-Encoding: gzip
Content-Type: application/json
Date: Tue, 09 Jan 2024 12:33:44 GMT
Nel: {"success_fraction":0,"report_to":"cf-nel","max_age":604800}
Report-To: {"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v3?s=Jhqw1wkYiH55ne8D4%2B4qGnyyHbbx01QNCTZcCcqQYFuXv%2FwQb3ZGu%2FYmZI6VBiglfw4GXmo2XhG0F26m9ZdSgur4U3Gl68Rz83DOglkIAgtv4NtTYxWz9aor"}],"group":"cf-nel","max_age":604800}
Server: cloudflare
{
"args": {},
"headers": {
"Accept": "*/*",
"Accept-Encoding": "gzip",
"Cdn-Loop": "cloudflare",
"Cf-Connecting-Ip": "<IP_ADDRESS>",
"Cf-Ipcountry": "DE",
"Cf-Ray": "842cb5db7d00f110-FRA",
"Cf-Visitor": "{\"scheme\":\"https\"}",
"Connection": "Keep-Alive",
"Host": "pie.dev",
"User-Agent": "HTTPie/4.0.0.b1"
},
"origin": "<IP_ADDRESS>",
"url": "https://pie.dev/get"
}
All the while the config directory seems to be perfectly writable, but no config file shows up there.
~/.config/httpie
❯ ls -al .
total 8
drwxr-xr-x 3 dwt staff 96 3 Jan 10:43 ./
drwxr-xr-x 23 dwt staff 736 3 Jan 10:43 ../
-rw-r--r-- 1 dwt staff 221 3 Jan 10:43 version_info.json
If you give me a pointer on where to start, I can try to debug the library to see why it can't seem to write to that directory.
OK.
Let's try to further understand your case.
Immediately, I would try to remove httpie completely, redo the installation of the fork/patch, and try again. I am suspicious about how --http3 did not work. I think that will definitely resolve your case. Let me know.
could you perhaps elaborate why the first request didn't work but the second did? Does the session need to collect the information that the host is http3 capable before trying it?
Without a custom DNS resolver, you cannot reach an HTTP/3 endpoint without first establishing an HTTP/1 or HTTP/2 link; that's why.
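To restate the mechanism the transcript shows, a minimal hedged sketch (endpoint per the thread; the response reprs are examples):
from niquests import Session

with Session() as s:
    # First request negotiates over TCP (HTTP/1.1 or HTTP/2). The server
    # answers with an Alt-Svc: h3=":443" header, which the session caches.
    print(s.get("https://pie.dev/get"))   # e.g. <Response HTTP/2 [200]>
    # The cached Alt-Svc hint lets this second request switch to QUIC.
    print(s.get("https://pie.dev/get"))   # e.g. <Response HTTP/3 [200]>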
Not sure; your assessment that the quic config file is not written still seems correct.
Even when recreating the full venv and trying the requests multiple times, that file is not written, and of course the requests stay at HTTP/2.
Details
❯ cd Code/Projekte/httpie/
~/C/P/httpie 🐍 🌱 feature-tryout-niquests
❯ vf tmp
Creating tempenv-51b5170562de8 via ~/.local/pipx/venvs/virtualfish/bin/python …
~/C/P/httpie 🐍 tempenv-51b5170562de8 🌱 feature-tryout-niquests 3s
❯ git pull
Already up to date.
~/C/P/httpie 🐍 tempenv-51b5170562de8 🌱 feature-tryout-niquests 2s
❯ git remote -v
origin git@github.com:Ousret/httpie.git (fetch)
origin git@github.com:Ousret/httpie.git (push)
~/C/P/httpie 🐍 tempenv-51b5170562de8 🌱 feature-tryout-niquests
❯ pip install --editable .
Obtaining file:///Users/dwt/Code/Projekte/httpie
Installing build dependencies ... done
Checking if build backend supports build_editable ... done
Getting requirements to build editable ... done
Preparing editable metadata (pyproject.toml) ... done
Requirement already satisfied: pip in /Users/dwt/.virtualenvs/tempenv-51b5170562de8/lib/python3.12/site-packages (from httpie==4.0.0b1) (23.3.2)
Collecting charset-normalizer>=2.0.0 (from httpie==4.0.0b1)
Using cached charset_normalizer-3.3.2-cp312-cp312-macosx_11_0_arm64.whl.metadata (33 kB)
Collecting defusedxml>=0.6.0 (from httpie==4.0.0b1)
Using cached defusedxml-0.7.1-py2.py3-none-any.whl (25 kB)
Collecting niquests<4,>=3.4.0 (from niquests[socks]<4,>=3.4.0->httpie==4.0.0b1)
Downloading niquests-3.4.1-py3-none-any.whl.metadata (6.4 kB)
Collecting Pygments>=2.5.2 (from httpie==4.0.0b1)
Using cached pygments-2.17.2-py3-none-any.whl.metadata (2.6 kB)
Collecting setuptools (from httpie==4.0.0b1)
Using cached setuptools-69.0.3-py3-none-any.whl.metadata (6.3 kB)
Collecting rich>=9.10.0 (from httpie==4.0.0b1)
Using cached rich-13.7.0-py3-none-any.whl.metadata (18 kB)
Collecting idna<4,>=2.5 (from niquests<4,>=3.4.0->niquests[socks]<4,>=3.4.0->httpie==4.0.0b1)
Using cached idna-3.6-py3-none-any.whl.metadata (9.9 kB)
Collecting kiss-headers<4,>=2 (from niquests<4,>=3.4.0->niquests[socks]<4,>=3.4.0->httpie==4.0.0b1)
Using cached kiss_headers-2.4.3-py3-none-any.whl.metadata (13 kB)
Collecting urllib3-future<3,>=2.4.901 (from niquests<4,>=3.4.0->niquests[socks]<4,>=3.4.0->httpie==4.0.0b1)
Downloading urllib3_future-2.4.903-py3-none-any.whl.metadata (6.1 kB)
Collecting wassima<2,>=1.0.1 (from niquests<4,>=3.4.0->niquests[socks]<4,>=3.4.0->httpie==4.0.0b1)
Using cached wassima-1.0.3-cp37-abi3-macosx_11_0_arm64.whl.metadata (3.8 kB)
Collecting markdown-it-py>=2.2.0 (from rich>=9.10.0->httpie==4.0.0b1)
Using cached markdown_it_py-3.0.0-py3-none-any.whl.metadata (6.9 kB)
Collecting mdurl~=0.1 (from markdown-it-py>=2.2.0->rich>=9.10.0->httpie==4.0.0b1)
Using cached mdurl-0.1.2-py3-none-any.whl (10.0 kB)
Collecting h11<1.0.0,>=0.11.0 (from urllib3-future<3,>=2.4.901->niquests<4,>=3.4.0->niquests[socks]<4,>=3.4.0->httpie==4.0.0b1)
Using cached h11-0.14.0-py3-none-any.whl (58 kB)
Collecting h2<5.0.0,>=4.0.0 (from urllib3-future<3,>=2.4.901->niquests<4,>=3.4.0->niquests[socks]<4,>=3.4.0->httpie==4.0.0b1)
Using cached h2-4.1.0-py3-none-any.whl (57 kB)
Collecting qh3<1.0.0,>=0.14.0 (from urllib3-future<3,>=2.4.901->niquests<4,>=3.4.0->niquests[socks]<4,>=3.4.0->httpie==4.0.0b1)
Using cached qh3-0.14.0-cp37-abi3-macosx_11_0_arm64.whl.metadata (4.8 kB)
Collecting python-socks<3.0,>=2.0 (from urllib3-future[socks]<3,>=2.4.901; extra == "socks"->niquests[socks]<4,>=3.4.0->httpie==4.0.0b1)
Using cached python_socks-2.4.4-py3-none-any.whl.metadata (7.1 kB)
Collecting hyperframe<7,>=6.0 (from h2<5.0.0,>=4.0.0->urllib3-future<3,>=2.4.901->niquests<4,>=3.4.0->niquests[socks]<4,>=3.4.0->httpie==4.0.0b1)
Using cached hyperframe-6.0.1-py3-none-any.whl (12 kB)
Collecting hpack<5,>=4.0 (from h2<5.0.0,>=4.0.0->urllib3-future<3,>=2.4.901->niquests<4,>=3.4.0->niquests[socks]<4,>=3.4.0->httpie==4.0.0b1)
Using cached hpack-4.0.0-py3-none-any.whl (32 kB)
Collecting cryptography<42.0.0,>=41.0.0 (from qh3<1.0.0,>=0.14.0->urllib3-future<3,>=2.4.901->niquests<4,>=3.4.0->niquests[socks]<4,>=3.4.0->httpie==4.0.0b1)
Using cached cryptography-41.0.7-cp37-abi3-macosx_10_12_universal2.whl.metadata (5.2 kB)
Collecting cffi>=1.12 (from cryptography<42.0.0,>=41.0.0->qh3<1.0.0,>=0.14.0->urllib3-future<3,>=2.4.901->niquests<4,>=3.4.0->niquests[socks]<4,>=3.4.0->httpie==4.0.0b1)
Using cached cffi-1.16.0-cp312-cp312-macosx_11_0_arm64.whl.metadata (1.5 kB)
Collecting pycparser (from cffi>=1.12->cryptography<42.0.0,>=41.0.0->qh3<1.0.0,>=0.14.0->urllib3-future<3,>=2.4.901->niquests<4,>=3.4.0->niquests[socks]<4,>=3.4.0->httpie==4.0.0b1)
Using cached pycparser-2.21-py2.py3-none-any.whl (118 kB)
Using cached charset_normalizer-3.3.2-cp312-cp312-macosx_11_0_arm64.whl (119 kB)
Downloading niquests-3.4.1-py3-none-any.whl (94 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 94.5/94.5 kB 1.2 MB/s eta 0:00:00
Using cached pygments-2.17.2-py3-none-any.whl (1.2 MB)
Using cached rich-13.7.0-py3-none-any.whl (240 kB)
Using cached setuptools-69.0.3-py3-none-any.whl (819 kB)
Using cached idna-3.6-py3-none-any.whl (61 kB)
Using cached kiss_headers-2.4.3-py3-none-any.whl (43 kB)
Using cached markdown_it_py-3.0.0-py3-none-any.whl (87 kB)
Downloading urllib3_future-2.4.903-py3-none-any.whl (337 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 338.0/338.0 kB 6.1 MB/s eta 0:00:00
Using cached wassima-1.0.3-cp37-abi3-macosx_11_0_arm64.whl (261 kB)
Using cached python_socks-2.4.4-py3-none-any.whl (52 kB)
Using cached qh3-0.14.0-cp37-abi3-macosx_11_0_arm64.whl (263 kB)
Using cached cryptography-41.0.7-cp37-abi3-macosx_10_12_universal2.whl (5.3 MB)
Using cached cffi-1.16.0-cp312-cp312-macosx_11_0_arm64.whl (177 kB)
Building wheels for collected packages: httpie
Building editable for httpie (pyproject.toml) ... done
Created wheel for httpie: filename=httpie-4.0.0b1-0.editable-py3-none-any.whl size=7312 sha256=d77ed3eca64b9478e633b272b9e9142a5ec2ba03fab00b6a83a31c456fef829b
Stored in directory: /private/var/folders/hc/ss_swh2s7rscgx4402mjxtv00000gn/T/pip-ephem-wheel-cache-nppdx0j0/wheels/60/bd/76/3d063ddffe013d962cd599bbfd12c29d9a91313e761fc7f76e
Successfully built httpie
Installing collected packages: python-socks, wassima, setuptools, Pygments, pycparser, mdurl, kiss-headers, idna, hyperframe, hpack, h11, defusedxml, charset-normalizer, markdown-it-py, h2, cffi, rich, cryptography, qh3, urllib3-future, niquests, httpie
Successfully installed Pygments-2.17.2 cffi-1.16.0 charset-normalizer-3.3.2 cryptography-41.0.7 defusedxml-0.7.1 h11-0.14.0 h2-4.1.0 hpack-4.0.0 httpie-4.0.0b1 hyperframe-6.0.1 idna-3.6 kiss-headers-2.4.3 markdown-it-py-3.0.0 mdurl-0.1.2 niquests-3.4.1 pycparser-2.21 python-socks-2.4.4 qh3-0.14.0 rich-13.7.0 setuptools-69.0.3 urllib3-future-2.4.903 wassima-1.0.3
~/C/P/httpie 🐍 tempenv-51b5170562de8 🌱 feature-tryout-niquests 14s
❯ pip list
Package Version Editable project location
------------------ ------- -------------------------------
cffi 1.16.0
charset-normalizer 3.3.2
cryptography 41.0.7
defusedxml 0.7.1
h11 0.14.0
h2 4.1.0
hpack 4.0.0
httpie 4.0.0b1 /Users/dwt/Code/Projekte/httpie
hyperframe 6.0.1
idna 3.6
kiss-headers 2.4.3
markdown-it-py 3.0.0
mdurl 0.1.2
niquests 3.4.1
pip 23.3.2
pycparser 2.21
Pygments 2.17.2
python-socks 2.4.4
qh3 0.14.0
rich 13.7.0
setuptools 69.0.3
urllib3-future 2.4.903
wassima 1.0.3
~/C/P/httpie 🐍 tempenv-51b5170562de8 🌱 feature-tryout-niquests
❯ which https
/Users/dwt/.virtualenvs/tempenv-51b5170562de8/bin/https
~/C/P/httpie 🐍 tempenv-51b5170562de8 🌱 feature-tryout-niquests
❯ https --http3 --resolver "doh+google://" pie.dev/get
HTTP/2 200 OK
Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: *
Alt-Svc: h3=":443"; ma=86400
Cf-Cache-Status: DYNAMIC
Cf-Ray: 842e2f7a4e881c82-AMS
Content-Encoding: gzip
Content-Type: application/json
Date: Tue, 09 Jan 2024 16:51:31 GMT
Nel: {"success_fraction":0,"report_to":"cf-nel","max_age":604800}
Report-To: {"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v3?s=yU2NQKMxVXTqBiwWJaDtlkEUH3k5p4Z6g7%2BhFgmAFrgFXzR1KMqb98mx7Mu5x4LHu3NPz8Wm7NxYanPM4o18syEXQWm9ug2%2BOo6pOjFaMt%2BHmvRAunSgO%2FND"}],"group":"cf-nel","max_age":604800}
Server: cloudflare
{
"args": {},
"headers": {
"Accept": "*/*",
"Accept-Encoding": "gzip",
"Cdn-Loop": "cloudflare",
"Cf-Connecting-Ip": "<IP_ADDRESS>",
"Cf-Ipcountry": "DE",
"Cf-Ray": "842e2f7a4e881c82-FRA",
"Cf-Visitor": "{\"scheme\":\"https\"}",
"Connection": "Keep-Alive",
"Host": "pie.dev",
"User-Agent": "HTTPie/4.0.0.b1"
},
"origin": "<IP_ADDRESS>",
"url": "https://pie.dev/get"
}
~/C/P/httpie 🐍 tempenv-51b5170562de8 🌱 feature-tryout-niquests 3s
❯ https --http3 pie.dev/get
HTTP/2 200 OK
Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: *
Alt-Svc: h3=":443"; ma=86400
Cf-Cache-Status: DYNAMIC
Cf-Ray: 842e2feaadf365df-FRA
Content-Encoding: gzip
Content-Type: application/json
Date: Tue, 09 Jan 2024 16:51:49 GMT
Nel: {"success_fraction":0,"report_to":"cf-nel","max_age":604800}
Report-To: {"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v3?s=oymUHuIvuaPYe9IGgqJnfHIrH6FGoLgcwv%2Bt3Mrkv%2F0%2F4ejdbfgXqUEDsCRESYYv6MSRxbvr84jZnsPyVc5K12ZiaZB%2FCssOdpPV08UNm%2BwNqpAlr7Z1PXL7"}],"group":"cf-nel","max_age":604800}
Server: cloudflare
{
"args": {},
"headers": {
"Accept": "*/*",
"Accept-Encoding": "gzip",
"Cdn-Loop": "cloudflare",
"Cf-Connecting-Ip": "<IP_ADDRESS>",
"Cf-Ipcountry": "DE",
"Cf-Ray": "842e2feaadf365df-FRA",
"Cf-Visitor": "{\"scheme\":\"https\"}",
"Connection": "Keep-Alive",
"Host": "pie.dev",
"User-Agent": "HTTPie/4.0.0.b1"
},
"origin": "<IP_ADDRESS>",
"url": "https://pie.dev/get"
}
~/C/P/httpie 🐍 tempenv-51b5170562de8 🌱 feature-tryout-niquests
❯ ls ~/.config/httpie/
version_info.json
OK. I could collect some data using https://github.com/Ousret/httpie-test/actions/runs/7470350480/job/20328900920
I should be able to propose a fix for this, hopefully soon. It has to do with the args parser; something seems off.
Oh great, it is reproducible! Please ping me if I can help.
Also: sorry if it might take me a few days to respond.
Wasn't easy, but ultimately found why.
It depended on the system OpenSSL default cipher list, which by accident excluded the QUIC ciphers and therefore disabled HTTP/3...
It is fixed. You may try again.
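For context, a quick hedged way to inspect which cipher suites the local Python/OpenSSL build enables by default (HTTP/3 rides on TLS 1.3, so the TLS_AES_* and TLS_CHACHA20_* suites must be present):
import ssl

# Print the default cipher list exposed by this OpenSSL build. If the
# TLS 1.3 suites (TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384,
# TLS_CHACHA20_POLY1305_SHA256) are missing, QUIC cannot be negotiated.
ctx = ssl.create_default_context()
for cipher in ctx.get_ciphers():
    print(cipher["name"], cipher["protocol"])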
yay!
❯ https --http3 pie.dev/get
HTTP/3 200 OK
Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: *
Alt-Svc: h3=":443"; ma=86400
Cf-Cache-Status: DYNAMIC
Cf-Ray: 84678dcdbdec0bdc-AMS
Content-Encoding: gzip
Content-Type: application/json
Date: Tue, 16 Jan 2024 15:57:23 GMT
Nel: {"success_fraction":0,"report_to":"cf-nel","max_age":604800}
Report-To: {"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v3?s=g7XjPz9ElzpYpSXcIk4CyBHzEoVn4VOu80R65aLy6UJ9bLs3H9zT%2FK%2BNiuYRKsqn%2BWQgG6VHnUtChRB9JzUL39l9Tk11yplURAJR3pqdteMkkiytQEEvPDR6"}],"group":"cf-nel","max_age":604800}
Server: cloudflare
{
"args": {},
"headers": {
"Accept": "*/*",
"Accept-Encoding": "gzip",
"Cdn-Loop": "cloudflare",
"Cf-Connecting-Ip": "<IP_ADDRESS>",
"Cf-Ipcountry": "DE",
"Cf-Ray": "84678dcdbdec0bdc-FRA",
"Cf-Visitor": "{\"scheme\":\"https\"}",
"Connection": "Keep-Alive",
"Host": "pie.dev",
"User-Agent": "HTTPie/4.0.0.b1"
},
"origin": "<IP_ADDRESS>",
"url": "https://pie.dev/get"
}
Indeed this fixes it for me. :)
|
2025-04-01T06:39:10.719545
| 2019-11-16T00:39:37
|
523761042
|
{
"authors": [
"charmparticle",
"taers232c"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7216",
"repo": "jay0lee/GAM",
"url": "https://github.com/jay0lee/GAM/issues/1045"
}
|
gharchive/issue
|
gam org operations fail when org names contain spaces
I have upgraded to the latest GAM release from https://git.io/gamreleases and I still have this issue.
I am typing the command as described in the GAM Wiki at https://github.com/jay0lee/gam/wiki
Full steps to reproduce the issue:
create an OU path that includes spaces in the OU and the sub OU
create a user
try to move the user to the sub OU you created, quoting the name of the OU to avoid spaces, or using the id, to circumvent the whole spaces issue altogether
example:
gam update org id:someorgid users someuser
reports back:
ERROR: 400: Invalid Input: INVALID_OU_ID - invalid
trying gam update user someuser org "Some Orgname With Spaces/Suborg With Spaces/SubSubOrg With Spaces"
reports back:
ERROR: Spaces/Suborg is not a valid argument for "gam update user"
Expected outcome (what are you trying to do?):
The user is successfully moved to the OU
Actual outcome (what errors or bad behavior do you see instead?):
error message resulting from spaces in the name
It's working for me.
Ross
$ gam version
GAM 4.96 - https://git.io/gam
Jay Lee<EMAIL_ADDRESS>Python 3.8.0 64-bit final
google-api-python-client 1.7.11
MacOS Sierra 10.12.6 x86_64
Path: /Users/admin/Documents/GoogleApps/GAM
$ gam info ou "/Test Space/Sub Space/Sub Sub Space"
name: Sub Sub Space
description: Sub Sub Space
orgUnitPath: /Test Space/Sub Space/Sub Sub Space
orgUnitId: id:03ph8a2z24acgk2
parentOrgUnitPath: /Test Space/Sub Space
parentOrgUnitId: id:03ph8a2z3h9j9wu
Got 0 Users: -
Users:
$ gam update user testuser7 org "/Test Space/Sub Space/Sub Sub Space"
updating user<EMAIL_ADDRESS>
$ gam info user testuser7 nogroups nolicenses
User<EMAIL_ADDRESS>First Name: Test
Last Name: User7
Is a Super Admin: False
Is Delegated Admin: False
2-step enrolled: False
2-step enforced: False
Has Agreed to Terms: False
IP Whitelisted: False
Account Suspended: False
Is Archived: False
Must Change Password: False
Google Unique ID:<PHONE_NUMBER>03179281666
Customer ID: C012345678
Mailbox is setup: True
Included in GAL: True
Creation Time: 2019-11-09T00:35:09.000Z
Last login time: Never
Google Org Unit Path: /Test Space/Sub Space/Sub Sub Space
$ gam info ou "/Test Space/Sub Space/Sub Sub Space"
name: Sub Sub Space
description: Sub Sub Space
orgUnitPath: /Test Space/Sub Space/Sub Sub Space
orgUnitId: id:03ph8a2z24acgk2
parentOrgUnitPath: /Test Space/Sub Space
parentOrgUnitId: id:03ph8a2z3h9j9wu
Got 1 Users<EMAIL_ADDRESS>-<EMAIL_ADDRESS>Users:
<EMAIL_ADDRESS>
This is not supported by Gam
$ gam update user testuser7 org id:03ph8a2z24acgk2
ERROR: id:03ph8a2z24acgk2 is not valid in this context
Found the problem, it had to do with how I was executing gam:
#!/bin/bash
pushd ~/Downloads/gam
./gam
popd
Just calling gam directly avoids the issue.
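For reference, a fixed version of that wrapper script: the original never forwarded "$@", so every invocation ran a bare gam with no arguments at all.
#!/bin/bash
# Forward all command-line arguments to gam; the broken wrapper called
# ./gam with nothing, which is why quoted OU paths never reached it.
pushd ~/Downloads/gam > /dev/null || exit 1
./gam "$@"
status=$?
popd > /dev/null
exit $status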
|
2025-04-01T06:39:10.740263
| 2021-10-20T12:16:36
|
1031339662
|
{
"authors": [
"bootjp",
"jaytaylor"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7218",
"repo": "jaytaylor/go-hostsfile",
"url": "https://github.com/jaytaylor/go-hostsfile/pull/6"
}
|
gharchive/pull-request
|
support in-line comments #5, but there are performance concerns
Thank you for a very good library.
When I looked at the issues, I saw that in-line comments were not yet supported, so I sent a pull request.
However, there are performance concerns with this implementation.
It would be a bit more complicated, but should I split the processing between the cases with an in-line comment and without one?
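A minimal hedged sketch, not the library's actual code, of handling the in-line comment with a single byte scan; lines without a comment pay only the cost of one IndexByte call, so the overhead should stay small:
package main

import "strings"

// stripInlineComment drops everything from the first '#' onward and
// trims surrounding whitespace, leaving only the host fields.
func stripInlineComment(line string) string {
	if i := strings.IndexByte(line, '#'); i >= 0 {
		line = line[:i]
	}
	return strings.TrimSpace(line)
}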
Hi Bootjp,
Thanks for your kind words :)
Can you elaborate a bit on the performance concerns? How large of hosts files are you trying to consume with this?
Not sure if I'm missing something, because this PR seems reasonable AFAICT.
@bootjp LGTM, thank you!
|
2025-04-01T06:39:10.751567
| 2015-07-21T04:48:03
|
96226068
|
{
"authors": [
"johanhaleby"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7219",
"repo": "jayway/rest-assured",
"url": "https://github.com/jayway/rest-assured/issues/336"
}
|
gharchive/issue
|
Better support for self signed certificates
From<EMAIL_ADDRESS>on June 29, 2012 14:05:03
What steps will reproduce the problem?
1. Test an SSL connection with a self-signed certificate (no hostname specified).
What is the expected output? What do you see instead?
I would like to be able to do this but am prevented because I cannot specify the default hostname verifier.
What version of the product are you using? On what operating system?
1.6.2 (Windows)
Please provide any additional information below.
I would like something like this to work. Many thanks!
/*
* The following code basically nullifies all SSL checks - this is not recommended to be copied without thought
* of the consequences!! Sadly rest assured ignores all this so we may well have to ditch rest assured
*/
TrustManager[] certs = new TrustManager[]
{ new X509TrustManager()
{
@Override
public X509Certificate[] getAcceptedIssuers()
{
return null;
}
@Override
public void checkServerTrusted(X509Certificate[] chain, String authType) throws CertificateException
{
}
@Override
public void checkClientTrusted(X509Certificate[] chain, String authType) throws CertificateException
{
}
} };
SSLContext ctx = null;
try
{
ctx = SSLContext.getInstance("TLS");
ctx.init(null, certs, new SecureRandom());
}
catch (java.security.GeneralSecurityException ex)
{
}
if (ctx != null)
{
HttpsURLConnection.setDefaultSSLSocketFactory(ctx.getSocketFactory());
}
HttpsURLConnection.setDefaultHostnameVerifier(new HostnameVerifier()
{
public boolean verify(String hostname, SSLSession session)
{
return true;
}
});
Original issue: http://code.google.com/p/rest-assured/issues/detail?id=182
From<EMAIL_ADDRESS>on June 29, 2012 05:05:33
Actually this should be an enhancement - sorry
From<EMAIL_ADDRESS>on July 13, 2012 23:54:26
And you just execute this before you make a request with HTTP Client and it should work?
Status: Accepted
Labels: -Type-Defect Type-Enhancement
From<EMAIL_ADDRESS>on October 22, 2012 06:17:18
I just posted to the forum a HttpClient+Jetty sample that might help Johan implement this. Included in that post is a suggestion for how I think the API should work. The forum post hasn't shown up yet so I can't link it here. I have, however, attached the sample.
Attachment: self-signed-certs.zip
From<EMAIL_ADDRESS>on December 06, 2013 06:20:10
Also see https://github.com/jayway/rest-assured/pull/22
From<EMAIL_ADDRESS>on December 07, 2013 09:42:25
Great! I've now merged the pull request.
From<EMAIL_ADDRESS>on December 07, 2013 11:03:35
I've actually modified the API (it's not backward compatible). You can now specify the host name verification check using "CertificateAuthSettings" that you may pass in to the "certificate" method. Please try this out and tell me if it works and if you like the API. Depend on version 2.0.2-SNAPSHOT after having added the following Maven repo:
<repositories>
    <repository>
        <id>sonatype</id>
        <url>https://oss.sonatype.org/content/repositories/snapshots/</url>
        <snapshots />
    </repository>
</repositories>
Status: Fixed
|
2025-04-01T06:39:10.755849
| 2020-11-15T18:35:21
|
743316526
|
{
"authors": [
"iMerica",
"jerinpetergeorge",
"rphlo"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7220",
"repo": "jazzband/dj-rest-auth",
"url": "https://github.com/jazzband/dj-rest-auth/issues/168"
}
|
gharchive/issue
|
Custom username validator is not respected
A custom username validator defined in the allauth settings is not respected on PUT/PATCH requests to UserDetailsView, although it is respected for registration.
This seems valid to me, as we apply the same validation during the registration process.
I think I can give a hand on this, shall I? @iMerica
👍
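A hedged sketch of one way to close the gap: re-run allauth's username cleaning in a custom details serializer so PUT/PATCH applies the same rules as signup (the class name is illustrative; it would be wired in via the USER_DETAILS_SERIALIZER setting):
from allauth.account.adapter import get_adapter
from dj_rest_auth.serializers import UserDetailsSerializer


class ValidatedUserDetailsSerializer(UserDetailsSerializer):
    def validate_username(self, username):
        # clean_username applies the adapter's rules, including any custom
        # ACCOUNT_USERNAME_VALIDATORS, exactly as registration does.
        return get_adapter().clean_username(username)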
|