Dataset columns (dtype and observed range of values/lengths):

| column | dtype | observed range / values |
|---|---|---|
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | string | 1 class |
| created_at | string | lengths 19 to 19 |
| repo | string | lengths 7 to 112 |
| repo_url | string | lengths 36 to 141 |
| action | string | 3 classes |
| title | string | lengths 1 to 744 |
| labels | string | lengths 4 to 574 |
| body | string | lengths 9 to 211k |
| index | string | 10 classes |
| text_combine | string | lengths 96 to 211k |
| label | string | 2 classes |
| text | string | lengths 96 to 188k |
| binary_label | int64 | 0 to 1 |
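The last two columns appear redundant by construction: judging from the rows below, `binary_label` is 1 when `label` is `process` and 0 when it is `non_process`. A minimal sketch of that inferred mapping:

```python
def to_binary_label(label):
    # Mapping inferred from the rows shown below: "process" -> 1, "non_process" -> 0.
    return 1 if label == "process" else 0

print([to_binary_label(x) for x in ["process", "non_process", "process"]])  # [1, 0, 1]
```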
22,683
| 31,987,415,382
|
IssuesEvent
|
2023-09-21 01:16:45
|
open-telemetry/opentelemetry-collector-contrib
|
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
|
closed
|
[processor/transform] unable to parse OTTL statement
|
bug processor/transform needs triage
|
### Component(s)
processor/transform
### Description
I'm trying to follow the [example from here](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/prometheusexporter#setting-resource-attributes-as-metric-labels) to add metric labels for Prometheus. It suggests using the transform processor like so in order to copy the most common resource attributes into metric labels:
```yaml
processor:
transform:
metric_statements:
- context: metric
statements:
- set(attributes["namespace"], resource.attributes["k8s_namespace_name"])
- set(attributes["container"], resource.attributes["k8s.container.name"])
- set(attributes["pod"], resource.attributes["k8s.pod.name"])
- set(attributes["cluster"], resource.attributes["k8s.cluster.name"])
```
Given that example, I've simplified my collector config down to this version:
```yaml
processors:
transform:
metric_statements:
- context: metric
statements:
- set(attributes["k8s_namespace_name"], resource.attributes["k8s_namespace_name"])
resource:
attributes:
- key: environment_name
value: dev
action: insert
exporters:
otlphttp:
endpoint: <Omitted>
service:
extensions: [health_check]
pipelines:
metrics:
receivers: [otlp]
processors: [resource, transform, batch]
exporters: [otlphttp]
```
Unfortunately, with this configuration, the collector fails to start and complains with the following message:
> Error: invalid configuration: processors::transform: unable to parse OTTL statement "set(attributes[\"k8s_namespace_name\"], resource.attributes[\"k8s_namespace_name\"])": error while parsing arguments for call to "set": invalid argument at position 0: invalid metric path expression [{attributes [{0xc002ad2250 <nil>}]}]
> 2023/09/20 00:43:53 collector server run finished with error: invalid configuration: processors::transform: unable to parse OTTL statement "set(attributes[\"k8s_namespace_name\"], resource.attributes[\"k8s_namespace_name\"])": error while parsing arguments for call to "set": invalid argument at position 0: invalid metric path expression [{attributes [{0xc002ad2250 <nil>}]}]
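A likely explanation, not confirmed in this report: in the transform processor, metrics themselves carry no `attributes` field; attributes live on the individual data points, so the same statement would be expected to parse under the `datapoint` context instead of `metric`. A sketch of that variant, with the reporter's config otherwise unchanged:

```yaml
processors:
  transform:
    metric_statements:
      - context: datapoint
        statements:
          - set(attributes["k8s_namespace_name"], resource.attributes["k8s_namespace_name"])
```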
### Collector version
0.85.0
### Environment
docker image: otel/opentelemetry-collector-contrib:0.85.0
|
1.0
|
[processor/transform] unable to parse OTTL statement - ### Component(s)
processor/transform
### Description
I'm trying to follow the [example from here](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/prometheusexporter#setting-resource-attributes-as-metric-labels) to add metric labels for Prometheus. It suggests using the transform processor like so in order to copy the most common resource attributes into metric labels:
```yaml
processor:
transform:
metric_statements:
- context: metric
statements:
- set(attributes["namespace"], resource.attributes["k8s_namespace_name"])
- set(attributes["container"], resource.attributes["k8s.container.name"])
- set(attributes["pod"], resource.attributes["k8s.pod.name"])
- set(attributes["cluster"], resource.attributes["k8s.cluster.name"])
```
Given that example, I've simplified my collector config down to this version:
```yaml
processors:
transform:
metric_statements:
- context: metric
statements:
- set(attributes["k8s_namespace_name"], resource.attributes["k8s_namespace_name"])
resource:
attributes:
- key: environment_name
value: dev
action: insert
exporters:
otlphttp:
endpoint: <Omitted>
service:
extensions: [health_check]
pipelines:
metrics:
receivers: [otlp]
processors: [resource, transform, batch]
exporters: [otlphttp]
```
Unfortunately, with this configuration, the collector fails to start and complains with the following message:
> Error: invalid configuration: processors::transform: unable to parse OTTL statement "set(attributes[\"k8s_namespace_name\"], resource.attributes[\"k8s_namespace_name\"])": error while parsing arguments for call to "set": invalid argument at position 0: invalid metric path expression [{attributes [{0xc002ad2250 <nil>}]}]
> 2023/09/20 00:43:53 collector server run finished with error: invalid configuration: processors::transform: unable to parse OTTL statement "set(attributes[\"k8s_namespace_name\"], resource.attributes[\"k8s_namespace_name\"])": error while parsing arguments for call to "set": invalid argument at position 0: invalid metric path expression [{attributes [{0xc002ad2250 <nil>}]}]
### Collector version
0.85.0
### Environment
docker image: otel/opentelemetry-collector-contrib:0.85.0
|
process
|
unable to parse ottl statement component s processor transform description i m trying to follow the to add metric labels for prometheus it suggests using the transform processor like so in order to copy the most common resource attributes into metric labels yaml processor transform metric statements context metric statements set attributes resource attributes set attributes resource attributes set attributes resource attributes set attributes resource attributes given that example i ve simplified my collector config down to this simplified version yaml processors transform metric statements context metric statements set attributes resource attributes resource attributes key environment name value dev action insert exporters otlphttp endpoint service extensions pipelines metrics receivers processors exporters unfortunately with this configuration the collector fails to start and complains with the following message error invalid configuration processors transform unable to parse ottl statement set attributes resource attributes error while parsing arguments for call to set invalid argument at position invalid metric path expression collector server run finished with error invalid configuration processors transform unable to parse ottl statement set attributes resource attributes error while parsing arguments for call to set invalid argument at position invalid metric path expression collector version environment docker image otel opentelemetry collector contrib
| 1
|
10,892
| 13,671,858,856
|
IssuesEvent
|
2020-09-29 07:40:19
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
inverse parent relation in GO:0140418
|
multi-species process
|
It seems something went wrong with the creation of GO:0140418 in #18689 because, as you can see,
```
id: GO:0140418
name: effector-mediated modulation of host process by symbiont
namespace: biological_process
def: "A process mediated by a molecule secreted by a symbiont that results in the modulation (either activation or suppression) of a host structure or process. The host is defined as the larger of the organisms involved in a symbiotic interaction." [PMID:21467214]
synonym: "effector mediated modulation of host process by symbiont" EXACT []
synonym: "effector triggered modulation of host process by symbiont" EXACT []
synonym: "effector-dependent modulation of host process by symbiont" EXACT []
is_a: GO:0030682 ! mitigation of host defenses by symbiont
```
GO:0140418 is defined as is_a GO:0030682 while it should be vice versa. Please fix.
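Per the reporter's description, the intended direction would put the `is_a` on the other term; a sketch of that stanza (not verified against the released ontology):

```
[Term]
id: GO:0030682
name: mitigation of host defenses by symbiont
is_a: GO:0140418 ! effector-mediated modulation of host process by symbiont
```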
|
1.0
|
inverse parent relation in GO:0140418 - It seems something went wrong with the creation of GO:0140418 in #18689 because, as you can see,
```
id: GO:0140418
name: effector-mediated modulation of host process by symbiont
namespace: biological_process
def: "A process mediated by a molecule secreted by a symbiont that results in the modulation (either activation or suppression) of a host structure or process. The host is defined as the larger of the organisms involved in a symbiotic interaction." [PMID:21467214]
synonym: "effector mediated modulation of host process by symbiont" EXACT []
synonym: "effector triggered modulation of host process by symbiont" EXACT []
synonym: "effector-dependent modulation of host process by symbiont" EXACT []
is_a: GO:0030682 ! mitigation of host defenses by symbiont
```
GO:0140418 is defined as is_a GO:0030682 while it should be vice versa. Please fix.
|
process
|
inverse parent relation in go it seems something went wrong with the creation of go because as you can see id go name effector mediated modulation of host process by symbiont namespace biological process def a process mediated by a molecule secreted by a symbiont that results in th e modulation either activation or suppresion of a host structure or process t he host is defined as the larger of the organisms involved in a symbiotic intera ction synonym effector mediated modulation of host process by symbiont exact synonym effector triggered modulation of host process by symbiont exact synonym effector dependent modulation of host process by symbiont exact is a go mitigation of host defenses by symbiont go is defined as is a go while it should be vice versa please fix
| 1
|
14,013
| 16,816,536,775
|
IssuesEvent
|
2021-06-17 08:04:28
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[PM] Responsive issues > All the tables > UI issues
|
Bug P2 Participant manager Process: Fixed Process: Tested QA Process: Tested dev
|
Responsive issues > All the tables > UI issues
1. UI issue in table column labels
2. All the contents in the table should be aligned properly
|
3.0
|
[PM] Responsive issues > All the tables > UI issues - Responsive issues > All the tables > UI issues
1. UI issue in table column labels
2. All the contents in the table should be aligned properly
|
process
|
responsive issues all the tables ui issues responsive issues all the tables ui issues ui issue in table column labels all the contents in the table should be aligned properly
| 1
|
17,369
| 23,192,521,137
|
IssuesEvent
|
2022-08-01 13:47:46
|
kubevious/kubevious
|
https://api.github.com/repos/kubevious/kubevious
|
closed
|
Feature Proposal: 3rd Party API Gateway(Traefik Proxy) Validation Support
|
kind/new-capability logic-processing skill/nodejs skill/typescript skill/traefik skill/kong
|
# Scope
We want to make usage of 3rd party API Gateways easier and safer. Kubevious is already equipped with a [Gateway View](https://kubevious.io/docs/features/application-centric-ui/gateway-view) where Ingresses, Services, and Applications are correlated and presented in a Domain -> URL -> Ingress -> Service -> Service Port -> Container Port -> Application path. We want to extend Gateway support capability and add support to popular API Gateways such as Traefik, Kong, Istio, Ambassador, Skipper, etc.
Kubevious already validates Service selectors in Ingresses, and Pod and Port selectors in Services. We also want to validate 3rd party API Gateways to detect errors early and aid with troubleshooting. That could require the correlation of 3rd party CRDs and other runtime sources to build a clearer understanding of what's going on in the cluster, the application, and the API Gateway.
# Requirements
- API Gateway to be supported: Traefik Proxy
- Parse and integrate IngressRoute, TraefikService, Middleware, and TLSOptions
- Objects to be parsed under the [Gateway View](https://kubevious.io/docs/features/application-centric-ui/gateway-view)
## Validation Logic
- Detect missing Service and TraefikService
- Detect missing Ports
- Detect missing Middleware
- Detect missing TLSOptions
- Detect unused TraefikService
- Detect unused Middleware
- Detect unused TLSOptions
Validator documentation: https://kubevious.io/docs/built-in-validators/traefik-proxy/
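To illustrate the kind of misconfiguration the validators above would catch, here is a hedged sketch of a Traefik IngressRoute referencing a Service that may not exist (resource names are hypothetical; the apiVersion shown is the Traefik v2-era CRD group):

```yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: demo-route
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`demo.example.com`)
      kind: Rule
      services:
        - name: demo-backend   # "Detect missing Service" would flag this if no such Service exists
          port: 80
```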
# DRI
@rubenhak
# Current State
Traefik Proxy native support is already available in version **1.0.7**.
# Progress
- [x] ✅ Draft idea description
- [x] ✅ Gather initial interest from the community.
- [x] ✅ Define high-level requirements
- [x] ✅ Form a working group and elect a DRI
- [x] ✅ Clarify implementation specifics
- [x] ✅ Fun part - coding
- [x] ✅ Beta
- [x] ✅ Released
# Appendixes
## Kubevious Gateway View
A glimpse of what Kubevious does for K8s Ingresses. We want to do the same (and more) for 3rd party API Gateways.
[](https://kubevious.io/docs/features/application-centric-ui/gateway-view)
### Legend
✅ - Complete
👉 - Current / active stage
|
1.0
|
Feature Proposal: 3rd Party API Gateway(Traefik Proxy) Validation Support - # Scope
We want to make usage of 3rd party API Gateways easier and safer. Kubevious is already equipped with a [Gateway View](https://kubevious.io/docs/features/application-centric-ui/gateway-view) where Ingresses, Services, and Applications are correlated and presented in a Domain -> URL -> Ingress -> Service -> Service Port -> Container Port -> Application path. We want to extend Gateway support capability and add support to popular API Gateways such as Traefik, Kong, Istio, Ambassador, Skipper, etc.
Kubevious already validates Service selectors in Ingresses, and Pod and Port selectors in Services. We also want to validate 3rd party API Gateways to detect errors early and aid with troubleshooting. That could require the correlation of 3rd party CRDs and other runtime sources to build a clearer understanding of what's going on in the cluster, the application, and the API Gateway.
# Requirements
- API Gateway to be supported: Traefik Proxy
- Parse and integrate IngressRoute, TraefikService, Middleware, and TLSOptions
- Objects to be parsed under the [Gateway View](https://kubevious.io/docs/features/application-centric-ui/gateway-view)
## Validation Logic
- Detect missing Service and TraefikService
- Detect missing Ports
- Detect missing Middleware
- Detect missing TLSOptions
- Detect unused TraefikService
- Detect unused Middleware
- Detect unused TLSOptions
Validator documentation: https://kubevious.io/docs/built-in-validators/traefik-proxy/
# DRI
@rubenhak
# Current State
Traefik Proxy native support is already available in version **1.0.7**.
# Progress
- [x] ✅ Draft idea description
- [x] ✅ Gather initial interest from the community.
- [x] ✅ Define high-level requirements
- [x] ✅ Form a working group and elect a DRI
- [x] ✅ Clarify implementation specifics
- [x] ✅ Fun part - coding
- [x] ✅ Beta
- [x] ✅ Released
# Appendixes
## Kubevious Gateway View
A glimpse of what Kubevious does for K8s Ingresses. We want to do the same (and more) for 3rd party API Gateways.
[](https://kubevious.io/docs/features/application-centric-ui/gateway-view)
### Legend
✅ - Complete
👉 - Current / active stage
|
process
|
feature proposal party api gateway traefik proxy validation support scope we want to make usage of party api gateways easier and safer kubevious is already equipped with a where ingresses services and applications are correlated and presented in a domain url ingress service service port container port application path we want to extend gateway support capability and add support to popular api gateways such as traefik kong istio ambassador skipper etc kubevious already validates service selectors in ingresses and pod and port selectors in services we also want to validate party api gateways to detect errors early and aid with troubleshooting that could require the correlation of party crds and other runtime sources to build a more clear understanding of what s going on in the cluster application and the api gateway requirements api gateway to be supported traefik proxy parse and integrate ingressroute traefikservice middleware and tlsoptions objects to be parsed under the validation logic detect missing service and traefikservice detect missing ports detect missing middleware detect missing tlsoptions detect unused traefikservice detect unused middleware detect unused tlsoptions validator documentation dri rubenhak current state traefik proxy native support is already available in version progress ✅ draft idea description ✅ gather initial interest from the community ✅ define high level requirements ✅ form a working group and elect a dri ✅ clarify implementation specifics ✅ fun part coding ✅ beta ✅ released appendixes kubevious gateway view a glimpse of what kubevious does for ingresses we want to do the same and more for party api gateways legend ✅ complete 👉 current active stage
| 1
|
9,108
| 12,191,965,657
|
IssuesEvent
|
2020-04-29 12:08:12
|
CGAL/cgal
|
https://api.github.com/repos/CGAL/cgal
|
closed
|
Different behavior of the edge_aware_upsample_point_set?
|
Pkg::Point_set_processing_3 bug
|
@afabri @sgiraudot
I am trying to do up-sampling using the edge_aware_upsample_point_set function, taken from the point_set_processing_3 example. It works for the **before_upsample.xyz** data (which is available in the data folder), but when I use my own data it gives the following error:
> edge_aware_upsample_point_set_example.exe (process 17092) exited with code -1073741819.
```
//Algorithm parameters
const double sharpness_angle = 30; // control sharpness of the result.
const double edge_sensitivity = 0.2; // higher values will sample more points near the edges
const double neighbor_radius = 18; // initial size of neighborhood.
```
Is there anything special about the dataset that could cause this?
My data looks like this:
```
256075.14697445265 3935300.4999999995 -1.9903260605285873 -0.15059442500026382 -0.009264547714967571 -0.98855221779857272
256127.18092914129 3935279.9999999991 1.1806971722817656 0.36255488715508305 -0.28210066175799464 -0.88824161714911032
256124.96664647732 3935273.5000000005 0.018751732199395927 -0.14177362442736433 -0.13905823184364116 0.98008318400697436
256092.491746024 3935308.4999999995 -1.4489924924748039 -0.085803198655671153 0.024081843084161825 0.99602102183343766
256125.86596481747 3935280.5000000009 13.515503257111945 -0.13654115576027245 -0.24797889506659618 0.95909487559114781
256073.15088360262 3935255 -0.93490020406201102 0.59004993642832648 -0.095249131262525044 0.80172855475819382
256141.9517848785 3935322.2500000005 8.2318409757309219 -0.24795966835219574 -0.510449916403933 0.82338137318856486
256096.20889673862 3935319.7500000005 -0.21291360675685664 -0.099540727133083942 -0.024195625188008569 -0.9947392700419434
256107.09924066794 3935294.0000000005 -2.4377930441306859 -0.26315074779627329 -0.097325421959314151 0.95983303036241863
256130.16003200199 3935331.0002575037 14.609926967219579 -0.3588182051513355 -0.18823254749436522 0.9142308262773442
256116.12694613857 3935270.75 -2.0657511148141805 0.24053723579657579 0.070452458792483491 -0.9680796915778348
256089.84995883351 3935295.0000017555 -1.6188145313052476 0.0064104152035124699 0.04608216574353774 0.99891708393505008
256134.45646496481 3935324.2500000014 15.07897998471479 0.53174645878060467 -0.11064115355939698 -0.839645305300597
256095.27114842934 3935277.5 -2.8364161735055196 -0.074721280830373799 -0.096646577736570455 0.99251003481218136
256105.55928731122 3935323.0000000009 -1.1949637713004984 0.017197163654433804 0.097298114890588067 -0.99510669498349791
256130.41300641268 3935321.25 6.5035951636939613 -0.43255495210832851 0.15621449615365787 -0.88797142104800897
256137.72632800549 3935268.0000000005 5.7040995939553092 0.46653308381289188 -0.12300223968824016 -0.87590943067175042
256074.18452226146 3935306.9999999986 -1.7849232437298566 -0.32602824029779293 0.0081090078323643733 -0.94532525118093569
256117.4120398699 3935294.4999957578 -1.4283822200345702 -0.12123765837760146 -0.021571538178222535 0.99238908646344026
256093.04888751596 3935260.7500000005 -3.7532516406220982 -0.15371063599573145 0.38761148559555941 0.90891714507769694
256129.64393281593 3935324.75 12.980674692685975 -0.91363716556705743 0.026375393820841451 0.40567409123012632
256082.74928676969 3935279.5000000005 -2.3778079661987341 0.1486957529011404 0.017605361429175122 -0.98872626359276572
256067.26568958032 3935275.25 4.2080822055293892 -0.63171033161292922 0.23850790093484486 -0.73760154428060765
...
```
Environment:
Windows 10
cgal 5.0.2
Installed using vcpkg
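One hypothesis worth checking with data like this, not confirmed in the thread: the coordinates are large absolute values (UTM-like), which can cost double precision in geometry algorithms. Recentering the cloud around its centroid before upsampling, then shifting back afterwards, is a cheap test; a minimal Python sketch:

```python
def recenter(points):
    """Translate a point cloud so its centroid sits at the origin.

    Returns the shifted points and the centroid, so the result can be
    shifted back after processing.
    """
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    shifted = [(x - cx, y - cy, z - cz) for x, y, z in points]
    return shifted, (cx, cy, cz)

pts = [(256075.1, 3935300.5, -1.99), (256127.2, 3935280.0, 1.18)]
local, origin = recenter(pts)
print(origin)  # centroid of the two sample points
```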
|
1.0
|
Different behavior of the edge_aware_upsample_point_set? - @afabri @sgiraudot
I am trying to do up-sampling using the edge_aware_upsample_point_set function, taken from the point_set_processing_3 example. It works for the **before_upsample.xyz** data (which is available in the data folder), but when I use my own data it gives the following error:
> edge_aware_upsample_point_set_example.exe (process 17092) exited with code -1073741819.
```
//Algorithm parameters
const double sharpness_angle = 30; // control sharpness of the result.
const double edge_sensitivity = 0.2; // higher values will sample more points near the edges
const double neighbor_radius = 18; // initial size of neighborhood.
```
Is there anything special about the dataset that could cause this?
My data looks like this:
```
256075.14697445265 3935300.4999999995 -1.9903260605285873 -0.15059442500026382 -0.009264547714967571 -0.98855221779857272
256127.18092914129 3935279.9999999991 1.1806971722817656 0.36255488715508305 -0.28210066175799464 -0.88824161714911032
256124.96664647732 3935273.5000000005 0.018751732199395927 -0.14177362442736433 -0.13905823184364116 0.98008318400697436
256092.491746024 3935308.4999999995 -1.4489924924748039 -0.085803198655671153 0.024081843084161825 0.99602102183343766
256125.86596481747 3935280.5000000009 13.515503257111945 -0.13654115576027245 -0.24797889506659618 0.95909487559114781
256073.15088360262 3935255 -0.93490020406201102 0.59004993642832648 -0.095249131262525044 0.80172855475819382
256141.9517848785 3935322.2500000005 8.2318409757309219 -0.24795966835219574 -0.510449916403933 0.82338137318856486
256096.20889673862 3935319.7500000005 -0.21291360675685664 -0.099540727133083942 -0.024195625188008569 -0.9947392700419434
256107.09924066794 3935294.0000000005 -2.4377930441306859 -0.26315074779627329 -0.097325421959314151 0.95983303036241863
256130.16003200199 3935331.0002575037 14.609926967219579 -0.3588182051513355 -0.18823254749436522 0.9142308262773442
256116.12694613857 3935270.75 -2.0657511148141805 0.24053723579657579 0.070452458792483491 -0.9680796915778348
256089.84995883351 3935295.0000017555 -1.6188145313052476 0.0064104152035124699 0.04608216574353774 0.99891708393505008
256134.45646496481 3935324.2500000014 15.07897998471479 0.53174645878060467 -0.11064115355939698 -0.839645305300597
256095.27114842934 3935277.5 -2.8364161735055196 -0.074721280830373799 -0.096646577736570455 0.99251003481218136
256105.55928731122 3935323.0000000009 -1.1949637713004984 0.017197163654433804 0.097298114890588067 -0.99510669498349791
256130.41300641268 3935321.25 6.5035951636939613 -0.43255495210832851 0.15621449615365787 -0.88797142104800897
256137.72632800549 3935268.0000000005 5.7040995939553092 0.46653308381289188 -0.12300223968824016 -0.87590943067175042
256074.18452226146 3935306.9999999986 -1.7849232437298566 -0.32602824029779293 0.0081090078323643733 -0.94532525118093569
256117.4120398699 3935294.4999957578 -1.4283822200345702 -0.12123765837760146 -0.021571538178222535 0.99238908646344026
256093.04888751596 3935260.7500000005 -3.7532516406220982 -0.15371063599573145 0.38761148559555941 0.90891714507769694
256129.64393281593 3935324.75 12.980674692685975 -0.91363716556705743 0.026375393820841451 0.40567409123012632
256082.74928676969 3935279.5000000005 -2.3778079661987341 0.1486957529011404 0.017605361429175122 -0.98872626359276572
256067.26568958032 3935275.25 4.2080822055293892 -0.63171033161292922 0.23850790093484486 -0.73760154428060765
...
```
Environment:
Windows 10
cgal 5.0.2
Installed using vcpkg
|
process
|
different behavior of the edge aware upsample point set afabri sgiraudot i am trying to do up sampling using edge aware upsample point set function i used it from point set processing example it works for before upsample xyz data which is available in the data folder but when i used my own data it gives the following error edge aware upsample point set example exe process exited with code algorithm parameters const double sharpness angle control sharpness of the result const double edge sensitivity higher values will sample more points near the edges const double neighbor radius initial size of neighborhood is there any hidden things with the dataset my data is look like this environment windows cgal installed using vcpkg
| 1
|
23,575
| 2,659,970,531
|
IssuesEvent
|
2015-03-19 01:06:03
|
perfsonar/project
|
https://api.github.com/repos/perfsonar/project
|
closed
|
upgrade to iperf-2.0.8
|
Milestone-Release3.5 Priority-Medium Type-Enhancement
|
Original [issue 1011](https://code.google.com/p/perfsonar-ps/issues/detail?id=1011) created by arlake228 on 2014-11-03T19:27:53.000Z:
Iperf2 is going through an RC phase right now. When it reaches an official release, include the RPM in the next release.
|
1.0
|
upgrade to iperf-2.0.8 - Original [issue 1011](https://code.google.com/p/perfsonar-ps/issues/detail?id=1011) created by arlake228 on 2014-11-03T19:27:53.000Z:
Iperf2 is going through an RC phase right now. When it reaches an official release, include the RPM in the next release.
|
non_process
|
upgrade to iperf original created by on is going through an rc phase right now when it reaches official include rpm in next release
| 0
|
101,442
| 8,787,941,952
|
IssuesEvent
|
2018-12-20 20:23:12
|
rancher/rancher
|
https://api.github.com/repos/rancher/rancher
|
closed
|
'Refresh' button did not change warning icon when response is 200
|
area/catalog area/ui kind/bug status/resolved status/to-test team/cn version/2.0
|
<!--
Please search for existing issues first, then read https://rancher.com/docs/rancher/v2.x/en/contributing/#bugs-issues-or-questions to see what we expect in an issue
For security issues, please email security@rancher.com instead of posting a public issue in GitHub. You may (but are not required to) use the GPG key located on Keybase.
-->
**What kind of request is this (question/bug/enhancement/feature request):**
bug
**Steps to reproduce (least amount of steps as possible):**
1. Go to the project catalog.
2. Click 'Refresh' button
3. Click 'Refresh' button again
**Result:**
Both responses are 200, but the UI shows a warning icon.

**Other details that may be helpful:**
|
1.0
|
'Refresh' button did not change warning icon when response is 200 - <!--
Please search for existing issues first, then read https://rancher.com/docs/rancher/v2.x/en/contributing/#bugs-issues-or-questions to see what we expect in an issue
For security issues, please email security@rancher.com instead of posting a public issue in GitHub. You may (but are not required to) use the GPG key located on Keybase.
-->
**What kind of request is this (question/bug/enhancement/feature request):**
bug
**Steps to reproduce (least amount of steps as possible):**
1. Go to the project catalog.
2. Click 'Refresh' button
3. Click 'Refresh' button again
**Result:**
Both responses are 200, but the UI shows a warning icon.

**Other details that may be helpful:**
|
non_process
|
refresh button did not change waring icon when respond is please search for existing issues first then read to see what we expect in an issue for security issues please email security rancher com instead of posting a public issue in github you may but are not required to use the gpg key located on keybase what kind of request is this question bug enhancement feature request bug steps to reproduce least amount of steps as possible go to the project catalog click refresh button click refresh button again result both respond is but ui shows warning icon other details that may be helpful
| 0
|
1,233
| 3,770,497,356
|
IssuesEvent
|
2016-03-16 14:47:23
|
DoSomething/quasar
|
https://api.github.com/repos/DoSomething/quasar
|
closed
|
UK DoSomething ETL Troubleshooting
|
#processing small
|
From Graham:
Basically in the “Campaign_Member_Why_Participated__c” (from reportback.why_participated), it looks like the commas are causing the cell to split. There’s only ever a handful of reportbacks each week, hence why I haven’t spotted it yet.
I’ve attached a file from earlier this week which shows the issue (it’s just the report backs, I’ve stripped out the contact records).
Are you able to tweak the formatting to stop cells splitting when a comma is included as text?
Apart from that, all the files have been coming through great and I haven’t seen any other problems.
----
Query needs to be updated, and the `sed` parsing fixed so it does not gobble up commas that appear inside fields.
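The comma-splitting described above is classic unquoted-CSV behavior: any free-text field containing the delimiter must be quoted on output. A minimal Python sketch of the fix (the field names are illustrative, not the project's actual schema):

```python
import csv
import io

# A free-text answer containing commas, as in the why_participated column.
rows = [["why_participated", "I joined because, honestly, it was fun"]]

buf = io.StringIO()
# QUOTE_MINIMAL wraps any field containing the delimiter in double quotes,
# so commas inside free-text fields no longer split the cell.
csv.writer(buf, quoting=csv.QUOTE_MINIMAL).writerows(rows)
print(buf.getvalue())
```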
|
1.0
|
UK DoSomething ETL Troubleshooting - From Graham:
Basically in the “Campaign_Member_Why_Participated__c” (from reportback.why_participated), it looks like the commas are causing the cell to split. There’s only ever a handful of reportbacks each week, hence why I haven’t spotted it yet.
I’ve attached a file from earlier this week which shows the issue (it’s just the report backs, I’ve stripped out the contact records).
Are you able to tweak the formatting to stop cells splitting when a comma is included as text?
Apart from that, all the files have been coming through great and I haven’t seen any other problems.
----
Query needs to be updated, and the `sed` parsing fixed so it does not gobble up commas that appear inside fields.
|
process
|
uk dosomething etl troubleshooting from graham basically in the “campaign member why participated c” from reportback why participated it looks like the commas are causing the cell to split there’s only ever a handful of reportbacks each week hence why i haven’t spotted it yet i’ve attached a file from earlier this week which shows the issue it’s just the report backs i’ve stripped out the contact records are you able to tweak the formatting to stop cells splitting when a comma is included as text apart from that all the files have been coming through great and i haven’t seen any other problems query needs to be updated as well as sed parsing fixed to not gobble up comma s that are in fields
| 1
|
17,990
| 12,467,344,760
|
IssuesEvent
|
2020-05-28 16:54:54
|
godotengine/godot
|
https://api.github.com/repos/godotengine/godot
|
closed
|
Add Chroma-Key or "Green Screen" Keying-Out of a Color Range
|
feature proposal topic:rendering usability
|
The idea is to be able to use green screen footage in a game. The user would select a range of colors to be considered transparent. I understand this is probably doable via shaders. I am suggesting this as a feature in Godot, since it is a media industry standard workflow.
I found an example of someone's [implementation](https://www.youtube.com/watch?v=lTmvkf_Ydzs) of this in Godot, but I think it should be a built-in feature.
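For context, the shader approach mentioned above is small; a hedged sketch of a Godot canvas-item shader that keys out colors near a chosen key (the uniform names and threshold metric are illustrative, not the linked implementation):

```glsl
shader_type canvas_item;

// Key color and tolerance are exposed as uniforms so they can be tuned per texture.
uniform vec3 key_color = vec3(0.0, 1.0, 0.0);
uniform float threshold = 0.4;

void fragment() {
    vec4 c = texture(TEXTURE, UV);
    // Make pixels within the color tolerance fully transparent.
    if (distance(c.rgb, key_color) < threshold) {
        COLOR = vec4(c.rgb, 0.0);
    } else {
        COLOR = c;
    }
}
```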
|
True
|
Add Chroma-Key or "Green Screen" Keying-Out of a Color Range - The idea is to be able to use green screen footage in a game. The user would select a range of colors to be considered transparent. I understand this is probably doable via shaders. I am suggesting this as a feature in Godot, since it is a media industry standard workflow.
I found an example of someone's [implementation](https://www.youtube.com/watch?v=lTmvkf_Ydzs) of this in Godot, but I think it should be a built-in feature.
|
non_process
|
add chroma key or green screen keying out of a color range the idea is to be able to use green screen footage in a game the user would select a range of colors to be considered transparent i understand this is probably doable via shaders i am suggesting this a feature in godot since it is a media industry standard workflow i found an example of someones of this in godot but i think it should be a built in feature
| 0
|
11,161
| 13,957,693,950
|
IssuesEvent
|
2020-10-24 08:11:07
|
alexanderkotsev/geoportal
|
https://api.github.com/repos/alexanderkotsev/geoportal
|
opened
|
IS: Missing resources in Geoportal
|
Geoportal Harvesting process IS - Iceland
|
Collected from the Geoportal Workshop online survey answers:
http://inspire-geoportal.ec.europa.eu/results.html?country=is&view=details&legislation=all
Here we have 2 data sets and both should be Viewable; they have a WMS service defined in the metadata. I do
not know why we get 0 here instead of 2 :(
|
1.0
|
IS: Missing resources in Geoportal - Collected from the Geoportal Workshop online survey answers:
http://inspire-geoportal.ec.europa.eu/results.html?country=is&view=details&legislation=all
Here we have 2 data sets and both should be Viewable; they have a WMS service defined in the metadata. I do
not know why we get 0 here instead of 2 :(
|
process
|
is missing resources in geoportal collected from the geoportal workshop online survey answers here we have data sets and both should be viewable the have wms service defined in the metadata i do not know why we get her instead of
| 1
|
17,914
| 23,905,769,656
|
IssuesEvent
|
2022-09-09 00:29:33
|
streamnative/flink
|
https://api.github.com/repos/streamnative/flink
|
closed
|
[Enhancement][FLINK-28959][Stream] 504 gateway timeout when consuming a large number of topics using TopicPattern
|
compute/data-processing type/enhancement
|
Our situation is as follows:
- In a single namespace, more than 300 topics (partitioned-topic with a single partition) will report this error;
- Error still exists after resource expansion
- A Flink client program consumes 30 to 50 topics per program. This error is bound to be reported after five consecutive programs
Version
- flink-sql-connector-pulsar: 1.15.0.1
- pulsar 2.9.2.17
|
1.0
|
[Enhancement][FLINK-28959][Stream] 504 gateway timeout when consuming a large number of topics using TopicPattern - Our situation is as follows:
- In a single namespace, more than 300 topics (partitioned-topic with a single partition) will report this error;
- Error still exists after resource expansion
- A Flink client program consumes 30 to 50 topics per program. This error is bound to be reported after five consecutive programs
Version
- flink-sql-connector-pulsar: 1.15.0.1
- pulsar 2.9.2.17
|
process
|
gateway timeout when consume large number of topics using topicpatten our situation is as follows in a single namespace more than topics partitioned topic with a single partition will report this error error still exists after resource expansion a flick client program consumes to topics per program this error is bound to be reported after five consecutive programs version flink sql connector pulsar pulsar
| 1
|
31,078
| 7,301,806,949
|
IssuesEvent
|
2018-02-27 07:16:08
|
MIPT-ILab/mipt-mips
|
https://api.github.com/repos/MIPT-ILab/mipt-mips
|
closed
|
Use ports instead of PC and new_PC variables
|
1 code first semester
|
The idea is to keep PC in ports in the same manner as ID stage keeps instruction data.
There may be an issue with initial value, two ways to solve it are possible:
1. add a Core->IF port which propagates startPC in the beginning of the simulation (more difficult, but it's hard to break it)
2. use startPC if all the ports which may contain PC are empty (easier but has potential issues)
|
1.0
|
Use ports instead of PC and new_PC variables - The idea is to keep PC in ports in the same manner as ID stage keeps instruction data.
There may be an issue with initial value, two ways to solve it are possible:
1. add a Core->IF port which propagates startPC in the beginning of the simulation (more difficult, but it's hard to break it)
2. use startPC if all the ports which may contain PC are empty (easier but has potential issues)
|
non_process
|
use ports instead of pc and new pc variables the idea is to keep pc in ports in the same manner as id stage keeps instruction data there may be an issue with initial value two ways to solve it are possible add a core if port which propagates startpc in the beginning of the simulation more difficult but it s hard to break it use startpc if all the ports which may contain pc are empty easier but has potential issues
| 0
|
258,155
| 27,563,860,801
|
IssuesEvent
|
2023-03-08 01:11:41
|
jtimberlake/pacbot
|
https://api.github.com/repos/jtimberlake/pacbot
|
opened
|
CVE-2020-7608 (Medium) detected in multiple libraries
|
security vulnerability
|
## CVE-2020-7608 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>yargs-parser-11.1.1.tgz</b>, <b>yargs-parser-4.2.1.tgz</b>, <b>yargs-parser-7.0.0.tgz</b></p></summary>
<p>
<details><summary><b>yargs-parser-11.1.1.tgz</b></p></summary>
<p>the mighty option parser used by yargs</p>
<p>Library home page: <a href="https://registry.npmjs.org/yargs-parser/-/yargs-parser-11.1.1.tgz">https://registry.npmjs.org/yargs-parser/-/yargs-parser-11.1.1.tgz</a></p>
<p>Path to dependency file: /webapp/package.json</p>
<p>Path to vulnerable library: /webapp/node_modules/protractor/node_modules/yargs-parser/package.json</p>
<p>
Dependency Hierarchy:
- protractor-5.4.4.tgz (Root Library)
- yargs-12.0.5.tgz
- :x: **yargs-parser-11.1.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>yargs-parser-4.2.1.tgz</b></p></summary>
<p>the mighty option parser used by yargs</p>
<p>Library home page: <a href="https://registry.npmjs.org/yargs-parser/-/yargs-parser-4.2.1.tgz">https://registry.npmjs.org/yargs-parser/-/yargs-parser-4.2.1.tgz</a></p>
<p>Path to dependency file: /webapp/package.json</p>
<p>Path to vulnerable library: /webapp/node_modules/webpack-dev-server/node_modules/yargs-parser/package.json</p>
<p>
Dependency Hierarchy:
- cli-1.6.8.tgz (Root Library)
- webpack-dev-server-2.11.5.tgz
- yargs-6.6.0.tgz
- :x: **yargs-parser-4.2.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>yargs-parser-7.0.0.tgz</b></p></summary>
<p>the mighty option parser used by yargs</p>
<p>Library home page: <a href="https://registry.npmjs.org/yargs-parser/-/yargs-parser-7.0.0.tgz">https://registry.npmjs.org/yargs-parser/-/yargs-parser-7.0.0.tgz</a></p>
<p>Path to dependency file: /webapp/package.json</p>
<p>Path to vulnerable library: /webapp/node_modules/webpack/node_modules/yargs-parser/package.json</p>
<p>
Dependency Hierarchy:
- cli-1.6.8.tgz (Root Library)
- webpack-3.10.0.tgz
- yargs-8.0.2.tgz
- :x: **yargs-parser-7.0.0.tgz** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
yargs-parser could be tricked into adding or modifying properties of Object.prototype using a "__proto__" payload.
<p>Publish Date: 2020-03-16
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-7608>CVE-2020-7608</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2020-03-16</p>
<p>Fix Resolution (yargs-parser): 13.1.2</p>
<p>Direct dependency fix Resolution (protractor): 7.0.0</p><p>Fix Resolution (yargs-parser): 5.0.0-security.0</p>
<p>Direct dependency fix Resolution (@angular/cli): 6.0.0</p><p>Fix Resolution (yargs-parser): 13.1.2</p>
<p>Direct dependency fix Resolution (@angular/cli): 7.0.1</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
|
True
|
CVE-2020-7608 (Medium) detected in multiple libraries - ## CVE-2020-7608 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>yargs-parser-11.1.1.tgz</b>, <b>yargs-parser-4.2.1.tgz</b>, <b>yargs-parser-7.0.0.tgz</b></p></summary>
<p>
<details><summary><b>yargs-parser-11.1.1.tgz</b></p></summary>
<p>the mighty option parser used by yargs</p>
<p>Library home page: <a href="https://registry.npmjs.org/yargs-parser/-/yargs-parser-11.1.1.tgz">https://registry.npmjs.org/yargs-parser/-/yargs-parser-11.1.1.tgz</a></p>
<p>Path to dependency file: /webapp/package.json</p>
<p>Path to vulnerable library: /webapp/node_modules/protractor/node_modules/yargs-parser/package.json</p>
<p>
Dependency Hierarchy:
- protractor-5.4.4.tgz (Root Library)
- yargs-12.0.5.tgz
- :x: **yargs-parser-11.1.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>yargs-parser-4.2.1.tgz</b></p></summary>
<p>the mighty option parser used by yargs</p>
<p>Library home page: <a href="https://registry.npmjs.org/yargs-parser/-/yargs-parser-4.2.1.tgz">https://registry.npmjs.org/yargs-parser/-/yargs-parser-4.2.1.tgz</a></p>
<p>Path to dependency file: /webapp/package.json</p>
<p>Path to vulnerable library: /webapp/node_modules/webpack-dev-server/node_modules/yargs-parser/package.json</p>
<p>
Dependency Hierarchy:
- cli-1.6.8.tgz (Root Library)
- webpack-dev-server-2.11.5.tgz
- yargs-6.6.0.tgz
- :x: **yargs-parser-4.2.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>yargs-parser-7.0.0.tgz</b></p></summary>
<p>the mighty option parser used by yargs</p>
<p>Library home page: <a href="https://registry.npmjs.org/yargs-parser/-/yargs-parser-7.0.0.tgz">https://registry.npmjs.org/yargs-parser/-/yargs-parser-7.0.0.tgz</a></p>
<p>Path to dependency file: /webapp/package.json</p>
<p>Path to vulnerable library: /webapp/node_modules/webpack/node_modules/yargs-parser/package.json</p>
<p>
Dependency Hierarchy:
- cli-1.6.8.tgz (Root Library)
- webpack-3.10.0.tgz
- yargs-8.0.2.tgz
- :x: **yargs-parser-7.0.0.tgz** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
yargs-parser could be tricked into adding or modifying properties of Object.prototype using a "__proto__" payload.
<p>Publish Date: 2020-03-16
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-7608>CVE-2020-7608</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2020-03-16</p>
<p>Fix Resolution (yargs-parser): 13.1.2</p>
<p>Direct dependency fix Resolution (protractor): 7.0.0</p><p>Fix Resolution (yargs-parser): 5.0.0-security.0</p>
<p>Direct dependency fix Resolution (@angular/cli): 6.0.0</p><p>Fix Resolution (yargs-parser): 13.1.2</p>
<p>Direct dependency fix Resolution (@angular/cli): 7.0.1</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
|
non_process
|
cve medium detected in multiple libraries cve medium severity vulnerability vulnerable libraries yargs parser tgz yargs parser tgz yargs parser tgz yargs parser tgz the mighty option parser used by yargs library home page a href path to dependency file webapp package json path to vulnerable library webapp node modules protractor node modules yargs parser package json dependency hierarchy protractor tgz root library yargs tgz x yargs parser tgz vulnerable library yargs parser tgz the mighty option parser used by yargs library home page a href path to dependency file webapp package json path to vulnerable library webapp node modules webpack dev server node modules yargs parser package json dependency hierarchy cli tgz root library webpack dev server tgz yargs tgz x yargs parser tgz vulnerable library yargs parser tgz the mighty option parser used by yargs library home page a href path to dependency file webapp package json path to vulnerable library webapp node modules webpack node modules yargs parser package json dependency hierarchy cli tgz root library webpack tgz yargs tgz x yargs parser tgz vulnerable library found in base branch master vulnerability details yargs parser could be tricked into adding or modifying properties of object prototype using a proto payload publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version release date fix resolution yargs parser direct dependency fix resolution protractor fix resolution yargs parser security direct dependency fix resolution angular cli fix resolution yargs parser direct dependency fix resolution angular cli check this box to open an automated fix pr
| 0
|
398,820
| 27,214,227,201
|
IssuesEvent
|
2023-02-20 19:41:18
|
Azure/Azure-Functions
|
https://api.github.com/repos/Azure/Azure-Functions
|
opened
|
Blob Output Binding for Immutable Blob Container
|
documentation
|
Hello,
I need to store blobs to Immutable Blob Container. For this purpose I create Function App (.net 6 isolated) with Blob Output Binding.
Everything works perfectly until I configured Immutable policy (Time-based retention) on the target container. Function App shows 409 Conflict response from blob storage.
From logs I see that SDK failed to find the blob.

After that it creates the empty blob.

And then tries to update the contents. But gets the error because the container is immutable.

**My question.**
How to configure Blob Output Binding to store blobs in an Immutable Blob Container?
Thank you!
|
1.0
|
Blob Output Binding for Immutable Blob Container - Hello,
I need to store blobs to Immutable Blob Container. For this purpose I create Function App (.net 6 isolated) with Blob Output Binding.
Everything works perfectly until I configured Immutable policy (Time-based retention) on the target container. Function App shows 409 Conflict response from blob storage.
From logs I see that SDK failed to find the blob.

After that it creates the empty blob.

And then tries to update the contents. But gets the error because the container is immutable.

**My question.**
How to configure Blob Output Binding to store blobs in an Immutable Blob Container?
Thank you!
|
non_process
|
blob output binding for immutable blob container hello i need to store blobs to immutable blob container for this purpose i create function app net isolated with blob output binding everything works perfectly until i configured immutable policy time based retention on the target container function app shows conflict response from blob storage from logs i see that sdk failed to find the blob after that it creates the empty blob and then tries to update the contents but gets the error because the container is immutable my question how to configure blob output binding to store blobs in an immutable blob container thank you
| 0
|
14,631
| 17,767,728,742
|
IssuesEvent
|
2021-08-30 09:41:33
|
googleapis/nodejs-service-management
|
https://api.github.com/repos/googleapis/nodejs-service-management
|
closed
|
Dependency Dashboard
|
type: process api: servicemanagement
|
This issue contains a list of Renovate updates and their statuses.
## Awaiting Schedule
These updates are awaiting their schedule. Click on a checkbox to get an update now.
- [ ] <!-- unschedule-branch=renovate/actions-setup-node-2.x -->chore(deps): update actions/setup-node action to v2
## Ignored or Blocked
These are blocked by an existing closed PR and will not be recreated unless you click a checkbox below.
- [ ] <!-- recreate-branch=renovate/mocha-9.x -->[chore(deps): update dependency mocha to v9](../pull/57) (`mocha`, `@types/mocha`)
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
1.0
|
Dependency Dashboard - This issue contains a list of Renovate updates and their statuses.
## Awaiting Schedule
These updates are awaiting their schedule. Click on a checkbox to get an update now.
- [ ] <!-- unschedule-branch=renovate/actions-setup-node-2.x -->chore(deps): update actions/setup-node action to v2
## Ignored or Blocked
These are blocked by an existing closed PR and will not be recreated unless you click a checkbox below.
- [ ] <!-- recreate-branch=renovate/mocha-9.x -->[chore(deps): update dependency mocha to v9](../pull/57) (`mocha`, `@types/mocha`)
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
process
|
dependency dashboard this issue contains a list of renovate updates and their statuses awaiting schedule these updates are awaiting their schedule click on a checkbox to get an update now chore deps update actions setup node action to ignored or blocked these are blocked by an existing closed pr and will not be recreated unless you click a checkbox below pull mocha types mocha check this box to trigger a request for renovate to run again on this repository
| 1
|
262,015
| 19,758,699,114
|
IssuesEvent
|
2022-01-16 02:54:23
|
alexmfritz/overlook-hotel
|
https://api.github.com/repos/alexmfritz/overlook-hotel
|
closed
|
###### DASHBOARD
|
documentation Iteration 1!
|
- [x] CUSTOMER DASH BOARD
- [x] Should see all bookings
- [x] The total amount spent on rooms
|
1.0
|
###### DASHBOARD - - [x] CUSTOMER DASH BOARD
- [x] Should see all bookings
- [x] The total amount spent on rooms
|
non_process
|
dashboard customer dash board should see all bookings the total amount spent on rooms
| 0
|
521,358
| 15,108,301,972
|
IssuesEvent
|
2021-02-08 16:28:36
|
GoogleCloudPlatform/python-docs-samples
|
https://api.github.com/repos/GoogleCloudPlatform/python-docs-samples
|
closed
|
storage.cloud-client.iam_test: test_remove_bucket_conditional_iam_binding failed
|
api: storage flakybot: flaky flakybot: issue priority: p1 samples type: bug
|
This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/master/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: a655d6b60bccfb37ddcd65e3b34d288b8c89ea66
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/b4b29f8e-3b62-4fad-8435-69415a2d7b99), [Sponge](http://sponge2/b4b29f8e-3b62-4fad-8435-69415a2d7b99)
status: failed
<details><summary>Test output</summary><br><pre>Traceback (most recent call last):
File "/workspace/storage/cloud-client/iam_test.py", line 124, in test_remove_bucket_conditional_iam_binding
bucket.name, ROLE, CONDITION_TITLE, CONDITION_DESCRIPTION, CONDITION_EXPRESSION
File "/workspace/storage/cloud-client/storage_remove_bucket_conditional_iam_binding.py", line 52, in remove_bucket_conditional_iam_binding
bucket.set_iam_policy(policy)
File "/workspace/storage/cloud-client/.nox/py-3-6/lib/python3.6/site-packages/google/cloud/storage/bucket.py", line 2939, in set_iam_policy
retry=retry,
File "/workspace/storage/cloud-client/.nox/py-3-6/lib/python3.6/site-packages/google/cloud/storage/_http.py", line 63, in api_request
return call()
File "/workspace/storage/cloud-client/.nox/py-3-6/lib/python3.6/site-packages/google/cloud/_http.py", line 483, in api_request
raise exceptions.from_http_response(response)
google.api_core.exceptions.ServiceUnavailable: 503 PUT https://storage.googleapis.com/storage/v1/b/test-iam-5f57b811-9068-4ca4-9594-0f745372d5a7/iam?prettyPrint=false: Backend Error</pre></details>
|
1.0
|
storage.cloud-client.iam_test: test_remove_bucket_conditional_iam_binding failed - This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/master/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: a655d6b60bccfb37ddcd65e3b34d288b8c89ea66
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/b4b29f8e-3b62-4fad-8435-69415a2d7b99), [Sponge](http://sponge2/b4b29f8e-3b62-4fad-8435-69415a2d7b99)
status: failed
<details><summary>Test output</summary><br><pre>Traceback (most recent call last):
File "/workspace/storage/cloud-client/iam_test.py", line 124, in test_remove_bucket_conditional_iam_binding
bucket.name, ROLE, CONDITION_TITLE, CONDITION_DESCRIPTION, CONDITION_EXPRESSION
File "/workspace/storage/cloud-client/storage_remove_bucket_conditional_iam_binding.py", line 52, in remove_bucket_conditional_iam_binding
bucket.set_iam_policy(policy)
File "/workspace/storage/cloud-client/.nox/py-3-6/lib/python3.6/site-packages/google/cloud/storage/bucket.py", line 2939, in set_iam_policy
retry=retry,
File "/workspace/storage/cloud-client/.nox/py-3-6/lib/python3.6/site-packages/google/cloud/storage/_http.py", line 63, in api_request
return call()
File "/workspace/storage/cloud-client/.nox/py-3-6/lib/python3.6/site-packages/google/cloud/_http.py", line 483, in api_request
raise exceptions.from_http_response(response)
google.api_core.exceptions.ServiceUnavailable: 503 PUT https://storage.googleapis.com/storage/v1/b/test-iam-5f57b811-9068-4ca4-9594-0f745372d5a7/iam?prettyPrint=false: Backend Error</pre></details>
|
non_process
|
storage cloud client iam test test remove bucket conditional iam binding failed this test failed to configure my behavior see if i m commenting on this issue too often add the flakybot quiet label and i will stop commenting commit buildurl status failed test output traceback most recent call last file workspace storage cloud client iam test py line in test remove bucket conditional iam binding bucket name role condition title condition description condition expression file workspace storage cloud client storage remove bucket conditional iam binding py line in remove bucket conditional iam binding bucket set iam policy policy file workspace storage cloud client nox py lib site packages google cloud storage bucket py line in set iam policy retry retry file workspace storage cloud client nox py lib site packages google cloud storage http py line in api request return call file workspace storage cloud client nox py lib site packages google cloud http py line in api request raise exceptions from http response response google api core exceptions serviceunavailable put backend error
| 0
|
4,426
| 7,303,053,999
|
IssuesEvent
|
2018-02-27 11:46:36
|
0-complexity/openvcloud
|
https://api.github.com/repos/0-complexity/openvcloud
|
closed
|
ROS'es do not migrate as per User Selection
|
process_duplicate state_verification
|
#### Detailed description
When the CPU node is put in maintenance mode, OVC moves the ROS'es based upon its internal algorithm. However, when the user selects the CPU node to which he wants to move his ROS'es, OVC still seems to move them to different nodes and discards the user's selection.
#### Expected Result:
The VM or ROS'es should move to the CPU node as per the user's selection.
#### Steps to reproduce
1. Select a VFW/ROS from the drop-down and try moving it to a designated node you select.
#### Relevant stacktraces
There are no errors. The VFWs do move, but they don't move as selected by the user.
#### Installation information
- environment version: b17 (Russia)
JumpScale
Core: branch: production (8a77511) 1/23/2018, 5:15:23 PM
Portal: branch: production (ada4a21) 1/30/2018, 9:44:31 PM
OpenvCloud
Core: branch: production (a5c4eee) 2/14/2018, 3:32:38 PM
G8VDC: branch: production (989668b) 12/27/2017, 6:19:04 PM
Selfhealing: branch: production (3bc2f9b) 2/7/2018, 3:58:11 PM
OpenvStorage b17
openvstorage-backend-core: 1.9.2-1
openvstorage-health-check: 3.4.0-1
openvstorage-webapps: 2.9.9-1
openvstorage-backend-webapps: 1.9.2-1
openvstorage-core: 2.9.9-1
openvstorage-hc: 1.9.2-1
alba-ee: 1.5.17
openvstorage: 2.9.9-1
openvstorage-backend: 1.9.2-1
openvstorage-extensions: 0.1.1-1
openvstorage-sdm: 1.9.1-1
|
1.0
|
ROS'es do not migrate as per User Selection - #### Detailed description
When the CPU node is put in maintenance mode, OVC moves the ROS'es based upon its internal algorithm. However, when the user selects the CPU node to which he wants to move his ROS'es, OVC still seems to move them to different nodes and discards the user's selection.
#### Expected Result:
The VM or ROS'es should move to the CPU node as per the user's selection.
#### Steps to reproduce
1. Select a VFW/ROS from the drop-down and try moving it to a designated node you select.
#### Relevant stacktraces
There are no errors. The VFWs do move, but they don't move as selected by the user.
#### Installation information
- environment version: b17 (Russia)
JumpScale
Core: branch: production (8a77511) 1/23/2018, 5:15:23 PM
Portal: branch: production (ada4a21) 1/30/2018, 9:44:31 PM
OpenvCloud
Core: branch: production (a5c4eee) 2/14/2018, 3:32:38 PM
G8VDC: branch: production (989668b) 12/27/2017, 6:19:04 PM
Selfhealing: branch: production (3bc2f9b) 2/7/2018, 3:58:11 PM
OpenvStorage b17
openvstorage-backend-core: 1.9.2-1
openvstorage-health-check: 3.4.0-1
openvstorage-webapps: 2.9.9-1
openvstorage-backend-webapps: 1.9.2-1
openvstorage-core: 2.9.9-1
openvstorage-hc: 1.9.2-1
alba-ee: 1.5.17
openvstorage: 2.9.9-1
openvstorage-backend: 1.9.2-1
openvstorage-extensions: 0.1.1-1
openvstorage-sdm: 1.9.1-1
|
process
|
ros es do not migrate as per user selection detailed description when the cpu node is put in maintenance mode ovc moves the ros es based upon its internal algorithm however when the user selects the cpu node on which he wants to move his ros es still ovc seems to move the them on different nodes and discards the user s selection expected result the vm or ros s should move to the cpu node as per user selection steps to reproduce select a vfw ros from the drop down and try moving it on a designated node you select relevant stacktraces there are no errors the vfws do move but they don t move as selected by the user installation information environment version russia jumpscale core branch production pm portal branch production pm openvcloud core branch production pm branch production pm selfhealing branch production pm openvstorage openvstorage backend core openvstorage health check openvstorage webapps openvstorage backend webapps openvstorage core openvstorage hc alba ee openvstorage openvstorage backend openvstorage extensions openvstorage sdm
| 1
|
84,732
| 15,728,261,530
|
IssuesEvent
|
2021-03-29 13:37:49
|
ssobue/spring-preauth-session
|
https://api.github.com/repos/ssobue/spring-preauth-session
|
closed
|
CVE-2020-25649 (High) detected in jackson-databind-2.9.9.3.jar
|
security vulnerability
|
## CVE-2020-25649 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.9.3.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: spring-preauth-session/pom.xml</p>
<p>Path to vulnerable library: /root/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9.3/jackson-databind-2.9.9.3.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-actuator-2.1.9.RELEASE.jar (Root Library)
- spring-boot-actuator-autoconfigure-2.1.9.RELEASE.jar
- :x: **jackson-databind-2.9.9.3.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A flaw was found in FasterXML Jackson Databind, where it did not have entity expansion secured properly. This flaw allows vulnerability to XML external entity (XXE) attacks. The highest threat from this vulnerability is data integrity.
<p>Publish Date: 2020-12-03
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-25649>CVE-2020-25649</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/2589">https://github.com/FasterXML/jackson-databind/issues/2589</a></p>
<p>Release Date: 2020-12-03</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.6.7.4,2.9.10.7,2.10.5.1,2.11.0.rc1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-25649 (High) detected in jackson-databind-2.9.9.3.jar - ## CVE-2020-25649 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.9.3.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: spring-preauth-session/pom.xml</p>
<p>Path to vulnerable library: /root/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9.3/jackson-databind-2.9.9.3.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-actuator-2.1.9.RELEASE.jar (Root Library)
- spring-boot-actuator-autoconfigure-2.1.9.RELEASE.jar
- :x: **jackson-databind-2.9.9.3.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A flaw was found in FasterXML Jackson Databind, where it did not have entity expansion secured properly. This flaw allows vulnerability to XML external entity (XXE) attacks. The highest threat from this vulnerability is data integrity.
<p>Publish Date: 2020-12-03
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-25649>CVE-2020-25649</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/2589">https://github.com/FasterXML/jackson-databind/issues/2589</a></p>
<p>Release Date: 2020-12-03</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.6.7.4,2.9.10.7,2.10.5.1,2.11.0.rc1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file spring preauth session pom xml path to vulnerable library root repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy spring boot starter actuator release jar root library spring boot actuator autoconfigure release jar x jackson databind jar vulnerable library vulnerability details a flaw was found in fasterxml jackson databind where it did not have entity expansion secured properly this flaw allows vulnerability to xml external entity xxe attacks the highest threat from this vulnerability is data integrity publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind step up your open source security game with whitesource
| 0
|
12,197
| 14,742,437,537
|
IssuesEvent
|
2021-01-07 12:17:41
|
kdjstudios/SABillingGitlab
|
https://api.github.com/repos/kdjstudios/SABillingGitlab
|
closed
|
Winnipeg Billing Cycles
|
anc-process anp-1 ant-support has attachment
|
In GitLab by @kdjstudios on Apr 22, 2019, 09:49
**Submitted by:** Cheryl Hamelin <cheryl.hamelin@answernet.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2019-04-22-27623
**Server:** Internal
**Client/Site:** Winnipeg
**Account:** NA
**Issue:**
When I was working on the 3/28/19 NCSM 1st Billing Cycle for Winnipeg, I had entered some of the information, but had not completed it. The next day, I went in to finish adding what I needed to so I could finalize the billing, and this is what I found:
If you pull up the billing cycles, you will see that Winnipeg is showing the dates for May for the first and 15th cycles. I tried to change them and could not. All of the revenue allocations I had previously put in for the 1st & 15th billing cycles are gone and need to be re-entered. Can you fix it?

I know something happened before with this site. I am wondering if someone at the site is accidentally changing cycles. As I mentioned above, I tried to change the dates back in order to correct everything, but was unable to. Please help as I am not sure how to correct the issue and allocate the revenue.
|
1.0
|
Winnipeg Billing Cycles - In GitLab by @kdjstudios on Apr 22, 2019, 09:49
**Submitted by:** Cheryl Hamelin <cheryl.hamelin@answernet.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2019-04-22-27623
**Server:** Internal
**Client/Site:** Winnipeg
**Account:** NA
**Issue:**
When I was working on the 3/28/19 NCSM 1st Billing Cycle for Winnipeg, I had entered some of the information, but had not completed it. The next day, I went in to finish adding what I needed to so I could finalize the billing, and this is what I found:
If you pull up the billing cycles, you will see that Winnipeg is showing the dates for May for the first and 15th cycles. I tried to change them and could not. All of the revenue allocations I had previously put in for the 1st & 15th billing cycles are gone and need to be re-entered. Can you fix it?

I know something happened before with this site. I am wondering if someone at the site is accidentally changing cycles. As I mentioned above, I tried to change the dates back in order to correct everything, but was unable to. Please help as I am not sure how to correct the issue and allocate the revenue.
|
process
|
winnipeg billing cycles in gitlab by kdjstudios on apr submitted by cheryl hamelin helpdesk server internal client site winnipeg account na issue when i was working on the ncsm billing cycle for winnipeg i had entered some of the information but had not completed it the next day i went in to finish adding what i needed to so i could finalize the billing and this is what i found if you pull up the billing cycles you will see that winnipeg is showing the dates for may for the first and cycles i tried to change them and could not all of the revenue allocations i had previously put in for the billing cycles are gone and need to be re entered can you fix it uploads image png i know something happened before with this site i am wondering if someone at the site is accidentally changing cycles as i mentioned above i tried to change the dates back in order to correct everything but was unable to please help as i am not sure how to correct the issue and allocate the revenue
| 1
|
100,050
| 11,171,782,182
|
IssuesEvent
|
2019-12-28 22:56:37
|
nori-io/nori
|
https://api.github.com/repos/nori-io/nori
|
opened
|
Nori Documentation: Intro / Get started
|
Documentation task
|
Create documentation: Intro / Get started:
#### Intro / Get started
There are two main ways to get started with Nori
- [ ] Tutorial: Step-by-step instructions on how to install Nori, use and develop plugins.
- [ ] Quick start: One page summary of how to install Nori and use plugins.
|
1.0
|
Nori Documentation: Intro / Get started - Create documentation: Intro / Get started:
#### Intro / Get started
There are two main ways to get started with Nori
- [ ] Tutorial: Step-by-step instructions on how to install Nori, use and develop plugins.
- [ ] Quick start: One page summary of how to install Nori and use plugins.
|
non_process
|
nori documentation intro get started create documentation intro get started intro get started there are two main ways to get started with nori tutorial step by step instructions on how to install nori use and develop plugins quick start one page summary of how to install nori and use plugins
| 0
|
2,070
| 4,877,176,657
|
IssuesEvent
|
2016-11-16 15:06:28
|
neuropoly/spinalcordtoolbox
|
https://api.github.com/repos/neuropoly/spinalcordtoolbox
|
opened
|
add integrity testing for CSA computation
|
sct_process_segmentation testing
|
add integrity testing by creating a segmentation with known diameter, to avoid issues like #1022.
|
1.0
|
add integrity testing for CSA computation - add integrity testing by creating a segmentation with known diameter, to avoid issues like #1022.
|
process
|
add integrity testing for csa computation add integrity testing by creating a segmentation with known diameter to avoid issues like
| 1
|
74,174
| 15,325,231,373
|
IssuesEvent
|
2021-02-26 00:50:15
|
wrbejar/JavaVulnerable
|
https://api.github.com/repos/wrbejar/JavaVulnerable
|
opened
|
CVE-2017-3523 (High) detected in mysql-connector-java-5.1.2.jar, mysql-connector-java-5.1.26.jar
|
security vulnerability
|
## CVE-2017-3523 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>mysql-connector-java-5.1.2.jar</b>, <b>mysql-connector-java-5.1.26.jar</b></p></summary>
<p>
<details><summary><b>mysql-connector-java-5.1.2.jar</b></p></summary>
<p>MySQL JDBC Type 4 driver</p>
<p>Library home page: <a href="http://dev.mysql.com/doc/connector-j/en/">http://dev.mysql.com/doc/connector-j/en/</a></p>
<p>Path to dependency file: JavaVulnerable/bin/pom.xml</p>
<p>Path to vulnerable library: canner/.m2/repository/mysql/mysql-connector-java/5.1.2/mysql-connector-java-5.1.2.jar,JavaVulnerable/target/JavaVulnerableLab/WEB-INF/lib/mysql-connector-java-5.1.2.jar</p>
<p>
Dependency Hierarchy:
- :x: **mysql-connector-java-5.1.2.jar** (Vulnerable Library)
</details>
<details><summary><b>mysql-connector-java-5.1.26.jar</b></p></summary>
<p>MySQL JDBC Type 4 driver</p>
<p>Library home page: <a href="http://dev.mysql.com/doc/connector-j/en/">http://dev.mysql.com/doc/connector-j/en/</a></p>
<p>Path to dependency file: JavaVulnerable/bin/target/JavaVulnerableLab/META-INF/maven/org.cysecurity/JavaVulnerableLab/pom.xml</p>
<p>Path to vulnerable library: canner/.m2/repository/mysql/mysql-connector-java/5.1.26/mysql-connector-java-5.1.26.jar,JavaVulnerable/target/JavaVulnerableLab/META-INF/maven/org.cysecurity/JavaVulnerableLab/target/JavaVulnerableLab/WEB-INF/lib/mysql-connector-java-5.1.26.jar,canner/.m2/repository/mysql/mysql-connector-java/5.1.26/mysql-connector-java-5.1.26.jar,JavaVulnerable/bin/target/JavaVulnerableLab/WEB-INF/lib/mysql-connector-java-5.1.26.jar,JavaVulnerable/bin/target/JavaVulnerableLab/WEB-INF/lib/mysql-connector-java-5.1.26.jar,JavaVulnerable/target/JavaVulnerableLab/WEB-INF/lib/mysql-connector-java-5.1.26.jar,JavaVulnerable/bin/target/JavaVulnerableLab/META-INF/maven/org.cysecurity/JavaVulnerableLab/target/JavaVulnerableLab/WEB-INF/lib/mysql-connector-java-5.1.26.jar</p>
<p>
Dependency Hierarchy:
- :x: **mysql-connector-java-5.1.26.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/wrbejar/JavaVulnerable/commit/9bd46b7721f26a6c8d0a3121c82cb6377d28790a">9bd46b7721f26a6c8d0a3121c82cb6377d28790a</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Vulnerability in the MySQL Connectors component of Oracle MySQL (subcomponent: Connector/J). Supported versions that are affected are 5.1.40 and earlier. Difficult to exploit vulnerability allows low privileged attacker with network access via multiple protocols to compromise MySQL Connectors. While the vulnerability is in MySQL Connectors, attacks may significantly impact additional products. Successful attacks of this vulnerability can result in takeover of MySQL Connectors. CVSS 3.0 Base Score 8.5 (Confidentiality, Integrity and Availability impacts). CVSS Vector: (CVSS:3.0/AV:N/AC:H/PR:L/UI:N/S:C/C:H/I:H/A:H).
<p>Publish Date: 2017-04-24
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-3523>CVE-2017-3523</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.oracle.com/technetwork/security-advisory/cpuapr2017-3236618.html">https://www.oracle.com/technetwork/security-advisory/cpuapr2017-3236618.html</a></p>
<p>Release Date: 2017-04-24</p>
<p>Fix Resolution: 5.1.41</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"mysql","packageName":"mysql-connector-java","packageVersion":"5.1.2","packageFilePaths":["/bin/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"mysql:mysql-connector-java:5.1.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"5.1.41"},{"packageType":"Java","groupId":"mysql","packageName":"mysql-connector-java","packageVersion":"5.1.26","packageFilePaths":["/bin/target/JavaVulnerableLab/META-INF/maven/org.cysecurity/JavaVulnerableLab/pom.xml","/target/JavaVulnerableLab/META-INF/maven/org.cysecurity/JavaVulnerableLab/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"mysql:mysql-connector-java:5.1.26","isMinimumFixVersionAvailable":true,"minimumFixVersion":"5.1.41"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2017-3523","vulnerabilityDetails":"Vulnerability in the MySQL Connectors component of Oracle MySQL (subcomponent: Connector/J). Supported versions that are affected are 5.1.40 and earlier. Difficult to exploit vulnerability allows low privileged attacker with network access via multiple protocols to compromise MySQL Connectors. While the vulnerability is in MySQL Connectors, attacks may significantly impact additional products. Successful attacks of this vulnerability can result in takeover of MySQL Connectors. CVSS 3.0 Base Score 8.5 (Confidentiality, Integrity and Availability impacts). CVSS Vector: (CVSS:3.0/AV:N/AC:H/PR:L/UI:N/S:C/C:H/I:H/A:H).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-3523","cvss3Severity":"high","cvss3Score":"8.5","cvss3Metrics":{"A":"High","AC":"High","PR":"Low","S":"Changed","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2017-3523 (High) detected in mysql-connector-java-5.1.2.jar, mysql-connector-java-5.1.26.jar - ## CVE-2017-3523 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>mysql-connector-java-5.1.2.jar</b>, <b>mysql-connector-java-5.1.26.jar</b></p></summary>
<p>
<details><summary><b>mysql-connector-java-5.1.2.jar</b></p></summary>
<p>MySQL JDBC Type 4 driver</p>
<p>Library home page: <a href="http://dev.mysql.com/doc/connector-j/en/">http://dev.mysql.com/doc/connector-j/en/</a></p>
<p>Path to dependency file: JavaVulnerable/bin/pom.xml</p>
<p>Path to vulnerable library: canner/.m2/repository/mysql/mysql-connector-java/5.1.2/mysql-connector-java-5.1.2.jar,JavaVulnerable/target/JavaVulnerableLab/WEB-INF/lib/mysql-connector-java-5.1.2.jar</p>
<p>
Dependency Hierarchy:
- :x: **mysql-connector-java-5.1.2.jar** (Vulnerable Library)
</details>
<details><summary><b>mysql-connector-java-5.1.26.jar</b></p></summary>
<p>MySQL JDBC Type 4 driver</p>
<p>Library home page: <a href="http://dev.mysql.com/doc/connector-j/en/">http://dev.mysql.com/doc/connector-j/en/</a></p>
<p>Path to dependency file: JavaVulnerable/bin/target/JavaVulnerableLab/META-INF/maven/org.cysecurity/JavaVulnerableLab/pom.xml</p>
<p>Path to vulnerable library: canner/.m2/repository/mysql/mysql-connector-java/5.1.26/mysql-connector-java-5.1.26.jar,JavaVulnerable/target/JavaVulnerableLab/META-INF/maven/org.cysecurity/JavaVulnerableLab/target/JavaVulnerableLab/WEB-INF/lib/mysql-connector-java-5.1.26.jar,canner/.m2/repository/mysql/mysql-connector-java/5.1.26/mysql-connector-java-5.1.26.jar,JavaVulnerable/bin/target/JavaVulnerableLab/WEB-INF/lib/mysql-connector-java-5.1.26.jar,JavaVulnerable/bin/target/JavaVulnerableLab/WEB-INF/lib/mysql-connector-java-5.1.26.jar,JavaVulnerable/target/JavaVulnerableLab/WEB-INF/lib/mysql-connector-java-5.1.26.jar,JavaVulnerable/bin/target/JavaVulnerableLab/META-INF/maven/org.cysecurity/JavaVulnerableLab/target/JavaVulnerableLab/WEB-INF/lib/mysql-connector-java-5.1.26.jar</p>
<p>
Dependency Hierarchy:
- :x: **mysql-connector-java-5.1.26.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/wrbejar/JavaVulnerable/commit/9bd46b7721f26a6c8d0a3121c82cb6377d28790a">9bd46b7721f26a6c8d0a3121c82cb6377d28790a</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Vulnerability in the MySQL Connectors component of Oracle MySQL (subcomponent: Connector/J). Supported versions that are affected are 5.1.40 and earlier. Difficult to exploit vulnerability allows low privileged attacker with network access via multiple protocols to compromise MySQL Connectors. While the vulnerability is in MySQL Connectors, attacks may significantly impact additional products. Successful attacks of this vulnerability can result in takeover of MySQL Connectors. CVSS 3.0 Base Score 8.5 (Confidentiality, Integrity and Availability impacts). CVSS Vector: (CVSS:3.0/AV:N/AC:H/PR:L/UI:N/S:C/C:H/I:H/A:H).
<p>Publish Date: 2017-04-24
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-3523>CVE-2017-3523</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.oracle.com/technetwork/security-advisory/cpuapr2017-3236618.html">https://www.oracle.com/technetwork/security-advisory/cpuapr2017-3236618.html</a></p>
<p>Release Date: 2017-04-24</p>
<p>Fix Resolution: 5.1.41</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"mysql","packageName":"mysql-connector-java","packageVersion":"5.1.2","packageFilePaths":["/bin/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"mysql:mysql-connector-java:5.1.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"5.1.41"},{"packageType":"Java","groupId":"mysql","packageName":"mysql-connector-java","packageVersion":"5.1.26","packageFilePaths":["/bin/target/JavaVulnerableLab/META-INF/maven/org.cysecurity/JavaVulnerableLab/pom.xml","/target/JavaVulnerableLab/META-INF/maven/org.cysecurity/JavaVulnerableLab/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"mysql:mysql-connector-java:5.1.26","isMinimumFixVersionAvailable":true,"minimumFixVersion":"5.1.41"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2017-3523","vulnerabilityDetails":"Vulnerability in the MySQL Connectors component of Oracle MySQL (subcomponent: Connector/J). Supported versions that are affected are 5.1.40 and earlier. Difficult to exploit vulnerability allows low privileged attacker with network access via multiple protocols to compromise MySQL Connectors. While the vulnerability is in MySQL Connectors, attacks may significantly impact additional products. Successful attacks of this vulnerability can result in takeover of MySQL Connectors. CVSS 3.0 Base Score 8.5 (Confidentiality, Integrity and Availability impacts). CVSS Vector: (CVSS:3.0/AV:N/AC:H/PR:L/UI:N/S:C/C:H/I:H/A:H).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-3523","cvss3Severity":"high","cvss3Score":"8.5","cvss3Metrics":{"A":"High","AC":"High","PR":"Low","S":"Changed","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve high detected in mysql connector java jar mysql connector java jar cve high severity vulnerability vulnerable libraries mysql connector java jar mysql connector java jar mysql connector java jar mysql jdbc type driver library home page a href path to dependency file javavulnerable bin pom xml path to vulnerable library canner repository mysql mysql connector java mysql connector java jar javavulnerable target javavulnerablelab web inf lib mysql connector java jar dependency hierarchy x mysql connector java jar vulnerable library mysql connector java jar mysql jdbc type driver library home page a href path to dependency file javavulnerable bin target javavulnerablelab meta inf maven org cysecurity javavulnerablelab pom xml path to vulnerable library canner repository mysql mysql connector java mysql connector java jar javavulnerable target javavulnerablelab meta inf maven org cysecurity javavulnerablelab target javavulnerablelab web inf lib mysql connector java jar canner repository mysql mysql connector java mysql connector java jar javavulnerable bin target javavulnerablelab web inf lib mysql connector java jar javavulnerable bin target javavulnerablelab web inf lib mysql connector java jar javavulnerable target javavulnerablelab web inf lib mysql connector java jar javavulnerable bin target javavulnerablelab meta inf maven org cysecurity javavulnerablelab target javavulnerablelab web inf lib mysql connector java jar dependency hierarchy x mysql connector java jar vulnerable library found in head commit a href found in base branch master vulnerability details vulnerability in the mysql connectors component of oracle mysql subcomponent connector j supported versions that are affected are and earlier difficult to exploit vulnerability allows low privileged attacker with network access via multiple protocols to compromise mysql connectors while the vulnerability is in mysql connectors attacks may significantly impact additional products successful attacks of this vulnerability can result in takeover of mysql connectors cvss base score confidentiality integrity and availability impacts cvss vector cvss av n ac h pr l ui n s c c h i h a h publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required low user interaction none scope changed impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree mysql mysql connector java isminimumfixversionavailable true minimumfixversion packagetype java groupid mysql packagename mysql connector java packageversion packagefilepaths istransitivedependency false dependencytree mysql mysql connector java isminimumfixversionavailable true minimumfixversion basebranches vulnerabilityidentifier cve vulnerabilitydetails vulnerability in the mysql connectors component of oracle mysql subcomponent connector j supported versions that are affected are and earlier difficult to exploit vulnerability allows low privileged attacker with network access via multiple protocols to compromise mysql connectors while the vulnerability is in mysql connectors attacks may significantly impact additional products successful attacks of this vulnerability can result in takeover of mysql connectors cvss base score confidentiality integrity and availability impacts cvss vector cvss av n ac h pr l ui n s c c h i h a h vulnerabilityurl
| 0
|
17,703
| 23,580,630,459
|
IssuesEvent
|
2022-08-23 07:21:18
|
streamnative/flink
|
https://api.github.com/repos/streamnative/flink
|
closed
|
[SQL Connector] add tests for `properties` metadata fields
|
compute/data-processing
|
The `properties` metadata fields are not fully tested; we need to test this feature and change the documentation to let users know this feature is now supported ~
|
1.0
|
[SQL Connector] add tests for `properties` metadata fields - The `properties` metadata fields are not fully tested; we need to test this feature and change the documentation to let users know this feature is now supported ~
|
process
|
add tests for properties metadata fields properties metadata fields is not fully tested we need to test this feature and change the documentation to let users know this feature is now supported
| 1
|
17,293
| 23,104,783,523
|
IssuesEvent
|
2022-07-27 07:51:00
|
bjorkgard/public-secretary
|
https://api.github.com/repos/bjorkgard/public-secretary
|
closed
|
Login problem
|
bug :bug: in process waiting for response
|
### Describe the bug
You will probably have to delete the account I created on dev.jwapp
I have ended up in a strange state since I never received the verification email and cannot get the verification email resent either. I tried resetting the password, but since the account was never verified/activated, that does not work
### How the bug occurs
Cannot activate the account
### System information
```Shell
Mac OS
```
### Other information
_No response_
### Confirmations
- [X] Follow our [code of conduct](https://github.com/antfu/.github/blob/main/CODE_OF_CONDUCT.md).
- [X] Check that there is not already an issue reporting the same bug, to avoid creating a duplicate.
- [X] Check that this is a concrete bug. For questions and answers, open a GitHub discussion instead.
- [X] The description is a complete description of how the bug occurs.
|
1.0
|
Login problem - ### Describe the bug
You will probably have to delete the account I created on dev.jwapp
I have ended up in a strange state since I never received the verification email and cannot get the verification email resent either. I tried resetting the password, but since the account was never verified/activated, that does not work
### How the bug occurs
Cannot activate the account
### System information
```Shell
Mac OS
```
### Other information
_No response_
### Confirmations
- [X] Follow our [code of conduct](https://github.com/antfu/.github/blob/main/CODE_OF_CONDUCT.md).
- [X] Check that there is not already an issue reporting the same bug, to avoid creating a duplicate.
- [X] Check that this is a concrete bug. For questions and answers, open a GitHub discussion instead.
- [X] The description is a complete description of how the bug occurs.
|
process
|
inloggningsproblem beskriv felet du får nog radera det kontot jag skapade på dev jwapp har hamnat i ett skumt läge iom att jag inte fått verifieringsmailet och kan inte heller komma åt att skicka verifieringsmailet igen försökt återställa lösenordet men eftersom kontot aldrig blivit verifierat aktiverat funkar inte det hur uppstår felet kan inte aktivera kontot systeminformation shell mac os övrig information no response bekräftelser följ vår kontrollera att det inte redan finns ett problem som rapporterar samma bugg för att undvika att skapa en dubblett kontrollera att detta är en konkret bugg för frågor och svar öppna en github diskussion istället beskrivningen är en komplett beskrivning hur felet uppstår
| 1
|
11,414
| 14,242,267,607
|
IssuesEvent
|
2020-11-19 01:22:08
|
knative/serving
|
https://api.github.com/repos/knative/serving
|
closed
|
Load test activator to find limits
|
area/autoscale area/test-and-release kind/feature kind/performance kind/process lifecycle/stale
|
## In what area(s)?
/area autoscale
/area test-and-release
/kind process
@yanweiguo did some great work last year to see where the activator breaks, but since then we have mostly rewritten everything there (new backend monitoring code, load balancing, optimization for metric computations and distribution, etc).
So we're doing *more* but we also *optimized a lot*.
For cluster resource forecasting it would be quite interesting to do this again and see how much traffic does activator sustain. Especially this is interesting for modes where all the traffic is going through the activator (tbc=-1).
The benchmarks we run establish the baseline, but they're not stressing activators far enough.
It would be interesting to do this in two modes:
- with precreated backends and scaled activator
- from 1 activator instance and 0 backends.
One of the suspicions in the air is that endpoint informer churn might be part of what's limiting the activator performance (thus mode 1 would show significantly better results than second).
We also have profiling now, so we can attach and see memory and CPU profiles in addition to just looking at the raw numbers.
/cc @mattmoor @markusthoemmes @julz
|
1.0
|
Load test activator to find limits -
## In what area(s)?
/area autoscale
/area test-and-release
/kind process
@yanweiguo did some great work last year to see where the activator breaks, but since then we have mostly rewritten everything there (new backend monitoring code, load balancing, optimization for metric computations and distribution, etc).
So we're doing *more* but we also *optimized a lot*.
For cluster resource forecasting it would be quite interesting to do this again and see how much traffic does activator sustain. Especially this is interesting for modes where all the traffic is going through the activator (tbc=-1).
The benchmarks we run establish the baseline, but they're not stressing activators far enough.
It would be interesting to do this in two modes:
- with precreated backends and scaled activator
- from 1 activator instance and 0 backends.
One of the suspicions in the air is that endpoint informer churn might be part of what's limiting the activator performance (thus mode 1 would show significantly better results than second).
We also have profiling now, so we can attach and see memory and CPU profiles in addition to just looking at the raw numbers.
/cc @mattmoor @markusthoemmes @julz
|
process
|
load test activator to find limits in what area s area autoscale area test and release kind process yanweiguo did some great work last year to see where does the activator break but since then we have mostly rewritten everything there new backend monitoring code load balancing optimization for metric computations and distribution etc so we re doing more but we also optimized a lot for cluster resource forecasting it would be quite interesting to do this again and see how much traffic does activator sustain especially this is interesting for modes where all the traffic is going through the activator tbc the benchmarks we run establish the baseline but they re not stressing activators far enough it would be interesting to do this in two modes with precreated backends and scaled activator from activator instance and backends one of the suspicions in the air is that endpoint informer churn might be part of what s limiting the activator performance thus mode would show significantly better results than second we also have profiling now so we can attach and see memory and cpu profiles in addition to just looking at the raw numbers cc mattmoor markusthoemmes julz
| 1
|
81,450
| 23,466,480,080
|
IssuesEvent
|
2022-08-16 17:15:54
|
arfc/moltres
|
https://api.github.com/repos/arfc/moltres
|
closed
|
Add additional installation instructions
|
Comp:Build Difficulty:1-Beginner Priority:2-Normal Status:1-New Type:Docs
|
The current installation instructions are inadequate for new users who are less familiar with the relationship between Moltres and MOOSE.
This issue can be closed when the installation instructions state:
- [x] there is no need to compile MOOSE separately;
- [x] the `moose-tools` and `moose-libmesh` versions that are compatible with the MOOSE submodule in the Moltres repository.
- [x] the link to our moltres-users Google Group
|
1.0
|
Add additional installation instructions - The current installation instructions are inadequate for new users who are less familiar with the relationship between Moltres and MOOSE.
This issue can be closed when the installation instructions state:
- [x] there is no need to compile MOOSE separately;
- [x] the `moose-tools` and `moose-libmesh` versions that are compatible with the MOOSE submodule in the Moltres repository.
- [x] the link to our moltres-users Google Group
|
non_process
|
add additional installation instructions the current installation instructions are inadequate for new users who are less familiar with the relationship between moltres and moose this issue can be closed when the installation instructions state there is no need to compile moose separately the moose tools and moose libmesh versions that are compatible with the moose submodule in the moltres repository the link to our moltres users google group
| 0
|
14,379
| 17,401,084,540
|
IssuesEvent
|
2021-08-02 19:47:24
|
AcademySoftwareFoundation/OpenCue
|
https://api.github.com/repos/AcademySoftwareFoundation/OpenCue
|
closed
|
Resolve reliability/security SonarCloud warnings
|
process
|
**Describe the process**
We should resolve any issues reported by SonarCloud in the "Reliability" and "Security" categories.
Python components currently show no such issues: https://sonarcloud.io/dashboard?id=AcademySoftwareFoundation_OpenCue
Cuebot shows a bunch, some of which do not affect us and can be ignored: https://sonarcloud.io/dashboard?id=AcademySoftwareFoundation_OpenCue_Cuebot
|
1.0
|
Resolve reliability/security SonarCloud warnings - **Describe the process**
We should resolve any issues reported by SonarCloud in the "Reliability" and "Security" categories.
Python components currently show no such issues: https://sonarcloud.io/dashboard?id=AcademySoftwareFoundation_OpenCue
Cuebot shows a bunch, some of which do not affect us and can be ignored: https://sonarcloud.io/dashboard?id=AcademySoftwareFoundation_OpenCue_Cuebot
|
process
|
resolve reliability security sonarcloud warnings describe the process we should resolve any issues reported by sonarcloud in the reliability and security categories python components currently show no such issues cuebot shows a bunch some of which do not affect us and can be ignored
| 1
|
3,533
| 6,572,235,728
|
IssuesEvent
|
2017-09-11 00:25:23
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
closed
|
test: investigate flaky test-child-process-send-returns-boolean
|
child_process CI / flaky test windows
|
<!--
Thank you for reporting an issue.
This issue tracker is for bugs and issues found within Node.js core.
If you require more general support please file an issue on our help
repo. https://github.com/nodejs/help
Please fill in as much of the template below as you're able.
Version: output of `node -v`
Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows)
Subsystem: if known, please specify affected core module name
If possible, please provide code that demonstrates the problem, keeping it as
simple and free of external dependencies as you are able.
-->
* **Version**: https://github.com/nodejs/node/commit/b24e269a482812193fb3cd8b6ced4477f8e5e1c5
* **Platform**: Windows
* **Subsystem**: test,child_process
<!-- Enter your issue details below this comment. -->
https://ci.nodejs.org/job/node-test-binary-windows/10905/RUN_SUBSET=0,VS_VERSION=vs2015-x86,label=win2008r2
```js
not ok 46 parallel/test-child-process-send-returns-boolean
---
duration_ms: 0.501
severity: fail
stack: |-
events.js:182
throw er; // Unhandled 'error' event
^
Error: write EMFILE
at _errnoException (util.js:1027:13)
at ChildProcess.target._send (internal/child_process.js:721:20)
at ChildProcess.<anonymous> (internal/child_process.js:550:16)
at emitTwo (events.js:125:13)
at ChildProcess.emit (events.js:213:7)
at emit (internal/child_process.js:791:12)
at _combinedTickCallback (internal/process/next_tick.js:141:11)
at process._tickCallback (internal/process/next_tick.js:180:9)
...
```
|
1.0
|
test: investigate flaky test-child-process-send-returns-boolean - <!--
Thank you for reporting an issue.
This issue tracker is for bugs and issues found within Node.js core.
If you require more general support please file an issue on our help
repo. https://github.com/nodejs/help
Please fill in as much of the template below as you're able.
Version: output of `node -v`
Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows)
Subsystem: if known, please specify affected core module name
If possible, please provide code that demonstrates the problem, keeping it as
simple and free of external dependencies as you are able.
-->
* **Version**: https://github.com/nodejs/node/commit/b24e269a482812193fb3cd8b6ced4477f8e5e1c5
* **Platform**: Windows
* **Subsystem**: test,child_process
<!-- Enter your issue details below this comment. -->
https://ci.nodejs.org/job/node-test-binary-windows/10905/RUN_SUBSET=0,VS_VERSION=vs2015-x86,label=win2008r2
```js
not ok 46 parallel/test-child-process-send-returns-boolean
---
duration_ms: 0.501
severity: fail
stack: |-
events.js:182
throw er; // Unhandled 'error' event
^
Error: write EMFILE
at _errnoException (util.js:1027:13)
at ChildProcess.target._send (internal/child_process.js:721:20)
at ChildProcess.<anonymous> (internal/child_process.js:550:16)
at emitTwo (events.js:125:13)
at ChildProcess.emit (events.js:213:7)
at emit (internal/child_process.js:791:12)
at _combinedTickCallback (internal/process/next_tick.js:141:11)
at process._tickCallback (internal/process/next_tick.js:180:9)
...
```
|
process
|
test investigate flaky test child process send returns boolean thank you for reporting an issue this issue tracker is for bugs and issues found within node js core if you require more general support please file an issue on our help repo please fill in as much of the template below as you re able version output of node v platform output of uname a unix or version and or bit windows subsystem if known please specify affected core module name if possible please provide code that demonstrates the problem keeping it as simple and free of external dependencies as you are able version platform windows subsystem test child process js not ok parallel test child process send returns boolean duration ms severity fail stack events js throw er unhandled error event error write emfile at errnoexception util js at childprocess target send internal child process js at childprocess internal child process js at emittwo events js at childprocess emit events js at emit internal child process js at combinedtickcallback internal process next tick js at process tickcallback internal process next tick js
| 1
|
5,055
| 3,141,962,248
|
IssuesEvent
|
2015-09-13 03:17:12
|
EmergentOrganization/cell-rpg
|
https://api.github.com/repos/EmergentOrganization/cell-rpg
|
opened
|
loading map w/ two of same entity crashes game
|
bug cat: code
|
bridge to a custom map with two entities of the same type ("twoLargeTest") causes the game to freeze up.
Unsure if this is a problem with the mapLoader or if the scene cannot handle two building entities of the same class. The scene _can_ have two bullets at the same time, but bullets don't use the physics collider.
|
1.0
|
loading map w/ two of same entity crashes game - bridge to a custom map with two entities of the same type ("twoLargeTest") causes the game to freeze up.
Unsure if this is a problem with the mapLoader or if the scene cannot handle two building entities of the same class. The scene _can_ have two bullets at the same time, but bullets don't use the physics collider.
|
non_process
|
loading map w two of same entity crashes game bridge to a custom map with two entities of the same type twolargetest causes the game to freeze up unsure if this is a problem with the maploader or if the scene cannot handle two building entities of the same class the scene can have two bullets at the same time but bullets don t use the physics collider
| 0
|
17,685
| 23,526,698,117
|
IssuesEvent
|
2022-08-19 11:34:36
|
benthosdev/benthos
|
https://api.github.com/repos/benthosdev/benthos
|
closed
|
redis cache missing args_mapping causes panic
|
bug processors
|
**Problem**
Missing `args_mapping` from redis processor causes panic during runtime.
**Example**
```yaml
input:
stdin: {}
pipeline:
processors:
- redis:
url: redis://localhost:6379
command: foo
output:
stdout: {}
```
**Result**
```shell
❯ ./target/bin/benthos -c config.yaml
INFO Running main config from specified file @service=benthos path=config.yaml
INFO Launching a benthos instance, use CTRL+C to close @service=benthos
INFO Listening for HTTP requests at: http://0.0.0.0:4195 @service=benthos
{"foo":"foobar"}
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x1a556e7]
goroutine 40 [running]:
github.com/benthosdev/benthos/v4/public/service.MessageBatch.BloblangQuery({0xc001188cd0, 0x1, 0xc001586ff0?}, 0x110d4c6?, 0x11f2a10?)
/Users/___/benthos/public/service/message.go:276 +0x27
github.com/benthosdev/benthos/v4/internal/impl/redis.(*redisProc).execRaw(0xc0015865a0, {0x454bd08, 0xc000f1e000}, 0x8?, {0xc001188cd0, 0x1, 0x1}, 0xc001188ce8)
/Users/___/benthos/internal/impl/redis/processor.go:308 +0x78
github.com/benthosdev/benthos/v4/internal/impl/redis.(*redisProc).ProcessBatch(0xc0015865a0, {0x454bd08, 0xc000f1e000}, {0xc001188cd0?, 0x1, 0x1})
/Users/___/benthos/internal/impl/redis/processor.go:362 +0x4d0
github.com/benthosdev/benthos/v4/public/service.(*airGapBatchProcessor).ProcessBatch(0xc000d3a1e0, {0x454bd08, 0xc000f1e000}, {0x5?, 0xc000014078?, 0x1?}, {0xc000014078, 0x1, 0x1})
/Users/___/benthos/public/service/processor.go:109 +0xee
github.com/benthosdev/benthos/v4/internal/component/processor.(*v2BatchedToV1Processor).ProcessBatch(0xc0001a1050, {0x454bd08, 0xc000f1e000}, {0xc000014078?, 0x1, 0x1})
/Users/___/benthos/internal/component/processor/processor_v2.go:146 +0x157
github.com/benthosdev/benthos/v4/internal/component/processor.ExecuteAll({0x454bd08, 0xc000f1e000}, {0xc000bc7140, 0x1, 0x0?}, {0xc000c84ec0?, 0x1, 0x0?})
/Users/___/benthos/internal/component/processor/execute.go:22 +0x1b6
github.com/benthosdev/benthos/v4/internal/pipeline.(*Processor).loop(0xc000b2cf00)
/Users/___/benthos/internal/pipeline/processor.go:69 +0x285
created by github.com/benthosdev/benthos/v4/internal/pipeline.(*Processor).Consume
/Users/___/benthos/internal/pipeline/processor.go:139 +0x79
```
**Version**
`v4.4.1-73-gf8b1fd9d`
**Possible Fix**
Remove Default value
```diff
diff --git a/internal/impl/redis/processor.go b/internal/impl/redis/processor.go
index 4443e8f4..bc353280 100644
--- a/internal/impl/redis/processor.go
+++ b/internal/impl/redis/processor.go
@@ -36,8 +36,7 @@ performed for each message and the message contents are replaced with the result
Description("A [Bloblang mapping](/docs/guides/bloblang/about) which should evaluate to an array of values matching in size to the number of arguments required for the specified Redis comm>
Version("4.3.0").
Example("root = [ this.key ]").
- Example(`root = [ meta("kafka_key"), this.count ]`).
- Default(``)).
+ Example(`root = [ meta("kafka_key"), this.count ]`)).
Field(service.NewStringAnnotatedEnumField("operator", map[string]string{
"keys": `Returns an array of strings containing all the keys that match the pattern specified by the ` + "`key` field" + `.`,
"scard": `Returns the cardinality of a set, or ` + "`0`" + ` if the key does not exist.`,
```
|
1.0
|
redis cache missing args_mapping causes panic - **Problem**
Missing `args_mapping` from redis processor causes panic during runtime.
**Example**
```yaml
input:
stdin: {}
pipeline:
processors:
- redis:
url: redis://localhost:6379
command: foo
output:
stdout: {}
```
**Result**
```shell
❯ ./target/bin/benthos -c config.yaml
INFO Running main config from specified file @service=benthos path=config.yaml
INFO Launching a benthos instance, use CTRL+C to close @service=benthos
INFO Listening for HTTP requests at: http://0.0.0.0:4195 @service=benthos
{"foo":"foobar"}
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x1a556e7]
goroutine 40 [running]:
github.com/benthosdev/benthos/v4/public/service.MessageBatch.BloblangQuery({0xc001188cd0, 0x1, 0xc001586ff0?}, 0x110d4c6?, 0x11f2a10?)
/Users/___/benthos/public/service/message.go:276 +0x27
github.com/benthosdev/benthos/v4/internal/impl/redis.(*redisProc).execRaw(0xc0015865a0, {0x454bd08, 0xc000f1e000}, 0x8?, {0xc001188cd0, 0x1, 0x1}, 0xc001188ce8)
/Users/___/benthos/internal/impl/redis/processor.go:308 +0x78
github.com/benthosdev/benthos/v4/internal/impl/redis.(*redisProc).ProcessBatch(0xc0015865a0, {0x454bd08, 0xc000f1e000}, {0xc001188cd0?, 0x1, 0x1})
/Users/___/benthos/internal/impl/redis/processor.go:362 +0x4d0
github.com/benthosdev/benthos/v4/public/service.(*airGapBatchProcessor).ProcessBatch(0xc000d3a1e0, {0x454bd08, 0xc000f1e000}, {0x5?, 0xc000014078?, 0x1?}, {0xc000014078, 0x1, 0x1})
/Users/___/benthos/public/service/processor.go:109 +0xee
github.com/benthosdev/benthos/v4/internal/component/processor.(*v2BatchedToV1Processor).ProcessBatch(0xc0001a1050, {0x454bd08, 0xc000f1e000}, {0xc000014078?, 0x1, 0x1})
/Users/___/benthos/internal/component/processor/processor_v2.go:146 +0x157
github.com/benthosdev/benthos/v4/internal/component/processor.ExecuteAll({0x454bd08, 0xc000f1e000}, {0xc000bc7140, 0x1, 0x0?}, {0xc000c84ec0?, 0x1, 0x0?})
/Users/___/benthos/internal/component/processor/execute.go:22 +0x1b6
github.com/benthosdev/benthos/v4/internal/pipeline.(*Processor).loop(0xc000b2cf00)
/Users/___/benthos/internal/pipeline/processor.go:69 +0x285
created by github.com/benthosdev/benthos/v4/internal/pipeline.(*Processor).Consume
/Users/___/benthos/internal/pipeline/processor.go:139 +0x79
```
**Version**
`v4.4.1-73-gf8b1fd9d`
**Possible Fix**
Remove Default value
```diff
diff --git a/internal/impl/redis/processor.go b/internal/impl/redis/processor.go
index 4443e8f4..bc353280 100644
--- a/internal/impl/redis/processor.go
+++ b/internal/impl/redis/processor.go
@@ -36,8 +36,7 @@ performed for each message and the message contents are replaced with the result
Description("A [Bloblang mapping](/docs/guides/bloblang/about) which should evaluate to an array of values matching in size to the number of arguments required for the specified Redis comm>
Version("4.3.0").
Example("root = [ this.key ]").
- Example(`root = [ meta("kafka_key"), this.count ]`).
- Default(``)).
+ Example(`root = [ meta("kafka_key"), this.count ]`)).
Field(service.NewStringAnnotatedEnumField("operator", map[string]string{
"keys": `Returns an array of strings containing all the keys that match the pattern specified by the ` + "`key` field" + `.`,
"scard": `Returns the cardinality of a set, or ` + "`0`" + ` if the key does not exist.`,
```
|
process
|
redis cache missing args mapping causes panic problem missing args mapping from redis processor causes panic during runtime example yaml input stdin pipeline processors redis url redis localhost command foo output stdout result shell ❯ target bin benthos c config yaml info running main config from specified file service benthos path config yaml info launching a benthos instance use ctrl c to close service benthos info listening for http requests at service benthos foo foobar panic runtime error invalid memory address or nil pointer dereference goroutine github com benthosdev benthos public service messagebatch bloblangquery users benthos public service message go github com benthosdev benthos internal impl redis redisproc execraw users benthos internal impl redis processor go github com benthosdev benthos internal impl redis redisproc processbatch users benthos internal impl redis processor go github com benthosdev benthos public service airgapbatchprocessor processbatch users benthos public service processor go github com benthosdev benthos internal component processor processbatch users benthos internal component processor processor go github com benthosdev benthos internal component processor executeall users benthos internal component processor execute go github com benthosdev benthos internal pipeline processor loop users benthos internal pipeline processor go created by github com benthosdev benthos internal pipeline processor consume users benthos internal pipeline processor go version possible fix remove default value diff diff git a internal impl redis processor go b internal impl redis processor go index a internal impl redis processor go b internal impl redis processor go performed for each message and the message contents are replaced with the result description a docs guides bloblang about which should evaluate to an array of values matching in size to the number of arguments required for the specified redis comm version example root example root default example root field service newstringannotatedenumfield operator map string keys returns an array of strings containing all the keys that match the pattern specified by the key field scard returns the cardinality of a set or if the key does not exist
| 1
|
239,632
| 18,279,147,948
|
IssuesEvent
|
2021-10-04 23:21:11
|
dtcenter/METplus
|
https://api.github.com/repos/dtcenter/METplus
|
closed
|
Add instructions to update old METplus configuration files that reference user-defined wrapped MET config files
|
type: enhancement component: documentation priority: high requestor: NOAA/EMC alert: NEED ACCOUNT KEY requestor: METplus Team METplus: Configuration
|
Related to discussion #996
## Describe the Enhancement ##
Add useful instructions to assist users who need to update their configurations to use the wrapped MET config files in parm/met_config instead of their own MET config files. This involves determining which variables in the user's wrapped MET config file differ from the wrapped version in parm/met_config (considering the variables in the parm/met_config version that mistakenly differ from the default MET values before v4.0.0).
Respond to GitHub Discussion topic listed above with info and consider adding this information to User's Guide
### Time Estimate ###
~3 days
### Sub-Issues ###
Consider breaking the enhancement down into sub-issues.
- [ ] *Add logic for writing metplus_final.conf to move variables that are specific to that run (i.e. CLOCK_TIME) to a runtime section so that it is clear which variables are relevant if rerunning*
### Relevant Deadlines ###
4.1.0-beta3 (if possible)
### Funding Source ###
*Define the source of funding and account keys here or state NONE.*
## Define the Metadata ##
### Assignee ###
- [X] Select **engineer(s)** or **no engineer** required
- [X] Select **scientist(s)** or **no scientist** required
### Labels ###
- [X] Select **component(s)**
- [X] Select **priority**
- [X] Select **requestor(s)**
### Projects and Milestone ###
- [X] Select **Repository** and/or **Organization** level **Project(s)** or add **alert: NEED PROJECT ASSIGNMENT** label
- [X] Select **Milestone** as the next official version or **Future Versions**
## Define Related Issue(s) ##
Consider the impact to the other METplus components.
- [X] [METplus](https://github.com/dtcenter/METplus/issues/new/choose), [MET](https://github.com/dtcenter/MET/issues/new/choose), [METdatadb](https://github.com/dtcenter/METdatadb/issues/new/choose), [METviewer](https://github.com/dtcenter/METviewer/issues/new/choose), [METexpress](https://github.com/dtcenter/METexpress/issues/new/choose), [METcalcpy](https://github.com/dtcenter/METcalcpy/issues/new/choose), [METplotpy](https://github.com/dtcenter/METplotpy/issues/new/choose)
## Enhancement Checklist ##
See the [METplus Workflow](https://metplus.readthedocs.io/en/latest/Contributors_Guide/github_workflow.html) for details.
- [X] Complete the issue definition above, including the **Time Estimate** and **Funding Source**.
- [ ] Fork this repository or create a branch of **develop**.
Branch name: `feature_<Issue Number>_<Description>`
- [ ] Complete the development and test your changes.
- [ ] Add/update log messages for easier debugging.
- [ ] Add/update unit tests.
- [ ] Add/update documentation.
- [ ] Push local changes to GitHub.
- [ ] Submit a pull request to merge into **develop**.
Pull request: `feature <Issue Number> <Description>`
- [ ] Define the pull request metadata, as permissions allow.
Select: **Reviewer(s)** and **Linked issues**
Select: **Repository** level development cycle **Project** for the next official release
Select: **Milestone** as the next official version
- [ ] Iterate until the reviewer(s) accept and merge your changes.
- [ ] Delete your fork or branch.
- [ ] Close this issue.
|
1.0
|
Add instructions to update old METplus configuration files that reference user-defined wrapped MET config files - Related to discussion #996
## Describe the Enhancement ##
Add useful instructions to assist users who need to update their configurations to use the wrapped MET config files in parm/met_config instead of their own MET config files. This involves determining which variables in the user's wrapped MET config file differ from the wrapped version in parm/met_config (considering the variables in the parm/met_config version that mistakenly differ from the default MET values before v4.0.0).
Respond to GitHub Discussion topic listed above with info and consider adding this information to User's Guide
### Time Estimate ###
~3 days
### Sub-Issues ###
Consider breaking the enhancement down into sub-issues.
- [ ] *Add logic for writing metplus_final.conf to move variables that are specific to that run (i.e. CLOCK_TIME) to a runtime section so that it is clear which variables are relevant if rerunning*
### Relevant Deadlines ###
4.1.0-beta3 (if possible)
### Funding Source ###
*Define the source of funding and account keys here or state NONE.*
## Define the Metadata ##
### Assignee ###
- [X] Select **engineer(s)** or **no engineer** required
- [X] Select **scientist(s)** or **no scientist** required
### Labels ###
- [X] Select **component(s)**
- [X] Select **priority**
- [X] Select **requestor(s)**
### Projects and Milestone ###
- [X] Select **Repository** and/or **Organization** level **Project(s)** or add **alert: NEED PROJECT ASSIGNMENT** label
- [X] Select **Milestone** as the next official version or **Future Versions**
## Define Related Issue(s) ##
Consider the impact to the other METplus components.
- [X] [METplus](https://github.com/dtcenter/METplus/issues/new/choose), [MET](https://github.com/dtcenter/MET/issues/new/choose), [METdatadb](https://github.com/dtcenter/METdatadb/issues/new/choose), [METviewer](https://github.com/dtcenter/METviewer/issues/new/choose), [METexpress](https://github.com/dtcenter/METexpress/issues/new/choose), [METcalcpy](https://github.com/dtcenter/METcalcpy/issues/new/choose), [METplotpy](https://github.com/dtcenter/METplotpy/issues/new/choose)
## Enhancement Checklist ##
See the [METplus Workflow](https://metplus.readthedocs.io/en/latest/Contributors_Guide/github_workflow.html) for details.
- [X] Complete the issue definition above, including the **Time Estimate** and **Funding Source**.
- [ ] Fork this repository or create a branch of **develop**.
Branch name: `feature_<Issue Number>_<Description>`
- [ ] Complete the development and test your changes.
- [ ] Add/update log messages for easier debugging.
- [ ] Add/update unit tests.
- [ ] Add/update documentation.
- [ ] Push local changes to GitHub.
- [ ] Submit a pull request to merge into **develop**.
Pull request: `feature <Issue Number> <Description>`
- [ ] Define the pull request metadata, as permissions allow.
Select: **Reviewer(s)** and **Linked issues**
Select: **Repository** level development cycle **Project** for the next official release
Select: **Milestone** as the next official version
- [ ] Iterate until the reviewer(s) accept and merge your changes.
- [ ] Delete your fork or branch.
- [ ] Close this issue.
|
non_process
|
add instructions to update old metplus configuration files that reference user defined wrapped met config files related to discussion describe the enhancement add useful instructions to assist users who need to update their configurations to use the wrapped met config files in parm met config instead of their own met config files this involves determining which variables in the user s wrapped met config file differ from the wrapped version in parm met config considering the variables in the parm met config version that mistakenly differ from the default met values before respond to github discussion topic listed above with info and consider adding this information to user s guide time estimate days sub issues consider breaking the enhancement down into sub issues add logic for writing metplus final conf to move variables that are specific to that run i e clock time to a runtime section so that it is clear which variables are relevant if rerunning relevant deadlines if possible funding source define the source of funding and account keys here or state none define the metadata assignee select engineer s or no engineer required select scientist s or no scientist required labels select component s select priority select requestor s projects and milestone select repository and or organization level project s or add alert need project assignment label select milestone as the next official version or future versions define related issue s consider the impact to the other metplus components enhancement checklist see the for details complete the issue definition above including the time estimate and funding source fork this repository or create a branch of develop branch name feature complete the development and test your changes add update log messages for easier debugging add update unit tests add update documentation push local changes to github submit a pull request to merge into develop pull request feature define the pull request metadata as permissions allow select reviewer s and linked issues select repository level development cycle project for the next official release select milestone as the next official version iterate until the reviewer s accept and merge your changes delete your fork or branch close this issue
| 0
|
17,027
| 22,393,848,554
|
IssuesEvent
|
2022-06-17 10:22:16
|
metallb/metallb
|
https://api.github.com/repos/metallb/metallb
|
closed
|
Build directly in dockerfiles
|
help wanted good first issue process ci
|
We currently build the images using inv and then we copy the binaries inside the container with a COPY instruction.
It'd be better to have the build process as part of the dockerfile itself. That will allow us to get rid of inv when publishing images and delegate everything to GH actions (which can be a follow up of https://github.com/metallb/metallb/pull/1374 )
|
1.0
|
Build directly in dockerfiles - We currently build the images using inv and then we copy the binaries inside the container with a COPY instruction.
It'd be better to have the build process as part of the dockerfile itself. That will allow us to get rid of inv when publishing images and delegate everything to GH actions (which can be a follow up of https://github.com/metallb/metallb/pull/1374 )
|
process
|
build directly in dockerfiles we currently build the images using inv and then we copy the binaries inside the container with a copy instruction it d be better to have the build process as part of the dockerfile itself that will allow us to get rid of inv when publishing images and delegate everything to gh actions which can be a follow up of
| 1
|
17,060
| 22,492,911,923
|
IssuesEvent
|
2022-06-23 04:09:14
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
closed
|
`bazel test` fail but `bazel run` pass
|
type: support / not a bug (process) team-Core
|
### Description of the bug:
Bazel test output:
```
root:[envoy]$ bazel test -c dbg //contrib/exe:example_configs_test --test_env=ENVOY_IP_TEST_VERSIONS=v4only --cache_test_results=no --test_output=all
INFO: Analyzed target //contrib/exe:example_configs_test (0 packages loaded, 0 targets configured).
INFO: Found 1 test target...
FAIL: //contrib/exe:example_configs_test (see /root/.cache/bazel/_bazel_root/2d35de14639eaad1ac7060a4dd7e3351/execroot/envoy/bazel-out/k8-dbg/testlogs/contrib/exe/example_configs_test/test.log)
INFO: From Testing //contrib/exe:example_configs_test:
==================== Test output for //contrib/exe:example_configs_test:
[==========] Running 1 test from 1 test suite.
[----------] Global test environment set-up.
[----------] 1 test from ExampleConfigsTest
[ RUN ] ExampleConfigsTest.All
mysql_envoy.yaml
postgres_envoy.yaml
[ ] chdir(cwd) = 0/root/.cache/bazel/_bazel_root/2d35de14639eaad1ac7060a4dd7e3351/sandbox/processwrapper-sandbox/349/execroot/envoy/bazel-out/k8-dbg/bin/contrib/exe/example_configs_test.runfiles/envoy
[ OK ] ExampleConfigsTest.All (548 ms)
[----------] 1 test from ExampleConfigsTest (548 ms total)
[----------] Global test environment tear-down
[==========] 1 test from 1 test suite ran. (548 ms total)
[ PASSED ] 1 test.
================================================================================
Target //contrib/exe:example_configs_test up-to-date:
bazel-bin/contrib/exe/example_configs_test
INFO: Elapsed time: 22.029s, Critical Path: 21.45s
INFO: 4 processes: 1 internal, 3 processwrapper-sandbox.
INFO: Build completed, 1 test FAILED, 4 total actions
//contrib/exe:example_configs_test FAILED in 1.3s
/root/.cache/bazel/_bazel_root/2d35de14639eaad1ac7060a4dd7e3351/execroot/envoy/bazel-out/k8-dbg/testlogs/contrib/exe/example_configs_test/test.log
INFO: Build completed, 1 test FAILED, 4 total actions
```
bazel run output:
```
root:[envoy]$ bazel run -c dbg //contrib/exe:example_configs_test --test_env=ENVOY_IP_TEST_VERSIONS=v4only --cache_test_results=no --test_output=all
INFO: Analyzed target //contrib/exe:example_configs_test (0 packages loaded, 0 targets configured).
INFO: Found 1 target...
Target //contrib/exe:example_configs_test up-to-date:
bazel-bin/contrib/exe/example_configs_test
INFO: Elapsed time: 0.393s, Critical Path: 0.02s
INFO: 1 process: 1 internal.
INFO: Build completed successfully, 1 total action
INFO: Build completed successfully, 1 total action
exec ${PAGER:-/usr/bin/less} "$0" || exit 1
Executing tests from //contrib/exe:example_configs_test
-----------------------------------------------------------------------------
[==========] Running 1 test from 1 test suite.
[----------] Global test environment set-up.
[----------] 1 test from ExampleConfigsTest
[ RUN ] ExampleConfigsTest.All
mysql_envoy.yaml
postgres_envoy.yaml
[ OK ] ExampleConfigsTest.All (551 ms)
[----------] 1 test from ExampleConfigsTest (551 ms total)
[----------] Global test environment tear-down
[==========] 1 test from 1 test suite ran. (551 ms total)
[ PASSED ] 1 test.
```
### What's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible.
```
$ git clone https://github.com/daixiang0/envoy.git
$ cd envoy
$ git checkout extend-cb
$ ./ci/run_envoy_docker.sh bash
# bazel test -c dbg //contrib/exe:example_configs_test --test_env=ENVOY_IP_TEST_VERSIONS=v4only --cache_test_results=no --test_output=all
```
### Which operating system are you running Bazel on?
ubuntu
### What is the output of `bazel info release`?
release 6.0.0-pre.20220421.3
### If `bazel info release` returns `development version` or `(@non-git)`, tell us how you built Bazel.
_No response_
### What's the output of `git remote get-url origin; git rev-parse master; git rev-parse HEAD` ?
_No response_
### Have you found anything relevant by searching the web?
_No response_
### Any other information, logs, or outputs that you want to share?
_No response_
|
1.0
|
`bazel test` fail but `bazel run` pass - ### Description of the bug:
Bazel test output:
```
root:[envoy]$ bazel test -c dbg //contrib/exe:example_configs_test --test_env=ENVOY_IP_TEST_VERSIONS=v4only --cache_test_results=no --test_output=all
INFO: Analyzed target //contrib/exe:example_configs_test (0 packages loaded, 0 targets configured).
INFO: Found 1 test target...
FAIL: //contrib/exe:example_configs_test (see /root/.cache/bazel/_bazel_root/2d35de14639eaad1ac7060a4dd7e3351/execroot/envoy/bazel-out/k8-dbg/testlogs/contrib/exe/example_configs_test/test.log)
INFO: From Testing //contrib/exe:example_configs_test:
==================== Test output for //contrib/exe:example_configs_test:
[==========] Running 1 test from 1 test suite.
[----------] Global test environment set-up.
[----------] 1 test from ExampleConfigsTest
[ RUN ] ExampleConfigsTest.All
mysql_envoy.yaml
postgres_envoy.yaml
[ ] chdir(cwd) = 0/root/.cache/bazel/_bazel_root/2d35de14639eaad1ac7060a4dd7e3351/sandbox/processwrapper-sandbox/349/execroot/envoy/bazel-out/k8-dbg/bin/contrib/exe/example_configs_test.runfiles/envoy
[ OK ] ExampleConfigsTest.All (548 ms)
[----------] 1 test from ExampleConfigsTest (548 ms total)
[----------] Global test environment tear-down
[==========] 1 test from 1 test suite ran. (548 ms total)
[ PASSED ] 1 test.
================================================================================
Target //contrib/exe:example_configs_test up-to-date:
bazel-bin/contrib/exe/example_configs_test
INFO: Elapsed time: 22.029s, Critical Path: 21.45s
INFO: 4 processes: 1 internal, 3 processwrapper-sandbox.
INFO: Build completed, 1 test FAILED, 4 total actions
//contrib/exe:example_configs_test FAILED in 1.3s
/root/.cache/bazel/_bazel_root/2d35de14639eaad1ac7060a4dd7e3351/execroot/envoy/bazel-out/k8-dbg/testlogs/contrib/exe/example_configs_test/test.log
INFO: Build completed, 1 test FAILED, 4 total actions
```
bazel run output:
```
root:[envoy]$ bazel run -c dbg //contrib/exe:example_configs_test --test_env=ENVOY_IP_TEST_VERSIONS=v4only --cache_test_results=no --test_output=all
INFO: Analyzed target //contrib/exe:example_configs_test (0 packages loaded, 0 targets configured).
INFO: Found 1 target...
Target //contrib/exe:example_configs_test up-to-date:
bazel-bin/contrib/exe/example_configs_test
INFO: Elapsed time: 0.393s, Critical Path: 0.02s
INFO: 1 process: 1 internal.
INFO: Build completed successfully, 1 total action
INFO: Build completed successfully, 1 total action
exec ${PAGER:-/usr/bin/less} "$0" || exit 1
Executing tests from //contrib/exe:example_configs_test
-----------------------------------------------------------------------------
[==========] Running 1 test from 1 test suite.
[----------] Global test environment set-up.
[----------] 1 test from ExampleConfigsTest
[ RUN ] ExampleConfigsTest.All
mysql_envoy.yaml
postgres_envoy.yaml
[ OK ] ExampleConfigsTest.All (551 ms)
[----------] 1 test from ExampleConfigsTest (551 ms total)
[----------] Global test environment tear-down
[==========] 1 test from 1 test suite ran. (551 ms total)
[ PASSED ] 1 test.
```
### What's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible.
```
$ git clone https://github.com/daixiang0/envoy.git
$ cd envoy
$ git checkout extend-cb
$ ./ci/run_envoy_docker.sh bash
# bazel test -c dbg //contrib/exe:example_configs_test --test_env=ENVOY_IP_TEST_VERSIONS=v4only --cache_test_results=no --test_output=all
```
### Which operating system are you running Bazel on?
ubuntu
### What is the output of `bazel info release`?
release 6.0.0-pre.20220421.3
### If `bazel info release` returns `development version` or `(@non-git)`, tell us how you built Bazel.
_No response_
### What's the output of `git remote get-url origin; git rev-parse master; git rev-parse HEAD` ?
_No response_
### Have you found anything relevant by searching the web?
_No response_
### Any other information, logs, or outputs that you want to share?
_No response_
|
process
|
bazel test fail but bazel run pass description of the bug bazel test output root bazel test c dbg contrib exe example configs test test env envoy ip test versions cache test results no test output all info analyzed target contrib exe example configs test packages loaded targets configured info found test target fail contrib exe example configs test see root cache bazel bazel root execroot envoy bazel out dbg testlogs contrib exe example configs test test log info from testing contrib exe example configs test test output for contrib exe example configs test running test from test suite global test environment set up test from exampleconfigstest exampleconfigstest all mysql envoy yaml postgres envoy yaml chdir cwd root cache bazel bazel root sandbox processwrapper sandbox execroot envoy bazel out dbg bin contrib exe example configs test runfiles envoy exampleconfigstest all ms test from exampleconfigstest ms total global test environment tear down test from test suite ran ms total test target contrib exe example configs test up to date bazel bin contrib exe example configs test info elapsed time critical path info processes internal processwrapper sandbox info build completed test failed total actions contrib exe example configs test failed in root cache bazel bazel root execroot envoy bazel out dbg testlogs contrib exe example configs test test log info build completed test failed total actions bazel run output root bazel run c dbg contrib exe example configs test test env envoy ip test versions cache test results no test output all info analyzed target contrib exe example configs test packages loaded targets configured info found target target contrib exe example configs test up to date bazel bin contrib exe example configs test info elapsed time critical path info process internal info build completed successfully total action info build completed successfully total action exec pager usr bin less exit executing tests from contrib exe example configs test running 
test from test suite global test environment set up test from exampleconfigstest exampleconfigstest all mysql envoy yaml postgres envoy yaml exampleconfigstest all ms test from exampleconfigstest ms total global test environment tear down test from test suite ran ms total test what s the simplest easiest way to reproduce this bug please provide a minimal example if possible git clone cd envoy git checkout extend cb ci run envoy docker sh bash bazel test c dbg contrib exe example configs test test env envoy ip test versions cache test results no test output all which operating system are you running bazel on ubuntu what is the output of bazel info release release pre if bazel info release returns development version or non git tell us how you built bazel no response what s the output of git remote get url origin git rev parse master git rev parse head no response have you found anything relevant by searching the web no response any other information logs or outputs that you want to share no response
| 1
|
5,314
| 8,129,252,205
|
IssuesEvent
|
2018-08-17 14:32:02
|
openvstorage/framework
|
https://api.github.com/repos/openvstorage/framework
|
closed
|
Pyrakoon not able to connect to cluster if one member is down
|
process_cantreproduce
|
_From @jeroenmaelbrancke on June 26, 2018 7:57_
# Problem
When one member of the Arakoon cluster is down the Pyrakoon is not able to find the master.
Error message of the pyrakoon client:
```
2018-06-25 23:52:39 64900 -0400 - SV4SRV0015 - 28624/139769160349440 - extensions/ovs_extensions.db.arakoon.pyrakoon.pyrakoon.compat - 2 - ERROR - No connection available to node at '172.26.16.7:26400' : Message exchange with node sDEjp3rMpJ2ACr4L failed
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 1160, in _send_message
connection.send(data)
File "/usr/lib/python2.7/dist-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 1325, in send
raise ArakoonNotConnected(self._address)
ArakoonNotConnected: No connection available to node at '172.26.16.7:26400'
2018-06-25 23:52:39 64900 -0400 - SV4SRV0015 - 28624/139769160349440 - extensions/ovs_extensions.db.arakoon.pyrakoon.pyrakoon.compat - 3 - ERROR - No connection available to node at '172.26.16.7:26400': Unable to query node "sDEjp3rMpJ2ACr4L" to look up master
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 1201, in determine_master
self.master_id = self._get_master_id_from_node(node)
File "/usr/lib/python2.7/dist-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 1229, in _get_master_id_from_node
connection = self._send_message(node_id, data)
File "/usr/lib/python2.7/dist-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 1160, in _send_message
connection.send(data)
File "/usr/lib/python2.7/dist-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 1325, in send
raise ArakoonNotConnected(self._address)
ArakoonNotConnected: No connection available to node at '172.26.16.7:26400'
Jun 25 23:52:45 SV4SRV0015 ovs-workers[28624]: 2018-06-25 23:52:45 79600 -0400 - SV4SRV0015 - 28624/139769160349440 - extensions/ovs_extensions.db.arakoon.pyrakoon.pyrakoon.compat - 6 - ERROR - timed out: Unable
to connect to ('172.26.16.7', 26400)
Jun 25 23:52:45 SV4SRV0015 ovs-workers[28624]: Traceback (most recent call last):
Jun 25 23:52:45 SV4SRV0015 ovs-workers[28624]: File "/usr/lib/python2.7/dist-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 1284, in connect
Jun 25 23:52:45 SV4SRV0015 ovs-workers[28624]: self._socket = socket.create_connection(self._address, self._timeout)
Jun 25 23:52:45 SV4SRV0015 ovs-workers[28624]: File "/usr/lib/python2.7/socket.py", line 575, in create_connection
Jun 25 23:52:45 SV4SRV0015 ovs-workers[28624]: raise err
Jun 25 23:52:45 SV4SRV0015 ovs-workers[28624]: timeout: timed out
```
While a master was found on the arakoon:
```
root@SV4SRV0001:~# arakoon-1.9.25 --who-master -config /opt/OpenvStorage/config/arakoon_config.ini
yBlo4YnVC2FOoWYy
```
_Copied from original issue: openvstorage/pyrakoon#17_
|
1.0
|
Pyrakoon not able to connect to cluster if one member is down - _From @jeroenmaelbrancke on June 26, 2018 7:57_
# Problem
When one member of the Arakoon cluster is down the Pyrakoon is not able to find the master.
Error message of the pyrakoon client:
```
2018-06-25 23:52:39 64900 -0400 - SV4SRV0015 - 28624/139769160349440 - extensions/ovs_extensions.db.arakoon.pyrakoon.pyrakoon.compat - 2 - ERROR - No connection available to node at '172.26.16.7:26400' : Message exchange with node sDEjp3rMpJ2ACr4L failed
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 1160, in _send_message
connection.send(data)
File "/usr/lib/python2.7/dist-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 1325, in send
raise ArakoonNotConnected(self._address)
ArakoonNotConnected: No connection available to node at '172.26.16.7:26400'
2018-06-25 23:52:39 64900 -0400 - SV4SRV0015 - 28624/139769160349440 - extensions/ovs_extensions.db.arakoon.pyrakoon.pyrakoon.compat - 3 - ERROR - No connection available to node at '172.26.16.7:26400': Unable to query node "sDEjp3rMpJ2ACr4L" to look up master
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 1201, in determine_master
self.master_id = self._get_master_id_from_node(node)
File "/usr/lib/python2.7/dist-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 1229, in _get_master_id_from_node
connection = self._send_message(node_id, data)
File "/usr/lib/python2.7/dist-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 1160, in _send_message
connection.send(data)
File "/usr/lib/python2.7/dist-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 1325, in send
raise ArakoonNotConnected(self._address)
ArakoonNotConnected: No connection available to node at '172.26.16.7:26400'
Jun 25 23:52:45 SV4SRV0015 ovs-workers[28624]: 2018-06-25 23:52:45 79600 -0400 - SV4SRV0015 - 28624/139769160349440 - extensions/ovs_extensions.db.arakoon.pyrakoon.pyrakoon.compat - 6 - ERROR - timed out: Unable
to connect to ('172.26.16.7', 26400)
Jun 25 23:52:45 SV4SRV0015 ovs-workers[28624]: Traceback (most recent call last):
Jun 25 23:52:45 SV4SRV0015 ovs-workers[28624]: File "/usr/lib/python2.7/dist-packages/ovs_extensions/db/arakoon/pyrakoon/pyrakoon/compat.py", line 1284, in connect
Jun 25 23:52:45 SV4SRV0015 ovs-workers[28624]: self._socket = socket.create_connection(self._address, self._timeout)
Jun 25 23:52:45 SV4SRV0015 ovs-workers[28624]: File "/usr/lib/python2.7/socket.py", line 575, in create_connection
Jun 25 23:52:45 SV4SRV0015 ovs-workers[28624]: raise err
Jun 25 23:52:45 SV4SRV0015 ovs-workers[28624]: timeout: timed out
```
While a master was found on the arakoon:
```
root@SV4SRV0001:~# arakoon-1.9.25 --who-master -config /opt/OpenvStorage/config/arakoon_config.ini
yBlo4YnVC2FOoWYy
```
_Copied from original issue: openvstorage/pyrakoon#17_
|
process
|
pyrakoon not able to connect to cluster if one member is down from jeroenmaelbrancke on june problem when one member of the arakoon cluster is down the pyrakoon is not able to find the master error message of the pyrakoon client extensions ovs extensions db arakoon pyrakoon pyrakoon compat error no connection available to node at message exchange with node failed traceback most recent call last file usr lib dist packages ovs extensions db arakoon pyrakoon pyrakoon compat py line in send message connection send data file usr lib dist packages ovs extensions db arakoon pyrakoon pyrakoon compat py line in send raise arakoonnotconnected self address arakoonnotconnected no connection available to node at extensions ovs extensions db arakoon pyrakoon pyrakoon compat error no connection available to node at unable to query node to look up master traceback most recent call last file usr lib dist packages ovs extensions db arakoon pyrakoon pyrakoon compat py line in determine master self master id self get master id from node node file usr lib dist packages ovs extensions db arakoon pyrakoon pyrakoon compat py line in get master id from node connection self send message node id data file usr lib dist packages ovs extensions db arakoon pyrakoon pyrakoon compat py line in send message connection send data file usr lib dist packages ovs extensions db arakoon pyrakoon pyrakoon compat py line in send raise arakoonnotconnected self address arakoonnotconnected no connection available to node at jun ovs workers extensions ovs extensions db arakoon pyrakoon pyrakoon compat error timed out unable to connect to jun ovs workers traceback most recent call last jun ovs workers file usr lib dist packages ovs extensions db arakoon pyrakoon pyrakoon compat py line in connect jun ovs workers self socket socket create connection self address self timeout jun ovs workers file usr lib socket py line in create connection jun ovs workers raise err jun ovs workers timeout timed out while a master 
was found on the arakoon root arakoon who master config opt openvstorage config arakoon config ini copied from original issue openvstorage pyrakoon
| 1
|
6,898
| 10,043,740,524
|
IssuesEvent
|
2019-07-19 08:19:02
|
OI-wiki/OI-wiki
|
https://api.github.com/repos/OI-wiki/OI-wiki
|
closed
|
Searching problem
|
需要处理 / Need Processing 高优先级 / P1
|
- What is the problem? (screenshots preferred)
If a search result is the introduction of some section, it links to `/xxx/index/`, causing a 404
- How to reproduce?
Search for the word “简介”.
|
1.0
|
Searching problem - - What is the problem? (screenshots preferred)
If a search result is the introduction of some section, it links to `/xxx/index/`, causing a 404
- How to reproduce?
Search for the word “简介”.
|
process
|
searching problem what is the problem screenshots preferred if a search result is the introduction of some section it links to xxx index causing how to reproduce search for the word 简介
| 1
|
340,112
| 10,266,990,294
|
IssuesEvent
|
2019-08-22 23:26:26
|
danielmilner/wp-block-components
|
https://api.github.com/repos/danielmilner/wp-block-components
|
closed
|
Create a CoreBlock component
|
Priority: Medium Type: Enhancement Type: Question
|
I'm thinking about creating a `CoreBlock` component that automatically returns the appropriate component based on the `__typename` passed from GraphQL. This would only require you to import one component instead of needing to know each component ahead of time.
The individual components would still be able to be imported as needed.
|
1.0
|
Create a CoreBlock component - I'm thinking about creating a `CoreBlock` component that automatically returns the appropriate component based on the `__typename` passed from GraphQL. This would only require you to import one component instead of needing to know each component ahead of time.
The individual components would still be able to be imported as needed.
|
non_process
|
create a coreblock component i m thinking about creating a coreblock component that automatically returns the appropriate component based on the typename passed from graphql this would only require you to import one component instead of needing to know each component ahead of time the individual components would still be able to be imported as needed
| 0
|
20,283
| 26,914,623,263
|
IssuesEvent
|
2023-02-07 04:46:34
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
closed
|
[Mirror] golang/x/tools v0.5.0
|
P2 type: process team-OSS mirror request
|
### Please list the URLs of the archives you'd like to mirror:
https://github.com/golang/tools/archive/refs/tags/v0.5.0.zip
Thanks!
|
1.0
|
[Mirror] golang/x/tools v0.5.0 - ### Please list the URLs of the archives you'd like to mirror:
https://github.com/golang/tools/archive/refs/tags/v0.5.0.zip
Thanks!
|
process
|
golang x tools please list the urls of the archives you d like to mirror thanks
| 1
|
52,707
| 3,028,046,766
|
IssuesEvent
|
2015-08-04 00:47:54
|
AtomicGameEngine/AtomicGameEngine
|
https://api.github.com/repos/AtomicGameEngine/AtomicGameEngine
|
closed
|
Look into classes derived from RefCounted (and not Object) in JS packages other than "Atomic"
|
High Priority
|
There is an assert here as RefCounted classes will not have a uniqueID:
https://github.com/AtomicGameEngine/AtomicGameEngine/blob/master/Source/AtomicJS/Javascript/JSAPI.cpp#L75
Maybe we should add a static field to RefCounted so they do; currently this only seems to be used when pushing a variant, which will push null on RefCounted anyway
|
1.0
|
Look into classes derived from RefCounted (and not Object) in JS packages other than "Atomic" -
There is an assert here as RefCounted classes will not have a uniqueID:
https://github.com/AtomicGameEngine/AtomicGameEngine/blob/master/Source/AtomicJS/Javascript/JSAPI.cpp#L75
Maybe we should add a static field to RefCounted so they do; currently this only seems to be used when pushing a variant, which will push null on RefCounted anyway
|
non_process
|
look into classes derived from refcounted and not object in js packages other than atomic there is an assert here as refcounted classes will not have a uniqueid maybe we should add a static field to refcounted so they do currently this only seems to be used when pushing a variant which will push null on refcounted anyway
| 0
|
13,459
| 15,936,890,872
|
IssuesEvent
|
2021-04-14 11:46:01
|
darktable-org/darktable
|
https://api.github.com/repos/darktable-org/darktable
|
closed
|
Spots and retouch stamps misaligned if distortion changes
|
priority: high scope: image processing
|
**Describe the bug/issue**
darktable draws masks in RAW coordinates in the belief that masks should be anchored to pixels and move with them if they are to be distorted at a later time but lower in the pipe. Spots and Retouch stamps show a gross misalignment when the image is deformed. Liquify is ok though.
**To Reproduce**
Use

Put clone stamps at corners with lens correction, crop and rotate and perspective correction modules off.

Enable perspective module with arbitrary deformation: mismatch

Enable crop and rotate with arbitrary keystone deformation : mismatch

Test with liquify, no distortion on:

Apply distortions : match with both perspective module and crop and rotate keystone


**Expected behavior**
Same as liquify.
|
1.0
|
Spots and retouch stamps misaligned if distortion changes - **Describe the bug/issue**
darktable draws masks in RAW coordinates in the belief that masks should be anchored to pixels and move with them if they are to be distorted at a later time but lower in the pipe. Spots and Retouch stamps show a gross misalignment when the image is deformed. Liquify is ok though.
**To Reproduce**
Use

Put clone stamps at corners with lens correction, crop and rotate and perspective correction modules off.

Enable perspective module with arbitrary deformation: mismatch

Enable crop and rotate with arbitrary keystone deformation : mismatch

Test with liquify, no distortion on:

Apply distortions : match with both perspective module and crop and rotate keystone


**Expected behavior**
Same as liquify.
|
process
|
spots and retouch stamps misaligned if distortion changes describe the bug issue darktable draws masks in raw coordinates in the belief that masks should be anchored to pixels and move with them if they are to be distorted at a later time but lower in the pipe spots and retouch stamps show a gross misalignment when the image is deformed liquify is ok though to reproduce use put clone stamps at corners with lens correction crop and rotate and perspective correction modules off enable perspective module with arbitrary deformation mismatch enable crop and rotate with arbitrary keystone deformation mismatch test with liquify no distortion on apply distortions match with both perspective module and crop and rotate keystone expected behavior same as liquify
| 1
|
20,709
| 27,401,780,154
|
IssuesEvent
|
2023-03-01 01:36:17
|
SigNoz/signoz-otel-collector
|
https://api.github.com/repos/SigNoz/signoz-otel-collector
|
opened
|
span metrics insertion from the same collector instance
|
signozspanmetricsprocessor
|
the collector-metrics scrape is prone to issues, and users get worried when they don't see charts for service overview which is built based on the span metrics data. we could potentially run a go routine in span metrics processor that periodically inserts the data into clickhouse.
|
1.0
|
span metrics insertion from the same collector instance - the collector-metrics scrape is prone to issues, and users get worried when they don't see charts for service overview which is built based on the span metrics data. we could potentially run a go routine in span metrics processor that periodically inserts the data into clickhouse.
|
process
|
span metrics insertion from the same collector instance the collector metrics scrape is prone to issues and users get worried when they don t see charts for service overview which is built based on the span metrics data we could potentially run a go routine in span metrics processor that periodically inserts the data into clickhouse
| 1
|
102,735
| 22,067,622,333
|
IssuesEvent
|
2022-05-31 06:09:07
|
FaberVitale/solid-bricks
|
https://api.github.com/repos/FaberVitale/solid-bricks
|
opened
|
Explain @solid-bricks/barcode SSR behavior
|
documentation enhancement barcode
|
### Describe the problem
It's not clear what happens when you render a `<Barcode />` element in SSR.
### Describe the proposed solution
Add:
- ssr example (Astro?)
- add SSR section in `packages/barcode/Readme.md`
### Alternatives considered
nope
### Importance
would make my life easier
|
1.0
|
Explain @solid-bricks/barcode SSR behavior - ### Describe the problem
It's not clear what happens when you render a `<Barcode />` element in SSR.
### Describe the proposed solution
Add:
- ssr example (Astro?)
- add SSR section in `packages/barcode/Readme.md`
### Alternatives considered
nope
### Importance
would make my life easier
|
non_process
|
explain solid bricks barcode ssr behavior describe the problem it s not clear what happens when you render a element in ssr describe the proposed solution add ssr example astro add ssr section in packages barcode readme md alternatives considered nope importance would make my life easier
| 0
|
125,952
| 26,754,891,263
|
IssuesEvent
|
2023-01-30 23:02:29
|
microsoftgraph/microsoft-graph-explorer-v4
|
https://api.github.com/repos/microsoftgraph/microsoft-graph-explorer-v4
|
closed
|
Code snippet for PowerShell doesn't say which graph profile to use
|
Area: Code snippets
|
Using graph explorer I ran the following query
```
https://graph.microsoft.com/beta/deviceManagement/groupPolicyCategories?$expand=parent($select=id, displayName, isRoot),definitions($select=id, displayName, categoryPath, classType, policyType, version, hasRelatedDefinitions)&$select=id, displayName, isRoot, ingestionSource&$filter=ingestionSource eq 'builtIn'
```
The Code Snippet for Powershell suggested the following command:
```
Import-Module Microsoft.Graph.DeviceManagement.Administration
Get-MgDeviceManagementGroupPolicyCategory -ExpandProperty "parent(`$select=id,+displayName,+isRoot),definitions(`$select=id,+displayName,+categoryPath,+classType,+policyType,+version,+hasRelatedDefinitions)" -Property "id,+displayName,+isRoot,+ingestionSource" -Filter "ingestionSource+eq+'builtIn'"
```
The command ```Get-MgDeviceManagementGroupPolicyCategory``` is currently available in the 'beta' profile so the correct code is
```
Import-Module Microsoft.Graph.DeviceManagement.Administration
Select-MgProfile -name beta
Get-MgDeviceManagementGroupPolicyCategory -ExpandProperty "parent(`$select=id,+displayName,+isRoot),definitions(`$select=id,+displayName,+categoryPath,+classType,+policyType,+version,+hasRelatedDefinitions)" -Property "id,+displayName,+isRoot,+ingestionSource" -Filter "ingestionSource+eq+'builtIn'"
```
The javascript example does note which version of graph to use.
|
1.0
|
Code snippet for PowerShell doesn't say which graph profile to use - Using graph explorer I ran the following query
```
https://graph.microsoft.com/beta/deviceManagement/groupPolicyCategories?$expand=parent($select=id, displayName, isRoot),definitions($select=id, displayName, categoryPath, classType, policyType, version, hasRelatedDefinitions)&$select=id, displayName, isRoot, ingestionSource&$filter=ingestionSource eq 'builtIn'
```
The Code Snippet for Powershell suggested the following command:
```
Import-Module Microsoft.Graph.DeviceManagement.Administration
Get-MgDeviceManagementGroupPolicyCategory -ExpandProperty "parent(`$select=id,+displayName,+isRoot),definitions(`$select=id,+displayName,+categoryPath,+classType,+policyType,+version,+hasRelatedDefinitions)" -Property "id,+displayName,+isRoot,+ingestionSource" -Filter "ingestionSource+eq+'builtIn'"
```
The command ```Get-MgDeviceManagementGroupPolicyCategory``` is currently available in the 'beta' profile so the correct code is
```
Import-Module Microsoft.Graph.DeviceManagement.Administration
Select-MgProfile -name beta
Get-MgDeviceManagementGroupPolicyCategory -ExpandProperty "parent(`$select=id,+displayName,+isRoot),definitions(`$select=id,+displayName,+categoryPath,+classType,+policyType,+version,+hasRelatedDefinitions)" -Property "id,+displayName,+isRoot,+ingestionSource" -Filter "ingestionSource+eq+'builtIn'"
```
The javascript example does note which version of graph to use.
|
non_process
|
code snippet for powershell doesn t say which graph profile to use using graph explorer i ran the following query displayname isroot definitions select id displayname categorypath classtype policytype version hasrelateddefinitions select id displayname isroot ingestionsource filter ingestionsource eq builtin the code snippet for powershell suggested the following command import module microsoft graph devicemanagement administration get mgdevicemanagementgrouppolicycategory expandproperty parent select id displayname isroot definitions select id displayname categorypath classtype policytype version hasrelateddefinitions property id displayname isroot ingestionsource filter ingestionsource eq builtin the command get mgdevicemanagementgrouppolicycategory is currently available in the beta profile so the correct code is import module microsoft graph devicemanagement administration select mgprofile name beta get mgdevicemanagementgrouppolicycategory expandproperty parent select id displayname isroot definitions select id displayname categorypath classtype policytype version hasrelateddefinitions property id displayname isroot ingestionsource filter ingestionsource eq builtin the javascript example does note which version of graph to use
| 0
|
346,892
| 10,421,324,394
|
IssuesEvent
|
2019-09-16 05:39:35
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
duckduckgo.com - see bug description
|
browser-firefox-mobile engine-gecko priority-important
|
<!-- @browser: Firefox Mobile 69.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 7.0; Mobile; rv:69.0) Gecko/69.0 Firefox/69.0 -->
<!-- @reported_with: -->
**URL**: https://duckduckgo.com/?q=Famous+African+american+in+historysuspended+from+college+after+someone+bricked+her+dorm
**Browser / Version**: Firefox Mobile 69.0
**Operating System**: Android 7.0
**Tested Another Browser**: No
**Problem type**: Something else
**Description**: Results are null to historical event requested!!!
**Steps to Reproduce**:
It gave everything from perverse results again at first about porn to quizlet for child psych! When it was a search for a famous african american woman!
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
duckduckgo.com - see bug description - <!-- @browser: Firefox Mobile 69.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 7.0; Mobile; rv:69.0) Gecko/69.0 Firefox/69.0 -->
<!-- @reported_with: -->
**URL**: https://duckduckgo.com/?q=Famous+African+american+in+historysuspended+from+college+after+someone+bricked+her+dorm
**Browser / Version**: Firefox Mobile 69.0
**Operating System**: Android 7.0
**Tested Another Browser**: No
**Problem type**: Something else
**Description**: Results are null to historical event requested!!!
**Steps to Reproduce**:
It gave everything from perverse results again at first about porn to quizlet for child psych! When it was a search for a famous african american woman!
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
duckduckgo com see bug description url browser version firefox mobile operating system android tested another browser no problem type something else description results are null to historical event requested steps to reproduce it gave everything from perverse results again at first about porn to quizlet for child psych when it was a search for a famous african american woman browser configuration none from with ❤️
| 0
|
302,320
| 9,256,799,781
|
IssuesEvent
|
2019-03-16 22:26:30
|
prometheus/prometheus
|
https://api.github.com/repos/prometheus/prometheus
|
closed
|
promtool rules migration produces unreadable rule files
|
component/rules dev-2.0 hacktoberfest help wanted kind/enhancement priority/P3
|
The current rules files migration tool produces output files which are unusable for us.
Smaller issues:
* order of annotations is not preserved, but alphabetical order enforced
* regular expression anchoring is printed verbatim `"^(?:ext.|xfs)$"` instead of `"ext.|xfs"`
Bigger issue:
* Any sequence of whitespace is truncated to a single space, the resulting expression is written as one long string to the file. We have longer expressions which span 8 lines without any kind of formatting and therefore are completely unreadable for a human now. Either the original formatting or a new `promtool fmt` formatting should be used here.
|
1.0
|
promtool rules migration produces unreadable rule files - The current rules files migration tool produces output files which are unusable for us.
Smaller issues:
* order of annotations is not preserved, but alphabetical order enforced
* regular expression anchoring is printed verbatim `"^(?:ext.|xfs)$"` instead of `"ext.|xfs"`
Bigger issue:
* Any sequence of whitespace is truncated to a single space, the resulting expression is written as one long string to the file. We have longer expressions which span 8 lines without any kind of formatting and therefore are completely unreadable for a human now. Either the original formatting or a new `promtool fmt` formatting should be used here.
|
non_process
|
promtool rules migration produces unreadable rule files the current rules files migration tool produces output files which are unusable for us smaller issues order of annotations is not preserved but alphabetical order enforced regular expression anchoring is printed verbatim ext xfs instead of ext xfs bigger issue any sequence of whitespace is truncated to a single space the resulting expression is written as one long string to the file we have longer expressions which span lines without any kind of formatting and therefore are completely unreadable for a human now either the original formatting or a new promtool fmt formatting should be used here
| 0
|
19,170
| 25,272,293,963
|
IssuesEvent
|
2022-11-16 10:09:25
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
opened
|
BP NTRs: 'effector-mediated modulation of host DNA synthesis by symbiont' and child term 'effector-mediated induction of host DNA synthesis'
|
New term request multi-species process
|
Please provide as much information as you can:
* **Suggested term label:**
'effector-mediated modulation of host DNA synthesis by symbiont'
and child term
'effector-mediated induction of host DNA synthesis'
* **Definition (free text)**
NTR: effector-mediated modulation of host DNA synthesis by symbiont
A process mediated by a molecule secreted by a symbiont that results in the modulation (either activation or suppression) of DNA synthesis involved in DNA replication (GO:0090592). The host is defined as the larger of the organisms involved in a symbiotic interaction. PMID:25888589
AND child term NTR: effector-mediated induction of host DNA synthesis
A process mediated by a molecule secreted by a symbiont that results in the activation of DNA synthesis involved in DNA replication (GO:0090592). The host is defined as the larger of the organisms involved in a symbiotic interaction. PMID:25888589
* **Reference, in format PMID:#######**
PMID:25888589
* **Gene product name and ID to be annotated to this term**
* [UMAG_02239](https://canto.phi-base.org/curs/1f5e9ca2be427dab/feature/gene/view/1) See1 from Ustilago maydis
* **Parent term(s)**
GO:0140418 effector-mediated modulation of host process by symbiont
* **Children terms (if applicable)** Should any existing terms that should be moved underneath this new proposed term?
* **Synonyms (please specify, EXACT, BROAD, NARROW or RELATED)**
* **Cross-references**
* For enzymes, please provide RHEA and/or EC numbers.
* Can also provide MetaCyc, KEGG, Wikipedia, and other links.
* **Any other information**
New GO BP terms required for publication curation into PHI-base using our new community curation tool PHI-Canto in collaboration with @ValWood.
I also noticed a typo in GO:0140418 definition should be 'suppression' not 'suppresion' and also a typo in GO:0140590 'supression' should be 'suppression'. Thanks, Alayne.
|
1.0
|
BP NTRs: 'effector-mediated modulation of host DNA synthesis by symbiont' and child term 'effector-mediated induction of host DNA synthesis' - Please provide as much information as you can:
* **Suggested term label:**
'effector-mediated modulation of host DNA synthesis by symbiont'
and child term
'effector-mediated induction of host DNA synthesis'
* **Definition (free text)**
NTR: effector-mediated modulation of host DNA synthesis by symbiont
A process mediated by a molecule secreted by a symbiont that results in the modulation (either activation or suppression) of DNA synthesis involved in DNA replication (GO:0090592). The host is defined as the larger of the organisms involved in a symbiotic interaction. PMID:25888589
AND child term NTR: effector-mediated induction of host DNA synthesis
A process mediated by a molecule secreted by a symbiont that results in the activation of DNA synthesis involved in DNA replication (GO:0090592). The host is defined as the larger of the organisms involved in a symbiotic interaction. PMID:25888589
* **Reference, in format PMID:#######**
PMID:25888589
* **Gene product name and ID to be annotated to this term**
* [UMAG_02239](https://canto.phi-base.org/curs/1f5e9ca2be427dab/feature/gene/view/1) See1 from Ustilago maydis
* **Parent term(s)**
GO:0140418 effector-mediated modulation of host process by symbiont
* **Children terms (if applicable)** Should any existing terms that should be moved underneath this new proposed term?
* **Synonyms (please specify, EXACT, BROAD, NARROW or RELATED)**
* **Cross-references**
* For enzymes, please provide RHEA and/or EC numbers.
* Can also provide MetaCyc, KEGG, Wikipedia, and other links.
* **Any other information**
New GO BP terms required for publication curation into PHI-base using our new community curation tool PHI-Canto in collaboration with @ValWood.
I also noticed a typo in GO:0140418 definition should be 'suppression' not 'suppresion' and also a typo in GO:0140590 'supression' should be 'suppression'. Thanks, Alayne.
|
process
|
bp ntrs effector mediated modulation of host dna synthesis by symbiont and child term effector mediated induction of host dna synthesis please provide as much information as you can suggested term label effector mediated modulation of host dna synthesis by symbiont and child term effector mediated induction of host dna synthesis definition free text ntr effector mediated modulation of host dna synthesis by symbiont a process mediated by a molecule secreted by a symbiont that results in the modulation either activation or suppression of dna synthesis involved in dna replication go the host is defined as the larger of the organisms involved in a symbiotic interaction pmid and child term ntr effector mediated induction of host dna synthesis a process mediated by a molecule secreted by a symbiont that results in the activation of dna synthesis involved in dna replication go the host is defined as the larger of the organisms involved in a symbiotic interaction pmid reference in format pmid pmid gene product name and id to be annotated to this term from ustilago maydis parent term s go effector mediated modulation of host process by symbiont children terms if applicable should any existing terms that should be moved underneath this new proposed term synonyms please specify exact broad narrow or related cross references for enzymes please provide rhea and or ec numbers can also provide metacyc kegg wikipedia and other links any other information new go bp terms required for publication curation into phi base using our new community curation tool phi canto in collaboration with valwood i also noticed a typo in go definition should be suppression not suppresion and also a typo in go supression should be suppression thanks alayne
| 1
|
241,688
| 20,156,394,290
|
IssuesEvent
|
2022-02-09 16:50:18
|
rancher/qa-tasks
|
https://api.github.com/repos/rancher/qa-tasks
|
opened
|
2.6.3 Upgrade testing at scale
|
area/scale-testing
|
2.5.8-patch3 -> 2.6.3-patch1
- [ ] Upgrade Rancher version
- [ ] Upgrade local cluster k8s version
- [ ] Upgrade downstream clusters k8s versions
|
1.0
|
2.6.3 Upgrade testing at scale - 2.5.8-patch3 -> 2.6.3-patch1
- [ ] Upgrade Rancher version
- [ ] Upgrade local cluster k8s version
- [ ] Upgrade downstream clusters k8s versions
|
non_process
|
upgrade testing at scale upgrade rancher version upgrade local cluster version upgrade downstream clusters versions
| 0
|
51,391
| 13,207,461,898
|
IssuesEvent
|
2020-08-14 23:11:40
|
icecube-trac/tix4
|
https://api.github.com/repos/icecube-trac/tix4
|
opened
|
cfirst reuses pointers to results (Trac #330)
|
Incomplete Migration Migrated from Trac combo reconstruction defect
|
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/330">https://code.icecube.wisc.edu/projects/icecube/ticket/330</a>, reported by kislatand owned by jacobi</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2015-07-07T22:32:33",
"_ts": "1436308353324715",
"description": "The cfirst module internally keeps the pointers to the resulting frame objects. These pointers are reused in all events. This works fine if frames are never buffered, but in case of frame buffering (e.g. I3PacketModule) will lead to identical reconstruction results in all buffered frames.",
"reporter": "kislat",
"cc": "",
"resolution": "wontfix",
"time": "2011-11-23T14:08:55",
"component": "combo reconstruction",
"summary": "cfirst reuses pointers to results",
"priority": "major",
"keywords": "",
"milestone": "",
"owner": "jacobi",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
cfirst reuses pointers to results (Trac #330) - <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/330">https://code.icecube.wisc.edu/projects/icecube/ticket/330</a>, reported by kislatand owned by jacobi</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2015-07-07T22:32:33",
"_ts": "1436308353324715",
"description": "The cfirst module internally keeps the pointers to the resulting frame objects. These pointers are reused in all events. This works fine if frames are never buffered, but in case of frame buffering (e.g. I3PacketModule) will lead to identical reconstruction results in all buffered frames.",
"reporter": "kislat",
"cc": "",
"resolution": "wontfix",
"time": "2011-11-23T14:08:55",
"component": "combo reconstruction",
"summary": "cfirst reuses pointers to results",
"priority": "major",
"keywords": "",
"milestone": "",
"owner": "jacobi",
"type": "defect"
}
```
</p>
</details>
|
non_process
|
cfirst reuses pointers to results trac migrated from json status closed changetime ts description the cfirst module internally keeps the pointers to the resulting frame objects these pointers are reused in all events this works fine if frames are never buffered but in case of frame buffering e g will lead to identical reconstruction results in all buffered frames reporter kislat cc resolution wontfix time component combo reconstruction summary cfirst reuses pointers to results priority major keywords milestone owner jacobi type defect
| 0
|
21,290
| 28,487,075,346
|
IssuesEvent
|
2023-04-18 08:36:44
|
JoTec2002/TINF21C_AAS_Management
|
https://api.github.com/repos/JoTec2002/TINF21C_AAS_Management
|
closed
|
Editing the package explorer in the frontend
|
in Process frontend
|
For better usability, a function is added that displays the expansion and collapse of certain AAS elements and thus promotes complexity reduction and clarity for the User.
|
1.0
|
Editing the package explorer in the frontend - For better usability, a function is added that displays the expansion and collapse of certain AAS elements and thus promotes complexity reduction and clarity for the User.
|
process
|
editing the package explorer in the frontend for better usability a function is added that displays the expansion and collapse of certain aas elements and thus promotes complexity reduction and clarity for the user
| 1
|
348,306
| 10,440,743,960
|
IssuesEvent
|
2019-09-18 09:20:20
|
JuPedSim/jpscore
|
https://api.github.com/repos/JuPedSim/jpscore
|
opened
|
Refactoring parts of the Geometry
|
Priority: Medium Status: In Progress Type: Refactoring
|
The Geometry part needs to be reworked.
- [ ] Create an appropriate clang-tidy config
- [ ] Fix selected checks with clang-tidy
- [ ] Fix Ownership of data
- [ ] Remove Pointers and shared_ptr when possible
To be extended ;)
|
1.0
|
Refactoring parts of the Geometry - The Geometry part needs to be reworked.
- [ ] Create an appropriate clang-tidy config
- [ ] Fix selected checks with clang-tidy
- [ ] Fix Ownership of data
- [ ] Remove Pointers and shared_ptr when possible
To be extended ;)
|
non_process
|
refactoring parts of the geometry the geometry part needs to be reworked create an appropriate clang tidy config fix selected checks with clang tidy fix ownership of data remove pointers and shared ptr when possible to be extended
| 0
|
254,205
| 27,357,207,437
|
IssuesEvent
|
2023-02-27 13:40:34
|
bturtu405/TestDev
|
https://api.github.com/repos/bturtu405/TestDev
|
closed
|
CVE-2012-3463 (Low) detected in actionpack-3.0.7.gem - autoclosed
|
security vulnerability
|
## CVE-2012-3463 - Low Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>actionpack-3.0.7.gem</b></p></summary>
<p>Web apps on Rails. Simple, battle-tested conventions for building and testing MVC web applications. Works with any Rack-compatible server.</p>
<p>Library home page: <a href="https://rubygems.org/gems/actionpack-3.0.7.gem">https://rubygems.org/gems/actionpack-3.0.7.gem</a></p>
<p>Path to dependency file: /Gemfile.lock</p>
<p>Path to vulnerable library: /home/wss-scanner/.gem/ruby/2.7.0/cache/actionpack-3.0.7.gem</p>
<p>
Dependency Hierarchy:
- rails-3.0.7.gem (Root Library)
- :x: **actionpack-3.0.7.gem** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/bturtu405/TestDev/commit/a630a01e6e191474848b3413a96ff308cb33bc9c">a630a01e6e191474848b3413a96ff308cb33bc9c</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Cross-site scripting (XSS) vulnerability in actionpack/lib/action_view/helpers/form_tag_helper.rb in Ruby on Rails 3.x before 3.0.17, 3.1.x before 3.1.8, and 3.2.x before 3.2.8 allows remote attackers to inject arbitrary web script or HTML via the prompt field to the select_tag helper.
<p>Publish Date: 2012-08-10
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2012-3463>CVE-2012-3463</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>3.7</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2012-3463">https://nvd.nist.gov/vuln/detail/CVE-2012-3463</a></p>
<p>Release Date: 2012-08-10</p>
<p>Fix Resolution: 3.0.17,3.1.8,3.2.8</p>
</p>
</details>
<p></p>
|
True
|
CVE-2012-3463 (Low) detected in actionpack-3.0.7.gem - autoclosed - ## CVE-2012-3463 - Low Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>actionpack-3.0.7.gem</b></p></summary>
<p>Web apps on Rails. Simple, battle-tested conventions for building and testing MVC web applications. Works with any Rack-compatible server.</p>
<p>Library home page: <a href="https://rubygems.org/gems/actionpack-3.0.7.gem">https://rubygems.org/gems/actionpack-3.0.7.gem</a></p>
<p>Path to dependency file: /Gemfile.lock</p>
<p>Path to vulnerable library: /home/wss-scanner/.gem/ruby/2.7.0/cache/actionpack-3.0.7.gem</p>
<p>
Dependency Hierarchy:
- rails-3.0.7.gem (Root Library)
- :x: **actionpack-3.0.7.gem** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/bturtu405/TestDev/commit/a630a01e6e191474848b3413a96ff308cb33bc9c">a630a01e6e191474848b3413a96ff308cb33bc9c</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Cross-site scripting (XSS) vulnerability in actionpack/lib/action_view/helpers/form_tag_helper.rb in Ruby on Rails 3.x before 3.0.17, 3.1.x before 3.1.8, and 3.2.x before 3.2.8 allows remote attackers to inject arbitrary web script or HTML via the prompt field to the select_tag helper.
<p>Publish Date: 2012-08-10
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2012-3463>CVE-2012-3463</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>3.7</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2012-3463">https://nvd.nist.gov/vuln/detail/CVE-2012-3463</a></p>
<p>Release Date: 2012-08-10</p>
<p>Fix Resolution: 3.0.17,3.1.8,3.2.8</p>
</p>
</details>
<p></p>
|
non_process
|
cve low detected in actionpack gem autoclosed cve low severity vulnerability vulnerable library actionpack gem web apps on rails simple battle tested conventions for building and testing mvc web applications works with any rack compatible server library home page a href path to dependency file gemfile lock path to vulnerable library home wss scanner gem ruby cache actionpack gem dependency hierarchy rails gem root library x actionpack gem vulnerable library found in head commit a href found in base branch main vulnerability details cross site scripting xss vulnerability in actionpack lib action view helpers form tag helper rb in ruby on rails x before x before and x before allows remote attackers to inject arbitrary web script or html via the prompt field to the select tag helper publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution
| 0
|
9,058
| 12,133,272,891
|
IssuesEvent
|
2020-04-23 08:45:40
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Crash modeler after loading model
|
Bug Feedback Processing
|
## User Feedback
Just after loading existing model.
## Report Details
**Crash ID**: [c1a1074ec3d06fea23d8c01c628277f16e54a539]('https://github.com/qgis/QGIS/search?q=c1a1074ec3d06fea23d8c01c628277f16e54a539&type=Issues')
**Stack Trace**
<pre>
QgsProcessingToolboxProxyModel::filterAcceptsRow :
QSortFilterProxyModel::columnCount :
QSortFilterProxyModel::hasChildren :
QTreeViewPrivate::layout :
QTreeViewPrivate::layout :
QTreeViewPrivate::layout :
QTreeView::expandAll :
QgsProcessingToolboxTreeView::setFilterString :
QMetaObject::activate :
QLineEdit::qt_static_metacall :
QMetaObject::activate :
QWidgetLineControl::finishChange :
QWidgetLineControl::processKeyEvent :
QLineEdit::keyPressEvent :
QgsPresetSchemeColorRamp::clone :
QWidget::event :
QLineEdit::event :
QgsPresetSchemeColorRamp::clone :
QApplicationPrivate::notify_helper :
QApplication::notify :
QgsApplication::notify :
QCoreApplication::notifyInternal2 :
QSizePolicy::QSizePolicy :
QSizePolicy::QSizePolicy :
QApplicationPrivate::notify_helper :
QApplication::notify :
QgsApplication::notify :
QCoreApplication::notifyInternal2 :
QGuiApplicationPrivate::processKeyEvent :
QWindowSystemInterface::sendWindowSystemEvents :
QEventDispatcherWin32::processEvents :
UserCallWinProcCheckWow :
DispatchMessageWorker :
QEventDispatcherWin32::processEvents :
qt_plugin_query_metadata :
QEventLoop::exec :
QCoreApplication::exec :
main :
BaseThreadInitThunk :
RtlUserThreadStart :
</pre>
**QGIS Info**
QGIS Version: 3.12.0-București
QGIS code revision: cd141490ec
Compiled against Qt: 5.11.2
Running against Qt: 5.11.2
Compiled against GDAL: 3.0.4
Running against GDAL: 3.0.4
**System Info**
CPU Type: x86_64
Kernel Type: winnt
Kernel Version: 10.0.17763
|
1.0
|
Crash modeler after loading model - ## User Feedback
Just after loading existing model.
## Report Details
**Crash ID**: [c1a1074ec3d06fea23d8c01c628277f16e54a539]('https://github.com/qgis/QGIS/search?q=c1a1074ec3d06fea23d8c01c628277f16e54a539&type=Issues')
**Stack Trace**
<pre>
QgsProcessingToolboxProxyModel::filterAcceptsRow :
QSortFilterProxyModel::columnCount :
QSortFilterProxyModel::hasChildren :
QTreeViewPrivate::layout :
QTreeViewPrivate::layout :
QTreeViewPrivate::layout :
QTreeView::expandAll :
QgsProcessingToolboxTreeView::setFilterString :
QMetaObject::activate :
QLineEdit::qt_static_metacall :
QMetaObject::activate :
QWidgetLineControl::finishChange :
QWidgetLineControl::processKeyEvent :
QLineEdit::keyPressEvent :
QgsPresetSchemeColorRamp::clone :
QWidget::event :
QLineEdit::event :
QgsPresetSchemeColorRamp::clone :
QApplicationPrivate::notify_helper :
QApplication::notify :
QgsApplication::notify :
QCoreApplication::notifyInternal2 :
QSizePolicy::QSizePolicy :
QSizePolicy::QSizePolicy :
QApplicationPrivate::notify_helper :
QApplication::notify :
QgsApplication::notify :
QCoreApplication::notifyInternal2 :
QGuiApplicationPrivate::processKeyEvent :
QWindowSystemInterface::sendWindowSystemEvents :
QEventDispatcherWin32::processEvents :
UserCallWinProcCheckWow :
DispatchMessageWorker :
QEventDispatcherWin32::processEvents :
qt_plugin_query_metadata :
QEventLoop::exec :
QCoreApplication::exec :
main :
BaseThreadInitThunk :
RtlUserThreadStart :
</pre>
**QGIS Info**
QGIS Version: 3.12.0-București
QGIS code revision: cd141490ec
Compiled against Qt: 5.11.2
Running against Qt: 5.11.2
Compiled against GDAL: 3.0.4
Running against GDAL: 3.0.4
**System Info**
CPU Type: x86_64
Kernel Type: winnt
Kernel Version: 10.0.17763
|
process
|
crash modeler after loading model user feedback just after loading existing model report details crash id stack trace qgsprocessingtoolboxproxymodel filteracceptsrow qsortfilterproxymodel columncount qsortfilterproxymodel haschildren qtreeviewprivate layout qtreeviewprivate layout qtreeviewprivate layout qtreeview expandall qgsprocessingtoolboxtreeview setfilterstring qmetaobject activate qlineedit qt static metacall qmetaobject activate qwidgetlinecontrol finishchange qwidgetlinecontrol processkeyevent qlineedit keypressevent qgspresetschemecolorramp clone qwidget event qlineedit event qgspresetschemecolorramp clone qapplicationprivate notify helper qapplication notify qgsapplication notify qcoreapplication qsizepolicy qsizepolicy qsizepolicy qsizepolicy qapplicationprivate notify helper qapplication notify qgsapplication notify qcoreapplication qguiapplicationprivate processkeyevent qwindowsysteminterface sendwindowsystemevents processevents usercallwinproccheckwow dispatchmessageworker processevents qt plugin query metadata qeventloop exec qcoreapplication exec main basethreadinitthunk rtluserthreadstart qgis info qgis version bucure ti qgis code revision compiled against qt running against qt compiled against gdal running against gdal system info cpu type kernel type winnt kernel version
| 1
|
250,358
| 18,886,649,175
|
IssuesEvent
|
2021-11-15 08:43:03
|
rrousselGit/river_pod
|
https://api.github.com/repos/rrousselGit/river_pod
|
closed
|
Lack of market strategy
|
documentation needs triage
|
The people in charge of this package lack market strategy...Asper zerooo..your market sense is zero...how can you make such changes in this package like this....do you know how big my project is...I'm switching to flutter bloc or redux as soon as possible ...wtf so I can't use context.read any more so now I have to always wrap in ConsumerWidget what is I want to do context.read(a).state?...:...does that need a consumer wiget????
You guys are good developers, I totally agree... but your business sense is Zero , I assure you if you keep making these kind of drastic changes people will stop using this package..Even bloggers are tired of updating their blogs anytime you guys make update ....common buckle up..my project had 3k+ errors on updating package ...do you know how discouraging that looks....bye bye
..BloC is the way!!!!!!
|
1.0
|
Lack of market strategy - The people in charge of this package lack market strategy...Asper zerooo..your market sense is zero...how can you make such changes in this package like this....do you know how big my project is...I'm switching to flutter bloc or redux as soon as possible ...wtf so I can't use context.read any more so now I have to always wrap in ConsumerWidget what is I want to do context.read(a).state?...:...does that need a consumer wiget????
You guys are good developers, I totally agree... but your business sense is Zero , I assure you if you keep making these kind of drastic changes people will stop using this package..Even bloggers are tired of updating their blogs anytime you guys make update ....common buckle up..my project had 3k+ errors on updating package ...do you know how discouraging that looks....bye bye
..BloC is the way!!!!!!
|
non_process
|
lack of market strategy the people in charge of this package lack market strategy asper zerooo your market sense is zero how can you make such changes in this package like this do you know how big my project is i m switching to flutter bloc or redux as soon as possible wtf so i can t use context read any more so now i have to always wrap in consumerwidget what is i want to do context read a state does that need a consumer wiget you guys are good developers i totally agree but your business sense is zero i assure you if you keep making these kind of drastic changes people will stop using this package even bloggers are tired of updating their blogs anytime you guys make update common buckle up my project had errors on updating package do you know how discouraging that looks bye bye bloc is the way
| 0
|
14,033
| 16,828,363,046
|
IssuesEvent
|
2021-06-17 22:12:08
|
CAVaccineInventory/vial
|
https://api.github.com/repos/CAVaccineInventory/vial
|
reopened
|
make QAing locations easy - claiming locations
|
django-admin and tools qa-process
|
as much as possible, we just want to implement the same QA system we have on reports for locations as well. (All of these are things that already exist for reports.)
Key pieces to implement first:
* add a "claimed" field to the location so users can add themselves as responsible for a location
* and a claimed time/date stamp field as well
* add a filter to the locations table page for Claimed w/ options All / Claimed by you / Claimed by anyone / unclaimed
* show "Is pending review" and "Claimed by" as columns in the locations table page between "request a call" and "full address"
* Add bulk action options "Claim locations" and "Unclaim locations you have claimed"
|
1.0
|
make QAing locations easy - claiming locations - as much as possible, we just want to implement the same QA system we have on reports for locations as well. (All of these are things that already exist for reports.)
Key pieces to implement first:
* add a "claimed" field to the location so users can add themselves as responsible for a location
* and a claimed time/date stamp field as well
* add a filter to the locations table page for Claimed w/ options All / Claimed by you / Claimed by anyone / unclaimed
* show "Is pending review" and "Claimed by" as columns in the locations table page between "request a call" and "full address"
* Add bulk action options "Claim locations" and "Unclaim locations you have claimed"
|
process
|
make qaing locations easy claiming locations as much as possible we just want to implement the same qa system we have on reports for locations as well all of these are things that already exist for reports key pieces to implement first add a claimed field to the location so users can add themselves as responsible for a location and a claimed time date stamp field as well add a filter to the locations table page for claimed w options all claimed by you claimed by anyone unclaimed show is pending review and claimed by as columns in the locations table page between request a call and full address add bulk action options claim locations and unclaim locations you have claimed
| 1
|
354,263
| 25,157,665,183
|
IssuesEvent
|
2022-11-10 14:40:01
|
honeycombio/examples
|
https://api.github.com/repos/honeycombio/examples
|
closed
|
Replace with an index of localized examples
|
type: documentation
|
Many examples in this repo are out of date and no longer work with newer versions of our SDKs.
Our long-term plan is to have examples co-located with their SDKs, which we are already moving towards.
Remove examples from this repo, and either move them if they work, or add an issue to add such a working example in the appropriate SDK repo.
Add an index in this repo pointing to each examples directory in our SDK repos.
[Sync topic](https://honeycomb.quip.com/o7fLAOAQ80cu/Telemetry-Team-Sync#temp:C:GFGbd80bd84223c49a2a300c2798): suggest SA team to add their examples to the index
- [x] dotnet-core-webapi
- [x] dotnet-otlp
- [x] golang-gatekeeper
- [x] golang-otlp
- [x] golang-ratelimiting-proxy
- [x] golang-webapp
- [x] golang-wiki-tracing
- [x] honeytail-haproxy
- [x] honeytail-mysql
- [x] honeytail-nginx
- [x] honeytail-dockerd
- [x] java-beeline
- [x] java-otlp
- [x] java-webapp
- [x] kubernetes-envoy-tracing
- [x] node-otlp
- [x] node-serverless-app
- [x] node-tracing-example
- [x] python-api
- [x] python-gatekeeper
- [x] python-otlp
- [x] ruby-gatekeeper
- [x] ruby-otlp
- [x] ruby-wiki-tracing
- [x] webhook-listener-triggers
- [x] clean up README, explain that it's a reference repo
|
1.0
|
Replace with an index of localized examples - Many examples in this repo are out of date and no longer work with newer versions of our SDKs.
Our long-term plan is to have examples co-located with their SDKs, which we are already moving towards.
Remove examples from this repo, and either move them if they work, or add an issue to add such a working example in the appropriate SDK repo.
Add an index in this repo pointing to each examples directory in our SDK repos.
[Sync topic](https://honeycomb.quip.com/o7fLAOAQ80cu/Telemetry-Team-Sync#temp:C:GFGbd80bd84223c49a2a300c2798): suggest SA team to add their examples to the index
- [x] dotnet-core-webapi
- [x] dotnet-otlp
- [x] golang-gatekeeper
- [x] golang-otlp
- [x] golang-ratelimiting-proxy
- [x] golang-webapp
- [x] golang-wiki-tracing
- [x] honeytail-haproxy
- [x] honeytail-mysql
- [x] honeytail-nginx
- [x] honeytail-dockerd
- [x] java-beeline
- [x] java-otlp
- [x] java-webapp
- [x] kubernetes-envoy-tracing
- [x] node-otlp
- [x] node-serverless-app
- [x] node-tracing-example
- [x] python-api
- [x] python-gatekeeper
- [x] python-otlp
- [x] ruby-gatekeeper
- [x] ruby-otlp
- [x] ruby-wiki-tracing
- [x] webhook-listener-triggers
- [x] clean up README, explain that it's a reference repo
|
non_process
|
replace with an index of localized examples many examples in this repo are out of date and no longer work with newer versions of our sdks our long term plan is to have examples co located with their sdks which we are already moving towards remove examples from this repo and either move them if they work or add an issue to add such a working example in the appropriate sdk repo add an index in this repo pointing to each examples directory in our sdk repos suggest sa team to add their examples to the index dotnet core webapi dotnet otlp golang gatekeeper golang otlp golang ratelimiting proxy golang webapp golang wiki tracing honeytail haproxy honeytail mysql honeytail nginx honeytail dockerd java beeline java otlp java webapp kubernetes envoy tracing node otlp node serverless app node tracing example python api python gatekeeper python otlp ruby gatekeeper ruby otlp ruby wiki tracing webhook listener triggers clean up readme explain that it s a reference repo
| 0
|
71,433
| 9,523,448,376
|
IssuesEvent
|
2019-04-27 17:20:13
|
scipy/scipy
|
https://api.github.com/repos/scipy/scipy
|
closed
|
DOC: Not all stats distributions have tutorials
|
Documentation scipy.stats
|
Not all stats distributions have tutorials. Here's a list of missing ones. [Since the terms `frechet_l` and `frechet_r` are deprecated, they don't need a tutorial, but the others probably do.]
```
python -c "import operator, scipy.stats; print(' '.join(['doc/source/tutorial/stats/continuous_' + dst + '.rst' for dst in sorted(set(map(operator.itemgetter(0), scipy.stats._distr_params.distcont)))]))" | xargs ls -1 2>&1 | fgrep "No such" | awk '{print $2}'
doc/source/tutorial/stats/continuous_argus.rst:
doc/source/tutorial/stats/continuous_crystalball.rst:
doc/source/tutorial/stats/continuous_exponnorm.rst:
doc/source/tutorial/stats/continuous_frechet_l.rst:
doc/source/tutorial/stats/continuous_frechet_r.rst:
doc/source/tutorial/stats/continuous_halfgennorm.rst:
doc/source/tutorial/stats/continuous_kappa3.rst:
doc/source/tutorial/stats/continuous_kappa4.rst:
doc/source/tutorial/stats/continuous_levy_stable.rst:
doc/source/tutorial/stats/continuous_moyal.rst:
doc/source/tutorial/stats/continuous_pearson3.rst:
doc/source/tutorial/stats/continuous_skewnorm.rst:
doc/source/tutorial/stats/continuous_vonmises_line.rst:
```
### Scipy/Numpy/Python version information:
```
1.3.0.dev0+f89a187 1.16.0 sys.version_info(major=3, minor=6, micro=6, releaselevel='final', serial=0)
```
|
1.0
|
DOC: Not all stats distributions have tutorials - Not all stats distributions have tutorials. Here's a list of missing ones. [Since the terms `frechet_l` and `frechet_r` are deprecated, they don't need a tutorial, but the others probably do.]
```
python -c "import operator, scipy.stats; print(' '.join(['doc/source/tutorial/stats/continuous_' + dst + '.rst' for dst in sorted(set(map(operator.itemgetter(0), scipy.stats._distr_params.distcont)))]))" | xargs ls -1 2>&1 | fgrep "No such" | awk '{print $2}'
doc/source/tutorial/stats/continuous_argus.rst:
doc/source/tutorial/stats/continuous_crystalball.rst:
doc/source/tutorial/stats/continuous_exponnorm.rst:
doc/source/tutorial/stats/continuous_frechet_l.rst:
doc/source/tutorial/stats/continuous_frechet_r.rst:
doc/source/tutorial/stats/continuous_halfgennorm.rst:
doc/source/tutorial/stats/continuous_kappa3.rst:
doc/source/tutorial/stats/continuous_kappa4.rst:
doc/source/tutorial/stats/continuous_levy_stable.rst:
doc/source/tutorial/stats/continuous_moyal.rst:
doc/source/tutorial/stats/continuous_pearson3.rst:
doc/source/tutorial/stats/continuous_skewnorm.rst:
doc/source/tutorial/stats/continuous_vonmises_line.rst:
```
### Scipy/Numpy/Python version information:
```
1.3.0.dev0+f89a187 1.16.0 sys.version_info(major=3, minor=6, micro=6, releaselevel='final', serial=0)
```
|
non_process
|
doc not all stats distributions have tutorials not all stats distributions have tutorials here s a list of missing ones python c import operator scipy stats print join xargs ls fgrep no such awk print doc source tutorial stats continuous argus rst doc source tutorial stats continuous crystalball rst doc source tutorial stats continuous exponnorm rst doc source tutorial stats continuous frechet l rst doc source tutorial stats continuous frechet r rst doc source tutorial stats continuous halfgennorm rst doc source tutorial stats continuous rst doc source tutorial stats continuous rst doc source tutorial stats continuous levy stable rst doc source tutorial stats continuous moyal rst doc source tutorial stats continuous rst doc source tutorial stats continuous skewnorm rst doc source tutorial stats continuous vonmises line rst scipy numpy python version information sys version info major minor micro releaselevel final serial
| 0
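The shell one-liner in the record above builds the expected tutorial paths and lists the ones that are missing. The same check can be sketched in pure Python; the directory layout comes from the issue, while the function name and the sample distribution names are illustrative:

```python
import os

def missing_tutorials(dist_names, doc_dir="doc/source/tutorial/stats"):
    """Return the continuous_<name>.rst tutorial paths that do not exist."""
    expected = [
        os.path.join(doc_dir, f"continuous_{name}.rst")
        for name in sorted(set(dist_names))
    ]
    return [path for path in expected if not os.path.exists(path)]
```

Fed the names from `scipy.stats._distr_params.distcont`, this reproduces the `xargs ls | fgrep "No such"` step of the pipeline without shelling out.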
|
1,987
| 4,816,835,122
|
IssuesEvent
|
2016-11-04 11:30:54
|
woesterduolf/Mission-reisbureau
|
https://api.github.com/repos/woesterduolf/Mission-reisbureau
|
opened
|
Payment methode
|
Boekingsprocess priority: highest Type:Feature
|
Mockup design (see page 7)
The main area is the payment area. On top we still have the booking plus the booking number.
The customer is greeted with the question which method of payment he would like to use. He can then select that payment option from a dropdown menu displaying all the possible options.
Once he has selected the payment option, the customer has to fill in a form.
The top part of the form is asking the customer to enter the name of the accountholder for the selected payment option. He can then fill in his name in the field.
Below that, the customer is asked to put in the card number for his selected payment option. He can then fill in his number in the field.
After that we have text asking the customer to fill in the security number of his card. This is displayed on the back of his card most of the time. He can then fill in the number in the field.
Lastly we have two areas where the customer is asked for his address info. He can put in the address info in the respective fields.
Once all has been filled in, he can click the ‘confirm payment’ button, which will make the back end process the payment information. The customer will be taken to the booking confirmation page if the payment was successful. If it wasn’t, he will get an error screen.
And last, in the bottom right corner are all the payment methods shown with their own logos. This makes it a bit easier to see if your payment method is accepted.
|
1.0
|
Payment methode - Mockup design (see page 7)
The main area is the payment area. On top we still have the booking plus the booking number.
The customer is greeted with the question which method of payment he would like to use. He can then select that payment option from a dropdown menu displaying all the possible options.
Once he has selected the payment option, the customer has to fill in a form.
The top part of the form is asking the customer to enter the name of the accountholder for the selected payment option. He can then fill in his name in the field.
Below that, the customer is asked to put in the card number for his selected payment option. He can then fill in his number in the field.
After that we have text asking the customer to fill in the security number of his card. This is displayed on the back of his card most of the time. He can then fill in the number in the field.
Lastly we have two areas where the customer is asked for his address info. He can put in the address info in the respective fields.
Once all has been filled in, he can click the ‘confirm payment’ button, which will make the back end process the payment information. The customer will be taken to the booking confirmation page if the payment was successful. If it wasn’t, he will get an error screen.
And last, in the bottom right corner are all the payment methods shown with their own logos. This makes it a bit easier to see if your payment method is accepted.
|
process
|
payment methode mockup design see page the main area is the payment area on top we still have the booking plus the booking number the customer is greeted with the question which method of payment he would like to use he can then select that payment option from a dropdown menu displaying all the possible options once he has selected the payment option the customer has to fill in a form the top part of the form is asking the customer to enter the name of the accountholder for the selected payment option he can then fill in his name in the field below that the customer is asked to put in the card number for his selected payment option he can then fill in his number in the field after that we have text asking the customer to fil in the security number of his card this is displayed on the back of his card most of the time he can then fill in the number in the field lastly we have times an area where the customer is asked for his address info he can put in the address info in the respective fields once all has been filled in he can click the ‘confirm payment’ button which will make the back process the payment information the customer will be taken to the booking confirmation page if the payment was successful if it wasn’t he will get an error screen and last in the bottom right corner are all the payment methods shown with their own logos this makes it a bit easier to see if your payment method is accepted
| 1
|
20,945
| 27,806,008,343
|
IssuesEvent
|
2023-03-17 19:59:21
|
Deltares/Ribasim
|
https://api.github.com/repos/Deltares/Ribasim
|
closed
|
add pump/pumpstation object (pomp/gemaal)
|
physical process
|
Add component for a pumpstation with one or more pumps
- [x] conceptual design: simple pump with a capacity from the forcing table that reduces extractions on an empty basin
- [ ] implementation
- [ ] unit tests (needs #116 first)
- [ ] input validation
- [ ] updated example model
- [ ] updated tutorial/example model building script
- [x] devdocs: how to add a new node type (moved to #126)
- [ ] include as nodetype in QGIS
|
1.0
|
add pump/pumpstation object (pomp/gemaal) - Add component for a pumpstation with one or more pumps
- [x] conceptual design: simple pump with a capacity from the forcing table that reduces extractions on an empty basin
- [ ] implementation
- [ ] unit tests (needs #116 first)
- [ ] input validation
- [ ] updated example model
- [ ] updated tutorial/example model building script
- [x] devdocs: how to add a new node type (moved to #126)
- [ ] include as nodetype in QGIS
|
process
|
add pump pumpstation object pomp gemaal add component for a pumpstation with one or more pumps conceptual design simple pump with a capacity from the forcing table that reduces extractions on an empty basin implementation unit tests needs first input validation updated example model updated tutorial example model building script devdocs how to add a new node type moved to include as nodetype in qgis
| 1
|
368,864
| 10,885,528,842
|
IssuesEvent
|
2019-11-18 10:34:41
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
apply.lloydsbank.co.uk - The radio buttons are misplaced
|
browser-firefox engine-gecko os-linux priority-important severity-minor sitepatch-applied type-css
|
<!-- @browser: Firefox 67.0 -->
<!-- @ua_header: Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:67.0) Gecko/20100101 Firefox/67.0 -->
<!-- @reported_with: web -->
**URL**: https://apply.lloydsbank.co.uk/sales-content/cwa/l/pca/index-app.html?product=classicaccountLTB#!d
**Browser / Version**: Firefox 67.0
**Operating System**: Fedora
**Tested Another Browser**: Yes
**Problem type**: Design is broken
**Description**: The radio buttons are displaced to the left. On Firefox Android, the buttons simply don't work.
**Steps to Reproduce**:
[](https://webcompat.com/uploads/2019/7/d468926e-749f-4d44-b37f-acc26fbc1314.jpg)
[](https://webcompat.com/uploads/2019/7/1add1956-2184-4c23-84b6-8ec3c98c334d.jpg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
Submitted in the name of `@ImAnnoying2`
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
apply.lloydsbank.co.uk - The radio buttons are misplaced - <!-- @browser: Firefox 67.0 -->
<!-- @ua_header: Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:67.0) Gecko/20100101 Firefox/67.0 -->
<!-- @reported_with: web -->
**URL**: https://apply.lloydsbank.co.uk/sales-content/cwa/l/pca/index-app.html?product=classicaccountLTB#!d
**Browser / Version**: Firefox 67.0
**Operating System**: Fedora
**Tested Another Browser**: Yes
**Problem type**: Design is broken
**Description**: The radio buttons are displaced to the left. On Firefox Android, the buttons simply don't work.
**Steps to Reproduce**:
[](https://webcompat.com/uploads/2019/7/d468926e-749f-4d44-b37f-acc26fbc1314.jpg)
[](https://webcompat.com/uploads/2019/7/1add1956-2184-4c23-84b6-8ec3c98c334d.jpg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
Submitted in the name of `@ImAnnoying2`
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
apply lloydsbank co uk the radio buttons are misplaced url browser version firefox operating system fedora tested another browser yes problem type design is broken description the radio buttons are displaced to the left on firefox android the buttons simply don t work steps to reproduce browser configuration none submitted in the name of from with ❤️
| 0
|
16,972
| 22,335,047,150
|
IssuesEvent
|
2022-06-14 17:43:45
|
BCDevOps/nr-apm-stack
|
https://api.github.com/repos/BCDevOps/nr-apm-stack
|
closed
|
Tweak Lambda function memory
|
stack/lambda/event-stream-processing performance-tuning
|
There seems to be a correlation between memory size and assigned compute power.
Experiment if increasing memory to 3GB reduces the execution time
|
1.0
|
Tweak Lambda function memory - There seems to be a correlation between memory size and assigned compute power.
Experiment if increasing memory to 3GB reduces the execution time
|
process
|
tweak lambda function memory there seems to be a correlation between memory size and assigned compute power experiment if increasing memory to reduces the execution time
| 1
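The experiment proposed in the record above rests on Lambda billing GB-seconds while scaling CPU with memory, so a larger function can be both faster and cheaper if its duration drops proportionally more than its memory grows. A minimal cost model as a sanity check; the per-GB-second price is an assumed placeholder, not a quoted AWS rate:

```python
def invocation_cost(memory_mb, duration_ms, price_per_gb_s=1.6667e-05):
    """Cost of one invocation: billed GB-seconds times the per-GB-second price."""
    return (memory_mb / 1024) * (duration_ms / 1000) * price_per_gb_s

# Tripling memory (1 GB -> 3 GB) pays off whenever duration drops by more than 3x:
assert invocation_cost(3072, 2000) < invocation_cost(1024, 9000)
```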
|
20,576
| 10,532,352,211
|
IssuesEvent
|
2019-10-01 10:32:03
|
symfony/symfony
|
https://api.github.com/repos/symfony/symfony
|
closed
|
Possibility to add a default role to the AnonymousToken
|
Feature Security
|
**Description**
If I want to use ACLs and want to grant an action to a **not** logged in user, but this action should **not** be **allowed** for a **logged in** user, I have to use a separate role for the AnonymousToken.
Unfortunately there is no built-in possibility to add default roles to the AnonymousToken so I have to create a CompilerPass to override the Service Definition of the AnonymousAuthenticationListener to replace the line
`$token = new AnonymousToken($this->secret, 'anon.', []);`
with
`$token = new AnonymousToken($this->secret, 'anon.', ['ROLE_NOT_LOGGED_ID']);`
Maybe it would be more elegant to have the possibility to define a default "not logged in"-role in the security settings of the framework?
|
True
|
Possibility to add a default role to the AnonymousToken - **Description**
If I want to use ACLs and want to grant an action to a **not** logged in user, but this action should **not** be **allowed** for a **logged in** user, I have to use a separate role for the AnonymousToken.
Unfortunately there is no built-in possibility to add default roles to the AnonymousToken so I have to create a CompilerPass to override the Service Definition of the AnonymousAuthenticationListener to replace the line
`$token = new AnonymousToken($this->secret, 'anon.', []);`
with
`$token = new AnonymousToken($this->secret, 'anon.', ['ROLE_NOT_LOGGED_ID']);`
Maybe it would be more elegant to have the possibility to define a default "not logged in"-role in the security settings of the framework?
|
non_process
|
possibility to add a default role to the anonymoustoken description if i want use acls and want to grant an action to a not logged in user but this action should not be allowed to an logged in user i have to use a separate role for the anonymoustoken unfortunately there is no built in possibility to add default roles to the anonymoustoken so i have to create a compilerpass to override the service definition of the anonymousauthenticationlistener to replace the line token new anonymoustoken this secret anon with token new anonymoustoken this secret anon maybe it would be more elegant to have the possibility to define a default not logged in role in the security settings of the framework
| 0
|
19,677
| 26,031,908,166
|
IssuesEvent
|
2022-12-21 22:20:21
|
MicrosoftDocs/azure-devops-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
|
closed
|
runOnce strategy output variables do not use lifecycle-hookname, but the deployment job name
|
devops/prod doc-bug Pri2 devops-cicd-process/tech
|
This section of the documentation is incorrect or unclear:
For runOnce strategy: $[dependencies.<job-name>.outputs['**<lifecycle-hookname>**.<step-name>.<variable-name>']] (for example, $[dependencies.JobA.outputs['**Deploy**.StepA.VariableA']])
For runOnce strategy plus a resourceType: $[dependencies.<job-name>.outputs['**<lifecycle-hookname>**_<resource-name>.<step-name>.<variable-name>']]. (for example, $[dependencies.JobA.outputs['**Deploy**_VM1.StepA.VariableA']])
It should use the deployment job name instead of lifecycle-hookname, which is correctly mentioned in the documentation below.
For a runOnce job, specify the name of the job instead of the lifecycle hook:
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 5aeeaace-1c5b-a51b-e41f-f25b806155b8
* Version Independent ID: fd7ff690-b2e4-41c7-a342-e528b911c6e1
* Content: [Deployment jobs - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/deployment-jobs?view=azure-devops#feedback)
* Content Source: [docs/pipelines/process/deployment-jobs.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/main/docs/pipelines/process/deployment-jobs.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
1.0
|
runOnce strategy output variables do not use lifecycle-hookname, but the deployment job name - This section of the documentation is incorrect or unclear:
For runOnce strategy: $[dependencies.<job-name>.outputs['**<lifecycle-hookname>**.<step-name>.<variable-name>']] (for example, $[dependencies.JobA.outputs['**Deploy**.StepA.VariableA']])
For runOnce strategy plus a resourceType: $[dependencies.<job-name>.outputs['**<lifecycle-hookname>**_<resource-name>.<step-name>.<variable-name>']]. (for example, $[dependencies.JobA.outputs['**Deploy**_VM1.StepA.VariableA']])
It should use the deployment job name instead of lifecycle-hookname, which is correctly mentioned in the documentation below.
For a runOnce job, specify the name of the job instead of the lifecycle hook:
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 5aeeaace-1c5b-a51b-e41f-f25b806155b8
* Version Independent ID: fd7ff690-b2e4-41c7-a342-e528b911c6e1
* Content: [Deployment jobs - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/deployment-jobs?view=azure-devops#feedback)
* Content Source: [docs/pipelines/process/deployment-jobs.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/main/docs/pipelines/process/deployment-jobs.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
process
|
runonce strategy output variables do not use lifecycle hookname but the deployment job name this section of the documentation is incorrect or unclear for runonce strategy for example for runonce strategy plus a resourcetype for example it should use the deployment job name instead of lifecycle hookname which is correctly mentioned in the documentation below for a runonce job specify the name of the job instead of the lifecycle hook document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
| 1
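The fix the record above describes — for a runOnce strategy, the first segment inside `outputs[...]` is the deployment job's own name (optionally suffixed with `_<resource-name>`), not the lifecycle hook name — can be captured in a small helper. The helper is an illustrative sketch, not part of Azure Pipelines:

```python
def runonce_output_ref(job_name, step_name, variable_name, resource_name=None):
    """Build a cross-job reference to a runOnce deployment job's output variable."""
    # runOnce qualifies the output with the job name, not the lifecycle hook.
    qualifier = job_name if resource_name is None else f"{job_name}_{resource_name}"
    return f"$[dependencies.{job_name}.outputs['{qualifier}.{step_name}.{variable_name}']]"

print(runonce_output_ref("JobA", "StepA", "VariableA"))
# → $[dependencies.JobA.outputs['JobA.StepA.VariableA']]
```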
|
13,733
| 16,489,324,391
|
IssuesEvent
|
2021-05-24 23:49:12
|
Leviatan-Analytics/LA-data-processing
|
https://api.github.com/repos/Leviatan-Analytics/LA-data-processing
|
closed
|
Sprint retrospective [1]
|
Data Processing Sprint 1 Week 4
|
Estimated time: 1 hs per assignee
Objectives:
- Show client the sprint Progress
|
1.0
|
Sprint retrospective [1] - Estimated time: 1 hs per assignee
Objectives:
- Show client the sprint Progress
|
process
|
sprint retrospective estimated time hs per assignee objectives show client the sprint progress
| 1
|
26,058
| 12,343,391,523
|
IssuesEvent
|
2020-05-15 03:51:59
|
tuna/issues
|
https://api.github.com/repos/tuna/issues
|
closed
|
nix-channels/store 下载较大文件时有大概率断流
|
Service Issue
|
<!--
请使用此模板来报告 bug,并尽可能多地提供信息。
Please use this template while reporting a bug and provide as much info as possible.
-->
#### 发生了什么(What happened)
使用 nix 需要下载较大文件(>=10 MB)时,前若干秒有 2MB/s 左右的速度,之后有一定概率速度逐渐降至 0 。
#### 期望的现象(What you expected to happen)
以恒定可接受的速度下载完整个文件。
#### 如何重现(How to reproduce it)
(此为 `/nix/store/m4jc35q12cmfy832r9rkb90ci7rdih1x-dotnet-sdk-3.1.102` 的实际下载 URL )
```
> curl 'https://mirrors.tuna.tsinghua.edu.cn/nix-channels/store/nar/1zg1n83jzq0wx75pcdn7cqdm1ny4sckn2jbjbii3kyq9ppfwhdlx.nar.xz' >/dev/null
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
7 70.3M 7 5263k 0 0 16741 0 1:13:27 0:05:21 1:08:06 0
```
#### 其他事项(Anything else we need to know)
- 已排除代理影响。
- 小文件下载正常。
- Related: #797
#### 您的环境(Environment)
- 操作系统(OS Version):NixOS 20.09pre222244.22a3bf9fb9e (Nightingale), x86_64 Linux 5.4.32
- 浏览器(如果适用)(Browser version, if applicable):/
- 其他(Others):
路由信息
```
> mtr mirrors.tuna.tsinghua.edu.cn
<... some content omitted ...>
Host Loss% Snt Last Avg Best Wrst StDev
1. semicolon.lan 0.0% 16 0.9 1.2 0.9 2.3 0.4
2. 218.108.255.106 0.0% 16 2.6 3.6 1.4 17.0 4.2
3. 218.109.3.21 0.0% 16 2.1 2.8 1.6 6.8 1.4
4. 30.250.8.66 0.0% 16 2.0 2.1 1.6 2.9 0.4
5. 30.250.7.130 0.0% 16 4.3 2.4 1.9 4.3 0.6
6. 30.250.12.101 66.7% 16 3.1 2.4 2.1 3.1 0.4
7. 210.32.123.77 0.0% 16 2.4 3.1 2.0 8.8 1.7
8. (waiting for reply)
9. (waiting for reply)
10. 42.245.252.17 0.0% 16 8.2 8.5 7.3 10.6 0.9
11. (waiting for reply)
12. (waiting for reply)
13. (waiting for reply)
14. 101.4.115.105 93.3% 16 8.9 8.9 8.9 8.9 0.0
15. 101.4.117.30 0.0% 16 27.5 29.8 27.5 36.7 2.5
16. 101.4.116.118 0.0% 16 36.4 35.0 33.3 38.0 1.6
17. 101.4.112.69 0.0% 16 33.0 33.5 32.2 43.7 2.7
18. 101.4.113.234 0.0% 16 34.1 34.1 33.4 36.0 0.7
19. qhu1.cernet.net 0.0% 16 32.4 33.4 32.2 38.0 1.6
20. 118.229.4.34 0.0% 16 32.9 33.5 32.2 41.2 2.2
21. 118.229.2.138 0.0% 16 33.2 34.0 32.4 43.0 2.5
22. 118.229.2.145 0.0% 16 33.3 33.1 32.3 34.6 0.6
23. (waiting for reply)
24. 101.6.8.193 0.0% 16 33.7 35.3 32.4 52.3 5.4
```
|
1.0
|
nix-channels/store 下载较大文件时有大概率断流 - <!--
请使用此模板来报告 bug,并尽可能多地提供信息。
Please use this template while reporting a bug and provide as much info as possible.
-->
#### 发生了什么(What happened)
使用 nix 需要下载较大文件(>=10 MB)时,前若干秒有 2MB/s 左右的速度,之后有一定概率速度逐渐降至 0 。
#### 期望的现象(What you expected to happen)
以恒定可接受的速度下载完整个文件。
#### 如何重现(How to reproduce it)
(此为 `/nix/store/m4jc35q12cmfy832r9rkb90ci7rdih1x-dotnet-sdk-3.1.102` 的实际下载 URL )
```
> curl 'https://mirrors.tuna.tsinghua.edu.cn/nix-channels/store/nar/1zg1n83jzq0wx75pcdn7cqdm1ny4sckn2jbjbii3kyq9ppfwhdlx.nar.xz' >/dev/null
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
7 70.3M 7 5263k 0 0 16741 0 1:13:27 0:05:21 1:08:06 0
```
#### 其他事项(Anything else we need to know)
- 已排除代理影响。
- 小文件下载正常。
- Related: #797
#### 您的环境(Environment)
- 操作系统(OS Version):NixOS 20.09pre222244.22a3bf9fb9e (Nightingale), x86_64 Linux 5.4.32
- 浏览器(如果适用)(Browser version, if applicable):/
- 其他(Others):
路由信息
```
> mtr mirrors.tuna.tsinghua.edu.cn
<... some content omitted ...>
Host Loss% Snt Last Avg Best Wrst StDev
1. semicolon.lan 0.0% 16 0.9 1.2 0.9 2.3 0.4
2. 218.108.255.106 0.0% 16 2.6 3.6 1.4 17.0 4.2
3. 218.109.3.21 0.0% 16 2.1 2.8 1.6 6.8 1.4
4. 30.250.8.66 0.0% 16 2.0 2.1 1.6 2.9 0.4
5. 30.250.7.130 0.0% 16 4.3 2.4 1.9 4.3 0.6
6. 30.250.12.101 66.7% 16 3.1 2.4 2.1 3.1 0.4
7. 210.32.123.77 0.0% 16 2.4 3.1 2.0 8.8 1.7
8. (waiting for reply)
9. (waiting for reply)
10. 42.245.252.17 0.0% 16 8.2 8.5 7.3 10.6 0.9
11. (waiting for reply)
12. (waiting for reply)
13. (waiting for reply)
14. 101.4.115.105 93.3% 16 8.9 8.9 8.9 8.9 0.0
15. 101.4.117.30 0.0% 16 27.5 29.8 27.5 36.7 2.5
16. 101.4.116.118 0.0% 16 36.4 35.0 33.3 38.0 1.6
17. 101.4.112.69 0.0% 16 33.0 33.5 32.2 43.7 2.7
18. 101.4.113.234 0.0% 16 34.1 34.1 33.4 36.0 0.7
19. qhu1.cernet.net 0.0% 16 32.4 33.4 32.2 38.0 1.6
20. 118.229.4.34 0.0% 16 32.9 33.5 32.2 41.2 2.2
21. 118.229.2.138 0.0% 16 33.2 34.0 32.4 43.0 2.5
22. 118.229.2.145 0.0% 16 33.3 33.1 32.3 34.6 0.6
23. (waiting for reply)
24. 101.6.8.193 0.0% 16 33.7 35.3 32.4 52.3 5.4
```
|
non_process
|
nix channels store 下载较大文件时有大概率断流 请使用此模板来报告 bug,并尽可能多地提供信息。 please use this template while reporting a bug and provide as much info as possible 发生了什么(what happened) 使用 nix 需要下载较大文件( mb)时,前若干秒有 s 左右的速度,之后有一定概率速度逐渐降至 。 期望的现象(what you expected to happen) 以恒定可接受的速度下载完整个文件。 如何重现(how to reproduce it) (此为 nix store dotnet sdk 的实际下载 url ) curl dev null total received xferd average speed time time time current dload upload total spent left speed 其他事项(anything else we need to know) 已排除代理影响。 小文件下载正常。 related 您的环境(environment) 操作系统(os version):nixos nightingale linux 浏览器(如果适用)(browser version if applicable): 其他(others): 路由信息 mtr mirrors tuna tsinghua edu cn host loss snt last avg best wrst stdev semicolon lan waiting for reply waiting for reply waiting for reply waiting for reply waiting for reply cernet net waiting for reply
| 0
|
531,814
| 15,512,068,267
|
IssuesEvent
|
2021-03-12 00:54:55
|
kubernetes/minikube
|
https://api.github.com/repos/kubernetes/minikube
|
closed
|
Docker Image created during the functional testing are not getting removed once done
|
kind/cleanup priority/important-soon
|
**To Reproduce**
1. Run `make functional`
2. Check docker images by running `docker images`
**Output of `docker images` command:**
```
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
minikube-local-cache-test functional-20210220224626-594968 3d0e01f439b0 14 minutes ago 30B
```
Those images should be removed once the `make functional` gets finished. This will be applicable for some other `make` commands too.
If this is good to proceed I would like to send a PR for this
|
1.0
|
Docker Image created during the functional testing are not getting removed once done - **To Reproduce**
1. Run `make functional`
2. Check docker images by running `docker images`
**Output of `docker images` command:**
```
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
minikube-local-cache-test functional-20210220224626-594968 3d0e01f439b0 14 minutes ago 30B
```
Those images should be removed once the `make functional` gets finished. This will be applicable for some other `make` commands too.
If this is good to proceed I would like to send a PR for this
|
non_process
|
docker image created during the functional testing are not getting removed once done to reproduce run make functional check docker images by running docker images output of docker images command docker images repository tag image id created size minikube local cache test functional minutes ago those images should be removed once the make functional gets finished this will be applicable for some other make commands too if this is good to proceed i would like to send a pr for this
| 0
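The leftover image in the report above can be found mechanically by filtering `docker images` output on the test-image prefix; the prefix comes from the issue, while the parsing below is an illustrative sketch:

```python
def leftover_test_images(listing, prefix="minikube-local-cache-test"):
    """Pick repo:tag references to delete from `docker images`-style output."""
    refs = []
    for line in listing.splitlines():
        parts = line.split()
        # Skip the header row and any repository that is not a test image.
        if len(parts) >= 2 and parts[0].startswith(prefix):
            refs.append(f"{parts[0]}:{parts[1]}")
    return refs

listing = """\
REPOSITORY TAG IMAGE ID CREATED SIZE
minikube-local-cache-test functional-20210220224626-594968 3d0e01f439b0 14 minutes ago 30B"""
print(leftover_test_images(listing))
# → ['minikube-local-cache-test:functional-20210220224626-594968']
```

Each returned reference could then be passed to `docker rmi` at the end of the test run.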
|
13,565
| 16,105,054,692
|
IssuesEvent
|
2021-04-27 14:03:49
|
NationalSecurityAgency/ghidra
|
https://api.github.com/repos/NationalSecurityAgency/ghidra
|
closed
|
Wrong x86-64 disassembly for memory references with displacement of zero
|
Feature: Processor/x86 Type: Bug
|
**Describe the bug**
When disassembling instructions that have a displacement of zero, e.g. `mov qword ptr [reg+0], reg`, the instruction is incorrectly decoded.
As an example, the bytes `48 89 54 24 00` are decoded as `mov qword ptr [rsp + rsp*0x1], rdx`, when they should be decoded as `mov qword ptr [rsp+0], rdx`.
**Background**
A practical example is as follows:
```
0: 48 83 ec 18 sub rsp,0x18 ; LARGE_INTEGER value;
4: 4c 8b c9 mov r9,rcx
7: 4d 8b 41 18 mov r8,QWORD PTR [r9+0x18] ; int* buffer = (int*)Irp->AssociatedIrp.SystemBuffer;
b: 41 8b 08 mov ecx,DWORD PTR [r8] ; int address = *buffer;
e: 0f 32 rdmsr ; value.QuadPart = __readmsr(address);
10: 48 c1 e2 20 shl rdx,0x20
14: 48 0b d0 or rdx,rax
; ++ DECODING FAULT OCCURS HERE IN GHIDRA:
17: 48 89 54 24 00 mov QWORD PTR [rsp+0x0],rdx ; *(LARGE_INTEGER*)buffer = value;
1c: 8b c2 mov eax,edx
1e: 41 89 00 mov DWORD PTR [r8],eax
21: 8b 44 24 04 mov eax,DWORD PTR [rsp+0x4]
25: 41 89 40 04 mov DWORD PTR [r8+0x4],eax
29: 49 c7 41 38 08 00 00 mov QWORD PTR [r9+0x38],0x8 ; Irp->IoStatus.Information = 8
30: 00
31: 33 c0 xor eax,eax ; return 0
33: 48 83 c4 18 add rsp,0x18
37: c3 ret
```
**To reproduce**
Steps to reproduce the behavior:
1. Open an x86_64 binary containing the following instruction bytes: `48 89 54 24 00`.
2. Open disassembly at that instruction.
3. Observe that the instruction is incorrectly decoded as `mov qword ptr [rsp + rsp*0x1], rdx`.
**Expected behavior**
Instruction decoded as `mov qword ptr [rsp+0], rdx`.
**Environment (please complete the following information):**
- OS: Win10 1909
- Java Version 14.0.2
- Ghidra Version: 9.2 (2020-Nov-13 111 EST)
|
1.0
|
Wrong x86-64 disassembly for memory references with displacement of zero - **Describe the bug**
When disassembling instructions that have a displacement of zero, e.g. `mov qword ptr [reg+0], reg`, the instruction is incorrectly decoded.
As an example, the bytes `48 89 54 24 00` are decoded as `mov qword ptr [rsp + rsp*0x1], rdx`, when they should be decoded as `mov qword ptr [rsp+0], rdx`.
**Background**
A practical example is as follows:
```
0: 48 83 ec 18 sub rsp,0x18 ; LARGE_INTEGER value;
4: 4c 8b c9 mov r9,rcx
7: 4d 8b 41 18 mov r8,QWORD PTR [r9+0x18] ; int* buffer = (int*)Irp->AssociatedIrp.SystemBuffer;
b: 41 8b 08 mov ecx,DWORD PTR [r8] ; int address = *buffer;
e: 0f 32 rdmsr ; value.QuadPart = __readmsr(address);
10: 48 c1 e2 20 shl rdx,0x20
14: 48 0b d0 or rdx,rax
; ++ DECODING FAULT OCCURS HERE IN GHIDRA:
17: 48 89 54 24 00 mov QWORD PTR [rsp+0x0],rdx ; *(LARGE_INTEGER*)buffer = value;
1c: 8b c2 mov eax,edx
1e: 41 89 00 mov DWORD PTR [r8],eax
21: 8b 44 24 04 mov eax,DWORD PTR [rsp+0x4]
25: 41 89 40 04 mov DWORD PTR [r8+0x4],eax
29: 49 c7 41 38 08 00 00 mov QWORD PTR [r9+0x38],0x8 ; Irp->IoStatus.Information = 8
30: 00
31: 33 c0 xor eax,eax ; return 0
33: 48 83 c4 18 add rsp,0x18
37: c3 ret
```
**To reproduce**
Steps to reproduce the behavior:
1. Open an x86_64 binary containing the following instruction bytes: `48 89 54 24 00`.
2. Open disassembly at that instruction.
3. Observe that the instruction is incorrectly decoded as `mov qword ptr [rsp + rsp*0x1], rdx`.
**Expected behavior**
Instruction decoded as `mov qword ptr [rsp+0], rdx`.
**Environment (please complete the following information):**
- OS: Win10 1909
- Java Version 14.0.2
- Ghidra Version: 9.2 (2020-Nov-13 111 EST)
|
process
|
wrong disassembly for memory references with displacement of zero describe the bug when disassembling instructions that have a displacement of zero e g mov qword ptr reg the instruction is incorrectly decoded as an example the bytes are decoded as mov qword ptr rdx when they should be decoded as mov qword ptr rdx background a practical example is as follows ec sub rsp large integer value mov rcx mov qword ptr int buffer int irp associatedirp systembuffer b mov ecx dword ptr int address buffer e rdmsr value quadpart readmsr address shl rdx or rdx rax decoding fault occurs here in ghidra mov qword ptr rdx large integer buffer value mov eax edx mov dword ptr eax mov eax dword ptr mov dword ptr eax mov qword ptr irp iostatus information xor eax eax return add rsp ret to reproduce steps to reproduce the behavior open an binary containing the following instruction bytes open disassembly at that instruction observe that the instruction is incorrectly decoded as mov qword ptr rdx expected behavior instruction decoded as mov qword ptr rdx environment please complete the following information os java version ghidra version nov est
| 1
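The bytes in the report decode mechanically: REX.W + `89` is `MOV r/m64, r64`, ModRM `54` has mod=01 (disp8 follows) with rm=100, which forces a SIB byte, and SIB `24` encodes base=rsp with no index. A minimal decoder for just this instruction form (an illustrative sketch, not Ghidra's SLEIGH logic):

```python
REGS64 = ["rax", "rcx", "rdx", "rbx", "rsp", "rbp", "rsi", "rdi"]

def decode_mov_rm64_r64_disp8(code):
    """Decode REX.W 89 /r with a mod=01 (disp8) memory destination."""
    assert code[0] == 0x48 and code[1] == 0x89      # REX.W prefix + MOV r/m64, r64
    modrm = code[2]
    mod, reg, rm = modrm >> 6, (modrm >> 3) & 7, modrm & 7
    assert mod == 0b01                               # an 8-bit displacement follows
    if rm == 0b100:                                  # rm=100 -> a SIB byte is present
        sib = code[3]
        assert (sib >> 3) & 7 == 0b100               # index=100 means "no index"
        mem, disp = REGS64[sib & 7], code[4]
    else:
        mem, disp = REGS64[rm], code[3]
    return f"mov qword ptr [{mem} + {disp:#x}], {REGS64[reg]}"

print(decode_mov_rm64_r64_disp8(bytes([0x48, 0x89, 0x54, 0x24, 0x00])))
# → mov qword ptr [rsp + 0x0], rdx
```

The displacement is kept even when it is zero, which is exactly the distinction the issue says the decoder dropped.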
|
8,577
| 11,745,709,568
|
IssuesEvent
|
2020-03-12 10:18:39
|
googleapis/gax-java
|
https://api.github.com/repos/googleapis/gax-java
|
opened
|
Java 7 CI broken
|
dependencies priority: p1 type: cleanup type: process
|
some sort of cert issue. Not a huge surprise. The Java 7 TLS certs are a little out of date. We might just need to update the java image we use to Zulu or some such.
* What went wrong:
A problem occurred configuring root project 'gax-java'.
> Could not resolve all artifacts for configuration ':classpath'.
> Could not resolve gradle.plugin.com.dorongold.plugins:task-tree:1.3.1.
Required by:
project :
> Could not resolve gradle.plugin.com.dorongold.plugins:task-tree:1.3.1.
> Could not get resource 'https://plugins.gradle.org/m2/gradle/plugin/com/dorongold/plugins/task-tree/1.3.1/task-tree-1.3.1.pom'.
> Could not GET 'https://plugins.gradle.org/m2/gradle/plugin/com/dorongold/plugins/task-tree/1.3.1/task-tree-1.3.1.pom'.
> sun.security.validator.ValidatorException: PKIX path validation failed: java.security.cert.CertPathValidatorException: signature check failed
* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.
|
1.0
|
Java 7 CI broken - some sort of cert issue. Not a huge surprise. The Java 7 TLS certs are a little out of date. We might just need to update the java image we use to Zulu or some such.
* What went wrong:
A problem occurred configuring root project 'gax-java'.
> Could not resolve all artifacts for configuration ':classpath'.
> Could not resolve gradle.plugin.com.dorongold.plugins:task-tree:1.3.1.
Required by:
project :
> Could not resolve gradle.plugin.com.dorongold.plugins:task-tree:1.3.1.
> Could not get resource 'https://plugins.gradle.org/m2/gradle/plugin/com/dorongold/plugins/task-tree/1.3.1/task-tree-1.3.1.pom'.
> Could not GET 'https://plugins.gradle.org/m2/gradle/plugin/com/dorongold/plugins/task-tree/1.3.1/task-tree-1.3.1.pom'.
> sun.security.validator.ValidatorException: PKIX path validation failed: java.security.cert.CertPathValidatorException: signature check failed
* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.
|
process
|
java ci broken some sort of cert issue not a huge surprise the java tls certs are a little out of date we might just need to update the java image we use to zulu or some such what went wrong a problem occurred configuring root project gax java could not resolve all artifacts for configuration classpath could not resolve gradle plugin com dorongold plugins task tree required by project could not resolve gradle plugin com dorongold plugins task tree could not get resource could not get sun security validator validatorexception pkix path validation failed java security cert certpathvalidatorexception signature check failed try run with stacktrace option to get the stack trace run with info or debug option to get more log output run with scan to get full insights
| 1
|
8,974
| 12,091,508,036
|
IssuesEvent
|
2020-04-19 11:56:59
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Reading online csv file in a model causes QGIS to freeze
|
Bug Feedback Processing
|
Author Name: **Magnus Nilsson** (Magnus Nilsson)
Original Redmine Issue: [20971](https://issues.qgis.org/issues/20971)
Affected QGIS version: 3.4.3
Redmine category:processing/modeller
---
I am trying to use an online CSV file and plot its content as points using a Processing model. These steps cause QGIS to freeze:
1) Open a new model
2) Add a vector input
3) Add the algorithm for creating points from a table and configure it
4) Run the model and select an online csv file. I used https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/4.5_day.csv
5) QGIS now freezes before the model can run
|
1.0
|
Reading online csv file in a model causes QGIS to freeze - Author Name: **Magnus Nilsson** (Magnus Nilsson)
Original Redmine Issue: [20971](https://issues.qgis.org/issues/20971)
Affected QGIS version: 3.4.3
Redmine category:processing/modeller
---
I am trying to use an online CSV file and plot its content as points using a Processing model. These steps cause QGIS to freeze:
1) Open a new model
2) Add a vector input
3) Add the algorithm for creating points from a table and configure it
4) Run the model and select an online csv file. I used https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/4.5_day.csv
5) QGIS now freezes before the model can run
|
process
|
reading online csv file in a model causes qgis to freeze author name magnus nilsson magnus nilsson original redmine issue affected qgis version redmine category processing modeller i am trying to use an online csv file and plot its content as points using a processing model these steps causes qgis to freeze open a new model add a vector input add the algorithm for creating points from a table and configure it run the model and select an online csv file i used qgis now freezes before the model can run
| 1
|
13,549
| 16,091,393,286
|
IssuesEvent
|
2021-04-26 17:08:52
|
ORNL-AMO/AMO-Tools-Suite
|
https://api.github.com/repos/ORNL-AMO/AMO-Tools-Suite
|
opened
|
Basin heater calculation tweak
|
Calculator Process Cooling
|
Issue overview
--------------
quick change to line 160
```const double calc1 = ratedCapacity * 0.038676 / /*0.038676 = 12000 * 0.011 / 3413*/```
replace with
```const double calc1 = ratedCapacity * 0.0483304042 / /*0.0483304042 ≈ 15000 * 0.011 / 3413*/ ```
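For reference, the constants can be rechecked from the formula given in the inline comments (assuming capacity * 0.011 / 3413; note the requested constant 0.0483304042 is close to, but not exactly, 15000 * 0.011 / 3413):

```python
# Recompute both constants from the formula in the inline comments.
old_const = 12000 * 0.011 / 3413
new_const = 15000 * 0.011 / 3413
print(f"{old_const:.6f}")  # 0.038676
print(f"{new_const:.6f}")  # 0.048345
```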
|
1.0
|
Basin heater calculation tweak - Issue overview
--------------
quick change to line 160
```const double calc1 = ratedCapacity * 0.038676 / /*0.038676 = 12000 * 0.011 / 3413*/```
replace with
```const double calc1 = ratedCapacity * 0.0483304042/ /*0.038676 = 15000 * 0.011 / 3413*/ ```
|
process
|
basin heater calculation tweak issue overview quick change to line const double ratedcapacity replace with const double ratedcapacity
| 1
|
21,160
| 28,134,778,858
|
IssuesEvent
|
2023-04-01 08:39:59
|
anitsh/til
|
https://api.github.com/repos/anitsh/til
|
opened
|
Kaizen - Process Improvement
|
agile process
|
[Share To make great changes in your life, follow the philosophy of kaizen on LinkedIn](https://www.linkedin.com/sharing/share-offsite/?url=https://bigthink.com/smart-skills/kaizen/)
When we want to make a change in life, especially a big change, it can be daunting. If we decide to lose several pounds, it’s easy to give up when we see little results after months and months of sweat, wheezing, and eating salad. Any gargantuan task, from self-improvement to writing a dissertation, brings on such a weary chorus of sighs that we find ourselves plodding along half-heartedly.
We don’t do well with vastness, and a distant horizon makes a lot of people say, “Screw it, I’m off for a drink.” And this has knock-on effects. When we fail in our goals, [we are less likely](https://www.frontiersin.org/articles/10.3389/fpsyg.2021.704790/full) to do well in the future. Success begets success, and failure repeats itself.
We expect too much of ourselves and others
There’s something wrong with each of us. Even if you tried to live a faultless, blameless, perfect life, there is always something left to criticize. You might be nobly philanthropic, but perhaps you take too much time for yourself. You might be a doting and diligent daughter, but perhaps you don’t call your dad as much as you should. You might be a Trojan at work, but perhaps you spend a bit of company time on social media. No one is perfect.
But the point is not to be perfect, but to be better; flawlessness is impossible.
We live in an age where we expect a lot of people. Mistakes, no matter how innocent, have ruined careers. Forgiveness seems as rare as the Egyptian phoenix. Yet, to see both yourself and other people as temporarily disappointing God(s) isn’t healthy. Instead, we should focus not on being the best, but rather being better than you once were. As the Roman Stoic, Seneca, put it:
“I am not a ‘wise man,’ nor… shall I ever be. And so, require not from me that I should be equal to the best, but that I should be better than the wicked. It is enough for me if every day I reduce the number of my vices, and blame my mistakes.”
The problem, though, is that the command to “be better” is a classic example of a vague, unhelpful resolution that usually will be broken by lunch time. Trite, vapid, and ill-defined targets will get you nowhere. That is why the Japanese philosophy of kaizen (改善) is so powerful and so useful. It makes the insurmountable manageable and allows us to accomplish even the greatest of tasks.
The philosophy of kaizen
Kaizen is not some ancient, arcane secret buried deep within some lost monastic scrolls. It’s a business practice popularized in the 20th century by Toyota — as in the car manufacturer.
It literally translates as “good change,” and it’s the practice of gradual, continuous improvement. It’s the philosophy that says we can all better ourselves, but the best (and most sustainable) way to do so is slowly and in small steps. Toyota was once a textile company, and its transition to making cars was not an overnight revolution (which is called kaikaku). Instead, there was a change here, a shift there. Every day something was different, every week something was better, and when a month became a year, incredible change had been achieved.
We live in an age of quick fixes and instant gratification, but kaizen is neither. Its slow, determined improvement can seem pointlessly small and insignificant when taken alone. But just as many drops will one day make an ocean, kaizen can transform any life. As the days turn to years, you will look back on who you were with new eyes.
Kaizen is a [proven](https://www.oecd.org/dev/Impacts-of-Kaizen-management-on-workers.pdf), [effective](https://link.springer.com/article/10.1057/s41287-021-00459-0), and practical way by which to be better.
|
1.0
|
Kaizen - Process Improvement - [Share To make great changes in your life, follow the philosophy of kaizen on LinkedIn](https://www.linkedin.com/sharing/share-offsite/?url=https://bigthink.com/smart-skills/kaizen/)
When we want to make a change in life, especially a big change, it can be daunting. If we decide to lose several pounds, it’s easy to give up when we see little results after months and months of sweat, wheezing, and eating salad. Any gargantuan task, from self-improvement to writing a dissertation, brings on such a weary chorus of sighs that we find ourselves plodding along half-heartedly.
We don’t do well with vastness, and a distant horizon makes a lot of people say, “Screw it, I’m off for a drink.” And this has knock-on effects. When we fail in our goals, [we are less likely](https://www.frontiersin.org/articles/10.3389/fpsyg.2021.704790/full) to do well in the future. Success begets success, and failure repeats itself.
We expect too much of ourselves and others
There’s something wrong with each of us. Even if you tried to live a faultless, blameless, perfect life, there is always something left to criticize. You might be nobly philanthropic, but perhaps you take too much time for yourself. You might be a doting and diligent daughter, but perhaps you don’t call your dad as much as you should. You might be a Trojan at work, but perhaps you spend a bit of company time on social media. No one is perfect.
But the point is not to be perfect, but to be better; flawlessness is impossible.
We live in an age where we expect a lot of people. Mistakes, no matter how innocent, have ruined careers. Forgiveness seems as rare as the Egyptian phoenix. Yet, to see both yourself and other people as temporarily disappointing God(s) isn’t healthy. Instead, we should focus not on being the best, but rather being better than you once were. As the Roman Stoic, Seneca, put it:
“I am not a ‘wise man,’ nor… shall I ever be. And so, require not from me that I should be equal to the best, but that I should be better than the wicked. It is enough for me if every day I reduce the number of my vices, and blame my mistakes.”
The problem, though, is that the command to “be better” is a classic example of a vague, unhelpful resolution that usually will be broken by lunch time. Trite, vapid, and ill-defined targets will get you nowhere. That is why the Japanese philosophy of kaizen (改善) is so powerful and so useful. It makes the insurmountable manageable and allows us to accomplish even the greatest of tasks.
The philosophy of kaizen
Kaizen is not some ancient, arcane secret buried deep within some lost monastic scrolls. It’s a business practice popularized in the 20th century by Toyota — as in the car manufacturer.
It literally translates as “good change,” and it’s the practice of gradual, continuous improvement. It’s the philosophy that says we can all better ourselves, but the best (and most sustainable) way to do so is slowly and in small steps. Toyota was once a textile company, and its transition to making cars was not an overnight revolution (which is called kaikaku). Instead, there was a change here, a shift there. Every day something was different, every week something was better, and when a month became a year, incredible change had been achieved.
We live in an age of quick fixes and instant gratification, but kaizen is neither. Its slow, determined improvement can seem pointlessly small and insignificant when taken alone. But just as many drops will one day make an ocean, kaizen can transform any life. As the days turn to years, you will look back on who you were with new eyes.
Kaizen is a [proven](https://www.oecd.org/dev/Impacts-of-Kaizen-management-on-workers.pdf), [effective](https://link.springer.com/article/10.1057/s41287-021-00459-0), and practical way by which to be better.
|
process
|
kaizen process improvement when we want to make a change in life especially a big change it can be daunting if we decide to lose several pounds it’s easy to give up when we see little results after months and months of sweat wheezing and eating salad any gargantuan task from self improvement to writing a dissertation brings on such a weary chorus of sighs that we find ourselves plodding along half heartedly we don’t do well with vastness and a distant horizon makes a lot of people say “screw it i’m off for a drink ” and this has knock on effects when we fail in our goals to do well in the future success begets success and failure repeats itself we expect too much of ourselves and others there’s something wrong with each of us even if you tried to live a faultless blameless perfect life there is always something left to criticize you might be nobly philanthropic but perhaps you take too much time for yourself you might be a doting and diligent daughter but perhaps you don’t call your dad as much as you should you might be a trojan at work but perhaps you spend a bit of company time on social media no one is perfect but the point is not to be perfect but to be better flawlessness is impossible we live in an age where we expect a lot of people mistakes no matter how innocent have ruined careers forgiveness seems as rare as the egyptian phoenix yet to see both yourself and other people as temporarily disappointing god s isn’t healthy instead we should focus not on being the best but rather being better than you once were as the roman stoic seneca put it “i am not a ‘wise man ’ nor… shall i ever be and so require not from me that i should be equal to the best but that i should be better than the wicked it is enough for me if every day i reduce the number of my vices and blame my mistakes ” the problem though is that the command to “be better” is a classic example of a vague unhelpful resolution that usually will be broken by lunch time trite vapid and ill defined 
targets will get you nowhere that is why the japanese philosophy of kaizen 改善 is so powerful and so useful it makes the insurmountable manageable and allows us to accomplish even the greatest of tasks the philosophy of kaizen kaizen is not some ancient arcane secret buried deep within some lost monastic scrolls it’s a business practice popularized in the century by toyota — as in the car manufacturer it literally translates as “good change ” and it’s the practice of gradual continuous improvement it’s the philosophy that says we can all better ourselves but the best and most sustainable way to do so is slowly and in small steps toyota was once a textile company and its transition to making cars was not an overnight revolution which is called kaikaku instead there was a change here a shift there every day something was different every week something was better and when a month became a year incredible change had been achieved we live in an age of quick fixes and instant gratification but kaizen is neither its slow determined improvement can seem pointlessly small and insignificant when taken alone but just as many drops will one day make an ocean kaizen can transform any life as the days turn to years you will look back on who you were with new eyes kaizen is a and practical way by which to be better
| 1
|
1,986
| 4,816,827,194
|
IssuesEvent
|
2016-11-04 11:28:59
|
woesterduolf/Mission-reisbureau
|
https://api.github.com/repos/woesterduolf/Mission-reisbureau
|
opened
|
Booking overview
|
Boekingsprocess priority: highest Type:Feature
|
See mockup file (page 5)
Again we have the banner on the top and the banner on the left.
Apart from that, there is a main area that shows all the booking options the customer has selected so far. On top the booking number is displayed for easy communication with the travel agency.
Below that is all the information: date of arrival, date of departure, the city the customer has chosen, the hotel he has chosen, the room he has chosen. Also displayed is the total cost for the hotel.
Right below that we have the selected means of travel.
The bus and flying option will be discussed in a later page. Also included is the costs for the selected travel option. And concluding comes the total cost for the selected trip.
The customer can now do 3 things:
Finalize booking, which will bring him to the payment page
Save booking, which will take him to the save page
or cancel booking, which will bring him back to the beginning of the site.
|
1.0
|
Booking overview - See mockup file (page 5)
Again we have the banner on the top and the banner on the left.
Apart from that, there is a main area that shows all the booking options the customer has selected so far. On top the booking number is displayed for easy communication with the travel agency.
Below that is all the information: date of arrival, date of departure, the city the customer has chosen, the hotel he has chosen, the room he has chosen. Also displayed is the total cost for the hotel.
Right below that we have the selected means of travel.
The bus and flying option will be discussed in a later page. Also included is the costs for the selected travel option. And concluding comes the total cost for the selected trip.
The customer can now do 3 things:
Finalize booking, which will bring him to the payment page
Save booking, which will take him to the save page
or cancel booking, which will bring him back to the beginning of the site.
|
process
|
booking overview see mockup file page again we have the banner on the top and the banner on the left apart from that there is a main area that shows all the booking options the customer has selected so far on top the booking number is displayed for easy communication with the travel agency below that is all the information date of arrival date of departure the city the customer has chosen the hotel he has chosen the room he has chosen also displayed is the total cost for the hotel right below that we have the selected means of travel the bus and flying option will be discussed in a later page also included is the costs for the selected travel option and concluding comes the total cost for the selected trip the customer can now do thing finalize booking which will bring him to the payment page save booking which will take him to the save page or cancels booking which will bring him back to the beginning of the site
| 1
|
4,335
| 7,242,199,108
|
IssuesEvent
|
2018-02-14 06:16:38
|
muflihun/residue
|
https://api.github.com/repos/muflihun/residue
|
closed
|
Race condition causing deadlock when creating log file
|
area: log-processing edge-case type: bug
|
When log rotation truncates the file and file is being created. Last log message seen is
```
22:51:56,265 [AdminHandler] [INFO] FSR: Prepending 'level' format specifier in filename for logger [sample-app] as we have multiple filenames for levels
22:51:56,265 [AdminHandler] [INFO] FSR: [/tmp/logs/sample-app.log] => [/tmp/logs/backups/sample-app/] as [global-2018.log]
22:51:56,265 [AdminHandler] [INFO] FSR: Result: [/tmp/logs/backups/sample-app/] [2018.tar.gz] with [3] items
22:51:56,265 [AdminHandler] [vDETAILS] [log-rotator.cc:272] Ignoring rotating empty file /tmp/logs/sample-app-verbose.log
22:51:56,265 [AdminHandler] [vDETAILS] [log-rotator.cc:272] Ignoring rotating empty file /tmp/logs/sample-app-debug.log
22:51:56,265 [AdminHandler] [vDETAILS] [log-rotator.cc:261] Rotating [/tmp/logs/sample-app.log] => [/tmp/logs/backups/sample-app/global-2018.log] (1.2GB)
22:51:56,265 [LogDispatcher] [vDEBUG] [log-request-handler.cc:185] Force check: 0, clientRef: 0x7000098bddf0, *clientRef: muflihun00102030, bypassChecks: 1
22:51:56,265 [LogDispatcher] [ERROR] File not found [/tmp/logs/sample-app.log] [Logger: sample-app]. Creating...
22:51:56,265 [LogDispatcher] [INFO] Accessing file...
```
The next line updates the file permissions, but no lock is required from this point onwards, so it is not clear why this is happening.
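The rotation/creation race described above can be sketched generically (a hypothetical simplification, not residue's actual code): the rotator and the log dispatcher must serialize on the same lock, otherwise the dispatcher can observe the file mid-rename and recreate it at the wrong moment:

```python
import os
import threading

log_lock = threading.Lock()

def rotate(path: str, backup: str) -> None:
    # Rotation must hold the same lock the dispatcher uses; otherwise
    # the dispatcher can see the file disappear mid-rename.
    with log_lock:
        os.rename(path, backup)

def dispatch(path: str, line: str) -> None:
    with log_lock:
        # open(..., "a") creates the file if rotation just moved it away;
        # without the shared lock this creation races with rotate().
        with open(path, "a") as f:
            f.write(line + "\n")
```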
|
1.0
|
Race condition causing deadlock when creating log file - When log rotation truncates the file and file is being created. Last log message seen is
```
22:51:56,265 [AdminHandler] [INFO] FSR: Prepending 'level' format specifier in filename for logger [sample-app] as we have multiple filenames for levels
22:51:56,265 [AdminHandler] [INFO] FSR: [/tmp/logs/sample-app.log] => [/tmp/logs/backups/sample-app/] as [global-2018.log]
22:51:56,265 [AdminHandler] [INFO] FSR: Result: [/tmp/logs/backups/sample-app/] [2018.tar.gz] with [3] items
22:51:56,265 [AdminHandler] [vDETAILS] [log-rotator.cc:272] Ignoring rotating empty file /tmp/logs/sample-app-verbose.log
22:51:56,265 [AdminHandler] [vDETAILS] [log-rotator.cc:272] Ignoring rotating empty file /tmp/logs/sample-app-debug.log
22:51:56,265 [AdminHandler] [vDETAILS] [log-rotator.cc:261] Rotating [/tmp/logs/sample-app.log] => [/tmp/logs/backups/sample-app/global-2018.log] (1.2GB)
22:51:56,265 [LogDispatcher] [vDEBUG] [log-request-handler.cc:185] Force check: 0, clientRef: 0x7000098bddf0, *clientRef: muflihun00102030, bypassChecks: 1
22:51:56,265 [LogDispatcher] [ERROR] File not found [/tmp/logs/sample-app.log] [Logger: sample-app]. Creating...
22:51:56,265 [LogDispatcher] [INFO] Accessing file...
```
The next line updates the file permissions, but no lock is required from this point onwards, so it is not clear why this is happening.
|
process
|
race condition causing deadlock when creating log file when log rotation truncates the file and file is being created last log message seen is fsr prepending level format specifier in filename for logger as we have multiple filenames for levels fsr as fsr result with items ignoring rotating empty file tmp logs sample app verbose log ignoring rotating empty file tmp logs sample app debug log rotating force check clientref clientref bypasschecks file not found creating accessing file the next line is updating permission but there is no lock required from this point onwards so not sure why is this happening
| 1
|
21,201
| 28,238,789,666
|
IssuesEvent
|
2023-04-06 04:42:03
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
opened
|
[Mirror] rules_go and gazelle deps
|
P2 type: process team-OSS mirror request
|
### Please list the URLs of the archives you'd like to mirror:
https://github.com/golang/tools/archive/refs/tags/v0.7.0.zip
https://github.com/golang/sys/archive/refs/tags/v0.6.0.zip
https://github.com/golang/xerrors/archive/04be3eba64a22a838cdb17b8dca15a52871c08b4.zip
https://github.com/protocolbuffers/protobuf-go/archive/refs/tags/v1.30.0.zip
https://github.com/golang/protobuf/archive/refs/tags/v1.5.3.zip
https://github.com/mwitkow/go-proto-validators/archive/refs/tags/v0.3.2.zip
https://github.com/gogo/protobuf/archive/refs/tags/v1.3.2.zip
https://github.com/googleapis/go-genproto/archive/6ac7f18bb9d5eeeb13a9f1ae4f21e4374a1952f8.zip
https://github.com/googleapis/googleapis/archive/83c3605afb5a39952bf0a0809875d41cf2a558ca.zip
https://github.com/golang/mock/archive/refs/tags/v1.7.0-rc.1.zip
|
1.0
|
[Mirror] rules_go and gazelle deps - ### Please list the URLs of the archives you'd like to mirror:
https://github.com/golang/tools/archive/refs/tags/v0.7.0.zip
https://github.com/golang/sys/archive/refs/tags/v0.6.0.zip
https://github.com/golang/xerrors/archive/04be3eba64a22a838cdb17b8dca15a52871c08b4.zip
https://github.com/protocolbuffers/protobuf-go/archive/refs/tags/v1.30.0.zip
https://github.com/golang/protobuf/archive/refs/tags/v1.5.3.zip
https://github.com/mwitkow/go-proto-validators/archive/refs/tags/v0.3.2.zip
https://github.com/gogo/protobuf/archive/refs/tags/v1.3.2.zip
https://github.com/googleapis/go-genproto/archive/6ac7f18bb9d5eeeb13a9f1ae4f21e4374a1952f8.zip
https://github.com/googleapis/googleapis/archive/83c3605afb5a39952bf0a0809875d41cf2a558ca.zip
https://github.com/golang/mock/archive/refs/tags/v1.7.0-rc.1.zip
|
process
|
rules go and gazelle deps please list the urls of the archives you d like to mirror
| 1
|
7,737
| 8,040,998,684
|
IssuesEvent
|
2018-07-31 00:17:40
|
terraform-providers/terraform-provider-aws
|
https://api.github.com/repos/terraform-providers/terraform-provider-aws
|
closed
|
Cannot remove rules from aws_waf_web_acl
|
bug service/waf
|
<!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Terraform Version
<!--- Please run `terraform -v` to show the Terraform core version and provider version(s). If you are not running the latest version of Terraform or the provider, please upgrade because your issue may have already been fixed. [Terraform documentation on provider versioning](https://www.terraform.io/docs/configuration/providers.html#provider-versions). --->
Terraform v0.11.7
+ provider.aws v1.15.0
### Affected Resource(s)
<!--- Please list the affected resources and data sources. --->
* aws_waf_web_acl
* aws_waf_rule
### Terraform Configuration Files
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
#### Before
```hcl
resource "aws_waf_web_acl" "global_waf_acl" {
name = "GlobalWAF"
metric_name = "GlobalWAF"
default_action { type = "ALLOW" }
rules {
action { type = "BLOCK" }
priority = 1
rule_id = "${aws_waf_rate_based_rule.brute_force_rule.id}"
type = "RATE_BASED"
}
rules {
action { type = "BLOCK" }
priority = 2
rule_id = "${aws_waf_rule.auto_block_list_rule.id}"
type = "REGULAR"
}
}
```
#### First attempt
```hcl
resource "aws_waf_web_acl" "global_waf_acl" {
name = "GlobalWAF"
metric_name = "GlobalWAF"
default_action { type = "ALLOW" }
rules {
action { type = "BLOCK" }
priority = 1
rule_id = "${aws_waf_rate_based_rule.brute_force_rule.id}"
type = "RATE_BASED"
}
rules {
action { type = "BLOCK" }
priority = 2
rule_id = "${aws_waf_rule.new_rule.id}"
type = "REGULAR"
}
rules {
action { type = "BLOCK" }
priority = 3
rule_id = "${aws_waf_rule.auto_block_list_rule.id}"
type = "REGULAR"
}
}
```
`* aws_waf_web_acl.global_waf_acl: Error Updating WAF ACL: Error Updating WAF ACL: ValidationException: Cannot allow rule <ID> with priority 2. Another rule already has this priority.`
#### Second attempt
```hcl
resource "aws_waf_web_acl" "global_waf_acl" {
name = "GlobalWAF"
metric_name = "GlobalWAF"
default_action { type = "ALLOW" }
rules {
action { type = "BLOCK" }
priority = 1
rule_id = "${aws_waf_rate_based_rule.brute_force_rule.id}"
type = "RATE_BASED"
}
}
```
`aws_waf_web_acl.global_waf_acl: Modifications complete after 1s`
All rules are still attached to WAF in AWS Console
### Expected Behavior
I should be able to update rule priorities.
I should also be able to remove rules and have them removed from the WAF
### Actual Behavior
Priority conflict.
Rules still present.
### Steps to Reproduce
See above HCL
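The priority collision behind the first error follows a generic pattern: applying a desired rule set by inserting new entries before deleting stale ones trips over priorities that are still occupied. A hypothetical two-phase updater (not the provider's actual code) avoids it:

```python
def update_rules(current: dict, desired: dict) -> dict:
    """current/desired map priority -> rule_id.

    Phase 1 deletes entries that are gone or whose priority changed;
    phase 2 then inserts the desired entries. Inserting first would
    raise the priority conflict the ValidationException reports."""
    for prio, rule in list(current.items()):
        if desired.get(prio) != rule:
            del current[prio]
    for prio, rule in desired.items():
        current[prio] = rule
    return current
```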
|
1.0
|
Cannot remove rules from aws_waf_web_acl - <!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Terraform Version
<!--- Please run `terraform -v` to show the Terraform core version and provider version(s). If you are not running the latest version of Terraform or the provider, please upgrade because your issue may have already been fixed. [Terraform documentation on provider versioning](https://www.terraform.io/docs/configuration/providers.html#provider-versions). --->
Terraform v0.11.7
+ provider.aws v1.15.0
### Affected Resource(s)
<!--- Please list the affected resources and data sources. --->
* aws_waf_web_acl
* aws_waf_rule
### Terraform Configuration Files
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
#### Before
```hcl
resource "aws_waf_web_acl" "global_waf_acl" {
name = "GlobalWAF"
metric_name = "GlobalWAF"
default_action { type = "ALLOW" }
rules {
action { type = "BLOCK" }
priority = 1
rule_id = "${aws_waf_rate_based_rule.brute_force_rule.id}"
type = "RATE_BASED"
}
rules {
action { type = "BLOCK" }
priority = 2
rule_id = "${aws_waf_rule.auto_block_list_rule.id}"
type = "REGULAR"
}
}
```
#### First attempt
```hcl
resource "aws_waf_web_acl" "global_waf_acl" {
name = "GlobalWAF"
metric_name = "GlobalWAF"
default_action { type = "ALLOW" }
rules {
action { type = "BLOCK" }
priority = 1
rule_id = "${aws_waf_rate_based_rule.brute_force_rule.id}"
type = "RATE_BASED"
}
rules {
action { type = "BLOCK" }
priority = 2
rule_id = "${aws_waf_rule.new_rule.id}"
type = "REGULAR"
}
rules {
action { type = "BLOCK" }
priority = 3
rule_id = "${aws_waf_rule.auto_block_list_rule.id}"
type = "REGULAR"
}
}
```
`* aws_waf_web_acl.global_waf_acl: Error Updating WAF ACL: Error Updating WAF ACL: ValidationException: Cannot allow rule <ID> with priority 2. Another rule already has this priority.`
#### Second attempt
```hcl
resource "aws_waf_web_acl" "global_waf_acl" {
name = "GlobalWAF"
metric_name = "GlobalWAF"
default_action { type = "ALLOW" }
rules {
action { type = "BLOCK" }
priority = 1
rule_id = "${aws_waf_rate_based_rule.brute_force_rule.id}"
type = "RATE_BASED"
}
}
```
`aws_waf_web_acl.global_waf_acl: Modifications complete after 1s`
All rules are still attached to WAF in AWS Console
### Expected Behavior
I should be able to update rule priorities.
I should also be able to remove rules and have them removed from the WAF
### Actual Behavior
Priority conflict.
Rules still present.
### Steps to Reproduce
See above HCL
|
non_process
|
cannot remove rules from aws waf web acl community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or me too comments they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment terraform version terraform provider aws affected resource s aws waf web acl aws waf rule terraform configuration files before hcl resource aws waf web acl global waf acl name globalwaf metric name globalwaf default action type allow rules action type block priority rule id aws waf rate based rule brute force rule id type rate based rules action type block priority rule id aws waf rule auto block list rule id type regular first attempt hcl resource aws waf web acl global waf acl name globalwaf metric name globalwaf default action type allow rules action type block priority rule id aws waf rate based rule brute force rule id type rate based rules action type block priority rule id aws waf rule new rule id type regular rules action type block priority rule id aws waf rule auto block list rule id type regular aws waf web acl global waf acl error updating waf acl error updating waf acl validationexception cannot allow rule with priority another rule already has this priority second attempt hcl resource aws waf web acl global waf acl name globalwaf metric name globalwaf default action type allow rules action type block priority rule id aws waf rate based rule brute force rule id type rate based aws waf web acl global waf acl modifications complete after all rules are still attached to waf in aws console expected behavior i should be able to update rule priorities i should also be able to remove rules and have them removed from the waf actual behavior priority conflict rules still present steps to reproduce see above hcl
| 0
|
3,525
| 6,564,767,995
|
IssuesEvent
|
2017-09-08 04:05:17
|
zero-os/0-Disk
|
https://api.github.com/repos/zero-os/0-Disk
|
closed
|
redo switch-to-slave-cluster feature using the stderr logger
|
process_duplicate type_feature
|
Originally the `switch-to-slave-cluster` was implemented as follows:
1. read the config yaml file (which used to be the way to store configs, and it contained _all_ configurations) ;
2. use the slave cluster as the primary cluster for a given vdisk;
3. delete the slave cluster for the vdisk;
However, since #330 it has been decided (in agreement with @FastGeert) to only ever read from the config source (etcd), at least within the 0-Disk codebase. It's only the upper layers which write the config. Instead we should notify the upper layers using the stderr logger as described in issue #300, as 0-Disk doesn't have enough knowledge to properly do the switching of clusters.
In milestone 6 the original feature as described above has been disabled.
Once issue #300 is resolved, this feature can be re-implemented and enabled.
|
1.0
|
redo switch-to-slave-cluster feature using the stderr logger - Originally the `switch-to-slave-cluster` was implemented as follows:
1. read the config yaml file (which used to be the way to store configs, and it contained _all_ configurations) ;
2. use the slave cluster as the primary cluster for a given vdisk;
3. delete the slave cluster for the vdisk;
However, since #330 it has been decided (in agreement with @FastGeert) to only ever read from the config source (etcd), at least within the 0-Disk codebase. It's only the upper layers which write the config. Instead we should notify the upper layers using the stderr logger as described in issue #300, as 0-Disk doesn't have enough knowledge to properly do the switching of clusters.
In milestone 6 the original feature as described above has been disabled.
Once issue #300 is resolved, this feature can be re-implemented and enabled.
|
process
|
redo switch to slave cluster feature using the stderr logger originally the switch to slave cluster was implemented as follows read the config yaml file which used to be the way to store configs and it contained all configurations use the slave cluster as the primary cluster for a given vdisk delete the slave cluster for the vdisk however since it has been decided in agreement with fastgeert to only ever read from the config source etcd at least within the disk codebase it s only the upper layers which write the config instead we should notify the upper layers using the stderr layers as described in issue as disk doesn t have enough knowledge to properly do the switching of clusters in milestone the original feature as described above has been disabled once issue is resolved this feature can be re implemented and enabled
| 1
|
15,852
| 20,032,075,970
|
IssuesEvent
|
2022-02-02 07:43:54
|
plazi/treatmentBank
|
https://api.github.com/repos/plazi/treatmentBank
|
opened
|
processing: parallel processing?
|
question processing
|
At the moment, something is processing at Frankfurt, and all the daily processing is on hold, which means we also have no output of new taxa on a daily basis and we can't proceed with various projects:

Couldn't we find a solution to run these large jobs either at a time with little activity or, probably better (also for the future), do this on a separate machine (virtual machine, instance, whatever the technical term is)?
|
1.0
|
processing: parallel processing? - At the moment, something is processing at Frankfurt, and all the daily processing is on hold, which means we also have no output of new taxa on a daily basis and we can't proceed with various projects:

Couldn't we find a solution to run these large jobs either at a time with little activity or, probably better (also for the future), do this on a separate machine (virtual machine, instance, whatever the technical term is)?
|
process
|
processing parallel processing at the moment something is processing at frankfurt and all the daily processing is on hold which means we have also no output of new taxa on a daily base and we can t proceed with various projects couldn t we find a solution to run these large jobs either at a time with little activity or probably better also for the future to do this on a separate machine virtual machine instance whatever the technical term is
| 1
|
82,794
| 16,040,893,032
|
IssuesEvent
|
2021-04-22 07:44:39
|
smeas/Beer-and-Plunder
|
https://api.github.com/repos/smeas/Beer-and-Plunder
|
closed
|
Set up basic brawl system
|
4p code
|
**Description**
Set up the basic viking brawl system.
**Subtasks**
- [x] Vikings enter brawl state when reaching below a threshold
- [x] Vikings can leave the brawl state when a condition is met
- [x] There is an indicator to when a viking is brawling
- [x] The brawling spreads to nearby tables
- [ ] Entering a brawl causes damage to the tavern
|
1.0
|
Set up basic brawl system - **Description**
Set up the basic viking brawl system.
**Subtasks**
- [x] Vikings enter brawl state when reaching below a threshold
- [x] Vikings can leave the brawl state when a condition is met
- [x] There is an indicator to when a viking is brawling
- [x] The brawling spreads to nearby tables
- [ ] Entering a brawl causes damage to the tavern
|
non_process
|
set up basic brawl system description set up the basic viking brawl system subtasks vikings enter brawl state when reaching below a threshold vikings can leave the brawl state when a condition is met there is an indicator to when a viking is brawling the brawling spreads to nearby tables entering a brawl causes damage to the tavern
| 0
|
84,360
| 3,663,926,478
|
IssuesEvent
|
2016-02-19 09:16:16
|
bedita/bedita
|
https://api.github.com/repos/bedita/bedita
|
opened
|
YouTube shortened URL are not interpreted
|
Module - Multimedia Priority - Normal Topic - Core Type - Enhancement
|
Since 2009, Google and YouTube have shortened YouTube URLs, i.e.:
https://www.youtube.com/watch?v=f_p9SO7u2HE[…]
becomes
https://youtu.be/f_p9SO7u2HE
This kind of URL does not work when adding media "by URL".
|
1.0
|
YouTube shortened URL are not interpreted - Since 2009, Google and YouTube have shortened YouTube URLs, i.e.:
https://www.youtube.com/watch?v=f_p9SO7u2HE[…]
becomes
https://youtu.be/f_p9SO7u2HE
This kind of URL does not work when adding media "by URL".
|
non_process
|
youtube shortened url are not interpreted since google and youtube shortens youtube urls i e becomes this kind of urls are not working in adding media by url
| 0
|
4,355
| 7,260,435,274
|
IssuesEvent
|
2018-02-18 09:50:25
|
qgis/QGIS-Documentation
|
https://api.github.com/repos/qgis/QGIS-Documentation
|
closed
|
[FEATURE] Remove c++ geometry snapper plugin
|
Automatic new feature Processing
|
Original commit: https://github.com/qgis/QGIS/commit/9b667d1e8a0759a8f4d807cae98e7e4224dbf5ac by nyalldawson
All functionality is now available through analysis lib + processing
algorithm.
Marked as feature for documentation + changelog flagging
|
1.0
|
[FEATURE] Remove c++ geometry snapper plugin - Original commit: https://github.com/qgis/QGIS/commit/9b667d1e8a0759a8f4d807cae98e7e4224dbf5ac by nyalldawson
All functionality is now available through analysis lib + processing
algorithm.
Marked as feature for documentation + changelog flagging
|
process
|
remove c geometry snapper plugin original commit by nyalldawson all functionality is now available through analysis lib processing algorithm marked as feature for documentation changelog flagging
| 1
|
333,398
| 10,121,658,648
|
IssuesEvent
|
2019-07-31 16:04:27
|
TykTechnologies/tyk
|
https://api.github.com/repos/TykTechnologies/tyk
|
closed
|
New MW - hmac signature to the message between tyk and the upstream
|
Priority: Medium customer request enhancement
|
**Do you want to request a *feature* or report a *bug*?**
feature
**What is the current behavior?**
Doesn't exist.
At the moment if your upstream cannot do mTLS, they can't properly verify the client. This solution will enable the upstream to verify requests coming from Tyk if they implement the code to verify this signature.
**What is the expected behavior?**
Tyk is doing a similar thing when verifying the signature in the request, as described [here](https://tyk.io/docs/security/your-apis/hmac-signatures/) (using [draft 5](https://tools.ietf.org/html/draft-cavage-http-signatures-05)).
We could possibly add support for signing the request that comes out of Tyk after all the other MW have been executed.
Special cases:
- Post plugin - possibly run this signature after running the plugin.
- Virtual endpoint - possibly not working at all along with this MW.
Note:
- This is the latest [draft](https://tools.ietf.org/html/draft-cavage-http-signatures-10#section-4.1.1), but it has expired last year.
- Later stage - consider supporting RSA based signatures to both request and response.
**If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem**
**Which versions of Tyk affected by this issue? Did this work in previous versions of Tyk?**
2.8
|
1.0
|
New MW - hmac signature to the message between tyk and the upstream - **Do you want to request a *feature* or report a *bug*?**
feature
**What is the current behavior?**
Doesn't exist.
At the moment if your upstream cannot do mTLS, they can't properly verify the client. This solution will enable the upstream to verify requests coming from Tyk if they implement the code to verify this signature.
**What is the expected behavior?**
Tyk is doing a similar thing when verifying the signature in the request, as described [here](https://tyk.io/docs/security/your-apis/hmac-signatures/) (using [draft 5](https://tools.ietf.org/html/draft-cavage-http-signatures-05)).
We could possibly add support for signing the request that comes out of Tyk after all the other MW have been executed.
Special cases:
- Post plugin - possibly run this signature after running the plugin.
- Virtual endpoint - possibly not working at all along with this MW.
Note:
- This is the latest [draft](https://tools.ietf.org/html/draft-cavage-http-signatures-10#section-4.1.1), but it has expired last year.
- Later stage - consider supporting RSA based signatures to both request and response.
**If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem**
**Which versions of Tyk affected by this issue? Did this work in previous versions of Tyk?**
2.8
|
non_process
|
new mw hmac signature to the message between tyk and the upstream do you want to request a feature or report a bug feature what is the current behavior doesn t exist at the moment if your upstream cannot do mtls they can t properly verify the client this solution will enable upstream to verify requests coming from tyk if they will implement the code to verify this signature what is the expected behavior tyk is doing a similar thing when verifying the signature in the request as described using we can possibly add support to sign the request that is coming out of tyk after all the other mw have been executed special cases post plugin possibly run this signature after running the plugin virtual endpoint possibly not working at all along with this mw note this is the latest but it has expired last year later stage consider supporting rsa based signatures to both request and response if the current behavior is a bug please provide the steps to reproduce and if possible a minimal demo of the problem which versions of tyk affected by this issue did this work in previous versions of tyk
| 0
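The draft-cavage signing scheme referenced in the Tyk feature request above can be sketched in a few lines. This is an illustrative outline only, not Tyk's implementation: the secret, header list, and request values below are made-up, and a real implementation would also emit a `Signature` header carrying the `keyId`, `algorithm`, and `headers` parameters that the draft specifies.

```python
import base64
import hashlib
import hmac

def signing_string(method, path, headers, header_names):
    # Per the draft-cavage HTTP signatures scheme, the signing string is one
    # line per covered header; the "(request-target)" pseudo-header covers
    # the HTTP method and path.
    lines = []
    for name in header_names:
        if name == "(request-target)":
            lines.append(f"(request-target): {method.lower()} {path}")
        else:
            lines.append(f"{name}: {headers[name]}")
    return "\n".join(lines)

def sign(secret, method, path, headers, header_names):
    # HMAC-SHA256 over the signing string, base64-encoded.
    mac = hmac.new(secret,
                   signing_string(method, path, headers, header_names).encode(),
                   hashlib.sha256)
    return base64.b64encode(mac.digest()).decode()

def verify(secret, method, path, headers, header_names, signature):
    # Recompute the signature and compare in constant time.
    expected = sign(secret, method, path, headers, header_names)
    return hmac.compare_digest(expected, signature)
```

An upstream that shares the secret can then verify each request it receives from the gateway, which is the property the feature request is after.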
|
11,933
| 14,706,233,199
|
IssuesEvent
|
2021-01-04 19:30:07
|
timberio/vector
|
https://api.github.com/repos/timberio/vector
|
closed
|
Ability to whitelist and rename journald fields
|
domain: processing domain: remap domain: sources have: should needs: requirements source: journald type: enhancement
|
The [`journald` source fields](https://vector.dev/docs/reference/sources/journald/#output) are somewhat excessive. We should think about a simple way for users to whitelist and rename fields. There are a couple of ways we could go with this:
1. Implement this in the `journald` source itself.
2. Build a transform that can handle this. (ex: #750 #377)
This poses an interesting question, which seems to pop up more often now, which UX is better?
1. We currently provide options to alter fields at the source, such as the `host_key` option.
2. Bundling this behavior in the `journald` source seems restrictive since I could see this being used by other sources.
For this particular case, I lean towards a transform because it seems sufficiently generic. I also don't know what we would gain by doing this in the source. Additionally, if we make progress on #1447 it makes it simpler for users to chain transforms together. This encourages us to prioritize composability of features like this.
|
1.0
|
Ability to whitelist and rename journald fields - The [`journald` source fields](https://vector.dev/docs/reference/sources/journald/#output) are somewhat excessive. We should think about a simple way for users to whitelist and rename fields. There are a couple of ways we could go with this:
1. Implement this in the `journald` source itself.
2. Build a transform that can handle this. (ex: #750 #377)
This poses an interesting question, which seems to pop up more often now, which UX is better?
1. We currently provide options to alter fields at the source, such as the `host_key` option.
2. Bundling this behavior in the `journald` source seems restrictive since I could see this being used by other sources.
For this particular case, I lean towards a transform because it seems sufficiently generic. I also don't know what we would gain by doing this in the source. Additionally, if we make progress on #1447 it makes it simpler for users to chain transforms together. This encourages us to prioritize composability of features like this.
|
process
|
ability to whitelist and rename journald fields the are somewhat excessive we should think about a simple way for users to whitelist and rename fields there are a couple of ways we could go with this implement this in the journald source itself build a transform that can handle this ex this poses an interesting question which seems to pop up more often now which ux is better we currently provide options to alter fields at the source such as the host key option bundling this behavior in the journald source seems restrictive since i could see this being used by other sources for this particular case i lean towards a transform because it seems sufficiently generic i also don t know what we would gain by doing this in the source additionally if we make progress on it makes it simpler for users to chain transforms together this encourages us to prioritize composability of features like this
| 1
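The whitelist-and-rename behavior discussed in the journald issue above is small enough to sketch generically. The following is a hypothetical transform over a journald-style event, not Vector's actual configuration or API; the field names and rename map are illustrative:

```python
def shape_event(event, allow, rename):
    """Keep only whitelisted fields, then apply a rename map."""
    # Drop everything not explicitly allowed.
    kept = {k: v for k, v in event.items() if k in allow}
    # Rename the survivors; fields without a mapping keep their name.
    return {rename.get(k, k): v for k, v in kept.items()}
```

Because the logic is this generic, implementing it as a chainable transform (rather than inside the `journald` source) lets any source benefit, which is the composability argument the issue makes.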
|
194,094
| 15,396,921,868
|
IssuesEvent
|
2021-03-03 21:21:22
|
pjnalls/Angularization
|
https://api.github.com/repos/pjnalls/Angularization
|
opened
|
Storybook: Research and experiment with reading files in components as strings in order to show all code snippets of a component
|
documentation enhancement integration research
|
Research references:
- https://github.com/storybookjs/storybook/issues/1843
- https://stackoverflow.com/questions/53954558/how-to-turn-a-file-into-a-string-in-nodejs
|
1.0
|
Storybook: Research and experiment with reading files in components as strings in order to show all code snippets of a component - Research references:
- https://github.com/storybookjs/storybook/issues/1843
- https://stackoverflow.com/questions/53954558/how-to-turn-a-file-into-a-string-in-nodejs
|
non_process
|
storybook research and experiment with reading files in components as strings in order to show all code snippets of a component research references
| 0
|
119,108
| 25,469,559,382
|
IssuesEvent
|
2022-11-25 08:58:22
|
renovatebot/renovate
|
https://api.github.com/repos/renovatebot/renovate
|
opened
|
CodeCommit: Better user identification
|
type:feature priority-2-high status:ready platform:codecommit
|
### What would you like Renovate to be able to do?
Currently, we have: https://github.com/renovatebot/renovate/blob/325a11257de46e68f2eb4400041763554f05cbd5/lib/modules/platform/codecommit/index.md?plain=1#L44
Ideally we would like to avoid this recommendation to have `IAMReadOnlyAccess`
### If you have any ideas on how this should be implemented, please tell us here.
Currently in https://github.com/renovatebot/renovate/blob/main/lib/modules/platform/codecommit/iam-client.ts we:
- Call GetUser() to get user.arn
- If it fails, parse the error message to learn the user arn that way
Instead, we should try to find a way to get the user ARN which:
- uses least privilege, and
- does not rely on formatting of error messages
### Is this a feature you are interested in implementing yourself?
No
|
1.0
|
CodeCommit: Better user identification - ### What would you like Renovate to be able to do?
Currently, we have: https://github.com/renovatebot/renovate/blob/325a11257de46e68f2eb4400041763554f05cbd5/lib/modules/platform/codecommit/index.md?plain=1#L44
Ideally we would like to avoid this recommendation to have `IAMReadOnlyAccess`
### If you have any ideas on how this should be implemented, please tell us here.
Currently in https://github.com/renovatebot/renovate/blob/main/lib/modules/platform/codecommit/iam-client.ts we:
- Call GetUser() to get user.arn
- If it fails, parse the error message to learn the user arn that way
Instead, we should try to find a way to get the user ARN which:
- uses least privilege, and
- does not rely on formatting of error messages
### Is this a feature you are interested in implementing yourself?
No
|
non_process
|
codecommit better user identification what would you like renovate to be able to do currently we have ideally we would like to avoid this recommendation to have iamreadonlyaccess if you have any ideas on how this should be implemented please tell us here currently in we call getuser to get user arn if it fails parse the error message to learn the user arn that way instead we should try to find a way to get the user arn which uses least privilege and does not rely on formatting of error messages is this a feature you are interested in implementing yourself no
| 0
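The fallback described in the Renovate issue above — recovering the caller's ARN from an AccessDenied error — boils down to pulling an ARN out of free-form text, which is exactly why the issue calls it fragile. A minimal sketch of that parsing follows; the sample error message in the usage is a typical AccessDenied shape, not a guaranteed format:

```python
import re

# IAM/STS user ARNs look like arn:aws:iam::<12-digit account>:<resource>.
ARN_RE = re.compile(r"arn:aws:(?:iam|sts)::\d{12}:\S+")

def arn_from_error(message):
    # Return the first ARN embedded in an error message, or None.
    m = ARN_RE.search(message)
    return m.group(0) if m else None
```

A least-privilege alternative worth evaluating here is `sts:GetCallerIdentity`, which AWS documents as requiring no IAM permissions at all, avoiding both the `IAMReadOnlyAccess` recommendation and the error-message parsing.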
|
636,605
| 20,604,082,351
|
IssuesEvent
|
2022-03-06 18:11:41
|
rocky-linux/rockylinux.org
|
https://api.github.com/repos/rocky-linux/rockylinux.org
|
closed
|
Fix the multiple broken URLs used on the cloud-images page
|
priority: high tag: content type: bug
|
#### Description
When visiting this page:
https://rockylinux.org/cloud-images
and clicking on any valid **Deploy** link other than the ones for `us-west-1`, the AWS console pops up a message such as this:
> We cannot proceed with your requested configuration. This AMI (ami-09ca837d91f083d04) does not exist.
The specified AMI you are trying to use has an invalid ID, does not exist, or does not exist in this region. Check the AMI details, or try again with a valid AMI.
One link in particular also has a truncated AMI ID; `ap-northeast-2` lists `ami-0280ce8ecafa32cf` instead of `ami-0280ce8ecafa32cf7`.
#### Screenshots
The screenshot matching the quoted passage above:

All of the other broken links (and in particular, the AMI ID used for `ap-northeast-2`) will result in the same failure (with the AMI ID changed as appropriate).
#### Files
https://github.com/rocky-linux/rockylinux.org/blob/develop/src/pages/cloud-images.js
or the latest commit (as of this writing):
https://github.com/rocky-linux/rockylinux.org/blob/7c9d46556f0586c05b5c90bbc55261dd8d527c99/src/pages/cloud-images.js
#### To Reproduce
1. Visit: https://rockylinux.org/cloud-images
1. Select any region _other than_ `us-west-1`
1. Click the **Deploy** link
|
1.0
|
Fix the multiple broken URLs used on the cloud-images page - #### Description
When visiting this page:
https://rockylinux.org/cloud-images
and clicking on any valid **Deploy** link other than the ones for `us-west-1`, the AWS console pops up a message such as this:
> We cannot proceed with your requested configuration. This AMI (ami-09ca837d91f083d04) does not exist.
The specified AMI you are trying to use has an invalid ID, does not exist, or does not exist in this region. Check the AMI details, or try again with a valid AMI.
One link in particular also has a truncated AMI ID; `ap-northeast-2` lists `ami-0280ce8ecafa32cf` instead of `ami-0280ce8ecafa32cf7`.
#### Screenshots
The screenshot matching the quoted passage above:

All of the other broken links (and in particular, the AMI ID used for `ap-northeast-2`) will result in the same failure (with the AMI ID changed as appropriate).
#### Files
https://github.com/rocky-linux/rockylinux.org/blob/develop/src/pages/cloud-images.js
or the latest commit (as of this writing):
https://github.com/rocky-linux/rockylinux.org/blob/7c9d46556f0586c05b5c90bbc55261dd8d527c99/src/pages/cloud-images.js
#### To Reproduce
1. Visit: https://rockylinux.org/cloud-images
1. Select any region _other than_ `us-west-1`
1. Click the **Deploy** link
|
non_process
|
fix the multiple broken urls used on the cloud images page description when visiting this page and clicking on any valid deploy link other than the ones for us west the aws console pops up a message such as this we cannot proceed with your requested configuration this ami ami does not exist the specified ami you are trying to use has an invalid id does not exist or does not exist in this region check the ami details or try again with a valid ami one link in particular also has a truncated ami id ap northeast lists ami instead of ami screenshots the screenshot matching the quoted passage above all of the other broken links and in particular the ami id used for ap northeast will result in the same failure with the ami id changed as appropriate files or the latest commit as of this writing to reproduce visit select any region other than us west click the deploy link
| 0
|
9,604
| 12,545,275,897
|
IssuesEvent
|
2020-06-05 18:39:20
|
google/ground-platform
|
https://api.github.com/repos/google/ground-platform
|
closed
|
[Dev workflow] Automatically build and test on push and PR
|
priority: p2 type: process
|
Related: #38 #58
We should choose some continuous integration tool to run tests/checks/deployments/etc.
|
1.0
|
[Dev workflow] Automatically build and test on push and PR - Related: #38 #58
We should choose some continuous integration tool to run tests/checks/deployments/etc.
|
process
|
automatically build and test on push and pr related we should choose some continuous integration tool to run tests checks deployments etc
| 1
|
17,864
| 23,812,321,508
|
IssuesEvent
|
2022-09-04 23:26:38
|
bisq-network/proposals
|
https://api.github.com/repos/bisq-network/proposals
|
closed
|
Have a clearly defined process for how Burning man trades BTC from donation and trade fee addresses for BSQ
|
was:approved a:proposal re:processes
|
> _This is a Bisq Network proposal. Please familiarize yourself with the [submission and review process](https://bisq.wiki/Proposals)._
<!-- Please do not remove the text above. -->
References:
[Donation Address Owner - Bisq Wiki](https://bisq.wiki/Donation_Address_Owner)
[Donation Address Owner - Role](https://github.com/bisq-network/roles/issues/80)
[Arbitration; Donation Address Owner - Bisq Wiki](https://bisq.wiki/Arbitration#Donation_Address_Owner)
## Background
The Donation Address Owner, AKA Burning Man, provides an essential role for Bisq. One of their roles is to buy BSQ using the funds they have received in the following BTC wallets:
- Trade Fees: [38bZBj5peYS3Husdz7AH3gEUiUbYRD951t](https://mempool.space/address/38bZBj5peYS3Husdz7AH3gEUiUbYRD951t)
- Donation Address (funds sent to arbitration): [34VLFgtFKAtwTdZ5rengTT2g2zC99sWQLC](https://mempool.space/address/34VLFgtFKAtwTdZ5rengTT2g2zC99sWQLC)
Traditionally Burning Man has been buying BSQ on Sundays. Recently they have proposed they will be limiting to their buys to BSQ Swap offers.
Therefore, if Bisq traders or contributors want to sell some BSQ Sunday is a good time to sell.
The BTC sent to the above addresses is usually 1-2 BTC a month so Burning Man trades make up a significant percentage of the BSQ/BTC market volume (eg February 2022 had a BSQ/BTC volume of 4 BTC).
## Problem
About 12 months ago @refund-agent2 started to [buy BTC from burningman with 30 day average](https://github.com/bisq-network/proposals/issues/294), as opposed to buying it on the open market. At the same time Bisq also began to [only partially reimburse high volume trades](https://github.com/bisq-network/proposals/issues/296).
The two actions above slightly changed the dynamic of how Burning Man bought BSQ. The following is my assumption from reading the issues, so it would be good for @burningman3 to confirm the process.
Prior to the two proposals, Burning Man bought BSQ with BTC from the Trade Fee and Donation Addresses on the open market.
Since the two proposals Burning Man buys BSQ with BTC from the Trade Fee Address on the open market, and then trades with Refund Agent and partially reimbursed traders using funds from the Donation Address.
This change of dynamic has resulted in a build up of funds in the Donation Address. Funds build up in the donation address for a few reasons:
- @refund-agent2 charges arbitration fee (effectively does not give traders a full refund, this leaves funds available in the donation address as he is asking for less in reimbursement)
- Some traders that have been partially refunded might not contact Burning Man for a refund.
Currently funds in the Donation Address are just over 4 BTC. Whilst this is not an issue in itself, I think it would be good to put in place a process for how funds are spent to stop the build up of BTC in the donation address. This decreases the risk to the DAO.
Taking into account the above and commenting on the current situation, I think the build-up of BTC in the donation address has caused a decrease in BSQ volume. The current amount of BTC in the donation address would purchase 148,754 BSQ at today's 30 day average price. To put that into perspective, the total contributor requests since September 2021 have been 146,732 BSQ. The amount of BTC in the donation address therefore represents about 5 cycles' worth of contributor compensation requests.
Anecdotally contributors have expressed concerns that they are finding it increasingly difficult to sell their BSQ. This is despite putting up their offers on a Sunday. I think the fact the Burning Man has been trading with the funds from Trade Fees and not the Donation Address is a significant contributor to this.
## Outcome
Burning Man to use funds from both addresses to:
- Trade directly with @refund-agent2 shortly following their reimbursement request
- Trade directly with traders that have been partially reimbursed following their reimbursement request
- Use all remaining (unallocated for the above) funds to buy BSQ on the open market
## Considerations
Burning Man is providing a service for the DAO. Therefore, I think the DAO should have some input as to if they do or do not want to have some parameters of what prices Burning Man should trade BSQ for.
My view is that BSQ should only be bought by Burning Man when they can achieve a BSQ/BTC price equal to or less than the 30 day weighted average. I have also considered whether it would be appropriate for Burning Man to **make** offers to buy BSQ when taking offers at the price above is not achievable. I believe this would have a positive impact for the DAO, essentially ensuring that as much BSQ as possible can be bought.
The alternative would be for Burning Man to buy BSQ/BTC at price less than or equal to a given percentage over the 30 day weighted average. This would be less beneficial for the DAO, but more beneficial to users that wanted to sell BSQ for higher prices.
Of course, another alternative would be a more laissez-faire approach: just let Burning Man do as they see fit.
## Solution
DAO to decide on a clear process for how Burning man trades BTC from donation and trade fee addresses for BSQ.
DAO to decide whether they do or do not want BTC building up in the Donation Address.
Any process decided upon should be communicated in the wiki for buyers and sellers of BSQ or anyone with an interest in Bisq.
I think the wiki could do with an update and would be happy to update it and include the outcome of this proposal, and also the related recent proposal [have a clearly defined process for how users with accepted DAO reimbursement requests can trade with Burningman](https://github.com/bisq-network/proposals/issues/366).
|
1.0
|
Have a clearly defined process for how Burning man trades BTC from donation and trade fee addresses for BSQ - > _This is a Bisq Network proposal. Please familiarize yourself with the [submission and review process](https://bisq.wiki/Proposals)._
<!-- Please do not remove the text above. -->
References:
[Donation Address Owner - Bisq Wiki](https://bisq.wiki/Donation_Address_Owner)
[Donation Address Owner - Role](https://github.com/bisq-network/roles/issues/80)
[Arbitration; Donation Address Owner - Bisq Wiki](https://bisq.wiki/Arbitration#Donation_Address_Owner)
## Background
The Donation Address Owner, AKA Burning Man, provides an essential role for Bisq. One of their roles is to buy BSQ using the funds they have received in the following BTC wallets:
- Trade Fees: [38bZBj5peYS3Husdz7AH3gEUiUbYRD951t](https://mempool.space/address/38bZBj5peYS3Husdz7AH3gEUiUbYRD951t)
- Donation Address (funds sent to arbitration): [34VLFgtFKAtwTdZ5rengTT2g2zC99sWQLC](https://mempool.space/address/34VLFgtFKAtwTdZ5rengTT2g2zC99sWQLC)
Traditionally Burning Man has been buying BSQ on Sundays. Recently they have proposed they will be limiting to their buys to BSQ Swap offers.
Therefore, if Bisq traders or contributors want to sell some BSQ Sunday is a good time to sell.
The BTC sent to the above addresses is usually 1-2 BTC a month so Burning Man trades make up a significant percentage of the BSQ/BTC market volume (eg February 2022 had a BSQ/BTC volume of 4 BTC).
## Problem
About 12 months ago @refund-agent2 started to [buy BTC from burningman with 30 day average](https://github.com/bisq-network/proposals/issues/294), as opposed to buying it on the open market. At the same time Bisq also began to [only partially reimburse high volume trades](https://github.com/bisq-network/proposals/issues/296).
The two actions above slightly changed the dynamic of how Burning Man bought BSQ. The following is my assumption from reading the issues, so it would be good for @burningman3 to confirm the process.
Prior to the two proposals, Burning Man bought BSQ with BTC from the Trade Fee and Donation Addresses on the open market.
Since the two proposals Burning Man buys BSQ with BTC from the Trade Fee Address on the open market, and then trades with Refund Agent and partially reimbursed traders using funds from the Donation Address.
This change of dynamic has resulted in a build up of funds in the Donation Address. Funds build up in the donation address for a few reasons:
- @refund-agent2 charges arbitration fee (effectively does not give traders a full refund, this leaves funds available in the donation address as he is asking for less in reimbursement)
- Some traders that have been partially refunded might not contact Burning Man for a refund.
Currently funds in the Donation Address are just over 4 BTC. Whilst this is not an issue in itself, I think it would be good to put in place a process for how funds are spent to stop the build up of BTC in the donation address. This decreases the risk to the DAO.
Taking into account the above and commenting on the current situation, I think the build-up of BTC in the donation address has caused a decrease in BSQ volume. The current amount of BTC in the donation address would purchase 148,754 BSQ at today's 30 day average price. To put that into perspective, the total contributor requests since September 2021 have been 146,732 BSQ. The amount of BTC in the donation address therefore represents about 5 cycles' worth of contributor compensation requests.
Anecdotally, contributors have expressed concerns that they are finding it increasingly difficult to sell their BSQ. This is despite putting up their offers on a Sunday. I think the fact that Burning Man has been trading with the funds from Trade Fees and not the Donation Address is a significant contributor to this.
## Outcome
Burning Man to use funds from both addresses to:
- Trade directly with @refund-agent2 shortly following their reimbursement request
- Trade directly with traders that have been partially reimbursed following their reimbursement request
- Use all remaining (unallocated for the above) funds to buy BSQ on the open market
## Considerations
Burning Man is providing a service for the DAO. Therefore, I think the DAO should have some input as to whether they want to set parameters on the prices at which Burning Man trades BSQ.
My thoughts are BSQ should only be bought by Burning Man when they can achieve a BSQ/BTC price equal to or less than the 30 day weighted average. I have also considered whether it would be appropriate for Burning Man to **make** offers to buy BSQ when they are unable to **take** offers at the price above. I believe this would have a positive impact for the DAO, essentially ensuring that as much BSQ as possible can be bought.
The alternative would be for Burning Man to buy BSQ/BTC at price less than or equal to a given percentage over the 30 day weighted average. This would be less beneficial for the DAO, but more beneficial to users that wanted to sell BSQ for higher prices.
Of course, another alternative would be a more laissez-faire approach and just let Burning Man do as they see fit.
## Solution
DAO to decide on a clear process for how Burning Man trades BTC from donation and trade fee addresses for BSQ.
DAO to decide on whether they want BTC building up in the Donation Address.
Any process decided upon should be communicated in the wiki for buyers and sellers of BSQ or anyone with an interest in Bisq.
I think the wiki could do with an update and would be happy to update it and include the outcome of this proposal, and also the related recent proposal [have a clearly defined process for how users with accepted DAO reimbursement requests can trade with Burningman](https://github.com/bisq-network/proposals/issues/366).
|
process
|
have a clearly defined process for how burning man trades btc from donation and trade fee addresses for bsq this is a bisq network proposal please familiarize yourself with the references background the donation address owner aka burning man provided an essential role for bisq one of their roles is to buy bsq using the funds they have received in the following btc wallets trade fees donation address funds sent to arbitration traditionally burning man has been buying bsq on sundays recently they have proposed they will be limiting to their buys to bsq swap offers therefore if bisq traders or contributors want to sell some bsq sunday is a good time to sell the btc sent to the above addresses is usually btc a month so burning man trades make up a significant percentage of the bsq btc market volume eg february had a bsq btc volume of btc problem about months ago refund started to as opposed to buying it on the open market at the same time bisq also began to the two actions above slightly changed the dynamic of how burning man bought bsq the following is my assumption from reading the issues so would be good for to confirm the process previously to the two proposals burning man bought bsq with btc from the trade fee and donation address on the open market since the two proposals burning man buys bsq with btc from the trade fee address on the open market and then trades with refund agent and partially reimbursed traders using funds from the donation address this change of dynamic has resulted in a build up of funds in the donation address funds build up in the donation address for a few reasons refund charges arbitration fee effectively does not give traders a full refund this leaves funds available in the donation address as he is asking for less in reimbursement some traders that have been partially refunded might not contact burning man for a refund currently funds in the donation address are just over btc whilst this is not an issue in itself i think it would be good 
to put in place a process for how funds are spent to stop the build up of btc in the donation address this decreases the risk to the dao taking into account the above and commenting on the current situation i think the build up of btc in the donation address has caused a decrease in bsq volume the current amount of bsq in the donation address would purchase bsq at todays day average price to put that into perspective the total contributor requests since september have been bsq the amount of btc in the donation address therefore represents about cycles worth of contributor compensation requests anecdotally contributors have expressed concerns that they are finding it increasingly difficult to sell their bsq this is despite putting up their offers on a sunday i think the fact the burning man has been trading with the funds from trade fees and not the donation address is a significant contributor to this outcome burning man to use funds from both addresses to trade directly with refund shortly following their reimbursement request trade directly with traders that have been partially reimbursed following their reimbursement request use all remaining unallocated for the above funds to buy bsq on the open market considerations burning man is providing a service for the dao therefore i think the dao should have some input as to if they do or do not want to have some parameters of what prices burning man should trade bsq for my thoughts are bsq is should only be bought by burning man when they can achieve a bsq btc price equal to or less than the day weighted average i have also considered if it would be appropriate for burning man to make offers to buy bsq when they are unable to achieve taking offers to buy btc for the price above i believe this would have a positive impact for the dao essentially ensuring the maximum amount of bsq can be bought as possible the alternative would be for burning man to buy bsq btc at price less than or equal to a given percentage over the 
day weighted average this would be less beneficial for the dao but more beneficial to users that wanted to sell bsq for higher prices of course another alternative would be a more laissez faire approach as just let burning man do as they see fit solution dao to decide on a clear process for how burning man trades btc from donation and trade fee addresses for bsq dao to decide on it they do or do not want btc building up in donation address any process decided upon should be communicated in the wiki for buyers and sellers of bsq or anyone with an interest in bisq i think the wiki could do with an update and would be happy to update it and include the outcome of this proposal and also the related recent proposal
| 1
|
245,182
| 20,751,124,529
|
IssuesEvent
|
2022-03-15 07:39:36
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
opened
|
pkg/migration/migrationmanager/migrationmanager_test: TestAlreadyRunningJobsAreHandledProperly failed
|
C-test-failure O-robot branch-master
|
pkg/migration/migrationmanager/migrationmanager_test.TestAlreadyRunningJobsAreHandledProperly [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=4575632&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=4575632&tab=artifacts#/) on master @ [f5fc84fb5707428ae9505c5e3e90cf3f63d465ad](https://github.com/cockroachdb/cockroach/commits/f5fc84fb5707428ae9505c5e3e90cf3f63d465ad):
```
=== RUN TestAlreadyRunningJobsAreHandledProperly
test_log_scope.go:79: test logs captured to: /artifacts/tmp/_tmp/d95b29da870c978d2be92e1efde9d140/logTestAlreadyRunningJobsAreHandledProperly4036161113
test_log_scope.go:80: use -show-logs to present logs inline
```
<details><summary>Help</summary>
<p>
See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM)
Parameters in this failure:
- TAGS=bazel,gss,deadlock
</p>
</details>
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestAlreadyRunningJobsAreHandledProperly.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
|
1.0
|
pkg/migration/migrationmanager/migrationmanager_test: TestAlreadyRunningJobsAreHandledProperly failed - pkg/migration/migrationmanager/migrationmanager_test.TestAlreadyRunningJobsAreHandledProperly [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=4575632&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=4575632&tab=artifacts#/) on master @ [f5fc84fb5707428ae9505c5e3e90cf3f63d465ad](https://github.com/cockroachdb/cockroach/commits/f5fc84fb5707428ae9505c5e3e90cf3f63d465ad):
```
=== RUN TestAlreadyRunningJobsAreHandledProperly
test_log_scope.go:79: test logs captured to: /artifacts/tmp/_tmp/d95b29da870c978d2be92e1efde9d140/logTestAlreadyRunningJobsAreHandledProperly4036161113
test_log_scope.go:80: use -show-logs to present logs inline
```
<details><summary>Help</summary>
<p>
See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM)
Parameters in this failure:
- TAGS=bazel,gss,deadlock
</p>
</details>
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestAlreadyRunningJobsAreHandledProperly.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
|
non_process
|
pkg migration migrationmanager migrationmanager test testalreadyrunningjobsarehandledproperly failed pkg migration migrationmanager migrationmanager test testalreadyrunningjobsarehandledproperly with on master run testalreadyrunningjobsarehandledproperly test log scope go test logs captured to artifacts tmp tmp test log scope go use show logs to present logs inline help see also parameters in this failure tags bazel gss deadlock
| 0
|
5,948
| 7,439,260,192
|
IssuesEvent
|
2018-03-27 05:32:53
|
Microsoft/vsts-tasks
|
https://api.github.com/repos/Microsoft/vsts-tasks
|
closed
|
VSTS Deployment Request Timeout
|
Area: AzureAppService Area: Release
|
## Environment
- Server - VSTS or TFS on-premises?
VSTS
- Agent - Hosted or Private:
Hosted
## Issue Description
We are having an issue with Deploying To Linux Web App. Basically the PUT request times out. I couldn't find any information online about this issue, so would appreciate help. Let me know if more information is needed.
Thanks
### Error logs
2018-03-10T17:15:53.3688181Z ##[debug]Performing Linux built-in package deployment
2018-03-10T17:15:53.3714741Z ##[debug][PUT]https://$****f:********@***.scm.azurewebsites.net/api/zip/site/wwwroot/
2018-03-10T17:19:01.0846803Z ##[error]Failed to deploy web package to App Service.
2018-03-10T17:19:01.0884186Z ##[debug]Processed: ##vso[task.issue type=error;]Failed to deploy web package to App Service.
2018-03-10T17:19:01.0906281Z ##[debug]task result: Failed
2018-03-10T17:19:01.0943956Z ##[error]Error: Error: Failed to deploy App Service package using kudu service : Request timeout: /api/zip/site/wwwroot/
|
1.0
|
VSTS Deployment Request Timeout - ## Environment
- Server - VSTS or TFS on-premises?
VSTS
- Agent - Hosted or Private:
Hosted
## Issue Description
We are having an issue with Deploying To Linux Web App. Basically the PUT request times out. I couldn't find any information online about this issue, so would appreciate help. Let me know if more information is needed.
Thanks
### Error logs
2018-03-10T17:15:53.3688181Z ##[debug]Performing Linux built-in package deployment
2018-03-10T17:15:53.3714741Z ##[debug][PUT]https://$****f:********@***.scm.azurewebsites.net/api/zip/site/wwwroot/
2018-03-10T17:19:01.0846803Z ##[error]Failed to deploy web package to App Service.
2018-03-10T17:19:01.0884186Z ##[debug]Processed: ##vso[task.issue type=error;]Failed to deploy web package to App Service.
2018-03-10T17:19:01.0906281Z ##[debug]task result: Failed
2018-03-10T17:19:01.0943956Z ##[error]Error: Error: Failed to deploy App Service package using kudu service : Request timeout: /api/zip/site/wwwroot/
|
non_process
|
vsts deployment request timeout environment server vsts or tfs on premises vsts agent hosted or private hosted issue description we are having an issue with deploying to linux web app basically the put request times out i couldn t find any information online about this issue so would appreciate help let me know if more information is needed thanks error logs performing linux built in package deployment failed to deploy web package to app service processed vso failed to deploy web package to app service task result failed error error failed to deploy app service package using kudu service request timeout api zip site wwwroot
| 0
|
752,307
| 26,280,405,889
|
IssuesEvent
|
2023-01-07 08:17:47
|
PowerNukkitX/PowerNukkitX
|
https://api.github.com/repos/PowerNukkitX/PowerNukkitX
|
closed
|
if the player was in gm 3 and he moves to gm 1, then there are no items in the creative
|
bug | 漏洞 Unconfirmed | 未确认 Low Priority | 低优先级
|
# 🐞 I found a bug
<!--
👉 This template is helpful, but you may erase everything if you can express the issue clearly
Feel free to ask questions or start related discussion
-->
### 📸 Screenshots / Videos
<!-- ✍ If applicable, add screenshots or video recordings to help explain your problem -->

### ▶ Steps to Reproduce
<!--- ✍ Reliable steps which someone can use to reproduce the issue. -->
1. Run command '...'
2. Click on '....'
3. Put '....' at '...'
4. See error
### ✔ Expected Behavior
<!-- ✍ What would you expect to happen -->
### ❌ Actual Behavior
<!-- ✍ What actually happened -->
### 📋 Debug information
<!-- Use the 'debugpaste upload' and 'timings paste' command in PowerNukkit -->
<!-- You can get the version from the file name, the 'about' or 'debugpaste' command outputs -->
* PowerNukkit version: ✍
* Debug link: ✍
* Timings link (if relevant): ✍
### 💢 Crash Dump, Stack Trace and Other Files
<!-- ✍ Use https://hastebin.com for big logs or dumps -->
### 💬 Anything else we should know?
<!-- ✍ This is the perfect place to add any additional details -->
|
1.0
|
if the player was in gm 3 and he moves to gm 1, then there are no items in the creative - # 🐞 I found a bug
<!--
👉 This template is helpful, but you may erase everything if you can express the issue clearly
Feel free to ask questions or start related discussion
-->
### 📸 Screenshots / Videos
<!-- ✍ If applicable, add screenshots or video recordings to help explain your problem -->

### ▶ Steps to Reproduce
<!--- ✍ Reliable steps which someone can use to reproduce the issue. -->
1. Run command '...'
2. Click on '....'
3. Put '....' at '...'
4. See error
### ✔ Expected Behavior
<!-- ✍ What would you expect to happen -->
### ❌ Actual Behavior
<!-- ✍ What actually happened -->
### 📋 Debug information
<!-- Use the 'debugpaste upload' and 'timings paste' command in PowerNukkit -->
<!-- You can get the version from the file name, the 'about' or 'debugpaste' command outputs -->
* PowerNukkit version: ✍
* Debug link: ✍
* Timings link (if relevant): ✍
### 💢 Crash Dump, Stack Trace and Other Files
<!-- ✍ Use https://hastebin.com for big logs or dumps -->
### 💬 Anything else we should know?
<!-- ✍ This is the perfect place to add any additional details -->
|
non_process
|
if the player was in gm and he moves to gm then there are no items in the creative 🐞 i found a bug 👉 this template is helpful but you may erase everything if you can express the issue clearly feel free to ask questions or start related discussion 📸 screenshots videos ▶ steps to reproduce run command click on put at see error ✔ expected behavior ❌ actual behavior 📋 debug information powernukkit version ✍ debug link ✍ timings link if relevant ✍ 💢 crash dump stack trace and other files 💬 anything else we should know
| 0
|
10,910
| 13,688,384,705
|
IssuesEvent
|
2020-09-30 11:38:11
|
timberio/vector
|
https://api.github.com/repos/timberio/vector
|
closed
|
new `sha3` remap function
|
domain: mapping domain: processing transform: remap type: feature
|
The `sha3` remap function hashes the provided argument with the [SHA3](https://en.wikipedia.org/wiki/SHA-3) algorithm.
It takes an optional second argument to specify the [algorithm variant](https://en.wikipedia.org/wiki/SHA-3#Comparison_of_SHA_functions) used to perform the hashing, defaulting to the `SHAKE256` variant.
This will become a _named argument_ `variant` once #3851 lands.
The function returns an error for unknown variants.
## Examples
For all examples assume the following event:
```js
{
"message": "Hello world",
"remote_addr": "54.23.22.123"
}
```
### Path
```
.fingerprint = sha3(.message)
```
### String literal, with custom algorithm variant
```
.fingerprint = sha3("my string", "SHA3-384")
```
### Operators
```
.fingerprint = sha3(.message + .remote_addr)
```
|
1.0
|
new `sha3` remap function - The `sha3` remap function hashes the provided argument with the [SHA3](https://en.wikipedia.org/wiki/SHA-3) algorithm.
It takes an optional second argument to specify the [algorithm variant](https://en.wikipedia.org/wiki/SHA-3#Comparison_of_SHA_functions) used to perform the hashing, defaulting to the `SHAKE256` variant.
This will become a _named argument_ `variant` once #3851 lands.
The function returns an error for unknown variants.
## Examples
For all examples assume the following event:
```js
{
"message": "Hello world",
"remote_addr": "54.23.22.123"
}
```
### Path
```
.fingerprint = sha3(.message)
```
### String literal, with custom algorithm variant
```
.fingerprint = sha3("my string", "SHA3-384")
```
### Operators
```
.fingerprint = sha3(.message + .remote_addr)
```
|
process
|
new remap function the remap function hashes the provided argument with the algorithm it takes an optional second argument to specify the used to perform the hashing defaulting to the variant this will become a named argument variant once lands the function returns an error for unknown variants examples for all examples assume the following event js message hello world remote addr path fingerprint message string literal with custom algorithm variant fingerprint my string operators fingerprint message remote addr
| 1
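The behavior described in the `sha3` record above (hash the first argument, optional variant argument defaulting to `SHAKE256`, error on unknown variants) can be sketched with Python's standard `hashlib`; the variant-name mapping and the 64-byte SHAKE digest length are assumptions, not part of the proposal:

```python
import hashlib

# Assumed mapping from remap variant names to hashlib constructors.
VARIANTS = {
    "SHA3-224": hashlib.sha3_224,
    "SHA3-256": hashlib.sha3_256,
    "SHA3-384": hashlib.sha3_384,
    "SHA3-512": hashlib.sha3_512,
}

def sha3(value: str, variant: str = "SHAKE256") -> str:
    """Hash `value` with the requested SHA-3 variant; raise on unknown variants."""
    if variant == "SHAKE256":
        # SHAKE256 is an extendable-output function; a 64-byte digest is
        # chosen arbitrarily here for illustration.
        return hashlib.shake_256(value.encode()).hexdigest(64)
    if variant not in VARIANTS:
        raise ValueError(f"unknown SHA-3 variant: {variant}")
    return VARIANTS[variant](value.encode()).hexdigest()

print(sha3("Hello world", "SHA3-384"))
```

Concatenation like `sha3(.message + .remote_addr)` would correspond to hashing the joined string, e.g. `sha3(message + remote_addr)`.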
|
9,146
| 12,203,197,494
|
IssuesEvent
|
2020-04-30 10:11:21
|
MHRA/products
|
https://api.github.com/repos/MHRA/products
|
closed
|
AUTOMATIC BATCH PROCESS - Create service attaches metadata to file
|
EPIC - Auto Batch Process :oncoming_automobile: HIGH PRIORITY :arrow_double_up: TASK :rescue_worker_helmet:
|
### User want
As a user
I want to see up to date documents on the products website
So I can make informed decisions
**Customer acceptance criteria**
**Technical acceptance criteria**
Create service attaches metadata from the received message to a retrieved file.
A lot of this code can be lifted from the existing import process.
**Data acceptance criteria**
**Testing acceptance criteria**
**Size**
S
**Value**
**Effort**
### Exit Criteria met
- [x] Backlog
- [x] Discovery
- [x] DUXD
- [x] Development
- [ ] Quality Assurance
- [ ] Release and Validate
|
1.0
|
AUTOMATIC BATCH PROCESS - Create service attaches metadata to file - ### User want
As a user
I want to see up to date documents on the products website
So I can make informed decisions
**Customer acceptance criteria**
**Technical acceptance criteria**
Create service attaches metadata from the received message to a retrieved file.
A lot of this code can be lifted from the existing import process.
**Data acceptance criteria**
**Testing acceptance criteria**
**Size**
S
**Value**
**Effort**
### Exit Criteria met
- [x] Backlog
- [x] Discovery
- [x] DUXD
- [x] Development
- [ ] Quality Assurance
- [ ] Release and Validate
|
process
|
automatic batch process create service attaches metadata to file user want as a user i want to see up to date documents on the products website so i can make informed decisions customer acceptance criteria technical acceptance criteria create service attaches metadata from the received message to a retrieved file a lot of this code can be lifted from the existing import process data acceptance criteria testing acceptance criteria size s value effort exit criteria met backlog discovery duxd development quality assurance release and validate
| 1
|
9,197
| 12,232,367,401
|
IssuesEvent
|
2020-05-04 09:33:35
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
closed
|
DEBUG=* could also log the Client and Engine version by default
|
kind/improvement process/next-milestone team/typescript topic: cli topic: prisma-client
|
When debugging stuff, I often use `DEBUG=*` to get a lot of output. It could be nice to have the available versions in that output by default, so you do not have to output stuff additionally.
|
1.0
|
DEBUG=* could also log the Client and Engine version by default - When debugging stuff, I often use `DEBUG=*` to get a lot of output. It could be nice to have the available versions in that output by default, so you do not have to output stuff additionally.
|
process
|
debug could also log the client and engine version by default when debugging stuff i often use debug to get a lot of output it could be nice to have the available versions in that output by default so you do not have to output stuff additionally
| 1
|
16,700
| 21,802,251,687
|
IssuesEvent
|
2022-05-16 06:59:02
|
NationalSecurityAgency/ghidra
|
https://api.github.com/repos/NationalSecurityAgency/ghidra
|
closed
|
Ghidra can't handle float/integer mixed on Microsoft x64 calling convention
|
Feature: Processor/x86
|
I suspect Ghidra found bad parameters when mixed float/integer parameter.
According to the [Microsoft document](https://docs.microsoft.com/en-us/cpp/build/x64-calling-convention?view=msvc-160#parameter-passing), `(int, float, int)` should be passed as `(ECX, XMM1, R8D)`.
But Ghidra also found XMM0 and RDX, and it starts extraout_blahblah. I feel confused about that.
I think some fix is required on [its prototype definition](https://github.com/NationalSecurityAgency/ghidra/blob/e7488245fd3e85dea6050e0dd66bb4a4dbeeb53b/Ghidra/Processors/x86/data/languages/x86-64-win.cspec#L42), but I can't understand what the correct fix is.
|
1.0
|
Ghidra can't handle float/integer mixed on Microsoft x64 calling convention - I suspect Ghidra found bad parameters when mixed float/integer parameter.
according to [Microsoft document](https://docs.microsoft.com/en-us/cpp/build/x64-calling-convention?view=msvc-160#parameter-passing), `(int, float, int)` should be passed as `(ECX, XMM1, R8D)`.
But Ghidra also found XMM0 and RDX, and it starts extraout_blahblah. I feel confused about that.
I think some fix is required on [its prototype defintion](https://github.com/NationalSecurityAgency/ghidra/blob/e7488245fd3e85dea6050e0dd66bb4a4dbeeb53b/Ghidra/Processors/x86/data/languages/x86-64-win.cspec#L42), but I can't understand what the correct fix is.
|
process
|
ghidra can t handle float integer mixed on microsoft calling convention i suspect ghidra found bad parameters when mixed float integer parameter according to int float int should be passed as ecx but ghidra also found and rdx and it starts extraout blahblah i feel confused about that i think some fix is required on but i can t understand what the correct fix is
| 1
|
11,681
| 14,540,855,564
|
IssuesEvent
|
2020-12-15 13:52:18
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
closed
|
migrate deploy: Permission denied ERROR for table _prisma_migrations in `list_migrations`
|
process/candidate team/migrations topic: migrate
|
## Bug description
```sh
$ prisma migrate deploy --preview-feature
Environment variables loaded from .env
Prisma schema loaded from prisma/schema.prisma
Datasource "db": PostgreSQL database "XXX", schema "public" at "XXX:5432"
Error: Database error: Error querying the database: db error: ERROR: permission denied for table _prisma_migrations
0: sql_migration_connector::sql_imperative_migration_persistence::list_migrations
at migration-engine/connectors/sql-migration-connector/src/sql_imperative_migration_persistence.rs:121
1: migration_core::api::DiagnoseMigrationHistory
at migration-engine/core/src/api.rs:148
```
<!-- A clear and concise description of what the bug is. -->
## How to reproduce
I'm getting the error above when running `prisma migrate deploy` against my production db using version @2.13. **The database is completely empty**. Prior to doing that I run `SET ROLE "role-name";`, but the above error occurs regardless.
The error message is quite slim, so I don't really know how to deal with it.
Does anyone know the actual query that this step runs?
## Expected behavior
I would expect this not to throw, or give me more detailed information about why it does.
## Prisma information
I don't think this is relevant since the error occurs before any attempt to apply the schema.
## Environment & setup
- OS: Mac os & docker alpine
- Database: PostgreSQL
- Node.js version: 14.8.0
- Prisma version: 2.13.0
`npm ls | grep "@prisma"` =>
├─┬ @prisma/cli@2.13.0
│ ├── @prisma/bar@0.0.1
│ └── @prisma/engines@2.13.0-32.833ab05d2a20e822f6736a39a27de4fc8f6b3e49
├─┬ @prisma/client@2.13.0
│ └── @prisma/engines-version@2.13.0-32.833ab05d2a20e822f6736a39a27de4fc8f6b3e49
|
1.0
|
migrate deploy: Permission denied ERROR for table _prisma_migrations in `list_migrations` - ## Bug description
```sh
$ prisma migrate deploy --preview-feature
Environment variables loaded from .env
Prisma schema loaded from prisma/schema.prisma
Datasource "db": PostgreSQL database "XXX", schema "public" at "XXX:5432"
Error: Database error: Error querying the database: db error: ERROR: permission denied for table _prisma_migrations
0: sql_migration_connector::sql_imperative_migration_persistence::list_migrations
at migration-engine/connectors/sql-migration-connector/src/sql_imperative_migration_persistence.rs:121
1: migration_core::api::DiagnoseMigrationHistory
at migration-engine/core/src/api.rs:148
```
<!-- A clear and concise description of what the bug is. -->
## How to reproduce
I'm getting the error above when running `prisma migrate deploy` against my production db using version @2.13. **The database is completely empty**. Prior to doing that I run `SET ROLE "role-name";`, but the above error occurs regardless.
The error message is quite slim, so I don't really know how to deal with it.
Does anyone know the actual query that this step runs ?
## Expected behavior
I would expect this not to throw, or give me more detailed information about why it does.
## Prisma information
I don't think this is relevant since the error occurs before any attempt to apply the schema.
## Environment & setup
- OS: Mac os & docker alpine
- Database: PostgreSQL
- Node.js version: 14.8.0
- Prisma version: 2.13.0
`npm ls | grep "@prisma"` =>
├─┬ @prisma/cli@2.13.0
│ ├── @prisma/bar@0.0.1
│ └── @prisma/engines@2.13.0-32.833ab05d2a20e822f6736a39a27de4fc8f6b3e49
├─┬ @prisma/client@2.13.0
│ └── @prisma/engines-version@2.13.0-32.833ab05d2a20e822f6736a39a27de4fc8f6b3e49
|
process
|
migrate deploy permission denied error for table prisma migrations in list migrations bug description sh prisma migrate deploy preview feature environment variables loaded from env prisma schema loaded from prisma schema prisma datasource db postgresql database xxx schema public at xxx error database error error querying the database db error error permission denied for table prisma migrations sql migration connector sql imperative migration persistence list migrations at migration engine connectors sql migration connector src sql imperative migration persistence rs migration core api diagnosemigrationhistory at migration engine core src api rs how to reproduce i m getting the error above when running prisma migrate deploy against my production db using version the database is completely empty prior to doing that i run set role role name but the above error occurs regardless the error message is quite slim so i don t really know how to deal with it does anyone know the actual query that this step runs expected behavior i would expect this not to throw or give me more detailed information about why it does prisma information i don t think this is relevant since the error occurs before any attempt to apply the schema environment setup os mac os docker alpine database postgresql node js version prisma version npm ls grep prisma ├─┬ prisma cli │ ├── prisma bar │ └── prisma engines ├─┬ prisma client │ └── prisma engines version
| 1
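A possible remediation for the record above — assuming the role used by `prisma migrate deploy` simply lacks privileges on the `_prisma_migrations` table — is to grant them explicitly. The role and schema names in this sketch are hypothetical placeholders, and building the SQL as strings is only for illustration:

```python
# Hypothetical GRANT statements for the "permission denied for table
# _prisma_migrations" error; role and schema names are assumptions.
role = "migrate_role"  # hypothetical role name
statements = [
    f'GRANT USAGE ON SCHEMA public TO "{role}";',
    f'GRANT SELECT, INSERT, UPDATE, DELETE ON TABLE public._prisma_migrations TO "{role}";',
]
for stmt in statements:
    print(stmt)
```

Whether this applies depends on how the reporter's `SET ROLE` interacts with the connection Prisma opens; the error message itself does not say which role was active.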
|
414,261
| 27,982,969,384
|
IssuesEvent
|
2023-03-26 11:29:03
|
Light7734/Light
|
https://api.github.com/repos/Light7734/Light
|
opened
|
Create a thorough development guideline
|
documentation
|
- [ ] Coding
- [ ] Style
- [ ] Core architecture
- [ ] Required module architecture
- [ ] Issues
- [ ] Pull requests
- [ ] Commit messages
- [ ] Code of conduct
- [ ] Developer Contacts
|
1.0
|
Create a thorough development guideline - - [ ] Coding
- [ ] Style
- [ ] Core architecture
- [ ] Required module architecture
- [ ] Issues
- [ ] Pull requests
- [ ] Commit messages
- [ ] Code of conduct
- [ ] Developer Contacts
|
non_process
|
create a thorough development guideline coding style core architecture required module architecture issues pull requests commit messages code of conduct developer contacts
| 0
|
16,101
| 20,322,132,342
|
IssuesEvent
|
2022-02-18 00:06:18
|
ooi-data/CE06ISSM-MFD37-03-CTDBPC000-recovered_host-ctdbp_cdef_dcl_instrument_recovered
|
https://api.github.com/repos/ooi-data/CE06ISSM-MFD37-03-CTDBPC000-recovered_host-ctdbp_cdef_dcl_instrument_recovered
|
closed
|
🛑 Processing failed: ValueError
|
process
|
## Overview
`ValueError` found in `processing_task` task during run ended on 2022-01-21T06:59:13.692351.
## Details
Flow name: `CE06ISSM-MFD37-03-CTDBPC000-recovered_host-ctdbp_cdef_dcl_instrument_recovered`
Task name: `processing_task`
Error type: `ValueError`
Error message: not enough values to unpack (expected 3, got 0)
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing
final_path = finalize_data_stream(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 84, in finalize_data_stream
append_to_zarr(mod_ds, final_store, enc, logger=logger)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 357, in append_to_zarr
_append_zarr(store, mod_ds)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/utils.py", line 187, in _append_zarr
existing_arr.append(var_data.values)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 519, in values
return _as_array_or_item(self._data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 259, in _as_array_or_item
data = np.asarray(data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 1541, in __array__
x = self.compute()
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 288, in compute
(result,) = compute(self, traverse=False, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 571, in compute
results = schedule(dsk, keys, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/threaded.py", line 79, in get
results = get_async(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 507, in get_async
raise_exception(exc, tb)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 315, in reraise
raise exc
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 220, in execute_task
result = _execute_task(task, data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/core.py", line 119, in _execute_task
return func(*(_execute_task(a, cache) for a in args))
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 116, in getter
c = np.asarray(c)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 357, in __array__
return np.asarray(self.array, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 551, in __array__
self._ensure_cached()
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 548, in _ensure_cached
self.array = NumpyIndexingAdapter(np.asarray(self.array))
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 521, in __array__
return np.asarray(self.array, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__
return np.asarray(array[self.key], dtype=None)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 70, in __array__
return self.func(self.array)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 137, in _apply_mask
data = np.asarray(data, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__
return np.asarray(array[self.key], dtype=None)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/backends/zarr.py", line 73, in __getitem__
return array[key.tuple]
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 673, in __getitem__
return self.get_basic_selection(selection, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 798, in get_basic_selection
return self._get_basic_selection_nd(selection=selection, out=out,
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 841, in _get_basic_selection_nd
return self._get_selection(indexer=indexer, out=out, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1135, in _get_selection
lchunk_coords, lchunk_selection, lout_selection = zip(*indexer)
ValueError: not enough values to unpack (expected 3, got 0)
```
</details>
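The final frame of the traceback pinpoints the failure: `zip(*indexer)` over an indexer that yields nothing leaves zero values to unpack into the three expected tuples. A minimal sketch of the same failure mode — the empty list standing in for a zarr indexer that produced no chunk projections (e.g. a zero-length selection) is an assumption for illustration:

```python
# The zarr line `lchunk_coords, lchunk_selection, lout_selection = zip(*indexer)`
# fails exactly like this when the indexer yields no chunk projections.
indexer = []  # hypothetical stand-in for an exhausted/empty zarr indexer

try:
    lchunk_coords, lchunk_selection, lout_selection = zip(*indexer)
except ValueError as err:
    print(err)  # not enough values to unpack (expected 3, got 0)
```

This suggests the data being appended had zero length along the append dimension; guarding `_append_zarr` with a length check before calling `existing_arr.append` would surface the real cause earlier.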
|
1.0
|
🛑 Processing failed: ValueError - ## Overview
`ValueError` found in `processing_task` task during run ended on 2022-01-21T06:59:13.692351.
## Details
Flow name: `CE06ISSM-MFD37-03-CTDBPC000-recovered_host-ctdbp_cdef_dcl_instrument_recovered`
Task name: `processing_task`
Error type: `ValueError`
Error message: not enough values to unpack (expected 3, got 0)
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing
final_path = finalize_data_stream(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 84, in finalize_data_stream
append_to_zarr(mod_ds, final_store, enc, logger=logger)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 357, in append_to_zarr
_append_zarr(store, mod_ds)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/utils.py", line 187, in _append_zarr
existing_arr.append(var_data.values)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 519, in values
return _as_array_or_item(self._data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 259, in _as_array_or_item
data = np.asarray(data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 1541, in __array__
x = self.compute()
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 288, in compute
(result,) = compute(self, traverse=False, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 571, in compute
results = schedule(dsk, keys, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/threaded.py", line 79, in get
results = get_async(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 507, in get_async
raise_exception(exc, tb)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 315, in reraise
raise exc
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 220, in execute_task
result = _execute_task(task, data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/core.py", line 119, in _execute_task
return func(*(_execute_task(a, cache) for a in args))
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 116, in getter
c = np.asarray(c)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 357, in __array__
return np.asarray(self.array, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 551, in __array__
self._ensure_cached()
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 548, in _ensure_cached
self.array = NumpyIndexingAdapter(np.asarray(self.array))
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 521, in __array__
return np.asarray(self.array, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__
return np.asarray(array[self.key], dtype=None)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 70, in __array__
return self.func(self.array)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 137, in _apply_mask
data = np.asarray(data, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__
return np.asarray(array[self.key], dtype=None)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/backends/zarr.py", line 73, in __getitem__
return array[key.tuple]
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 673, in __getitem__
return self.get_basic_selection(selection, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 798, in get_basic_selection
return self._get_basic_selection_nd(selection=selection, out=out,
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 841, in _get_basic_selection_nd
return self._get_selection(indexer=indexer, out=out, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1135, in _get_selection
lchunk_coords, lchunk_selection, lout_selection = zip(*indexer)
ValueError: not enough values to unpack (expected 3, got 0)
```
</details>
|
process
|
🛑 processing failed valueerror overview valueerror found in processing task task during run ended on details flow name recovered host ctdbp cdef dcl instrument recovered task name processing task error type valueerror error message not enough values to unpack expected got traceback traceback most recent call last file srv conda envs notebook lib site packages ooi harvester processor pipeline py line in processing final path finalize data stream file srv conda envs notebook lib site packages ooi harvester processor init py line in finalize data stream append to zarr mod ds final store enc logger logger file srv conda envs notebook lib site packages ooi harvester processor init py line in append to zarr append zarr store mod ds file srv conda envs notebook lib site packages ooi harvester processor utils py line in append zarr existing arr append var data values file srv conda envs notebook lib site packages xarray core variable py line in values return as array or item self data file srv conda envs notebook lib site packages xarray core variable py line in as array or item data np asarray data file srv conda envs notebook lib site packages dask array core py line in array x self compute file srv conda envs notebook lib site packages dask base py line in compute result compute self traverse false kwargs file srv conda envs notebook lib site packages dask base py line in compute results schedule dsk keys kwargs file srv conda envs notebook lib site packages dask threaded py line in get results get async file srv conda envs notebook lib site packages dask local py line in get async raise exception exc tb file srv conda envs notebook lib site packages dask local py line in reraise raise exc file srv conda envs notebook lib site packages dask local py line in execute task result execute task task data file srv conda envs notebook lib site packages dask core py line in execute task return func execute task a cache for a in args file srv conda envs notebook lib site 
packages dask array core py line in getter c np asarray c file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray self array dtype dtype file srv conda envs notebook lib site packages xarray core indexing py line in array self ensure cached file srv conda envs notebook lib site packages xarray core indexing py line in ensure cached self array numpyindexingadapter np asarray self array file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray self array dtype dtype file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray array dtype none file srv conda envs notebook lib site packages xarray coding variables py line in array return self func self array file srv conda envs notebook lib site packages xarray coding variables py line in apply mask data np asarray data dtype dtype file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray array dtype none file srv conda envs notebook lib site packages xarray backends zarr py line in getitem return array file srv conda envs notebook lib site packages zarr core py line in getitem return self get basic selection selection fields fields file srv conda envs notebook lib site packages zarr core py line in get basic selection return self get basic selection nd selection selection out out file srv conda envs notebook lib site packages zarr core py line in get basic selection nd return self get selection indexer indexer out out fields fields file srv conda envs notebook lib site packages zarr core py line in get selection lchunk coords lchunk selection lout selection zip indexer valueerror not enough values to unpack expected got
| 1
|
6,894 | 10,036,550,960 | IssuesEvent | 2019-07-18 10:57:28 | linnovate/root | https://api.github.com/repos/linnovate/root | reopened | multiple select assignee selection | 2.0.6 Process bug
|
when in multiple selection mode, in every entity, after selecting an assignee to a task, the assignee gets assigned but the status doesn't change to "assigned"

|
1.0
|
multiple select assignee selection - when in multiple selection mode, in every entity, after selecting an assignee to a task, the assignee gets assigned but the status doesn't change to "assigned"

|
process
|
multiple select assignee selection when in multiple selection mode in every entity after selecting an assignee to a task the assignee gets assigned but the status doesnt change to assigned
| 1
|
20,235 | 26,840,348,278 | IssuesEvent | 2023-02-02 23:37:34 | hackforla/peopledepot | https://api.github.com/repos/hackforla/peopledepot | closed | CONTRIBUTING.md: modify instructions and env file so non-interactive | role: back end size: 1pt Feature: Process Improvement
|
### Overview
Current instructions for Docker require the person deploying to enter in a username, email, and password for creating the superuser. This article https://docs.djangoproject.com/en/3.0/ref/django-admin/#django-admin-createsuperuser explains how to automatically create superuser using env variables without requiring any interactive entry. To do this in People Depot, the .env.dev.example file and CONTRIBUTING.md need to be modified.
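As a sketch of what the non-interactive setup could look like — the variable values and placement in `.env.dev.example` are assumptions for illustration, while the `DJANGO_SUPERUSER_*` variables and the `--noinput` flag are the documented Django mechanism:

```shell
# .env.dev.example additions (placeholder values — assumptions for illustration)
DJANGO_SUPERUSER_USERNAME=admin
DJANGO_SUPERUSER_EMAIL=admin@example.com
DJANGO_SUPERUSER_PASSWORD=change-me

# then, inside the container, the superuser is created without prompts:
# python manage.py createsuperuser --noinput
```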
|
1.0
|
CONTRIBUTING.md: modify instructions and env file so non-interactive - ### Overview
Current instructions for Docker require the person deploying to enter in a username, email, and password for creating the superuser. This article https://docs.djangoproject.com/en/3.0/ref/django-admin/#django-admin-createsuperuser explains how to automatically create superuser using env variables without requiring any interactive entry. To do this in People Depot, the .env.dev.example file and CONTRIBUTING.md need to be modified.
|
process
|
contributing md modify instructions and env file so non interactive overview current instructions for docker require the person deploying to enter in a username email and password for creating the superuser this article explains how to automatically create superuser using env variables without requiring any interactive entry to do this in people depot the env dev example file and contributing md need to be modified
| 1
|
549,108 | 16,086,077,935 | IssuesEvent | 2021-04-26 11:23:02 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | auth.hulu.com - site is not usable | browser-firefox engine-gecko os-ios priority-important
|
<!-- @browser: Firefox iOS 33.0 -->
<!-- @ua_header: Mozilla/5.0 (iPhone; CPU OS 14_4_2 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) FxiOS/33.0 Mobile/15E148 Safari/605.1.15 -->
<!-- @reported_with: mobile-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/71887 -->
**URL**: https://auth.hulu.com/sso/salesforce/submit?entityId=https%3A%2F%2Fhelp.hulu.com
**Browser / Version**: Firefox iOS 33.0
**Operating System**: iOS 14.4.2
**Tested Another Browser**: Yes Chrome
**Problem type**: Site is not usable
**Description**: Page not loading correctly
**Steps to Reproduce**:
Sales force error when trying to access Help
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
auth.hulu.com - site is not usable - <!-- @browser: Firefox iOS 33.0 -->
<!-- @ua_header: Mozilla/5.0 (iPhone; CPU OS 14_4_2 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) FxiOS/33.0 Mobile/15E148 Safari/605.1.15 -->
<!-- @reported_with: mobile-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/71887 -->
**URL**: https://auth.hulu.com/sso/salesforce/submit?entityId=https%3A%2F%2Fhelp.hulu.com
**Browser / Version**: Firefox iOS 33.0
**Operating System**: iOS 14.4.2
**Tested Another Browser**: Yes Chrome
**Problem type**: Site is not usable
**Description**: Page not loading correctly
**Steps to Reproduce**:
Sales force error when trying to access Help
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
auth hulu com site is not usable url browser version firefox ios operating system ios tested another browser yes chrome problem type site is not usable description page not loading correctly steps to reproduce sales force error when trying to access help browser configuration none from with ❤️
| 0
|
176,150 | 14,564,976,734 | IssuesEvent | 2020-12-17 06:21:28 | VowpalWabbit/vowpal_wabbit | https://api.github.com/repos/VowpalWabbit/vowpal_wabbit | opened | Matrix factorization working correctly? | Documentation
|
### Matrix factorization example
I am dealing with the example from the docs - Matrix factorization example
> https://github.com/VowpalWabbit/vowpal_wabbit/wiki/Matrix-factorization-example
I get the same rsme as in the documentation. I only used user_id and item_id
sample:
> rating |user |item
> 3 |u 163 |i 216
> 3 |u 465 |i 109
> 5 |u 469 |i 513
> 5 |u 242 |i 1137
> 4 |u 669 |i 340
vw -d train.vw -q ui --rank 10 --l2 0.001 --learning_rate 0.015 --passes 20 --decay_learning_rate 0.97 --power_t 0 -f movielens.reg -c -k --quiet
**rsme - 0.95**
But when I want to try additional features - similar to the example (Multiple features in a namespace)
I get a deterioration in rsme.
In example - append producer
> Lets take an example multiple-namespaces.vw:
1 |user 1 |item a |producer P
vw -t -d multiple-namespaces.vw --audit --rank 1 -q ui -q up --quiet | grep "^\t"| tr '\t' "\n"
I didn't find **_multiple-namespaces.vw_** and I appended "movie genre" from file "ml-100k/u.item"
sample:
> rating |user |item |theme
> 3 |u 822 |i 272 |t Drama
> 3 |u 332 |i 770 |t Crime Film_Noir Mystery Thriller
> 4 |u 615 |i 644 |t Documentary
> 5 |u 261 |i 340 |t Drama
> 3 |u 597 |i 824 |t Comedy
It is intuitively clear that the genre should definitely improve the model.
But this does not happen and the rsme is getting worse, in any training options
examples
1. **without "-q with t"**
vw -d train.vw -q ui --rank 10 --l2 0.001 --learning_rate 0.015 --passes 20 --decay_learning_rate 0.97 --power_t 0 -f movielens.reg -c -k --quiet
**rsme - 0.96**
2. **with -q ut**
!vw -d train.vw -q ui -q ut --rank 10 --l2 0.001 --learning_rate 0.015 --passes 20 --decay_learning_rate 0.97 --power_t 0 -f movielens.reg -c -k --quiet
**rsme - 0.97**
3. **with -q it**
!vw -d train.vw -q ui -q it --rank 10 --l2 0.001 --learning_rate 0.015 --passes 20 --decay_learning_rate 0.97 --power_t 0 -f movielens.reg -c -k --quiet
**rsme - 0.99**
I'm upset and don't understand
Any of my actions worsens the performance
**Questions**
1. What am I doing wrong?
Maybe the rank system doesn't work like that?
2. How to choose a --rank? It's not clear from the documentation.
What does it depend on? If I add additional fields - I need to increase --rank - how much?
3. I'm trying to make a recommendation system based on this functionality.
Maybe this is not the best option, what can you advise?
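For checking the reported numbers independently, RMSE (spelled "rsme" above) can be recomputed from VW's predictions file, e.g. `vw -t -d test.vw -i movielens.reg -p predictions.txt` — the file names here are assumptions. A minimal sketch of the metric itself:

```python
import math

def rmse(labels, predictions):
    """Root mean squared error between true ratings and VW predictions."""
    return math.sqrt(
        sum((p - y) ** 2 for y, p in zip(labels, predictions)) / len(labels)
    )

# toy values for illustration, not actual MovieLens output
print(round(rmse([3, 5, 4], [3.2, 4.6, 4.1]), 4))  # 0.2646
```

Comparing RMSE this way on a held-out split, rather than trusting the in-pass average, makes the effect of `--rank` and extra `-q` interactions easier to judge.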
|
1.0
|
Matrix factorization working correctly? - ### Matrix factorization example
I am dealing with the example from the docs - Matrix factorization example
> https://github.com/VowpalWabbit/vowpal_wabbit/wiki/Matrix-factorization-example
I get the same rsme as in the documentation. I only used user_id and item_id
sample:
> rating |user |item
> 3 |u 163 |i 216
> 3 |u 465 |i 109
> 5 |u 469 |i 513
> 5 |u 242 |i 1137
> 4 |u 669 |i 340
vw -d train.vw -q ui --rank 10 --l2 0.001 --learning_rate 0.015 --passes 20 --decay_learning_rate 0.97 --power_t 0 -f movielens.reg -c -k --quiet
**rsme - 0.95**
But when I want to try additional features - similar to the example (Multiple features in a namespace)
I get a deterioration in rsme.
In example - append producer
> Lets take an example multiple-namespaces.vw:
1 |user 1 |item a |producer P
vw -t -d multiple-namespaces.vw --audit --rank 1 -q ui -q up --quiet | grep "^\t"| tr '\t' "\n"
I didn't find **_multiple-namespaces.vw_** and I appended "movie genre" from file "ml-100k/u.item"
sample:
> rating |user |item |theme
> 3 |u 822 |i 272 |t Drama
> 3 |u 332 |i 770 |t Crime Film_Noir Mystery Thriller
> 4 |u 615 |i 644 |t Documentary
> 5 |u 261 |i 340 |t Drama
> 3 |u 597 |i 824 |t Comedy
It is intuitively clear that the genre should definitely improve the model.
But this does not happen and the rsme is getting worse, in any training options
examples
1. **without "-q with t"**
vw -d train.vw -q ui --rank 10 --l2 0.001 --learning_rate 0.015 --passes 20 --decay_learning_rate 0.97 --power_t 0 -f movielens.reg -c -k --quiet
**rsme - 0.96**
2. **with -q ut**
!vw -d train.vw -q ui -q ut --rank 10 --l2 0.001 --learning_rate 0.015 --passes 20 --decay_learning_rate 0.97 --power_t 0 -f movielens.reg -c -k --quiet
**rsme - 0.97**
3. **with -q it**
!vw -d train.vw -q ui -q it --rank 10 --l2 0.001 --learning_rate 0.015 --passes 20 --decay_learning_rate 0.97 --power_t 0 -f movielens.reg -c -k --quiet
**rsme - 0.99**
I'm upset and don't understand
Any of my actions worsens the performance
**Questions**
1. What am I doing wrong?
Maybe the rank system doesn't work like that?
2. How to choose a --rank? It's not clear from the documentation.
What does it depend on? If I add additional fields - I need to increase --rank - how much?
3. I'm trying to make a recommendation system based on this functionality.
Maybe this is not the best option, what can you advise?
|
non_process
|
matrix factorization working correctly matrix factorization example i am dealing with the example from the docs matrix factorization example i get the same rsme as in the documentation i only used user id and item id sample reating user item u i u i u i u i u i vw d train vw q ui rank learning rate passes decay learning rate power t f movielens reg c k quiet rsme but when i want to try additional features similar to the example multiple features in a namespace i get a deterioration in rsme in example append producer lets take an example multiple namespaces vw user item a producer p vw t d multiple namespaces vw audit rank q ui q up quiet grep t tr t n i didn t find multiple namespaces vw and i appended movie genre from file ml u item sample reating user item theme u i t drama u i t crime film noir mystery thriller u i t documentary u i t drama u i t comedy it is intuitively clear that the genre should definitely improve the model but this does not happen and the rsme is getting worse in any training options examples without q with t vw d train vw q ui rank learning rate passes decay learning rate power t f movielens reg c k quiet rsme with q ut vw d train vw q ui q ut rank learning rate passes decay learning rate power t f movielens reg c k quiet rsme with q it vw d train vw q ui q it rank learning rate passes decay learning rate power t f movielens reg c k quiet rsme i m upset and don t understand any of my actions worsens the performance questions what am i doing wrong maybe the rank system doesn t work like that how to choose a rank it s not clear from the documentation what does it depend on if i add additional fields i need to increase rank how much i m trying to make a recommendation system based on this functionality maybe this is not the best option what can you advise
| 0
|