**Panic in rancher logs after a scheduled CIS scan run**

Repo: rancher/rancher (https://api.github.com/repos/rancher/rancher) | Issue closed 2020-03-10 18:20:33
Labels: `[zube]: To Test`, `area/scan-tool`, `kind/bug-qa`, `status/blocker`

**What kind of request is this (question/bug/enhancement/feature request):** bug
**Steps to reproduce (least amount of steps as possible):**
- Schedule a CIS scan in a cluster through the API
- Set `cisScanConfig` through View in API --> Edit --> `scheduledClusterScan`:
```
"cisScanConfig": {
"overrideBenchmarkVersion": "rke-cis-1.4",
"profile": "permissive"
},
```
- Set `scheduleConfig` to:
```
"scheduleConfig": {
"cronSchedule": "*/3 * * * *",
"retention": 2,
"type": "/v3/schemas/scheduledClusterScanConfig"
},
```
- When the scan runs on the cluster, a panic appears in the Rancher logs and the scan is stuck in the `Creating` state.
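For reference, the two snippets above can be assembled into a single payload and sanity-checked before pasting it into "View in API --> Edit"; a minimal Python sketch (the exact nesting inside `scheduledClusterScan` is not shown in full above, so treat the top-level layout here as an assumption):

```python
import json

# Illustrative only: mirrors the two snippets from the reproduction steps.
# The exact nesting inside scheduledClusterScan is an assumption.
scheduled_cluster_scan = {
    "cisScanConfig": {
        "overrideBenchmarkVersion": "rke-cis-1.4",
        "profile": "permissive",
    },
    "scheduleConfig": {
        "cronSchedule": "*/3 * * * *",  # run every 3 minutes
        "retention": 2,                 # keep the last 2 reports
        "type": "/v3/schemas/scheduledClusterScanConfig",
    },
}

# Serialize the payload as it would be submitted through the API editor.
payload = json.dumps(scheduled_cluster_scan, indent=2)
print(payload)
```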
**Result:**
- Panic seen in the Rancher logs:
```
2020/03/06 20:33:09 [INFO] Marking CIS scan complete: ss-cis-1583526780000249907
2020/03/06 20:33:09 [ERROR] ClusterScanController c-p2t4r/ss-cis-1583526780000249907 [cisScanHandler] failed with : cisScanHandler: Updated: runner pod not yet deleted, will retry
2020/03/06 20:33:09 [INFO] Deleting chart using helm version: rancher-helm
[main] 2020/03/06 20:33:10 Starting Tiller v2.16+unreleased (tls=false)
[main] 2020/03/06 20:33:10 GRPC listening on :39688
[main] 2020/03/06 20:33:10 Probes listening on :38630
[main] 2020/03/06 20:33:10 Storage driver is ConfigMap
[main] 2020/03/06 20:33:10 Max history per release is 10
[storage] 2020/03/06 20:33:10 getting release history for "ss-cis-1583526780000249907"
[tiller] 2020/03/06 20:33:10 uninstall: Deleting ss-cis-1583526780000249907
[tiller] 2020/03/06 20:33:10 executing 0 pre-delete hooks for ss-cis-1583526780000249907
[tiller] 2020/03/06 20:33:10 hooks complete for pre-delete ss-cis-1583526780000249907
[storage] 2020/03/06 20:33:10 updating release "ss-cis-1583526780000249907.v1"
2020/03/06 20:33:10 [ERROR] ClusterScanController c-p2t4r/ss-cis-1583526780000249907 [cisScanHandler] failed with : cisScanHandler: Updated: runner pod not yet deleted, will retry
[kube] 2020/03/06 20:33:10 Starting delete for "ss-cis-1583526780000249907-rancher-cis-benchmark" Service
[kube] 2020/03/06 20:33:10 Starting delete for "security-scan-runner-ss-cis-1583526780000249907" Pod
[kube] 2020/03/06 20:33:10 Starting delete for "s-sa-ss-cis-1583526780000249907" ClusterRoleBinding
[kube] 2020/03/06 20:33:11 Starting delete for "s-sa-ss-cis-1583526780000249907" ClusterRole
[kube] 2020/03/06 20:33:11 Starting delete for "s-sa-ss-cis-1583526780000249907" ServiceAccount
[kube] 2020/03/06 20:33:11 Starting delete for "s-config-cm-ss-cis-1583526780000249907" ConfigMap
2020/03/06 20:33:11 [INFO] ClusterScanWatcher: Sync: alert manager not deployed
2020/03/06 20:33:11 [INFO] ClusterScanWatcher: Sync: alert manager not deployed
[kube] 2020/03/06 20:33:11 Starting delete for "s-plugins-cm-ss-cis-1583526780000249907" ConfigMap
[tiller] 2020/03/06 20:33:11 executing 0 post-delete hooks for ss-cis-1583526780000249907
[tiller] 2020/03/06 20:33:11 hooks complete for post-delete ss-cis-1583526780000249907
[tiller] 2020/03/06 20:33:11 purge requested for ss-cis-1583526780000249907
[storage] 2020/03/06 20:33:11 deleting release "ss-cis-1583526780000249907.v1"
2020/03/06 20:33:11 [WARNING] release "ss-cis-1583526780000249907" deleted
2020/03/06 20:33:12 [INFO] Deleting chart using helm version: rancher-helm
[main] 2020/03/06 20:33:12 Starting Tiller v2.16+unreleased (tls=false)
[main] 2020/03/06 20:33:12 GRPC listening on :52456
[main] 2020/03/06 20:33:12 Probes listening on :51426
[main] 2020/03/06 20:33:12 Storage driver is ConfigMap
[main] 2020/03/06 20:33:12 Max history per release is 10
[storage] 2020/03/06 20:33:13 getting release history for "ss-cis-1583526780000249907"
[tiller] 2020/03/06 20:33:13 uninstall: Release not loaded: ss-cis-1583526780000249907
2020/03/06 20:33:46 [ERROR] could not convert gke config to map
2020/03/06 20:33:46 [ERROR] could not convert gke config to map
2020/03/06 20:33:51 [ERROR] could not convert gke config to map
2020/03/06 20:33:53 [ERROR] could not convert gke config to map
2020/03/06 20:34:27 [ERROR] could not convert gke config to map
2020/03/06 20:34:29 [ERROR] NotifierController c-7zbgl/n-8dxcv [notifier-config-syncer] failed with : Failed to get service for alertmanager, service "cattle-prometheus/alertmanager-operated" not found
2020/03/06 20:34:53 [INFO] [certificates] Checking and deleting unused kube-kubelet certificates
2020/03/06 20:34:53 [INFO] [certificates] Deleting unused certificate: kube-kubelet-172-31-15-226
2020/03/06 20:34:53 [INFO] [certificates] Deleting unused certificate: kube-kubelet-172-31-11-60
2020-03-06 20:34:53.634444 I | mvcc: store.index: compact 53544
2020-03-06 20:34:53.662447 I | mvcc: finished scheduled compaction at 53544 (took 27.111304ms)
2020/03/06 20:34:55 [INFO] [certificates] Checking and deleting unused kube-kubelet certificates
2020/03/06 20:34:55 [INFO] [certificates] Deleting unused certificate: kube-kubelet-172-31-11-60
2020/03/06 20:34:55 [INFO] [certificates] Deleting unused certificate: kube-kubelet-172-31-11-156
2020/03/06 20:34:55 [INFO] [certificates] Checking and deleting unused kube-kubelet certificates
2020/03/06 20:34:55 [INFO] [certificates] Deleting unused certificate: kube-kubelet-172-31-11-156
2020/03/06 20:34:55 [INFO] [certificates] Deleting unused certificate: kube-kubelet-172-31-15-226
2020/03/06 20:35:15 [ERROR] could not convert gke config to map
2020/03/06 20:35:22 [ERROR] ProjectAlertRuleController p-j7cck/memory-close-to-resource-limited [project-alert-rule-controller] failed with : Failed to get service for alertmanager, service "cattle-prometheus/alertmanager-operated" not found, [project-alert-rule-deployer] failed with : failed to get Alertmanager Deployment information, statefulsets.apps "alertmanager-cluster-alerting" not found
2020/03/06 20:35:22 [ERROR] ProjectAlertRuleController p-vdmvw/memory-close-to-resource-limited [project-alert-rule-controller] failed with : Failed to get service for alertmanager, service "cattle-prometheus/alertmanager-operated" not found, [project-alert-rule-deployer] failed with : failed to get Alertmanager Deployment information, statefulsets.apps "alertmanager-cluster-alerting" not found
2020/03/06 20:35:22 [ERROR] ProjectAlertRuleController p-j7cck/less-than-half-workload-available [project-alert-rule-controller] failed with : Failed to get service for alertmanager, service "cattle-prometheus/alertmanager-operated" not found, [project-alert-rule-deployer] failed with : failed to get Alertmanager Deployment information, statefulsets.apps "alertmanager-cluster-alerting" not found
2020/03/06 20:35:22 [ERROR] ProjectAlertGroupController p-vdmvw/projectalert-workload-alert [project-alert-group-controller] failed with : Failed to get service for alertmanager, service "cattle-prometheus/alertmanager-operated" not found, [project-alert-group-deployer] failed with : failed to get Alertmanager Deployment information, statefulsets.apps "alertmanager-cluster-alerting" not found
2020/03/06 20:35:22 [ERROR] ProjectAlertRuleController p-vdmvw/less-than-half-workload-available [project-alert-rule-controller] failed with : Failed to get service for alertmanager, service "cattle-prometheus/alertmanager-operated" not found, [project-alert-rule-deployer] failed with : failed to get Alertmanager Deployment information, statefulsets.apps "alertmanager-cluster-alerting" not found
2020/03/06 20:35:22 [ERROR] ClusterAlertRuleController c-7zbgl/scheduler-system-service [cluster-alert-rule-controller] failed with : Failed to get service for alertmanager, service "cattle-prometheus/alertmanager-operated" not found, [cluster-alert-rule-deployer] failed with : failed to get Alertmanager Deployment information, statefulsets.apps "alertmanager-cluster-alerting" not found
2020/03/06 20:35:22 [ERROR] ClusterAlertRuleController c-7zbgl/db-over-size [cluster-alert-rule-controller] failed with : Failed to get service for alertmanager, service "cattle-prometheus/alertmanager-operated" not found, [cluster-alert-rule-deployer] failed with : failed to get Alertmanager Deployment information, statefulsets.apps "alertmanager-cluster-alerting" not found
2020/03/06 20:35:22 [ERROR] ClusterAlertRuleController c-7zbgl/no-leader [cluster-alert-rule-controller] failed with : Failed to get service for alertmanager, service "cattle-prometheus/alertmanager-operated" not found, [cluster-alert-rule-deployer] failed with : failed to get Alertmanager Deployment information, statefulsets.apps "alertmanager-cluster-alerting" not found
2020/03/06 20:35:22 [ERROR] ClusterAlertRuleController c-7zbgl/cluster-scan-scheduled-all [cluster-alert-rule-controller] failed with : Failed to get service for alertmanager, service "cattle-prometheus/alertmanager-operated" not found, [cluster-alert-rule-deployer] failed with : failed to get Alertmanager Deployment information, statefulsets.apps "alertmanager-cluster-alerting" not found
2020/03/06 20:35:22 [ERROR] ClusterAlertGroupController c-7zbgl/node-alert [cluster-alert-group-controller] failed with : Failed to get service for alertmanager, service "cattle-prometheus/alertmanager-operated" not found, [cluster-alert-group-deployer] failed with : failed to get Alertmanager Deployment information, statefulsets.apps "alertmanager-cluster-alerting" not found
2020/03/06 20:35:22 [ERROR] ClusterAlertRuleController c-7zbgl/etcd-system-service [cluster-alert-rule-controller] failed with : Failed to get service for alertmanager, service "cattle-prometheus/alertmanager-operated" not found, [cluster-alert-rule-deployer] failed with : failed to get Alertmanager Deployment information, statefulsets.apps "alertmanager-cluster-alerting" not found
2020/03/06 20:35:22 [ERROR] ClusterAlertGroupController c-7zbgl/kube-components-alert [cluster-alert-group-controller] failed with : Failed to get service for alertmanager, service "cattle-prometheus/alertmanager-operated" not found, [cluster-alert-group-deployer] failed with : failed to get Alertmanager Deployment information, statefulsets.apps "alertmanager-cluster-alerting" not found
2020/03/06 20:35:22 [ERROR] ClusterAlertGroupController c-7zbgl/etcd-alert [cluster-alert-group-controller] failed with : Failed to get service for alertmanager, service "cattle-prometheus/alertmanager-operated" not found, [cluster-alert-group-deployer] failed with : failed to get Alertmanager Deployment information, statefulsets.apps "alertmanager-cluster-alerting" not found
2020/03/06 20:35:22 [ERROR] ProjectAlertGroupController p-j7cck/projectalert-workload-alert [project-alert-group-controller] failed with : Failed to get service for alertmanager, service "cattle-prometheus/alertmanager-operated" not found, [project-alert-group-deployer] failed with : failed to get Alertmanager Deployment information, statefulsets.apps "alertmanager-cluster-alerting" not found
2020/03/06 20:35:22 [ERROR] ClusterAlertRuleController c-7zbgl/cluster-scan-manual-failure-only [cluster-alert-rule-controller] failed with : Failed to get service for alertmanager, service "cattle-prometheus/alertmanager-operated" not found, [cluster-alert-rule-deployer] failed with : failed to get Alertmanager Deployment information, statefulsets.apps "alertmanager-cluster-alerting" not found
2020/03/06 20:35:22 [ERROR] ClusterAlertRuleController c-7zbgl/node-disk-running-full [cluster-alert-rule-controller] failed with : Failed to get service for alertmanager, service "cattle-prometheus/alertmanager-operated" not found, [cluster-alert-rule-deployer] failed with : failed to get Alertmanager Deployment information, statefulsets.apps "alertmanager-cluster-alerting" not found
2020/03/06 20:35:22 [ERROR] ClusterAlertRuleController c-7zbgl/cluster-scan-scheduled--failure-only [cluster-alert-rule-controller] failed with : Failed to get service for alertmanager, service "cattle-prometheus/alertmanager-operated" not found, [cluster-alert-rule-deployer] failed with : failed to get Alertmanager Deployment information, statefulsets.apps "alertmanager-cluster-alerting" not found
2020/03/06 20:35:22 [ERROR] ClusterAlertRuleController c-7zbgl/cluster-scan-manual-all [cluster-alert-rule-controller] failed with : Failed to get service for alertmanager, service "cattle-prometheus/alertmanager-operated" not found, [cluster-alert-rule-deployer] failed with : failed to get Alertmanager Deployment information, statefulsets.apps "alertmanager-cluster-alerting" not found
2020/03/06 20:35:22 [ERROR] ClusterAlertRuleController c-7zbgl/controllermanager-system-service [cluster-alert-rule-controller] failed with : Failed to get service for alertmanager, service "cattle-prometheus/alertmanager-operated" not found, [cluster-alert-rule-deployer] failed with : failed to get Alertmanager Deployment information, statefulsets.apps "alertmanager-cluster-alerting" not found
2020/03/06 20:35:22 [ERROR] ClusterAlertRuleController c-7zbgl/high-number-of-leader-changes [cluster-alert-rule-controller] failed with : Failed to get service for alertmanager, service "cattle-prometheus/alertmanager-operated" not found, [cluster-alert-rule-deployer] failed with : failed to get Alertmanager Deployment information, statefulsets.apps "alertmanager-cluster-alerting" not found
2020/03/06 20:35:22 [ERROR] ClusterAlertRuleController c-7zbgl/high-memmory [cluster-alert-rule-controller] failed with : Failed to get service for alertmanager, service "cattle-prometheus/alertmanager-operated" not found, [cluster-alert-rule-deployer] failed with : failed to get Alertmanager Deployment information, statefulsets.apps "alertmanager-cluster-alerting" not found
2020/03/06 20:35:22 [ERROR] ClusterAlertRuleController c-7zbgl/high-cpu-load [cluster-alert-rule-controller] failed with : Failed to get service for alertmanager, service "cattle-prometheus/alertmanager-operated" not found, [cluster-alert-rule-deployer] failed with : failed to get Alertmanager Deployment information, statefulsets.apps "alertmanager-cluster-alerting" not found
E0306 20:35:44.148122 27 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted
2020/03/06 20:36:00 [INFO] cisScanHandler: Create: deploying helm chart
2020/03/06 20:36:00 [ERROR] ClusterScanController c-p2t4r/ss-cis-1583526960000227340 recovered from panic "runtime error: invalid memory address or nil pointer dereference". (err=<nil>) Call stack:
goroutine 17972 [running]:
github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/runtime.RecoverFromPanic(0xc00afadd30)
/go/src/github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:158 +0xb5
panic(0x365bd20, 0x6bdb040)
/usr/local/go/src/runtime/panic.go:679 +0x1b2
github.com/rancher/rancher/pkg/controllers/user/cis.(*cisScanHandler).Create(0xc00b40e600, 0xc00ccb8600, 0xc001971818, 0x44618c, 0x60, 0x3b60ac0)
/go/src/github.com/rancher/rancher/pkg/controllers/user/cis/clusterScanHandler.go:185 +0xe7f
github.com/rancher/rancher/vendor/github.com/rancher/types/apis/management.cattle.io/v3.(*clusterScanLifecycleAdapter).Create(0xc00b426910, 0x4656bc0, 0xc00ccb8600, 0xc00fa2da40, 0x1, 0x1, 0xc00bd4fe00)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/types/apis/management.cattle.io/v3/zz_generated_cluster_scan_lifecycle_adapter.go:30 +0x52
github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle.checkNil(0x4656bc0, 0xc00ccb8600, 0xc001971a08, 0xc00bd4f980, 0x0, 0x0, 0x40a60a)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle/object.go:191 +0x3e
github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle.(*objectLifecycleAdapter).record(0xc00b44a2d0, 0x4656bc0, 0xc00bd4f980, 0xc001971a08, 0xc00b426910, 0x1, 0x0, 0xc003559840)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle/object.go:181 +0xd0
github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle.(*objectLifecycleAdapter).create(0xc00b44a2d0, 0x4656bc0, 0xc00bd4f980, 0x0, 0x0, 0xc001971b01, 0x0, 0x0)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle/object.go:219 +0x2ac
github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle.(*objectLifecycleAdapter).sync(0xc00b44a2d0, 0xc00e4f6990, 0x22, 0x3db62e0, 0xc00bd4f080, 0x403501, 0x0, 0x0, 0x0)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle/object.go:64 +0x1c0
github.com/rancher/rancher/vendor/github.com/rancher/types/apis/management.cattle.io/v3.NewClusterScanLifecycleAdapter.func1(0xc00e4f6990, 0x22, 0xc00bd4f080, 0xc00bd4f080, 0x40a101, 0x10, 0x339c6c0)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/types/apis/management.cattle.io/v3/zz_generated_cluster_scan_lifecycle_adapter.go:60 +0x53
github.com/rancher/rancher/vendor/github.com/rancher/types/apis/management.cattle.io/v3.(*clusterScanController).AddClusterScopedHandler.func1(0xc00e4f6990, 0x22, 0x3db62e0, 0xc00bd4f080, 0xc006cb0cd8, 0x3, 0x3, 0x0)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/types/apis/management.cattle.io/v3/zz_generated_cluster_scan_controller.go:180 +0x79
github.com/rancher/rancher/vendor/github.com/rancher/norman/controller.(*genericController).syncHandler(0xc0089c7480, 0x339c980, 0xc00de9a740, 0x0, 0x0)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/controller/generic_controller.go:387 +0x371
github.com/rancher/rancher/vendor/github.com/rancher/norman/controller.(*genericController).processNextWorkItem(0xc0089c7480, 0x0)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/controller/generic_controller.go:296 +0xef
github.com/rancher/rancher/vendor/github.com/rancher/norman/controller.(*genericController).runWorker(0xc0089c7480)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/controller/generic_controller.go:284 +0x2b
github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc00bcb1a00)
/go/src/github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152 +0x5e
github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00bcb1a00, 0x3b9aca00, 0x0, 0x1, 0xc000aa0ae0)
/go/src/github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153 +0xf8
github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc00bcb1a00, 0x3b9aca00, 0xc000aa0ae0)
/go/src/github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x4d
created by github.com/rancher/rancher/vendor/github.com/rancher/norman/controller.(*genericController).run
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/controller/generic_controller.go:276 +0xe8
2020/03/06 20:36:00 [INFO] cisScanHandler: Create: deploying helm chart
2020/03/06 20:36:00 [ERROR] ClusterScanController c-p2t4r/ss-cis-1583526960000227340 recovered from panic "runtime error: invalid memory address or nil pointer dereference". (err=<nil>) Call stack:
goroutine 17972 [running]:
github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/runtime.RecoverFromPanic(0xc00afadd30)
/go/src/github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:158 +0xb5
panic(0x365bd20, 0x6bdb040)
/usr/local/go/src/runtime/panic.go:679 +0x1b2
github.com/rancher/rancher/pkg/controllers/user/cis.(*cisScanHandler).Create(0xc00b40e600, 0xc00ccb8780, 0xc00afad818, 0x44618c, 0x60, 0x3b60ac0)
/go/src/github.com/rancher/rancher/pkg/controllers/user/cis/clusterScanHandler.go:185 +0xe7f
github.com/rancher/rancher/vendor/github.com/rancher/types/apis/management.cattle.io/v3.(*clusterScanLifecycleAdapter).Create(0xc00b426910, 0x4656bc0, 0xc00ccb8780, 0xc00fa2db00, 0x1, 0x1, 0xc00bae2780)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/types/apis/management.cattle.io/v3/zz_generated_cluster_scan_lifecycle_adapter.go:30 +0x52
github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle.checkNil(0x4656bc0, 0xc00ccb8780, 0xc00afada08, 0xc00bae2600, 0x0, 0x0, 0x40a60a)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle/object.go:191 +0x3e
github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle.(*objectLifecycleAdapter).record(0xc00b44a2d0, 0x4656bc0, 0xc00bae2600, 0xc00afada08, 0xc00b426910, 0x1, 0x0, 0xc003559fc0)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle/object.go:181 +0xd0
github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle.(*objectLifecycleAdapter).create(0xc00b44a2d0, 0x4656bc0, 0xc00bae2600, 0x0, 0x0, 0xc00afadb01, 0x0, 0x0)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle/object.go:219 +0x2ac
github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle.(*objectLifecycleAdapter).sync(0xc00b44a2d0, 0xc00e4f6990, 0x22, 0x3db62e0, 0xc00bae2600, 0x403501, 0x0, 0x0, 0x0)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle/object.go:64 +0x1c0
github.com/rancher/rancher/vendor/github.com/rancher/types/apis/management.cattle.io/v3.NewClusterScanLifecycleAdapter.func1(0xc00e4f6990, 0x22, 0xc00bae2600, 0xc00bae2600, 0x40a101, 0x10, 0x339c6c0)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/types/apis/management.cattle.io/v3/zz_generated_cluster_scan_lifecycle_adapter.go:60 +0x53
github.com/rancher/rancher/vendor/github.com/rancher/types/apis/management.cattle.io/v3.(*clusterScanController).AddClusterScopedHandler.func1(0xc00e4f6990, 0x22, 0x3db62e0, 0xc00bae2600, 0xc00afadcd8, 0x3, 0x3, 0x0)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/types/apis/management.cattle.io/v3/zz_generated_cluster_scan_controller.go:180 +0x79
github.com/rancher/rancher/vendor/github.com/rancher/norman/controller.(*genericController).syncHandler(0xc0089c7480, 0x339c980, 0xc00de9a740, 0x0, 0x0)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/controller/generic_controller.go:387 +0x371
github.com/rancher/rancher/vendor/github.com/rancher/norman/controller.(*genericController).processNextWorkItem(0xc0089c7480, 0x0)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/controller/generic_controller.go:296 +0xef
github.com/rancher/rancher/vendor/github.com/rancher/norman/controller.(*genericController).runWorker(0xc0089c7480)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/controller/generic_controller.go:284 +0x2b
github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc00bcb1a00)
/go/src/github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152 +0x5e
github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00bcb1a00, 0x3b9aca00, 0x0, 0x1, 0xc000aa0ae0)
/go/src/github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153 +0xf8
github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc00bcb1a00, 0x3b9aca00, 0xc000aa0ae0)
/go/src/github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x4d
created by github.com/rancher/rancher/vendor/github.com/rancher/norman/controller.(*genericController).run
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/controller/generic_controller.go:276 +0xe8
2020/03/06 20:36:00 [INFO] cisScanHandler: Create: deploying helm chart
2020/03/06 20:36:00 [ERROR] ClusterScanController c-p2t4r/ss-cis-1583526960000227340 recovered from panic "runtime error: invalid memory address or nil pointer dereference". (err=<nil>) Call stack:
goroutine 17976 [running]:
github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/runtime.RecoverFromPanic(0xc00700bd30)
/go/src/github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:158 +0xb5
panic(0x365bd20, 0x6bdb040)
/usr/local/go/src/runtime/panic.go:679 +0x1b2
github.com/rancher/rancher/pkg/controllers/user/cis.(*cisScanHandler).Create(0xc00b40e600, 0xc0095a4180, 0xc010ac5818, 0x44618c, 0x60, 0x3b60ac0)
/go/src/github.com/rancher/rancher/pkg/controllers/user/cis/clusterScanHandler.go:185 +0xe7f
github.com/rancher/rancher/vendor/github.com/rancher/types/apis/management.cattle.io/v3.(*clusterScanLifecycleAdapter).Create(0xc00b426910, 0x4656bc0, 0xc0095a4180, 0xc00c2c73e0, 0x1, 0x1, 0xc00bae2780)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/types/apis/management.cattle.io/v3/zz_generated_cluster_scan_lifecycle_adapter.go:30 +0x52
github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle.checkNil(0x4656bc0, 0xc0095a4180, 0xc010ac5a08, 0xc00bae2600, 0x0, 0x0, 0x40a60a)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle/object.go:191 +0x3e
github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle.(*objectLifecycleAdapter).record(0xc00b44a2d0, 0x4656bc0, 0xc00bae2600, 0xc010ac5a08, 0xc00b426910, 0x1, 0x0, 0xc007b899c0)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle/object.go:181 +0xd0
github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle.(*objectLifecycleAdapter).create(0xc00b44a2d0, 0x4656bc0, 0xc00bae2600, 0x0, 0x0, 0xc010ac5b01, 0x0, 0x0)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle/object.go:219 +0x2ac
github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle.(*objectLifecycleAdapter).sync(0xc00b44a2d0, 0xc00e4f6990, 0x22, 0x3db62e0, 0xc00bae2600, 0x403501, 0x0, 0x0, 0x0)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle/object.go:64 +0x1c0
github.com/rancher/rancher/vendor/github.com/rancher/types/apis/management.cattle.io/v3.NewClusterScanLifecycleAdapter.func1(0xc00e4f6990, 0x22, 0xc00bae2600, 0xc00bae2600, 0x40a101, 0x10, 0x339c6c0)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/types/apis/management.cattle.io/v3/zz_generated_cluster_scan_lifecycle_adapter.go:60 +0x53
github.com/rancher/rancher/vendor/github.com/rancher/types/apis/management.cattle.io/v3.(*clusterScanController).AddClusterScopedHandler.func1(0xc00e4f6990, 0x22, 0x3db62e0, 0xc00bae2600, 0xc005124cd8, 0x3, 0x3, 0x0)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/types/apis/management.cattle.io/v3/zz_generated_cluster_scan_controller.go:180 +0x79
github.com/rancher/rancher/vendor/github.com/rancher/norman/controller.(*genericController).syncHandler(0xc0089c7480, 0x339c980, 0xc00de9a740, 0x0, 0x0)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/controller/generic_controller.go:387 +0x371
github.com/rancher/rancher/vendor/github.com/rancher/norman/controller.(*genericController).processNextWorkItem(0xc0089c7480, 0x0)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/controller/generic_controller.go:296 +0xef
github.com/rancher/rancher/vendor/github.com/rancher/norman/controller.(*genericController).runWorker(0xc0089c7480)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/controller/generic_controller.go:284 +0x2b
github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc00bcb1a40)
/go/src/github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152 +0x5e
github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00bcb1a40, 0x3b9aca00, 0x0, 0x1, 0xc000aa0ae0)
/go/src/github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153 +0xf8
github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc00bcb1a40, 0x3b9aca00, 0xc000aa0ae0)
/go/src/github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x4d
created by github.com/rancher/rancher/vendor/github.com/rancher/norman/controller.(*genericController).run
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/controller/generic_controller.go:276 +0xe8
2020/03/06 20:36:02 [INFO] cisScanHandler: Create: deploying helm chart
2020/03/06 20:36:02 [ERROR] ClusterScanController c-p2t4r/ss-cis-1583526960000227340 recovered from panic "runtime error: invalid memory address or nil pointer dereference". (err=<nil>) Call stack:
goroutine 17977 [running]:
github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/runtime.RecoverFromPanic(0xc00746fd30)
/go/src/github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:158 +0xb5
panic(0x365bd20, 0x6bdb040)
/usr/local/go/src/runtime/panic.go:679 +0x1b2
github.com/rancher/rancher/pkg/controllers/user/cis.(*cisScanHandler).Create(0xc00b40e600, 0xc00ceb8f00, 0xc00305f818, 0x44618c, 0x60, 0x3b60ac0)
/go/src/github.com/rancher/rancher/pkg/controllers/user/cis/clusterScanHandler.go:185 +0xe7f
github.com/rancher/rancher/vendor/github.com/rancher/types/apis/management.cattle.io/v3.(*clusterScanLifecycleAdapter).Create(0xc00b426910, 0x4656bc0, 0xc00ceb8f00, 0xc00ca3ad80, 0x1, 0x1, 0xc00bae2780)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/types/apis/management.cattle.io/v3/zz_generated_cluster_scan_lifecycle_adapter.go:30 +0x52
github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle.checkNil(0x4656bc0, 0xc00ceb8f00, 0xc00305fa08, 0xc00bae2600, 0x0, 0x0, 0x40a60a)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle/object.go:191 +0x3e
github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle.(*objectLifecycleAdapter).record(0xc00b44a2d0, 0x4656bc0, 0xc00bae2600, 0xc00305fa08, 0xc00b426910, 0x1, 0x0, 0xc005f27600)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle/object.go:181 +0xd0
github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle.(*objectLifecycleAdapter).create(0xc00b44a2d0, 0x4656bc0, 0xc00bae2600, 0x0, 0x0, 0xc00305fb01, 0x0, 0x0)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle/object.go:219 +0x2ac
github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle.(*objectLifecycleAdapter).sync(0xc00b44a2d0, 0xc00e4f6990, 0x22, 0x3db62e0, 0xc00bae2600, 0x403501, 0x0, 0x0, 0x0)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle/object.go:64 +0x1c0
github.com/rancher/rancher/vendor/github.com/rancher/types/apis/management.cattle.io/v3.NewClusterScanLifecycleAdapter.func1(0xc00e4f6990, 0x22, 0xc00bae2600, 0xc00bae2600, 0x40a101, 0x10, 0x339c6c0)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/types/apis/management.cattle.io/v3/zz_generated_cluster_scan_lifecycle_adapter.go:60 +0x53
github.com/rancher/rancher/vendor/github.com/rancher/types/apis/management.cattle.io/v3.(*clusterScanController).AddClusterScopedHandler.func1(0xc00e4f6990, 0x22, 0x3db62e0, 0xc00bae2600, 0xc007094cd8, 0x3, 0x3, 0x0)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/types/apis/management.cattle.io/v3/zz_generated_cluster_scan_controller.go:180 +0x79
github.com/rancher/rancher/vendor/github.com/rancher/norman/controller.(*genericController).syncHandler(0xc0089c7480, 0x339c980, 0xc00de9a740, 0x0, 0x0)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/controller/generic_controller.go:387 +0x371
github.com/rancher/rancher/vendor/github.com/rancher/norman/controller.(*genericController).processNextWorkItem(0xc0089c7480, 0x0)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/controller/generic_controller.go:296 +0xef
github.com/rancher/rancher/vendor/github.com/rancher/norman/controller.(*genericController).runWorker(0xc0089c7480)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/controller/generic_controller.go:284 +0x2b
github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc00bcb1a50)
/go/src/github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152 +0x5e
github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00bcb1a50, 0x3b9aca00, 0x0, 0x1, 0xc000aa0ae0)
/go/src/github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153 +0xf8
github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc00bcb1a50, 0x3b9aca00, 0xc000aa0ae0)
/go/src/github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x4d
created by github.com/rancher/rancher/vendor/github.com/rancher/norman/controller.(*genericController).run
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/controller/generic_controller.go:276 +0xe8
```
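For reference, the `runtime error: invalid memory address or nil pointer dereference` reported at `clusterScanHandler.go:185` is the standard Go panic raised when a field is read through a nil pointer, which a controller's deferred `recover` then turns into the logged call stack. A minimal sketch of that failure mode (hypothetical struct and function names, not the actual Rancher code):

```go
package main

import "fmt"

// Hypothetical config types standing in for the scheduled-scan config;
// these are NOT the actual Rancher structs.
type ScanConfig struct {
	CisScanConfig *CisScanConfig
}

type CisScanConfig struct {
	Profile string
}

// handle dereferences cfg.CisScanConfig without a nil check, mirroring
// the kind of access that can panic when the nested config is absent.
func handle(cfg *ScanConfig) (profile string, err error) {
	defer func() {
		// Controllers like norman's genericController recover from such
		// panics and log them instead of crashing the process.
		if r := recover(); r != nil {
			err = fmt.Errorf("recovered from panic %q", r)
		}
	}()
	profile = cfg.CisScanConfig.Profile // panics if CisScanConfig is nil
	return profile, nil
}

func main() {
	_, err := handle(&ScanConfig{}) // nested config left nil
	fmt.Println(err)
}
```

Under this sketch the handler returns an error instead of crashing, which matches the `recovered from panic ... (err=<nil>)` lines in the log above; the actual root cause in `cisScanHandler.Create` would need to be confirmed against the source.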
**Other details that may be helpful:**
**Environment information**
- Rancher version (`rancher/rancher`/`rancher/server` image tag or shown bottom left in the UI): master-head - latest - commit id: `beba5247e`
- Installation option (single install/HA): single
**Cluster information**
- Cluster type (Hosted/Infrastructure Provider/Custom/Imported): RKE cluster on DigitalOcean nodes
- Kubernetes version (use kubectl version):
```
1.17
``` | 1.0 | Panic in rancher logs after a scheduled CIS scan run - **What kind of request is this (question/bug/enhancement/feature request):** bug
**Steps to reproduce (least amount of steps as possible):**
- Schedule a CIS scan in a cluster, through the API
- Give in `cisScanConfig` through View in API --> Edit --> `scheduledClusterScan`
```
cisScanConfig": {
"overrideBenchmarkVersion": "rke-cis-1.4",
"profile": "permissive"
},
```
- ScheduleConfig set to
```
"scheduleConfig": {
"cronSchedule": "*/3 * * * *",
"retention": 2,
"type": "/v3/schemas/scheduledClusterScanConfig"
},
```
- When the Scan is run on the cluster, Panic is seen, and the Scan is stuck in `Creating` state.
**Result:**
- Panic seen in rancher logs
- Rancher logs:
```
2020/03/06 20:33:09 [INFO] Marking CIS scan complete: ss-cis-1583526780000249907
2020/03/06 20:33:09 [ERROR] ClusterScanController c-p2t4r/ss-cis-1583526780000249907 [cisScanHandler] failed with : cisScanHandler: Updated: runner pod not yet deleted, will retry
2020/03/06 20:33:09 [INFO] Deleting chart using helm version: rancher-helm
[main] 2020/03/06 20:33:10 Starting Tiller v2.16+unreleased (tls=false)
[main] 2020/03/06 20:33:10 GRPC listening on :39688
[main] 2020/03/06 20:33:10 Probes listening on :38630
[main] 2020/03/06 20:33:10 Storage driver is ConfigMap
[main] 2020/03/06 20:33:10 Max history per release is 10
[storage] 2020/03/06 20:33:10 getting release history for "ss-cis-1583526780000249907"
[tiller] 2020/03/06 20:33:10 uninstall: Deleting ss-cis-1583526780000249907
[tiller] 2020/03/06 20:33:10 executing 0 pre-delete hooks for ss-cis-1583526780000249907
[tiller] 2020/03/06 20:33:10 hooks complete for pre-delete ss-cis-1583526780000249907
[storage] 2020/03/06 20:33:10 updating release "ss-cis-1583526780000249907.v1"
2020/03/06 20:33:10 [ERROR] ClusterScanController c-p2t4r/ss-cis-1583526780000249907 [cisScanHandler] failed with : cisScanHandler: Updated: runner pod not yet deleted, will retry
[kube] 2020/03/06 20:33:10 Starting delete for "ss-cis-1583526780000249907-rancher-cis-benchmark" Service
[kube] 2020/03/06 20:33:10 Starting delete for "security-scan-runner-ss-cis-1583526780000249907" Pod
[kube] 2020/03/06 20:33:10 Starting delete for "s-sa-ss-cis-1583526780000249907" ClusterRoleBinding
[kube] 2020/03/06 20:33:11 Starting delete for "s-sa-ss-cis-1583526780000249907" ClusterRole
[kube] 2020/03/06 20:33:11 Starting delete for "s-sa-ss-cis-1583526780000249907" ServiceAccount
[kube] 2020/03/06 20:33:11 Starting delete for "s-config-cm-ss-cis-1583526780000249907" ConfigMap
2020/03/06 20:33:11 [INFO] ClusterScanWatcher: Sync: alert manager not deployed
2020/03/06 20:33:11 [INFO] ClusterScanWatcher: Sync: alert manager not deployed
[kube] 2020/03/06 20:33:11 Starting delete for "s-plugins-cm-ss-cis-1583526780000249907" ConfigMap
[tiller] 2020/03/06 20:33:11 executing 0 post-delete hooks for ss-cis-1583526780000249907
[tiller] 2020/03/06 20:33:11 hooks complete for post-delete ss-cis-1583526780000249907
[tiller] 2020/03/06 20:33:11 purge requested for ss-cis-1583526780000249907
[storage] 2020/03/06 20:33:11 deleting release "ss-cis-1583526780000249907.v1"
2020/03/06 20:33:11 [WARNING] release "ss-cis-1583526780000249907" deleted
2020/03/06 20:33:12 [INFO] Deleting chart using helm version: rancher-helm
[main] 2020/03/06 20:33:12 Starting Tiller v2.16+unreleased (tls=false)
[main] 2020/03/06 20:33:12 GRPC listening on :52456
[main] 2020/03/06 20:33:12 Probes listening on :51426
[main] 2020/03/06 20:33:12 Storage driver is ConfigMap
[main] 2020/03/06 20:33:12 Max history per release is 10
[storage] 2020/03/06 20:33:13 getting release history for "ss-cis-1583526780000249907"
[tiller] 2020/03/06 20:33:13 uninstall: Release not loaded: ss-cis-1583526780000249907
2020/03/06 20:33:46 [ERROR] could not convert gke config to map
2020/03/06 20:33:46 [ERROR] could not convert gke config to map
2020/03/06 20:33:51 [ERROR] could not convert gke config to map
2020/03/06 20:33:53 [ERROR] could not convert gke config to map
2020/03/06 20:34:27 [ERROR] could not convert gke config to map
2020/03/06 20:34:29 [ERROR] NotifierController c-7zbgl/n-8dxcv [notifier-config-syncer] failed with : Failed to get service for alertmanager, service "cattle-prometheus/alertmanager-operated" not found
2020/03/06 20:34:53 [INFO] [certificates] Checking and deleting unused kube-kubelet certificates
2020/03/06 20:34:53 [INFO] [certificates] Deleting unused certificate: kube-kubelet-172-31-15-226
2020/03/06 20:34:53 [INFO] [certificates] Deleting unused certificate: kube-kubelet-172-31-11-60
2020-03-06 20:34:53.634444 I | mvcc: store.index: compact 53544
2020-03-06 20:34:53.662447 I | mvcc: finished scheduled compaction at 53544 (took 27.111304ms)
2020/03/06 20:34:55 [INFO] [certificates] Checking and deleting unused kube-kubelet certificates
2020/03/06 20:34:55 [INFO] [certificates] Deleting unused certificate: kube-kubelet-172-31-11-60
2020/03/06 20:34:55 [INFO] [certificates] Deleting unused certificate: kube-kubelet-172-31-11-156
2020/03/06 20:34:55 [INFO] [certificates] Checking and deleting unused kube-kubelet certificates
2020/03/06 20:34:55 [INFO] [certificates] Deleting unused certificate: kube-kubelet-172-31-11-156
2020/03/06 20:34:55 [INFO] [certificates] Deleting unused certificate: kube-kubelet-172-31-15-226
2020/03/06 20:35:15 [ERROR] could not convert gke config to map
2020/03/06 20:35:22 [ERROR] ProjectAlertRuleController p-j7cck/memory-close-to-resource-limited [project-alert-rule-controller] failed with : Failed to get service for alertmanager, service "cattle-prometheus/alertmanager-operated" not found, [project-alert-rule-deployer] failed with : failed to get Alertmanager Deployment information, statefulsets.apps "alertmanager-cluster-alerting" not found
2020/03/06 20:35:22 [ERROR] ProjectAlertRuleController p-vdmvw/memory-close-to-resource-limited [project-alert-rule-controller] failed with : Failed to get service for alertmanager, service "cattle-prometheus/alertmanager-operated" not found, [project-alert-rule-deployer] failed with : failed to get Alertmanager Deployment information, statefulsets.apps "alertmanager-cluster-alerting" not found
2020/03/06 20:35:22 [ERROR] ProjectAlertRuleController p-j7cck/less-than-half-workload-available [project-alert-rule-controller] failed with : Failed to get service for alertmanager, service "cattle-prometheus/alertmanager-operated" not found, [project-alert-rule-deployer] failed with : failed to get Alertmanager Deployment information, statefulsets.apps "alertmanager-cluster-alerting" not found
2020/03/06 20:35:22 [ERROR] ProjectAlertGroupController p-vdmvw/projectalert-workload-alert [project-alert-group-controller] failed with : Failed to get service for alertmanager, service "cattle-prometheus/alertmanager-operated" not found, [project-alert-group-deployer] failed with : failed to get Alertmanager Deployment information, statefulsets.apps "alertmanager-cluster-alerting" not found
2020/03/06 20:35:22 [ERROR] ProjectAlertRuleController p-vdmvw/less-than-half-workload-available [project-alert-rule-controller] failed with : Failed to get service for alertmanager, service "cattle-prometheus/alertmanager-operated" not found, [project-alert-rule-deployer] failed with : failed to get Alertmanager Deployment information, statefulsets.apps "alertmanager-cluster-alerting" not found
2020/03/06 20:35:22 [ERROR] ClusterAlertRuleController c-7zbgl/scheduler-system-service [cluster-alert-rule-controller] failed with : Failed to get service for alertmanager, service "cattle-prometheus/alertmanager-operated" not found, [cluster-alert-rule-deployer] failed with : failed to get Alertmanager Deployment information, statefulsets.apps "alertmanager-cluster-alerting" not found
2020/03/06 20:35:22 [ERROR] ClusterAlertRuleController c-7zbgl/db-over-size [cluster-alert-rule-controller] failed with : Failed to get service for alertmanager, service "cattle-prometheus/alertmanager-operated" not found, [cluster-alert-rule-deployer] failed with : failed to get Alertmanager Deployment information, statefulsets.apps "alertmanager-cluster-alerting" not found
2020/03/06 20:35:22 [ERROR] ClusterAlertRuleController c-7zbgl/no-leader [cluster-alert-rule-controller] failed with : Failed to get service for alertmanager, service "cattle-prometheus/alertmanager-operated" not found, [cluster-alert-rule-deployer] failed with : failed to get Alertmanager Deployment information, statefulsets.apps "alertmanager-cluster-alerting" not found
2020/03/06 20:35:22 [ERROR] ClusterAlertRuleController c-7zbgl/cluster-scan-scheduled-all [cluster-alert-rule-controller] failed with : Failed to get service for alertmanager, service "cattle-prometheus/alertmanager-operated" not found, [cluster-alert-rule-deployer] failed with : failed to get Alertmanager Deployment information, statefulsets.apps "alertmanager-cluster-alerting" not found
2020/03/06 20:35:22 [ERROR] ClusterAlertGroupController c-7zbgl/node-alert [cluster-alert-group-controller] failed with : Failed to get service for alertmanager, service "cattle-prometheus/alertmanager-operated" not found, [cluster-alert-group-deployer] failed with : failed to get Alertmanager Deployment information, statefulsets.apps "alertmanager-cluster-alerting" not found
2020/03/06 20:35:22 [ERROR] ClusterAlertRuleController c-7zbgl/etcd-system-service [cluster-alert-rule-controller] failed with : Failed to get service for alertmanager, service "cattle-prometheus/alertmanager-operated" not found, [cluster-alert-rule-deployer] failed with : failed to get Alertmanager Deployment information, statefulsets.apps "alertmanager-cluster-alerting" not found
2020/03/06 20:35:22 [ERROR] ClusterAlertGroupController c-7zbgl/kube-components-alert [cluster-alert-group-controller] failed with : Failed to get service for alertmanager, service "cattle-prometheus/alertmanager-operated" not found, [cluster-alert-group-deployer] failed with : failed to get Alertmanager Deployment information, statefulsets.apps "alertmanager-cluster-alerting" not found
2020/03/06 20:35:22 [ERROR] ClusterAlertGroupController c-7zbgl/etcd-alert [cluster-alert-group-controller] failed with : Failed to get service for alertmanager, service "cattle-prometheus/alertmanager-operated" not found, [cluster-alert-group-deployer] failed with : failed to get Alertmanager Deployment information, statefulsets.apps "alertmanager-cluster-alerting" not found
2020/03/06 20:35:22 [ERROR] ProjectAlertGroupController p-j7cck/projectalert-workload-alert [project-alert-group-controller] failed with : Failed to get service for alertmanager, service "cattle-prometheus/alertmanager-operated" not found, [project-alert-group-deployer] failed with : failed to get Alertmanager Deployment information, statefulsets.apps "alertmanager-cluster-alerting" not found
2020/03/06 20:35:22 [ERROR] ClusterAlertRuleController c-7zbgl/cluster-scan-manual-failure-only [cluster-alert-rule-controller] failed with : Failed to get service for alertmanager, service "cattle-prometheus/alertmanager-operated" not found, [cluster-alert-rule-deployer] failed with : failed to get Alertmanager Deployment information, statefulsets.apps "alertmanager-cluster-alerting" not found
2020/03/06 20:35:22 [ERROR] ClusterAlertRuleController c-7zbgl/node-disk-running-full [cluster-alert-rule-controller] failed with : Failed to get service for alertmanager, service "cattle-prometheus/alertmanager-operated" not found, [cluster-alert-rule-deployer] failed with : failed to get Alertmanager Deployment information, statefulsets.apps "alertmanager-cluster-alerting" not found
2020/03/06 20:35:22 [ERROR] ClusterAlertRuleController c-7zbgl/cluster-scan-scheduled--failure-only [cluster-alert-rule-controller] failed with : Failed to get service for alertmanager, service "cattle-prometheus/alertmanager-operated" not found, [cluster-alert-rule-deployer] failed with : failed to get Alertmanager Deployment information, statefulsets.apps "alertmanager-cluster-alerting" not found
2020/03/06 20:35:22 [ERROR] ClusterAlertRuleController c-7zbgl/cluster-scan-manual-all [cluster-alert-rule-controller] failed with : Failed to get service for alertmanager, service "cattle-prometheus/alertmanager-operated" not found, [cluster-alert-rule-deployer] failed with : failed to get Alertmanager Deployment information, statefulsets.apps "alertmanager-cluster-alerting" not found
2020/03/06 20:35:22 [ERROR] ClusterAlertRuleController c-7zbgl/controllermanager-system-service [cluster-alert-rule-controller] failed with : Failed to get service for alertmanager, service "cattle-prometheus/alertmanager-operated" not found, [cluster-alert-rule-deployer] failed with : failed to get Alertmanager Deployment information, statefulsets.apps "alertmanager-cluster-alerting" not found
2020/03/06 20:35:22 [ERROR] ClusterAlertRuleController c-7zbgl/high-number-of-leader-changes [cluster-alert-rule-controller] failed with : Failed to get service for alertmanager, service "cattle-prometheus/alertmanager-operated" not found, [cluster-alert-rule-deployer] failed with : failed to get Alertmanager Deployment information, statefulsets.apps "alertmanager-cluster-alerting" not found
2020/03/06 20:35:22 [ERROR] ClusterAlertRuleController c-7zbgl/high-memmory [cluster-alert-rule-controller] failed with : Failed to get service for alertmanager, service "cattle-prometheus/alertmanager-operated" not found, [cluster-alert-rule-deployer] failed with : failed to get Alertmanager Deployment information, statefulsets.apps "alertmanager-cluster-alerting" not found
2020/03/06 20:35:22 [ERROR] ClusterAlertRuleController c-7zbgl/high-cpu-load [cluster-alert-rule-controller] failed with : Failed to get service for alertmanager, service "cattle-prometheus/alertmanager-operated" not found, [cluster-alert-rule-deployer] failed with : failed to get Alertmanager Deployment information, statefulsets.apps "alertmanager-cluster-alerting" not found
E0306 20:35:44.148122 27 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted
2020/03/06 20:36:00 [INFO] cisScanHandler: Create: deploying helm chart
2020/03/06 20:36:00 [ERROR] ClusterScanController c-p2t4r/ss-cis-1583526960000227340 recovered from panic "runtime error: invalid memory address or nil pointer dereference". (err=<nil>) Call stack:
goroutine 17972 [running]:
github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/runtime.RecoverFromPanic(0xc00afadd30)
/go/src/github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:158 +0xb5
panic(0x365bd20, 0x6bdb040)
/usr/local/go/src/runtime/panic.go:679 +0x1b2
github.com/rancher/rancher/pkg/controllers/user/cis.(*cisScanHandler).Create(0xc00b40e600, 0xc00ccb8600, 0xc001971818, 0x44618c, 0x60, 0x3b60ac0)
/go/src/github.com/rancher/rancher/pkg/controllers/user/cis/clusterScanHandler.go:185 +0xe7f
github.com/rancher/rancher/vendor/github.com/rancher/types/apis/management.cattle.io/v3.(*clusterScanLifecycleAdapter).Create(0xc00b426910, 0x4656bc0, 0xc00ccb8600, 0xc00fa2da40, 0x1, 0x1, 0xc00bd4fe00)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/types/apis/management.cattle.io/v3/zz_generated_cluster_scan_lifecycle_adapter.go:30 +0x52
github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle.checkNil(0x4656bc0, 0xc00ccb8600, 0xc001971a08, 0xc00bd4f980, 0x0, 0x0, 0x40a60a)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle/object.go:191 +0x3e
github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle.(*objectLifecycleAdapter).record(0xc00b44a2d0, 0x4656bc0, 0xc00bd4f980, 0xc001971a08, 0xc00b426910, 0x1, 0x0, 0xc003559840)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle/object.go:181 +0xd0
github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle.(*objectLifecycleAdapter).create(0xc00b44a2d0, 0x4656bc0, 0xc00bd4f980, 0x0, 0x0, 0xc001971b01, 0x0, 0x0)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle/object.go:219 +0x2ac
github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle.(*objectLifecycleAdapter).sync(0xc00b44a2d0, 0xc00e4f6990, 0x22, 0x3db62e0, 0xc00bd4f080, 0x403501, 0x0, 0x0, 0x0)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle/object.go:64 +0x1c0
github.com/rancher/rancher/vendor/github.com/rancher/types/apis/management.cattle.io/v3.NewClusterScanLifecycleAdapter.func1(0xc00e4f6990, 0x22, 0xc00bd4f080, 0xc00bd4f080, 0x40a101, 0x10, 0x339c6c0)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/types/apis/management.cattle.io/v3/zz_generated_cluster_scan_lifecycle_adapter.go:60 +0x53
github.com/rancher/rancher/vendor/github.com/rancher/types/apis/management.cattle.io/v3.(*clusterScanController).AddClusterScopedHandler.func1(0xc00e4f6990, 0x22, 0x3db62e0, 0xc00bd4f080, 0xc006cb0cd8, 0x3, 0x3, 0x0)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/types/apis/management.cattle.io/v3/zz_generated_cluster_scan_controller.go:180 +0x79
github.com/rancher/rancher/vendor/github.com/rancher/norman/controller.(*genericController).syncHandler(0xc0089c7480, 0x339c980, 0xc00de9a740, 0x0, 0x0)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/controller/generic_controller.go:387 +0x371
github.com/rancher/rancher/vendor/github.com/rancher/norman/controller.(*genericController).processNextWorkItem(0xc0089c7480, 0x0)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/controller/generic_controller.go:296 +0xef
github.com/rancher/rancher/vendor/github.com/rancher/norman/controller.(*genericController).runWorker(0xc0089c7480)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/controller/generic_controller.go:284 +0x2b
github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc00bcb1a00)
/go/src/github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152 +0x5e
github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00bcb1a00, 0x3b9aca00, 0x0, 0x1, 0xc000aa0ae0)
/go/src/github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153 +0xf8
github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc00bcb1a00, 0x3b9aca00, 0xc000aa0ae0)
/go/src/github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x4d
created by github.com/rancher/rancher/vendor/github.com/rancher/norman/controller.(*genericController).run
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/controller/generic_controller.go:276 +0xe8
2020/03/06 20:36:00 [INFO] cisScanHandler: Create: deploying helm chart
2020/03/06 20:36:00 [ERROR] ClusterScanController c-p2t4r/ss-cis-1583526960000227340 recovered from panic "runtime error: invalid memory address or nil pointer dereference". (err=<nil>) Call stack:
goroutine 17972 [running]:
github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/runtime.RecoverFromPanic(0xc00afadd30)
/go/src/github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:158 +0xb5
panic(0x365bd20, 0x6bdb040)
/usr/local/go/src/runtime/panic.go:679 +0x1b2
github.com/rancher/rancher/pkg/controllers/user/cis.(*cisScanHandler).Create(0xc00b40e600, 0xc00ccb8780, 0xc00afad818, 0x44618c, 0x60, 0x3b60ac0)
/go/src/github.com/rancher/rancher/pkg/controllers/user/cis/clusterScanHandler.go:185 +0xe7f
github.com/rancher/rancher/vendor/github.com/rancher/types/apis/management.cattle.io/v3.(*clusterScanLifecycleAdapter).Create(0xc00b426910, 0x4656bc0, 0xc00ccb8780, 0xc00fa2db00, 0x1, 0x1, 0xc00bae2780)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/types/apis/management.cattle.io/v3/zz_generated_cluster_scan_lifecycle_adapter.go:30 +0x52
github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle.checkNil(0x4656bc0, 0xc00ccb8780, 0xc00afada08, 0xc00bae2600, 0x0, 0x0, 0x40a60a)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle/object.go:191 +0x3e
github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle.(*objectLifecycleAdapter).record(0xc00b44a2d0, 0x4656bc0, 0xc00bae2600, 0xc00afada08, 0xc00b426910, 0x1, 0x0, 0xc003559fc0)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle/object.go:181 +0xd0
github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle.(*objectLifecycleAdapter).create(0xc00b44a2d0, 0x4656bc0, 0xc00bae2600, 0x0, 0x0, 0xc00afadb01, 0x0, 0x0)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle/object.go:219 +0x2ac
github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle.(*objectLifecycleAdapter).sync(0xc00b44a2d0, 0xc00e4f6990, 0x22, 0x3db62e0, 0xc00bae2600, 0x403501, 0x0, 0x0, 0x0)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle/object.go:64 +0x1c0
github.com/rancher/rancher/vendor/github.com/rancher/types/apis/management.cattle.io/v3.NewClusterScanLifecycleAdapter.func1(0xc00e4f6990, 0x22, 0xc00bae2600, 0xc00bae2600, 0x40a101, 0x10, 0x339c6c0)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/types/apis/management.cattle.io/v3/zz_generated_cluster_scan_lifecycle_adapter.go:60 +0x53
github.com/rancher/rancher/vendor/github.com/rancher/types/apis/management.cattle.io/v3.(*clusterScanController).AddClusterScopedHandler.func1(0xc00e4f6990, 0x22, 0x3db62e0, 0xc00bae2600, 0xc00afadcd8, 0x3, 0x3, 0x0)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/types/apis/management.cattle.io/v3/zz_generated_cluster_scan_controller.go:180 +0x79
github.com/rancher/rancher/vendor/github.com/rancher/norman/controller.(*genericController).syncHandler(0xc0089c7480, 0x339c980, 0xc00de9a740, 0x0, 0x0)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/controller/generic_controller.go:387 +0x371
github.com/rancher/rancher/vendor/github.com/rancher/norman/controller.(*genericController).processNextWorkItem(0xc0089c7480, 0x0)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/controller/generic_controller.go:296 +0xef
github.com/rancher/rancher/vendor/github.com/rancher/norman/controller.(*genericController).runWorker(0xc0089c7480)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/controller/generic_controller.go:284 +0x2b
github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc00bcb1a00)
/go/src/github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152 +0x5e
github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00bcb1a00, 0x3b9aca00, 0x0, 0x1, 0xc000aa0ae0)
/go/src/github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153 +0xf8
github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc00bcb1a00, 0x3b9aca00, 0xc000aa0ae0)
/go/src/github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x4d
created by github.com/rancher/rancher/vendor/github.com/rancher/norman/controller.(*genericController).run
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/controller/generic_controller.go:276 +0xe8
2020/03/06 20:36:00 [INFO] cisScanHandler: Create: deploying helm chart
2020/03/06 20:36:00 [ERROR] ClusterScanController c-p2t4r/ss-cis-1583526960000227340 recovered from panic "runtime error: invalid memory address or nil pointer dereference". (err=<nil>) Call stack:
goroutine 17976 [running]:
github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/runtime.RecoverFromPanic(0xc00700bd30)
/go/src/github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:158 +0xb5
panic(0x365bd20, 0x6bdb040)
/usr/local/go/src/runtime/panic.go:679 +0x1b2
github.com/rancher/rancher/pkg/controllers/user/cis.(*cisScanHandler).Create(0xc00b40e600, 0xc0095a4180, 0xc010ac5818, 0x44618c, 0x60, 0x3b60ac0)
/go/src/github.com/rancher/rancher/pkg/controllers/user/cis/clusterScanHandler.go:185 +0xe7f
github.com/rancher/rancher/vendor/github.com/rancher/types/apis/management.cattle.io/v3.(*clusterScanLifecycleAdapter).Create(0xc00b426910, 0x4656bc0, 0xc0095a4180, 0xc00c2c73e0, 0x1, 0x1, 0xc00bae2780)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/types/apis/management.cattle.io/v3/zz_generated_cluster_scan_lifecycle_adapter.go:30 +0x52
github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle.checkNil(0x4656bc0, 0xc0095a4180, 0xc010ac5a08, 0xc00bae2600, 0x0, 0x0, 0x40a60a)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle/object.go:191 +0x3e
github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle.(*objectLifecycleAdapter).record(0xc00b44a2d0, 0x4656bc0, 0xc00bae2600, 0xc010ac5a08, 0xc00b426910, 0x1, 0x0, 0xc007b899c0)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle/object.go:181 +0xd0
github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle.(*objectLifecycleAdapter).create(0xc00b44a2d0, 0x4656bc0, 0xc00bae2600, 0x0, 0x0, 0xc010ac5b01, 0x0, 0x0)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle/object.go:219 +0x2ac
github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle.(*objectLifecycleAdapter).sync(0xc00b44a2d0, 0xc00e4f6990, 0x22, 0x3db62e0, 0xc00bae2600, 0x403501, 0x0, 0x0, 0x0)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle/object.go:64 +0x1c0
github.com/rancher/rancher/vendor/github.com/rancher/types/apis/management.cattle.io/v3.NewClusterScanLifecycleAdapter.func1(0xc00e4f6990, 0x22, 0xc00bae2600, 0xc00bae2600, 0x40a101, 0x10, 0x339c6c0)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/types/apis/management.cattle.io/v3/zz_generated_cluster_scan_lifecycle_adapter.go:60 +0x53
github.com/rancher/rancher/vendor/github.com/rancher/types/apis/management.cattle.io/v3.(*clusterScanController).AddClusterScopedHandler.func1(0xc00e4f6990, 0x22, 0x3db62e0, 0xc00bae2600, 0xc005124cd8, 0x3, 0x3, 0x0)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/types/apis/management.cattle.io/v3/zz_generated_cluster_scan_controller.go:180 +0x79
github.com/rancher/rancher/vendor/github.com/rancher/norman/controller.(*genericController).syncHandler(0xc0089c7480, 0x339c980, 0xc00de9a740, 0x0, 0x0)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/controller/generic_controller.go:387 +0x371
github.com/rancher/rancher/vendor/github.com/rancher/norman/controller.(*genericController).processNextWorkItem(0xc0089c7480, 0x0)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/controller/generic_controller.go:296 +0xef
github.com/rancher/rancher/vendor/github.com/rancher/norman/controller.(*genericController).runWorker(0xc0089c7480)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/controller/generic_controller.go:284 +0x2b
github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc00bcb1a40)
/go/src/github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152 +0x5e
github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00bcb1a40, 0x3b9aca00, 0x0, 0x1, 0xc000aa0ae0)
/go/src/github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153 +0xf8
github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc00bcb1a40, 0x3b9aca00, 0xc000aa0ae0)
/go/src/github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x4d
created by github.com/rancher/rancher/vendor/github.com/rancher/norman/controller.(*genericController).run
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/controller/generic_controller.go:276 +0xe8
2020/03/06 20:36:02 [INFO] cisScanHandler: Create: deploying helm chart
2020/03/06 20:36:02 [ERROR] ClusterScanController c-p2t4r/ss-cis-1583526960000227340 recovered from panic "runtime error: invalid memory address or nil pointer dereference". (err=<nil>) Call stack:
goroutine 17977 [running]:
github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/runtime.RecoverFromPanic(0xc00746fd30)
/go/src/github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:158 +0xb5
panic(0x365bd20, 0x6bdb040)
/usr/local/go/src/runtime/panic.go:679 +0x1b2
github.com/rancher/rancher/pkg/controllers/user/cis.(*cisScanHandler).Create(0xc00b40e600, 0xc00ceb8f00, 0xc00305f818, 0x44618c, 0x60, 0x3b60ac0)
/go/src/github.com/rancher/rancher/pkg/controllers/user/cis/clusterScanHandler.go:185 +0xe7f
github.com/rancher/rancher/vendor/github.com/rancher/types/apis/management.cattle.io/v3.(*clusterScanLifecycleAdapter).Create(0xc00b426910, 0x4656bc0, 0xc00ceb8f00, 0xc00ca3ad80, 0x1, 0x1, 0xc00bae2780)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/types/apis/management.cattle.io/v3/zz_generated_cluster_scan_lifecycle_adapter.go:30 +0x52
github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle.checkNil(0x4656bc0, 0xc00ceb8f00, 0xc00305fa08, 0xc00bae2600, 0x0, 0x0, 0x40a60a)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle/object.go:191 +0x3e
github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle.(*objectLifecycleAdapter).record(0xc00b44a2d0, 0x4656bc0, 0xc00bae2600, 0xc00305fa08, 0xc00b426910, 0x1, 0x0, 0xc005f27600)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle/object.go:181 +0xd0
github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle.(*objectLifecycleAdapter).create(0xc00b44a2d0, 0x4656bc0, 0xc00bae2600, 0x0, 0x0, 0xc00305fb01, 0x0, 0x0)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle/object.go:219 +0x2ac
github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle.(*objectLifecycleAdapter).sync(0xc00b44a2d0, 0xc00e4f6990, 0x22, 0x3db62e0, 0xc00bae2600, 0x403501, 0x0, 0x0, 0x0)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/lifecycle/object.go:64 +0x1c0
github.com/rancher/rancher/vendor/github.com/rancher/types/apis/management.cattle.io/v3.NewClusterScanLifecycleAdapter.func1(0xc00e4f6990, 0x22, 0xc00bae2600, 0xc00bae2600, 0x40a101, 0x10, 0x339c6c0)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/types/apis/management.cattle.io/v3/zz_generated_cluster_scan_lifecycle_adapter.go:60 +0x53
github.com/rancher/rancher/vendor/github.com/rancher/types/apis/management.cattle.io/v3.(*clusterScanController).AddClusterScopedHandler.func1(0xc00e4f6990, 0x22, 0x3db62e0, 0xc00bae2600, 0xc007094cd8, 0x3, 0x3, 0x0)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/types/apis/management.cattle.io/v3/zz_generated_cluster_scan_controller.go:180 +0x79
github.com/rancher/rancher/vendor/github.com/rancher/norman/controller.(*genericController).syncHandler(0xc0089c7480, 0x339c980, 0xc00de9a740, 0x0, 0x0)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/controller/generic_controller.go:387 +0x371
github.com/rancher/rancher/vendor/github.com/rancher/norman/controller.(*genericController).processNextWorkItem(0xc0089c7480, 0x0)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/controller/generic_controller.go:296 +0xef
github.com/rancher/rancher/vendor/github.com/rancher/norman/controller.(*genericController).runWorker(0xc0089c7480)
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/controller/generic_controller.go:284 +0x2b
github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc00bcb1a50)
/go/src/github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152 +0x5e
github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00bcb1a50, 0x3b9aca00, 0x0, 0x1, 0xc000aa0ae0)
/go/src/github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153 +0xf8
github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc00bcb1a50, 0x3b9aca00, 0xc000aa0ae0)
/go/src/github.com/rancher/rancher/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x4d
created by github.com/rancher/rancher/vendor/github.com/rancher/norman/controller.(*genericController).run
/go/src/github.com/rancher/rancher/vendor/github.com/rancher/norman/controller/generic_controller.go:276 +0xe8
```
**Other details that may be helpful:**
**Environment information**
- Rancher version (rancher/rancher/rancher/server image tag or shown bottom left in the UI): master-head - latest - commit id: `beba5247e`
- Installation option (single install/HA): single
**Cluster information**
- Cluster type (Hosted/Infrastructure Provider/Custom/Imported): rke DO cluster
- Kubernetes version (use kubectl version):
```
1.17
```
2,343 | 8,376,460,896 | IssuesEvent | 2018-10-05 19:54:40 | NervanaSystems/ngraph-mxnet | https://api.github.com/repos/NervanaSystems/ngraph-mxnet | closed | Rewrite the subgraph Identification algorithm to use greedy topological sort | maintainability | The two-pass algorithm we have now is slower than it needs to be; we could use a greedy topological sort algorithm instead. Might be made moot by https://github.com/NervanaSystems/ngraph-mxnet/issues/288 | True | main | 1
222,217 | 7,430,658,114 | IssuesEvent | 2018-03-25 05:03:21 | KAIST-IS521/2018s-onion-team3 | https://api.github.com/repos/KAIST-IS521/2018s-onion-team3 | closed | [Program] pgpcrypto.* file | Priority: High Status: In progress Type: Online | Work on the Issue #6 branch has been completed and submitted as a PR.
```
git fetch origin master
git pull origin master
```
Run the commands above to sync, then continue working.
- Please implement the Enc and Dec parts in the pgpcrypto.* files.
- Encryption and decryption each use PGP. Since the passPhrase is already used by keymanager, I will wire the two together later; for now, implement it so that the passPhrase is taken as input. | 1.0 | non_main | 0 |
46,771 | 5,829,118,530 | IssuesEvent | 2017-05-08 13:53:13 | matplotlib/matplotlib | https://api.github.com/repos/matplotlib/matplotlib | closed | PS backend is not tested | backend/ps Testing | As evidenced by #3523 and pointed out in #3526, the test suite doesn't exercise the ps backend at all. At the very least, I think we should have a smoketest of some sort to ensure that the backend is putting out valid files in the first place. | 1.0 | non_main | 0 |
456,855 | 13,151,016,507 | IssuesEvent | 2020-08-09 14:40:48 | chrisjsewell/docutils | https://api.github.com/repos/chrisjsewell/docutils | closed | Add Telephone Number recognition to the RST parser [SF:feature-requests:46] | closed-rejected feature-requests priority-5 |
author: timehorse
created: 2015-09-15 13:38:27.188000
assigned: None
SF_url: https://sourceforge.net/p/docutils/feature-requests/46
This proposal defines how phone numbers would be recognized and marked up in a document tree.
There are two recognition modes proposed: simple and explicit mode.
Simple Mode::
A phone number is defined as a sequence of digit groups joined by zero or more separator characters, specifically a period (.) or dash (-). The regular expression for the match would be defined by `r'\d+([-.]\d+)*'`.
The Simple Mode matching would only be allowed in the :Contact: field parsing.
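As a sketch (not part of the proposal's wording), the simple-mode pattern can be exercised directly with Python's `re` module; the pattern string is taken verbatim from the proposal above:

```python
import re

# Simple-mode pattern, verbatim from the proposal: one or more digits,
# then zero or more groups of a separator (- or .) followed by more digits.
SIMPLE_MODE = re.compile(r'\d+([-.]\d+)*')

def is_simple_phone(text):
    """Return True if the whole string matches the simple-mode pattern."""
    return SIMPLE_MODE.fullmatch(text) is not None

print(is_simple_phone("12.34.56.78.90"))  # True
print(is_simple_phone("514-555-1212"))    # True
print(is_simple_phone("514-555-"))        # False: trailing separator
```

Note that the pattern also accepts a bare run of digits such as "1234", which is exactly why the proposal restricts simple-mode matching to the :Contact: field rather than the whole document.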
Explicit Mode::
A phone number must be prefixed by the telephone markup to be recognized as a number. The markup can take the form of the three-letter mnemonic tel, which can optionally be capitalized and may optionally be followed by a period (.). This is followed by a colon and then the phone number as specified by the simple mode above. Its regular expression would be of the form `r'[Tt]el\.?:\s*\d+([-.]\d+)*'`.
The Explicit Mode matching would be across the entire document.
Markup::
Telephone elements would be marked up in the Doctree as follows::
<reference refuri="tel:12.34.56.78.90">
12.34.56.78.90
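For illustration of the target XML shape only (this uses plain ElementTree, not the docutils node API), the element above would serialize like so; the `refuri` attribute carries the `tel:` URI while the visible text omits the scheme:

```python
import xml.etree.ElementTree as ET

# Build the proposed doctree element: a reference whose refuri is a
# tel: URI and whose child text is the bare number.
ref = ET.Element("reference", refuri="tel:12.34.56.78.90")
ref.text = "12.34.56.78.90"

print(ET.tostring(ref, encoding="unicode"))
# <reference refuri="tel:12.34.56.78.90">12.34.56.78.90</reference>
```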
---
commenter: milde
posted: 2015-09-21 10:56:17.037000
title: #46 Add Telephone Number recognition to the RST parser
attachments:
- https://sourceforge.net/p/docutils/feature-requests/_discuss/thread/1a3a0432/09f9/attachment/tel.rst
I would not want an "explicit mode" different from what is already implemented:
The rST support for "standalone hyperlinks" is specified in
http://docutils.sourceforge.net/docs/ref/rst/restructuredtext.html#standalone-hyperlinks
Although not listed explicitly, "tel:" is among the "known schemes, per the Official IANA Registry of URI Schemes" and indeed, the example `tel:+45/123-4567` is converted to the XML `<reference refuri="tel:+45/123-4567">tel:+45/123-4567</reference>`.
If a different URI and display is desired, the various hyperlink references all allow the "tel:" scheme in the URI, e.g. `+45-321/12345 <tel:+45-321/12345>`__.
The "simple mode" could be considered as a special case, where valid "tel:" URIs are recognized also without the scheme. (Similar to email addresses but restricted to a "contact" field.)
---
commenter: timehorse
posted: 2015-09-21 18:32:47.416000
title: #46 Add Telephone Number recognition to the RST parser
I agree with following the URI rules used for other URI objects rather than an explicit mode. The existing markup for telephones and other URIs should be sufficient for resolving ambiguity, and defining special rules for the Contact field would be all I propose adding, since the Contact field already has a special "implied email" handler. Any numeric sequence conforming to a standard phone number, even one consisting only of a series of digits, should be a phone number, as there is already a separate Address field, so the Contact should not contain an address. But this is something we still may want to discuss.
---
commenter: milde
posted: 2015-09-22 15:39:39.574000
title: #46 Add Telephone Number recognition to the RST parser
Actually, the "implied email" handler is not restricted to the :contact: field. An email address is recognized anywhere in the document (try it). See also states.py.
This means that special handling of phone numbers in :contact: would be a novelty without precedent. This makes acceptance in the docutils core a bit harder. You will have to prove that the advantages over using a "normal" hyperlink reference outweigh the added complexity and the possibility of false positives.
For the screenplay writer, the alternatives would be
* use of conforming tel: URIs in the source,
* use hyperlink reference markup (`` `+32/1234-34 <tel:+32/1234-34>`__ ``) or similar (inconvenient),
* post-process the content of :contact: in a transform (idea: if the complete content conforms to a telephone number, convert to a hyperlink reference)
---
commenter: timehorse
posted: 2015-09-22 16:37:10.301000
title: #46 Add Telephone Number recognition to the RST parser
So we could then just say there is no simple mode, all references must be explicit as tel:514-555-1212 and that is already parsed into a refuri node as I understand what you're telling me though if one wants the writer to drop the tel: prefix one has to use the embedded link syntax: \`514-555-1212 <tel:514-555-1212>\`\_\_ which I assume binds the interpreted text into a self-referential hyperlink with node:
<reference refuri="tel:514-555-1212">
....514-555-1212
(BTW, this phone number in the +1 exchange [US/Canada] is information for Montréal, PQ.)
---
commenter: milde
posted: 2015-09-23 16:20:13.189000
title: #46 Add Telephone Number recognition to the RST parser
We could say:
If you want the telephone number to become a hyperlink, you can use a valid "tel:" URI (like tel:+35/12345.6789-33). If the link text should be different from the "href" value, use a
"hyperlinke reference", either regualr, anonymous or embedded.
Example::
:contact: `1234`__
:address: at home
...
__ tel:+33/456-1234
Would this suffice?
---
commenter: timehorse
posted: 2015-09-23 17:27:45.589000
title: #46 Add Telephone Number recognition to the RST parser
I think that's a clever way to get the linked number while keeping the nitty-gritty of how the hyperlink works somewhat opaque. So I do like it.
My main motivation though is just allowing more than one form of contact, like:
:contact: `(514) 555-1212`__
noone@nothing.com
...
__ tel:+1-514-555-1212
Which is a nice approach to solve the problem as stated but means if the only reason to have the markup is to just distingush it from an email address for post processing it's a bit of a kludge.
Recall that my main motivation is to make it easier for a Writer to pull out the docinfo date in order to populate a number of standard fields used in the Manuscript format. Clearly just marking it up as an interpreted role could also work but that seems to general to me. It may just mean, if you want to provide a phone number in a Manuscript document (for instance in the ODT writer and I assume Latex and clearly the same could hold for HTML templates) you have to mark it up in a standard URI using the Tel protocol.
I'm not per se happy about making it so complicated but I can't think of another solutuon which would avoid being unnecessarily essoteric.
---
commenter: milde
posted: 2015-09-24 09:37:17.322000
title: #46 Add Telephone Number recognition to the RST parser
How about using separate fields - even if these are not
standard docutils-docinfo-fields::
:author: A\. Hitchcock
:phone: 123/456 789-0
:email: ah@example.com
Looks good with standard writers and makes it easy for human readers as well as post processing software to interpret the contact info.
As current behaviour (recognition of tel: URI as standalone hyperlink) is close to the proposed "explicit mode" and somethin along the "simple mode" is better implemented as a transform acting on a :contact: or :phone: docinfo field, I suggest closing this ticket.
---
commenter: timehorse
posted: 2015-09-24 19:37:39.015000
title: #46 Add Telephone Number recognition to the RST parser
If we go wiew fields, which of course won't get promoted to DocInfo and must be searched for by a writer which could be a level of complexity that we might later find unwarrented if these new fields become standardized but we work with what we have, and with that being the result of this proposal of keeping the status quo and adding new fields to docinfo is itself a different issue I concur. I'm closing the ticket.
---
commenter: timehorse
posted: 2015-09-24 19:38:19.984000
title: #46 Add Telephone Number recognition to the RST parser
- **status**: open --> closed-rejected
| 1.0 | Add Telephone Number recognition to the RST parser [SF:feature-requests:46] -
author: timehorse
created: 2015-09-15 13:38:27.188000
assigned: None
SF_url: https://sourceforge.net/p/docutils/feature-requests/46
This proposal defines how phone numbers would be recognized and marked up in a document tree.
There are two recognition mode's proposed for phone number recognition: simple and explicit mode.
Simple Mode::
A phone number is defined as a sequence of digits separated by zero or more separation fields, specifically one of the set period (.) or dash (-). The regular expression for the match would be defined by `r'\d+([-.]\d+)*'`.
The Simple Mode matching would only be allowed in the :Contact: field parsing.
Explicit Mode::
A phone number must be prefixed by the telephone markup to be recognized as a number. The markup can take the form of the three letter mneumonic tel, which can optionally be capitalized and may optionally be followed by a period (.). This is followed by a volon and then the phone number as specified by the simple mode above. It's regular expression would be of the form `r'[Tt]el\.?:\s*\d+([-.]\d+)*'`.
The Explicit Mode matching would be across the entire document.
Markup::
Telephone elements would be marked up in the Doctree as follows::
<reference refuri="tel:12.34.56.78.90">
12.34.56.78.90
---
commenter: milde
posted: 2015-09-21 10:56:17.037000
title: #46 Add Telephone Number recognition to the RST parser
attachments:
- https://sourceforge.net/p/docutils/feature-requests/_discuss/thread/1a3a0432/09f9/attachment/tel.rst
I would not want an "explicit mode" different from what is already implemented:
The rST support for "standalone hyperlinks" is specified in
http://docutils.sourceforge.net/docs/ref/rst/restructuredtext.html#standalone-hyperlinks
Although not listed explicitely, "tel:" is among the "known schemes, per the Official IANA Registry of URI Schemes" and indeed, the example `tel:+45/123-4567` is converted to the XML `<reference refuri="tel:+45/123-4567">tel:+45/123-4567</reference>`.
If a different URI and display is desired, the various hyperlink references all allow the "tel:" scheme in the URI, e.g. `+45-321/12345 <tel:+45-321/12345>`__.
The "simple mode" could be considered as a special case, where valid "tel:" URIs are recognized also without the scheme. (Similar to email addresses but restricted to a "contact" field.)
---
commenter: timehorse
posted: 2015-09-21 18:32:47.416000
title: #46 Add Telephone Number recognition to the RST parser
I agree with following the URI rules for other URI objects rather than an explicit mode. The existing markup for telephones and other URIs should be sufficient for resolving ambiguity and then defining special rules for the Contact field would be all I propose added since the Contact field already has a special "implied email" handler. Any numberic sequence conforming to a standard phone number, even only of a series of digits should be a phone number as there is already a separate Address field so the Contact should not contain an address. But this is something we still may want to discuss.
---
commenter: milde
posted: 2015-09-22 15:39:39.574000
title: #46 Add Telephone Number recognition to the RST parser
Actually, the "implied email" handler is not restricted to the :contact: field. An email address is recognized anywhere in the document (try it). See also states.py.
This means that a special handling of phone numbers in :contact: would be a novity without precedence. This makes acceptance in the docutils core a bit harder. You will have to prove that the advantages over using a "normal" hyperlink reference outwigh the added complexity and the possibility of false positives.
For the screenplay writer, the alternatives would be
* use of conforming tel: URIs in the source,
* use hyperlink reference markup (`` `+32/1234-34 <tel:+32/1234-34>`__ ``) or similar (inconvenient),
* post-process the content of :contact: in a transform (idea: if the complete content conforms to a telephone number, convert to a hyperlink reference)
---
commenter: timehorse
posted: 2015-09-22 16:37:10.301000
title: #46 Add Telephone Number recognition to the RST parser
So we could then just say there is no simple mode, all references must be explicit as tel:514-555-1212 and that is already parsed into a refuri node as I understand what you're telling me though if one wants the writer to drop the tel: prefix one has to use the embedded link syntax: \`514-555-1212 <tel:514-555-1212>\`\_\_ which I assume binds the interpreted text into a self-referential hyperlink with node:
<reference refuri="tel:514-555-1212">
....514-555-1212
(BTW, this phone number in the +1 exchange [US/Canada] is information for Montréal, PQ.)
---
commenter: milde
posted: 2015-09-23 16:20:13.189000
title: #46 Add Telephone Number recognition to the RST parser
We could say:
If you want the telephone number to become a hyperlink, you can use a valid "tel:" URI (like tel:+35/12345.6789-33). If the link text should be different from the "href" value, use a
"hyperlinke reference", either regualr, anonymous or embedded.
Example::
:contact: `1234`__
:address: at home
...
__ tel:+33/456-1234
Would this suffice?
---
commenter: timehorse
posted: 2015-09-23 17:27:45.589000
title: #46 Add Telephone Number recognition to the RST parser
I think that's a clever way to get the linked number while keeping the nitty-gritty of how the hyperlink works somewhat opaque. So I do like it.
My main motivation though is just allowing more than one form of contact, like:
:contact: `(514) 555-1212`__
noone@nothing.com
...
__ tel:+1-514-555-1212
Which is a nice approach to solve the problem as stated but means if the only reason to have the markup is to just distingush it from an email address for post processing it's a bit of a kludge.
Recall that my main motivation is to make it easier for a Writer to pull out the docinfo date in order to populate a number of standard fields used in the Manuscript format. Clearly just marking it up as an interpreted role could also work but that seems to general to me. It may just mean, if you want to provide a phone number in a Manuscript document (for instance in the ODT writer and I assume Latex and clearly the same could hold for HTML templates) you have to mark it up in a standard URI using the Tel protocol.
I'm not per se happy about making it so complicated but I can't think of another solutuon which would avoid being unnecessarily essoteric.
---
commenter: milde
posted: 2015-09-24 09:37:17.322000
title: #46 Add Telephone Number recognition to the RST parser
How about using separate fields - even if these are not
standard docutils-docinfo-fields::
:author: A\. Hitchcock
:phone: 123/456 789-0
:email: ah@example.com
Looks good with standard writers and makes it easy for human readers as well as post processing software to interpret the contact info.
As current behaviour (recognition of tel: URI as standalone hyperlink) is close to the proposed "explicit mode" and somethin along the "simple mode" is better implemented as a transform acting on a :contact: or :phone: docinfo field, I suggest closing this ticket.
---
commenter: timehorse
posted: 2015-09-24 19:37:39.015000
title: #46 Add Telephone Number recognition to the RST parser
If we go wiew fields, which of course won't get promoted to DocInfo and must be searched for by a writer which could be a level of complexity that we might later find unwarrented if these new fields become standardized but we work with what we have, and with that being the result of this proposal of keeping the status quo and adding new fields to docinfo is itself a different issue I concur. I'm closing the ticket.
---
commenter: timehorse
posted: 2015-09-24 19:38:19.984000
title: #46 Add Telephone Number recognition to the RST parser
- **status**: open --> closed-rejected
| non_main | add telephone number recognition to the rst parser author timehorse created assigned none sf url this proposal defines how phone numbers would be recognized and marked up in a document tree there are two recognition mode s proposed for phone number recognition simple and explicit mode simple mode a phone number is defined as a sequence of digits separated by zero or more separation fields specifically one of the set period or dash the regular expression for the match would be defined by r d d the simple mode matching would only be allowed in the contact field parsing explicit mode a phone number must be prefixed by the telephone markup to be recognized as a number the markup can take the form of the three letter mneumonic tel which can optionally be capitalized and may optionally be followed by a period this is followed by a volon and then the phone number as specified by the simple mode above it s regular expression would be of the form r el s d d the explicit mode matching would be across the entire document markup telephone elements would be marked up in the doctree as follows commenter milde posted title add telephone number recognition to the rst parser attachments i would not want an explicit mode different from what is already implemented the rst support for standalone hyperlinks is specified in although not listed explicitely tel is among the known schemes per the official iana registry of uri schemes and indeed the example tel is converted to the xml tel if a different uri and display is desired the various hyperlink references all allow the tel scheme in the uri e g the simple mode could be considered as a special case where valid tel uris are recognized also without the scheme similar to email addresses but restricted to a contact field commenter timehorse posted title add telephone number recognition to the rst parser i agree with following the uri rules for other uri objects rather than an explicit mode the existing markup for telephones 
and other uris should be sufficient for resolving ambiguity and then defining special rules for the contact field would be all i propose added since the contact field already has a special implied email handler any numberic sequence conforming to a standard phone number even only of a series of digits should be a phone number as there is already a separate address field so the contact should not contain an address but this is something we still may want to discuss commenter milde posted title add telephone number recognition to the rst parser actually the implied email handler is not restricted to the contact field an email address is recognized anywhere in the document try it see also states py this means that a special handling of phone numbers in contact would be a novity without precedence this makes acceptance in the docutils core a bit harder you will have to prove that the advantages over using a normal hyperlink reference outwigh the added complexity and the possibility of false positives for the screenplay writer the alternatives would be use of conforming tel uris in the source use hyperlink reference markup or similar inconvenient post process the content of contact in a transform idea if the complete content conforms to a telephone number convert to a hyperlink reference commenter timehorse posted title add telephone number recognition to the rst parser so we could then just say there is no simple mode all references must be explicit as tel and that is already parsed into a refuri node as i understand what you re telling me though if one wants the writer to drop the tel prefix one has to use the embedded link syntax which i assume binds the interpreted text into a self referential hyperlink with node btw this phone number in the exchange is information for montréal pq commenter milde posted title add telephone number recognition to the rst parser we could say if you want the telephone number to become a hyperlink you can use a valid tel uri like tel if 
the link text should be different from the href value use a hyperlinke reference either regualr anonymous or embedded example contact address at home tel would this suffice commenter timehorse posted title add telephone number recognition to the rst parser i think that s a clever way to get the linked number while keeping the nitty gritty of how the hyperlink works somewhat opaque so i do like it my main motivation though is just allowing more than one form of contact like contact noone nothing com tel which is a nice approach to solve the problem as stated but means if the only reason to have the markup is to just distingush it from an email address for post processing it s a bit of a kludge recall that my main motivation is to make it easier for a writer to pull out the docinfo date in order to populate a number of standard fields used in the manuscript format clearly just marking it up as an interpreted role could also work but that seems to general to me it may just mean if you want to provide a phone number in a manuscript document for instance in the odt writer and i assume latex and clearly the same could hold for html templates you have to mark it up in a standard uri using the tel protocol i m not per se happy about making it so complicated but i can t think of another solutuon which would avoid being unnecessarily essoteric commenter milde posted title add telephone number recognition to the rst parser how about using separate fields even if these are not standard docutils docinfo fields author a hitchcock phone email ah example com looks good with standard writers and makes it easy for human readers as well as post processing software to interpret the contact info as current behaviour recognition of tel uri as standalone hyperlink is close to the proposed explicit mode and somethin along the simple mode is better implemented as a transform acting on a contact or phone docinfo field i suggest closing this ticket commenter timehorse posted title add 
telephone number recognition to the rst parser if we go wiew fields which of course won t get promoted to docinfo and must be searched for by a writer which could be a level of complexity that we might later find unwarrented if these new fields become standardized but we work with what we have and with that being the result of this proposal of keeping the status quo and adding new fields to docinfo is itself a different issue i concur i m closing the ticket commenter timehorse posted title add telephone number recognition to the rst parser status open closed rejected | 0 |
3,757 | 15,794,905,730 | IssuesEvent | 2021-04-02 12:03:17 | arcticicestudio/styleguide-markdown | https://api.github.com/repos/arcticicestudio/styleguide-markdown | closed | Monorepo with Remark packages | context-pkg context-workflow scope-compatibility scope-dx scope-maintainability scope-pkg-plugin-support scope-quality scope-stability target-pkg-remark-preset-lint type-feature | ## Current Project State
Currently this repository only contains the actual styleguide documentation while specific projects that implement the guidelines for linters and code style analyzer live in separate repositories. This is the best approach for modularity and a small and clear code base, but it increases the maintenance overhead by 1(n) since changes to the development workflow or toolbox, general project documentations as well as dependency management requires changes in every repository with dedicated tickets/issues and PRs. In particular, Node packages require frequent dependency management due to their fast development cycles to keep up-to-date with the latest package changes like (security) bug fixes.
This styleguide is currently implemented by the [remark-preset-lint-arcticicestudio][gh-arcticicestudio/remark-preset-lint-arcticicestudio] Node package living in its own repository. The development workflow is clean using most of GitHub's awesome features like project boards, _codeowner_ assignments, issue & PR automation and so on, but changes often require multiple actions when packages depend on each other or they use the same development tooling and documentation standards.
### Monorepo Comparison
[Monorepos][trbdev-monorepo] are a fantastic way to manage such a project structure, but there are also some points that must be taken into account:
- **No more scoped code** — the developer experience with Git is slightly worse because commits can contains changes to multiple scopes of the code. Since there is only a “transparent separation” of code, that was previously located in a dedicated repository but is not aggregated into a parent (e.g. `packages`) with other modules, commits can now contain changes to multiple code scopes spread over the entire code base.
- **No more assignment of commits to single modules** — like described in the bullet point above, commit can contain changes to multiple modules, it is harder to detect which commit targeted a specific module.
- **Steeper learning curve for new contributors** — in a dedicated repository that only hosts a specific module it is easier for new developers to contribute to the project, but in a monorepo they might need to change code in multiple places within other modules or the root code/documentation of the entire project.
- **Uniform version number** — in order to keep conform to [SemVer][], the entire project must use a uniform version number. This means that a module that has not been changed since the last version must also be incremented in order to keep compatible with the other modules.
Using different version numbers prefixed/suffixed with an individual version number **is a not an option**, **increases the maintenance overhead** and **and drastically reduces the project overview and quality**! This would result in multiple Git tags on the `main` branch as well as “empty” changelogs and release notes with placeholder logs that only refer to changes of other modules.
## Project Future
Even though a _monorepo_ requires some special thoughts, it also comes with a lot of benefits and makes sense **for specific project modules that are slightly coupled** and where using dedicated repositories only increases the maintenance overhead **when changes must be reflected in multiple modules anyway**.
In order to reduce the maintenance overhead, the [remark-preset-lint-arcticicestudio][gh-arcticicestudio/remark-preset-lint-arcticicestudio] Node package will be migrated into this repository by adapting to [Yarn workspaces][yarn-docs-ws]. This simplifies the development tooling setup and allows to use a unified documentation base as well as a smoother development and testing workflow.
This change also implies that the root of the repository will be the main package for the entire project setup including shared development dependencies, tools and documentations while the packages will only contain specific configurations and (dev)dependencies.
### Scoped Packages
Currently [remark-preset-lint-arcticicestudio][gh-arcticicestudio/remark-preset-lint-arcticicestudio] is not a [scoped package][npm-docs-scopes] but suffixed with `-arcticicestudio`. To simplify the naming and improving the usage of user/organization specific packages, it will be scoped to `@arcticicestudio` resulting in the new name `@arcticicestudio/remark-preset-lint`.
The currently released public version will be deprecated using the [`npm deprecate` command][npm-docs-cli-depr] where the provided message will point out to migrate to the new scoped packages.
### Versioning
The styleguide itself and all packages will use a shared/fixed/locked version. This helps all packages to keep in sync and ensure the compatibility with the latest style guide version.
[gh-arcticicestudio/remark-preset-lint-arcticicestudio]: https://github.com/arcticicestudio/remark-preset-lint-arcticicestudio
[npm-docs-cli-depr]: https://docs.npmjs.com/cli/deprecate
[npm-docs-scopes]: https://docs.npmjs.com/about-scopes
[semver]: https://semver.org
[trbdev-monorepo]: https://trunkbaseddevelopment.com/monorepos
[yarn-docs-ws]: https://yarnpkg.com/en/docs/workspaces
| True | Monorepo with Remark packages - ## Current Project State
Currently this repository only contains the actual styleguide documentation while specific projects that implement the guidelines for linters and code style analyzer live in separate repositories. This is the best approach for modularity and a small and clear code base, but it increases the maintenance overhead by 1(n) since changes to the development workflow or toolbox, general project documentations as well as dependency management requires changes in every repository with dedicated tickets/issues and PRs. In particular, Node packages require frequent dependency management due to their fast development cycles to keep up-to-date with the latest package changes like (security) bug fixes.
This styleguide is currently implemented by the [remark-preset-lint-arcticicestudio][gh-arcticicestudio/remark-preset-lint-arcticicestudio] Node package living in its own repository. The development workflow is clean using most of GitHub's awesome features like project boards, _codeowner_ assignments, issue & PR automation and so on, but changes often require multiple actions when packages depend on each other or they use the same development tooling and documentation standards.
### Monorepo Comparison
[Monorepos][trbdev-monorepo] are a fantastic way to manage such a project structure, but there are also some points that must be taken into account:
- **No more scoped code** — the developer experience with Git is slightly worse because commits can contains changes to multiple scopes of the code. Since there is only a “transparent separation” of code, that was previously located in a dedicated repository but is not aggregated into a parent (e.g. `packages`) with other modules, commits can now contain changes to multiple code scopes spread over the entire code base.
- **No more assignment of commits to single modules** — like described in the bullet point above, commit can contain changes to multiple modules, it is harder to detect which commit targeted a specific module.
- **Steeper learning curve for new contributors** — in a dedicated repository that only hosts a specific module it is easier for new developers to contribute to the project, but in a monorepo they might need to change code in multiple places within other modules or the root code/documentation of the entire project.
- **Uniform version number** — in order to keep conform to [SemVer][], the entire project must use a uniform version number. This means that a module that has not been changed since the last version must also be incremented in order to keep compatible with the other modules.
Using different version numbers prefixed/suffixed with an individual version number **is a not an option**, **increases the maintenance overhead** and **and drastically reduces the project overview and quality**! This would result in multiple Git tags on the `main` branch as well as “empty” changelogs and release notes with placeholder logs that only refer to changes of other modules.
## Project Future
Even though a _monorepo_ requires some special thoughts, it also comes with a lot of benefits and makes sense **for specific project modules that are slightly coupled** and where using dedicated repositories only increases the maintenance overhead **when changes must be reflected in multiple modules anyway**.
In order to reduce the maintenance overhead, the [remark-preset-lint-arcticicestudio][gh-arcticicestudio/remark-preset-lint-arcticicestudio] Node package will be migrated into this repository by adapting to [Yarn workspaces][yarn-docs-ws]. This simplifies the development tooling setup and allows to use a unified documentation base as well as a smoother development and testing workflow.
This change also implies that the root of the repository will be the main package for the entire project setup including shared development dependencies, tools and documentations while the packages will only contain specific configurations and (dev)dependencies.
### Scoped Packages
Currently [remark-preset-lint-arcticicestudio][gh-arcticicestudio/remark-preset-lint-arcticicestudio] is not a [scoped package][npm-docs-scopes] but suffixed with `-arcticicestudio`. To simplify the naming and improving the usage of user/organization specific packages, it will be scoped to `@arcticicestudio` resulting in the new name `@arcticicestudio/remark-preset-lint`.
The currently released public version will be deprecated using the [`npm deprecate` command][npm-docs-cli-depr] where the provided message will point out to migrate to the new scoped packages.
### Versioning
The styleguide itself and all packages will use a shared/fixed/locked version. This helps all packages to keep in sync and ensure the compatibility with the latest style guide version.
[gh-arcticicestudio/remark-preset-lint-arcticicestudio]: https://github.com/arcticicestudio/remark-preset-lint-arcticicestudio
[npm-docs-cli-depr]: https://docs.npmjs.com/cli/deprecate
[npm-docs-scopes]: https://docs.npmjs.com/about-scopes
[semver]: https://semver.org
[trbdev-monorepo]: https://trunkbaseddevelopment.com/monorepos
[yarn-docs-ws]: https://yarnpkg.com/en/docs/workspaces
| main | monorepo with remark packages current project state currently this repository only contains the actual styleguide documentation while specific projects that implement the guidelines for linters and code style analyzer live in separate repositories this is the best approach for modularity and a small and clear code base but it increases the maintenance overhead by n since changes to the development workflow or toolbox general project documentations as well as dependency management requires changes in every repository with dedicated tickets issues and prs in particular node packages require frequent dependency management due to their fast development cycles to keep up to date with the latest package changes like security bug fixes this styleguide is currently implemented by the node package living in its own repository the development workflow is clean using most of github s awesome features like project boards codeowner assignments issue pr automation and so on but changes often require multiple actions when packages depend on each other or they use the same development tooling and documentation standards monorepo comparison are a fantastic way to manage such a project structure but there are also some points that must be taken into account no more scoped code — the developer experience with git is slightly worse because commits can contains changes to multiple scopes of the code since there is only a “transparent separation” of code that was previously located in a dedicated repository but is not aggregated into a parent e g packages with other modules commits can now contain changes to multiple code scopes spread over the entire code base no more assignment of commits to single modules — like described in the bullet point above commit can contain changes to multiple modules it is harder to detect which commit targeted a specific module steeper learning curve for new contributors — in a dedicated repository that only hosts a specific module it is easier 
for new developers to contribute to the project but in a monorepo they might need to change code in multiple places within other modules or the root code documentation of the entire project uniform version number — in order to keep conform to the entire project must use a uniform version number this means that a module that has not been changed since the last version must also be incremented in order to keep compatible with the other modules using different version numbers prefixed suffixed with an individual version number is a not an option increases the maintenance overhead and and drastically reduces the project overview and quality this would result in multiple git tags on the main branch as well as “empty” changelogs and release notes with placeholder logs that only refer to changes of other modules project future even though a monorepo requires some special thoughts it also comes with a lot of benefits and makes sense for specific project modules that are slightly coupled and where using dedicated repositories only increases the maintenance overhead when changes must be reflected in multiple modules anyway in order to reduce the maintenance overhead the node package will be migrated into this repository by adapting to this simplifies the development tooling setup and allows to use a unified documentation base as well as a smoother development and testing workflow this change also implies that the root of the repository will be the main package for the entire project setup including shared development dependencies tools and documentations while the packages will only contain specific configurations and dev dependencies scoped packages currently is not a but suffixed with arcticicestudio to simplify the naming and improving the usage of user organization specific packages it will be scoped to arcticicestudio resulting in the new name arcticicestudio remark preset lint the currently released public version will be deprecated using the where the provided message 
will point out to migrate to the new scoped packages versioning the styleguide itself and all packages will use a shared fixed locked version this helps all packages to keep in sync and ensure the compatibility with the latest style guide version | 1 |
1,293 | 5,475,887,768 | IssuesEvent | 2017-03-11 15:42:14 | WhitestormJS/whitestorm.js | https://api.github.com/repos/WhitestormJS/whitestorm.js | opened | Move modules from repos to /modules/ | MAINTAINANCE | As we now use `/modules/` folder to keep modules. We should move there some other modules that we keep updating:
- [ ] [whs-module-statsjs](https://github.com/WhitestormJS/whs-module-statsjs)
- [ ] [whs-module-dat.gui](https://github.com/WhitestormJS/whs-module-dat.gui)
###### Version:
- [x] v2.x.x
- [ ] v1.x.x
###### Issue type:
- [ ] Bug
- [ ] Proposal/Enhancement
- [ ] Question
------
<details>
<summary> <b>Tested on: </b> </summary>
###### --- Desktop
- [ ] Chrome
- [ ] Chrome Canary
- [ ] Chrome dev-channel
- [ ] Firefox
- [ ] Opera
- [ ] Microsoft IE
- [ ] Microsoft Edge
###### --- Android
- [ ] Chrome
- [ ] Firefox
- [ ] Opera
###### --- IOS
- [ ] Chrome
- [ ] Firefox
- [ ] Opera
</details>
| True | main | 1 |
317,081 | 27,210,938,225 | IssuesEvent | 2023-02-20 16:28:35 | TEHE-Studios/ExpandedHelicopterEvents | https://api.github.com/repos/TEHE-Studios/ExpandedHelicopterEvents | closed | Anti-Cheat triggering - Malformed packet type 22 | bug PZ engine needs testing | [08-01-23 03:38:32.198] WARN : Multiplayer , 1673149112198> 45,546,333> PacketValidator.doKickUser> Kick: player="NIK" type="PacketValidator" issuer="Type22" description="null".
[08-01-23 03:38:32.199] WARN : General , 1673149112199> 45,546,334> GameServer.kick> The player NIK was kicked. The reason was UI_Policy_Kick, Type22.
I'm not 100% certain this mod is causing it, but I'm 99% sure this mod is causing it. | 1.0 | non_main | 0 |
1,962 | 6,688,875,248 | IssuesEvent | 2017-10-08 19:33:54 | cannawen/metric_units_reddit_bot | https://api.github.com/repos/cannawen/metric_units_reddit_bot | closed | "oz" is not being converted to "troy oz" in subreddit /r/Pmsforsale | bug first timers only hacktoberfest in progress maintainer approved | [metric_units](https://www.reddit.com/user/metric_units) is a sassy reddit bot that finds imperial units, and replies with a metric conversion.
## First timers only
This issue is reserved for anyone who has **never** made a pull request to Open Source Software. If you are not a first timer, we would still love your help! Please see our other open issues :)
[Read our New to OSS guide](https://github.com/cannawen/metric_units_reddit_bot/blob/master/NEW-TO-OSS.md) for an overview of what to expect
### IMPORTANT: Comment below if you would like to volunteer to fix this bug.
---
## Recommended experience
- Programming fundamentals (`if` statements, arrays, etc.)
- No previous experience working with Regular Expressions
- Some familiarity with how Reddit works
## Time estimate
30-60 minutes
## Background Information
So, you want to work on a Reddit bot that converts imperial units to metric units? Awesome! It's not an easy problem to solve though :( Imperial units are confusing!!
Take ounces, for example. When someone says "ounces" they usually mean [regular ("avoirdupois") ounces](https://en.wikipedia.org/wiki/Avoirdupois) (which is 28.3495 grams). But, they could also be referring to ["troy" ounces](https://en.wikipedia.org/wiki/Troy_weight) (31.1035 grams). Troy ounces are most often used when dealing with precious metals, like gold or silver
## The problem
The subreddit [/r/Pmsforsale](https://www.reddit.com/r/Pmsforsale/) is all about precious metals. They may refer to something as "ounce", but what they really mean is "troy ounce"
So, when the bot finds itself in /r/Pmsforsale, we want it to find all mentions of "ounces" and replace them with "troy ounces". This should already be happening, but it is not! There is a bug in the code.
To replace "ounces" with "troy ounces", we must find them by using a thing called Regular Expressions (also known as a, "regex"). Regexes help us find strings that match a certain pattern. For example if we had a regex `a` and applied it to this string: `ababcA` ... it would find all of the lower case a's but ignore the other characters (b, c, and A).
OPTIONAL: You can go through [this tutorial](https://regex101.com/) to learn more about regexes
OK, we are ready to get started. Lets get our development environment set up!
If you run into any problems, try googling for a solution. If that doesn't work, reply to this issue with screenshots of the error and what steps you have already taken to try to solve the problem. We're happy to help :)
1. Open up a terminal window and type `node --version`
2. If it complains you do not have node, download it [here](https://nodejs.org/en/download/).
3. If your Node version is under v6.2.1, you have to update it
4. Fork the main github repository and "clone your fork" (download the bot's code) using git. [See more detailed instructions here](https://guides.github.com/activities/forking/)
5. Run `npm install` to download all of the required dependencies
6. Run `npm test` to run all of the tests. All green? Yay!
7. Open `./src/conversion_helper-test.js` in your favourite text editor
8. Search for the text "should convert oz to troy oz in precious metal sub"
9. Replace `it.skip` with `it.only`. This will tell the program to only run this single test. While you're here, take a look at the test! Read it carefully, can you guess what it does?
10. Run `npm test` again
11. Observe failing test :( Boo, failing tests! Booo! Tests allow us to define inputs and expected outputs to functions, so we can see if a function is doing the correct thing.
12. This test is failing because of a bug in the code. Let's go find that code
13. Go to `./src/conversion_helper.js` and find where we declare `specialSubredditsRegex`
14. This line creates a regex, but all regexes are case-sensitive by default! Notice the difference in capitalization in our test vs. our regular expression?
15. Your task is to make the regex match case-insensitively to make the test pass. Please use Google or any other resource.
OPTIONAL: Can you think of other ways to make the test pass? Post your ideas in your Pull Request description later on (step 18).
16. Once your single test is passing, change the `it.only` to `it` to run all of the tests again.
17. Is everything green? Woohoo, green! Yay!!
18. Please commit the changes with a descriptive git commit message, and then make a pull request back to the main repository :)
OPTIONAL: Don't forget to [give yourself credit](https://github.com/cannawen/metric_units_reddit_bot/blob/master/CONTRIBUTING.md#add-yourself-to-the-contributors-list)! Thank you for contributing to metric_units bot!
19. Wait for your PR to be reviewed by a maintainer, and address any comments they may have
## Step 20: YOUR CHANGE GETS MERGED AND RELEASED!! Party!!! | True | main | 1 |
412,281 | 12,037,643,103 | IssuesEvent | 2020-04-13 22:20:12 | department-of-veterans-affairs/caseflow | https://api.github.com/repos/department-of-veterans-affairs/caseflow | closed | Hearing Details layout updates | Priority: Medium Product: caseflow-hearings Stakeholder: BVA Team: Tango 💃 | ### Description
Update the layout and information hierarchy of the Hearing Details page.
### Designs
Figma: https://www.figma.com/file/V87TZArfdurCGJiEjQ73ES/Virtual-Hearings?node-id=109%3A17309
### Acceptance criteria
- Hearing Details fields are re-arranged into the following sections separated by divider lines:
- section with `VLJ`, `Hearing Coordinator`, `Hearing Room` fields
- section with `Hearing Type` field
- section with Virtual Hearing Details (h3) heading; `VLJ Virtual Hearing Link`, `Veteran Email for Notifications`, and `POA/Representative Email for Notifications` fields; and the Email Notification History accordion/section
- section with `Waive 90 Day Evidence Hold` field
- section with `Notes` field
- The following Transcription Details headings are changed to h3:
- Transcription Problem
- Transcription Request
### Background/context/resources
This work has been broken out from Display more info about sent Virtual Hearings emails (#13370), which was based on usability testing with Hearing Coordinators (#12960) | 1.0 | non_main | 0 |
784,455 | 27,571,770,507 | IssuesEvent | 2023-03-08 09:51:27 | iScsc/iscsc.fr | https://api.github.com/repos/iScsc/iscsc.fr | closed | Blog posts can't be shared by their URL | bug Priority: Medium Severity: Minor | #### Problem
The website has been deployed, and I was trying to visit a blog post from `amtoine` (the first actually) by clicking on an URL
#### Step to reproduce:
- Try to access to https://iscsc.fr/blog/637669b524bc362fff7a2b23
- go to https://iscsc.fr and choose the blog post from `amtoine - 2022-11-17T17:04:53.781Z`
- the URL are the same but accessing with the URL seems to fail | 1.0 | non_main | 0 |
235,686 | 25,959,040,197 | IssuesEvent | 2022-12-18 16:36:58 | snowdensb/caseflow | https://api.github.com/repos/snowdensb/caseflow | opened | CVE-2022-23514 (High) detected in loofah-2.9.0.gem | security vulnerability | ## CVE-2022-23514 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>loofah-2.9.0.gem</b></p></summary>
<p>Loofah is a general library for manipulating and transforming HTML/XML documents and fragments, built on top of Nokogiri.
Loofah excels at HTML sanitization (XSS prevention). It includes some nice HTML sanitizers, which are based on HTML5lib's safelist, so it most likely won't make your codes less secure. (These statements have not been evaluated by Netexperts.)
ActiveRecord extensions for sanitization are available in the [`loofah-activerecord` gem](https://github.com/flavorjones/loofah-activerecord).</p>
<p>Library home page: <a href="https://rubygems.org/gems/loofah-2.9.0.gem">https://rubygems.org/gems/loofah-2.9.0.gem</a></p>
<p>
Dependency Hierarchy:
- rails-5.2.4.5.gem (Root Library)
- actionview-5.2.4.5.gem
- rails-html-sanitizer-1.3.0.gem
- :x: **loofah-2.9.0.gem** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/snowdensb/caseflow/commit/81f8b3f5658022f994993a18a7653667705b7f6e">81f8b3f5658022f994993a18a7653667705b7f6e</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Loofah is a general library for manipulating and transforming HTML/XML documents and fragments, built on top of Nokogiri. Loofah < 2.19.1 contains an inefficient regular expression that is susceptible to excessive backtracking when attempting to sanitize certain SVG attributes. This may lead to a denial of service through CPU resource consumption. This issue is patched in version 2.19.1.
<p>Publish Date: 2022-12-14
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-23514>CVE-2022-23514</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/flavorjones/loofah/security/advisories/GHSA-486f-hjj9-9vhh">https://github.com/flavorjones/loofah/security/advisories/GHSA-486f-hjj9-9vhh</a></p>
<p>Release Date: 2022-12-14</p>
<p>Fix Resolution: loofah - 2.19.1</p>
</p>
</details>
<p></p>
| True | non_main | 0 |
797,516 | 28,147,549,177 | IssuesEvent | 2023-04-02 17:02:22 | Greenstand/treetracker-admin-client | https://api.github.com/repos/Greenstand/treetracker-admin-client | opened | Capture Detail: Show `grower_reference_id` as Grower ID | type: bug good first issue tool: Verify priority size: small tool: Captures | Instead of **Grower Account ID**, we should display **Grower ID** in _Capture Details_, with the value taken from `raw_capture.grower_reference_id` or `capture.grower_reference_id`.
### Current Behaviour
<img width="269" alt="Screenshot 2023-04-02 at 18 00 44" src="https://user-images.githubusercontent.com/5558838/229367603-bc145cc6-caa5-4b99-b3f5-aea3f8c5c1f5.png">
### Desired Behaviour
<img width="269" alt="Screenshot 2023-04-02 at 17 59 42" src="https://user-images.githubusercontent.com/5558838/229367610-00a0f91c-9cb2-4e26-b458-b2bad459b62d.png">
| 1.0 | non_main | 0 |
519 | 3,912,056,992 | IssuesEvent | 2016-04-20 08:58:23 | simplesamlphp/simplesamlphp | https://api.github.com/repos/simplesamlphp/simplesamlphp | opened | Rationalize the configuration | enhancement low maintainability | The current configuration we have is a mess. We are not taking advantage at all of the fact that we are using PHP files, having the configuration as a simple set of keys and values, instead of grouping options that belong together, for example. Besides, the option names have nothing to do with each other, using different formats and notations.
We should put some order in the configuration, grouping things that should go together, and unifying the naming conventions as much as possible.
Additionally, if we change the configuration we should provide an automatic script that could consume an existing configuration with the old options and generate an updated configuration file. | True | Rationalize the configuration - The current configuration we have is a mess. We are not taking advantage at all of the fact that we are using PHP files, having the configuration as a simple set of keys and values, instead of grouping options that belong together, for example. Besides, the option names have nothing to do with each other, using different formats and notations.
We should put some order in the configuration, grouping things that should go together, and unifying the naming conventions as much as possible.
Additionally, if we change the configuration we should provide an automatic script that could consume an existing configuration with the old options and generate an updated configuration file. | main | rationalize the configuration the current configuration we have is a mess we are not taking advantage at all of the fact that we are using php files having the configuration as a simple set of keys and values instead of grouping options that belong together for example besides the option names have nothing to do with each other using different formats and notations we should put some order in the configuration grouping things that should go together and unifying the naming conventions as much as possible additionally if we change the configuration we should provide an automatic script that could consume an existing configuration with the old options and generate an updated configuration file | 1 |
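The migration script suggested in the last paragraph of the issue above could work from a rename table mapping old flat option names to their new grouped locations. A minimal sketch, assuming hypothetical option names (the real SimpleSAMLphp options would have to be enumerated):

```python
# Hypothetical mapping from old flat option names to new grouped paths;
# unknown keys pass through unchanged at the top level.
RENAMES = {
    "session.cookie.lifetime": ("session", "cookie", "lifetime"),
    "logging.level": ("logging", "level"),
}

def migrate(old_config):
    """Regroup a flat key/value configuration into nested sections."""
    new_config = {}
    for key, value in old_config.items():
        path = RENAMES.get(key, (key,))
        node = new_config
        for part in path[:-1]:
            node = node.setdefault(part, {})
        node[path[-1]] = value
    return new_config
```

The actual tool would read and emit PHP config files; the dict transformation here only shows the regrouping step.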
116,830 | 15,020,011,201 | IssuesEvent | 2021-02-01 14:14:04 | mozilla-lockwise/lockwise-ios | https://api.github.com/repos/mozilla-lockwise/lockwise-ios | reopened | Edit entries | archived feature-CUD needs-design | When I have an incorrectly saved credential from Firefox (eg password but no username), I want the ability to refine that entry to be useful for me when I next go to log into that account.
## Requirements
* Should provide the ability to edit from the entry detail view
* Should have ability to cancel the edits without confirmation if no changes made
* Cancel should return the entry to the previous state (no way to save some edits and not others)
* Should have to confirm cancelling when changes are made
* Should display toast to confirm edit is saved (no designs yet)
* Should serve appropriate error messages if the incorrect data is entered (see epic around error states) | 1.0 | Edit entries - When I have an incorrectly saved credential from Firefox (eg password but no username), I want the ability to refine that entry to be useful for me when I next go to log into that account.
## Requirements
* Should provide the ability to edit from the entry detail view
* Should have ability to cancel the edits without confirmation if no changes made
* Cancel should return the entry to the previous state (no way to save some edits and not others)
* Should have to confirm cancelling when changes are made
* Should display toast to confirm edit is saved (no designs yet)
* Should serve appropriate error messages if the incorrect data is entered (see epic around error states) | non_main | edit entries when i have an incorrectly saved credential from firefox eg password but no username i want the ability to refine that entry to be useful for me when i next go to log into that account requirements should provide the ability to edit from the entry detail view should have ability to cancel the edits without confirmation if no changes made cancel should return the entry to the previous state no way to save some edits and not others should have to confirm cancelling when changes are made should display toast to confirm edit is saved no designs yet should serve appropriate error messages if the incorrect data is entered see epic around error states | 0 |
303,261 | 22,961,376,628 | IssuesEvent | 2022-07-19 15:38:23 | JanssenProject/jans | https://api.github.com/repos/JanssenProject/jans | opened | docs: add artifact link | area-documentation | - [x] Add artifact link for janssen to the main readme
- [x] add latest release link to the main readme | 1.0 | docs: add artifact link - - [x] Add artifact link for janssen to the main readme
- [x] add latest release link to the main readme | non_main | docs add artifact link add artifact link for janssen to the main readme add latest release link to the main readme | 0 |
15,943 | 5,195,704,375 | IssuesEvent | 2017-01-23 10:17:53 | SemsTestOrg/combinearchive-web | https://api.github.com/repos/SemsTestOrg/combinearchive-web | closed | [ArchiveContent] Relayout the Archive Content section | code fixed migrated minor task | ## Trac Ticket #14
**component:** code
**owner:** somebody
**reporter:** martinP
**created:** 2014-07-31 08:22:22
**milestone:**
**type:** task
**version:**
**keywords:**
## comment 1
**time:** 2014-07-31 09:57:01
**author:** martin
idea was:
* file content almost 100%
* tree hidden behind a smart-phone-like-button
* one mouseover or touch/click show tree above content
## comment 2
**time:** 2014-08-07 19:57:13
**author:** martin
Updated **cc** to **martin, martinP**
## comment 3
**time:** 2014-08-07 19:57:13
**author:** martin
the sequence of entries should be ordered only lexicographically. Moving bbbb before aaaa should not affect anything (not even the filetree...)
## comment 4
**time:** 2014-08-07 19:58:25
**author:** martin
add icons for certain file types. atm everything looks like a folder...
## comment 5
**time:** 2014-09-25 10:40:02
**author:** mp487 <martin.peters3@uni-rostock.de>
In changeset:"1ed1b073629ea2ff92f72988b34805f2ea0e83fc"]:
```CommitTicketReference repository="" revision="1ed1b073629ea2ff92f72988b34805f2ea0e83fc"
Merge branch 'feature_newdesgin'
[fixes #14]
```
## comment 6
**time:** 2014-09-25 10:40:02
**author:** mp487 <martin.peters3@uni-rostock.de>
Updated **resolution** to **fixed**
## comment 7
**time:** 2014-09-25 10:40:02
**author:** mp487 <martin.peters3@uni-rostock.de>
Updated **status** to **closed**
## comment 8
**time:** 2014-09-25 17:03:19
**author:** mp487 <martin.peters3@uni-rostock.de>
In changeset:"1ed1b073629ea2ff92f72988b34805f2ea0e83fc"]:
```CommitTicketReference repository="" revision="1ed1b073629ea2ff92f72988b34805f2ea0e83fc"
Merge branch 'feature_newdesgin'
[fixes #14]
```
| 1.0 | [ArchiveContent] Relayout the Archive Content section - ## Trac Ticket #14
**component:** code
**owner:** somebody
**reporter:** martinP
**created:** 2014-07-31 08:22:22
**milestone:**
**type:** task
**version:**
**keywords:**
## comment 1
**time:** 2014-07-31 09:57:01
**author:** martin
idea was:
* file content almost 100%
* tree hidden behind a smart-phone-like-button
* one mouseover or touch/click show tree above content
## comment 2
**time:** 2014-08-07 19:57:13
**author:** martin
Updated **cc** to **martin, martinP**
## comment 3
**time:** 2014-08-07 19:57:13
**author:** martin
the sequence of entries should be ordered only lexicographically. Moving bbbb before aaaa should not affect anything (not even the filetree...)
## comment 4
**time:** 2014-08-07 19:58:25
**author:** martin
add icons for certain file types. atm everything looks like a folder...
## comment 5
**time:** 2014-09-25 10:40:02
**author:** mp487 <martin.peters3@uni-rostock.de>
In changeset:"1ed1b073629ea2ff92f72988b34805f2ea0e83fc"]:
```CommitTicketReference repository="" revision="1ed1b073629ea2ff92f72988b34805f2ea0e83fc"
Merge branch 'feature_newdesgin'
[fixes #14]
```
## comment 6
**time:** 2014-09-25 10:40:02
**author:** mp487 <martin.peters3@uni-rostock.de>
Updated **resolution** to **fixed**
## comment 7
**time:** 2014-09-25 10:40:02
**author:** mp487 <martin.peters3@uni-rostock.de>
Updated **status** to **closed**
## comment 8
**time:** 2014-09-25 17:03:19
**author:** mp487 <martin.peters3@uni-rostock.de>
In changeset:"1ed1b073629ea2ff92f72988b34805f2ea0e83fc"]:
```CommitTicketReference repository="" revision="1ed1b073629ea2ff92f72988b34805f2ea0e83fc"
Merge branch 'feature_newdesgin'
[fixes #14]
```
| non_main | relayout the archive content section trac ticket component code owner somebody reporter martinp created milestone type task version keywords comment time author martin idea was file content almost tree hidden behind a smart phone like button one mouseover or touch click show tree above content comment time author martin updated cc to martin martinp comment time author martin the sequence of entries should be only lexicographically moving bbbb before aaaa should not affect anything not even the filetree comment time author martin add icons for certain file types atm everything looks like a folder comment time author in changeset committicketreference repository revision merge branch feature newdesgin comment time author updated resolution to fixed comment time author updated status to closed comment time author in changeset committicketreference repository revision merge branch feature newdesgin | 0 |
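Comment 3 in the Trac ticket above asks for a purely lexicographic entry order that is stable under reordering of the input. A minimal sketch of that invariant (case-insensitive ordering is an assumption, not stated in the ticket):

```python
def ordered_entries(entries):
    """Archive entries in a purely lexicographic (case-insensitive)
    order, so the insertion order of the underlying data is irrelevant."""
    return sorted(entries, key=str.lower)
```

Because the result depends only on the names, any permutation of the input yields the same displayed sequence.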
885 | 4,543,625,453 | IssuesEvent | 2016-09-10 07:23:14 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | docker_container module got "No such image" when using the same image with a different tag | affects_2.1 bug_report cloud docker waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
docker_container
##### ANSIBLE VERSION
```
ansible 2.1.1.0
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### SUMMARY
I always got "Error creating container: 404 Client Error: Not Found (\"No such image: xxx:yyy\")" when using same image name with difference tag
##### STEPS TO REPRODUCE
```
---
- name: Run container
hosts: localhost
connection: local
gather_facts: no
tasks:
- name: run container with first tag
docker_container:
image: alpine:3.3
name: first_container
command: sleep 999
- name: run container with another tag
docker_container:
image: alpine:3.4
name: second_container
command: sleep 999
```
##### EXPECTED RESULTS
It should run without any error.
##### ACTUAL RESULTS
```
$ ansible-playbook --vault-password-file vault_pass tmp.yml -vvvv
Using /home/username/git/ansible/ansible.cfg as config file
[WARNING]: Host file not found: /etc/ansible/hosts
[WARNING]: provided hosts list is empty, only localhost is available
Loaded callback default of type stdout, v2.0
PLAYBOOK: tmp.yml **************************************************************
1 plays in tmp.yml
PLAY [Run container] ***********************************************************
TASK [run container with first tag] ********************************************
task path: /home/username/git/ansible/tmp.yml:7
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: username
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1473089618.0-228211754531364 `" && echo ansible-tmp-1473089618.0-228211754531364="` echo $HOME/.ansible/tmp/ansible-tmp-1473089618.0-228211754531364 `" ) && sleep 0'
<127.0.0.1> PUT /tmp/tmp08uTgz TO /home/username/.ansible/tmp/ansible-tmp-1473089618.0-228211754531364/docker_container
<127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /home/username/.virtualenvs/ansible/bin/python /home/username/.ansible/tmp/ansible-tmp-1473089618.0-228211754531364/docker_container; rm -rf "/home/username/.ansible/tmp/ansible-tmp-1473089618.0-228211754531364/" > /dev/null 2>&1 && sleep 0'
changed: [localhost] => {"ansible_facts": {"ansible_docker_container": {"AppArmorProfile": "", "Args": ["999"], "Config": {"AttachStderr": false, "AttachStdin": false, "AttachStdout": false, "Cmd": ["sleep", "999"], "Domainname": "", "Entrypoint": null, "Env": ["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"], "Hostname": "1d5c227a2979", "Image": "alpine:3.3", "Labels": {}, "OnBuild": null, "OpenStdin": false, "StdinOnce": false, "Tty": false, "User": "", "Volumes": null, "WorkingDir": ""}, "Created": "2016-09-05T15:33:41.708003148Z", "Driver": "overlay2", "ExecIDs": null, "GraphDriver": {"Data": {"LowerDir": "/var/lib/docker/overlay2/3fa24abc4d1c7d7e8a582e90b0b7c07d19deba1c633a6e4414a5ee01fd6dff27-init/diff:/var/lib/docker/overlay2/63ffa0b5002627b003b6ec8ec3f81fedc9d8118ef6dd8289d1ec2f4d07391cd7/diff", "MergedDir": "/var/lib/docker/overlay2/3fa24abc4d1c7d7e8a582e90b0b7c07d19deba1c633a6e4414a5ee01fd6dff27/merged", "UpperDir": "/var/lib/docker/overlay2/3fa24abc4d1c7d7e8a582e90b0b7c07d19deba1c633a6e4414a5ee01fd6dff27/diff", "WorkDir": "/var/lib/docker/overlay2/3fa24abc4d1c7d7e8a582e90b0b7c07d19deba1c633a6e4414a5ee01fd6dff27/work"}, "Name": "overlay2"}, "HostConfig": {"AutoRemove": false, "Binds": [], "BlkioDeviceReadBps": null, "BlkioDeviceReadIOps": null, "BlkioDeviceWriteBps": null, "BlkioDeviceWriteIOps": null, "BlkioWeight": 0, "BlkioWeightDevice": null, "CapAdd": null, "CapDrop": null, "Cgroup": "", "CgroupParent": "", "ConsoleSize": [0, 0], "ContainerIDFile": "", "CpuCount": 0, "CpuPercent": 0, "CpuPeriod": 0, "CpuQuota": 0, "CpuShares": 0, "CpusetCpus": "", "CpusetMems": "", "Devices": null, "DiskQuota": 0, "Dns": null, "DnsOptions": null, "DnsSearch": null, "ExtraHosts": null, "GroupAdd": null, "IOMaximumBandwidth": 0, "IOMaximumIOps": 0, "IpcMode": "", "Isolation": "", "KernelMemory": 0, "Links": null, "LogConfig": {"Config": {}, "Type": "json-file"}, "Memory": 0, "MemoryReservation": 0, "MemorySwap": 0, "MemorySwappiness": -1, 
"NetworkMode": "default", "OomKillDisable": false, "OomScoreAdj": 0, "PidMode": "", "PidsLimit": 0, "PortBindings": null, "Privileged": false, "PublishAllPorts": false, "ReadonlyRootfs": false, "RestartPolicy": {"MaximumRetryCount": 0, "Name": ""}, "Runtime": "runc", "SecurityOpt": null, "ShmSize": 67108864, "UTSMode": "", "Ulimits": null, "UsernsMode": "", "VolumeDriver": "", "VolumesFrom": null}, "HostnamePath": "/var/lib/docker/containers/1d5c227a297954781c4cd363920bb404f8bb2e4cf4f0910eb47ad3ce4105a416/hostname", "HostsPath": "/var/lib/docker/containers/1d5c227a297954781c4cd363920bb404f8bb2e4cf4f0910eb47ad3ce4105a416/hosts", "Id": "1d5c227a297954781c4cd363920bb404f8bb2e4cf4f0910eb47ad3ce4105a416", "Image": "sha256:47cf20d8c26c46fff71be614d9f54997edacfe8d46d51769706e5aba94b16f2b", "LogPath": "/var/lib/docker/containers/1d5c227a297954781c4cd363920bb404f8bb2e4cf4f0910eb47ad3ce4105a416/1d5c227a297954781c4cd363920bb404f8bb2e4cf4f0910eb47ad3ce4105a416-json.log", "MountLabel": "", "Mounts": [], "Name": "/first_container", "NetworkSettings": {"Bridge": "", "EndpointID": "bf4ae339f169c0eaa0a5efdb6347a0de3dd90745ad6a04b1f9eafb95374ce2e3", "Gateway": "172.17.0.1", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "HairpinMode": false, "IPAddress": "172.17.0.11", "IPPrefixLen": 16, "IPv6Gateway": "", "LinkLocalIPv6Address": "", "LinkLocalIPv6PrefixLen": 0, "MacAddress": "02:42:ac:11:00:0b", "Networks": {"bridge": {"Aliases": null, "EndpointID": "bf4ae339f169c0eaa0a5efdb6347a0de3dd90745ad6a04b1f9eafb95374ce2e3", "Gateway": "172.17.0.1", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "IPAMConfig": null, "IPAddress": "172.17.0.11", "IPPrefixLen": 16, "IPv6Gateway": "", "Links": null, "MacAddress": "02:42:ac:11:00:0b", "NetworkID": "db1bc29f464b7995375de785a966e3f6ff8dced498c531f7705ec0e8d54bf0e2"}}, "Ports": {}, "SandboxID": "27999e70d0e1f23b06d01760159c894b74f1fa50f82ef44b55dfa1835ba0e6b5", "SandboxKey": "/var/run/docker/netns/27999e70d0e1", "SecondaryIPAddresses": null, 
"SecondaryIPv6Addresses": null}, "Path": "sleep", "ProcessLabel": "", "ResolvConfPath": "/var/lib/docker/containers/1d5c227a297954781c4cd363920bb404f8bb2e4cf4f0910eb47ad3ce4105a416/resolv.conf", "RestartCount": 0, "State": {"Dead": false, "Error": "", "ExitCode": 0, "FinishedAt": "0001-01-01T00:00:00Z", "OOMKilled": false, "Paused": false, "Pid": 24138, "Restarting": false, "Running": true, "StartedAt": "2016-09-05T15:33:42.016419065Z", "Status": "running"}}}, "changed": true, "invocation": {"module_args": {"api_version": null, "blkio_weight": null, "cacert_path": null, "capabilities": null, "cert_path": null, "cleanup": false, "command": "sleep 999", "cpu_period": null, "cpu_quota": null, "cpu_shares": null, "cpuset_cpus": null, "cpuset_mems": null, "debug": false, "detach": true, "devices": null, "dns_opts": null, "dns_search_domains": null, "dns_servers": null, "docker_host": null, "entrypoint": null, "env": null, "env_file": null, "etc_hosts": null, "exposed_ports": null, "filter_logger": false, "force_kill": false, "groups": null, "hostname": null, "image": "alpine:3.3", "interactive": false, "ipc_mode": null, "keep_volumes": true, "kernel_memory": null, "key_path": null, "kill_signal": null, "labels": null, "links": null, "log_driver": "json-file", "log_options": null, "mac_address": null, "memory": "0", "memory_reservation": null, "memory_swap": null, "memory_swappiness": null, "name": "first_container", "network_mode": null, "networks": null, "oom_killer": null, "paused": false, "pid_mode": null, "privileged": false, "published_ports": null, "pull": false, "purge_networks": null, "read_only": false, "recreate": false, "restart": false, "restart_policy": null, "restart_retries": 0, "security_opts": null, "shm_size": null, "ssl_version": null, "state": "started", "stop_signal": null, "stop_timeout": null, "timeout": null, "tls": null, "tls_hostname": null, "tls_verify": null, "trust_image_content": false, "tty": false, "ulimits": null, "user": null, "uts": 
null, "volume_driver": null, "volumes": null, "volumes_from": null}, "module_name": "docker_container"}}
TASK [run container with another tag] ******************************************
task path: /home/username/git/ansible/tmp.yml:12
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: username
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1473089622.12-49273116108204 `" && echo ansible-tmp-1473089622.12-49273116108204="` echo $HOME/.ansible/tmp/ansible-tmp-1473089622.12-49273116108204 `" ) && sleep 0'
<127.0.0.1> PUT /tmp/tmpbAUuMp TO /home/username/.ansible/tmp/ansible-tmp-1473089622.12-49273116108204/docker_container
<127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /home/username/.virtualenvs/ansible/bin/python /home/username/.ansible/tmp/ansible-tmp-1473089622.12-49273116108204/docker_container; rm -rf "/home/username/.ansible/tmp/ansible-tmp-1473089622.12-49273116108204/" > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_args": {"api_version": null, "blkio_weight": null, "cacert_path": null, "capabilities": null, "cert_path": null, "cleanup": false, "command": "sleep 999", "cpu_period": null, "cpu_quota": null, "cpu_shares": null, "cpuset_cpus": null, "cpuset_mems": null, "debug": false, "detach": true, "devices": null, "dns_opts": null, "dns_search_domains": null, "dns_servers": null, "docker_host": null, "entrypoint": null, "env": null, "env_file": null, "etc_hosts": null, "exposed_ports": null, "filter_logger": false, "force_kill": false, "groups": null, "hostname": null, "image": "alpine:3.4", "interactive": false, "ipc_mode": null, "keep_volumes": true, "kernel_memory": null, "key_path": null, "kill_signal": null, "labels": null, "links": null, "log_driver": "json-file", "log_options": null, "mac_address": null, "memory": "0", "memory_reservation": null, "memory_swap": null, "memory_swappiness": null, "name": "second_container", "network_mode": null, "networks": null, "oom_killer": null, "paused": false, "pid_mode": null, "privileged": false, "published_ports": null, "pull": false, "purge_networks": null, "read_only": false, "recreate": false, "restart": false, "restart_policy": null, "restart_retries": 0, "security_opts": null, "shm_size": null, "ssl_version": null, "state": "started", "stop_signal": null, "stop_timeout": null, "timeout": null, "tls": null, "tls_hostname": null, "tls_verify": null, "trust_image_content": false, "tty": false, "ulimits": null, "user": null, "uts": null, "volume_driver": null, "volumes": null, "volumes_from": null}, "module_name": "docker_container"}, "msg": "Error creating container: 404 Client Error: Not Found (\"No such image: alpine:3.4\")"}
NO MORE HOSTS LEFT *************************************************************
PLAY RECAP *********************************************************************
localhost : ok=1 changed=1 unreachable=0 failed=1
TASK: run container with first tag -------------------------------------- 4.23s
TASK: run container with another tag ------------------------------------ 0.26s
```
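The failure above ("No such image: alpine:3.4" while alpine:3.3 exists locally) is consistent with an image lookup that compares repository names without their tags. A hypothetical sketch of tag-aware matching, not the module's actual code:

```python
def parse_image(name, default_tag="latest"):
    """Split an image reference into (repository, tag).

    Splitting on the last ":" keeps registry ports such as
    "registry:5000/alpine" attached to the repository part.
    """
    repo, sep, tag = name.rpartition(":")
    if not sep or "/" in tag:  # no tag, or the ":" belonged to a registry port
        return name, default_tag
    return repo, tag

def image_matches(requested, available):
    """True only when repository AND tag agree; comparing the
    repository alone is exactly the failure mode reported above."""
    return parse_image(requested) == parse_image(available)
```

With this comparison, a local `alpine:3.3` would not be mistaken for (or block the lookup of) `alpine:3.4`.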
| True | docker_container module got "No such image" when using the same image with a different tag - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
docker_container
##### ANSIBLE VERSION
```
ansible 2.1.1.0
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### SUMMARY
I always got "Error creating container: 404 Client Error: Not Found (\"No such image: xxx:yyy\")" when using same image name with difference tag
##### STEPS TO REPRODUCE
```
---
- name: Run container
hosts: localhost
connection: local
gather_facts: no
tasks:
- name: run container with first tag
docker_container:
image: alpine:3.3
name: first_container
command: sleep 999
- name: run container with another tag
docker_container:
image: alpine:3.4
name: second_container
command: sleep 999
```
##### EXPECTED RESULTS
It should run without any error.
##### ACTUAL RESULTS
```
$ ansible-playbook --vault-password-file vault_pass tmp.yml -vvvv
Using /home/username/git/ansible/ansible.cfg as config file
[WARNING]: Host file not found: /etc/ansible/hosts
[WARNING]: provided hosts list is empty, only localhost is available
Loaded callback default of type stdout, v2.0
PLAYBOOK: tmp.yml **************************************************************
1 plays in tmp.yml
PLAY [Run container] ***********************************************************
TASK [run container with first tag] ********************************************
task path: /home/username/git/ansible/tmp.yml:7
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: username
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1473089618.0-228211754531364 `" && echo ansible-tmp-1473089618.0-228211754531364="` echo $HOME/.ansible/tmp/ansible-tmp-1473089618.0-228211754531364 `" ) && sleep 0'
<127.0.0.1> PUT /tmp/tmp08uTgz TO /home/username/.ansible/tmp/ansible-tmp-1473089618.0-228211754531364/docker_container
<127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /home/username/.virtualenvs/ansible/bin/python /home/username/.ansible/tmp/ansible-tmp-1473089618.0-228211754531364/docker_container; rm -rf "/home/username/.ansible/tmp/ansible-tmp-1473089618.0-228211754531364/" > /dev/null 2>&1 && sleep 0'
changed: [localhost] => {"ansible_facts": {"ansible_docker_container": {"AppArmorProfile": "", "Args": ["999"], "Config": {"AttachStderr": false, "AttachStdin": false, "AttachStdout": false, "Cmd": ["sleep", "999"], "Domainname": "", "Entrypoint": null, "Env": ["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"], "Hostname": "1d5c227a2979", "Image": "alpine:3.3", "Labels": {}, "OnBuild": null, "OpenStdin": false, "StdinOnce": false, "Tty": false, "User": "", "Volumes": null, "WorkingDir": ""}, "Created": "2016-09-05T15:33:41.708003148Z", "Driver": "overlay2", "ExecIDs": null, "GraphDriver": {"Data": {"LowerDir": "/var/lib/docker/overlay2/3fa24abc4d1c7d7e8a582e90b0b7c07d19deba1c633a6e4414a5ee01fd6dff27-init/diff:/var/lib/docker/overlay2/63ffa0b5002627b003b6ec8ec3f81fedc9d8118ef6dd8289d1ec2f4d07391cd7/diff", "MergedDir": "/var/lib/docker/overlay2/3fa24abc4d1c7d7e8a582e90b0b7c07d19deba1c633a6e4414a5ee01fd6dff27/merged", "UpperDir": "/var/lib/docker/overlay2/3fa24abc4d1c7d7e8a582e90b0b7c07d19deba1c633a6e4414a5ee01fd6dff27/diff", "WorkDir": "/var/lib/docker/overlay2/3fa24abc4d1c7d7e8a582e90b0b7c07d19deba1c633a6e4414a5ee01fd6dff27/work"}, "Name": "overlay2"}, "HostConfig": {"AutoRemove": false, "Binds": [], "BlkioDeviceReadBps": null, "BlkioDeviceReadIOps": null, "BlkioDeviceWriteBps": null, "BlkioDeviceWriteIOps": null, "BlkioWeight": 0, "BlkioWeightDevice": null, "CapAdd": null, "CapDrop": null, "Cgroup": "", "CgroupParent": "", "ConsoleSize": [0, 0], "ContainerIDFile": "", "CpuCount": 0, "CpuPercent": 0, "CpuPeriod": 0, "CpuQuota": 0, "CpuShares": 0, "CpusetCpus": "", "CpusetMems": "", "Devices": null, "DiskQuota": 0, "Dns": null, "DnsOptions": null, "DnsSearch": null, "ExtraHosts": null, "GroupAdd": null, "IOMaximumBandwidth": 0, "IOMaximumIOps": 0, "IpcMode": "", "Isolation": "", "KernelMemory": 0, "Links": null, "LogConfig": {"Config": {}, "Type": "json-file"}, "Memory": 0, "MemoryReservation": 0, "MemorySwap": 0, "MemorySwappiness": -1, 
"NetworkMode": "default", "OomKillDisable": false, "OomScoreAdj": 0, "PidMode": "", "PidsLimit": 0, "PortBindings": null, "Privileged": false, "PublishAllPorts": false, "ReadonlyRootfs": false, "RestartPolicy": {"MaximumRetryCount": 0, "Name": ""}, "Runtime": "runc", "SecurityOpt": null, "ShmSize": 67108864, "UTSMode": "", "Ulimits": null, "UsernsMode": "", "VolumeDriver": "", "VolumesFrom": null}, "HostnamePath": "/var/lib/docker/containers/1d5c227a297954781c4cd363920bb404f8bb2e4cf4f0910eb47ad3ce4105a416/hostname", "HostsPath": "/var/lib/docker/containers/1d5c227a297954781c4cd363920bb404f8bb2e4cf4f0910eb47ad3ce4105a416/hosts", "Id": "1d5c227a297954781c4cd363920bb404f8bb2e4cf4f0910eb47ad3ce4105a416", "Image": "sha256:47cf20d8c26c46fff71be614d9f54997edacfe8d46d51769706e5aba94b16f2b", "LogPath": "/var/lib/docker/containers/1d5c227a297954781c4cd363920bb404f8bb2e4cf4f0910eb47ad3ce4105a416/1d5c227a297954781c4cd363920bb404f8bb2e4cf4f0910eb47ad3ce4105a416-json.log", "MountLabel": "", "Mounts": [], "Name": "/first_container", "NetworkSettings": {"Bridge": "", "EndpointID": "bf4ae339f169c0eaa0a5efdb6347a0de3dd90745ad6a04b1f9eafb95374ce2e3", "Gateway": "172.17.0.1", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "HairpinMode": false, "IPAddress": "172.17.0.11", "IPPrefixLen": 16, "IPv6Gateway": "", "LinkLocalIPv6Address": "", "LinkLocalIPv6PrefixLen": 0, "MacAddress": "02:42:ac:11:00:0b", "Networks": {"bridge": {"Aliases": null, "EndpointID": "bf4ae339f169c0eaa0a5efdb6347a0de3dd90745ad6a04b1f9eafb95374ce2e3", "Gateway": "172.17.0.1", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "IPAMConfig": null, "IPAddress": "172.17.0.11", "IPPrefixLen": 16, "IPv6Gateway": "", "Links": null, "MacAddress": "02:42:ac:11:00:0b", "NetworkID": "db1bc29f464b7995375de785a966e3f6ff8dced498c531f7705ec0e8d54bf0e2"}}, "Ports": {}, "SandboxID": "27999e70d0e1f23b06d01760159c894b74f1fa50f82ef44b55dfa1835ba0e6b5", "SandboxKey": "/var/run/docker/netns/27999e70d0e1", "SecondaryIPAddresses": null, 
"SecondaryIPv6Addresses": null}, "Path": "sleep", "ProcessLabel": "", "ResolvConfPath": "/var/lib/docker/containers/1d5c227a297954781c4cd363920bb404f8bb2e4cf4f0910eb47ad3ce4105a416/resolv.conf", "RestartCount": 0, "State": {"Dead": false, "Error": "", "ExitCode": 0, "FinishedAt": "0001-01-01T00:00:00Z", "OOMKilled": false, "Paused": false, "Pid": 24138, "Restarting": false, "Running": true, "StartedAt": "2016-09-05T15:33:42.016419065Z", "Status": "running"}}}, "changed": true, "invocation": {"module_args": {"api_version": null, "blkio_weight": null, "cacert_path": null, "capabilities": null, "cert_path": null, "cleanup": false, "command": "sleep 999", "cpu_period": null, "cpu_quota": null, "cpu_shares": null, "cpuset_cpus": null, "cpuset_mems": null, "debug": false, "detach": true, "devices": null, "dns_opts": null, "dns_search_domains": null, "dns_servers": null, "docker_host": null, "entrypoint": null, "env": null, "env_file": null, "etc_hosts": null, "exposed_ports": null, "filter_logger": false, "force_kill": false, "groups": null, "hostname": null, "image": "alpine:3.3", "interactive": false, "ipc_mode": null, "keep_volumes": true, "kernel_memory": null, "key_path": null, "kill_signal": null, "labels": null, "links": null, "log_driver": "json-file", "log_options": null, "mac_address": null, "memory": "0", "memory_reservation": null, "memory_swap": null, "memory_swappiness": null, "name": "first_container", "network_mode": null, "networks": null, "oom_killer": null, "paused": false, "pid_mode": null, "privileged": false, "published_ports": null, "pull": false, "purge_networks": null, "read_only": false, "recreate": false, "restart": false, "restart_policy": null, "restart_retries": 0, "security_opts": null, "shm_size": null, "ssl_version": null, "state": "started", "stop_signal": null, "stop_timeout": null, "timeout": null, "tls": null, "tls_hostname": null, "tls_verify": null, "trust_image_content": false, "tty": false, "ulimits": null, "user": null, "uts": 
null, "volume_driver": null, "volumes": null, "volumes_from": null}, "module_name": "docker_container"}}
TASK [run container with another tag] ******************************************
task path: /home/username/git/ansible/tmp.yml:12
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: username
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1473089622.12-49273116108204 `" && echo ansible-tmp-1473089622.12-49273116108204="` echo $HOME/.ansible/tmp/ansible-tmp-1473089622.12-49273116108204 `" ) && sleep 0'
<127.0.0.1> PUT /tmp/tmpbAUuMp TO /home/username/.ansible/tmp/ansible-tmp-1473089622.12-49273116108204/docker_container
<127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /home/username/.virtualenvs/ansible/bin/python /home/username/.ansible/tmp/ansible-tmp-1473089622.12-49273116108204/docker_container; rm -rf "/home/username/.ansible/tmp/ansible-tmp-1473089622.12-49273116108204/" > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_args": {"api_version": null, "blkio_weight": null, "cacert_path": null, "capabilities": null, "cert_path": null, "cleanup": false, "command": "sleep 999", "cpu_period": null, "cpu_quota": null, "cpu_shares": null, "cpuset_cpus": null, "cpuset_mems": null, "debug": false, "detach": true, "devices": null, "dns_opts": null, "dns_search_domains": null, "dns_servers": null, "docker_host": null, "entrypoint": null, "env": null, "env_file": null, "etc_hosts": null, "exposed_ports": null, "filter_logger": false, "force_kill": false, "groups": null, "hostname": null, "image": "alpine:3.4", "interactive": false, "ipc_mode": null, "keep_volumes": true, "kernel_memory": null, "key_path": null, "kill_signal": null, "labels": null, "links": null, "log_driver": "json-file", "log_options": null, "mac_address": null, "memory": "0", "memory_reservation": null, "memory_swap": null, "memory_swappiness": null, "name": "second_container", "network_mode": null, "networks": null, "oom_killer": null, "paused": false, "pid_mode": null, "privileged": false, "published_ports": null, "pull": false, "purge_networks": null, "read_only": false, "recreate": false, "restart": false, "restart_policy": null, "restart_retries": 0, "security_opts": null, "shm_size": null, "ssl_version": null, "state": "started", "stop_signal": null, "stop_timeout": null, "timeout": null, "tls": null, "tls_hostname": null, "tls_verify": null, "trust_image_content": false, "tty": false, "ulimits": null, "user": null, "uts": null, "volume_driver": null, "volumes": null, "volumes_from": null}, "module_name": "docker_container"}, "msg": "Error creating container: 404 Client Error: Not Found (\"No such image: alpine:3.4\")"}
NO MORE HOSTS LEFT *************************************************************
PLAY RECAP *********************************************************************
localhost : ok=1 changed=1 unreachable=0 failed=1
TASK: run container with first tag -------------------------------------- 4.23s
TASK: run container with another tag ------------------------------------ 0.26s
```
| main | docker container module got no such image when using same image with difference tag issue type bug report component name docker container ansible version ansible configuration n a os environment n a summary i always got error creating container client error not found no such image xxx yyy when using same image name with difference tag steps to reproduce name run container hosts localhost connection local gather facts no tasks name run container with first tag docker container image alpine name first container command sleep name run container with another tag docker container image alpine name second container command sleep expected results it should can be run without any error actual results ansible playbook vault password file vault pass tmp yml vvvv using home username git ansible ansible cfg as config file host file not found etc ansible hosts provided hosts list is empty only localhost is available loaded callback default of type stdout playbook tmp yml plays in tmp yml play task task path home username git ansible tmp yml establish local connection for user username exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to home username ansible tmp ansible tmp docker container exec bin sh c lang en us utf lc all en us utf lc messages en us utf home username virtualenvs ansible bin python home username ansible tmp ansible tmp docker container rm rf home username ansible tmp ansible tmp dev null sleep changed ansible facts ansible docker container apparmorprofile args config attachstderr false attachstdin false attachstdout false cmd domainname entrypoint null env hostname image alpine labels onbuild null openstdin false stdinonce false tty false user volumes null workingdir created driver execids null graphdriver data lowerdir var lib docker init diff var lib docker diff mergeddir var lib docker merged upperdir var lib docker diff workdir var lib docker work name hostconfig 
autoremove false binds blkiodevicereadbps null blkiodevicereadiops null blkiodevicewritebps null blkiodevicewriteiops null blkioweight blkioweightdevice null capadd null capdrop null cgroup cgroupparent consolesize containeridfile cpucount cpupercent cpuperiod cpuquota cpushares cpusetcpus cpusetmems devices null diskquota dns null dnsoptions null dnssearch null extrahosts null groupadd null iomaximumbandwidth iomaximumiops ipcmode isolation kernelmemory links null logconfig config type json file memory memoryreservation memoryswap memoryswappiness networkmode default oomkilldisable false oomscoreadj pidmode pidslimit portbindings null privileged false publishallports false readonlyrootfs false restartpolicy maximumretrycount name runtime runc securityopt null shmsize utsmode ulimits null usernsmode volumedriver volumesfrom null hostnamepath var lib docker containers hostname hostspath var lib docker containers hosts id image logpath var lib docker containers json log mountlabel mounts name first container networksettings bridge endpointid gateway hairpinmode false ipaddress ipprefixlen macaddress ac networks bridge aliases null endpointid gateway ipamconfig null ipaddress ipprefixlen links null macaddress ac networkid ports sandboxid sandboxkey var run docker netns secondaryipaddresses null null path sleep processlabel resolvconfpath var lib docker containers resolv conf restartcount state dead false error exitcode finishedat oomkilled false paused false pid restarting false running true startedat status running changed true invocation module args api version null blkio weight null cacert path null capabilities null cert path null cleanup false command sleep cpu period null cpu quota null cpu shares null cpuset cpus null cpuset mems null debug false detach true devices null dns opts null dns search domains null dns servers null docker host null entrypoint null env null env file null etc hosts null exposed ports null filter logger false force kill false groups null 
hostname null image alpine interactive false ipc mode null keep volumes true kernel memory null key path null kill signal null labels null links null log driver json file log options null mac address null memory memory reservation null memory swap null memory swappiness null name first container network mode null networks null oom killer null paused false pid mode null privileged false published ports null pull false purge networks null read only false recreate false restart false restart policy null restart retries security opts null shm size null ssl version null state started stop signal null stop timeout null timeout null tls null tls hostname null tls verify null trust image content false tty false ulimits null user null uts null volume driver null volumes null volumes from null module name docker container task task path home username git ansible tmp yml establish local connection for user username exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmpbauump to home username ansible tmp ansible tmp docker container exec bin sh c lang en us utf lc all en us utf lc messages en us utf home username virtualenvs ansible bin python home username ansible tmp ansible tmp docker container rm rf home username ansible tmp ansible tmp dev null sleep fatal failed changed false failed true invocation module args api version null blkio weight null cacert path null capabilities null cert path null cleanup false command sleep cpu period null cpu quota null cpu shares null cpuset cpus null cpuset mems null debug false detach true devices null dns opts null dns search domains null dns servers null docker host null entrypoint null env null env file null etc hosts null exposed ports null filter logger false force kill false groups null hostname null image alpine interactive false ipc mode null keep volumes true kernel memory null key path null kill signal null labels null links null log driver json file 
log options null mac address null memory memory reservation null memory swap null memory swappiness null name second container network mode null networks null oom killer null paused false pid mode null privileged false published ports null pull false purge networks null read only false recreate false restart false restart policy null restart retries security opts null shm size null ssl version null state started stop signal null stop timeout null timeout null tls null tls hostname null tls verify null trust image content false tty false ulimits null user null uts null volume driver null volumes null volumes from null module name docker container msg error creating container client error not found no such image alpine no more hosts left play recap localhost ok changed unreachable failed task run container with first tag task run container with another tag | 1 |
295,407 | 9,086,067,740 | IssuesEvent | 2019-02-18 10:00:18 | conan-io/conan | https://api.github.com/repos/conan-io/conan | opened | get_path refactor | complex: medium priority: medium stage: queue type: engineering | The `get_path` should be split into a `get_file_contents` and `file_list`. The issue is that currently the `get_file` interface is propagated down to the rest clients.
Some ideas:
The refactor idea is, the `rest_client_v1` and `rest_client_v2` should (probably) have a function to get the snapshot (file list of a recipe/package) and other to get the file contents (from a recipe/package), returning both the same data format. Also probably the `remote_manager` could have both methods split too. The `conan_api.get_path()` could obtain the snapshot (from the remote_manager), decide if it is a directory, and get the file calling the remote_manager again?
The same for querying locally. It looks cleaner to get the file list (snapshot) first, decide if the requested path is a dir (with the same code than it will use for the remote part) and obtain the file contents from a different function.
Related with https://github.com/conan-io/conan/pull/4494
Related with https://github.com/conan-io/conan/issues/4132#issuecomment-464658464
| 1.0 | get_path refactor - The `get_path` should be split into a `get_file_contents` and `file_list`. The issue is that currently the `get_file` interface is propagated down to the rest clients.
Some ideas:
The refactor idea is, the `rest_client_v1` and `rest_client_v2` should (probably) have a function to get the snapshot (file list of a recipe/package) and other to get the file contents (from a recipe/package), returning both the same data format. Also probably the `remote_manager` could have both methods split too. The `conan_api.get_path()` could obtain the snapshot (from the remote_manager), decide if it is a directory, and get the file calling the remote_manager again?
The same for querying locally. It looks cleaner to get the file list (snapshot) first, decide if the requested path is a dir (with the same code than it will use for the remote part) and obtain the file contents from a different function.
Related with https://github.com/conan-io/conan/pull/4494
Related with https://github.com/conan-io/conan/issues/4132#issuecomment-464658464
| non_main | get path refactor the get path should be split into a get file contents and file list the issue is that currently the get file interface is propagated down to the rest clients some ideas the refactor idea is the rest client and rest client should probably have a function to get the snapshot file list of a recipe package and other to get the file contents from a recipe package returning both the same data format also probably the remote manager could have both methods split too the conan api get path could obtain the snapshot from the remote manager decide if it is a directory and get the file calling the remote manager again the same for querying locally it looks cleaner to get the file list snapshot first decide if the requested path is a dir with the same code than it will use for the remote part and obtain the file contents from a different function related with related with | 0 |
1,488 | 6,425,193,012 | IssuesEvent | 2017-08-09 14:58:02 | OpenRefine/OpenRefine | https://api.github.com/repos/OpenRefine/OpenRefine | opened | Localization support: use Weblate? | maintainability | It seems that some people are keen to contribute to OpenRefine's localization. It is a bit sad to see PRs like #924, where a lot of work was done but wasn't eventually delivered to the users because the translation is not complete.
have come across Weblate, a nice web interface to help translators localize the interface. It would both decrease the technical requirements to start translating, and make it more of a collaborative process. It is tightly integrated with git, which means translations can be pushed to the git repository by Weblate, or proposed as pull requests.
It seems that it is possible to get free hosted accounts for open source projects. If the community is interested, I can ask them for one and do the initial set up. | True | Localization support: use Weblate? - It seems that some people are keen to contribute to OpenRefine's localization. It is a bit sad to see PRs like #924, where a lot of work was done but wasn't eventually delivered to the users because the translation is not complete.
have come across Weblate, a nice web interface to help translators localize the interface. It would both decrease the technical requirements to start translating, and make it more of a collaborative process. It is tightly integrated with git, which means translations can be pushed to the git repository by Weblate, or proposed as pull requests.
It seems that it is possible to get free hosted accounts for open source projects. If the community is interested, I can ask them for one and do the initial set up. | main | localization support use weblate it seems that some people are keen to contribute to openrefine s localization it is a bit sad to see prs like where a lot of work was done but wasn t eventually delivered to the users because the translation is not complete have come across weblate a nice web interface to help translators localize the interface it would both decrease the technical requirements to start translating and make it more of a collaborative process it is tightly integrated with git which means translations can be pushed to the git repository by weblate or proposed as pull requests it seems that it is possible to get free hosted accounts for open source projects if the community is interested i can ask them for one and do the initial set up | 1 |
5,158 | 26,270,931,237 | IssuesEvent | 2023-01-06 16:55:15 | aws/serverless-application-model | https://api.github.com/repos/aws/serverless-application-model | closed | SAM Api Gateway cache with queryStringParam and PathParam | type/bug maintainer/need-followup | **Description:**
I would like to enable chaching for the API Gateway which distinguish requests based on QueryStringParameters and RequestParameters/PathParams, I was able to enable cache for the ServerlessRestApi but for some reasion doesn't matter what i do it just ignores the params defined. At this point im not even sure if this is an issue or bug, but this would be nice to know/have a feature where i could just simply define my Methods in the global section of a cloudformation template, and would include params(both query and path params) in caching.
I also made a stack overflow question regarding this, for more details please check: https://stackoverflow.com/questions/57907320/aws-enable-caching-with-querrystringparameter-pathparameter-for-sam-api-gateway
Example Yaml template
```
`AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Globals:
Api:
EndpointConfiguration: REGIONAL
CacheClusterEnabled: true
CacheClusterSize: "0.5"
MethodSettings:
- CachingEnabled: true
CacheDataEncrypted: true
CacheTtlInSeconds: 60
HttpMethod: "*"
ResourcePath: "/*"
- ResourcePath: "/~1item~1/~1{itemCode}"
CachingEnabled: true
CacheDataEncrypted: true
CacheTtlInSeconds: 60
HttpMethod: "*"
Resources:
......
GetItem:
Type: 'AWS::Serverless::Function'
Properties:
Handler: GetItem.handler
Runtime: nodejs8.10
Timeout: 20
CodeUri: "codes"
Events:
GetItem:
Type: Api
Properties:
Path: /item/{itemCode}
Method: get
......`
```
**Observed result:**
Caching not enabled for params thus returning incorrect response
**Expected result:**
Enable caching for params and distingush requests based on the params.
| True | SAM Api Gateway cache with queryStringParam and PathParam - **Description:**
I would like to enable chaching for the API Gateway which distinguish requests based on QueryStringParameters and RequestParameters/PathParams, I was able to enable cache for the ServerlessRestApi but for some reasion doesn't matter what i do it just ignores the params defined. At this point im not even sure if this is an issue or bug, but this would be nice to know/have a feature where i could just simply define my Methods in the global section of a cloudformation template, and would include params(both query and path params) in caching.
I also made a stack overflow question regarding this, for more details please check: https://stackoverflow.com/questions/57907320/aws-enable-caching-with-querrystringparameter-pathparameter-for-sam-api-gateway
Example Yaml template
```
`AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Globals:
Api:
EndpointConfiguration: REGIONAL
CacheClusterEnabled: true
CacheClusterSize: "0.5"
MethodSettings:
- CachingEnabled: true
CacheDataEncrypted: true
CacheTtlInSeconds: 60
HttpMethod: "*"
ResourcePath: "/*"
- ResourcePath: "/~1item~1/~1{itemCode}"
CachingEnabled: true
CacheDataEncrypted: true
CacheTtlInSeconds: 60
HttpMethod: "*"
Resources:
......
GetItem:
Type: 'AWS::Serverless::Function'
Properties:
Handler: GetItem.handler
Runtime: nodejs8.10
Timeout: 20
CodeUri: "codes"
Events:
GetItem:
Type: Api
Properties:
Path: /item/{itemCode}
Method: get
......`
```
**Observed result:**
Caching not enabled for params thus returning incorrect response
**Expected result:**
Enable caching for params and distingush requests based on the params.
| main | sam api gateway cache with querystringparam and pathparam description i would like to enable chaching for the api gateway which distinguish requests based on querystringparameters and requestparameters pathparams i was able to enable cache for the serverlessrestapi but for some reasion doesn t matter what i do it just ignores the params defined at this point im not even sure if this is an issue or bug but this would be nice to know have a feature where i could just simply define my methods in the global section of a cloudformation template and would include params both query and path params in caching i also made a stack overflow question regarding this for more details please check example yaml template awstemplateformatversion transform aws serverless globals api endpointconfiguration regional cacheclusterenabled true cacheclustersize methodsettings cachingenabled true cachedataencrypted true cachettlinseconds httpmethod resourcepath resourcepath itemcode cachingenabled true cachedataencrypted true cachettlinseconds httpmethod resources getitem type aws serverless function properties handler getitem handler runtime timeout codeuri codes events getitem type api properties path item itemcode method get observed result caching not enabled for params thus returning incorrect response expected result enable caching for params and distingush requests based on the params | 1 |
179,611 | 30,273,873,926 | IssuesEvent | 2023-07-07 17:42:55 | elementary/code | https://api.github.com/repos/elementary/code | closed | Code assumes smb-share to be read-only | Priority: Medium Status: Incomplete Needs Design | Scratch assumes any gvfs-mounted location to be read-only, while it's not always the case.
ProblemType: Bug
DistroRelease: elementary OS 0.2
Package: scratch-text-editor 2.0~r1201-0+pkg51~localbuild
ProcVersionSignature: Ubuntu 3.2.0-52.78-generic 3.2.48
Uname: Linux 3.2.0-52-generic x86_64
ApportVersion: 2.0.1-0ubuntu17.4+elementary3~precise1
Architecture: amd64
CrashDB: scratch_text_editor
Date: Sun Sep 8 01:03:55 2013
GsettingsChanges:
InstallationMedia: elementary OS 0.2 "Luna" - Daily amd64 (20130601)
MarkForUpload: True
ProcEnviron:
TERM=xterm
PATH=(custom, no user)
LANG=ru_RU.UTF-8
SHELL=/usr/bin/fish
SourcePackage: scratch-text-editor
UpgradeStatus: No upgrade log present (probably fresh install)
Launchpad Details: [#LP1222254](https://bugs.launchpad.net/bugs/1222254) Sergey "Shnatsel" Davidoff - 2013-09-07 21:08:55 +0000
<bountysource-plugin>
---
Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/44841563-scratch-mistakingly-assumes-remote-locations-to-be-read-only?utm_campaign=plugin&utm_content=tracker%2F61917289&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F61917289&utm_medium=issues&utm_source=github).
</bountysource-plugin> | 1.0 | Code assumes smb-share to be read-only - Scratch assumes any gvfs-mounted location to be read-only, while it's not always the case.
ProblemType: Bug
DistroRelease: elementary OS 0.2
Package: scratch-text-editor 2.0~r1201-0+pkg51~localbuild
ProcVersionSignature: Ubuntu 3.2.0-52.78-generic 3.2.48
Uname: Linux 3.2.0-52-generic x86_64
ApportVersion: 2.0.1-0ubuntu17.4+elementary3~precise1
Architecture: amd64
CrashDB: scratch_text_editor
Date: Sun Sep 8 01:03:55 2013
GsettingsChanges:
InstallationMedia: elementary OS 0.2 "Luna" - Daily amd64 (20130601)
MarkForUpload: True
ProcEnviron:
TERM=xterm
PATH=(custom, no user)
LANG=ru_RU.UTF-8
SHELL=/usr/bin/fish
SourcePackage: scratch-text-editor
UpgradeStatus: No upgrade log present (probably fresh install)
Launchpad Details: [#LP1222254](https://bugs.launchpad.net/bugs/1222254) Sergey "Shnatsel" Davidoff - 2013-09-07 21:08:55 +0000
<bountysource-plugin>
---
Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/44841563-scratch-mistakingly-assumes-remote-locations-to-be-read-only?utm_campaign=plugin&utm_content=tracker%2F61917289&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F61917289&utm_medium=issues&utm_source=github).
</bountysource-plugin> | non_main | code assumes smb share to be read only scratch assumes any gvfs mounted location to be read only while it s not always the case problemtype bug distrorelease elementary os package scratch text editor localbuild procversionsignature ubuntu generic uname linux generic apportversion architecture crashdb scratch text editor date sun sep gsettingschanges installationmedia elementary os luna daily markforupload true procenviron term xterm path custom no user lang ru ru utf shell usr bin fish sourcepackage scratch text editor upgradestatus no upgrade log present probably fresh install launchpad details sergey shnatsel davidoff want to back this issue we accept bounties via | 0 |
248,975 | 7,947,900,989 | IssuesEvent | 2018-07-11 05:48:27 | elementary/photos | https://api.github.com/repos/elementary/photos | closed | There are untranslated strings | Priority: Medium Status: In Progress | On Juno Beta. This happens even though translation on Weblate is complete, but otherwise it seems that some of those strings are not on Weblate. There may be more strings than those on screenshots.


| 1.0 | There are untranslated strings - On Juno Beta. This happens even though translation on Weblate is complete, but otherwise it seems that some of those strings are not on Weblate. There may be more strings than those on screenshots.


| non_main | there are untranslated strings on juno beta this happens even though translation on weblate is complete but otherwise it seems that some of those strings are not on weblate there may be more strings than those on screenshots | 0 |
637,460 | 20,648,595,022 | IssuesEvent | 2022-03-09 00:03:53 | Laravel-Backpack/CRUD | https://api.github.com/repos/Laravel-Backpack/CRUD | closed | [v5 Bug] loadOnce directive is not rendered on edit form after update to v5 | Bug URGENT triage Priority: MUST | # Bug report
### What I did
Updated from v4.1 to v5 following upgrade guide at:
https://backpackforlaravel.com/docs/5.x/upgrade-guide.
### What happened
Most of the things went well after update but it seems like new @loadOnce directive is not rendered for form fields on edit form. On top of the page are css and on the bootom js stuff. Rest of the application works fine.

### Is it a bug in the latest version of Backpack?
After I run ```composer update backpack/crud``` the bug... is it still there?
Yes
### Backpack, Laravel, PHP, DB version
When I run ```php artisan backpack:version``` the output is:
### PHP VERSION:
PHP 8.1.3 (cli) (built: Mar 4 2022 17:38:46) (NTS)
Copyright (c) The PHP Group
Zend Engine v4.1.3, Copyright (c) Zend Technologies
with Zend OPcache v8.1.3, Copyright (c), by Zend Technologies
### LARAVEL VERSION:
v9.3.1@c1c4404511b83fbf90ccbcdf864d4a85537f35e4
### BACKPACK VERSION:
5.0.9@d995de2026b6fcb1b13b469f371c929c29d1c1cc | 1.0 | [v5 Bug] loadOnce directive is not rendered on edit form after update to v5 - # Bug report
### What I did
Updated from v4.1 to v5 following upgrade guide at:
https://backpackforlaravel.com/docs/5.x/upgrade-guide.
### What happened
Most of the things went well after update but it seems like new @loadOnce directive is not rendered for form fields on edit form. On top of the page are css and on the bootom js stuff. Rest of the application works fine.

### Is it a bug in the latest version of Backpack?
After I run ```composer update backpack/crud``` the bug... is it still there?
Yes
### Backpack, Laravel, PHP, DB version
When I run ```php artisan backpack:version``` the output is:
### PHP VERSION:
PHP 8.1.3 (cli) (built: Mar 4 2022 17:38:46) (NTS)
Copyright (c) The PHP Group
Zend Engine v4.1.3, Copyright (c) Zend Technologies
with Zend OPcache v8.1.3, Copyright (c), by Zend Technologies
### LARAVEL VERSION:
v9.3.1@c1c4404511b83fbf90ccbcdf864d4a85537f35e4
### BACKPACK VERSION:
5.0.9@d995de2026b6fcb1b13b469f371c929c29d1c1cc | non_main | loadonce directive is not rendered on edit form after update to bug report what i did updated from to following upgrade guide at what happened most of the things went well after update but it seems like new loadonce directive is not rendered for form fields on edit form on top of the page are css and on the bootom js stuff rest of the application works fine is it a bug in the latest version of backpack after i run composer update backpack crud the bug is it still there yes backpack laravel php db version when i run php artisan backpack version the output is php version php cli built mar nts copyright c the php group zend engine copyright c zend technologies with zend opcache copyright c by zend technologies laravel version backpack version | 0 |
3,687 | 15,057,671,911 | IssuesEvent | 2021-02-03 22:04:01 | IITIDIDX597/sp_2021_team2 | https://api.github.com/repos/IITIDIDX597/sp_2021_team2 | opened | Efficient stock management | Maintaining Inventory Story | As a manager I want to know when the kitchen items are out of stock, So that I can reduce chance of over or under ordering of stock. | True | Efficient stock management - As a manager I want to know when the kitchen items are out of stock, So that I can reduce chance of over or under ordering of stock. | main | efficient stock management as a manager i want to know when the kitchen items are out of stock so that i can reduce chance of over or under ordering of stock | 1 |
3,331 | 12,942,529,528 | IssuesEvent | 2020-07-18 02:33:46 | short-d/short | https://api.github.com/repos/short-d/short | closed | [Refactor] Toggle should use CSS module | maintainability | **What is frustrating you?**
`Toggle` component does not make use of CSS modules.
**Your solution**
`Toggle` should have a corresponding CSS module.
| True | [Refactor] Toggle should use CSS module - **What is frustrating you?**
`Toggle` component does not make use of CSS modules.
**Your solution**
`Toggle` should have a corresponding CSS module.
| main | toggle should use css module what is frustrating you toggle component does not make use of css modules your solution toggle should have a corresponding css module | 1 |
58,031 | 7,113,261,227 | IssuesEvent | 2018-01-17 19:49:42 | aspnet/Mvc | https://api.github.com/repos/aspnet/Mvc | closed | FileVersionProvider doesn't work with escaped urls | by design investigate | I've noticed that not all of my images are receiving cache breakers when using `asp-append-version="true"` and after looking into it more I've found it's only when the path is escaped.
This will receive a cache breaker
```html
<img asp-append-version="true" src="/content/images/some folder/file.jpg">
```
This will not receive a cache breaker
```html
<img asp-append-version="true" src="/content/images/some%20folder/file.jpg">
```
I'm using the `FileVersionProvider` in my own tag helpers to apply cache breakers to a few other tags/attributes (meta tags & `srcset`) and I'm seeing the issue there as well. In those instances if I unescape the path before passing it into `FileVersionProvider.AddFileVersionToPath()` then a cache breaker is applied to it.
Is this behavior expected, or should escaped paths be unescaped before looking them up on disk?
The reason I'm escaping these paths is I've run into issues in older versions of iOS Safari that sometimes don't load the resource if it has spaces in it, and it makes my `srcset` parsing much simpler. For now this isn't a big deal since the files affected aren't changing, but if they do the only way to make sure they all work is to rename the folders so they don't have a space in them. | 1.0 | FileVersionProvider doesn't work with escaped urls - I've noticed that not all of my images are receiving cache breakers when using `asp-append-version="true"` and after looking into it more I've found it's only when the path is escaped.
This will receive a cache breaker
```html
<img asp-append-version="true" src="/content/images/some folder/file.jpg">
```
This will not receive a cache breaker
```html
<img asp-append-version="true" src="/content/images/some%20folder/file.jpg">
```
I'm using the `FileVersionProvider` in my own tag helpers to apply cache breakers to a few other tags/attributes (meta tags & `srcset`) and I'm seeing the issue there as well. In those instances if I unescape the path before passing it into `FileVersionProvider.AddFileVersionToPath()` then a cache breaker is applied to it.
Is this behavior expected, or should escaped paths be unescaped before looking them up on disk?
The reason I'm escaping these paths is I've run into issues in older versions of iOS Safari that sometimes don't load the resource if it has spaces in it, and it makes my `srcset` parsing much simpler. For now this isn't a big deal since the files affected aren't changing, but if they do the only way to make sure they all work is to rename the folders so they don't have a space in them. | non_main | fileversionprovider doesn t work with escaped urls i ve noticed that not all of my images are receiving cache breakers when using asp append version true and after looking into it more i ve found it s only when the path is escaped this will receive a cache breaker html this will not receive a cache breaker html i m using the fileversionprovider in my own tag helpers to apply cache breakers to a few other tags attributes meta tags srcset and i m seeing the issue there as well in those instances if i unescape the path before passing it into fileversionprovider addfileversiontopath then a cache breaker is applied to it is this behavior expected or should escaped paths be unescaped before looking them up on disk the reason i m escaping these paths is i ve run into issues in older versions of ios safari that sometimes don t load the resource if it has spaces in it and it makes my srcset parsing much simpler for now this isn t a big deal since the files affected aren t changing but if they do the only way to make sure they all work is to rename the folders so they don t have a space in them | 0 |
4,238 | 20,999,654,346 | IssuesEvent | 2022-03-29 16:13:25 | jxk20/nlb_goodreads_searcher | https://api.github.com/repos/jxk20/nlb_goodreads_searcher | closed | Set up github CI/CD pipeline | maintainability | - [x] Do tests for `client`
- [x] Check for code coverage
- [ ] Do tests for `server` | True | Set up github CI/CD pipeline - - [x] Do tests for `client`
- [x] Check for code coverage
- [ ] Do tests for `server` | main | set up github ci cd pipeline do tests for client check for code coverage do tests for server | 1 |
139,428 | 18,852,043,766 | IssuesEvent | 2021-11-11 22:21:44 | DemoEnv/Java-Demo | https://api.github.com/repos/DemoEnv/Java-Demo | opened | CVE-2013-4002 (Medium) detected in xercesImpl-2.8.0.jar | security vulnerability | ## CVE-2013-4002 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xercesImpl-2.8.0.jar</b></p></summary>
<p>Xerces2 is the next generation of high performance, fully compliant XML parsers in the
Apache Xerces family. This new version of Xerces introduces the Xerces Native Interface (XNI),
a complete framework for building parser components and configurations that is extremely
modular and easy to program.</p>
<p>Library home page: <a href="http://xerces.apache.org/xerces2-j">http://xerces.apache.org/xerces2-j</a></p>
<p>Path to dependency file: Java-Demo/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/xerces/xercesImpl/2.8.0/xercesImpl-2.8.0.jar</p>
<p>
Dependency Hierarchy:
- esapi-2.1.0.1.jar (Root Library)
- xom-1.2.5.jar
- :x: **xercesImpl-2.8.0.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/DemoEnv/Java-Demo/commit/43308dc67d60bc98113872a647b47a4971a2ff2a">43308dc67d60bc98113872a647b47a4971a2ff2a</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
XMLscanner.java in Apache Xerces2 Java Parser before 2.12.0, as used in the Java Runtime Environment (JRE) in IBM Java 5.0 before 5.0 SR16-FP3, 6 before 6 SR14, 6.0.1 before 6.0.1 SR6, and 7 before 7 SR5 as well as Oracle Java SE 7u40 and earlier, Java SE 6u60 and earlier, Java SE 5.0u51 and earlier, JRockit R28.2.8 and earlier, JRockit R27.7.6 and earlier, Java SE Embedded 7u40 and earlier, and possibly other products allows remote attackers to cause a denial of service via vectors related to XML attribute names.
<p>Publish Date: 2013-07-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2013-4002>CVE-2013-4002</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-4002">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-4002</a></p>
<p>Release Date: 2013-07-23</p>
<p>Fix Resolution: xerces:xercesImpl:Xerces-J_2_12_0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"xerces","packageName":"xercesImpl","packageVersion":"2.8.0","packageFilePaths":["/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.owasp.esapi:esapi:2.1.0.1;xom:xom:1.2.5;xerces:xercesImpl:2.8.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"xerces:xercesImpl:Xerces-J_2_12_0"}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2013-4002","vulnerabilityDetails":"XMLscanner.java in Apache Xerces2 Java Parser before 2.12.0, as used in the Java Runtime Environment (JRE) in IBM Java 5.0 before 5.0 SR16-FP3, 6 before 6 SR14, 6.0.1 before 6.0.1 SR6, and 7 before 7 SR5 as well as Oracle Java SE 7u40 and earlier, Java SE 6u60 and earlier, Java SE 5.0u51 and earlier, JRockit R28.2.8 and earlier, JRockit R27.7.6 and earlier, Java SE Embedded 7u40 and earlier, and possibly other products allows remote attackers to cause a denial of service via vectors related to XML attribute names.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2013-4002","cvss3Severity":"medium","cvss3Score":"5.9","cvss3Metrics":{"A":"High","AC":"High","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | True | CVE-2013-4002 (Medium) detected in xercesImpl-2.8.0.jar - ## CVE-2013-4002 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xercesImpl-2.8.0.jar</b></p></summary>
<p>Xerces2 is the next generation of high performance, fully compliant XML parsers in the
Apache Xerces family. This new version of Xerces introduces the Xerces Native Interface (XNI),
a complete framework for building parser components and configurations that is extremely
modular and easy to program.</p>
<p>Library home page: <a href="http://xerces.apache.org/xerces2-j">http://xerces.apache.org/xerces2-j</a></p>
<p>Path to dependency file: Java-Demo/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/xerces/xercesImpl/2.8.0/xercesImpl-2.8.0.jar</p>
<p>
Dependency Hierarchy:
- esapi-2.1.0.1.jar (Root Library)
- xom-1.2.5.jar
- :x: **xercesImpl-2.8.0.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/DemoEnv/Java-Demo/commit/43308dc67d60bc98113872a647b47a4971a2ff2a">43308dc67d60bc98113872a647b47a4971a2ff2a</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
XMLscanner.java in Apache Xerces2 Java Parser before 2.12.0, as used in the Java Runtime Environment (JRE) in IBM Java 5.0 before 5.0 SR16-FP3, 6 before 6 SR14, 6.0.1 before 6.0.1 SR6, and 7 before 7 SR5 as well as Oracle Java SE 7u40 and earlier, Java SE 6u60 and earlier, Java SE 5.0u51 and earlier, JRockit R28.2.8 and earlier, JRockit R27.7.6 and earlier, Java SE Embedded 7u40 and earlier, and possibly other products allows remote attackers to cause a denial of service via vectors related to XML attribute names.
<p>Publish Date: 2013-07-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2013-4002>CVE-2013-4002</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-4002">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-4002</a></p>
<p>Release Date: 2013-07-23</p>
<p>Fix Resolution: xerces:xercesImpl:Xerces-J_2_12_0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"xerces","packageName":"xercesImpl","packageVersion":"2.8.0","packageFilePaths":["/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.owasp.esapi:esapi:2.1.0.1;xom:xom:1.2.5;xerces:xercesImpl:2.8.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"xerces:xercesImpl:Xerces-J_2_12_0"}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2013-4002","vulnerabilityDetails":"XMLscanner.java in Apache Xerces2 Java Parser before 2.12.0, as used in the Java Runtime Environment (JRE) in IBM Java 5.0 before 5.0 SR16-FP3, 6 before 6 SR14, 6.0.1 before 6.0.1 SR6, and 7 before 7 SR5 as well as Oracle Java SE 7u40 and earlier, Java SE 6u60 and earlier, Java SE 5.0u51 and earlier, JRockit R28.2.8 and earlier, JRockit R27.7.6 and earlier, Java SE Embedded 7u40 and earlier, and possibly other products allows remote attackers to cause a denial of service via vectors related to XML attribute names.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2013-4002","cvss3Severity":"medium","cvss3Score":"5.9","cvss3Metrics":{"A":"High","AC":"High","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | non_main | cve medium detected in xercesimpl jar cve medium severity vulnerability vulnerable library xercesimpl jar is the next generation of high performance fully compliant xml parsers in the apache xerces family this new version of xerces introduces the xerces native interface xni a complete framework for building parser components and configurations that is extremely modular and easy to program library home page a href path to dependency file java demo pom xml path to vulnerable library home wss scanner repository xerces xercesimpl xercesimpl jar dependency hierarchy esapi jar root library xom jar x xercesimpl jar vulnerable library found in head 
commit a href found in base branch main vulnerability details xmlscanner java in apache java parser before as used in the java runtime environment jre in ibm java before before before and before as well as oracle java se and earlier java se and earlier java se and earlier jrockit and earlier jrockit and earlier java se embedded and earlier and possibly other products allows remote attackers to cause a denial of service via vectors related to xml attribute names publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution xerces xercesimpl xerces j isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree org owasp esapi esapi xom xom xerces xercesimpl isminimumfixversionavailable true minimumfixversion xerces xercesimpl xerces j basebranches vulnerabilityidentifier cve vulnerabilitydetails xmlscanner java in apache java parser before as used in the java runtime environment jre in ibm java before before before and before as well as oracle java se and earlier java se and earlier java se and earlier jrockit and earlier jrockit and earlier java se embedded and earlier and possibly other products allows remote attackers to cause a denial of service via vectors related to xml attribute names vulnerabilityurl | 0 |
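The suggested fix in the report above is an upgrade of the vulnerable transitive dependency. Since `xercesImpl` arrives through `esapi → xom` rather than as a direct dependency, one way to force the patched version in the affected `pom.xml` is a `dependencyManagement` override. This is a sketch only — the coordinates come from the report's fix resolution (`Xerces-J_2_12_0`, i.e. version 2.12.0) and have not been tested against this project:

```xml
<dependencyManagement>
  <dependencies>
    <!-- Force the patched Xerces across the transitive path
         esapi -> xom -> xercesImpl reported above. -->
    <dependency>
      <groupId>xerces</groupId>
      <artifactId>xercesImpl</artifactId>
      <version>2.12.0</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```

Maven's dependency mediation will then resolve `xercesImpl` to 2.12.0 everywhere it appears in the tree, without editing `esapi` or `xom` themselves.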
394,038 | 11,628,569,129 | IssuesEvent | 2020-02-27 18:35:49 | holochain/docs-pages | https://api.github.com/repos/holochain/docs-pages | reopened | Suggesting Edit: Building Holochain Apps - Bridging | bug priority | Page URL: http://developer.holochain.org/docs/guide/bridging/
`that allows a synchronous bidirectional transfer` should be asynchronous
| 1.0 | Suggesting Edit: Building Holochain Apps - Bridging - Page URL: http://developer.holochain.org/docs/guide/bridging/
`that allows a synchronous bidirectional transfer` should be asynchronous
| non_main | suggesting edit building holochain apps bridging page url that allows a synchronous bidirectional transfer should be asynchronous | 0 |
4,231 | 20,969,330,458 | IssuesEvent | 2022-03-28 09:51:35 | Lissy93/dashy | https://api.github.com/repos/Lissy93/dashy | closed | [BUG] Glances Widget with Basic Auth results in "Request failed with status code 401" | 🐛 Bug 👤 Awaiting Maintainer Response | ### Environment
Self-Hosted (Docker)
### Version
2.0.4
### Describe the problem
The Glances Widgets fail to fetch data due to a 401 error. I've checked with both my browser and postman and I am 100% sure the credentials are correct. Both run on Docker. I had a few issues with cors before getting to this point, so it may be related to that, but I'm unsure. I've attached the Dashy config and console log, as well as the traefik configs for both, to the additional info. Apologies if this is a simple misconfiguration and not a bug.
### Additional info
--- Dashy Console Log ---
CoolConsole.js:18 Stack Trace
Error: Request failed with status code 401
i @ CoolConsole.js:18
l @ ErrorHandler.js:27
error @ WidgetMixin.js:75
(anonymous) @ WidgetMixin.js:122
Promise.catch (async)
(anonymous) @ WidgetMixin.js:121
makeRequest @ WidgetMixin.js:113
fetchData @ GlancesMixin.js:27
update @ WidgetMixin.js:67
(anonymous) @ WidgetMixin.js:71
createError.js:16 Uncaught (in promise) Error: Request failed with status code 401
at e.exports (createError.js:16:15)
at e.exports (settle.js:17:12)
at XMLHttpRequest.x (xhr.js:66:7)
GET https://glances.redacted.tld/api/3/cpu 401
(anonymous) @ xhr.js:210
e.exports @ xhr.js:15
e.exports @ dispatchRequest.js:58
h.request @ Axios.js:112
(anonymous) @ bind.js:9
(anonymous) @ WidgetMixin.js:114
makeRequest @ WidgetMixin.js:113
fetchData @ GlancesMixin.js:27
update @ WidgetMixin.js:67
(anonymous) @ WidgetMixin.js:71
--- Dashy Widget Config ---
widgets:
- type: gl-current-cpu
options:
hostname: https://glances.redacted.tld
username: glances
password: <password>
--- Dashy Traefik Config ---
- traefik.enable=true
- traefik.docker.network=link
- traefik.http.routers.dashy.rule=Host(`redacted.tld`)
- traefik.http.routers.dashy.entrypoints=websecure
- traefik.http.routers.dashy.tls=true
- traefik.http.routers.dashy.tls.certresolver=letsencrypt
- traefik.http.routers.dashy.middlewares=hsts@file,authelia@docker
--- Glances Traefik Config ---
- traefik.enable=true
- traefik.docker.network=link
- traefik.http.routers.glances.rule=Host(`glances.redacted.tld`)
- traefik.http.routers.glances.entrypoints=websecure
- traefik.http.routers.glances.tls=true
- traefik.http.routers.glances.tls.certresolver=letsencrypt
- traefik.http.middlewares.corsglances.headers.accesscontrolallowmethods=GET,OPTIONS,PUT
- traefik.http.middlewares.corsglances.headers.accesscontrolalloworiginlist=https://redacted.tld
- traefik.http.middlewares.corsglances.headers.accesscontrolallowheaders=authorization,headers
- traefik.http.middlewares.corsglances.headers.accesscontrolmaxage=100
- traefik.http.middlewares.corsglances.headers.accesscontrolallowcredentials=true
- traefik.http.middlewares.corsglances.headers.addvaryheader=true
- traefik.http.routers.glances.middlewares=corsglances,hsts@file
- traefik.http.services.glances.loadbalancer.server.port=61208
- traefik.http.routers.glances.service=glances
### Please tick the boxes
- [X] You are using a [supported](https://github.com/Lissy93/dashy/blob/master/.github/SECURITY.md#supported-versions) version of Dashy (check the first two digits of the version number)
- [X] You've checked that this [issue hasn't already been raised](https://github.com/Lissy93/dashy/issues?q=is%3Aissue)
- [X] You've checked the [docs](https://github.com/Lissy93/dashy/tree/master/docs#readme) and [troubleshooting](https://github.com/Lissy93/dashy/blob/master/docs/troubleshooting.md#troubleshooting) guide
- [X] You agree to the [code of conduct](https://github.com/Lissy93/dashy/blob/master/.github/CODE_OF_CONDUCT.md#contributor-covenant-code-of-conduct) | True | [BUG] Glances Widget with Basic Auth results in "Request failed with status code 401" - ### Environment
Self-Hosted (Docker)
### Version
2.0.4
### Describe the problem
The Glances Widgets fail to fetch data due to a 401 error. I've checked with both my browser and postman and I am 100% sure the credentials are correct. Both run on Docker. I had a few issues with cors before getting to this point, so it may be related to that, but I'm unsure. I've attached the Dashy config and console log, as well as the traefik configs for both, to the additional info. Apologies if this is a simple misconfiguration and not a bug.
### Additional info
--- Dashy Console Log ---
CoolConsole.js:18 Stack Trace
Error: Request failed with status code 401
i @ CoolConsole.js:18
l @ ErrorHandler.js:27
error @ WidgetMixin.js:75
(anonymous) @ WidgetMixin.js:122
Promise.catch (async)
(anonymous) @ WidgetMixin.js:121
makeRequest @ WidgetMixin.js:113
fetchData @ GlancesMixin.js:27
update @ WidgetMixin.js:67
(anonymous) @ WidgetMixin.js:71
createError.js:16 Uncaught (in promise) Error: Request failed with status code 401
at e.exports (createError.js:16:15)
at e.exports (settle.js:17:12)
at XMLHttpRequest.x (xhr.js:66:7)
GET https://glances.redacted.tld/api/3/cpu 401
(anonymous) @ xhr.js:210
e.exports @ xhr.js:15
e.exports @ dispatchRequest.js:58
h.request @ Axios.js:112
(anonymous) @ bind.js:9
(anonymous) @ WidgetMixin.js:114
makeRequest @ WidgetMixin.js:113
fetchData @ GlancesMixin.js:27
update @ WidgetMixin.js:67
(anonymous) @ WidgetMixin.js:71
--- Dashy Widget Config ---
widgets:
- type: gl-current-cpu
options:
hostname: https://glances.redacted.tld
username: glances
password: <password>
--- Dashy Traefik Config ---
- traefik.enable=true
- traefik.docker.network=link
- traefik.http.routers.dashy.rule=Host(`redacted.tld`)
- traefik.http.routers.dashy.entrypoints=websecure
- traefik.http.routers.dashy.tls=true
- traefik.http.routers.dashy.tls.certresolver=letsencrypt
- traefik.http.routers.dashy.middlewares=hsts@file,authelia@docker
--- Glances Traefik Config ---
- traefik.enable=true
- traefik.docker.network=link
- traefik.http.routers.glances.rule=Host(`glances.redacted.tld`)
- traefik.http.routers.glances.entrypoints=websecure
- traefik.http.routers.glances.tls=true
- traefik.http.routers.glances.tls.certresolver=letsencrypt
- traefik.http.middlewares.corsglances.headers.accesscontrolallowmethods=GET,OPTIONS,PUT
- traefik.http.middlewares.corsglances.headers.accesscontrolalloworiginlist=https://redacted.tld
- traefik.http.middlewares.corsglances.headers.accesscontrolallowheaders=authorization,headers
- traefik.http.middlewares.corsglances.headers.accesscontrolmaxage=100
- traefik.http.middlewares.corsglances.headers.accesscontrolallowcredentials=true
- traefik.http.middlewares.corsglances.headers.addvaryheader=true
- traefik.http.routers.glances.middlewares=corsglances,hsts@file
- traefik.http.services.glances.loadbalancer.server.port=61208
- traefik.http.routers.glances.service=glances
### Please tick the boxes
- [X] You are using a [supported](https://github.com/Lissy93/dashy/blob/master/.github/SECURITY.md#supported-versions) version of Dashy (check the first two digits of the version number)
- [X] You've checked that this [issue hasn't already been raised](https://github.com/Lissy93/dashy/issues?q=is%3Aissue)
- [X] You've checked the [docs](https://github.com/Lissy93/dashy/tree/master/docs#readme) and [troubleshooting](https://github.com/Lissy93/dashy/blob/master/docs/troubleshooting.md#troubleshooting) guide
- [X] You agree to the [code of conduct](https://github.com/Lissy93/dashy/blob/master/.github/CODE_OF_CONDUCT.md#contributor-covenant-code-of-conduct) | main | glances widget with basic auth results in request failed with status code environment self hosted docker version describe the problem the glances widgets fail to fetch data due to a error i ve checked with both my browser and postman and i am sure the credentials are correct both run on docker i had a few issues with cors before getting to this point so it maybe be related to that but i m unsure i ve attached the dashy config and console log as well the traefik configs for both to the additional info apologizes if this is a simple misconfiguration and not a bug additional info dashy console log coolconsole js stack trace error request failed with status code i coolconsole js l errorhandler js error widgetmixin js anonymous widgetmixin js promise catch async anonymous widgetmixin js makerequest widgetmixin js fetchdata glancesmixin js update widgetmixin js anonymous widgetmixin js createerror js uncaught in promise error request failed with status code at e exports createerror js at e exports settle js at xmlhttprequest x xhr js get anonymous xhr js e exports xhr js e exports dispatchrequest js h request axios js anonymous bind js anonymous widgetmixin js makerequest widgetmixin js fetchdata glancesmixin js update widgetmixin js anonymous widgetmixin js dashy widget config widgets type gl current cpu options hostname username glances password dashy traefik config traefik enable true traefik docker network link traefik http routers dashy rule host redacted tld traefik http routers dashy entrypoints websecure traefik http routers dashy tls true traefik http routers dashy tls certresolver letsencrypt traefik http routers dashy middlewares hsts file authelia docker glances traefik config traefik enable true traefik docker network link traefik http routers glances rule host glances redacted tld traefik http 
routers glances entrypoints websecure traefik http routers glances tls true traefik http routers glances tls certresolver letsencrypt traefik http middlewares corsglances headers accesscontrolallowmethods get options put traefik http middlewares corsglances headers accesscontrolalloworiginlist traefik http middlewares corsglances headers accesscontrolallowheaders authorization headers traefik http middlewares corsglances headers accesscontrolmaxage traefik http middlewares corsglances headers accesscontrolallowcredentials true traefik http middlewares corsglances headers addvaryheader true traefik http routers glances middlewares corsglances hsts file traefik http services glances loadbalancer server port traefik http routers glances service glances please tick the boxes you are using a version of dashy check the first two digits of the version number you ve checked that this you ve checked the and guide you agree to the | 1 |
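For a 401 like the one reported above, it can help to rule the client out by reproducing the exact `Authorization` header a Basic Auth client should send to the same endpoint the widget hits (`/api/3/cpu`, per the console log). A minimal stdlib sketch — the hostname and credentials are the report's placeholders, and `basic_auth_header` / `fetch_cpu` are hypothetical helper names, not Dashy or Glances APIs:

```python
import base64
import urllib.request

def basic_auth_header(username: str, password: str) -> dict:
    """Build the HTTP Basic Authorization header a client sends."""
    token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
    return {"Authorization": f"Basic {token}"}

def fetch_cpu(hostname: str, username: str, password: str) -> bytes:
    """Hit the same Glances endpoint the widget uses, with explicit auth."""
    req = urllib.request.Request(
        f"{hostname}/api/3/cpu",
        headers=basic_auth_header(username, password),
    )
    # urlopen raises urllib.error.HTTPError on a 401 response
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

If this request also returns 401 from the Dashy host's network, the problem sits with the proxy/auth chain rather than the widget; if it succeeds, the header the widget constructs (or a CORS preflight stripping it) is the next suspect.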
185,848 | 21,867,534,814 | IssuesEvent | 2022-05-19 01:04:54 | yael-lindman/jenkins | https://api.github.com/repos/yael-lindman/jenkins | closed | CVE-2016-3092 (High) detected in commons-fileupload-1.3.1-jenkins-2.jar - autoclosed | security vulnerability | ## CVE-2016-3092 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-fileupload-1.3.1-jenkins-2.jar</b></p></summary>
<p>The Apache Commons FileUpload component provides a simple yet flexible means of adding support for multipart
file upload functionality to servlets and web applications.</p>
<p>Path to dependency file: /tmp/ws-scm/jenkins/core/pom.xml</p>
<p>Path to vulnerable library: canner/.m2/repository/commons-fileupload/commons-fileupload/1.3.1-jenkins-2/commons-fileupload-1.3.1-jenkins-2.jar,/jenkins/war/target/jenkins/WEB-INF/lib/commons-fileupload-1.3.1-jenkins-2.jar,/home/wss-scanner/.m2/repository/commons-fileupload/commons-fileupload/1.3.1-jenkins-2/commons-fileupload-1.3.1-jenkins-2.jar</p>
<p>
Dependency Hierarchy:
- :x: **commons-fileupload-1.3.1-jenkins-2.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/yael-lindman/jenkins/commit/3abfb254ea347629c1aed893510d133a3831c30e">3abfb254ea347629c1aed893510d133a3831c30e</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The MultipartStream class in Apache Commons Fileupload before 1.3.2, as used in Apache Tomcat 7.x before 7.0.70, 8.x before 8.0.36, 8.5.x before 8.5.3, and 9.x before 9.0.0.M7 and other products, allows remote attackers to cause a denial of service (CPU consumption) via a long boundary string.
<p>Publish Date: 2016-07-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-3092>CVE-2016-3092</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-3092">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-3092</a></p>
<p>Release Date: 2016-07-04</p>
<p>Fix Resolution: org.apache.tomcat.embed:tomcat-embed-core:9.0.0.M8,8.5.3,8.0.36,7.0.70,org.apache.tomcat:tomcat-coyote:9.0.0.M8,8.5.3,8.0.36,7.0.70,commons-fileupload:commons-fileupload:1.3.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2016-3092 (High) detected in commons-fileupload-1.3.1-jenkins-2.jar - autoclosed - ## CVE-2016-3092 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-fileupload-1.3.1-jenkins-2.jar</b></p></summary>
<p>The Apache Commons FileUpload component provides a simple yet flexible means of adding support for multipart
file upload functionality to servlets and web applications.</p>
<p>Path to dependency file: /tmp/ws-scm/jenkins/core/pom.xml</p>
<p>Path to vulnerable library: canner/.m2/repository/commons-fileupload/commons-fileupload/1.3.1-jenkins-2/commons-fileupload-1.3.1-jenkins-2.jar,/jenkins/war/target/jenkins/WEB-INF/lib/commons-fileupload-1.3.1-jenkins-2.jar,/home/wss-scanner/.m2/repository/commons-fileupload/commons-fileupload/1.3.1-jenkins-2/commons-fileupload-1.3.1-jenkins-2.jar</p>
<p>
Dependency Hierarchy:
- :x: **commons-fileupload-1.3.1-jenkins-2.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/yael-lindman/jenkins/commit/3abfb254ea347629c1aed893510d133a3831c30e">3abfb254ea347629c1aed893510d133a3831c30e</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The MultipartStream class in Apache Commons Fileupload before 1.3.2, as used in Apache Tomcat 7.x before 7.0.70, 8.x before 8.0.36, 8.5.x before 8.5.3, and 9.x before 9.0.0.M7 and other products, allows remote attackers to cause a denial of service (CPU consumption) via a long boundary string.
<p>Publish Date: 2016-07-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-3092>CVE-2016-3092</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-3092">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-3092</a></p>
<p>Release Date: 2016-07-04</p>
<p>Fix Resolution: org.apache.tomcat.embed:tomcat-embed-core:9.0.0.M8,8.5.3,8.0.36,7.0.70,org.apache.tomcat:tomcat-coyote:9.0.0.M8,8.5.3,8.0.36,7.0.70,commons-fileupload:commons-fileupload:1.3.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve high detected in commons fileupload jenkins jar autoclosed cve high severity vulnerability vulnerable library commons fileupload jenkins jar the apache commons fileupload component provides a simple yet flexible means of adding support for multipart file upload functionality to servlets and web applications path to dependency file tmp ws scm jenkins core pom xml path to vulnerable library canner repository commons fileupload commons fileupload jenkins commons fileupload jenkins jar jenkins war target jenkins web inf lib commons fileupload jenkins jar home wss scanner repository commons fileupload commons fileupload jenkins commons fileupload jenkins jar dependency hierarchy x commons fileupload jenkins jar vulnerable library found in head commit a href vulnerability details the multipartstream class in apache commons fileupload before as used in apache tomcat x before x before x before and x before and other products allows remote attackers to cause a denial of service cpu consumption via a long boundary string publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache tomcat embed tomcat embed core org apache tomcat tomcat coyote commons fileupload commons fileupload step up your open source security game with whitesource | 0 |
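The denial of service in this CVE comes from an attacker-supplied multipart boundary of unbounded length. Independent of upgrading (which is the real fix), the shape of the mitigation is a hard length cap applied before any stream scanning. This is a sketch of that idea, not the actual Commons FileUpload 1.3.2 patch; RFC 2046 limits multipart boundaries to 70 characters:

```python
MAX_BOUNDARY_LEN = 70  # RFC 2046, section 5.1.1: a boundary is 1-70 characters

def validate_boundary(boundary: bytes) -> bytes:
    """Reject oversized multipart boundaries before any parsing work."""
    if not 0 < len(boundary) <= MAX_BOUNDARY_LEN:
        raise ValueError(
            f"multipart boundary length {len(boundary)} outside 1..{MAX_BOUNDARY_LEN}"
        )
    return boundary
```

Bounding the boundary up front means the per-byte matching inside the stream parser can no longer be driven to pathological CPU use by a long boundary string.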
4,820 | 24,847,836,015 | IssuesEvent | 2022-10-26 17:19:54 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | closed | RecursionError in records endpoint | type: bug work: backend status: ready restricted: maintainers | ## Description
I've been getting this error for a few tables in my environment.
* These tables have been created using the 'Create new table' button, not using file import.
* These tables have no rows.
* The tables have a link to another table.
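The traceback below alternates between `get_preview_info` and `_preview_info_by_column_id`, which suggests the tables' links form a cycle (table A references B and B references A), so the preview lookup recurses without bound. A minimal, self-contained sketch of the kind of visited-set guard that terminates such a walk — hypothetical names and a plain dict standing in for the link metadata, not Mathesar's actual API:

```python
def preview_info(table_id, links, seen=None):
    """Walk foreign-key links between tables, guarding against cycles
    (table A links to B, B links back to A) that otherwise recurse forever."""
    if seen is None:
        seen = set()
    if table_id in seen:
        return []  # already visited: stop instead of recursing without bound
    seen.add(table_id)
    visited = [table_id]
    for referent in links.get(table_id, []):
        visited += preview_info(referent, links, seen)
    return visited

# two tables that link to each other: the guard terminates the walk
print(preview_info(1, {1: [2], 2: [1]}))  # → [1, 2]
```

Without the `seen` check, the mutual links in the example would reproduce exactly the unbounded alternation shown in the traceback.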
```
Environment:
Request Method: GET
Request URL: http://localhost:8000/api/db/v0/tables/2/records/
Django Version: 3.1.14
Python Version: 3.9.14
Installed Applications:
['django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'rest_framework',
'django_filters',
'django_property_filter',
'mathesar']
Installed Middleware:
['django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware']
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/django/core/handlers/exception.py", line 47, in inner
response = get_response(request)
File "/usr/local/lib/python3.9/site-packages/django/core/handlers/base.py", line 181, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/usr/local/lib/python3.9/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view
return view_func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/rest_framework/viewsets.py", line 125, in view
return self.dispatch(request, *args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 509, in dispatch
response = self.handle_exception(exc)
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 466, in handle_exception
response = exception_handler(exc, context)
File "/code/mathesar/exception_handlers.py", line 55, in mathesar_exception_handler
raise exc
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 506, in dispatch
response = handler(request, *args, **kwargs)
File "/code/mathesar/api/db/viewsets/records.py", line 67, in list
records = paginator.paginate_queryset(
File "/code/mathesar/api/pagination.py", line 82, in paginate_queryset
preview_metadata, preview_columns = get_preview_info(table.id)
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 70, in get_preview_info
fk_constraints = [
File "/code/mathesar/utils/preview.py", line 73, in <listcomp>
if table_constraint.type == ConstraintType.FOREIGN_KEY.value
File "/code/mathesar/models/base.py", line 774, in type
return constraint_utils.get_constraint_type_from_char(self._constraint_record['contype'])
File "/code/mathesar/models/base.py", line 766, in _constraint_record
return get_constraint_record_from_oid(self.oid, engine)
File "/code/db/constraints/operations/select.py", line 33, in get_constraint_record_from_oid
pg_constraint = get_pg_catalog_table("pg_constraint", engine, metadata=metadata)
File "/code/db/utils.py", line 92, in warning_ignored_func
return f(*args, **kwargs)
File "/code/db/utils.py", line 99, in get_pg_catalog_table
return sqlalchemy.Table(table_name, metadata, autoload_with=engine, schema='pg_catalog')
File "<string>", line 2, in __new__
<source code not available>
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/util/deprecations.py", line 298, in warned
return fn(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/schema.py", line 600, in __new__
metadata._remove_table(name, schema)
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
compat.raise_(
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
raise exception
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/schema.py", line 595, in __new__
table._init(name, metadata, *args, **kw)
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/schema.py", line 670, in _init
self._autoload(
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/schema.py", line 705, in _autoload
conn_insp.reflect_table(
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/reflection.py", line 774, in reflect_table
for col_d in self.get_columns(
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/reflection.py", line 497, in get_columns
col_defs = self.dialect.get_columns(
File "<string>", line 2, in get_columns
<source code not available>
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/reflection.py", line 55, in cache
ret = fn(self, con, *args, **kw)
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/dialects/postgresql/base.py", line 3585, in get_columns
table_oid = self.get_table_oid(
File "<string>", line 2, in get_table_oid
<source code not available>
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/reflection.py", line 55, in cache
ret = fn(self, con, *args, **kw)
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/dialects/postgresql/base.py", line 3462, in get_table_oid
c = connection.execute(s, dict(table_name=table_name, schema=schema))
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/future/engine.py", line 280, in execute
return self._execute_20(
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1582, in _execute_20
return meth(self, args_10style, kwargs_10style, execution_options)
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/elements.py", line 324, in _execute_on_connection
return connection._execute_clauseelement(
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1451, in _execute_clauseelement
ret = self._execute_context(
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1813, in _execute_context
self._handle_dbapi_exception(
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1998, in _handle_dbapi_exception
util.raise_(exc_info[1], with_traceback=exc_info[2])
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
raise exception
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1786, in _execute_context
result = context._setup_result_proxy()
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 1406, in _setup_result_proxy
result = self._setup_dml_or_text_result()
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 1494, in _setup_dml_or_text_result
result = _cursor.CursorResult(self, strategy, cursor_description)
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/cursor.py", line 1253, in __init__
metadata = self._init_metadata(context, cursor_description)
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/cursor.py", line 1310, in _init_metadata
metadata = metadata._adapt_to_context(context)
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/cursor.py", line 136, in _adapt_to_context
invoked_statement._exported_columns_iterator()
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/selectable.py", line 126, in _exported_columns_iterator
return iter(self.exported_columns)
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/selectable.py", line 2870, in exported_columns
return self.selected_columns
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 1180, in __get__
obj.__dict__[self.__name__] = result = self.fget(obj)
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/selectable.py", line 6354, in selected_columns
return ColumnCollection(
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/base.py", line 1128, in __init__
self._initial_populate(columns)
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/base.py", line 1131, in _initial_populate
self._populate_separate_keys(iter_)
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/base.py", line 1227, in _populate_separate_keys
self._colset.update(c for k, c in self._collection)
Exception Type: RecursionError at /api/db/v0/tables/2/records/
Exception Value: maximum recursion depth exceeded
```
I'm not sure about the cause. It occurs consistently in my environment, but I've been unable to reproduce it on staging.
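For what it's worth, the repeating pair of frames suggests `get_preview_info` and `_preview_info_by_column_id` are following foreign keys around a cycle (e.g. two linked tables that reference each other), with no guard against revisiting a table. A minimal sketch of the failure mode and a visited-set guard — `FK_GRAPH` and `collect_preview_columns` are hypothetical names for illustration, not Mathesar's actual API:

```python
# Two tables linking to each other form a cycle: A -> B -> A -> ...
# A naive depth-first walk over this graph recurses forever; tracking
# visited tables bounds it.
FK_GRAPH = {"A": ["B"], "B": ["A"]}

def collect_preview_columns(table, visited=None):
    """Walk referent tables depth-first, skipping any table already seen."""
    if visited is None:
        visited = set()
    if table in visited:
        return []          # cycle guard: stop instead of recursing forever
    visited.add(table)
    columns = [f"{table}.preview"]
    for referent in FK_GRAPH.get(table, []):
        columns.extend(collect_preview_columns(referent, visited))
    return columns

print(collect_preview_columns("A"))  # ['A.preview', 'B.preview']
```

This matches the symptoms: tables without a link (or with only acyclic links) work fine, while mutually-linked tables hit the recursion limit.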
I've been getting this error for a few tables in my environment.
* These tables have been created using the 'Create new table' button, not using file import.
* These tables have no rows.
* The tables have a link to another table.
```
Environment:
Request Method: GET
Request URL: http://localhost:8000/api/db/v0/tables/2/records/
Django Version: 3.1.14
Python Version: 3.9.14
Installed Applications:
['django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'rest_framework',
'django_filters',
'django_property_filter',
'mathesar']
Installed Middleware:
['django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware']
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/django/core/handlers/exception.py", line 47, in inner
response = get_response(request)
File "/usr/local/lib/python3.9/site-packages/django/core/handlers/base.py", line 181, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/usr/local/lib/python3.9/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view
return view_func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/rest_framework/viewsets.py", line 125, in view
return self.dispatch(request, *args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 509, in dispatch
response = self.handle_exception(exc)
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 466, in handle_exception
response = exception_handler(exc, context)
File "/code/mathesar/exception_handlers.py", line 55, in mathesar_exception_handler
raise exc
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 506, in dispatch
response = handler(request, *args, **kwargs)
File "/code/mathesar/api/db/viewsets/records.py", line 67, in list
records = paginator.paginate_queryset(
File "/code/mathesar/api/pagination.py", line 82, in paginate_queryset
preview_metadata, preview_columns = get_preview_info(table.id)
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
File "/code/mathesar/utils/preview.py", line 81, in get_preview_info
preview_info, columns = _preview_info_by_column_id(
File "/code/mathesar/utils/preview.py", line 22, in _preview_info_by_column_id
referent_preview_info, referent_preview_columns = get_preview_info(
[... the previous two frames repeat ~50 more times ...]
File "/code/mathesar/utils/preview.py", line 70, in get_preview_info
fk_constraints = [
File "/code/mathesar/utils/preview.py", line 73, in <listcomp>
if table_constraint.type == ConstraintType.FOREIGN_KEY.value
File "/code/mathesar/models/base.py", line 774, in type
return constraint_utils.get_constraint_type_from_char(self._constraint_record['contype'])
File "/code/mathesar/models/base.py", line 766, in _constraint_record
return get_constraint_record_from_oid(self.oid, engine)
File "/code/db/constraints/operations/select.py", line 33, in get_constraint_record_from_oid
pg_constraint = get_pg_catalog_table("pg_constraint", engine, metadata=metadata)
File "/code/db/utils.py", line 92, in warning_ignored_func
return f(*args, **kwargs)
File "/code/db/utils.py", line 99, in get_pg_catalog_table
return sqlalchemy.Table(table_name, metadata, autoload_with=engine, schema='pg_catalog')
File "<string>", line 2, in __new__
<source code not available>
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/util/deprecations.py", line 298, in warned
return fn(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/schema.py", line 600, in __new__
metadata._remove_table(name, schema)
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
compat.raise_(
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
raise exception
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/schema.py", line 595, in __new__
table._init(name, metadata, *args, **kw)
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/schema.py", line 670, in _init
self._autoload(
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/schema.py", line 705, in _autoload
conn_insp.reflect_table(
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/reflection.py", line 774, in reflect_table
for col_d in self.get_columns(
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/reflection.py", line 497, in get_columns
col_defs = self.dialect.get_columns(
File "<string>", line 2, in get_columns
<source code not available>
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/reflection.py", line 55, in cache
ret = fn(self, con, *args, **kw)
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/dialects/postgresql/base.py", line 3585, in get_columns
table_oid = self.get_table_oid(
File "<string>", line 2, in get_table_oid
<source code not available>
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/reflection.py", line 55, in cache
ret = fn(self, con, *args, **kw)
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/dialects/postgresql/base.py", line 3462, in get_table_oid
c = connection.execute(s, dict(table_name=table_name, schema=schema))
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/future/engine.py", line 280, in execute
return self._execute_20(
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1582, in _execute_20
return meth(self, args_10style, kwargs_10style, execution_options)
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/elements.py", line 324, in _execute_on_connection
return connection._execute_clauseelement(
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1451, in _execute_clauseelement
ret = self._execute_context(
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1813, in _execute_context
self._handle_dbapi_exception(
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1998, in _handle_dbapi_exception
util.raise_(exc_info[1], with_traceback=exc_info[2])
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
raise exception
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1786, in _execute_context
result = context._setup_result_proxy()
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 1406, in _setup_result_proxy
result = self._setup_dml_or_text_result()
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 1494, in _setup_dml_or_text_result
result = _cursor.CursorResult(self, strategy, cursor_description)
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/cursor.py", line 1253, in __init__
metadata = self._init_metadata(context, cursor_description)
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/cursor.py", line 1310, in _init_metadata
metadata = metadata._adapt_to_context(context)
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/cursor.py", line 136, in _adapt_to_context
invoked_statement._exported_columns_iterator()
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/selectable.py", line 126, in _exported_columns_iterator
return iter(self.exported_columns)
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/selectable.py", line 2870, in exported_columns
return self.selected_columns
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 1180, in __get__
obj.__dict__[self.__name__] = result = self.fget(obj)
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/selectable.py", line 6354, in selected_columns
return ColumnCollection(
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/base.py", line 1128, in __init__
self._initial_populate(columns)
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/base.py", line 1131, in _initial_populate
self._populate_separate_keys(iter_)
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/base.py", line 1227, in _populate_separate_keys
self._colset.update(c for k, c in self._collection)
Exception Type: RecursionError at /api/db/v0/tables/2/records/
Exception Value: maximum recursion depth exceeded
```
I'm not sure about the cause. It occurs consistently in my environment, but I've been unable to reproduce it on staging.
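For what it's worth, the traceback shows `get_preview_info` and `_preview_info_by_column_id` calling each other with no apparent cycle guard, so a foreign-key cycle between tables (e.g. two tables linking to each other) would recurse until Python's limit. Below is a minimal sketch of that failure mode and of a visited-set guard that breaks the cycle; the table names and return shapes are hypothetical, not Mathesar's actual code:

```python
# Hypothetical link graph: each table's preview column points at another
# table. A cycle (table_a -> table_b -> table_a) mirrors the mutual
# recursion between the two functions in the traceback.
LINKS = {"table_a": "table_b", "table_b": "table_a"}


def get_preview_info(table):
    # stands in for mathesar/utils/preview.py:81, which calls back down
    return _preview_info_by_column_id(table)


def _preview_info_by_column_id(table):
    # stands in for mathesar/utils/preview.py:22, which recurses into
    # the referent table's preview
    referent = LINKS[table]
    return get_preview_info(referent)


try:
    get_preview_info("table_a")
except RecursionError as exc:
    print(f"RecursionError: {exc}")


def get_preview_info_safe(table, seen=None):
    # Same traversal, but tracking visited tables stops the cycle.
    seen = set() if seen is None else seen
    if table in seen:
        return {"table": table, "preview": None}  # cut the cycle here
    seen.add(table)
    referent = LINKS[table]
    return {"table": table, "preview": get_preview_info_safe(referent, seen)}


print(get_preview_info_safe("table_a"))
```

With the guard, the nested preview terminates as soon as a table is revisited instead of blowing the stack.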
utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info preview info columns preview info by column id file code mathesar utils preview py line in preview info by column id referent preview info referent preview columns get preview info file code mathesar utils preview py line in get preview info fk constraints file code mathesar utils preview py line in if table constraint type constrainttype foreign key value file code mathesar models base py line in type return constraint utils get constraint type from char self constraint record file code mathesar models base py line in constraint record return get constraint record from oid self oid engine file code db constraints operations select py line in get constraint record from oid pg constraint get pg catalog table pg constraint engine metadata metadata file code db utils py line in warning ignored func return f args kwargs file code db utils py line in get pg catalog table return sqlalchemy table table name metadata autoload with engine schema pg catalog file line in new file usr local lib site packages sqlalchemy util deprecations py line in warned return fn args kwargs file usr local lib site packages sqlalchemy sql schema py line in new metadata remove table name schema file usr local lib site packages sqlalchemy util langhelpers py line in exit compat raise file usr local lib site packages sqlalchemy util compat py line in raise raise exception file usr local lib site packages sqlalchemy sql schema py line in new table init name metadata args kw file usr local lib site packages sqlalchemy sql schema py line in init self autoload file usr local lib site packages sqlalchemy sql schema py line in autoload conn insp reflect table file usr local lib site packages sqlalchemy engine reflection py 
line in reflect table for col d in self get columns file usr local lib site packages sqlalchemy engine reflection py line in get columns col defs self dialect get columns file line in get columns file usr local lib site packages sqlalchemy engine reflection py line in cache ret fn self con args kw file usr local lib site packages sqlalchemy dialects postgresql base py line in get columns table oid self get table oid file line in get table oid file usr local lib site packages sqlalchemy engine reflection py line in cache ret fn self con args kw file usr local lib site packages sqlalchemy dialects postgresql base py line in get table oid c connection execute s dict table name table name schema schema file usr local lib site packages sqlalchemy future engine py line in execute return self execute file usr local lib site packages sqlalchemy engine base py line in execute return meth self args kwargs execution options file usr local lib site packages sqlalchemy sql elements py line in execute on connection return connection execute clauseelement file usr local lib site packages sqlalchemy engine base py line in execute clauseelement ret self execute context file usr local lib site packages sqlalchemy engine base py line in execute context self handle dbapi exception file usr local lib site packages sqlalchemy engine base py line in handle dbapi exception util raise exc info with traceback exc info file usr local lib site packages sqlalchemy util compat py line in raise raise exception file usr local lib site packages sqlalchemy engine base py line in execute context result context setup result proxy file usr local lib site packages sqlalchemy engine default py line in setup result proxy result self setup dml or text result file usr local lib site packages sqlalchemy engine default py line in setup dml or text result result cursor cursorresult self strategy cursor description file usr local lib site packages sqlalchemy engine cursor py line in init metadata self init 
metadata context cursor description file usr local lib site packages sqlalchemy engine cursor py line in init metadata metadata metadata adapt to context context file usr local lib site packages sqlalchemy engine cursor py line in adapt to context invoked statement exported columns iterator file usr local lib site packages sqlalchemy sql selectable py line in exported columns iterator return iter self exported columns file usr local lib site packages sqlalchemy sql selectable py line in exported columns return self selected columns file usr local lib site packages sqlalchemy util langhelpers py line in get obj dict result self fget obj file usr local lib site packages sqlalchemy sql selectable py line in selected columns return columncollection file usr local lib site packages sqlalchemy sql base py line in init self initial populate columns file usr local lib site packages sqlalchemy sql base py line in initial populate self populate separate keys iter file usr local lib site packages sqlalchemy sql base py line in populate separate keys self colset update c for k c in self collection exception type recursionerror at api db tables records exception value maximum recursion depth exceeded i m not sure about the cause and it s occuring consistently for me but unable to reproduce it on staging | 1 |
97,520 | 12,240,517,205 | IssuesEvent | 2020-05-05 00:34:22 | clinicaccess/casn-app | https://api.github.com/repos/clinicaccess/casn-app | closed | Drive Card> Map View> Weird Highlight | Design low | When you tap a drive card and tap the map view, a box is highlighted around it.

| 1.0 | non_main | 0
759 | 4,357,116,649 | IssuesEvent | 2016-08-02 00:00:50 | duckduckgo/zeroclickinfo-goodies | https://api.github.com/repos/duckduckgo/zeroclickinfo-goodies | closed | JSON Validator: prettyprint JSON when it's valid | Improvement Maintainer Input Requested | ## Background
The library used for the JSON Validator, [jsonlint](https://github.com/zaach/jsonlint), seems to also allow JSON prettyprinting when it's valid, in addition to outputting useful errors in case it isn't.
## To Do
- Broaden the trigger space (use triggers like "json prettyprint", "json beautify", etc
- Edit the description to make it clear this IA doesn't just validate JSON, but beautifies it as well
- Either replace the input with its beautified version if it's valid, or show the beautified version next to it
## Forum
See the topic [Improve JSON Lint Goodie](https://forum.duckduckhack.com/t/improve-json-lint-goodie/327/4)
This issue is part of the [Programming Mission](https://forum.duckduckhack.com/t/duckduckhack-programming-mission-overview/53): help us improve the results for [JavaScript related searches](https://forum.duckduckhack.com/t/javascript-search-overview/94)!
---------------------------------------------------
IA Page: https://duck.co/ia/view/json_validator
maintainer: @sahildua2305 | True | main | 1
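The validate-then-beautify flow proposed in the issue above can be sketched with Python's standard library (the real Goodie uses the jsonlint JavaScript library; this is just an illustration of the intended behavior, and the function name is made up):

```python
import json

def validate_and_prettyprint(raw):
    """Return (True, pretty_text) for valid JSON, or (False, error) otherwise."""
    try:
        parsed = json.loads(raw)
    except json.JSONDecodeError as err:
        # Mirror jsonlint's useful errors: report where parsing failed.
        return False, f"line {err.lineno}, column {err.colno}: {err.msg}"
    # Valid input: hand back a beautified version instead of just "valid".
    return True, json.dumps(parsed, indent=2)

ok, out = validate_and_prettyprint('{"a":[1,2],"b":{"c":true}}')
print(out)
```

Either branch gives the IA something useful to show: a prettyprinted document next to (or replacing) the input, or a pointed parse error.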
13,814 | 5,467,265,993 | IssuesEvent | 2017-03-10 00:31:53 | mitchellh/packer | https://api.github.com/repos/mitchellh/packer | closed | build -force should remove existing AWS AMI | bug builder/amazon | According to the docs, -force should remove existing artifacts before building, however this seems to work for QEMU but not for amazon-ebs builder:
# packer build -force template.json
[...]
==> amazon-ebs: Error: name conflicts with an existing AMI: ami-0f160a63
Build 'amazon-ebs' errored: Error: name conflicts with an existing AMI: ami-0f160a63
| 1.0 | non_main | 0
170,427 | 20,870,659,959 | IssuesEvent | 2022-03-22 11:37:37 | HoangBachLeLe/AngularCRUD | https://api.github.com/repos/HoangBachLeLe/AngularCRUD | opened | karma-6.3.17.tgz: 2 vulnerabilities (highest severity is: 9.8) | security vulnerability | <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>karma-6.3.17.tgz</b></p></summary>
<p></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/qs/package.json</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/HoangBachLeLe/AngularCRUD/commit/a6b9e12f906bc518ef7e2135d083ed13373a5353">a6b9e12f906bc518ef7e2135d083ed13373a5353</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2021-44906](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-44906) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 9.8 | minimist-1.2.5.tgz | Transitive | N/A | ❌ |
| [CVE-2021-44907](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-44907) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.5 | qs-6.9.7.tgz | Transitive | N/A | ❌ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2021-44906</summary>
### Vulnerable Library - <b>minimist-1.2.5.tgz</b></p>
<p>parse argument options</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-1.2.5.tgz">https://registry.npmjs.org/minimist/-/minimist-1.2.5.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/minimist/package.json</p>
<p>
Dependency Hierarchy:
- karma-6.3.17.tgz (Root Library)
- mkdirp-0.5.5.tgz
- :x: **minimist-1.2.5.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/HoangBachLeLe/AngularCRUD/commit/a6b9e12f906bc518ef7e2135d083ed13373a5353">a6b9e12f906bc518ef7e2135d083ed13373a5353</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Minimist <=1.2.5 is vulnerable to Prototype Pollution via file index.js, function setKey() (lines 69-95).
<p>Publish Date: 2022-03-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-44906>CVE-2021-44906</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>9.8</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-44906">https://nvd.nist.gov/vuln/detail/CVE-2021-44906</a></p>
<p>Release Date: 2022-03-17</p>
<p>Fix Resolution: BumperLane.Public.Service.Contracts - 0.23.35.214-prerelease;cloudscribe.templates - 5.2.0;Virteom.Tenant.Mobile.Bluetooth - 0.21.29.159-prerelease;ShowingVault.DotNet.Sdk - 0.13.41.190-prerelease;Envisia.DotNet.Templates - 3.0.1;Yarnpkg.Yarn - 0.26.1;Virteom.Tenant.Mobile.Framework.UWP - 0.20.41.103-prerelease;Virteom.Tenant.Mobile.Framework.iOS - 0.20.41.103-prerelease;BumperLane.Public.Api.V2.ClientModule - 0.23.35.214-prerelease;VueJS.NetCore - 1.1.1;Dianoga - 4.0.0,3.0.0-RC02;Virteom.Tenant.Mobile.Bluetooth.iOS - 0.20.41.103-prerelease;Virteom.Public.Utilities - 0.23.37.212-prerelease;Indianadavy.VueJsWebAPITemplate.CSharp - 1.0.1;NorDroN.AngularTemplate - 0.1.6;Virteom.Tenant.Mobile.Framework - 0.21.29.159-prerelease;Virteom.Tenant.Mobile.Bluetooth.Android - 0.20.41.103-prerelease;z4a-dotnet-scaffold - 1.0.0.2;Raml.Parser - 1.0.7;CoreVueWebTest - 3.0.101;dotnetng.template - 1.0.0.4;SitecoreMaster.TrueDynamicPlaceholders - 1.0.3;Virteom.Tenant.Mobile.Framework.Android - 0.20.41.103-prerelease;Fable.Template.Elmish.React - 0.1.6;BlazorPolyfill.Build - 6.0.100.2;Fable.Snowpack.Template - 2.1.0;BumperLane.Public.Api.Client - 0.23.35.214-prerelease;Yarn.MSBuild - 0.22.0,0.24.6;Blazor.TailwindCSS.BUnit - 1.0.2;Bridge.AWS - 0.3.30.36;tslint - 5.6.0;SAFE.Template - 3.0.1;GR.PageRender.Razor - 1.8.0;MIDIator.WebClient - 1.0.105</p>
</p>
<p></p>
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2021-44907</summary>
### Vulnerable Library - <b>qs-6.9.7.tgz</b></p>
<p>A querystring parser that supports nesting and arrays, with a depth limit</p>
<p>Library home page: <a href="https://registry.npmjs.org/qs/-/qs-6.9.7.tgz">https://registry.npmjs.org/qs/-/qs-6.9.7.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/qs/package.json</p>
<p>
Dependency Hierarchy:
- karma-6.3.17.tgz (Root Library)
- body-parser-1.19.2.tgz
- :x: **qs-6.9.7.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/HoangBachLeLe/AngularCRUD/commit/a6b9e12f906bc518ef7e2135d083ed13373a5353">a6b9e12f906bc518ef7e2135d083ed13373a5353</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
A Denial of Service vulnerability exists in qs up to 6.8.0 due to insufficient sanitization of property in the qs.parse function. The merge() function allows the assignment of properties on an array in the query. For any property being assigned, a value in the array is converted to an object containing these properties. Essentially, this means that the property whose expected type is Array always has to be checked with Array.isArray() by the user. This may not be obvious to the user and can cause unexpected behavior.
<p>Publish Date: 2022-03-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-44907>CVE-2021-44907</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-44907">https://nvd.nist.gov/vuln/detail/CVE-2021-44907</a></p>
<p>Release Date: 2022-03-17</p>
<p>Fix Resolution: GR.PageRender.Razor - 1.8.0;MIDIator.WebClient - 1.0.105;cloudscribe.templates - 5.2.0;KnstAsyncApiUI - 1.0.2-pre;Romano.Vue - 1.0.1;Yarnpkg.Yarn - 0.26.1;VueJS.NetCore - 1.1.1;NativeScript.Sidekick.Standalone.Shell - 1.9.1-v2018050205;Indianadavy.VueJsWebAPITemplate.CSharp - 1.0.1;NorDroN.AngularTemplate - 0.1.6;dotnetng.template - 1.0.0.2;Fable.Template.Elmish.React - 0.1.6;Fable.Snowpack.Template - 2.1.0;Yarn.MSBuild - 0.22.0,0.24.6</p>
</p>
<p></p>
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details>
<!-- <REMEDIATE>[{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"minimist","packageVersion":"1.2.5","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"karma:6.3.17;mkdirp:0.5.5;minimist:1.2.5","isMinimumFixVersionAvailable":true,"minimumFixVersion":"BumperLane.Public.Service.Contracts - 0.23.35.214-prerelease;cloudscribe.templates - 5.2.0;Virteom.Tenant.Mobile.Bluetooth - 0.21.29.159-prerelease;ShowingVault.DotNet.Sdk - 0.13.41.190-prerelease;Envisia.DotNet.Templates - 3.0.1;Yarnpkg.Yarn - 0.26.1;Virteom.Tenant.Mobile.Framework.UWP - 0.20.41.103-prerelease;Virteom.Tenant.Mobile.Framework.iOS - 0.20.41.103-prerelease;BumperLane.Public.Api.V2.ClientModule - 0.23.35.214-prerelease;VueJS.NetCore - 1.1.1;Dianoga - 4.0.0,3.0.0-RC02;Virteom.Tenant.Mobile.Bluetooth.iOS - 0.20.41.103-prerelease;Virteom.Public.Utilities - 0.23.37.212-prerelease;Indianadavy.VueJsWebAPITemplate.CSharp - 1.0.1;NorDroN.AngularTemplate - 0.1.6;Virteom.Tenant.Mobile.Framework - 0.21.29.159-prerelease;Virteom.Tenant.Mobile.Bluetooth.Android - 0.20.41.103-prerelease;z4a-dotnet-scaffold - 1.0.0.2;Raml.Parser - 1.0.7;CoreVueWebTest - 3.0.101;dotnetng.template - 1.0.0.4;SitecoreMaster.TrueDynamicPlaceholders - 1.0.3;Virteom.Tenant.Mobile.Framework.Android - 0.20.41.103-prerelease;Fable.Template.Elmish.React - 0.1.6;BlazorPolyfill.Build - 6.0.100.2;Fable.Snowpack.Template - 2.1.0;BumperLane.Public.Api.Client - 0.23.35.214-prerelease;Yarn.MSBuild - 0.22.0,0.24.6;Blazor.TailwindCSS.BUnit - 1.0.2;Bridge.AWS - 0.3.30.36;tslint - 5.6.0;SAFE.Template - 3.0.1;GR.PageRender.Razor - 1.8.0;MIDIator.WebClient - 1.0.105","isBinary":false}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2021-44906","vulnerabilityDetails":"Minimist \u003c\u003d1.2.5 is vulnerable to Prototype Pollution via file index.js, function setKey() (lines 
69-95).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-44906","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}},{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"qs","packageVersion":"6.9.7","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"karma:6.3.17;body-parser:1.19.2;qs:6.9.7","isMinimumFixVersionAvailable":true,"minimumFixVersion":"GR.PageRender.Razor - 1.8.0;MIDIator.WebClient - 1.0.105;cloudscribe.templates - 5.2.0;KnstAsyncApiUI - 1.0.2-pre;Romano.Vue - 1.0.1;Yarnpkg.Yarn - 0.26.1;VueJS.NetCore - 1.1.1;NativeScript.Sidekick.Standalone.Shell - 1.9.1-v2018050205;Indianadavy.VueJsWebAPITemplate.CSharp - 1.0.1;NorDroN.AngularTemplate - 0.1.6;dotnetng.template - 1.0.0.2;Fable.Template.Elmish.React - 0.1.6;Fable.Snowpack.Template - 2.1.0;Yarn.MSBuild - 0.22.0,0.24.6","isBinary":false}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2021-44907","vulnerabilityDetails":"A Denial of Service vulnerability exists in qs up to 6.8.0 due to insufficient sanitization of property in the gs.parse function. The merge() function allows the assignment of properties on an array in the query. For any property being assigned, a value in the array is converted to an object containing these properties. Essentially, this means that the property whose expected type is Array always has to be checked with Array.isArray() by the user. 
This may not be obvious to the user and can cause unexpected behavior.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-44907","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"Required","AV":"Local","I":"None"},"extraData":{}}]</REMEDIATE> --> | True
69-95).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-44906","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}},{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"qs","packageVersion":"6.9.7","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"karma:6.3.17;body-parser:1.19.2;qs:6.9.7","isMinimumFixVersionAvailable":true,"minimumFixVersion":"GR.PageRender.Razor - 1.8.0;MIDIator.WebClient - 1.0.105;cloudscribe.templates - 5.2.0;KnstAsyncApiUI - 1.0.2-pre;Romano.Vue - 1.0.1;Yarnpkg.Yarn - 0.26.1;VueJS.NetCore - 1.1.1;NativeScript.Sidekick.Standalone.Shell - 1.9.1-v2018050205;Indianadavy.VueJsWebAPITemplate.CSharp - 1.0.1;NorDroN.AngularTemplate - 0.1.6;dotnetng.template - 1.0.0.2;Fable.Template.Elmish.React - 0.1.6;Fable.Snowpack.Template - 2.1.0;Yarn.MSBuild - 0.22.0,0.24.6","isBinary":false}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2021-44907","vulnerabilityDetails":"A Denial of Service vulnerability exists in qs up to 6.8.0 due to insufficient sanitization of property in the gs.parse function. The merge() function allows the assignment of properties on an array in the query. For any property being assigned, a value in the array is converted to an object containing these properties. Essentially, this means that the property whose expected type is Array always has to be checked with Array.isArray() by the user. 
This may not be obvious to the user and can cause unexpected behavior.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-44907","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"Required","AV":"Local","I":"None"},"extraData":{}}]</REMEDIATE> --> | non_main | karma tgz vulnerabilities highest severity is vulnerable library karma tgz path to dependency file package json path to vulnerable library node modules qs package json found in head commit a href vulnerabilities cve severity cvss dependency type fixed in remediation available high minimist tgz transitive n a medium qs tgz transitive n a details cve vulnerable library minimist tgz parse argument options library home page a href path to dependency file package json path to vulnerable library node modules minimist package json dependency hierarchy karma tgz root library mkdirp tgz x minimist tgz vulnerable library found in head commit a href found in base branch main vulnerability details minimist is vulnerable to prototype pollution via file index js function setkey lines publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution bumperlane public service contracts prerelease cloudscribe templates virteom tenant mobile bluetooth prerelease showingvault dotnet sdk prerelease envisia dotnet templates yarnpkg yarn virteom tenant mobile framework uwp prerelease virteom tenant mobile framework ios prerelease bumperlane public api clientmodule prerelease vuejs netcore dianoga virteom tenant mobile bluetooth ios prerelease virteom public utilities prerelease indianadavy 
vuejswebapitemplate csharp nordron angulartemplate virteom tenant mobile framework prerelease virteom tenant mobile bluetooth android prerelease dotnet scaffold raml parser corevuewebtest dotnetng template sitecoremaster truedynamicplaceholders virteom tenant mobile framework android prerelease fable template elmish react blazorpolyfill build fable snowpack template bumperlane public api client prerelease yarn msbuild blazor tailwindcss bunit bridge aws tslint safe template gr pagerender razor midiator webclient step up your open source security game with whitesource cve vulnerable library qs tgz a querystring parser that supports nesting and arrays with a depth limit library home page a href path to dependency file package json path to vulnerable library node modules qs package json dependency hierarchy karma tgz root library body parser tgz x qs tgz vulnerable library found in head commit a href found in base branch main vulnerability details a denial of service vulnerability exists in qs up to due to insufficient sanitization of property in the gs parse function the merge function allows the assignment of properties on an array in the query for any property being assigned a value in the array is converted to an object containing these properties essentially this means that the property whose expected type is array always has to be checked with array isarray by the user this may not be obvious to the user and can cause unexpected behavior publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution gr pagerender razor midiator webclient cloudscribe templates knstasyncapiui pre romano vue yarnpkg yarn vuejs netcore 
nativescript sidekick standalone shell indianadavy vuejswebapitemplate csharp nordron angulartemplate dotnetng template fable template elmish react fable snowpack template yarn msbuild step up your open source security game with whitesource istransitivedependency true dependencytree karma mkdirp minimist isminimumfixversionavailable true minimumfixversion bumperlane public service contracts prerelease cloudscribe templates virteom tenant mobile bluetooth prerelease showingvault dotnet sdk prerelease envisia dotnet templates yarnpkg yarn virteom tenant mobile framework uwp prerelease virteom tenant mobile framework ios prerelease bumperlane public api clientmodule prerelease vuejs netcore dianoga virteom tenant mobile bluetooth ios prerelease virteom public utilities prerelease indianadavy vuejswebapitemplate csharp nordron angulartemplate virteom tenant mobile framework prerelease virteom tenant mobile bluetooth android prerelease dotnet scaffold raml parser corevuewebtest dotnetng template sitecoremaster truedynamicplaceholders virteom tenant mobile framework android prerelease fable template elmish react blazorpolyfill build fable snowpack template bumperlane public api client prerelease yarn msbuild blazor tailwindcss bunit bridge aws tslint safe template gr pagerender razor midiator webclient isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails minimist is vulnerable to prototype pollution via file index js function setkey lines vulnerabilityurl istransitivedependency true dependencytree karma body parser qs isminimumfixversionavailable true minimumfixversion gr pagerender razor midiator webclient cloudscribe templates knstasyncapiui pre romano vue yarnpkg yarn vuejs netcore nativescript sidekick standalone shell indianadavy vuejswebapitemplate csharp nordron angulartemplate dotnetng template fable template elmish react fable snowpack template yarn msbuild isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails a 
denial of service vulnerability exists in qs up to due to insufficient sanitization of property in the gs parse function the merge function allows the assignment of properties on an array in the query for any property being assigned a value in the array is converted to an object containing these properties essentially this means that the property whose expected type is array always has to be checked with array isarray by the user this may not be obvious to the user and can cause unexpected behavior vulnerabilityurl | 0 |
15,135 | 26,525,556,936 | IssuesEvent | 2023-01-19 08:25:23 | renovatebot/renovate | https://api.github.com/repos/renovatebot/renovate | opened | Gradle - Support npm dependencies in Kotlin JS and multiplatform | type:feature status:requirements priority-5-triage | ### What would you like Renovate to be able to do?
Kotlin supports JS as a target with javascript dependencies from npm:
```kotlin
// build.gradle.kts
plugins {
kotlin("js")
}
dependencies {
implementation(npm("sql.js", "1.7.0"))
implementation(npm(name = "sql.js", version = "1.7.0")) // notation with parameter names
implementation(npm("sql.js", "1.7.0", true)) // notation with unrelated kotlin parameter
implementation(npm(name = "sql.js", version = "1.7.0", generateExternal = true)) // notation with unrelated kotlin parameter
implementation(devNpm("copy-webpack-plugin", "~9.1.0")) // dev scope with "normal" npm version notation
implementation(peerNpm("copy-webpack-plugin", "~9.1.0")) // peer scope
}
```
As always in Kotlin, all parameter names are optional.
### If you have any ideas on how this should be implemented, please tell us here.
Only a very general idea:
"Just" extract the npm dependency, run the normal npm update check and update the npm version variable in the Gradle script.
Maybe creating a temp npm file could work too.
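Renovate itself is written in TypeScript; the following Python sketch is only an illustration of the "extract the npm coordinates, then rewrite the version string" idea above. The function names and regex are hypothetical, and the regex deliberately ignores extra Kotlin-only parameters such as `generateExternal`:

```python
import re

# Hypothetical extractor for npm()/devNpm()/peerNpm() calls in a Gradle
# Kotlin DSL script. Handles both positional and named name/version
# arguments; trailing Kotlin-only parameters are simply not captured.
NPM_CALL = re.compile(
    r'(?P<fn>npm|devNpm|peerNpm)\s*\(\s*'
    r'(?:name\s*=\s*)?"(?P<name>[^"]+)"\s*,\s*'
    r'(?:version\s*=\s*)?"(?P<version>[^"]+)"'
)

def extract_npm_deps(gradle_script: str):
    """Return (scope, package, version) tuples found in the script."""
    return [(m.group("fn"), m.group("name"), m.group("version"))
            for m in NPM_CALL.finditer(gradle_script)]

def bump_version(gradle_script: str, package: str, new_version: str) -> str:
    """Rewrite the version string of one npm dependency in place."""
    pattern = re.compile(
        r'((?:npm|devNpm|peerNpm)\s*\(\s*(?:name\s*=\s*)?"'
        + re.escape(package) + r'"\s*,\s*(?:version\s*=\s*)?")[^"]+(")'
    )
    return pattern.sub(lambda m: m.group(1) + new_version + m.group(2),
                       gradle_script)
```

After extraction, the normal npm datasource lookup could run on each `(name, version)` pair, and `bump_version` would write the chosen release back into the script without disturbing the surrounding Kotlin.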
### Is this a feature you are interested in implementing yourself?
Maybe | 1.0 | Gradle - Support npm dependencies in Kotlin JS and multiplatform - ### What would you like Renovate to be able to do?
Kotlin supports JS as a target with javascript dependencies from npm:
```kotlin
// build.gradle.kts
plugins {
kotlin("js")
}
dependencies {
implementation(npm("sql.js", "1.7.0"))
implementation(npm(name = "sql.js", version = "1.7.0")) // notation with parameter names
implementation(npm("sql.js", "1.7.0", true)) // notation with unrelated kotlin parameter
implementation(npm(name = "sql.js", version = "1.7.0", generateExternal = true)) // notation with unrelated kotlin parameter
implementation(devNpm("copy-webpack-plugin", "~9.1.0")) // dev scope with "normal" npm version notation
implementation(peerNpm("copy-webpack-plugin", "~9.1.0")) // peer scope
}
```
As always in Kotlin, all parameter names are optional.
### If you have any ideas on how this should be implemented, please tell us here.
Only a very general idea:
"Just" extract the npm dependency, run the normal npm update check and update the npm version variable in the Gradle script.
Maybe creating a temp npm file could work too.
### Is this a feature you are interested in implementing yourself?
Maybe | non_main | gradle support npm dependencies in kotlin js and multiplatform what would you like renovate to be able to do kotlin supports js as a target with javascript dependencies from npm kotlin build gradle kts plugins kotlin js dependencies implementation npm sql js implementation npm name sql js version notation with parameter names implementation npm sql js true notation with unrelated kotlin parameter implementation npm name sql js version generateexternal true notation with unrelated kotlin parameter implementation devnpm copy webpack plugin dev scope with normal npm version notation implementation peernpm copy webpack plugin peer scope like always in kotlin all parameter names are optionally if you have any ideas on how this should be implemented please tell us here only a very general idea just extract the npm dependency run the normal npm update check and update the npm version variable in the gradle script maybe creating a temp npm file could work too is this a feature you are interested in implementing yourself maybe | 0 |
3,648 | 14,861,842,186 | IssuesEvent | 2021-01-19 00:04:35 | DynamoRIO/drmemory | https://api.github.com/repos/DynamoRIO/drmemory | opened | [cleanup] remove !USE_DRSYMS define | Maintainability | Code for `!defined(USE_DRSYMS)` is very old and not supported now. We should just remove the define. | True | [cleanup] remove !USE_DRSYMS define - Code for `!defined(USE_DRSYMS)` is very old and not supported now. We should just remove the define. | main | remove use drsyms define code for defined use drsyms is very old and not supported now we should just remove the define | 1 |
2,730 | 9,669,251,880 | IssuesEvent | 2019-05-21 16:54:05 | precice/precice | https://api.github.com/repos/precice/precice | opened | Use unsigned sizes in SolverInterface | maintainability | We currently use `int` to pass sizes to the `SolverInterface`.
This makes the interface tedious to use with `size_t` and thus with the C++ standard containers due to the narrowing conversion from `size_t` to `int`.
Also, moving to `size_t` should future-proof the library for bigger mesh and data sizes.
This requires to add `size_t` to the communication back-end. | True | Use unsigned sizes in SolverInterface - We currently use `int` to pass sizes to the `SolverInterface`.
This makes the interface tedious to use with `size_t` and thus with the C++ standard containers due to the narrowing conversion from `size_t` to `int`.
Also, moving to `size_t` should future-proof the library for bigger mesh and data sizes.
This requires to add `size_t` to the communication back-end. | main | use unsigned sizes in solverinterface we currently use int to pass sizes to the solverinterface this makes the interface tedious to use with size t and thus with the c standard containers due to the narrowing conversion from int to size t also moving to size t should future proof the library for bigger mesh and data sizes this requires to add size t to the communication back end | 1 |
5,401 | 2,575,332,627 | IssuesEvent | 2015-02-11 22:22:04 | oxyplot/oxyplot | https://api.github.com/repos/oxyplot/oxyplot | closed | Default constructor not found for type OxyPlot.Xamarin.Forms.Platform.iOS.PlotViewRenderer | help-wanted high-priority iOS please-verify unconfirmed-bug Xamarin.Forms you-take-it | Having installed OxyPlot from the NuGet Package Manager like below:
`Install-Package OxyPlot.Xamarin.Forms -Version 2015.1.689-alpha -Pre`
An error occurs on iOS stating that:
`Default constructor not found for type OxyPlot.Xamarin.Forms.Platform.iOS.PlotViewRenderer`
This issue is not present on Android. Have not tested on WP8.
Information:
Xamarin.Forms: 1.3.1.6296 | 1.0 | Default constructor not found for type OxyPlot.Xamarin.Forms.Platform.iOS.PlotViewRenderer - Having installed OxyPlot from the NuGet Package Manager like below:
`Install-Package OxyPlot.Xamarin.Forms -Version 2015.1.689-alpha -Pre`
An error occurs on iOS stating that:
`Default constructor not found for type OxyPlot.Xamarin.Forms.Platform.iOS.PlotViewRenderer`
This issue is not present on Android. Have not tested on WP8.
Information:
Xamarin.Forms: 1.3.1.6296 | non_main | default constructor not found for type oxyplot xamarin forms platform ios plotviewrenderer having installed oxyplot from the nuget package manager like below install package oxyplot xamarin forms version alpha pre an error occurs on ios stating that default constructor not found for type oxyplot xamarin forms platform ios plotviewrenderer this issue is not present on android have not tested on information xamarin forms | 0 |
3,475 | 13,358,725,400 | IssuesEvent | 2020-08-31 12:13:34 | executablebooks/sphinx-autobuild | https://api.github.com/repos/executablebooks/sphinx-autobuild | closed | Maintainance status -- new release soon! | maintainance | Hey @GaretJax!
I see that this package hasn't been updated in a while. Is this still actively maintained? If not, would you be OK with transferring maintainership of this project to someone else (either an individual - I am happy to volunteer for that - or an org like jazzband or executablebooks)? | True | Maintainance status -- new release soon! - Hey @GaretJax!
I see that this package hasn't been updated in a while. Is this still actively maintained? If not, would you be OK with transferring maintainership of this project to someone else (either an individual - I am happy to volunteer for that - or an org like jazzband or executablebooks)? | main | maintainance status new release soon hey garetjax i see that this package hasn t been updated in a while is this still actively maintained if not would you be ok with transferring maintainership of this project to someone else either an individual i am happy to volunteer for that or an org like jazzband or executablebooks | 1 |
184,973 | 21,785,042,059 | IssuesEvent | 2022-05-14 02:15:53 | ignatandrei/WFH_Resources | https://api.github.com/repos/ignatandrei/WFH_Resources | closed | CVE-2020-8116 (High) detected in dot-prop-4.2.0.tgz - autoclosed | security vulnerability | ## CVE-2020-8116 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>dot-prop-4.2.0.tgz</b></p></summary>
<p>Get, set, or delete a property from a nested object using a dot path</p>
<p>Library home page: <a href="https://registry.npmjs.org/dot-prop/-/dot-prop-4.2.0.tgz">https://registry.npmjs.org/dot-prop/-/dot-prop-4.2.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/WFH_Resources/makeData/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/WFH_Resources/makeData/node_modules/dot-prop/package.json</p>
<p>
Dependency Hierarchy:
- nodemon-2.0.2.tgz (Root Library)
- update-notifier-2.5.0.tgz
- configstore-3.1.2.tgz
- :x: **dot-prop-4.2.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ignatandrei/WFH_Resources/commit/b91befe3eabd7911d7272583770f2d2bc222ddb5">b91befe3eabd7911d7272583770f2d2bc222ddb5</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Prototype pollution vulnerability in dot-prop npm package version 5.1.0 and earlier allows an attacker to add arbitrary properties to JavaScript language constructs such as objects.
<p>Publish Date: 2020-02-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8116>CVE-2020-8116</a></p>
</p>
</details>
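Prototype pollution is specific to JavaScript's `Object.prototype`, but the shape of the fix shipped in dot-prop 5.1.1 (refusing path segments such as `__proto__`) can be sketched in any language. The following Python mock of a dot-path setter is purely illustrative, not dot-prop's actual code:

```python
# Illustrative sketch only: a dot-path setter that, in the spirit of the
# dot-prop 5.1.1 fix, refuses the special keys an attacker would use to
# pollute Object.prototype in JavaScript.
FORBIDDEN_KEYS = {"__proto__", "prototype", "constructor"}

def set_path(obj: dict, path: str, value):
    """Set obj["a"]["b"]... for path "a.b...", ignoring unsafe segments."""
    keys = path.split(".")
    if any(k in FORBIDDEN_KEYS for k in keys):
        return obj  # refuse the whole path rather than write anything
    node = obj
    for key in keys[:-1]:
        node = node.setdefault(key, {})
    node[keys[-1]] = value
    return obj
```

With the guard in place, `set_path({}, "a.b.c", 1)` builds the nested structure as expected, while a path containing `__proto__` leaves the object untouched.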
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8116">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8116</a></p>
<p>Release Date: 2020-02-04</p>
<p>Fix Resolution: dot-prop - 5.1.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-8116 (High) detected in dot-prop-4.2.0.tgz - autoclosed - ## CVE-2020-8116 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>dot-prop-4.2.0.tgz</b></p></summary>
<p>Get, set, or delete a property from a nested object using a dot path</p>
<p>Library home page: <a href="https://registry.npmjs.org/dot-prop/-/dot-prop-4.2.0.tgz">https://registry.npmjs.org/dot-prop/-/dot-prop-4.2.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/WFH_Resources/makeData/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/WFH_Resources/makeData/node_modules/dot-prop/package.json</p>
<p>
Dependency Hierarchy:
- nodemon-2.0.2.tgz (Root Library)
- update-notifier-2.5.0.tgz
- configstore-3.1.2.tgz
- :x: **dot-prop-4.2.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ignatandrei/WFH_Resources/commit/b91befe3eabd7911d7272583770f2d2bc222ddb5">b91befe3eabd7911d7272583770f2d2bc222ddb5</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Prototype pollution vulnerability in dot-prop npm package version 5.1.0 and earlier allows an attacker to add arbitrary properties to JavaScript language constructs such as objects.
<p>Publish Date: 2020-02-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8116>CVE-2020-8116</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8116">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8116</a></p>
<p>Release Date: 2020-02-04</p>
<p>Fix Resolution: dot-prop - 5.1.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve high detected in dot prop tgz autoclosed cve high severity vulnerability vulnerable library dot prop tgz get set or delete a property from a nested object using a dot path library home page a href path to dependency file tmp ws scm wfh resources makedata package json path to vulnerable library tmp ws scm wfh resources makedata node modules dot prop package json dependency hierarchy nodemon tgz root library update notifier tgz configstore tgz x dot prop tgz vulnerable library found in head commit a href vulnerability details prototype pollution vulnerability in dot prop npm package version and earlier allows an attacker to add arbitrary properties to javascript language constructs such as objects publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution dot prop step up your open source security game with whitesource | 0 |
216 | 2,870,192,717 | IssuesEvent | 2015-06-06 22:59:13 | OpenLightingProject/ola | https://api.github.com/repos/OpenLightingProject/ola | opened | Leaks detected in OLA | bug Maintainability | https://buildbot.openlighting.org/builders/leakchecker-ola/builds/326
https://buildbot.openlighting.org/builders/leakchecker-ola/builds/326/steps/make%20check/logs/test-suite.log
FAIL: common/rdm/QueueingRDMControllerTester
============================================
WARNING: Perftools heap leak checker is active -- Performance may suffer
...common/rdm/RDMCommand.cpp:526: Source UIDs don't match
..common/rdm/QueueingRDMController.cpp:99: RDM Queue is full, dropping request
....
OK (9)
Leak check _main_ detected leaks of 419 bytes in 20 objects
The 14 largest leaks:
Using local file /var/lib/buildbot/slaves/ola/leakchecker-ola/build/common/rdm/.libs/lt-QueueingRDMControllerTester.
Leak of 44 bytes in 1 objects allocated from:
@ 80530f9 QueueingRDMControllerTest::testAckOverflows
Leak of 44 bytes in 1 objects allocated from:
@ 805315c QueueingRDMControllerTest::testAckOverflows
Leak of 44 bytes in 1 objects allocated from:
@ 8053472 QueueingRDMControllerTest::testAckOverflows
Leak of 44 bytes in 1 objects allocated from:
@ 80535e9 QueueingRDMControllerTest::testAckOverflows
Leak of 44 bytes in 1 objects allocated from:
@ 805364c QueueingRDMControllerTest::testAckOverflows
Leak of 40 bytes in 2 objects allocated from:
@ 401d47c3 ola::rdm::RDMReply::RDMReply
Leak of 34 bytes in 2 objects allocated from:
@ 401cab02 std::basic_string::_Rep::_S_create
Leak of 20 bytes in 1 objects allocated from:
@ 8053293 QueueingRDMControllerTest::testAckOverflows
Leak of 20 bytes in 1 objects allocated from:
@ 80532f5 QueueingRDMControllerTest::testAckOverflows
Leak of 20 bytes in 1 objects allocated from:
@ 80534d5 QueueingRDMControllerTest::testAckOverflows
Leak of 20 bytes in 1 objects allocated from:
@ 805352b QueueingRDMControllerTest::testAckOverflows
Leak of 20 bytes in 1 objects allocated from:
@ 80536b0 QueueingRDMControllerTest::testAckOverflows
Leak of 20 bytes in 1 objects allocated from:
@ 8053702 QueueingRDMControllerTest::testAckOverflows
Leak of 5 bytes in 5 objects allocated from:
@ 401c5492 ola::rdm::RDMCommand::SetParamData
FAIL: common/rdm/RDMCommandSerializerTester
===========================================
WARNING: Perftools heap leak checker is active -- Performance may suffer
........
OK (8)
Leak check _main_ detected leaks of 25 bytes in 1 objects
The 1 largest leaks:
Using local file /var/lib/buildbot/slaves/ola/leakchecker-ola/build/common/rdm/.libs/lt-RDMCommandSerializerTester.
Leak of 25 bytes in 1 objects allocated from:
@ 804c56f RDMCommandSerializerTest::testRequestOverrides
@ 40049f31 CppUnit::ProtectorChain::ProtectFunctor::operator | True | Leaks detected in OLA - https://buildbot.openlighting.org/builders/leakchecker-ola/builds/326
https://buildbot.openlighting.org/builders/leakchecker-ola/builds/326/steps/make%20check/logs/test-suite.log
FAIL: common/rdm/QueueingRDMControllerTester
============================================
WARNING: Perftools heap leak checker is active -- Performance may suffer
...common/rdm/RDMCommand.cpp:526: Source UIDs don't match
..common/rdm/QueueingRDMController.cpp:99: RDM Queue is full, dropping request
....
OK (9)
Leak check _main_ detected leaks of 419 bytes in 20 objects
The 14 largest leaks:
Using local file /var/lib/buildbot/slaves/ola/leakchecker-ola/build/common/rdm/.libs/lt-QueueingRDMControllerTester.
Leak of 44 bytes in 1 objects allocated from:
@ 80530f9 QueueingRDMControllerTest::testAckOverflows
Leak of 44 bytes in 1 objects allocated from:
@ 805315c QueueingRDMControllerTest::testAckOverflows
Leak of 44 bytes in 1 objects allocated from:
@ 8053472 QueueingRDMControllerTest::testAckOverflows
Leak of 44 bytes in 1 objects allocated from:
@ 80535e9 QueueingRDMControllerTest::testAckOverflows
Leak of 44 bytes in 1 objects allocated from:
@ 805364c QueueingRDMControllerTest::testAckOverflows
Leak of 40 bytes in 2 objects allocated from:
@ 401d47c3 ola::rdm::RDMReply::RDMReply
Leak of 34 bytes in 2 objects allocated from:
@ 401cab02 std::basic_string::_Rep::_S_create
Leak of 20 bytes in 1 objects allocated from:
@ 8053293 QueueingRDMControllerTest::testAckOverflows
Leak of 20 bytes in 1 objects allocated from:
@ 80532f5 QueueingRDMControllerTest::testAckOverflows
Leak of 20 bytes in 1 objects allocated from:
@ 80534d5 QueueingRDMControllerTest::testAckOverflows
Leak of 20 bytes in 1 objects allocated from:
@ 805352b QueueingRDMControllerTest::testAckOverflows
Leak of 20 bytes in 1 objects allocated from:
@ 80536b0 QueueingRDMControllerTest::testAckOverflows
Leak of 20 bytes in 1 objects allocated from:
@ 8053702 QueueingRDMControllerTest::testAckOverflows
Leak of 5 bytes in 5 objects allocated from:
@ 401c5492 ola::rdm::RDMCommand::SetParamData
FAIL: common/rdm/RDMCommandSerializerTester
===========================================
WARNING: Perftools heap leak checker is active -- Performance may suffer
........
OK (8)
Leak check _main_ detected leaks of 25 bytes in 1 objects
The 1 largest leaks:
Using local file /var/lib/buildbot/slaves/ola/leakchecker-ola/build/common/rdm/.libs/lt-RDMCommandSerializerTester.
Leak of 25 bytes in 1 objects allocated from:
@ 804c56f RDMCommandSerializerTest::testRequestOverrides
@ 40049f31 CppUnit::ProtectorChain::ProtectFunctor::operator | main | leaks detected in ola fail common rdm queueingrdmcontrollertester warning perftools heap leak checker is active performance may suffer common rdm rdmcommand cpp source uids don t match common rdm queueingrdmcontroller cpp rdm queue is full dropping request ok leak check main detected leaks of bytes in objects the largest leaks using local file var lib buildbot slaves ola leakchecker ola build common rdm libs lt queueingrdmcontrollertester leak of bytes in objects allocated from queueingrdmcontrollertest testackoverflows leak of bytes in objects allocated from queueingrdmcontrollertest testackoverflows leak of bytes in objects allocated from queueingrdmcontrollertest testackoverflows leak of bytes in objects allocated from queueingrdmcontrollertest testackoverflows leak of bytes in objects allocated from queueingrdmcontrollertest testackoverflows leak of bytes in objects allocated from ola rdm rdmreply rdmreply leak of bytes in objects allocated from std basic string rep s create leak of bytes in objects allocated from queueingrdmcontrollertest testackoverflows leak of bytes in objects allocated from queueingrdmcontrollertest testackoverflows leak of bytes in objects allocated from queueingrdmcontrollertest testackoverflows leak of bytes in objects allocated from queueingrdmcontrollertest testackoverflows leak of bytes in objects allocated from queueingrdmcontrollertest testackoverflows leak of bytes in objects allocated from queueingrdmcontrollertest testackoverflows leak of bytes in objects allocated from ola rdm rdmcommand setparamdata fail common rdm rdmcommandserializertester warning perftools heap leak checker is active performance may suffer ok leak check main detected leaks of bytes in objects the largest leaks using local file var lib buildbot slaves ola leakchecker ola build common rdm libs lt rdmcommandserializertester leak of bytes in objects allocated from rdmcommandserializertest 
testrequestoverrides cppunit protectorchain protectfunctor operator | 1 |
2,783 | 9,978,627,950 | IssuesEvent | 2019-07-09 20:23:42 | dgets/lasttime | https://api.github.com/repos/dgets/lasttime | opened | Change format of database consolidation records/results display | enhancement maintainability | Currently two separate lists are utilized, which makes pagination of them very difficult, if not impossible, so far as I know. In order to get around this problem, we can simply compile them into one listing, with a flag as to whether or not the record was inserted or deleted in front of each entry. Probably color coded in order to make it easier to read. It'd probably be best to have things done this way, anyway, as the inserted entry will be more easily comparable to the deleted/archived entries that it is applicable to.
Honestly, now that I think about how much easier it will be to compare the records of deleted vs. inserted, I think that this should probably have the highest priority en route to #131 right now, in order to increase the ease of debugging. | True | Change format of database consolidation records/results display - Currently two separate lists are utilized, which makes pagination of them very difficult, if not impossible, so far as I know. In order to get around this problem, we can simply compile them into one listing, with a flag as to whether or not the record was inserted or deleted in front of each entry. Probably color coded in order to make it easier to read. It'd probably be best to have things done this way, anyway, as the inserted entry will be more easily comparable to the deleted/archived entries that it is applicable to.
Honestly, now that I think about how much easier it will be to compare the records of deleted vs. inserted, I think that this should probably have the highest priority en route to #131 right now, in order to increase the ease of debugging. | main | change format of database consolidation records results display currently two separate lists are utilized which makes pagination of them very difficult if not impossible so far as i know in order to get around this problem we can simply compile them into one listing with a flag as to whether or not the record was inserted or deleted in front of each entry probably color coded in order to make it easier to read it d probably be best to have things done this way anyway as the inserted entry will be more easily comparable to the deleted archived entries that it is applicable to honestly now that i think about how much easier it will be to compare the records of deleted vs inserted i think that this should probably have the highest priority en route to right now in order to increase the ease of debugging | 1 |
773,419 | 27,157,231,784 | IssuesEvent | 2023-02-17 08:57:21 | bryntum/support | https://api.github.com/repos/bryntum/support | closed | Resource order not applied with `syncDataOnLoad` | bug resolved high-priority forum | [Forum post](https://forum.bryntum.com/viewtopic.php?f=44&t=23654&p=117147#p117147)
Using the [React state demo](https://bryntum.com/products/scheduler/examples/frameworks/react/javascript/react-state/build/), change the second dataset to match the first, but in a different order, like this
```
...
resources: [
[
{ id: 'r1', name: 'Mike' },
{ id: 'r2', name: 'Linda' },
{ id: 'r3', name: 'Chang' },
{ id: 'r4', name: 'Kate' },
{ id: 'r5', name: 'Lisa' },
{ id: 'r6', name: 'Steve' },
{ id: 'r7', name: 'Mark' },
{ id: 'r8', name: 'Madison' },
{ id: 'r9', name: 'Hitomi' },
{ id: 'r10', name: 'Dan' }
],
[
{ id: "r6", name: "Steve" },
{ id: "r7", name: "Mark" },
{ id: "r8", name: "Madison" },
{ id: "r9", name: "Hitomi" },
{ id: "r10", name: "Dan" },
{ id: "r1", name: "Mike" },
{ id: "r2", name: "Linda" },
{ id: "r3", name: "Chang" },
{ id: "r4", name: "Kate" },
{ id: "r5", name: "Lisa" },
],
...
```
The expected behavior is that the resources order should change, but as you'll see in the video, that's not the current behavior.
https://user-images.githubusercontent.com/16693227/214376051-c516448a-d2c5-4d95-80d9-cf8e84dce146.mp4
| 1.0 | Resource order not applied with `syncDataOnLoad` - [Forum post](https://forum.bryntum.com/viewtopic.php?f=44&t=23654&p=117147#p117147)
Using the [React state demo](https://bryntum.com/products/scheduler/examples/frameworks/react/javascript/react-state/build/), change the second dataset to match the first, but in a different order, like this
```
...
resources: [
[
{ id: 'r1', name: 'Mike' },
{ id: 'r2', name: 'Linda' },
{ id: 'r3', name: 'Chang' },
{ id: 'r4', name: 'Kate' },
{ id: 'r5', name: 'Lisa' },
{ id: 'r6', name: 'Steve' },
{ id: 'r7', name: 'Mark' },
{ id: 'r8', name: 'Madison' },
{ id: 'r9', name: 'Hitomi' },
{ id: 'r10', name: 'Dan' }
],
[
{ id: "r6", name: "Steve" },
{ id: "r7", name: "Mark" },
{ id: "r8", name: "Madison" },
{ id: "r9", name: "Hitomi" },
{ id: "r10", name: "Dan" },
{ id: "r1", name: "Mike" },
{ id: "r2", name: "Linda" },
{ id: "r3", name: "Chang" },
{ id: "r4", name: "Kate" },
{ id: "r5", name: "Lisa" },
],
...
```
The expected behavior is that the resources order should change, but as you'll see in the video, that's not the current behavior.
https://user-images.githubusercontent.com/16693227/214376051-c516448a-d2c5-4d95-80d9-cf8e84dce146.mp4
| non_main | resource order not applied with syncdataonload using the change the second dataset to match the first but in a different order like this resources id name mike id name linda id name chang id name kate id name lisa id name steve id name mark id name madison id name hitomi id name dan id name steve id name mark id name madison id name hitomi id name dan id name mike id name linda id name chang id name kate id name lisa the expected behavior is that the resources order should change but as you ll see in the video that s not the current behavior | 0 |
424,571 | 29,144,676,278 | IssuesEvent | 2023-05-18 01:02:58 | jrsteensen/OpenHornet | https://api.github.com/repos/jrsteensen/OpenHornet | opened | Generate MFG Files: OH5A2A1-1 - ASSY, HOOK LEVER & INDICATOR | Type: Documentation "Category: MCAD Priority: Normal" | Generate the manufacturing files for Generate MFG Files: OH5A2A1-1 - ASSY, HOOK LEVER & INDICATOR.
__Check off each item in issue as you complete it.__
### File generation
- [OH Wiki HOWTO Link](https://github.com/jrsteensen/OpenHornet/wiki/HOWTO:-Generating-Fusion360-Manufacturing-Files)
- [ ] Generate SVG files (if required.)
- [ ] Generate 3MF files (if required.)
- [ ] Generate STEP files (if required.)
- [ ] Copy the relevant decal PDFs from the art folder to the relevant manufacturing folder (if required.)
### Review your files
- [ ] Verify against drawing parts list that all the relevant manufacturing files have been created.
- [ ] Open each SVG in your browser and compare against part to ensure it appears the same and its filename is correct.
- [ ] Open each 3MF in a slicer of your choice and verify geometry matches F360 model and its filename is correct.
- [ ] Open each STEP in a STEP file viewer of your choice and verify geometry matches F360 model and its filename is correct.
### Submit your files
- [ ] Create a github PR against the beta 1 branch with the manufacturing files located in correct location of the release folder.
- [ ] Request a review of the PR.
#### Why a PR?
It gives you credit when I generate the changelog in the release, and (more importantly) adds traceability to the history of the issues. | 1.0 | Generate MFG Files: OH5A2A1-1 - ASSY, HOOK LEVER & INDICATOR - Generate the manufacturing files for Generate MFG Files: OH5A2A1-1 - ASSY, HOOK LEVER & INDICATOR.
__Check off each item in issue as you complete it.__
### File generation
- [OH Wiki HOWTO Link](https://github.com/jrsteensen/OpenHornet/wiki/HOWTO:-Generating-Fusion360-Manufacturing-Files)
- [ ] Generate SVG files (if required.)
- [ ] Generate 3MF files (if required.)
- [ ] Generate STEP files (if required.)
- [ ] Copy the relevant decal PDFs from the art folder to the relevant manufacturing folder (if required.)
### Review your files
- [ ] Verify against drawing parts list that all the relevant manufacturing files have been created.
- [ ] Open each SVG in your browser and compare against part to ensure it appears the same and its filename is correct.
- [ ] Open each 3MF in a slicer of your choice and verify geometry matches F360 model and its filename is correct.
- [ ] Open each STEP in a STEP file viewer of your choice and verify geometry matches F360 model and its filename is correct.
### Submit your files
- [ ] Create a github PR against the beta 1 branch with the manufacturing files located in correct location of the release folder.
- [ ] Request a review of the PR.
#### Why a PR?
It gives you credit when I generate the changelog in the release, and (more importantly) adds traceability to the history of the issues. | non_main | generate mfg files assy hook lever indicator generate the manufacturing files for generate mfg files assy hook lever amp indicator check off each item in issue as you complete it file generation generate svg files if required generate files if required generate step files if required copy the relevant decal pdfs from the art folder to the relevant manufacturing folder if required review your files verify against drawing parts list that all the relevant manufacturing files have been created open each svg in your browser and compare against part to ensure it appears the same and its filename is correct open each in a slicer of your choice and verify geometry matches model and its filename is correct open each step in a step file viewer of your choice and verify geometry matches model and its filename is correct submit your files create a github pr against the beta branch with the manufacturing files located in correct location of the release folder request a review of the pr why a pr it gives you credit when i generate the changelog in the release and more importantly adds traceability to the history of the issues | 0 |
541,554 | 15,829,344,756 | IssuesEvent | 2021-04-06 11:03:38 | airshipit/treasuremap | https://api.github.com/repos/airshipit/treasuremap | closed | VRRP Cleanup | 2-Manifests enhancement priority/critical size s | The issue raised on VRRP seems to come from the json6902patch merge construct; the following corrects it.
The attributes below were to be added to preKubeadmCommands as items of the list, but they have ended up concatenated into a single line, which is what leads to the issue.
- op: add
  path: "/spec/kubeadmConfigSpec/preKubeadmCommands/-"
  value:
    apt-get update && apt-get install -y bridge-utils keepalived ipset ipvsadm
    systemctl enable --now keepalived
The fix is to put an additional add op for the second line, like below.
- op: add
  path: "/spec/kubeadmConfigSpec/preKubeadmCommands/-"
  value:
    apt-get update && apt-get install -y bridge-utils keepalived ipset ipvsadm
- op: add
  path: "/spec/kubeadmConfigSpec/preKubeadmCommands/-"
  value:
    systemctl enable --now keepalived
Also remove the apt-get install/update, as it should be addressed in [image builder #10](https://github.com/airshipit/images/issues/10). That line needs to be removed as part of the fix.
| 1.0 | VRRP Cleanup - The issue raised on VRRP seems to come from the json6902patch merge construct; the following corrects it.
The attributes below were to be added to preKubeadmCommands as items of the list, but they have ended up concatenated into a single line, which is what leads to the issue.
- op: add
  path: "/spec/kubeadmConfigSpec/preKubeadmCommands/-"
  value:
    apt-get update && apt-get install -y bridge-utils keepalived ipset ipvsadm
    systemctl enable --now keepalived
The fix is to put an additional add op for the second line, like below.
- op: add
  path: "/spec/kubeadmConfigSpec/preKubeadmCommands/-"
  value:
    apt-get update && apt-get install -y bridge-utils keepalived ipset ipvsadm
- op: add
  path: "/spec/kubeadmConfigSpec/preKubeadmCommands/-"
  value:
    systemctl enable --now keepalived
Also remove the apt-get install/update, as it should be addressed in [image builder #10](https://github.com/airshipit/images/issues/10). That line needs to be removed as part of the fix.
| non_main | vrrp cleanup to correct the issue raised on vrrp seems to be from the merge construct the below attributes to be added to the prekubeadmcommands as a part of the list but it has ended up concatenating both the lines which is leading to the issue op add path spec kubeadmconfigspec prekubeadmcommands value apt get update apt get install y bridge utils keepalived ipset ipvsadm systemctl enable now keepalived the fix is to put and additional add op for the second line like below op add path spec kubeadmconfigspec prekubeadmcommands value apt get update apt get install y bridge utils keepalived ipset ipvsadm op add path spec kubeadmconfigspec prekubeadmcommands value systemctl enable now keepalived also remove the apt get install update as it should be addressed in this needs to be remove this line as part of the fix | 0 |
337,695 | 10,220,112,876 | IssuesEvent | 2019-08-15 20:25:32 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | m.ebay.com - see bug description | browser-firefox-mobile engine-gecko priority-critical | <!-- @browser: Firefox Mobile 68.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 7.0; Mobile; rv:68.0) Gecko/68.0 Firefox/68.0 -->
<!-- @reported_with: mobile-reporter -->
**URL**: https://m.ebay.com/orderDetails?itemId=303240098195&txnId=0
**Browser / Version**: Firefox Mobile 68.0
**Operating System**: Android 7.0
**Tested Another Browser**: Yes
**Problem type**: Something else
**Description**: ebay missing feedback option in mobile mode.
**Steps to Reproduce**:
Random problem on ebay that the (feedback menu selection item) button is missing to leave feedback. Usually in mobile mode (affects ff, ffb, and chrome, so site issue, not browser), have to switch to "classic" or desktop mode in ebay or browser to have the feedback button available. (Random because it typically only affects one or two sellers at a time and the ability of the buyer to leave feedback for those sellers.) (EG if you bought 10 items, one each from 10 different sellers, there would be about one in 10 that you can't leave feedback for in mobile mode.)
[](https://webcompat.com/uploads/2019/8/85ae8b33-610f-4936-b8c6-0f57cb0d6421.jpeg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>mixed active content blocked: false</li><li>image.mem.shared: true</li><li>buildID: 20190802162006</li><li>tracking content blocked: false</li><li>gfx.webrender.blob-images: true</li><li>hasTouchScreen: true</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>gfx.webrender.all: false</li><li>channel: beta</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | m.ebay.com - see bug description - <!-- @browser: Firefox Mobile 68.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 7.0; Mobile; rv:68.0) Gecko/68.0 Firefox/68.0 -->
<!-- @reported_with: mobile-reporter -->
**URL**: https://m.ebay.com/orderDetails?itemId=303240098195&txnId=0
**Browser / Version**: Firefox Mobile 68.0
**Operating System**: Android 7.0
**Tested Another Browser**: Yes
**Problem type**: Something else
**Description**: ebay missing feedback option in mobile mode.
**Steps to Reproduce**:
Random problem on ebay that the (feedback menu selection item) button is missing to leave feedback. Usually in mobile mode (affects ff, ffb, and chrome, so site issue, not browser), have to switch to "classic" or desktop mode in ebay or browser to have the feedback button available. (Random because it typically only affects one or two sellers at a time and the ability of the buyer to leave feedback for those sellers.) (EG if you bought 10 items, one each from 10 different sellers, there would be about one in 10 that you can't leave feedback for in mobile mode.)
[](https://webcompat.com/uploads/2019/8/85ae8b33-610f-4936-b8c6-0f57cb0d6421.jpeg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>mixed active content blocked: false</li><li>image.mem.shared: true</li><li>buildID: 20190802162006</li><li>tracking content blocked: false</li><li>gfx.webrender.blob-images: true</li><li>hasTouchScreen: true</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>gfx.webrender.all: false</li><li>channel: beta</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | non_main | m ebay com see bug description url browser version firefox mobile operating system android tested another browser yes problem type something else description ebay missing feedback option in mobile mode steps to reproduce random problem on ebay that the feedback menu selection item button is missing to leave feedback usually in mobile mode affects ff ffb and chrome so site issue not browser have to switch to classic or desktop mode in ebay or browser to have the feedback button available random because it typically only affects one or two sellers at a time and the ability of the buyer to leave feedback for those sellers eg if you bought items one each from different sellers there would be about one in that you cant leave feedback for in mobile mode browser configuration mixed active content blocked false image mem shared true buildid tracking content blocked false gfx webrender blob images true hastouchscreen true mixed passive content blocked false gfx webrender enabled false gfx webrender all false channel beta from with ❤️ | 0 |
122,418 | 26,126,189,129 | IssuesEvent | 2022-12-28 18:56:17 | shelcia/mocker | https://api.github.com/repos/shelcia/mocker | closed | Fix code scanning alert - Database query built from user-controlled sources | medium codepeak22 | <!-- Warning: The suggested title contains the alert rule name. This can expose security information. -->
Tracking issue for:
- [ ] https://github.com/shelcia/mocker/security/code-scanning/1
| 1.0 | Fix code scanning alert - Database query built from user-controlled sources - <!-- Warning: The suggested title contains the alert rule name. This can expose security information. -->
Tracking issue for:
- [ ] https://github.com/shelcia/mocker/security/code-scanning/1
| non_main | fix code scanning alert database query built from user controlled sources tracking issue for | 0 |
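The CodeQL alert tracked in the record above ("database query built from user-controlled sources") flags queries assembled by string concatenation. As a generic illustration of this vulnerability class and its standard fix (bound parameters) — the table, column, and input below are hypothetical, not taken from the mocker codebase:

```python
import sqlite3

# Generic illustration of the alert's vulnerability class; the schema and
# input are invented for the example, not drawn from the mocker project.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

user_input = "alice' OR '1'='1"  # attacker-controlled string

# Unsafe: concatenation lets the input become part of the SQL itself,
# so the injected OR clause matches every row in the table.
unsafe_sql = "SELECT count(*) FROM users WHERE name = '" + user_input + "'"
unsafe_count = conn.execute(unsafe_sql).fetchone()[0]

# Safe: a bound parameter is always treated as data, never as SQL.
safe_count = conn.execute(
    "SELECT count(*) FROM users WHERE name = ?", (user_input,)
).fetchone()[0]
```

With the single seeded row, the concatenated query counts every row while the parameterized query matches none, which is exactly the distinction the scanner is checking for.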
177,780 | 14,644,880,481 | IssuesEvent | 2020-12-26 03:19:54 | rrousselGit/freezed | https://api.github.com/repos/rrousselGit/freezed | opened | Create Utilities for Intellij | documentation needs triage | **Describe what scenario you think is uncovered by the existing examples/articles**
A clear and concise description of the problem that you want to be explained.
I added live templates for IntelliJ so you can create boilerplate code faster
https://github.com/Tinhorn/freezed_intellij_live_templates
**Describe why existing examples/articles do not cover this case**
Explain which examples/articles you have seen before making this request, and
why they did not help you with your problem.
The utility section only have something for VSCode, not IntelliJ
**Additional context**
Add any other context or screenshots about the documentation request here.
I have a repo with instructions here:
https://github.com/Tinhorn/freezed_intellij_live_templates
Let me know what I can do to make it better.
Thanks for the package.
It has been a great help.
Merry Christmas | 1.0 | Create Utilities for Intellij - **Describe what scenario you think is uncovered by the existing examples/articles**
A clear and concise description of the problem that you want to be explained.
I added live templates for IntelliJ so you can create boilerplate code faster
https://github.com/Tinhorn/freezed_intellij_live_templates
**Describe why existing examples/articles do not cover this case**
Explain which examples/articles you have seen before making this request, and
why they did not help you with your problem.
The utility section only have something for VSCode, not IntelliJ
**Additional context**
Add any other context or screenshots about the documentation request here.
I have a repo with instructions here:
https://github.com/Tinhorn/freezed_intellij_live_templates
Let me know what I can do to make it better.
Thanks for the package.
It has been a great help.
Merry Christmas | non_main | create utilities for intellij describe what scenario you think is uncovered by the existing examples articles a clear and concise description of the problem that you want to be explained i added live templates for intellij so you can create boilerplate code faster describe why existing examples articles do not cover this case explain which examples articles you have seen before making this request and why they did not help you with your problem the utility section only have something for vscode not intellij additional context add any other context or screenshots about the documentation request here i have a repo with instructions here let me know what i can do to make it better thanks for the package it has been a great help merry christmas | 0 |
52,418 | 27,555,327,776 | IssuesEvent | 2023-03-07 17:29:15 | hyperledger/besu | https://api.github.com/repos/hyperledger/besu | closed | Shape RPC traffic to prevent DoS from heavy load | performance icebox TeamChupa | ### Description
As a person running a Besu node, I want it to rate limit RPC requests when there are lots of pending requests that have not yet completed.
Today it's fairly easy for me to run operations over RPC that cause a node to become bogged down and stop tracking the network head, or worse, to throw OOM exceptions and hang. In enterprise environments it's likely that nodes will be shared across many services/users. In this sort of environment, we don't want a surge in RPC traffic from one user to crash the node or deny service to other users who are keeping their traffic within expected limits.
### Acceptance Criteria
* RPC traffic can be submitted to Besu at any rate and Besu does not crash and does not stop tracking the head of the network.
### Steps to Reproduce (Bug)
1. Spam Besu with a ton of Transactions
or
2. Spam Besu with a ton of transaction trace requests
or
3. Spam Besu with a ton of `eth_call` requests
(there likely are plenty of other heavy RPC calls that will either cause crashes or cause the node to fall behind)
**Expected behavior:**
Node returns a rate limiting error response when it is too busy to process incoming RPC messages, or applies back-pressure on the request rate through some other means.
**Actual behavior:**
Node keeps trying to process RPC requests past the point where it can no longer maintain network state, or until it runs out of memory and hangs without crashing.
**Frequency:**
I expect in a large scale production environment this will be a common issue.
Recently I've seen this happen on our test nodes when a misconfiguration caused RPC traffic to be sent to a node that wasn't spec'd to handle the load that was being put on it.
I've also caused plenty of nodes to hang while running fixed-rate performance tests with Caliper.
### Versions (Add all that apply)
* Software version: 1.4.X (1.4.5 is current release)
| True | Shape RPC traffic to prevent DoS from heavy load - ### Description
As a person running a Besu node, I want it to rate limit RPC requests when there are lots of pending requests that have not yet completed.
Today it's fairly easy for me to run operations over RPC that cause a node to become bogged down and stop tracking the network head, or worse, to throw OOM exceptions and hang. In enterprise environments it's likely that nodes will be shared across many services/users. In this sort of environment, we don't want a surge in RPC traffic from one user to crash the node or deny service to other users who are keeping their traffic within expected limits.
### Acceptance Criteria
* RPC traffic can be submitted to Besu at any rate and Besu does not crash and does not stop tracking the head of the network.
### Steps to Reproduce (Bug)
1. Spam Besu with a ton of Transactions
or
2. Spam Besu with a ton of transaction trace requests
or
3. Spam Besu with a ton of `eth_call` requests
(there likely are plenty of other heavy RPC calls that will either cause crashes or cause the node to fall behind)
**Expected behavior:**
Node returns a rate limiting error response when it is too busy to process incoming RPC messages, or applies back-pressure on the request rate through some other means.
**Actual behavior:**
Node keeps trying to process RPC requests past the point where it can no longer maintain network state, or until it runs out of memory and hangs without crashing.
**Frequency:**
I expect in a large scale production environment this will be a common issue.
Recently I've seen this happen on our test nodes when a misconfiguration caused RPC traffic to be sent to a node that wasn't spec'd to handle the load that was being put on it.
I've also caused plenty of nodes to hang while running fixed-rate performance tests with Caliper.
### Versions (Add all that apply)
* Software version: 1.4.X (1.4.5 is current release)
| non_main | shape rpc traffic to prevent dos from heavy load description as an person running a besu node i want it to rate limit rpc requests when there are lots of pending requests that have not yet completed today it s fairly easy for me to run operations over rpc that cause a node to become bogged down and stop tracking the network head or worse to throw oom exceptions and hang in enterprise environments it s likely that nodes will be shared across many services users in this sort of environment we don t want a surge in rpc traffic from one user to crash the node or deny service to other users who are keeping their traffic within expected limits acceptance criteria rpc traffic can be submitted to besu at any rate and besu does not crash and does not stop tracking the head of the network steps to reproduce bug spam besu with a ton of transactions or spam besu with a ton of transaction trace requests or spam basu with a ton of eth call requests there likely are plenty of other heavy rpc calls that will either cause crashes or cause the node to fall behind expected behavior node returns a rate limiting error response when it is too busy to process incoming rpc messages or applies back pressure on the request rte through some other means actual behavior node keeps trying to process rpc requests past the point where it can no longer maintain network state or until it runs out of memory and hangs without crashing frequency i expect in a large scale production environment this will be a common issue recently i ve seen this happen on our test nodes when a misconfiguration caused rpc traffic to be sent to a node that wasn t spec d to handle the load that was being put on it i ve also caused plenty of nodes to hang while running fixed rate performance tests with caliper versions add all that apply software version x is current release | 0 |
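The behavior the record above asks for — reject or delay RPC requests once the node is saturated, instead of queueing them until it runs out of memory — is commonly built on a token bucket. The sketch below is a generic, hypothetical illustration of that pattern; it is not Besu's actual rate-limiting code, and the class and method names are invented:

```python
import time

class TokenBucket:
    """Hypothetical token-bucket limiter: admit up to `rate` requests per
    second, with bursts up to `capacity`. Illustrative only, not Besu code."""

    def __init__(self, rate, capacity, clock=time.monotonic):
        self.rate = float(rate)          # tokens added per second
        self.capacity = float(capacity)  # maximum burst size
        self.tokens = float(capacity)    # start with a full bucket
        self.clock = clock
        self.last = clock()

    def allow(self):
        now = self.clock()
        # Refill tokens for the elapsed time, capped at the burst capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True   # admit the RPC request
        return False      # answer with a "node busy" error instead of queueing
```

An RPC front end would call `allow()` for each incoming request and return a rate-limit error response when it yields `False`, which is one way to keep a flood of `eth_call` or trace requests from exhausting memory while the node keeps tracking the chain head.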
1,309 | 5,557,765,703 | IssuesEvent | 2017-03-24 13:04:22 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | Add a cluster filter to the vmware_portgroup module. | affects_2.3 cloud feature_idea vmware waiting_on_maintainer | ##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
vmware_portgroup
##### SUMMARY
In certain situations, adding a portgroup to a vCenter may only be desirable upon a single cluster. For example if other clusters are also present in the vCenter that are not identical in hardware and won't be compatible with the desired changes.
I imagine it would be another option to the module, called 'cluster' or something similar.
```
- vmware_portgroup:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
cluster: "DC1"
vlan_id: "{{ vlan_id }}"
switch_name: "vSwitch3"
validate_certs: False
``` | True | Add a cluster filter to the vmware_portgroup module. - ##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
vmware_portgroup
##### SUMMARY
In certain situations, adding a portgroup to a vCenter may only be desirable upon a single cluster. For example if other clusters are also present in the vCenter that are not identical in hardware and won't be compatible with the desired changes.
I imagine it would be another option to the module, called 'cluster' or something similar.
```
- vmware_portgroup:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
cluster: "DC1"
vlan_id: "{{ vlan_id }}"
switch_name: "vSwitch3"
validate_certs: False
``` | main | add a cluster filter to the vmware portgroup module issue type feature idea component name vmware portgroup summary in certain situations adding a portgroup to a vcenter may only be desirable upon a single cluster for example if other clusters are also present in the vcenter that are not identical in hardware and won t be compatible with the desired changes i imagine it would be another option to the module called cluster or something alike vmware portgroup hostname vcenter hostname username vcenter username password vcenter password cluster vlan id vlan id switch name validate certs false | 1 |
25,482 | 3,933,049,766 | IssuesEvent | 2016-04-25 17:48:10 | OSTraining/OSCampus | https://api.github.com/repos/OSTraining/OSCampus | closed | Add published date to class display | Design | @billtomczak It seems to be based on when the class was first created rather than the published date:
https://www.ostraining.com/new-classes/
@htmgarcia Could we add the published date to this view? | 1.0 | Add published date to class display - @billtomczak It seems to be based on when the class was first created rather than the published date:
https://www.ostraining.com/new-classes/
@htmgarcia Could we add the published date to this view? | non_main | add published date to class display billtomczak it seems to be based on when the class was first created rather than the published date htmgarcia could we add the published date to this view | 0 |
200,901 | 22,916,018,245 | IssuesEvent | 2022-07-17 01:10:02 | cfscode/resque-web | https://api.github.com/repos/cfscode/resque-web | opened | CVE-2022-32224 (High) detected in activerecord-5.0.2.gem | security vulnerability | ## CVE-2022-32224 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>activerecord-5.0.2.gem</b></p></summary>
<p>Databases on Rails. Build a persistent domain model by mapping database tables to Ruby classes. Strong conventions for associations, validations, aggregations, migrations, and testing come baked-in.</p>
<p>Library home page: <a href="https://rubygems.org/gems/activerecord-5.0.2.gem">https://rubygems.org/gems/activerecord-5.0.2.gem</a></p>
<p>
Dependency Hierarchy:
- minitest-spec-rails-5.4.0.gem (Root Library)
- rails-5.0.2.gem
- :x: **activerecord-5.0.2.gem** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
RCE bug with Serialized Columns in Active Record before 5.2.8.1, 6.0.0 and before 6.0.5.1, 6.1.0 and before 6.1.6.1, 7.0.0 and before 7.0.3.
When serialized columns that use YAML (the default) are deserialized, Rails uses YAML.unsafe_load to convert the YAML data in to Ruby objects. If an attacker can manipulate data in the database (via means like SQL injection), then it may be possible for the attacker to escalate to an RCE.
<p>Publish Date: 2022-06-02
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-32224>CVE-2022-32224</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-3hhc-qp5v-9p2j">https://github.com/advisories/GHSA-3hhc-qp5v-9p2j</a></p>
<p>Release Date: 2022-06-02</p>
<p>Fix Resolution: activerecord - 5.2.8.1,6.0.5.1,6.1.6.1,7.0.3.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-32224 (High) detected in activerecord-5.0.2.gem - ## CVE-2022-32224 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>activerecord-5.0.2.gem</b></p></summary>
<p>Databases on Rails. Build a persistent domain model by mapping database tables to Ruby classes. Strong conventions for associations, validations, aggregations, migrations, and testing come baked-in.</p>
<p>Library home page: <a href="https://rubygems.org/gems/activerecord-5.0.2.gem">https://rubygems.org/gems/activerecord-5.0.2.gem</a></p>
<p>
Dependency Hierarchy:
- minitest-spec-rails-5.4.0.gem (Root Library)
- rails-5.0.2.gem
- :x: **activerecord-5.0.2.gem** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
RCE bug with Serialized Columns in Active Record before 5.2.8.1, 6.0.0 and before 6.0.5.1, 6.1.0 and before 6.1.6.1, 7.0.0 and before 7.0.3.
When serialized columns that use YAML (the default) are deserialized, Rails uses YAML.unsafe_load to convert the YAML data in to Ruby objects. If an attacker can manipulate data in the database (via means like SQL injection), then it may be possible for the attacker to escalate to an RCE.
<p>Publish Date: 2022-06-02
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-32224>CVE-2022-32224</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-3hhc-qp5v-9p2j">https://github.com/advisories/GHSA-3hhc-qp5v-9p2j</a></p>
<p>Release Date: 2022-06-02</p>
<p>Fix Resolution: activerecord - 5.2.8.1,6.0.5.1,6.1.6.1,7.0.3.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve high detected in activerecord gem cve high severity vulnerability vulnerable library activerecord gem databases on rails build a persistent domain model by mapping database tables to ruby classes strong conventions for associations validations aggregations migrations and testing come baked in library home page a href dependency hierarchy minitest spec rails gem root library rails gem x activerecord gem vulnerable library found in base branch master vulnerability details rce bug with serialized columns in active record before and before and before and before when serialized columns that use yaml the default are deserialized rails uses yaml unsafe load to convert the yaml data in to ruby objects if an attacker can manipulate data in the database via means like sql injection then it may be possible for the attacker to escalate to an rce publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution activerecord step up your open source security game with mend | 0 |
79,886 | 9,962,294,396 | IssuesEvent | 2019-07-07 13:24:06 | mhmdtshref/portfolio | https://api.github.com/repos/mhmdtshref/portfolio | closed | Refactor/frontend | design refactor technical | The frontend should use these routes:
- [ ] `/api/project`
- [ ] `/api/service`
- [ ] `/api/technology`
- [ ] `/api/language`
Should be used to create cards.. | 1.0 | Refactor/frontend - The frontend should use these routes:
- [ ] `/api/project`
- [ ] `/api/service`
- [ ] `/api/technology`
- [ ] `/api/language`
Should be used to create cards.. | non_main | refactor frontend the frontend should use these routes api project api service api technology api language should be used to create cards | 0 |
285,774 | 31,155,567,841 | IssuesEvent | 2023-08-16 12:56:11 | nidhi7598/linux-4.1.15_CVE-2018-5873 | https://api.github.com/repos/nidhi7598/linux-4.1.15_CVE-2018-5873 | opened | CVE-2023-1076 (Medium) detected in linuxlinux-4.1.52 | Mend: dependency security vulnerability | ## CVE-2023-1076 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.1.52</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/net/tun.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/net/tun.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
A flaw was found in the Linux Kernel. The tun/tap sockets have their socket UID hardcoded to 0 due to a type confusion in their initialization function. While it will be often correct, as tuntap devices require CAP_NET_ADMIN, it may not always be the case, e.g., a non-root user only having that capability. This would make tun/tap sockets being incorrectly treated in filtering/routing decisions, possibly bypassing network filters.
<p>Publish Date: 2023-03-27
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-1076>CVE-2023-1076</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2023-1076">https://www.linuxkernelcves.com/cves/CVE-2023-1076</a></p>
<p>Release Date: 2023-03-27</p>
<p>Fix Resolution: v5.4.235,v5.10.173,v5.15.99,v6.1.16,v6.2.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2023-1076 (Medium) detected in linuxlinux-4.1.52 - ## CVE-2023-1076 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.1.52</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/net/tun.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/net/tun.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
A flaw was found in the Linux Kernel. The tun/tap sockets have their socket UID hardcoded to 0 due to a type confusion in their initialization function. While it will be often correct, as tuntap devices require CAP_NET_ADMIN, it may not always be the case, e.g., a non-root user only having that capability. This would make tun/tap sockets being incorrectly treated in filtering/routing decisions, possibly bypassing network filters.
<p>Publish Date: 2023-03-27
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-1076>CVE-2023-1076</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2023-1076">https://www.linuxkernelcves.com/cves/CVE-2023-1076</a></p>
<p>Release Date: 2023-03-27</p>
<p>Fix Resolution: v5.4.235,v5.10.173,v5.15.99,v6.1.16,v6.2.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve medium detected in linuxlinux cve medium severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in base branch master vulnerable source files drivers net tun c drivers net tun c vulnerability details a flaw was found in the linux kernel the tun tap sockets have their socket uid hardcoded to due to a type confusion in their initialization function while it will be often correct as tuntap devices require cap net admin it may not always be the case e g a non root user only having that capability this would make tun tap sockets being incorrectly treated in filtering routing decisions possibly bypassing network filters publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend | 0 |
747,421 | 26,083,414,216 | IssuesEvent | 2022-12-25 18:46:39 | RoboJackets/urc-drone | https://api.github.com/repos/RoboJackets/urc-drone | closed | Research Mavlink-Compatible Flight Controller | area ➤ misc priority ➤ high | It appears that Mavlink systems are the most commonly used flight controllers for research drones and have multiple projects using them integrated with ROS for drone flight commands. Note that DJI does not produce Mavlink-compatible drones and as such they cannot be programmed with standard ROS libraries.
See the following:
https://docs.px4.io/main/en/ros/ros2.html
https://ardupilot.org/dev/docs/ros.html
https://github.com/ctu-mrs/mrs_uav_system
https://github.com/mavlink/mavros | 1.0 | Research Mavlink-Compatible Flight Controller - It appears that Mavlink systems are the most commonly used flight controllers for research drones and have multiple projects using them integrated with ROS for drone flight commands. Note that DJI does not produce Mavlink-compatible drones and as such they cannot be programmed with standard ROS libraries.
See the following:
https://docs.px4.io/main/en/ros/ros2.html
https://ardupilot.org/dev/docs/ros.html
https://github.com/ctu-mrs/mrs_uav_system
https://github.com/mavlink/mavros | non_main | research mavlink compatabile flight controller it appears that mavlink systems are the most commonly used flight controllers for research drones and have multiple projects using them integrated with ros for drone flight commands note that dji does not produce mavlink compatible drones and as such cannot be programmed with standard ros libraries see the following | 0 |
214,012 | 16,544,565,104 | IssuesEvent | 2021-05-27 21:41:53 | openzfs/zfs | https://api.github.com/repos/openzfs/zfs | closed | Question: is there a way to delete the snapshots and keep them too? | Status: Stale Type: Documentation | This is not system-specific: we have several clients with small-ish 1-2TB "data" drives, and we have a storage system with lots of room. What I would like to do is basically incremental backups: snapshot the clients and transfer incremental snapshots to the big box. This is working fine except
* I need to delete old snapshots on the clients to keep disk usage there down
* and I want to keep them on the big box because that's the point.
-- And that's the part I can't get to work: `send -R`/`receive -F` deletes old snapshots on the receiving end. Without them I get
> cannot receive incremental stream: destination has been modified since most recent snapshot
Even `receive -u` does that when the storage system flips from one controller to another (it's a dual-headed setup) -- and `-u` is already suboptimal as you can't browse snapshots, you have to pick one and clone it to browse the files.
So the question is, is there a way around `destination has been modified since most recent snapshot` that I am not aware of? Or is this just not doable?
TIA
| 1.0 | Question: is there a way to delete the snapshots and keep them too? - This is not system-specific: we have several clients with small-ish 1-2TB "data" drives, and we have a storage system with lots of room. What I would like to do is basically incremental backups: snapshot the clients and transfer incremental snapshots to the big box. This is working fine except
* I need to delete old snapshots on the clients to keep disk usage there down
* and I want to keep them on the big box because that's the point.
-- And that's the part I can't get to work: `send -R`/`receive -F` deletes old snapshots on the receiving end. Without them I get
> cannot receive incremental stream: destination has been modified since most recent snapshot
Even `receive -u` does that when the storage system flips from one controller to another (it's a dual-headed setup) -- and `-u` is already suboptimal as you can't browse snapshots, you have to pick one and clone it to browse the files.
So the question is, is there a way around `destination has been modified since most recent snapshot` that I am not aware of? Or is this just not doable?
TIA
| non_main | question is there a way to delete the snapshots and keep them too this is not system specific we have several clients with small ish data drives and we have a storage system with lots of room what i would like to do is basically incremental backups snapshot the clients and transfer incremental snapshots to the big box this is working fine except i need to delete old snapshots on the clients to keep disk usage there down and i want to keep them on the big box because that s the point and that s the part i can t get to work send r receive f deletes old snapshots on the receiving end without them i get cannot receive incremental stream destination has been modified since most recent snapshot even receive u does that when the storage system flips from one controller to another it s a dual headed setup and u is already suboptimal as you can t browse snapshots you have to pick one and clone it to browse the files so the question is is there a way around destination has been modified since most recent snapshot that i am not aware of or is this just not doable tia | 0 |
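The bookkeeping behind the scheme in the openzfs question above can be sketched in Python. This is an illustrative helper under stated assumptions (snapshot names listed oldest-first; it is not ZFS code): the client can prune old snapshots only while the newest snapshot shared with the backup box survives, because `zfs send -i` needs that snapshot as the incremental base.

```python
def latest_common_snapshot(client_snaps, server_snaps):
    """Newest snapshot present on both sides (lists ordered oldest-first)."""
    common = set(client_snaps) & set(server_snaps)
    for name in reversed(client_snaps):
        if name in common:
            return name
    return None

def prunable_on_client(client_snaps, server_snaps, keep=2):
    """Client snapshots that can be destroyed without breaking the next
    incremental send: keep the last `keep` snapshots plus the incremental
    base shared with the backup box."""
    base = latest_common_snapshot(client_snaps, server_snaps)
    protected = set(client_snaps[-keep:]) | ({base} if base else set())
    return [s for s in client_snaps if s not in protected]

print(prunable_on_client(["a", "b", "c", "d"], ["a", "b", "c"], keep=1))
# prints ['a', 'b']
```

Anything returned by the helper could then be fed to `zfs destroy` on the client.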
1,476 | 6,402,431,592 | IssuesEvent | 2017-08-06 09:23:43 | openwrt/packages | https://api.github.com/repos/openwrt/packages | closed | the lua-cjson crash on my ar71xx board | waiting for maintainer | when I use the latest lua-cjson_2.1.0-2_ar71xx.ipk from snapshots, it crashes like below.
> require'cjson'
> Segmentation fault
---
but the original lua-cjson_2.1.0-1_ar71xx.ipk from CC is OK
| True | the lua-cjson crash on my ar71xx board - when I use the latest lua-cjson_2.1.0-2_ar71xx.ipk from snapshots, it crashes like below.
> require'cjson'
> Segmentation fault
---
but the original lua-cjson_2.1.0-1_ar71xx.ipk from CC is OK
| main | the lua cjson crash on my board when i use latest lua cjson ipk from snapshort it crush like below require cjson segmentation fault but the original lua cjson ipk from cc is ok | 1 |
441,139 | 12,708,661,107 | IssuesEvent | 2020-06-23 10:56:48 | Tangerine-Community/Tangerine | https://api.github.com/repos/Tangerine-Community/Tangerine | closed | Editor User exports CSV file that contains the group and form name | Education Project Priority type: user story workflow: review | The filename of the csv export should be GROUP_NAME-FORM_NAME where each space in the group name or the form name is replaced with an underscore. | 1.0 | Editor User exports CSV file that contains the group and form name - The filename of the csv export should be GROUP_NAME-FORM_NAME where each space in the group name or the form name is replaced with an underscore. | non_main | editor user exports csv file that contains the group and form name the filename of the csv export should be group name form name where each space in the group name or the form name is replaced with an underscore | 0 |
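The naming rule in the user story above is simple enough to sketch (illustrative Python; the function name and the `.csv` extension are assumptions, not Tangerine's actual code):

```python
def export_filename(group_name: str, form_name: str) -> str:
    """GROUP_NAME-FORM_NAME with every space replaced by an underscore."""
    group = group_name.replace(" ", "_")
    form = form_name.replace(" ", "_")
    return f"{group}-{form}.csv"

print(export_filename("My Group", "Reading Assessment"))
# prints My_Group-Reading_Assessment.csv
```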
1,447 | 6,287,525,610 | IssuesEvent | 2017-07-19 15:07:42 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | os_server module not updating metadata on a running instance | affects_2.2 bug_report cloud openstack waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
os_server module
##### ANSIBLE VERSION
```
ansible 2.2.0.0 (stable-2.2 c5d4134f37) last updated 2016/10/27 16:10:22 (GMT +100)
lib/ansible/modules/core: (detached HEAD 0881ba15c6) last updated 2016/10/27 16:10:37 (GMT +100)
lib/ansible/modules/extras: (detached HEAD 47f4dd44f4) last updated 2016/10/27 16:10:37 (GMT +100)
config file = /home/luisg/provision/boxes/test/openstack/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
##### OS / ENVIRONMENT
Running ansible on Debian.
Targeting OpenStack instances with CentOS 7.2
##### SUMMARY
Module `os_server` does not add meta key/value pairs (using option `meta`) to a running OS instance. Using the same options while creating the OS instance in the first place does add the meta key/value pairs.
##### STEPS TO REPRODUCE
This is an example playbook (it assumes your openstack env is correctly set and the same playbook has been run before without the `meta` option):
```
---
- hosts: localhost
tasks:
- name: Create instance
os_server:
name: instance
image: some-image
state: present
meta:
groups: 'some-group'
register: instance
- debug: var=instance
```
##### EXPECTED RESULTS
The above playbook should return the metadata argument within the debug output (abbreviated here):
```
TASK [debug] *******************************************************************
ok: [localhost] => {
"instance": {
"changed": true,
"openstack": {
"metadata": {
"groups": "some-group"
},
},
}
}
```
##### ACTUAL RESULTS
In contrast, the following is obtained, where metadata is returned empty:
```
TASK [debug] *******************************************************************
ok: [localhost] => {
"instance": {
"changed": true,
"openstack": {
"metadata": {},
},
}
}
```
Note that the task reports `changed`, but nothing happens to the metadata, nor to any other field of ansible-playbook's output (I diffed two consecutive runs).
| True | os_server module not updating metadata on a running instance - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
os_server module
##### ANSIBLE VERSION
```
ansible 2.2.0.0 (stable-2.2 c5d4134f37) last updated 2016/10/27 16:10:22 (GMT +100)
lib/ansible/modules/core: (detached HEAD 0881ba15c6) last updated 2016/10/27 16:10:37 (GMT +100)
lib/ansible/modules/extras: (detached HEAD 47f4dd44f4) last updated 2016/10/27 16:10:37 (GMT +100)
config file = /home/luisg/provision/boxes/test/openstack/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
##### OS / ENVIRONMENT
Running ansible on Debian.
Targeting OpenStack instances with CentOS 7.2
##### SUMMARY
Module `os_server` does not add meta key/value pairs (using option `meta`) to a running OS instance. Using the same options while creating the OS instance in the first place does add the meta key/value pairs.
##### STEPS TO REPRODUCE
This is an example playbook (it assumes your openstack env is correctly set and the same playbook has been run before without the `meta` option):
```
---
- hosts: localhost
tasks:
- name: Create instance
os_server:
name: instance
image: some-image
state: present
meta:
groups: 'some-group'
register: instance
- debug: var=instance
```
##### EXPECTED RESULTS
The above playbook should return the metadata argument within the debug output (abbreviated here):
```
TASK [debug] *******************************************************************
ok: [localhost] => {
"instance": {
"changed": true,
"openstack": {
"metadata": {
"groups": "some-group"
},
},
}
}
```
##### ACTUAL RESULTS
In contrast, the following is obtained, where metadata is returned empty:
```
TASK [debug] *******************************************************************
ok: [localhost] => {
"instance": {
"changed": true,
"openstack": {
"metadata": {},
},
}
}
```
Note that the task reports `changed`, but nothing happens to the metadata, nor to any other field of ansible-playbook's output (I diffed two consecutive runs).
| main | os server module not updating metadata on a running instance issue type bug report component name os server module ansible version ansible stable last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt config file home luisg provision boxes test openstack ansible cfg configured module search path default w o overrides configuration os environment running ansible on debian targeting openstack instances with centos summary module os server does not add meta key value pairs using option meta to a running os instance using the same options while creating the os instance in the first place does add the meta key value pairs steps to reproduce this is an example playbook it assumes your openstack env is correctly set and the same playbook has been run before without the meta option hosts localhost tasks name create instance os server name instance image some image state present meta groups some group register instance debug var instance expected results the above playbook should return the metadata argument within the debug output abbreviated here task ok instance changed true openstack metadata groups some group actual results in contrast the following is obtained where metadata is returned empty task ok instance changed true openstack metadata note the task notifies it changed but nothing happens to the metadata nor to any other result provided by ansible playbook s output just did a diff of two consecutive runs | 1 |
5,745 | 30,386,081,784 | IssuesEvent | 2023-07-13 00:55:11 | MozillaFoundation/foundation.mozilla.org | https://api.github.com/repos/MozillaFoundation/foundation.mozilla.org | closed | Improve CSS build in development | engineering frontend maintain | It takes pretty long after starting the stack with `docker compose up` until the styles are working properly. There might be a bunch of reasons, but I think we should definitely not run css optimization while in local dev. That's something for deployments, but not something we should be doing for local development.
```
...
foundationmozillaorg-watch-static-files-1 | > @ build:sass /app
foundationmozillaorg-watch-static-files-1 | > run-s build:sass:clean && run-p build:sass:main build:sass:bg && run-s optimize:css
...
``` | True | Improve CSS build in development - It takes pretty long after starting the stack with `docker compose up` until the styles are working properly. There might be a bunch of reasons, but I think we should definitely not run css optimization while in local dev. That's something for deployments, but not something we should be doing for local development.
```
...
foundationmozillaorg-watch-static-files-1 | > @ build:sass /app
foundationmozillaorg-watch-static-files-1 | > run-s build:sass:clean && run-p build:sass:main build:sass:bg && run-s optimize:css
...
``` | main | improve css build in development it takes pretty long after starting the stack with docker compose up until the styles are working properly there might be a bunch of reasons but i think we should definitely not run css optimization while in local dev that s something for deployments but not something we should be doing for local development foundationmozillaorg watch static files build sass app foundationmozillaorg watch static files run s build sass clean run p build sass main build sass bg run s optimize css | 1 |
427 | 3,516,618,137 | IssuesEvent | 2016-01-12 00:48:52 | Homebrew/homebrew | https://api.github.com/repos/Homebrew/homebrew | closed | conflicts_with doesn't work with references to untapped taps | maintainer feedback | We recently moved a tap from one repository to another, and renamed it as well. We received a PR to add a conflicts_with line for the old tap, which seemed legitimate until we received a complaint that someone who had just tapped the new tap wasn't able to install the formula: https://github.com/cloudfoundry/cli/issues/726
`conflicts_with` works if you have the original tap tapped, but not otherwise. | True | conflicts_with doesn't work with references to untapped taps - We recently moved a tap from one repository to another, and renamed it as well. We received a PR to add a conflicts_with line for the old tap, which seemed legitimate until we received a complaint that someone who had just tapped the new tap wasn't able to install the formula: https://github.com/cloudfoundry/cli/issues/726
`conflicts_with` works if you have the original tap tapped, but not otherwise. | main | conflicts with doesn t work with references to untapped taps we recently moved a tap from one repository to another and renamed it as well we received a pr to add a conflicts with line for the old tap which seemed legitimate until we received a complaint that someone who had just tapped the new tap wasn t able to install the formula conflicts with works if you have the original tap tapped but not otherwise | 1 |
115,623 | 11,883,368,021 | IssuesEvent | 2020-03-27 15:50:36 | nih-cfde/cfde-deriva | https://api.github.com/repos/nih-cfde/cfde-deriva | opened | Scripts & Catalog | documentation | Create scripts to extract CF tools and workflows, annotate them with metadata, and build a catalog of CF tools and workflows. Create a catalog of all CF DCC-published bioinformatics tools and databases, and provide these for browsing and searching from the CFDE portal.
(4.2.6)
| 1.0 | Scripts & Catalog - Create scripts to extract CF tools and workflows, annotate them with metadata, and build a catalog of CF tools and workflows. Create a catalog of all CF DCC-published bioinformatics tools and databases, and provide these for browsing and searching from the CFDE portal.
(4.2.6)
| non_main | scripts catalog creates scripts to extract and annotate with metadata cf tools and workflows and creates a catalog of cf tools and workflows create a catalog of all cf dcc published bioinformatics tools and databases and provide these for browsing and searching from the cfde portal | 0 |
76,592 | 15,496,147,263 | IssuesEvent | 2021-03-11 02:08:47 | jinuem/React-Type-Script-Starter | https://api.github.com/repos/jinuem/React-Type-Script-Starter | opened | WS-2018-0588 (High) detected in querystringify-1.0.0.tgz, querystringify-0.0.4.tgz | security vulnerability | ## WS-2018-0588 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>querystringify-1.0.0.tgz</b>, <b>querystringify-0.0.4.tgz</b></p></summary>
<p>
<details><summary><b>querystringify-1.0.0.tgz</b></p></summary>
<p>Querystringify - Small, simple but powerful query string parser.</p>
<p>Library home page: <a href="https://registry.npmjs.org/querystringify/-/querystringify-1.0.0.tgz">https://registry.npmjs.org/querystringify/-/querystringify-1.0.0.tgz</a></p>
<p>Path to dependency file: /React-Type-Script-Starter/package.json</p>
<p>Path to vulnerable library: React-Type-Script-Starter/node_modules/url-parse/node_modules/querystringify/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-ts-2.5.0.tgz (Root Library)
- react-dev-utils-2.0.1.tgz
- sockjs-client-1.1.4.tgz
- url-parse-1.1.9.tgz
- :x: **querystringify-1.0.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>querystringify-0.0.4.tgz</b></p></summary>
<p>Querystringify - Small, simple but powerful query string parser.</p>
<p>Library home page: <a href="https://registry.npmjs.org/querystringify/-/querystringify-0.0.4.tgz">https://registry.npmjs.org/querystringify/-/querystringify-0.0.4.tgz</a></p>
<p>Path to dependency file: /React-Type-Script-Starter/package.json</p>
<p>Path to vulnerable library: React-Type-Script-Starter/node_modules/querystringify/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-ts-2.5.0.tgz (Root Library)
- react-dev-utils-2.0.1.tgz
- sockjs-client-1.1.4.tgz
- eventsource-0.1.6.tgz
- original-1.0.0.tgz
- url-parse-1.0.5.tgz
- :x: **querystringify-0.0.4.tgz** (Vulnerable Library)
</details>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A vulnerability was found in querystringify before 2.0.0. It's possible to override built-in properties of the resulting query string object if a malicious string is inserted in the query string.
<p>Publish Date: 2018-04-19
<p>URL: <a href=https://github.com/unshiftio/querystringify/pull/19>WS-2018-0588</a></p>
</p>
</details>
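The "override built-in properties" behavior described above is the classic prototype-pollution bug class in JavaScript query-string parsers. The sketch below illustrates that class only; it is not the actual querystringify source, and `naiveParse` is a hypothetical stand-in. The prototype-less-object technique shown at the end is one common mitigation, not necessarily the exact fix shipped in 2.0.0.

```typescript
// Illustrative sketch of the bug class described above -- NOT the
// actual querystringify source. A parser that assigns decoded keys
// straight onto a plain object routes "__proto__" through the
// prototype setter instead of creating an ordinary key.
function naiveParse(query: string): Record<string, string> {
  const result: Record<string, string> = {};
  for (const pair of query.replace(/^\?/, "").split("&")) {
    const [key, value = ""] = pair.split("=");
    result[decodeURIComponent(key)] = decodeURIComponent(value);
  }
  return result;
}

const parsed = naiveParse("?__proto__=x&a=1");
console.log(parsed.a);                                  // "1"
console.log(Object.keys(parsed).includes("__proto__")); // false: the key hit the prototype setter

// One common mitigation is to build on a prototype-less object,
// so "__proto__" becomes a plain own key instead:
const safe: Record<string, string> = Object.create(null);
safe["__proto__"] = "x";
console.log(Object.keys(safe).includes("__proto__"));   // true
```

Assigning a non-object to `__proto__` on a plain object is silently ignored, which is why the malicious key "disappears" rather than appearing in `Object.keys` above.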
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>7.6</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/unshiftio/querystringify/commit/422eb4f6c7c28ee5f100dcc64177d3b68bb2b080">https://github.com/unshiftio/querystringify/commit/422eb4f6c7c28ee5f100dcc64177d3b68bb2b080</a></p>
<p>Release Date: 2019-06-04</p>
<p>Fix Resolution: 2.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | WS-2018-0588 (High) detected in querystringify-1.0.0.tgz, querystringify-0.0.4.tgz - ## WS-2018-0588 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>querystringify-1.0.0.tgz</b>, <b>querystringify-0.0.4.tgz</b></p></summary>
<p>
<details><summary><b>querystringify-1.0.0.tgz</b></p></summary>
<p>Querystringify - Small, simple but powerful query string parser.</p>
<p>Library home page: <a href="https://registry.npmjs.org/querystringify/-/querystringify-1.0.0.tgz">https://registry.npmjs.org/querystringify/-/querystringify-1.0.0.tgz</a></p>
<p>Path to dependency file: /React-Type-Script-Starter/package.json</p>
<p>Path to vulnerable library: React-Type-Script-Starter/node_modules/url-parse/node_modules/querystringify/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-ts-2.5.0.tgz (Root Library)
- react-dev-utils-2.0.1.tgz
- sockjs-client-1.1.4.tgz
- url-parse-1.1.9.tgz
- :x: **querystringify-1.0.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>querystringify-0.0.4.tgz</b></p></summary>
<p>Querystringify - Small, simple but powerful query string parser.</p>
<p>Library home page: <a href="https://registry.npmjs.org/querystringify/-/querystringify-0.0.4.tgz">https://registry.npmjs.org/querystringify/-/querystringify-0.0.4.tgz</a></p>
<p>Path to dependency file: /React-Type-Script-Starter/package.json</p>
<p>Path to vulnerable library: React-Type-Script-Starter/node_modules/querystringify/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-ts-2.5.0.tgz (Root Library)
- react-dev-utils-2.0.1.tgz
- sockjs-client-1.1.4.tgz
- eventsource-0.1.6.tgz
- original-1.0.0.tgz
- url-parse-1.0.5.tgz
- :x: **querystringify-0.0.4.tgz** (Vulnerable Library)
</details>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A vulnerability was found in querystringify before 2.0.0. It's possible to override built-in properties of the resulting query string object if a malicious string is inserted in the query string.
<p>Publish Date: 2018-04-19
<p>URL: <a href=https://github.com/unshiftio/querystringify/pull/19>WS-2018-0588</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>7.6</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/unshiftio/querystringify/commit/422eb4f6c7c28ee5f100dcc64177d3b68bb2b080">https://github.com/unshiftio/querystringify/commit/422eb4f6c7c28ee5f100dcc64177d3b68bb2b080</a></p>
<p>Release Date: 2019-06-04</p>
<p>Fix Resolution: 2.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | ws high detected in querystringify tgz querystringify tgz ws high severity vulnerability vulnerable libraries querystringify tgz querystringify tgz querystringify tgz querystringify small simple but powerful query string parser library home page a href path to dependency file react type script starter package json path to vulnerable library react type script starter node modules url parse node modules querystringify package json dependency hierarchy react scripts ts tgz root library react dev utils tgz sockjs client tgz url parse tgz x querystringify tgz vulnerable library querystringify tgz querystringify small simple but powerful query string parser library home page a href path to dependency file react type script starter package json path to vulnerable library react type script starter node modules querystringify package json dependency hierarchy react scripts ts tgz root library react dev utils tgz sockjs client tgz eventsource tgz original tgz url parse tgz x querystringify tgz vulnerable library vulnerability details a vulnerability was found in querystringify before it s possible to override built in properties of the resulting query string object if a malicious string is inserted in the query string publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
74,953 | 9,171,468,923 | IssuesEvent | 2019-03-04 01:59:41 | Justin-Terry/SafeHopper | https://api.github.com/repos/Justin-Terry/SafeHopper | closed | Message sequence chart - only need one | Design Specification review | "Message sequence chart and description: Provide one message sequence chart for a
central functionality of your system to show you also know how to work with this kind
of notation. As it will be somewhat redundant to the activity diagrams I am only asking
you for one of those." | 1.0 | Message sequence chart - only need one - "Message sequence chart and description: Provide one message sequence chart for a
central functionality of your system to show you also know how to work with this kind
of notation. As it will be somewhat redundant to the activity diagrams I am only asking
you for one of those." | non_main | message sequence chart only need one message sequence chart and description provide one message sequence chart for a central functionality of your system to show you also know how to work with this kind of notation as it will be somewhat redundant to the activity diagrams i am only asking you for one of those | 0 |
5,162 | 26,280,625,788 | IssuesEvent | 2023-01-07 08:51:06 | kkkkan/CsvToQrConverter | https://api.github.com/repos/kkkkan/CsvToQrConverter | closed | [Code improvement] Define the page names shown in the app bar and hamburger menu as a shared constant and reference that instead | enhancement maintainer's-memo | Right now they are created separately in two places, so they easily fall out of sync and the maintenance cost is high.
Creating something like the `ScaleType` made for the `slack絵文字` (slack emoji) feature and having both refer to it should work well. | True | [Code improvement] Define the page names shown in the app bar and hamburger menu as a shared constant and reference that instead - Right now they are created separately in two places, so they easily fall out of sync and the maintenance cost is high.
Creating something like the `ScaleType` made for the `slack絵文字` (slack emoji) feature and having both refer to it should work well. | main | code improvement define the page names shown in the app bar and hamburger menu as a shared constant and reference that instead right now they are created separately in two places so they easily fall out of sync and the maintenance cost is high creating something like the scaletype made for slack emoji and having both refer to it should work well | 1
1,481 | 6,415,998,952 | IssuesEvent | 2017-08-08 13:59:29 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | Roles support in vmware_local_user_manager.py module | affects_2.2 cloud feature_idea vmware waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
vmware_local_user_manager.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
ansible 2.2.0.0
##### SUMMARY
<!--- Explain the problem briefly -->
Can you please add a feature to create local roles and add local users to local roles? We'd like to create a role with the needed permissions and assign it to the created user.
| True | Roles support in vmware_local_user_manager.py module - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
vmware_local_user_manager.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
ansible 2.2.0.0
##### SUMMARY
<!--- Explain the problem briefly -->
Can you please add a feature to create local roles and add local users to local roles? We'd like to create a role with the needed permissions and assign it to the created user.
| main | roles support in vmware local user manager py module issue type feature idea component name vmware local user manager py ansible version ansible summary can you please add a feature to create local roles and add local users in local roles we d like create role with needed permissions and add give them to the created user | 1 |
4,193 | 20,510,019,506 | IssuesEvent | 2022-03-01 04:46:47 | diofant/diofant | https://api.github.com/repos/diofant/diofant | closed | Replace softprops/action-gh-release with actions/create-release & actions/upload-release-asset | maintainability | Probably, this will wait for https://github.com/actions/upload-release-asset/issues/47. | True | Replace softprops/action-gh-release with actions/create-release & actions/upload-release-asset - Probably, this will wait for https://github.com/actions/upload-release-asset/issues/47. | main | replace softprops action gh release with actions create release actions upload release asset probably this will wait for | 1 |
588,245 | 17,650,609,133 | IssuesEvent | 2021-08-20 12:43:27 | status-im/status-desktop | https://api.github.com/repos/status-im/status-desktop | closed | Remove `data` folder when uninstalling/removing Status | idea general installer priority F2: important | Currently, when uninstalling/removing Status and then reinstalling it, it remembers the accounts from the previous installation, which is unexpected.
Clean installations should not know about previous keys, unless they have been manually moved there. | 1.0 | Remove `data` folder when uninstalling/removing Status - Currently, when uninstalling/removing Status and then reinstalling it, it remembers the accounts from the previous installation, which is unexpected.
Clean installations should not know about previous keys, unless they have been manually moved there. | non_main | remove data folder when uninstalling removing status current when uninstalling removing status and then reinstalling it it remembers the accounts from the previous installation which is unexpected clean installations should not know about previous keys unless they have been manually moved there | 0 |
5,293 | 26,747,884,224 | IssuesEvent | 2023-01-30 17:13:58 | mozilla/foundation.mozilla.org | https://api.github.com/repos/mozilla/foundation.mozilla.org | opened | "optimize:css:copy" task is not working | bug 🦠 engineering maintain | ### Describe the bug
"optimize:css:copy" task is not working
### To Reproduce
Steps to reproduce the behavior:
1. run `docker-compose up`
2. observe error
### Expected behavior
No errors
### Screenshots
<img width="1326" alt="image" src="https://user-images.githubusercontent.com/2896608/215546456-04c9b7ba-278b-4827-852c-7d514b12d54b.png">
| True | "optimize:css:copy" task is not working - ### Describe the bug
"optimize:css:copy" task is not working
### To Reproduce
Steps to reproduce the behavior:
1. run `docker-compose up`
2. observe error
### Expected behavior
No errors
### Screenshots
<img width="1326" alt="image" src="https://user-images.githubusercontent.com/2896608/215546456-04c9b7ba-278b-4827-852c-7d514b12d54b.png">
| main | optimize css copy task is not working describe the bug optimize css copy task is not working to reproduce steps to reproduce the behavior run docker compose up observe error expected behavior no errors screenshots img width alt image src | 1 |
2,104 | 7,124,344,077 | IssuesEvent | 2018-01-19 18:31:17 | clearlinux/swupd-client | https://api.github.com/repos/clearlinux/swupd-client | closed | Consider adding a string_free() function | maintainability | So that the code is consistent about resetting a pointer to NULL after freeing dynamic memory allocated at that location, consider adding a `string_free()` wrapper function. For an implementation idea, see the code snippet below (from #356).
```
#include <stdlib.h>

/* Free the string at *s and reset the caller's pointer to NULL so a
 * stale pointer cannot be dereferenced or double-freed later. */
void free_string(char **s)
{
	if (s) {
		free(*s);
		*s = NULL;
	}
}
``` | True | Consider adding a string_free() function - So that the code is consistent about resetting a pointer to NULL after freeing dynamic memory allocated at that location, consider adding a `string_free()` wrapper function. For an implementation idea, see the code snippet below (from #356).
```
#include <stdlib.h>

/* Free the string at *s and reset the caller's pointer to NULL so a
 * stale pointer cannot be dereferenced or double-freed later. */
void free_string(char **s)
{
	if (s) {
		free(*s);
		*s = NULL;
	}
}
``` | main | consider adding a string free function so that the code is consistent about resetting a pointer to null after freeing dynamic memory allocated at that location consider adding a string free wrapper function for an implementation idea see the code snippet below from void free string char s if s free s s null | 1 |
4,300 | 21,672,713,779 | IssuesEvent | 2022-05-08 07:57:44 | svengreb/tmpl-go | https://api.github.com/repos/svengreb/tmpl-go | closed | Update to `tmpl` template repository version `0.11.0` | type-improvement context-techstack scope-compatibility scope-maintainability | Update to [`tmpl` version `0.11.0`][1] which comes with…
1. [an opt-in Dependabot version update configuration][2] — this will disable the currently used [`.github/dependabot.yml` file][3] in order to remove the PR noise and reduce the maintenance overhead. Dependency updates will be made by keeping up-to-date with new `tmpl` repository versions instead which take care of this.
[1]: https://github.com/svengreb/tmpl/releases/tag/v0.11.0
[2]: https://github.com/svengreb/tmpl/issues/94
[3]: https://github.com/svengreb/tmpl-go/blob/39cf0b85/.github/dependabot.yml
| True | Update to `tmpl` template repository version `0.11.0` - Update to [`tmpl` version `0.11.0`][1] which comes with…
1. [an opt-in Dependabot version update configuration][2] — this will disable the currently used [`.github/dependabot.yml` file][3] in order to remove the PR noise and reduce the maintenance overhead. Dependency updates will be made by keeping up-to-date with new `tmpl` repository versions instead which take care of this.
[1]: https://github.com/svengreb/tmpl/releases/tag/v0.11.0
[2]: https://github.com/svengreb/tmpl/issues/94
[3]: https://github.com/svengreb/tmpl-go/blob/39cf0b85/.github/dependabot.yml
| main | update to tmpl template repository version update to which comes with… — this will disable the currently used in order to remove the pr noise and reduce the maintenance overhead dependency updates will be made by keeping up to date with new tmpl repository versions instead which take care of this | 1 |
400,846 | 27,303,026,294 | IssuesEvent | 2023-02-24 04:53:38 | risingwavelabs/risingwave-docs | https://api.github.com/repos/risingwavelabs/risingwave-docs | closed | Document exp function and the updates to the pow function | documentation | ### Related code PR
https://github.com/risingwavelabs/risingwave/pull/7971
### Which part(s) of the docs might be affected or should be updated? And how?
SQL -> Functions and operators -> Math. Note that the decimal version of the `exp` function is not supported.
### Reference
https://www.postgresql.org/docs/15/functions-math.html | 1.0 | Document exp function and the updates to the pow function - ### Related code PR
https://github.com/risingwavelabs/risingwave/pull/7971
### Which part(s) of the docs might be affected or should be updated? And how?
SQL -> Functions and operators -> Math. Note that the decimal version of the `exp` function is not supported.
### Reference
https://www.postgresql.org/docs/15/functions-math.html | non_main | document exp function and the updates to the pow function related code pr which part s of the docs might be affected or should be updated and how sql functions and operators math note that the decimal version of the exp function is not supported reference | 0 |
1,181 | 5,097,443,189 | IssuesEvent | 2017-01-03 21:30:31 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | postgres_user module: `role_attr_flags` does nothing when combined with `no_password_changes` | affects_2.1 bug_report waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
postgresql_user
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
Ansible 2.1.0.0
```
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
Mac OS X El Capitan, Version 10.11.4
Tested on Docker official postgres container and on Amazon RDS PostgreSQL DB instance
##### SUMMARY
<!--- Explain the problem briefly -->
When running a task with the postgres_user module, the option `role_attr_flags` does not set any role attributes when `no_password_changes` is set to `yes`
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
Run a task with module postgres_user, set any attribute in `role_attr_flags` and the option `no_password_changes` to `yes`
<!--- Paste example playbooks or commands between quotes below -->
```
---
- hosts: localhost
tasks:
- name: create user
postgresql_user:
name: testing_user
password: somerandompassword
state: present
login_host: your.amazon.url.to.postges.instance
login_user: yourdefaultpostgresuser
login_password: yoursecretpasswordforthedefaultuser
- name: add attributes to user
postgresql_user:
name: testing_user
no_password_changes: yes
role_attr_flags: CREATEDB
login_host: your.amazon.url.to.postges.instance
login_user: yourdefaultpostgresuser
login_password: yoursecretpasswordforthedefaultuser
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
The user testing_user should have the CreateDB attribute
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
No changes made.
<!--- Paste verbatim command output between quotes below -->
```
PLAY [localhost] ***************************************************************
TASK [setup] *******************************************************************
ok: [localhost]
TASK [create user] *************************************************************
changed: [localhost] => {"changed": true, "user": "test_user"}
TASK [add attributes to user] **************************************************
ok: [localhost] => {"changed": false, "user": "test_user"}
PLAY RECAP *********************************************************************
localhost : ok=3 changed=1 unreachable=0 failed=0
```
| True | postgres_user module: `role_attr_flags` does nothing when combined with `no_password_changes` - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
postgresql_user
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
Ansible 2.1.0.0
```
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
Mac OS X El Capitan, Version 10.11.4
Tested on Docker official postgres container and on Amazon RDS PostgreSQL DB instance
##### SUMMARY
<!--- Explain the problem briefly -->
When running a task with the postgres_user module, the option `role_attr_flags` does not set any role attributes when `no_password_changes` is set to `yes`
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
Run a task with module postgres_user, set any attribute in `role_attr_flags` and the option `no_password_changes` to `yes`
<!--- Paste example playbooks or commands between quotes below -->
```
---
- hosts: localhost
tasks:
- name: create user
postgresql_user:
name: testing_user
password: somerandompassword
state: present
login_host: your.amazon.url.to.postges.instance
login_user: yourdefaultpostgresuser
login_password: yoursecretpasswordforthedefaultuser
- name: add attributes to user
postgresql_user:
name: testing_user
no_password_changes: yes
role_attr_flags: CREATEDB
login_host: your.amazon.url.to.postges.instance
login_user: yourdefaultpostgresuser
login_password: yoursecretpasswordforthedefaultuser
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
The user testing_user should have the CreateDB attribute
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
No changes made.
<!--- Paste verbatim command output between quotes below -->
```
PLAY [localhost] ***************************************************************
TASK [setup] *******************************************************************
ok: [localhost]
TASK [create user] *************************************************************
changed: [localhost] => {"changed": true, "user": "test_user"}
TASK [add attributes to user] **************************************************
ok: [localhost] => {"changed": false, "user": "test_user"}
PLAY RECAP *********************************************************************
localhost : ok=3 changed=1 unreachable=0 failed=0
```
| main | postgres user module role attr flags does nothing when combined with no password changes issue type bug report component name postgresql user ansible version ansible os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific mac os x el capitan version tested on docker official postgres container and on amazon rds postgresql db instance summary when running a task with postgres user module the option role attr flags does not set any role attributes when no password change is set to yes steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used run a task with module postgres user set any attribute in role attr flags and the option no password changes to yes hosts localhost tasks name create user postgresql user name testing user password somerandompassword state present login host your amazon url to postges instance login user yourdefaultpostgresuser login password yoursecretpasswordforthedefaultuser name add attributes to user postgresql user name testing user no password changes yes role attr flags createdb login host your amazon url to postges instance login user yourdefaultpostgresuser login password yoursecretpasswordforthedefaultuser expected results the user testing user should have the createdb attribute actual results no changes made play task ok task changed changed true user test user task ok changed false user test user play recap localhost ok changed unreachable failed | 1 |
240,534 | 7,802,748,677 | IssuesEvent | 2018-06-10 15:59:10 | opencaching/opencaching-pl | https://api.github.com/repos/opencaching/opencaching-pl | opened | Tables 'scores' and 'cache_moved' should have 'deleted' field | Component_Cache Component_CacheLog Priority_Low Type_Enhancement | When a user deletes a log, the log is marked as deleted and hidden. But at the same moment, data is deleted from the scores table (and from the cache_moved table, for movable caches). When the OC Team restores a deleted log, the system removes the delete mark from cache_logs but cannot restore the data from the scores and cache_moved tables. Both of them should support a "deleted" flag. | 1.0 | Tables 'scores' and 'cache_moved' should have 'deleted' field - When a user deletes a log, the log is marked as deleted and hidden. But at the same moment, data is deleted from the scores table (and from the cache_moved table, for movable caches). When the OC Team restores a deleted log, the system removes the delete mark from cache_logs but cannot restore the data from the scores and cache_moved tables. Both of them should support a "deleted" flag. | non_main | tables scores and cache moved should have deleted field when user deletes log log is marked as deleted and hidden but in the same moment data is deleted from scores table and cache moved table for movable caches when oc team restores deleted log system remove delete mark from cache logs but cannot restore data from scores and cache moved tables both of them should support deleted flag | 0
324,690 | 24,012,812,366 | IssuesEvent | 2022-09-14 20:31:03 | 0x0is1/profanity | https://api.github.com/repos/0x0is1/profanity | closed | README and Bug Acknowledgement Required | bug documentation | Although the exploit has been released publicly without vendor consent, a README containing the POC of the bug and the exploit usage method is required. | 1.0 | README and Bug Acknowledgement Required - Although the exploit has been released publicly without vendor consent, a README containing the POC of the bug and the exploit usage method is required. | non_main | readme and bug acknowledgement required although exploit has been released publicly without vendor consent the readme containing poc of bug and exploit usage method is required | 0
143,138 | 13,055,318,864 | IssuesEvent | 2020-07-30 01:18:27 | webbgeorge/lambdah | https://api.github.com/repos/webbgeorge/lambdah | closed | Complete all community getting started tasks | documentation | All tasks from **Insights** -> **Community** checklist | 1.0 | Complete all community getting started tasks - All tasks from **Insights** -> **Community** checklist | non_main | complete all community getting started tasks all tasks from insights community checklist | 0 |
65,874 | 27,264,142,593 | IssuesEvent | 2023-02-22 16:47:06 | BCDevOps/developer-experience | https://api.github.com/repos/BCDevOps/developer-experience | closed | Move the KLAB2 API to a private IP | team/DXC tech/networking ops and shared services NSXT/SDN | **Describe the issue**
Currently the API is on a public IP, but restricted to only access within the datacenter. We've got approval from Nick and Brian to make the change.
**What is the Value/Impact?**
Improved security for KLAB2 cluster
**What is the plan? How will this get completed?**
- [ ] Announce change
- [x] Work with Dan to update / recreate the VIP in AVI
- [x] Update DNS in NNR
- [x] Test
**Identify any dependencies**
Dan from VMware team
**Definition of done**
API moved to a low dataclass internal IP
| 1.0 | Move the KLAB2 API to a private IP - **Describe the issue**
Currently the API is on a public IP, but restricted to only access within the datacenter. We've got approval from Nick and Brian to make the change.
**What is the Value/Impact?**
Improved security for KLAB2 cluster
**What is the plan? How will this get completed?**
- [ ] Announce change
- [x] Work with Dan to update / recreate the VIP in AVI
- [x] Update DNS in NNR
- [x] Test
**Identify any dependencies**
Dan from VMware team
**Definition of done**
API moved to a low dataclass internal IP
| non_main | move the api to a private ip describe the issue currently the api is on a public ip but restricted to only access within the datacenter we ve got approval from nick and brian to make the change what is the value impact improved security for cluster what is the plan how will this get completed announce change work with dan to update recreate the vip in avi update dns in nnr test identify any dependencies dan from vmware team definition of done api moved to a low dataclass internal ip | 0 |
3,530 | 13,909,923,210 | IssuesEvent | 2020-10-20 15:27:48 | coq-community/manifesto | https://api.github.com/repos/coq-community/manifesto | closed | Change maintainer of project Bertrand | change-maintainer maintainer-wanted | **Project name and URL:** https://github.com/coq-community/bertrand
**Current maintainer:** @herbelin
**Status:** maintained
**New maintainer:** looking for a volunteer
As described by @Zimmi48 in #104, @herbelin is taking care of the coq-contribs, so a new maintainer is wanted for this project. | True | Change maintainer of project Bertrand - **Project name and URL:** https://github.com/coq-community/bertrand
**Current maintainer:** @herbelin
**Status:** maintained
**New maintainer:** looking for a volunteer
As described by @Zimmi48 in #104, @herbelin is taking care of the coq-contribs, so a new maintainer is wanted for this project. | main | change maintainer of project bertrand project name and url current maintainer herbelin status maintained new maintainer looking for a volunteer as described by in herbelin is taking care of the coq contribs so a new maintainer is wanted for this project | 1 |
192,198 | 22,215,914,411 | IssuesEvent | 2022-06-08 01:36:37 | AlexRogalskiy/github-action-node-dependency | https://api.github.com/repos/AlexRogalskiy/github-action-node-dependency | closed | CVE-2021-3918 (High) detected in json-schema-0.2.3.tgz - autoclosed | security vulnerability | ## CVE-2021-3918 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>json-schema-0.2.3.tgz</b></p></summary>
<p>JSON Schema validation and specifications</p>
<p>Library home page: <a href="https://registry.npmjs.org/json-schema/-/json-schema-0.2.3.tgz">https://registry.npmjs.org/json-schema/-/json-schema-0.2.3.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/json-schema/package.json,/node_modules/npm/node_modules/json-schema/package.json</p>
<p>
Dependency Hierarchy:
- coveralls-3.1.0.tgz (Root Library)
- request-2.88.2.tgz
- http-signature-1.2.0.tgz
- jsprim-1.4.1.tgz
- :x: **json-schema-0.2.3.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/github-action-node-dependency/commit/9efebd512079fd078a61b29a95c853ab223a97a5">9efebd512079fd078a61b29a95c853ab223a97a5</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
json-schema is vulnerable to Improperly Controlled Modification of Object Prototype Attributes ('Prototype Pollution')
<p>Publish Date: 2021-11-13
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3918>CVE-2021-3918</a></p>
</p>
</details>
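The "Prototype Pollution" weakness named above is the JavaScript bug class in which attacker-controlled keys such as `__proto__` end up writing onto `Object.prototype`. A minimal illustrative sketch of that class follows; it is not the json-schema source, and `naiveMerge` is a hypothetical helper.

```typescript
// Minimal sketch of the prototype-pollution bug class behind reports
// like the one above -- NOT the json-schema source. A recursive merge
// that trusts attacker-controlled keys can write through "__proto__"
// onto Object.prototype.
function naiveMerge(target: any, source: any): any {
  for (const key of Object.keys(source)) {
    const value = source[key];
    if (typeof value === "object" && value !== null) {
      // Reading target["__proto__"] walks the prototype accessor,
      // handing Object.prototype to the recursive call below.
      target[key] = naiveMerge(target[key] ?? {}, value);
    } else {
      target[key] = value;
    }
  }
  return target;
}

// JSON.parse creates "__proto__" as an *own* enumerable key...
const payload = JSON.parse('{"__proto__": {"polluted": true}}');
naiveMerge({}, payload);

// ...so the merge wrote into Object.prototype, and every object now
// "has" the attacker's property via its prototype chain:
const victim: any = {};
console.log(victim.polluted); // true
```

Hardened merge and validation routines typically skip the `__proto__`, `constructor`, and `prototype` keys, or operate on null-prototype objects, to close this hole.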
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-3918">https://nvd.nist.gov/vuln/detail/CVE-2021-3918</a></p>
<p>Release Date: 2021-11-13</p>
<p>Fix Resolution: json-schema - 0.4.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-3918 (High) detected in json-schema-0.2.3.tgz - autoclosed - ## CVE-2021-3918 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>json-schema-0.2.3.tgz</b></p></summary>
<p>JSON Schema validation and specifications</p>
<p>Library home page: <a href="https://registry.npmjs.org/json-schema/-/json-schema-0.2.3.tgz">https://registry.npmjs.org/json-schema/-/json-schema-0.2.3.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/json-schema/package.json,/node_modules/npm/node_modules/json-schema/package.json</p>
<p>
Dependency Hierarchy:
- coveralls-3.1.0.tgz (Root Library)
- request-2.88.2.tgz
- http-signature-1.2.0.tgz
- jsprim-1.4.1.tgz
- :x: **json-schema-0.2.3.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/github-action-node-dependency/commit/9efebd512079fd078a61b29a95c853ab223a97a5">9efebd512079fd078a61b29a95c853ab223a97a5</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
json-schema is vulnerable to Improperly Controlled Modification of Object Prototype Attributes ('Prototype Pollution')
<p>Publish Date: 2021-11-13
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3918>CVE-2021-3918</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-3918">https://nvd.nist.gov/vuln/detail/CVE-2021-3918</a></p>
<p>Release Date: 2021-11-13</p>
<p>Fix Resolution: json-schema - 0.4.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve high detected in json schema tgz autoclosed cve high severity vulnerability vulnerable library json schema tgz json schema validation and specifications library home page a href path to dependency file package json path to vulnerable library node modules json schema package json node modules npm node modules json schema package json dependency hierarchy coveralls tgz root library request tgz http signature tgz jsprim tgz x json schema tgz vulnerable library found in head commit a href vulnerability details json schema is vulnerable to improperly controlled modification of object prototype attributes prototype pollution publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution json schema step up your open source security game with whitesource | 0 |
148,935 | 23,403,762,716 | IssuesEvent | 2022-08-12 10:39:13 | appsmithorg/appsmith | https://api.github.com/repos/appsmithorg/appsmith | opened | [Bug]: Scroll Indicator inconsistencies across browsers | Bug Design System Pod UI Improvement UX Improvement Low Production Needs Triaging | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Description
Below are some inconsistencies observed w.r.t the `Scroll Indicator` Component in different browsers:
1. Scroll Indicator does not appear for the JS Objects (Editor) in Chrome whereas we are able to see it in Firefox.
<img width="1440" alt="Screenshot 2022-08-12 at 3 59 48 PM" src="https://user-images.githubusercontent.com/39921438/184337399-95874d3d-f007-4d6e-a8da-e7cf9cb93a62.png">
2. The shape of the Scroll Indicator itself is different in Chrome & Firefox.
<img width="1440" alt="Screenshot 2022-08-12 at 4 01 51 PM" src="https://user-images.githubusercontent.com/39921438/184337675-6557d0ae-f5d3-4698-840a-1b3cb35aafe2.png">
### Steps To Reproduce
This can be seen in the JS Object Editor page & places all over the app where scroll is enabled.
### Public Sample App
_No response_
### Version
Cloud | 1.0 | [Bug]: Scroll Indicator inconsistencies across browsers - ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Description
Below are some inconsistencies observed w.r.t the `Scroll Indicator` Component in different browsers:
1. Scroll Indicator does not appear for the JS Objects (Editor) in Chrome whereas we are able to see it in Firefox.
<img width="1440" alt="Screenshot 2022-08-12 at 3 59 48 PM" src="https://user-images.githubusercontent.com/39921438/184337399-95874d3d-f007-4d6e-a8da-e7cf9cb93a62.png">
2. The shape of the Scroll Indicator itself is different in Chrome & Firefox.
<img width="1440" alt="Screenshot 2022-08-12 at 4 01 51 PM" src="https://user-images.githubusercontent.com/39921438/184337675-6557d0ae-f5d3-4698-840a-1b3cb35aafe2.png">
### Steps To Reproduce
This can be seen in the JS Object Editor page & places all over the app where scroll is enabled.
### Public Sample App
_No response_
### Version
Cloud | non_main | scroll indicator inconsistencies across browsers is there an existing issue for this i have searched the existing issues description below are some inconsistencies observed w r t the scroll indicator component in different browsers scroll indicator does not appear for the js objects editor in chrome whereas we are able to see it in firefox img width alt screenshot at pm src the shape of the scroll indicator itself is different in chrome firefox img width alt screenshot at pm src steps to reproduce this can be seen in the js object editor page places all over the app where scroll is enabled public sample app no response version cloud | 0 |
113,930 | 17,171,924,002 | IssuesEvent | 2021-07-15 06:22:28 | Thanraj/libpng_ | https://api.github.com/repos/Thanraj/libpng_ | opened | CVE-2019-9936 (High) detected in sqliteversion-3.22.0 | security vulnerability | ## CVE-2019-9936 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>sqliteversion-3.22.0</b></p></summary>
<p>
<p>Official Git mirror of the SQLite source tree</p>
<p>Library home page: <a href=https://github.com/sqlite/sqlite.git>https://github.com/sqlite/sqlite.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Thanraj/libpng_/commit/a12fbbf25c0b2b5501a8f34cbb3e195203a8ec37">a12fbbf25c0b2b5501a8f34cbb3e195203a8ec37</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>libpng_/ext/fts5/fts5_hash.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In SQLite 3.27.2, running fts5 prefix queries inside a transaction could trigger a heap-based buffer over-read in fts5HashEntrySort in sqlite3.c, which may lead to an information leak. This is related to ext/fts5/fts5_hash.c.
<p>Publish Date: 2019-03-22
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-9936>CVE-2019-9936</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.sqlite.org/releaselog/3_28_0.html">https://www.sqlite.org/releaselog/3_28_0.html</a></p>
<p>Release Date: 2019-03-22</p>
<p>Fix Resolution: 3.28.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2019-9936 (High) detected in sqliteversion-3.22.0 - ## CVE-2019-9936 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>sqliteversion-3.22.0</b></p></summary>
<p>
<p>Official Git mirror of the SQLite source tree</p>
<p>Library home page: <a href=https://github.com/sqlite/sqlite.git>https://github.com/sqlite/sqlite.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Thanraj/libpng_/commit/a12fbbf25c0b2b5501a8f34cbb3e195203a8ec37">a12fbbf25c0b2b5501a8f34cbb3e195203a8ec37</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>libpng_/ext/fts5/fts5_hash.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In SQLite 3.27.2, running fts5 prefix queries inside a transaction could trigger a heap-based buffer over-read in fts5HashEntrySort in sqlite3.c, which may lead to an information leak. This is related to ext/fts5/fts5_hash.c.
<p>Publish Date: 2019-03-22
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-9936>CVE-2019-9936</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.sqlite.org/releaselog/3_28_0.html">https://www.sqlite.org/releaselog/3_28_0.html</a></p>
<p>Release Date: 2019-03-22</p>
<p>Fix Resolution: 3.28.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve high detected in sqliteversion cve high severity vulnerability vulnerable library sqliteversion official git mirror of the sqlite source tree library home page a href found in head commit a href found in base branch master vulnerable source files libpng ext hash c vulnerability details in sqlite running prefix queries inside a transaction could trigger a heap based buffer over read in in c which may lead to an information leak this is related to ext hash c publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
5,223 | 26,493,265,635 | IssuesEvent | 2023-01-18 01:42:12 | aws/aws-sam-cli | https://api.github.com/repos/aws/aws-sam-cli | closed | Bug: "disable rollback" message is unclear / deployment fails | type/bug maintainer/need-followup | ### Description:
```
CloudFormation stack changeset
-----------------------------------------------------------------------------------------------------------------------------------------
Operation LogicalResourceId ResourceType Replacement
-----------------------------------------------------------------------------------------------------------------------------------------
* Modify FunctionEvery2HoursPermission AWS::Lambda::Permission True
* Modify FunctionEvery2Hours AWS::Events::Rule True
* Modify Function AWS::Lambda::Function False
-----------------------------------------------------------------------------------------------------------------------------------------
CloudFormation events from stack operations (refresh every 0.5 seconds)
-----------------------------------------------------------------------------------------------------------------------------------------
ResourceStatus ResourceType LogicalResourceId ResourceStatusReason
-----------------------------------------------------------------------------------------------------------------------------------------
UPDATE_IN_PROGRESS AWS::Lambda::Function Function -
UPDATE_COMPLETE AWS::Lambda::Function Function -
UPDATE_FAILED AWS::Events::Rule FunctionEvery2Hours Replacement type updates not
supported on stack with disable-
rollback.
UPDATE_IN_PROGRESS AWS::CloudFormation::Stack pull-dcop-nvg Replacement type updates not
supported on stack with disable-
rollback.
UPDATE_ROLLBACK_COMPLETE AWS::Events::Rule FunctionEvery2Hours Rollback succeeded for the
failed resources.
UPDATE_FAILED AWS::CloudFormation::Stack pull-dcop-nvg The following resource(s) failed
to update:
[FunctionEvery2Hours].
-----------------------------------------------------------------------------------------------------------------------------------------
```
Note the message:
> Replacement type updates not supported on stack with disable-rollback.
As far as I know, I do not have disable-rollback specified/set. I've had the stack rollback during deployment many times, so I know that is/was working. I've tried supplying the CLI argument `--no-disable-rollback` and then `--disable-rollback`, thinking maybe the message was written backwards. But every time I get the same output.
### Steps to reproduce:
Unclear. Perhaps any changes which would require "replacement" of a resource?
### Observed result:
Error message is unclear / does not provide a solution, and deployment fails:
> Replacement type updates not supported on stack with disable-rollback.
### Expected result:
A clear error message, and/or for the deployment to succeed.
### Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
1. OS: Manjaro Linux
2. `sam --version`: 1.55.0
3. AWS region: GovCloud
| True | Bug: "disable rollback" message is unclear / deployment fails - ### Description:
```
CloudFormation stack changeset
-----------------------------------------------------------------------------------------------------------------------------------------
Operation LogicalResourceId ResourceType Replacement
-----------------------------------------------------------------------------------------------------------------------------------------
* Modify FunctionEvery2HoursPermission AWS::Lambda::Permission True
* Modify FunctionEvery2Hours AWS::Events::Rule True
* Modify Function AWS::Lambda::Function False
-----------------------------------------------------------------------------------------------------------------------------------------
CloudFormation events from stack operations (refresh every 0.5 seconds)
-----------------------------------------------------------------------------------------------------------------------------------------
ResourceStatus ResourceType LogicalResourceId ResourceStatusReason
-----------------------------------------------------------------------------------------------------------------------------------------
UPDATE_IN_PROGRESS AWS::Lambda::Function Function -
UPDATE_COMPLETE AWS::Lambda::Function Function -
UPDATE_FAILED AWS::Events::Rule FunctionEvery2Hours Replacement type updates not
supported on stack with disable-
rollback.
UPDATE_IN_PROGRESS AWS::CloudFormation::Stack pull-dcop-nvg Replacement type updates not
supported on stack with disable-
rollback.
UPDATE_ROLLBACK_COMPLETE AWS::Events::Rule FunctionEvery2Hours Rollback succeeded for the
failed resources.
UPDATE_FAILED AWS::CloudFormation::Stack pull-dcop-nvg The following resource(s) failed
to update:
[FunctionEvery2Hours].
-----------------------------------------------------------------------------------------------------------------------------------------
```
Note the message:
> Replacement type updates not supported on stack with disable-rollback.
As far as I know, I do not have disable-rollback specified/set. I've had the stack rollback during deployment many times, so I know that is/was working. I've tried supplying the CLI argument `--no-disable-rollback` and then `--disable-rollback`, thinking maybe the message was written backwards. But every time I get the same output.
### Steps to reproduce:
Unclear. Perhaps any changes which would require "replacement" of a resource?
### Observed result:
Error message is unclear / does not provide a solution, and deployment fails:
> Replacement type updates not supported on stack with disable-rollback.
### Expected result:
A clear error message, and/or for the deployment to succeed.
### Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
1. OS: Manjaro Linux
2. `sam --version`: 1.55.0
3. AWS region: GovCloud
| main | bug disable rollback message is unclear deployment fails description cloudformation stack changeset operation logicalresourceid resourcetype replacement modify aws lambda permission true modify aws events rule true modify function aws lambda function false cloudformation events from stack operations refresh every seconds resourcestatus resourcetype logicalresourceid resourcestatusreason update in progress aws lambda function function update complete aws lambda function function update failed aws events rule replacement type updates not supported on stack with disable rollback update in progress aws cloudformation stack pull dcop nvg replacement type updates not supported on stack with disable rollback update rollback complete aws events rule rollback succeeded for the failed resources update failed aws cloudformation stack pull dcop nvg the following resource s failed to update note the message replacement type updates not supported on stack with disable rollback as far as i know i do not have disable rollback specified set i ve had the stack rollback during deployment many times so i know that is was working i ve tried supplying the cli argument no disable rollback and then disable rollback thinking maybe the message was written backwards but every time i get the same output steps to reproduce unclear perhaps any changes which would require replacement of a resource observed result error message is unclear does not provide a solution and deployment fails replacement type updates not supported on stack with disable rollback expected result a clear error message and or for the deployment to succeed additional environment details ex windows mac amazon linux etc os manjaro linux sam version aws region govcloud | 1 |
4,813 | 24,769,438,505 | IssuesEvent | 2022-10-23 00:17:20 | amyjko/faculty | https://api.github.com/repos/amyjko/faculty | closed | Migrate to TypeScript | maintainability | This will increase speed of development, streamline refactoring, and prevent errors. | True | Migrate to TypeScript - This will increase speed of development, streamline refactoring, and prevent errors. | main | migrate to typescript this will increase speed of development streamline refactoring and prevent errors | 1 |
2,019 | 6,757,623,027 | IssuesEvent | 2017-10-24 11:31:58 | Kristinita/Erics-Green-Room | https://api.github.com/repos/Kristinita/Erics-Green-Room | closed | [Feature request] Remove the loading indicator on png images | css need-maintainer | ### 1. Request
It would be nice if there were no loading indicator in the top-left corner of a question's png images.
### 2. Rationale
It is an extra element that spoils the look of the images.
### 3. Example
Question:
```markdown
http://i.imgur.com/TZvFzle.png If you move a heavy object from one place to another, the wheels may break or skew. It is better to use rollers of cylindrical shape. But even with rollers whose cross-section is IT, the platform will move just as smoothly as on cylindrical rollers.*Reuleaux triangle*-info-Its shape is used in drill bits that can bore square holes*-proof-126
```
Rendering:

The loading indicator does not disappear until the room is closed.
| True | [Feature request] Remove the loading indicator on png images - ### 1. Request
It would be nice if there were no loading indicator in the top-left corner of a question's png images.
### 2. Rationale
It is an extra element that spoils the look of the images.
### 3. Example
Question:
```markdown
http://i.imgur.com/TZvFzle.png If you move a heavy object from one place to another, the wheels may break or skew. It is better to use rollers of cylindrical shape. But even with rollers whose cross-section is IT, the platform will move just as smoothly as on cylindrical rollers.*Reuleaux triangle*-info-Its shape is used in drill bits that can bore square holes*-proof-126
```
Rendering:

The loading indicator does not disappear until the room is closed.
| main | remove the loading indicator on png images request it would be nice if there were no loading indicator in the top left corner of a question s png images rationale it is an extra element that spoils the look of the images example question markdown if you move a heavy object from one place to another the wheels may break or skew it is better to use rollers of cylindrical shape but even with rollers whose cross section is it the platform will move just as smoothly as on cylindrical rollers reuleaux triangle info its shape is used in drill bits that can bore square holes proof rendering the loading indicator does not disappear until the room is closed | 1
1,734 | 6,574,851,466 | IssuesEvent | 2017-09-11 14:17:16 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | Modifying VM with vsphere_guest failing with current_devices is not defined. | affects_2.1 bug_report cloud vmware waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
vsphere_guest
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
[core@bhudcent7 coreos]$ ansible --version
ansible 2.1.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
[core@bhudcent7 coreos]
```
##### CONFIGURATION
<!---
Only mod I have done is to disable host_key_checking
# uncomment this to disable SSH key host checking
host_key_checking = False
-->
##### OS / ENVIRONMENT
<!---
CentOS 7
-->
##### SUMMARY
When trying to perform reconfigure of exiting vm failing with global name ' current_devices' is not defined
##### STEPS TO REPRODUCE
<!---
clone template and try and modify cdrom iso image
-->
<!--- Paste example playbooks or commands between quotes below -->
```
---
- name: Create mesos VMS
hosts: localhost
connection: local
tasks:
- name: modify master vms
vsphere_guest:
vcenter_hostname: IP
username: USERNAME
password: PW
guest: "{{ item }}"
state: reconfigured
vm_extra_config:
vcpu.hotadd: yes
mem.hotadd: yes
notes: Mesos Master
folder: Mesos
vm_hardware:
memory_mb: 16384
num_cpus: 2
osid: centos64Guest
scsi: paravirtual
vm_cdrom:
type: "iso"
iso_path: "CENTOS2/ISO/mesos/configdrive-{{ item }}.iso"
esxi:
datacenter: HomeLab
hostname: 192.168.1.21
with_items:
- mesosm01
- mesosm02
- mesosm03
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
Modify vm ram/cpu and ISO attached to VM
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
496600.14-13851980734504/" > /dev/null 2>&1 && sleep 0'
An exception occurred during task execution. The full traceback is:
Traceback (most recent call last):
File "/tmp/ansible_gzRBRs/ansible_module_vsphere_guest.py", line 1929, in <module>
main()
File "/tmp/ansible_gzRBRs/ansible_module_vsphere_guest.py", line 1856, in main
force=force
File "/tmp/ansible_gzRBRs/ansible_module_vsphere_guest.py", line 924, in reconfigure_vm
for dev in current_devices:
NameError: global name 'current_devices' is not defined
failed: [localhost](item=mesosm03) => {"failed": true, "invocation": {"module_name": "vsphere_guest"}, "item": "mesosm03", "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_gzRBRs/ansible_module_vsphere_guest.py\", line 1929, in <module>\n main()\n File \"/tmp/ansible_gzRBRs/ansible_module_vsphere_guest.py\", line 1856, in main\n force=force\n File \"/tmp/ansible_gzRBRs/ansible_module_vsphere_guest.py\", line 924, in reconfigure_vm\n for dev in current_devices:\nNameError: global name 'current_devices' is not defined\n", "module_stdout": "", "msg": "MODULE FAILURE", "parsed": false}
| True | Modifying VM with vsphere_guest failing with current_devices is not defined. - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
vsphere_guest
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
[core@bhudcent7 coreos]$ ansible --version
ansible 2.1.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
[core@bhudcent7 coreos]
```
##### CONFIGURATION
<!---
Only mod I have done is to disable host_key_checking
# uncomment this to disable SSH key host checking
host_key_checking = False
-->
##### OS / ENVIRONMENT
<!---
CentOS 7
-->
##### SUMMARY
When trying to perform reconfigure of exiting vm failing with global name ' current_devices' is not defined
##### STEPS TO REPRODUCE
<!---
clone template and try and modify cdrom iso image
-->
<!--- Paste example playbooks or commands between quotes below -->
```
---
- name: Create mesos VMS
hosts: localhost
connection: local
tasks:
- name: modify master vms
vsphere_guest:
vcenter_hostname: IP
username: USERNAME
password: PW
guest: "{{ item }}"
state: reconfigured
vm_extra_config:
vcpu.hotadd: yes
mem.hotadd: yes
notes: Mesos Master
folder: Mesos
vm_hardware:
memory_mb: 16384
num_cpus: 2
osid: centos64Guest
scsi: paravirtual
vm_cdrom:
type: "iso"
iso_path: "CENTOS2/ISO/mesos/configdrive-{{ item }}.iso"
esxi:
datacenter: HomeLab
hostname: 192.168.1.21
with_items:
- mesosm01
- mesosm02
- mesosm03
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
Modify vm ram/cpu and ISO attached to VM
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
496600.14-13851980734504/" > /dev/null 2>&1 && sleep 0'
An exception occurred during task execution. The full traceback is:
Traceback (most recent call last):
File "/tmp/ansible_gzRBRs/ansible_module_vsphere_guest.py", line 1929, in <module>
main()
File "/tmp/ansible_gzRBRs/ansible_module_vsphere_guest.py", line 1856, in main
force=force
File "/tmp/ansible_gzRBRs/ansible_module_vsphere_guest.py", line 924, in reconfigure_vm
for dev in current_devices:
NameError: global name 'current_devices' is not defined
failed: [localhost](item=mesosm03) => {"failed": true, "invocation": {"module_name": "vsphere_guest"}, "item": "mesosm03", "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_gzRBRs/ansible_module_vsphere_guest.py\", line 1929, in <module>\n main()\n File \"/tmp/ansible_gzRBRs/ansible_module_vsphere_guest.py\", line 1856, in main\n force=force\n File \"/tmp/ansible_gzRBRs/ansible_module_vsphere_guest.py\", line 924, in reconfigure_vm\n for dev in current_devices:\nNameError: global name 'current_devices' is not defined\n", "module_stdout": "", "msg": "MODULE FAILURE", "parsed": false}
| main | modifying vm with vsphere guest failing with current devices is not defined issue type bug report component name vsphere guest ansible version ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration only mod i have done is to disable host key checking uncomment this to disable ssh key host checking host key checking false os environment centos summary when trying to perform reconfigure of exiting vm failing with global name current devices is not defined steps to reproduce clone template and try and modify cdrom iso image name create mesos vms hosts localhost connection local tasks name modify master vms vsphere guest vcenter hostname ip username username password pw guest item state reconfigured vm extra config vcpu hotadd yes mem hotadd yes notes mesos master folder mesos vm hardware memory mb num cpus osid scsi paravirtual vm cdrom type iso iso path iso mesos configdrive item iso esxi datacenter homelab hostname with items expected results modify vm ram cpu and iso attached to vm actual results dev null sleep an exception occurred during task execution the full traceback is traceback most recent call last file tmp ansible gzrbrs ansible module vsphere guest py line in main file tmp ansible gzrbrs ansible module vsphere guest py line in main force force file tmp ansible gzrbrs ansible module vsphere guest py line in reconfigure vm for dev in current devices nameerror global name current devices is not defined failed item failed true invocation module name vsphere guest item module stderr traceback most recent call last n file tmp ansible gzrbrs ansible module vsphere guest py line in n main n file tmp ansible gzrbrs ansible module vsphere guest py line in main n force force n file tmp ansible gzrbrs ansible module vsphere guest py line in reconfigure vm n for dev in current devices nnameerror global name current devices is not defined n module stdout msg module failure parsed false | 1 
|
28,289 | 12,826,755,506 | IssuesEvent | 2020-07-06 17:08:35 | cityofaustin/atd-data-tech | https://api.github.com/repos/cityofaustin/atd-data-tech | closed | Add Font File to APPServers | Product: AMANDA Provider: LaunchIT Service: Apps Type: Enhancement Workgroup: ROW | ### Font file is needed in order for the Barcode to appear on the forms
Issue is related to #1893
**Font file needs to be added to** C:\Windows\Fonts path and possibly C:\Program Files\CSDC\WebServer\webapps\AMANDA5\images\report\font
On servers (coaamandabodev01, coaamandabotst, coaamandabotst2
Request for CTM: SCTASK0077368 | 1.0 | Add Font File to APPServers - ### Font file is needed in order for the Barcode to appear on the forms
Issue is related to #1893
**Font file needs to be added to** C:\Windows\Fonts path and possibly C:\Program Files\CSDC\WebServer\webapps\AMANDA5\images\report\font
On servers (coaamandabodev01, coaamandabotst, coaamandabotst2
Request for CTM: SCTASK0077368 | non_main | add font file to appservers font file is needed in order for the barcode to appear on the forms issue is related to font file needs to be added to c windows fonts path and possibly c program files csdc webserver webapps images report font on servers coaamandabotst request for ctm | 0 |
2,526 | 8,655,460,661 | IssuesEvent | 2018-11-27 16:00:36 | codestation/qcma | https://api.github.com/repos/codestation/qcma | closed | QCMA crashes when transferring | unmaintained | I can connect the vita and browse files fine, but QCMA just crashes without any error message as soon as I try to copy a file.
This happens both when I transfer using Wifi and USB so I believe it is a bug with QCMA itself. | True | QCMA crashes when transferring - I can connect the vita and browse files fine, but QCMA just crashes without any error message as soon as I try to copy a file.
This happens both when I transfer using Wifi and USB so I believe it is a bug with QCMA itself. | main | qcma crashes when transferring i can connect the vita and browse files fine but qcma just crashes without any error message as soon as i try to copy a file this happens both when i transfer using wifi and usb so i believe it is a bug with qcma itself | 1 |
53,428 | 22,783,632,914 | IssuesEvent | 2022-07-09 00:18:19 | dockstore/dockstore | https://api.github.com/repos/dockstore/dockstore | closed | Fetching tools for a particular Nextflow workflow throwing 500 | bug web-service review | **Describe the bug**
Fetching a particular workflow's tools throws a 500.
**To Reproduce**
Steps to reproduce the behavior:
```curl -H "Authorization: bearer token" localhost:8080/workflows/17371/tools/109203```
See 500
**Expected behavior**
It shouldn't throw a 500.
**Screenshots**

**Additional context**
See #3928, which had a related fix.
In screenshot of debugger, notice how `mainDescriptor` is an empty string, which presumably causes `processList` to be null. The fix for #3928 was to add a null check for `mainDescriptor`, maybe the fix is to also check if it's not blank?
[Webservice](https://github.com/dockstore/dockstore/commits/46e528f) - 46e528f
[UI](https://github.com/dockstore/dockstore-ui2/commits/f4399850) - 2.8.4-134-gf4399850
┆Issue is synchronized with this [Jira Story](https://ucsc-cgl.atlassian.net/browse/DOCK-2066)
┆fixVersions: Dockstore 1.12.x
┆friendlyId: DOCK-2066
┆sprint: 87 Octocat
┆taskType: Story
| 1.0 | Fetching tools for a particular Nextflow workflow throwing 500 - **Describe the bug**
Fetching a particular workflow's tools throws a 500.
**To Reproduce**
Steps to reproduce the behavior:
```curl -H "Authorization: bearer token" localhost:8080/workflows/17371/tools/109203```
See 500
**Expected behavior**
It shouldn't throw a 500.
**Screenshots**

**Additional context**
See #3928, which had a related fix.
In screenshot of debugger, notice how `mainDescriptor` is an empty string, which presumably causes `processList` to be null. The fix for #3928 was to add a null check for `mainDescriptor`, maybe the fix is to also check if it's not blank?
[Webservice](https://github.com/dockstore/dockstore/commits/46e528f) - 46e528f
[UI](https://github.com/dockstore/dockstore-ui2/commits/f4399850) - 2.8.4-134-gf4399850
┆Issue is synchronized with this [Jira Story](https://ucsc-cgl.atlassian.net/browse/DOCK-2066)
┆fixVersions: Dockstore 1.12.x
┆friendlyId: DOCK-2066
┆sprint: 87 Octocat
┆taskType: Story
| non_main | fetching tools for a particular nextflow workflow throwing describe the bug fetching a particular workflow s tools throws a to reproduce steps to reproduce the behavior curl h authorization bearer token localhost workflows tools see expected behavior it shouldn t throw a screenshots additional context see which had a related fix in screenshot of debugger notice how maindescriptor is an empty string which presumably causes processlist to be null the fix for was to add a null check for maindescriptor maybe the fix is to also check if it s not blank ┆issue is synchronized with this ┆fixversions dockstore x ┆friendlyid dock ┆sprint octocat ┆tasktype story | 0 |
4,106 | 19,488,542,728 | IssuesEvent | 2021-12-26 22:01:09 | MDAnalysis/mdanalysis | https://api.github.com/repos/MDAnalysis/mdanalysis | opened | MAINT: Python 3.10 and Windows -- a few tasks | maintainability OpSys-Windows | Based on some temporary Windows-only workarounds in gh-3461 we eventually need:
- [ ] to remove the warning filters placed on `test_easy_trigger_warning` and `test_flag_index_error`, which are apparently related to the use of `six` somewhere in our dependency chain (I would expect a library of compat shims for Python 2 to be something that we/the ecosystem can shed sooner than later...)
- [ ] to remove the `xfail` placed on `test_write_with_drivers` for `h5py`; I note that this `xfail` has cycled between usage and being commented out in the past, so the test/`h5py` clearly still needs to refine this | True | MAINT: Python 3.10 and Windows -- a few tasks - Based on some temporary Windows-only workarounds in gh-3461 we eventually need:
- [ ] to remove the warning filters placed on `test_easy_trigger_warning` and `test_flag_index_error`, which are apparently related to the use of `six` somewhere in our dependency chain (I would expect a library of compat shims for Python 2 to be something that we/the ecosystem can shed sooner than later...)
- [ ] to remove the `xfail` placed on `test_write_with_drivers` for `h5py`; I note that this `xfail` has cycled between usage and being commented out in the past, so the test/`h5py` clearly still needs to refine this | main | maint python and windows a few tasks based on some temporary windows only workarounds in gh we eventually need to remove the warning filters placed on test easy trigger warning and test flag index error which are apparently related to the use of six somewhere in our dependency chain i would expect a library of compat shims for python to be something that we the ecosystem can shed sooner than later to remove the xfail placed on test write with drivers for i note that this xfail has cycled between usage and being commented out in the past so the test clearly still needs to refine this | 1 |
3,690 | 15,057,736,236 | IssuesEvent | 2021-02-03 22:10:27 | IITIDIDX597/sp_2021_team2 | https://api.github.com/repos/IITIDIDX597/sp_2021_team2 | opened | Clear communication to supplier | Maintaining Inventory Story | As a supplier
I want seamless communication between Cabi kitchen software and my ordering system
So that orders are communicated when needed | True | Clear communication to supplier - As a supplier
I want seamless communication between Cabi kitchen software and my ordering system
So that orders are communicated when needed | main | clear communication to supplier as a supplier i want seamless communication between cabi kitchen software and my ordering system so that orders are communicated when needed | 1 |
484 | 3,770,668,200 | IssuesEvent | 2016-03-16 15:17:29 | uzh/gc3pie | https://api.github.com/repos/uzh/gc3pie | opened | Separate "platform"-dependent commands in `ShellcmdLrms` into a separate pluggable class | enhancement Maintainability | The `ShellcmdLrms` code is littered with `if`s to select which command to use depending on whether the execution site is a Linux or MacOSX machine. This approach has organically grown from previous versions of the code, but has become unreadable and will be a complete mess if we ever need to add a third supported platform.
We should refactor the code to delegate host command selection to a `Platform` class, selected via a factory method. | True | Separate "platform"-dependent commands in `ShellcmdLrms` into a separate pluggable class - The `ShellcmdLrms` code is littered with `if`s to select which command to use depending on whether the execution site is a Linux or MacOSX machine. This approach has organically grown from previous versions of the code, but has become unreadable and will be a complete mess if we ever need to add a third supported platform.
We should refactor the code to delegate host command selection to a `Platform` class, selected via a factory method. | main | separate platform dependent commands in shellcmdlrms into a separate pluggable class the shellcmdlrms code is littered with if s to select which command to use depending on whether the execution site is a linux or macosx machine this approach has organically grown from previous versions of the code but has become unreadable and will be a complete mess if we ever need to add a third supported platform we should refactor the code to delegate host command selection to a platform class selected via a factory method | 1 |
3,708 | 15,180,727,599 | IssuesEvent | 2021-02-15 01:01:58 | EMS-TU-Ilmenau/fastmat | https://api.github.com/repos/EMS-TU-Ilmenau/fastmat | closed | Error with running demos: compOmpIsta, lowRankApprox and sparseReco | maintainance polishing | Trying to run any of the demos directly from default fastmat directory (fastmat/demo) does't work and results following error:
```
File "~/git/fastmat/fastmat/__init__.py", line 62, in <module>
from .Matrix import Matrix, Hermitian, Conjugate, Transpose, flags
ModuleNotFoundError: No module named 'fastmat.Matrix'
```
But copying the demos to an independent directory and running `compOmpIsta`, `lowRankApprox` or `sparseReco` gives:
```
File "~/fastmat_demo/compOmpIsta.py", line 111, in <module>
fastmat.algorithms.OMP, mat, b, K)
AttributeError: module 'fastmat' has no attribute 'algorithms'
```
OS: MacOS Big Sur version 11.0.1
Python: v3.8.6
C-compiler: Clang 6.0 (clang-600.0.57) | True | Error with running demos: compOmpIsta, lowRankApprox and sparseReco - Trying to run any of the demos directly from default fastmat directory (fastmat/demo) does't work and results following error:
```
File "~/git/fastmat/fastmat/__init__.py", line 62, in <module>
from .Matrix import Matrix, Hermitian, Conjugate, Transpose, flags
ModuleNotFoundError: No module named 'fastmat.Matrix'
```
But copying the demos to an independent directory and running `compOmpIsta`, `lowRankApprox` or `sparseReco` gives:
```
File "~/fastmat_demo/compOmpIsta.py", line 111, in <module>
fastmat.algorithms.OMP, mat, b, K)
AttributeError: module 'fastmat' has no attribute 'algorithms'
```
OS: MacOS Big Sur version 11.0.1
Python: v3.8.6
C-compiler: Clang 6.0 (clang-600.0.57) | main | error with running demos compompista lowrankapprox and sparsereco trying to run any of the demos directly from default fastmat directory fastmat demo does t work and results following error file git fastmat fastmat init py line in from matrix import matrix hermitian conjugate transpose flags modulenotfounderror no module named fastmat matrix but copying the demos to an independent directory and running compompista lowrankapprox or sparsereco gives file fastmat demo compompista py line in fastmat algorithms omp mat b k attributeerror module fastmat has no attribute algorithms os macos big sur version python c compiler clang clang | 1 |
2,761 | 9,872,945,395 | IssuesEvent | 2019-06-22 09:41:42 | arcticicestudio/snowsaw | https://api.github.com/repos/arcticicestudio/snowsaw | opened | lint-staged | context-workflow scope-maintainability scope-stability type-feature | <p align="center"><img src="https://user-images.githubusercontent.com/7836623/48658851-01e38400-ea49-11e8-911e-d859eefe6dd5.png" width="25%" /></p>
> Must be resolved **after** #36 #37
> Must be resolved **before** #46
Integrate [lint-staged][gh-lint-staged] to run linters against staged Git files to prevent to add code that violates any style guide into the code base.
<p align="center"><img src="https://raw.githubusercontent.com/okonet/lint-staged/master/screenshots/lint-staged-prettier.gif" width="80%" /></p>
### Configuration
The configuration file `lint-staged.config.js` will be placed in the project root and includes the command that should be run for matching file extensions (globs). It will include at least the three following entries with the same order as listed here:
1. `prettier --list-different` - Run Prettier (#37) against `*.{json,md,yml}` to ensure all files are formatted correctly. The `--list-different` prints the found files that are not conform to the Prettier configuration.
3. `remark --no-stdout` - Run remark-lint (#36) against `*.md` to ensure all Markdown files are compliant to the style guide. The `--no-stdout` flag suppresses the output of the parsed file content.
## Tasks
- [ ] Install [lint-staged][npm-lint-staged] package.
- [ ] Implement `lint-staged.config.js` configuration file.
[gh-lint-staged]: https://github.com/okonet/lint-staged
[npm-lint-staged]: https://www.npmjs.com/package/lint-staged | True | lint-staged - <p align="center"><img src="https://user-images.githubusercontent.com/7836623/48658851-01e38400-ea49-11e8-911e-d859eefe6dd5.png" width="25%" /></p>
> Must be resolved **after** #36 #37
> Must be resolved **before** #46
Integrate [lint-staged][gh-lint-staged] to run linters against staged Git files to prevent to add code that violates any style guide into the code base.
<p align="center"><img src="https://raw.githubusercontent.com/okonet/lint-staged/master/screenshots/lint-staged-prettier.gif" width="80%" /></p>
### Configuration
The configuration file `lint-staged.config.js` will be placed in the project root and includes the command that should be run for matching file extensions (globs). It will include at least the three following entries with the same order as listed here:
1. `prettier --list-different` - Run Prettier (#37) against `*.{json,md,yml}` to ensure all files are formatted correctly. The `--list-different` prints the found files that are not conform to the Prettier configuration.
3. `remark --no-stdout` - Run remark-lint (#36) against `*.md` to ensure all Markdown files are compliant to the style guide. The `--no-stdout` flag suppresses the output of the parsed file content.
## Tasks
- [ ] Install [lint-staged][npm-lint-staged] package.
- [ ] Implement `lint-staged.config.js` configuration file.
[gh-lint-staged]: https://github.com/okonet/lint-staged
[npm-lint-staged]: https://www.npmjs.com/package/lint-staged | main | lint staged must be resolved after must be resolved before integrate to run linters against staged git files to prevent to add code that violates any style guide into the code base configuration the configuration file lint staged config js will be placed in the project root and includes the command that should be run for matching file extensions globs it will include at least the three following entries with the same order as listed here prettier list different run prettier against json md yml to ensure all files are formatted correctly the list different prints the found files that are not conform to the prettier configuration remark no stdout run remark lint against md to ensure all markdown files are compliant to the style guide the no stdout flag suppresses the output of the parsed file content tasks install package implement lint staged config js configuration file | 1 |
253,788 | 27,319,755,097 | IssuesEvent | 2023-02-24 18:41:54 | pustovitDmytro/json-logs | https://api.github.com/repos/pustovitDmytro/json-logs | closed | CVE-2022-0355 (High) detected in simple-get-3.1.0.tgz - autoclosed | security vulnerability | ## CVE-2022-0355 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>simple-get-3.1.0.tgz</b></p></summary>
<p>Simplest way to make http get requests. Supports HTTPS, redirects, gzip/deflate, streams in < 100 lines.</p>
<p>Library home page: <a href="https://registry.npmjs.org/simple-get/-/simple-get-3.1.0.tgz">https://registry.npmjs.org/simple-get/-/simple-get-3.1.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/simple-get/package.json</p>
<p>
Dependency Hierarchy:
- vsce-1.103.1.tgz (Root Library)
- keytar-7.7.0.tgz
- prebuild-install-6.1.4.tgz
- :x: **simple-get-3.1.0.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Exposure of Sensitive Information to an Unauthorized Actor in NPM simple-get prior to 4.0.1.
<p>Publish Date: 2022-01-26
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-0355>CVE-2022-0355</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0355">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0355</a></p>
<p>Release Date: 2022-01-26</p>
<p>Fix Resolution (simple-get): 3.1.1</p>
<p>Direct dependency fix Resolution (vsce): 2.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-0355 (High) detected in simple-get-3.1.0.tgz - autoclosed - ## CVE-2022-0355 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>simple-get-3.1.0.tgz</b></p></summary>
<p>Simplest way to make http get requests. Supports HTTPS, redirects, gzip/deflate, streams in < 100 lines.</p>
<p>Library home page: <a href="https://registry.npmjs.org/simple-get/-/simple-get-3.1.0.tgz">https://registry.npmjs.org/simple-get/-/simple-get-3.1.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/simple-get/package.json</p>
<p>
Dependency Hierarchy:
- vsce-1.103.1.tgz (Root Library)
- keytar-7.7.0.tgz
- prebuild-install-6.1.4.tgz
- :x: **simple-get-3.1.0.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Exposure of Sensitive Information to an Unauthorized Actor in NPM simple-get prior to 4.0.1.
<p>Publish Date: 2022-01-26
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-0355>CVE-2022-0355</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0355">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0355</a></p>
<p>Release Date: 2022-01-26</p>
<p>Fix Resolution (simple-get): 3.1.1</p>
<p>Direct dependency fix Resolution (vsce): 2.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve high detected in simple get tgz autoclosed cve high severity vulnerability vulnerable library simple get tgz simplest way to make http get requests supports https redirects gzip deflate streams in library home page a href path to dependency file package json path to vulnerable library node modules simple get package json dependency hierarchy vsce tgz root library keytar tgz prebuild install tgz x simple get tgz vulnerable library found in base branch master vulnerability details exposure of sensitive information to an unauthorized actor in npm simple get prior to publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution simple get direct dependency fix resolution vsce step up your open source security game with mend | 0 |
3,462 | 13,252,144,608 | IssuesEvent | 2020-08-20 04:23:33 | short-d/short | https://api.github.com/repos/short-d/short | opened | [Refactor] Remaining Storybook 6 Upgrade Items | maintainability | Leftover stuff from #996.
- [ ] Consider switching `addons-knobs` to [Storybook Controls](https://medium.com/storybookjs/storybook-controls-ce82af93e430)
- [ ] Convert story items to use Storybook Args ([source](https://medium.com/storybookjs/introducing-storybook-args-2dadcdb777cc))
- [ ] Ensure everything else from Migrations guide is checked off
- [ ] Consider adding MDX documentation to stories (should probably make this a separate issue that can be dealt with later, since it's more of a nice-to-have) ([source](https://github.com/storybookjs/storybook/tree/next/addons/docs#mdx))
- [ ] Upgrade to latest minor release | True | [Refactor] Remaining Storybook 6 Upgrade Items - Leftover stuff from #996.
- [ ] Consider switching `addons-knobs` to [Storybook Controls](https://medium.com/storybookjs/storybook-controls-ce82af93e430)
- [ ] Convert story items to use Storybook Args ([source](https://medium.com/storybookjs/introducing-storybook-args-2dadcdb777cc))
- [ ] Ensure everything else from Migrations guide is checked off
- [ ] Consider adding MDX documentation to stories (should probably make this a separate issue that can be dealt with later, since it's more of a nice-to-have) ([source](https://github.com/storybookjs/storybook/tree/next/addons/docs#mdx))
- [ ] Upgrade to latest minor release | main | remaining storybook upgrade items leftover stuff from consider switching addons knobs to convert story items to use storybook args ensure everything else from migrations guide is checked off consider adding mdx documentation to stories should probably make this a separate issue that can be dealt with later since it s more of a nice to have upgrade to latest minor release | 1 |
850 | 2,517,147,701 | IssuesEvent | 2015-01-16 12:12:51 | ajency/Foodstree | https://api.github.com/repos/ajency/Foodstree | closed | Consistent formatting required for the labels of all fields | bug Pushed to test site UI | -The edit seller page
current behaviour: Both the words for some labels are capitalised whereas some its not
Expected behaviour: Label format has to be consistent. Let only the first letter be in caps.(Except when the word demands to be in caps)

| 1.0 | Consistent formatting required for the labels of all fields - -The edit seller page
current behaviour: Both the words for some labels are capitalised whereas some its not
Expected behaviour: Label format has to be consistent. Let only the first letter be in caps.(Except when the word demands to be in caps)

| non_main | consistent formatting required for the labels of all fields the edit seller page current behaviour both the words for some labels are capitalised whereas some its not expected behaviour label format has to be consistent let only the first letter be in caps except when the word demands to be in caps | 0 |
288,065 | 31,856,957,687 | IssuesEvent | 2023-09-15 08:10:57 | nidhi7598/linux-4.19.72_CVE-2022-3564 | https://api.github.com/repos/nidhi7598/linux-4.19.72_CVE-2022-3564 | closed | CVE-2022-41218 (Medium) detected in linuxlinux-4.19.294 - autoclosed | Mend: dependency security vulnerability | ## CVE-2022-41218 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.294</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/nidhi7598/linux-4.19.72_CVE-2022-3564/commit/454c7dacf6fa9a6de86d4067f5a08f25cffa519b">454c7dacf6fa9a6de86d4067f5a08f25cffa519b</a></p>
<p>Found in base branch: <b>main</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/media/dvb-core/dmxdev.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/media/dvb-core/dmxdev.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
In drivers/media/dvb-core/dmxdev.c in the Linux kernel through 5.19.10, there is a use-after-free caused by refcount races, affecting dvb_demux_open and dvb_dmxdev_release.
<p>Publish Date: 2022-09-21
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-41218>CVE-2022-41218</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2022-41218">https://www.linuxkernelcves.com/cves/CVE-2022-41218</a></p>
<p>Release Date: 2022-09-21</p>
<p>Fix Resolution: v4.14.303,v4.19.270,v5.4.229,v5.10.163,v5.15.87,v6.0.18,v6.1.4,v6.2-rc1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-41218 (Medium) detected in linuxlinux-4.19.294 - autoclosed - ## CVE-2022-41218 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.294</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/nidhi7598/linux-4.19.72_CVE-2022-3564/commit/454c7dacf6fa9a6de86d4067f5a08f25cffa519b">454c7dacf6fa9a6de86d4067f5a08f25cffa519b</a></p>
<p>Found in base branch: <b>main</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/media/dvb-core/dmxdev.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/media/dvb-core/dmxdev.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
In drivers/media/dvb-core/dmxdev.c in the Linux kernel through 5.19.10, there is a use-after-free caused by refcount races, affecting dvb_demux_open and dvb_dmxdev_release.
<p>Publish Date: 2022-09-21
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-41218>CVE-2022-41218</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2022-41218">https://www.linuxkernelcves.com/cves/CVE-2022-41218</a></p>
<p>Release Date: 2022-09-21</p>
<p>Fix Resolution: v4.14.303,v4.19.270,v5.4.229,v5.10.163,v5.15.87,v6.0.18,v6.1.4,v6.2-rc1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve medium detected in linuxlinux autoclosed cve medium severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch main vulnerable source files drivers media dvb core dmxdev c drivers media dvb core dmxdev c vulnerability details in drivers media dvb core dmxdev c in the linux kernel through there is a use after free caused by refcount races affecting dvb demux open and dvb dmxdev release publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend | 0 |