| Unnamed: 0 (int64) | id (float64) | type (string) | created_at (string) | repo (string) | repo_url (string) | action (string) | title (string) | labels (string) | body (string) | index (string) | text_combine (string) | label (string) | text (string) | binary_label (int64) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
6,432 | 4,286,462,340 | IssuesEvent | 2016-07-16 04:32:00 | tgstation/tgstation | https://api.github.com/repos/tgstation/tgstation | closed | Crafting menu now annoying to use. | Bug tgui Usability | Each items part is now overly large and the crafting menu now feels overly clunky to and annoying to use. And a few times when I tried to go to the foods subsection, the crafting menu just decided to stop working and crashed. | True | Crafting menu now annoying to use. - Each items part is now overly large and the crafting menu now feels overly clunky to and annoying to use. And a few times when I tried to go to the foods subsection, the crafting menu just decided to stop working and crashed. | usab | crafting menu now annoying to use each items part is now overly large and the crafting menu now feels overly clunky to and annoying to use and a few times when i tried to go to the foods subsection the crafting menu just decided to stop working and crashed | 1 |
12,494 | 7,919,284,169 | IssuesEvent | 2018-07-04 16:07:04 | godotengine/godot | https://api.github.com/repos/godotengine/godot | closed | Godot is randomly unable to write to config files and file cache | bug platform:windows topic:editor usability | Godot 3.0 beta 1 and master and 2.1.4
Windows 10 64 bits
Every so often, under pretty random occasions, I see error messages in which I see Godot is unable to write to some config file or file cache, things like `Unable to write to file <path>, file in use, locked or lacking permissions`.
Example here, I got this one 1 second after opening a scene:

Another one with file cache, when I use Ctrl+S:

| True | Godot is randomly unable to write to config files and file cache - Godot 3.0 beta 1 and master and 2.1.4
Windows 10 64 bits
Every so often, under pretty random occasions, I see error messages in which I see Godot is unable to write to some config file or file cache, things like `Unable to write to file <path>, file in use, locked or lacking permissions`.
Example here, I got this one 1 second after opening a scene:

Another one with file cache, when I use Ctrl+S:

| usab | godot is randomly unable to write to config files and file cache godot beta and master and windows bits every so often under pretty random occasions i see error messages in which i see godot is unable to write to some config file or file cache things like unable to write to file file in use locked or lacking permissions example here i got this one second after opening a scene another one with file cache when i use ctrl s | 1 |
314,742 | 9,602,223,001 | IssuesEvent | 2019-05-10 14:07:08 | openpracticelibrary/openpracticelibrary | https://api.github.com/repos/openpracticelibrary/openpracticelibrary | closed | Adding a new practice when your author/perspective is not merged will fail the build | bug priority-high technical enhancement | **What is this about**
The CMS workflow will create a new author on a separate branch so if you in parallel create a new practice, the build will fail for this until the author is merged because a page _has_ to be associated with a given primary author.
**The solution**
1. better way to add new author to avoid this race condition
or
2. better error handling/ guard in the Hugo template. | 1.0 | Adding a new practice when your author/perspective is not merged will fail the build - **What is this about**
The CMS workflow will create a new author on a separate branch so if you in parallel create a new practice, the build will fail for this until the author is merged because a page _has_ to be associated with a given primary author.
**The solution**
1. better way to add new author to avoid this race condition
or
2. better error handling/ guard in the Hugo template. | non_usab | adding a new practice when your author perspective is not merged will fail the build what is this about the cms workflow will create a new author on a separate branch so if you in parallel create a new practice the build will fail for this until the author is merged because a page has to be associated with a given primary author the solution better way to add new author to avoid this race condition or better error handling guard in the hugo template | 0 |
16,868 | 11,439,942,222 | IssuesEvent | 2020-02-05 08:36:50 | clarity-h2020/marketplace | https://api.github.com/repos/clarity-h2020/marketplace | opened | References Screen Usability Issues | BB: Marketplace Usability bug | > You can reference here existing projects which use your offer as a use case. You can write a short description about the project and the use of the offer in the text field. If there is no project on the marketplace to reference yet you can add a project here [**here**](https://marketplace-dev.myclimateservices.eu/project/add).
Following [the link](https://marketplace-dev.myclimateservices.eu/project/add) `https://marketplace-dev.myclimateservices.eu/project/add` behind the second *here* results in **404 Page not found**. | True | References Screen Usability Issues - > You can reference here existing projects which use your offer as a use case. You can write a short description about the project and the use of the offer in the text field. If there is no project on the marketplace to reference yet you can add a project here [**here**](https://marketplace-dev.myclimateservices.eu/project/add).
Following [the link](https://marketplace-dev.myclimateservices.eu/project/add) `https://marketplace-dev.myclimateservices.eu/project/add` behind the second *here* results in **404 Page not found**. | usab | references screen usability issues you can reference here existing projects which use your offer as a use case you can write a short description about the project and the use of the offer in the text field if there is no project on the marketplace to reference yet you can add a project here following behind the second here results in page not found | 1 |
19,843 | 14,640,903,802 | IssuesEvent | 2020-12-25 04:24:12 | kubernetes/kubernetes | https://api.github.com/repos/kubernetes/kubernetes | closed | shareProcessNamespace will cause the container no response, and restart docker will waiting for "Loading containers" | kind/bug lifecycle/rotten sig/node sig/usability | <!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks!
If the matter is security related, please disclose it privately via https://kubernetes.io/security/
-->
**What happened**:
shareProcessNamespace will cause the container no response, and restart docker will waiting for "Loading containers"
**What you expected to happen**:
container has response, and restart docker no problem.
**How to reproduce it (as minimally and precisely as possible)**:
1. Create nginx pod as follows:
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
app: nginx
name: nginx
spec:
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
name: nginx
spec:
shareProcessNamespace: true
nodeName: master1
containers:
- image: registry.icp.com:5000/library/common/nginx-amd64:1.17.5
name: nginx
```
2. Find the `nginx: master process nginx -g daemon off` process, and kill it
```shell
root@xuewei81-tgquxn9spy-master-0:~/xjs# kubectl get pod -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-699575b7df-2bxfd 1/1 Running 0 7s 10.151.161.19 master1 <none> <none>
root@xuewei81-tgquxn9spy-master-0:~/xjs# docker ps | grep nginx-699575b7df-2bxfd
6e2231761cbc 540a289bab6c "nginx -g 'daemon of…" 14 seconds ago Up 12 seconds k8s_nginx_nginx-699575b7df-2bxfd_default_24c7f7b4-b064-11ea-94b0-fa163e279982_0
849f76682e0a registry.icp.com:5000/library/cke/kubernetes/pause-amd64:3.1 "/pause" 17 seconds ago Up 16 seconds k8s_POD_nginx-699575b7df-2bxfd_default_24c7f7b4-b064-11ea-94b0-fa163e279982_0
root@xuewei81-tgquxn9spy-master-0:~/xjs#
root@xuewei81-tgquxn9spy-master-0:~/xjs# docker inspect 6e2231761cbc | grep -i pid
"Pid": 2119,
"PidMode": "container:849f76682e0a322d0109250d0d5eb3767248f18c2ee7457593729339fb400eef",
"PidsLimit": 0,
root@xuewei81-tgquxn9spy-master-0:~/xjs# ps -ef | grep 2119 | grep -v color
root 2119 2093 0 14:31 ? 00:00:00 nginx: master process nginx -g daemon off;
systemd+ 2152 2119 0 14:31 ? 00:00:00 nginx: worker process
```
Then, the 2152 's parent process will become "/pause"
```shell
root@xuewei81-tgquxn9spy-master-0:~/xjs# kill -9 2119
root@xuewei81-tgquxn9spy-master-0:~/xjs#
root@xuewei81-tgquxn9spy-master-0:~/xjs# ps -f 2152
UID PID PPID C STIME TTY STAT TIME CMD
systemd+ 2152 1897 0 14:31 ? S 0:00 nginx: worker process
root@xuewei81-tgquxn9spy-master-0:~/xjs# ps -f 1897
UID PID PPID C STIME TTY STAT TIME CMD
root 1897 1863 0 14:31 ? Ss 0:00 /pause
root@xuewei81-tgquxn9spy-master-0:~/xjs#
```
3. At this moment, the commands "docker inspect" or "docker exec" or "docker logs" will no response:
```shell
root@xuewei81-tgquxn9spy-master-0:~/xjs# docker ps | grep nginx-699575b7df-2bxfd
6e2231761cbc 540a289bab6c "nginx -g 'daemon of…" About a minute ago Up About a minute k8s_nginx_nginx-699575b7df-2bxfd_default_24c7f7b4-b064-11ea-94b0-fa163e279982_0
849f76682e0a registry.icp.com:5000/library/cke/kubernetes/pause-amd64:3.1 "/pause" About a minute ago Up About a minute k8s_POD_nginx-699575b7df-2bxfd_default_24c7f7b4-b064-11ea-94b0-fa163e279982_0
root@xuewei81-tgquxn9spy-master-0:~/xjs# docker inspect 6e2231761cbc
^C
root@xuewei81-tgquxn9spy-master-0:~/xjs# docker exec -it 6e2231761cbc sh
^C
root@xuewei81-tgquxn9spy-master-0:~/xjs# docker logs 6e2231761cbc
^C
```
4. And it will wait for "Loading containers" if restart docker:
```shell
root@xuewei81-tgquxn9spy-master-0:~/xjs# systemctl restart docker
Job for docker.service failed because a timeout was exceeded.
See "systemctl status docker.service" and "journalctl -xe" for details.
Jun 17 14:33:55 xuewei81-tgquxn9spy-master-0 dockerd[2814]: time="2020-06-17T14:33:55.525517831+08:00" level=info msg="Loading containers: start."
Jun 17 14:34:55 xuewei81-tgquxn9spy-master-0 systemd[1]: docker.service: Start operation timed out. Terminating.
Jun 17 14:34:55 xuewei81-tgquxn9spy-master-0 dockerd[2814]: time="2020-06-17T14:34:55.503401172+08:00" level=info msg="Processing signal 'terminated'"
```
5. The docker will restart successfully only I kill the `containerd-shim` process
```shell
root@xuewei81-tgquxn9spy-master-0:~# ps -ef | grep 6e2231761cbc | grep containerd-shim
root 2093 14254 0 14:31 ? 00:00:00 containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/6e2231761cbc4b6ceff36dbcc4cfae67530377616aed47ba34cf0d057f37301d -address /run/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc -systemd-cgroup
root@xuewei81-tgquxn9spy-master-0:~# kill -9 2093
root@xuewei81-tgquxn9spy-master-0:~#
root@xuewei81-tgquxn9spy-master-0:~# journalctl -u docker -f
-- Logs begin at Wed 2020-01-01 11:46:34 CST. --
Jun 17 14:35:36 xuewei81-tgquxn9spy-master-0 dockerd[3409]: time="2020-06-17T14:35:36.054660644+08:00" level=warning msg="Your kernel does not support cgroup rt runtime"
Jun 17 14:35:36 xuewei81-tgquxn9spy-master-0 dockerd[3409]: time="2020-06-17T14:35:36.054673960+08:00" level=warning msg="Your kernel does not support cgroup blkio weight"
Jun 17 14:35:36 xuewei81-tgquxn9spy-master-0 dockerd[3409]: time="2020-06-17T14:35:36.054684371+08:00" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Jun 17 14:35:36 xuewei81-tgquxn9spy-master-0 dockerd[3409]: time="2020-06-17T14:35:36.055271704+08:00" level=info msg="Loading containers: start."
Jun 17 14:35:37 xuewei81-tgquxn9spy-master-0 dockerd[3409]: time="2020-06-17T14:35:37.806790934+08:00" level=info msg="There are old running containers, the network config will not take affect"
Jun 17 14:35:39 xuewei81-tgquxn9spy-master-0 dockerd[3409]: time="2020-06-17T14:35:39.038308094+08:00" level=info msg="Loading containers: done."
Jun 17 14:35:39 xuewei81-tgquxn9spy-master-0 dockerd[3409]: time="2020-06-17T14:35:39.178666784+08:00" level=info msg="Docker daemon" commit=0dd43dd graphdriver(s)=overlay2 version=18.09.8
Jun 17 14:35:39 xuewei81-tgquxn9spy-master-0 dockerd[3409]: time="2020-06-17T14:35:39.178815965+08:00" level=info msg="Daemon has completed initialization"
Jun 17 14:35:39 xuewei81-tgquxn9spy-master-0 dockerd[3409]: time="2020-06-17T14:35:39.258160782+08:00" level=info msg="API listen on /var/run/docker.sock"
Jun 17 14:35:39 xuewei81-tgquxn9spy-master-0 systemd[1]: Started Docker Application Container Engine.
```
**Anything else we need to know?**:
**Environment**:
- Kubernetes version (use `kubectl version`):
1.14.3
- Cloud provider or hardware configuration:
- OS (e.g: `cat /etc/os-release`):
```shell
root@xuewei81-tgquxn9spy-master-0:~# cat /etc/os-release
NAME="Ubuntu"
VERSION="18.04.3 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.3 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
```
- Kernel (e.g. `uname -a`):
```shell
root@xuewei81-tgquxn9spy-master-0:~# uname -a
Linux xuewei81-tgquxn9spy-master-0 5.0.0-29-generic #31~18.04.1-Ubuntu SMP Thu Sep 12 18:29:21 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
```
- Install tools:
- Network plugin and version (if this is a network-related bug):
- Others:
My docker version is:
```shell
root@xuewei81-tgquxn9spy-master-0:~# docker version
Client:
Version: 18.09.8
API version: 1.39
Go version: go1.10.8
Git commit: 0dd43dd87f
Built: Wed Jul 17 17:41:19 2019
OS/Arch: linux/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 18.09.8
API version: 1.39 (minimum version 1.12)
Go version: go1.10.8
Git commit: 0dd43dd
Built: Wed Jul 17 17:07:25 2019
OS/Arch: linux/amd64
Experimental: false
```
| True | shareProcessNamespace will cause the container no response, and restart docker will waiting for "Loading containers" - <!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks!
If the matter is security related, please disclose it privately via https://kubernetes.io/security/
-->
**What happened**:
shareProcessNamespace will cause the container no response, and restart docker will waiting for "Loading containers"
**What you expected to happen**:
container has response, and restart docker no problem.
**How to reproduce it (as minimally and precisely as possible)**:
1. Create nginx pod as follows:
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
app: nginx
name: nginx
spec:
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
name: nginx
spec:
shareProcessNamespace: true
nodeName: master1
containers:
- image: registry.icp.com:5000/library/common/nginx-amd64:1.17.5
name: nginx
```
2. Find the `nginx: master process nginx -g daemon off` process, and kill it
```shell
root@xuewei81-tgquxn9spy-master-0:~/xjs# kubectl get pod -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-699575b7df-2bxfd 1/1 Running 0 7s 10.151.161.19 master1 <none> <none>
root@xuewei81-tgquxn9spy-master-0:~/xjs# docker ps | grep nginx-699575b7df-2bxfd
6e2231761cbc 540a289bab6c "nginx -g 'daemon of…" 14 seconds ago Up 12 seconds k8s_nginx_nginx-699575b7df-2bxfd_default_24c7f7b4-b064-11ea-94b0-fa163e279982_0
849f76682e0a registry.icp.com:5000/library/cke/kubernetes/pause-amd64:3.1 "/pause" 17 seconds ago Up 16 seconds k8s_POD_nginx-699575b7df-2bxfd_default_24c7f7b4-b064-11ea-94b0-fa163e279982_0
root@xuewei81-tgquxn9spy-master-0:~/xjs#
root@xuewei81-tgquxn9spy-master-0:~/xjs# docker inspect 6e2231761cbc | grep -i pid
"Pid": 2119,
"PidMode": "container:849f76682e0a322d0109250d0d5eb3767248f18c2ee7457593729339fb400eef",
"PidsLimit": 0,
root@xuewei81-tgquxn9spy-master-0:~/xjs# ps -ef | grep 2119 | grep -v color
root 2119 2093 0 14:31 ? 00:00:00 nginx: master process nginx -g daemon off;
systemd+ 2152 2119 0 14:31 ? 00:00:00 nginx: worker process
```
Then, the 2152 's parent process will become "/pause"
```shell
root@xuewei81-tgquxn9spy-master-0:~/xjs# kill -9 2119
root@xuewei81-tgquxn9spy-master-0:~/xjs#
root@xuewei81-tgquxn9spy-master-0:~/xjs# ps -f 2152
UID PID PPID C STIME TTY STAT TIME CMD
systemd+ 2152 1897 0 14:31 ? S 0:00 nginx: worker process
root@xuewei81-tgquxn9spy-master-0:~/xjs# ps -f 1897
UID PID PPID C STIME TTY STAT TIME CMD
root 1897 1863 0 14:31 ? Ss 0:00 /pause
root@xuewei81-tgquxn9spy-master-0:~/xjs#
```
3. At this moment, the commands "docker inspect" or "docker exec" or "docker logs" will no response:
```shell
root@xuewei81-tgquxn9spy-master-0:~/xjs# docker ps | grep nginx-699575b7df-2bxfd
6e2231761cbc 540a289bab6c "nginx -g 'daemon of…" About a minute ago Up About a minute k8s_nginx_nginx-699575b7df-2bxfd_default_24c7f7b4-b064-11ea-94b0-fa163e279982_0
849f76682e0a registry.icp.com:5000/library/cke/kubernetes/pause-amd64:3.1 "/pause" About a minute ago Up About a minute k8s_POD_nginx-699575b7df-2bxfd_default_24c7f7b4-b064-11ea-94b0-fa163e279982_0
root@xuewei81-tgquxn9spy-master-0:~/xjs# docker inspect 6e2231761cbc
^C
root@xuewei81-tgquxn9spy-master-0:~/xjs# docker exec -it 6e2231761cbc sh
^C
root@xuewei81-tgquxn9spy-master-0:~/xjs# docker logs 6e2231761cbc
^C
```
4. And it will wait for "Loading containers" if restart docker:
```shell
root@xuewei81-tgquxn9spy-master-0:~/xjs# systemctl restart docker
Job for docker.service failed because a timeout was exceeded.
See "systemctl status docker.service" and "journalctl -xe" for details.
Jun 17 14:33:55 xuewei81-tgquxn9spy-master-0 dockerd[2814]: time="2020-06-17T14:33:55.525517831+08:00" level=info msg="Loading containers: start."
Jun 17 14:34:55 xuewei81-tgquxn9spy-master-0 systemd[1]: docker.service: Start operation timed out. Terminating.
Jun 17 14:34:55 xuewei81-tgquxn9spy-master-0 dockerd[2814]: time="2020-06-17T14:34:55.503401172+08:00" level=info msg="Processing signal 'terminated'"
```
5. The docker will restart successfully only I kill the `containerd-shim` process
```shell
root@xuewei81-tgquxn9spy-master-0:~# ps -ef | grep 6e2231761cbc | grep containerd-shim
root 2093 14254 0 14:31 ? 00:00:00 containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/6e2231761cbc4b6ceff36dbcc4cfae67530377616aed47ba34cf0d057f37301d -address /run/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc -systemd-cgroup
root@xuewei81-tgquxn9spy-master-0:~# kill -9 2093
root@xuewei81-tgquxn9spy-master-0:~#
root@xuewei81-tgquxn9spy-master-0:~# journalctl -u docker -f
-- Logs begin at Wed 2020-01-01 11:46:34 CST. --
Jun 17 14:35:36 xuewei81-tgquxn9spy-master-0 dockerd[3409]: time="2020-06-17T14:35:36.054660644+08:00" level=warning msg="Your kernel does not support cgroup rt runtime"
Jun 17 14:35:36 xuewei81-tgquxn9spy-master-0 dockerd[3409]: time="2020-06-17T14:35:36.054673960+08:00" level=warning msg="Your kernel does not support cgroup blkio weight"
Jun 17 14:35:36 xuewei81-tgquxn9spy-master-0 dockerd[3409]: time="2020-06-17T14:35:36.054684371+08:00" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Jun 17 14:35:36 xuewei81-tgquxn9spy-master-0 dockerd[3409]: time="2020-06-17T14:35:36.055271704+08:00" level=info msg="Loading containers: start."
Jun 17 14:35:37 xuewei81-tgquxn9spy-master-0 dockerd[3409]: time="2020-06-17T14:35:37.806790934+08:00" level=info msg="There are old running containers, the network config will not take affect"
Jun 17 14:35:39 xuewei81-tgquxn9spy-master-0 dockerd[3409]: time="2020-06-17T14:35:39.038308094+08:00" level=info msg="Loading containers: done."
Jun 17 14:35:39 xuewei81-tgquxn9spy-master-0 dockerd[3409]: time="2020-06-17T14:35:39.178666784+08:00" level=info msg="Docker daemon" commit=0dd43dd graphdriver(s)=overlay2 version=18.09.8
Jun 17 14:35:39 xuewei81-tgquxn9spy-master-0 dockerd[3409]: time="2020-06-17T14:35:39.178815965+08:00" level=info msg="Daemon has completed initialization"
Jun 17 14:35:39 xuewei81-tgquxn9spy-master-0 dockerd[3409]: time="2020-06-17T14:35:39.258160782+08:00" level=info msg="API listen on /var/run/docker.sock"
Jun 17 14:35:39 xuewei81-tgquxn9spy-master-0 systemd[1]: Started Docker Application Container Engine.
```
**Anything else we need to know?**:
**Environment**:
- Kubernetes version (use `kubectl version`):
1.14.3
- Cloud provider or hardware configuration:
- OS (e.g: `cat /etc/os-release`):
```shell
root@xuewei81-tgquxn9spy-master-0:~# cat /etc/os-release
NAME="Ubuntu"
VERSION="18.04.3 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.3 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
```
- Kernel (e.g. `uname -a`):
```shell
root@xuewei81-tgquxn9spy-master-0:~# uname -a
Linux xuewei81-tgquxn9spy-master-0 5.0.0-29-generic #31~18.04.1-Ubuntu SMP Thu Sep 12 18:29:21 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
```
- Install tools:
- Network plugin and version (if this is a network-related bug):
- Others:
My docker version is:
```shell
root@xuewei81-tgquxn9spy-master-0:~# docker version
Client:
Version: 18.09.8
API version: 1.39
Go version: go1.10.8
Git commit: 0dd43dd87f
Built: Wed Jul 17 17:41:19 2019
OS/Arch: linux/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 18.09.8
API version: 1.39 (minimum version 1.12)
Go version: go1.10.8
Git commit: 0dd43dd
Built: Wed Jul 17 17:07:25 2019
OS/Arch: linux/amd64
Experimental: false
```
| usab | shareprocessnamespace will cause the container no response and restart docker will waiting for loading containers please use this template while reporting a bug and provide as much info as possible not doing so may result in your bug not being addressed in a timely manner thanks if the matter is security related please disclose it privately via what happened shareprocessnamespace will cause the container no response and restart docker will waiting for loading containers what you expected to happen container has response and restart docker no problem how to reproduce it as minimally and precisely as possible create nginx pod as follows yaml apiversion extensions kind deployment metadata labels app nginx name nginx spec selector matchlabels app nginx template metadata labels app nginx name nginx spec shareprocessnamespace true nodename containers image registry icp com library common nginx name nginx find the nginx master process nginx g daemon off process and kill it shell root master xjs kubectl get pod owide name ready status restarts age ip node nominated node readiness gates nginx running root master xjs docker ps grep nginx nginx g daemon of… seconds ago up seconds nginx nginx default registry icp com library cke kubernetes pause pause seconds ago up seconds pod nginx default root master xjs root master xjs docker inspect grep i pid pid pidmode container pidslimit root master xjs ps ef grep grep v color root nginx master process nginx g daemon off systemd nginx worker process then the s parent process will become pause shell root master xjs kill root master xjs root master xjs ps f uid pid ppid c stime tty stat time cmd systemd s nginx worker process root master xjs ps f uid pid ppid c stime tty stat time cmd root ss pause root master xjs at this moment the commands docker inspect or docker exec or docker logs will no response shell root master xjs docker ps grep nginx nginx g daemon of… about a minute ago up about a minute nginx nginx default registry 
icp com library cke kubernetes pause pause about a minute ago up about a minute pod nginx default root master xjs docker inspect c root master xjs docker exec it sh c root master xjs docker logs c and it will wait for loading containers if restart docker shell root master xjs systemctl restart docker job for docker service failed because a timeout was exceeded see systemctl status docker service and journalctl xe for details jun master dockerd time level info msg loading containers start jun master systemd docker service start operation timed out terminating jun master dockerd time level info msg processing signal terminated the docker will restart successfully only i kill the containerd shim process shell root master ps ef grep grep containerd shim root containerd shim namespace moby workdir var lib containerd io containerd runtime linux moby address run containerd containerd sock containerd binary usr bin containerd runtime root var run docker runtime runc systemd cgroup root master kill root master root master journalctl u docker f logs begin at wed cst jun master dockerd time level warning msg your kernel does not support cgroup rt runtime jun master dockerd time level warning msg your kernel does not support cgroup blkio weight jun master dockerd time level warning msg your kernel does not support cgroup blkio weight device jun master dockerd time level info msg loading containers start jun master dockerd time level info msg there are old running containers the network config will not take affect jun master dockerd time level info msg loading containers done jun master dockerd time level info msg docker daemon commit graphdriver s version jun master dockerd time level info msg daemon has completed initialization jun master dockerd time level info msg api listen on var run docker sock jun master systemd started docker application container engine anything else we need to know environment kubernetes version use kubectl version cloud provider or hardware 
configuration os e g cat etc os release shell root master cat etc os release name ubuntu version lts bionic beaver id ubuntu id like debian pretty name ubuntu lts version id home url support url bug report url privacy policy url version codename bionic ubuntu codename bionic kernel e g uname a shell root master uname a linux master generic ubuntu smp thu sep utc gnu linux install tools network plugin and version if this is a network related bug others my docker version is shell root master docker version client version api version go version git commit built wed jul os arch linux experimental false server docker engine community engine version api version minimum version go version git commit built wed jul os arch linux experimental false | 1 |
4,444 | 6,617,239,869 | IssuesEvent | 2017-09-21 00:11:03 | rancher/rancher | https://api.github.com/repos/rancher/rancher | closed | Catalog configuration file changes | area/catalog-service kind/enhancement status/resolved status/to-test | 1. Support `template.yml` in addition to `config.yml`
2. Support `icon.extension` and `icon-example.extension` in addition to `catalogIcon.extension` and `catalogIcon-example.extension`
3. Support `compose.yml` in addition to `docker-compose.yml`. It should support all keys in `docker-compose.yml` and `rancher-compose.yml`.
3. Support `template-version.yml` in addition to `rancher-compose.yml`. `template-version.yml` should only contain the keys in the `.catalog` section of `rancher-compose.yml`, but does not require the `.catalog`.
4. Add in `default_version` in `template.yml`, which should replace `version`.
4. Some keys in `template.yml/config.yml` use camel case rather than underscore separated. Both should be supported. | 1.0 | Catalog configuration file changes - 1. Support `template.yml` in addition to `config.yml`
2. Support `icon.extension` and `icon-example.extension` in addition to `catalogIcon.extension` and `catalogIcon-example.extension`
3. Support `compose.yml` in addition to `docker-compose.yml`. It should support all keys in `docker-compose.yml` and `rancher-compose.yml`.
3. Support `template-version.yml` in addition to `rancher-compose.yml`. `template-version.yml` should only contain the keys in the `.catalog` section of `rancher-compose.yml`, but does not require the `.catalog`.
4. Add in `default_version` in `template.yml`, which should replace `version`.
4. Some keys in `template.yml/config.yml` use camel case rather than underscore separated. Both should be supported. | non_usab | catalog configuration file changes support template yml in addition to config yml support icon extension and icon example extension in addition to catalogicon extension and catalogicon example extension support compose yml in addition to docker compose yml it should support all keys in docker compose yml and rancher compose yml support template version yml in addition to rancher compose yml template version yml should only contain the keys in the catalog section of rancher compose yml but does not require the catalog add in default version in template yml which should replace version some keys in template yml config yml use camel case rather than underscore separated both should be supported | 0 |
599,556 | 18,276,921,350 | IssuesEvent | 2021-10-04 20:02:22 | dtcenter/METplotpy | https://api.github.com/repos/dtcenter/METplotpy | opened | Revision series for MODE-TD | type: bug priority: high alert: NEED ACCOUNT KEY alert: NEED MORE DEFINITION alert: NEED PROJECT ASSIGNMENT METplotpy: Plots |
## Describe the Problem ##
MODE-TD Revision box plot is empty. Need to fix it and match the Rscript version.
This is the XML:
[plot_20211004_134424.xml.txt](https://github.com/dtcenter/METplotpy/files/7280968/plot_20211004_134424.xml.txt)
### Expected Behavior ###
Python and Rscript MODE-TD Revision plots should be similar
### Environment ###
Describe your runtime environment:
*1. Machine: (e.g. HPC name, Linux Workstation, Mac Laptop)*
*2. OS: (e.g. RedHat Linux, MacOS)*
*3. Software version number(s)*
### To Reproduce ###
Describe the steps to reproduce the behavior:
*1. Go to '...'*
*2. Click on '....'*
*3. Scroll down to '....'*
*4. See error*
*Post relevant sample data following these instructions:*
*https://dtcenter.org/community-code/model-evaluation-tools-met/met-help-desk#ftp*
### Relevant Deadlines ###
*List relevant project deadlines here or state NONE.*
### Funding Source ###
*Define the source of funding and account keys here or state NONE.*
## Define the Metadata ##
### Assignee ###
- [ ] Select **engineer(s)** or **no engineer** required
- [ ] Select **scientist(s)** or **no scientist** required
### Labels ###
- [ ] Select **component(s)**
- [ ] Select **priority**
- [ ] Select **requestor(s)**
### Projects and Milestone ###
- [ ] Select **Organization** level **Project** for support of the current coordinated release
- [ ] Select **Repository** level **Project** for development toward the next official release or add **alert: NEED PROJECT ASSIGNMENT** label
- [ ] Select **Milestone** as the next bugfix version
## Define Related Issue(s) ##
Consider the impact to the other METplus components.
- [ ] [METplus](https://github.com/dtcenter/METplus/issues/new/choose), [MET](https://github.com/dtcenter/MET/issues/new/choose), [METdatadb](https://github.com/dtcenter/METdatadb/issues/new/choose), [METviewer](https://github.com/dtcenter/METviewer/issues/new/choose), [METexpress](https://github.com/dtcenter/METexpress/issues/new/choose), [METcalcpy](https://github.com/dtcenter/METcalcpy/issues/new/choose), [METplotpy](https://github.com/dtcenter/METplotpy/issues/new/choose)
## Bugfix Checklist ##
See the [METplus Workflow](https://metplus.readthedocs.io/en/latest/Contributors_Guide/github_workflow.html) for details.
- [ ] Complete the issue definition above, including the **Time Estimate** and **Funding Source**.
- [ ] Fork this repository or create a branch of **main_\<Version>**.
Branch name: `bugfix_<Issue Number>_main_<Version>_<Description>`
- [ ] Fix the bug and test your changes.
- [ ] Add/update log messages for easier debugging.
- [ ] Add/update unit tests.
- [ ] Add/update documentation.
- [ ] Push local changes to GitHub.
- [ ] Submit a pull request to merge into **main_\<Version>**.
Pull request: `bugfix <Issue Number> main_<Version> <Description>`
- [ ] Define the pull request metadata, as permissions allow.
Select: **Reviewer(s)** and **Linked issues**
Select: **Organization** level software support **Project** for the current coordinated release
Select: **Milestone** as the next bugfix version
- [ ] Iterate until the reviewer(s) accept and merge your changes.
- [ ] Delete your fork or branch.
- [ ] Complete the steps above to fix the bug on the **develop** branch.
Branch name: `bugfix_<Issue Number>_develop_<Description>`
Pull request: `bugfix <Issue Number> develop <Description>`
Select: **Reviewer(s)** and **Linked issues**
Select: **Repository** level development cycle **Project** for the next official release
Select: **Milestone** as the next official version
- [ ] Close this issue.
| 1.0 | Revision series for MODE-TD -
## Describe the Problem ##
MODE-TD Revision box plot is empty. Need to fix it and match the Rscript version.
This is the XML:
[plot_20211004_134424.xml.txt](https://github.com/dtcenter/METplotpy/files/7280968/plot_20211004_134424.xml.txt)
### Expected Behavior ###
Python and Rscript MODE-TD Revision plots should be similar
### Environment ###
Describe your runtime environment:
*1. Machine: (e.g. HPC name, Linux Workstation, Mac Laptop)*
*2. OS: (e.g. RedHat Linux, MacOS)*
*3. Software version number(s)*
### To Reproduce ###
Describe the steps to reproduce the behavior:
*1. Go to '...'*
*2. Click on '....'*
*3. Scroll down to '....'*
*4. See error*
*Post relevant sample data following these instructions:*
*https://dtcenter.org/community-code/model-evaluation-tools-met/met-help-desk#ftp*
### Relevant Deadlines ###
*List relevant project deadlines here or state NONE.*
### Funding Source ###
*Define the source of funding and account keys here or state NONE.*
## Define the Metadata ##
### Assignee ###
- [ ] Select **engineer(s)** or **no engineer** required
- [ ] Select **scientist(s)** or **no scientist** required
### Labels ###
- [ ] Select **component(s)**
- [ ] Select **priority**
- [ ] Select **requestor(s)**
### Projects and Milestone ###
- [ ] Select **Organization** level **Project** for support of the current coordinated release
- [ ] Select **Repository** level **Project** for development toward the next official release or add **alert: NEED PROJECT ASSIGNMENT** label
- [ ] Select **Milestone** as the next bugfix version
## Define Related Issue(s) ##
Consider the impact to the other METplus components.
- [ ] [METplus](https://github.com/dtcenter/METplus/issues/new/choose), [MET](https://github.com/dtcenter/MET/issues/new/choose), [METdatadb](https://github.com/dtcenter/METdatadb/issues/new/choose), [METviewer](https://github.com/dtcenter/METviewer/issues/new/choose), [METexpress](https://github.com/dtcenter/METexpress/issues/new/choose), [METcalcpy](https://github.com/dtcenter/METcalcpy/issues/new/choose), [METplotpy](https://github.com/dtcenter/METplotpy/issues/new/choose)
## Bugfix Checklist ##
See the [METplus Workflow](https://metplus.readthedocs.io/en/latest/Contributors_Guide/github_workflow.html) for details.
- [ ] Complete the issue definition above, including the **Time Estimate** and **Funding Source**.
- [ ] Fork this repository or create a branch of **main_\<Version>**.
Branch name: `bugfix_<Issue Number>_main_<Version>_<Description>`
- [ ] Fix the bug and test your changes.
- [ ] Add/update log messages for easier debugging.
- [ ] Add/update unit tests.
- [ ] Add/update documentation.
- [ ] Push local changes to GitHub.
- [ ] Submit a pull request to merge into **main_\<Version>**.
Pull request: `bugfix <Issue Number> main_<Version> <Description>`
- [ ] Define the pull request metadata, as permissions allow.
Select: **Reviewer(s)** and **Linked issues**
Select: **Organization** level software support **Project** for the current coordinated release
Select: **Milestone** as the next bugfix version
- [ ] Iterate until the reviewer(s) accept and merge your changes.
- [ ] Delete your fork or branch.
- [ ] Complete the steps above to fix the bug on the **develop** branch.
Branch name: `bugfix_<Issue Number>_develop_<Description>`
Pull request: `bugfix <Issue Number> develop <Description>`
Select: **Reviewer(s)** and **Linked issues**
Select: **Repository** level development cycle **Project** for the next official release
Select: **Milestone** as the next official version
- [ ] Close this issue.
| non_usab | revision series for mode td describe the problem mode td revision box plot is empty need to fix it and match the rscript version this is the xml expected behavior python and rscript mode td revision plots should be similar environment describe your runtime environment machine e g hpc name linux workstation mac laptop os e g redhat linux macos software version number s to reproduce describe the steps to reproduce the behavior go to click on scroll down to see error post relevant sample data following these instructions relevant deadlines list relevant project deadlines here or state none funding source define the source of funding and account keys here or state none define the metadata assignee select engineer s or no engineer required select scientist s or no scientist required labels select component s select priority select requestor s projects and milestone select organization level project for support of the current coordinated release select repository level project for development toward the next official release or add alert need project assignment label select milestone as the next bugfix version define related issue s consider the impact to the other metplus components bugfix checklist see the for details complete the issue definition above including the time estimate and funding source fork this repository or create a branch of main branch name bugfix main fix the bug and test your changes add update log messages for easier debugging add update unit tests add update documentation push local changes to github submit a pull request to merge into main pull request bugfix main define the pull request metadata as permissions allow select reviewer s and linked issues select organization level software support project for the current coordinated release select milestone as the next bugfix version iterate until the reviewer s accept and merge your changes delete your fork or branch complete the steps above to fix the bug on the develop branch branch name bugfix develop pull request bugfix develop select reviewer s and linked issues select repository level development cycle project for the next official release select milestone as the next official version close this issue | 0 |
4,812 | 3,896,645,276 | IssuesEvent | 2016-04-16 00:02:35 | lionheart/openradar-mirror | https://api.github.com/repos/lionheart/openradar-mirror | opened | 16729740: Jonathan Morgan's user directory is hardcoded in Automator in Instruments 5.1.1 (55045) | classification:ui/usability reproducible:always status:open | #### Description
Summary:
When trying to run a default, unmodified Automator script, I get the following error:
Path not found '/Users/jonathan_morgan/Library/Developer/Xcode/DerivedData/lkj-efnmzqijwdkcixghgjaokmivpnnx/Build/Products/Release-iphonesimulator/lkj.app/lkj'
Steps to Reproduce:
1. Open Instruments.
2. Click Automation.
3. Click Choose.
4. Click Play arrow at bottom of Automator window.
5. Get error message in an untitled dialog that only has an OK button.
Expected Results:
I expect the default script to execute without the error.
-
Product Version: Instruments 5.1.1 (55045) Xcode Version 5.1.1 (5B1008) OS X 10.9.2 Build 13C1021
Created: 2014-04-25T21:25:04.465577
Originated: 2014-04-25T00:00:00
Open Radar Link: http://www.openradar.me/16729740 | True | 16729740: Jonathan Morgan's user directory is hardcoded in Automator in Instruments 5.1.1 (55045) - #### Description
Summary:
When trying to run a default, unmodified Automator script, I get the following error:
Path not found '/Users/jonathan_morgan/Library/Developer/Xcode/DerivedData/lkj-efnmzqijwdkcixghgjaokmivpnnx/Build/Products/Release-iphonesimulator/lkj.app/lkj'
Steps to Reproduce:
1. Open Instruments.
2. Click Automation.
3. Click Choose.
4. Click Play arrow at bottom of Automator window.
5. Get error message in an untitled dialog that only has an OK button.
Expected Results:
I expect the default script to execute without the error.
-
Product Version: Instruments 5.1.1 (55045) Xcode Version 5.1.1 (5B1008) OS X 10.9.2 Build 13C1021
Created: 2014-04-25T21:25:04.465577
Originated: 2014-04-25T00:00:00
Open Radar Link: http://www.openradar.me/16729740 | usab | jonathan morgan s user directory is hardcoded in automator in instruments description summary when trying to run a default unmodified automator script i get the following error path not found users jonathan morgan library developer xcode deriveddata lkj efnmzqijwdkcixghgjaokmivpnnx build products release iphonesimulator lkj app lkj steps to reproduce open instruments click automation click choose click play arrow at bottom of automator window get error message in an untitled dialog that only has an ok button expected results i expect the default script to execute without the error product version instruments xcode version os x build created originated open radar link | 1 |
125,245 | 12,254,814,701 | IssuesEvent | 2020-05-06 09:07:57 | crate/crate-docs-theme | https://api.github.com/repos/crate/crate-docs-theme | opened | formatting test | documentation | ### Documentation feedback
<!--Please do not edit or remove the following information -->
- Page title: Crate Docs Theme
- Page URL: https://crate.io/docs/fake/en/latest/index.rst
- Source file: https://github.com/crate/crate-docs-theme/blob/master/docs/index.rst
- DocID: 6a992d55
---
just testing the formatting
| 1.0 | formatting test - ### Documentation feedback
<!--Please do not edit or remove the following information -->
- Page title: Crate Docs Theme
- Page URL: https://crate.io/docs/fake/en/latest/index.rst
- Source file: https://github.com/crate/crate-docs-theme/blob/master/docs/index.rst
- DocID: 6a992d55
---
just testing the formatting
| non_usab | formatting test documentation feedback page title crate docs theme page url source file docid just testing the formatting | 0 |
6,911 | 6,661,674,233 | IssuesEvent | 2017-10-02 09:39:39 | datawire/telepresence | https://api.github.com/repos/datawire/telepresence | closed | Unable to start telepresence 0.67 on arch linux | bug infrastructure | ### What were you trying to do?
Start telepresence to access services deployed on local minikube.
Telepresence is installed from AUR package: https://aur.archlinux.org/packages/telepresence/
Not sure what additional info could help, please ask if you need something.
### What did you expect to happen?
Telepresence started
### What happened instead?
It crashed
### Automatically included information
Command line: `['/usr/bin/telepresence', '--logfile', '/tmp/telepresence.log']`
Version: `0.67`
Python version: `3.6.2 (default, Jul 20 2017, 03:52:27)
[GCC 7.1.1 20170630]`
kubectl version: `Client Version: v1.7.6`
oc version: `(error: [Errno 2] No such file or directory: 'oc')`
OS: `Linux vstepchik_macpro 4.12.13-1-ARCH #1 SMP PREEMPT Fri Sep 15 06:36:43 UTC 2017 x86_64 GNU/Linux`
Traceback:
```
Traceback (most recent call last):
File "/usr/bin/telepresence", line 257, in call_f
return f(*args, **kwargs)
File "/usr/bin/telepresence", line 2350, in go
runner, args
File "/usr/bin/telepresence", line 1516, in start_proxy
processes, socks_port, ssh = connect(runner, remote_info, args)
File "/usr/bin/telepresence", line 1099, in connect
bufsize=0,
File "/usr/bin/telepresence", line 404, in popen
return self.launch_command(track, *args, **kwargs)
File "/usr/bin/telepresence", line 367, in launch_command
stderr=self.logfile
File "/usr/lib/python3.6/subprocess.py", line 707, in __init__
restore_signals, start_new_session)
File "/usr/lib/python3.6/subprocess.py", line 1333, in _execute_child
raise child_exception_type(errno_num, err_msg)
FileNotFoundError: [Errno 2] No such file or directory: 'stamp-telepresence'
```
Logs:
```
kube', '--namespace', 'default', 'get', 'pod', 'telepresence-1506520541-048964-25778-1327857849-08rns', '-o', 'json'],)...
18.4 TL | [58] captured.
18.6 TL | [59] Capturing: (['kubectl', '--context', 'minikube', '--namespace', 'default', 'get', 'pod', 'telepresence-1506520541-048964-25778-1327857849-08rns', '-o', 'json'],)...
18.8 TL | [59] captured.
19.0 TL | [60] Capturing: (['kubectl', '--context', 'minikube', '--namespace', 'default', 'get', 'pod', 'telepresence-1506520541-048964-25778-1327857849-08rns', '-o', 'json'],)...
19.2 TL | [60] captured.
19.4 TL | [61] Capturing: (['kubectl', '--context', 'minikube', '--namespace', 'default', 'get', 'pod', 'telepresence-1506520541-048964-25778-1327857849-08rns', '-o', 'json'],)...
19.5 TL | [61] captured.
19.5 TL | [62] Launching: (['kubectl', '--context', 'minikube', '--namespace', 'default', 'logs', '-f', 'telepresence-1506520541-048964-25778-1327857849-08rns', '--container', 'telepresence-1506520541-048964-25778'],)...
```
| 1.0 | Unable to start telepresence 0.67 on arch linux - ### What were you trying to do?
Start telepresence to access services deployed on local minikube.
Telepresence is installed from AUR package: https://aur.archlinux.org/packages/telepresence/
Not sure what additional info could help, please ask if you need something.
### What did you expect to happen?
Telepresence started
### What happened instead?
It crashed
### Automatically included information
Command line: `['/usr/bin/telepresence', '--logfile', '/tmp/telepresence.log']`
Version: `0.67`
Python version: `3.6.2 (default, Jul 20 2017, 03:52:27)
[GCC 7.1.1 20170630]`
kubectl version: `Client Version: v1.7.6`
oc version: `(error: [Errno 2] No such file or directory: 'oc')`
OS: `Linux vstepchik_macpro 4.12.13-1-ARCH #1 SMP PREEMPT Fri Sep 15 06:36:43 UTC 2017 x86_64 GNU/Linux`
Traceback:
```
Traceback (most recent call last):
File "/usr/bin/telepresence", line 257, in call_f
return f(*args, **kwargs)
File "/usr/bin/telepresence", line 2350, in go
runner, args
File "/usr/bin/telepresence", line 1516, in start_proxy
processes, socks_port, ssh = connect(runner, remote_info, args)
File "/usr/bin/telepresence", line 1099, in connect
bufsize=0,
File "/usr/bin/telepresence", line 404, in popen
return self.launch_command(track, *args, **kwargs)
File "/usr/bin/telepresence", line 367, in launch_command
stderr=self.logfile
File "/usr/lib/python3.6/subprocess.py", line 707, in __init__
restore_signals, start_new_session)
File "/usr/lib/python3.6/subprocess.py", line 1333, in _execute_child
raise child_exception_type(errno_num, err_msg)
FileNotFoundError: [Errno 2] No such file or directory: 'stamp-telepresence'
```
Logs:
```
kube', '--namespace', 'default', 'get', 'pod', 'telepresence-1506520541-048964-25778-1327857849-08rns', '-o', 'json'],)...
18.4 TL | [58] captured.
18.6 TL | [59] Capturing: (['kubectl', '--context', 'minikube', '--namespace', 'default', 'get', 'pod', 'telepresence-1506520541-048964-25778-1327857849-08rns', '-o', 'json'],)...
18.8 TL | [59] captured.
19.0 TL | [60] Capturing: (['kubectl', '--context', 'minikube', '--namespace', 'default', 'get', 'pod', 'telepresence-1506520541-048964-25778-1327857849-08rns', '-o', 'json'],)...
19.2 TL | [60] captured.
19.4 TL | [61] Capturing: (['kubectl', '--context', 'minikube', '--namespace', 'default', 'get', 'pod', 'telepresence-1506520541-048964-25778-1327857849-08rns', '-o', 'json'],)...
19.5 TL | [61] captured.
19.5 TL | [62] Launching: (['kubectl', '--context', 'minikube', '--namespace', 'default', 'logs', '-f', 'telepresence-1506520541-048964-25778-1327857849-08rns', '--container', 'telepresence-1506520541-048964-25778'],)...
```
| non_usab | unable to start telepresence on arch linux what were you trying to do start telepresence to access services deployed on local minikube telepresence is installed from aur package not sure what additional info could help please ask if you need something what did you expect to happen telepresence started what happened instead it crashed automatically included information command line version python version default jul kubectl version client version oc version error no such file or directory oc os linux vstepchik macpro arch smp preempt fri sep utc gnu linux traceback traceback most recent call last file usr bin telepresence line in call f return f args kwargs file usr bin telepresence line in go runner args file usr bin telepresence line in start proxy processes socks port ssh connect runner remote info args file usr bin telepresence line in connect bufsize file usr bin telepresence line in popen return self launch command track args kwargs file usr bin telepresence line in launch command stderr self logfile file usr lib subprocess py line in init restore signals start new session file usr lib subprocess py line in execute child raise child exception type errno num err msg filenotfounderror no such file or directory stamp telepresence logs kube namespace default get pod telepresence o json tl captured tl capturing tl captured tl capturing tl captured tl capturing tl captured tl launching | 0 |
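The traceback in the telepresence row above bottoms out in `FileNotFoundError: [Errno 2] No such file or directory: 'stamp-telepresence'` — `subprocess.Popen` raising because a helper executable the package should have installed is missing from `PATH`. A minimal sketch of a friendlier pre-flight check; the helper name comes from the log, and `launch_checked` is a hypothetical wrapper, not telepresence's actual code:

```python
import shutil
import subprocess

def launch_checked(args):
    """Launch a subprocess, but fail with a clear message when the
    executable is missing from PATH (the situation in the traceback
    above, where 'stamp-telepresence' was never installed)."""
    exe = args[0]
    if shutil.which(exe) is None:
        raise RuntimeError(
            f"required helper {exe!r} not found on PATH; "
            "check that the package installed all of its binaries"
        )
    return subprocess.Popen(args)

# Probing for the missing helper yields the explanatory error instead
# of an opaque FileNotFoundError out of Popen:
try:
    launch_checked(["stamp-telepresence", "--version"])
except RuntimeError as err:
    print(err)
```

The same check could also live in a packaging smoke test, catching a missing binary before any user hits it.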
289,861 | 25,018,386,916 | IssuesEvent | 2022-11-03 21:05:27 | psu-libraries/researcher-metadata | https://api.github.com/repos/psu-libraries/researcher-metadata | opened | Make "Proxies" link in menu more descriptive | 2022-user-testing | In user testing, we got the suggestion to make the "Proxies" link in the menu more descriptive. | 1.0 | Make "Proxies" link in menu more descriptive - In user testing, we got the suggestion to make the "Proxies" link in the menu more descriptive. | non_usab | make proxies link in menu more descriptive in user testing we got the suggestion to make the proxies link in the menu more descriptive | 0 |
400,023 | 11,765,751,624 | IssuesEvent | 2020-03-14 18:51:16 | ayumi-cloud/oc-security-module | https://api.github.com/repos/ayumi-cloud/oc-security-module | opened | Add automatic protection for 'error_log' files being created | Add to Blacklist Firewall Priority: Low enhancement in-progress | ### Enhancement idea
- [ ] Add automatic protection for 'error_log' files being created.
| 1.0 | Add automatic protection for 'error_log' files being created - ### Enhancement idea
- [ ] Add automatic protection for 'error_log' files being created.
| non_usab | add automatic protection for error log files being created enhancement idea add automatic protection for error log files being created | 0 |
10,338 | 6,671,093,801 | IssuesEvent | 2017-10-04 04:49:48 | loconomics/loconomics | https://api.github.com/repos/loconomics/loconomics | closed | Meet 3.2.1 - On Focus | C: Usability F: Accessbility | ## Summary
Provides that user interface components do not initiate a change of context when receiving focus
**Conformance Level:** A
**Existing 508 Corresponding Provision:** 1194.21(l) and .22(n) | True | Meet 3.2.1 - On Focus - ## Summary
Provides that user interface components do not initiate a change of context when receiving focus
**Conformance Level:** A
**Existing 508 Corresponding Provision:** 1194.21(l) and .22(n) | usab | meet on focus summary provides that user interface components do not initiate a change of context when receiving focus conformance level a existing corresponding provision l and n | 1 |
14,727 | 9,441,110,130 | IssuesEvent | 2019-04-14 23:03:03 | factbox/factbox | https://api.github.com/repos/factbox/factbox | closed | Form errors without special style | usability | When a wrong form is submitted, the errors that are shown are not styled. | True | Form errors without special style - When a wrong form is submitted, the errors that are shown are not styled. | usab | form errors without special style when a wrong form is submitted the errors that are shown are not styled | 1
143,924 | 22,204,414,814 | IssuesEvent | 2022-06-07 13:50:38 | deke207/turn-it-around-dev | https://api.github.com/repos/deke207/turn-it-around-dev | closed | Theme development | design dev | **Description**
Develop a theme based on approved mockups.
- Content pages
* Homepage
* Resource Directory
* Resource Membership
* Resource Submissions
* Student Section
* Shop
* Coaching Sessions
* About Us
* News (blog roll)
* Events
* Contact Us
**Resources**
[Approved mocks](https://greatdesigns.me)
| 1.0 | Theme development - **Description**
Develop a theme based on approved mockups.
- Content pages
* Homepage
* Resource Directory
* Resource Membership
* Resource Submissions
* Student Section
* Shop
* Coaching Sessions
* About Us
* News (blog roll)
* Events
* Contact Us
**Resources**
[Approved mocks](https://greatdesigns.me)
| non_usab | theme development description develop a theme based on approved mockups content pages homepage resource directory resource membership resource submissions student section shop coaching sessions about us news blog roll events contact us resources | 0 |
22,369 | 19,186,340,105 | IssuesEvent | 2021-12-05 09:02:04 | bgo-bioimagerie/platformmanager | https://api.github.com/repos/bgo-bioimagerie/platformmanager | closed | Helpdesk: clean spam tickets with delay | enhancement usability | When setting a ticket as spam, do not delete it, just set status to spam (to avoid errors).
In the UI, allow access to the spam status to allow reverting to another status
On pfm-helpdesk process, clean spam tagged tickets only if update time > x hours. | True | Helpdesk: clean spam tickets with delay - When setting a ticket as spam, do not delete it, just set status to spam (to avoid errors).
In the UI, allow access to the spam status to allow reverting to another status
On pfm-helpdesk process, clean spam tagged tickets only if update time > x hours. | usab | helpdesk clean spam tickets with delay when setting a ticket as spam do not delete it just set status to spam to avoid errors in ui allow access to spam status to allow revert to other status on pfm helpdesk process clean spam tagged tickets only if update time x hours | 1 |
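The delayed-cleanup rule described in the platformmanager ticket above — purge spam-tagged tickets only once their last update is older than some grace period, so a mistaken flag can still be reverted — can be sketched like this. The dict-based ticket shape and field names are assumptions for illustration, not PlatformManager's real data model:

```python
from datetime import datetime, timedelta

def tickets_to_purge(tickets, min_age_hours=24, now=None):
    """Return spam-tagged tickets whose last update is older than the
    grace period; fresher ones are kept so the 'spam' status can still
    be reverted from the UI."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(hours=min_age_hours)
    return [t for t in tickets
            if t["status"] == "spam" and t["updated_at"] < cutoff]

now = datetime(2021, 12, 5, 12, 0)
tickets = [
    {"id": 1, "status": "spam", "updated_at": datetime(2021, 12, 3)},      # old enough: purge
    {"id": 2, "status": "spam", "updated_at": datetime(2021, 12, 5, 11)},  # too fresh: keep
    {"id": 3, "status": "open", "updated_at": datetime(2021, 12, 1)},      # not spam: keep
]
print([t["id"] for t in tickets_to_purge(tickets, 24, now)])  # → [1]
```

A periodic job (like the pfm-helpdesk process mentioned above) would then delete only the returned tickets.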
131,476 | 10,697,197,367 | IssuesEvent | 2019-10-23 16:02:12 | fedora-python/tox-current-env | https://api.github.com/repos/fedora-python/tox-current-env | closed | Allow parallel test execution of integration tests | enhancement tests | Be it pytest-xdist or `tox --parallel`/`detox`, current integration tests will fight over the `tests/.tox` directory. We should adapt the test suite to use temporary directories and copy files into it. | 1.0 | Allow parallel test execution of integration tests - Be it pytest-xdist or `tox --parallel`/`detox`, current integration tests will fight over the `tests/.tox` directory. We should adapt the test suite to use temporary directories and copy files into it. | non_usab | allow parallel test execution of integration tests be it pytest xdist or tox parallel detox current integration tests will fight over the tests tox directory we should adapt the test suite to use temporary directories and copy files into it | 0
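The fix direction the tox-current-env issue above points at — give each run its own temporary copy of the project instead of letting parallel runs share `tests/.tox` — can be sketched as below. This is a generic pattern (pytest's `tmp_path` fixture handles the directory part for you), not the repository's actual test suite:

```python
import shutil
import tempfile
from pathlib import Path

def run_in_isolated_copy(project_dir, worker):
    """Copy the project into a fresh temporary directory and run the
    worker there, so concurrent test runs never fight over a shared
    tests/.tox directory."""
    with tempfile.TemporaryDirectory() as tmp:
        workdir = Path(tmp) / "project"
        shutil.copytree(project_dir, workdir)
        return worker(workdir)

# Two "parallel" runs each see their own private copy:
src = Path(tempfile.mkdtemp())
(src / "tox.ini").write_text("[tox]\n")
a = run_in_isolated_copy(src, lambda d: (d / "tox.ini").exists())
b = run_in_isolated_copy(src, lambda d: (d / "tox.ini").exists())
print(a, b)  # → True True
shutil.rmtree(src)
```

Each worker can scribble in its `workdir` freely; the temporary directory is discarded when the run finishes.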
2,772 | 3,163,731,398 | IssuesEvent | 2015-09-20 15:51:26 | lionheart/openradar-mirror | https://api.github.com/repos/lionheart/openradar-mirror | opened | 22773990: No way to disable picture in picture (PiP) when starting another video | classification:ui usability reproducible:always status:open | #### Description
Summary:
It seems that when using picture in picture, there's no way to programmatically stop it. For example, when launching a video (AVPlayerViewController) in my app, I want to stop PiP from another app.
Steps to Reproduce:
1. Play video
2. Start picture in picture
3. Play another video
Expected Results:
Existing picture in picture would stop
Actual Results:
Picture in picture plays over the top of the new video.
Version:
iOS 9.0 (simulator)
Notes:
Even if it wasn't automated, a way to stop the video programmatically on a kind of sharedInstance of AVPictureInPictureController would be good.
Configuration:
iPad Air
Attachments:
'Simulator Screen Shot 20 Sep 2015, 16.09.59.png' was successfully uploaded.
-
Product Version: 9.0
Created: 2015-09-20T15:13:49.554590
Originated: 2015-09-20T00:00:00
Open Radar Link: http://www.openradar.me/22773990 | True | 22773990: No way to disable picture in picture (PiP) when starting another video - #### Description
Summary:
It seems that when using picture in picture, there's no way to programmatically stop it. For example, when launching a video (AVPlayerViewController) in my app, I want to stop PiP from another app.
Steps to Reproduce:
1. Play video
2. Start picture in picture
3. Play another video
Expected Results:
Existing picture in picture would stop
Actual Results:
Picture in picture plays over the top of the new video.
Version:
iOS 9.0 (simulator)
Notes:
Even if it wasn't automated, a way to stop the video programmatically on a kind of sharedInstance of AVPictureInPictureController would be good.
Configuration:
iPad Air
Attachments:
'Simulator Screen Shot 20 Sep 2015, 16.09.59.png' was successfully uploaded.
-
Product Version: 9.0
Created: 2015-09-20T15:13:49.554590
Originated: 2015-09-20T00:00:00
Open Radar Link: http://www.openradar.me/22773990 | usab | no way to disable picture in picture pip when starting another video description summary it seems the when using picture in picture there s no way to programmatically stop it for example when launching a video avplayerviewcontroller in my app i want to stop pip from another app steps to reproduce play video start picture in picture play another video expected results existing picture in picture would stop actual results picture in picture plays over the top of the new video version ios simulator notes even if it wasn t automated a way to stop the video programatically on a kind of sharedinstance of avpictureinpicturecontroller would be good configuration ipad air attachments simulator screen shot sep png was successfully uploaded product version created originated open radar link | 1 |
8,491 | 5,756,732,748 | IssuesEvent | 2017-04-26 00:47:43 | unfoldingWord-dev/translationCore | https://api.github.com/repos/unfoldingWord-dev/translationCore | closed | When an edit is removed the user should not be required to indicate a reason | duplicate Usability | If the text is changed back to the original the reason for the change should not be required. It may make sense to indicate in the file system that the previous edit was deleted. Eventually we may want to have an undo button that would allow the user to roll back through the edits one by one. | True | When an edit is removed the user should not be required to indicate a reason - If the text is changed back to the original the reason for the change should not be required. It may make sense to indicate in the file system that the previous edit was deleted. Eventually we may want to have an undo button that would allow the user to roll back through the edits one by one. | usab | when an edit is removed the user should not be required to indicate a reason if the text is changed back to the original the reason for the change should not be required it may make sense to indicate in the file system that the previous edit was deleted eventually we may want to have an undo button that would allow the user to roll back through the edits one by one | 1 |
53,475 | 3,040,580,422 | IssuesEvent | 2015-08-07 16:08:21 | OpenBEL/bel.rb | https://api.github.com/repos/OpenBEL/bel.rb | closed | Installation in JRuby | high priority | JRuby does not support C extensions which bel.rb contains (i.e. BEL Script, C-based parser).
If running in JRuby we should do the following:
- Do not attempt to load libbel C extension.
- Provide alternative implementations for APIs relying on C extension (lib/parser, lib/completion).
- Strip C extension code from gem.
- Push a java architecture gem. | 1.0 | Installation in JRuby - JRuby does not support C extensions which bel.rb contains (i.e. BEL Script, C-based parser).
If running in JRuby we should do the following:
- Do not attempt to load libbel C extension.
- Provide alternative implementations for APIs relying on C extension (lib/parser, lib/completion).
- Strip C extension code from gem.
- Push a java architecture gem. | non_usab | installation in jruby jruby does not support c extensions which bel rb contains i e bel script c based parser if running in jruby we should do the following do not attempt to load libbel c extension provide alternative implementations for apis relying on c extension lib parser lib completion strip c extension code from gem push a java architecture gem | 0 |
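The bel.rb plan above — detect JRuby, skip the C extension, and fall back to pure implementations — is a common portability pattern. A Python analogue of the same idea (the module name is made up; this is not bel.rb code, which is Ruby):

```python
import platform

def load_parser():
    """Prefer a compiled parser, fall back to a pure implementation
    when C extensions are unavailable -- the same situation bel.rb
    faces under JRuby."""
    if platform.python_implementation() != "CPython":
        return "pure"             # interpreter without C-extension support
    try:
        import _fast_parser_stub  # stand-in for a real C extension
        return "c"
    except ImportError:
        return "pure"             # extension not built or not installed

print(load_parser())
```

In Ruby the equivalent guard is usually a `RUBY_PLATFORM == "java"` check around the `require` of the native library.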
13,428 | 8,454,558,811 | IssuesEvent | 2018-10-21 04:52:45 | MarkBind/markbind | https://api.github.com/repos/MarkBind/markbind | closed | Support custom keywords | a-AuthorUsability c.Feature p.Medium | Extension of #428
With the current implementation of keywords, users are forced to include these keywords in the text if they want them to be tagged to a heading.
Allow the option of having keywords invisibly tagged to a heading, for example:
```
# Using a Java IDE
<span class="keyword hidden">Eclipse</span>
<span class="keyword hidden">Netbeans</span>
<span class="keyword hidden">IntelliJ</span>
``` | True | Support custom keywords - Extension of #428
With the current implementation of keywords, users are forced to include these keywords in the text if they want them to be tagged to a heading.
Allow the option of having keywords invisibly tagged to a heading, for example:
```
# Using a Java IDE
<span class="keyword hidden">Eclipse</span>
<span class="keyword hidden">Netbeans</span>
<span class="keyword hidden">IntelliJ</span>
``` | usab | support custom keywords extension of with the current implementation of keywords users are forced to include these keywords in the text if they want them to be tagged to a heading allow the option of having keywords invisibly tagged to a heading for example using a java ide eclipse netbeans intellij | 1 |
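A sketch of how hidden keyword spans like the ones above could be collected from rendered HTML, using only the Python standard library (illustrative only — MarkBind itself is a Node.js project, so this is not its real implementation):

```python
from html.parser import HTMLParser

class KeywordCollector(HTMLParser):
    """Gather the text of <span class="keyword ..."> elements, i.e.
    keywords tagged to a heading without being visible in the text."""
    def __init__(self):
        super().__init__()
        self.keywords = []
        self._in_keyword = False

    def handle_starttag(self, tag, attrs):
        classes = (dict(attrs).get("class") or "").split()
        if tag == "span" and "keyword" in classes:
            self._in_keyword = True

    def handle_endtag(self, tag):
        if tag == "span":
            self._in_keyword = False

    def handle_data(self, data):
        if self._in_keyword and data.strip():
            self.keywords.append(data.strip())

p = KeywordCollector()
p.feed('<span class="keyword hidden">Eclipse</span>\n'
       '<span class="keyword hidden">Netbeans</span>\n'
       '<span class="keyword hidden">IntelliJ</span>')
print(p.keywords)  # → ['Eclipse', 'Netbeans', 'IntelliJ']
```

An index built this way would let a search feature match "Eclipse" to the "Using a Java IDE" heading even though the word never appears on the page.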
140,082 | 18,893,691,003 | IssuesEvent | 2021-11-15 15:41:39 | Zolyn/vuepress-plugin-waline | https://api.github.com/repos/Zolyn/vuepress-plugin-waline | closed | CVE-2021-23424 (High) detected in ansi-html-0.0.7.tgz | security vulnerability | ## CVE-2021-23424 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ansi-html-0.0.7.tgz</b></p></summary>
<p>An elegant lib that converts the chalked (ANSI) text to HTML.</p>
<p>Library home page: <a href="https://registry.npmjs.org/ansi-html/-/ansi-html-0.0.7.tgz">https://registry.npmjs.org/ansi-html/-/ansi-html-0.0.7.tgz</a></p>
<p>Path to dependency file: vuepress-plugin-waline/package.json</p>
<p>Path to vulnerable library: vuepress-plugin-waline/node_modules/ansi-html/package.json</p>
<p>
Dependency Hierarchy:
- minivaline-5.1.7.tgz (Root Library)
- webpack-dev-server-4.0.0-beta.3.tgz
- :x: **ansi-html-0.0.7.tgz** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects all versions of package ansi-html. If an attacker provides a malicious string, it will get stuck processing the input for an extremely long time.
<p>Publish Date: 2021-08-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23424>CVE-2021-23424</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-23424 (High) detected in ansi-html-0.0.7.tgz - ## CVE-2021-23424 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ansi-html-0.0.7.tgz</b></p></summary>
<p>An elegant lib that converts the chalked (ANSI) text to HTML.</p>
<p>Library home page: <a href="https://registry.npmjs.org/ansi-html/-/ansi-html-0.0.7.tgz">https://registry.npmjs.org/ansi-html/-/ansi-html-0.0.7.tgz</a></p>
<p>Path to dependency file: vuepress-plugin-waline/package.json</p>
<p>Path to vulnerable library: vuepress-plugin-waline/node_modules/ansi-html/package.json</p>
<p>
Dependency Hierarchy:
- minivaline-5.1.7.tgz (Root Library)
- webpack-dev-server-4.0.0-beta.3.tgz
- :x: **ansi-html-0.0.7.tgz** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects all versions of package ansi-html. If an attacker provides a malicious string, it will get stuck processing the input for an extremely long time.
<p>Publish Date: 2021-08-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23424>CVE-2021-23424</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_usab | cve high detected in ansi html tgz cve high severity vulnerability vulnerable library ansi html tgz an elegant lib that converts the chalked ansi text to html library home page a href path to dependency file vuepress plugin waline package json path to vulnerable library vuepress plugin waline node modules ansi html package json dependency hierarchy minivaline tgz root library webpack dev server beta tgz x ansi html tgz vulnerable library found in base branch main vulnerability details this affects all versions of package ansi html if an attacker provides a malicious string it will get stuck processing the input for an extremely long time publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href step up your open source security game with whitesource | 0 |
38,948 | 5,019,772,397 | IssuesEvent | 2016-12-14 12:56:24 | ELENA-LANG/elena-lang | https://api.github.com/repos/ELENA-LANG/elena-lang | closed | Open argument list : assigning | Design Idea Discussion | Open argument list should support setAt operator.
This will extremely help dynamic programming in Tape, because direct manipulation with the tape stack will be possible
| 1.0 | Open argument list : assigning - Open argument list should support setAt operator.
This will extremely help dynamic programming in Tape, because direct manipulation with the tape stack will be possible
| non_usab | open argument list assigning open argument list should support setat operator this will extremely help dynamic programming in tape because direct manipulation with the tape stack will be possible | 0 |
15,742 | 10,269,938,422 | IssuesEvent | 2019-08-23 10:14:29 | glam-lab/degender-the-web | https://api.github.com/repos/glam-lab/degender-the-web | closed | Try moving the DGtW header to the bottom of the window | enhancement question usability wontfix | **Is your feature request related to a problem? Please describe.**
While it's very visible, the DGtW header often covers up important functions at the top of the window. Its interactions with site components are quite unpredictable, as well.
**Describe the solution you'd like**
Try putting the DGtW header at the bottom of the window instead of the top, similar to the "cookies" div on StackOverflow.com:

Conveniently, this screen shot is from a discussion of what I'm proposing: https://stackoverflow.com/questions/31942227/stick-div-to-bottom-of-browser-window
One challenge is how multiple such headers might stack. Hiding the cookie div would be a problem. | True | Try moving the DGtW header to the bottom of the window - **Is your feature request related to a problem? Please describe.**
While it's very visible, the DGtW header often covers up important functions at the top of the window. Its interactions with site components are quite unpredictable, as well.
**Describe the solution you'd like**
Try putting the DGtW header at the bottom of the window instead of the top, similar to the "cookies" div on StackOverflow.com:

Conveniently, this screen shot is from a discussion of what I'm proposing: https://stackoverflow.com/questions/31942227/stick-div-to-bottom-of-browser-window
One challenge is how multiple such headers might stack. Hiding the cookie div would be a problem. | usab | try moving the dgtw header to the bottom of the window is your feature request related to a problem please describe while it s very visible the dgtw header often covers up important functions at the top of the window its interactions with site components are quite unpredictable as well describe the solution you d like try putting the dgtw header at the bottom of the window instead of the top similar to the cookies div on stackoverflow com conveniently this screen shot is from a discussion of what i m proposing one challenge is how multiple such headers might stack hiding the cookie div would be a problem | 1 |
18,546 | 13,032,433,518 | IssuesEvent | 2020-07-28 04:15:25 | OBOFoundry/OBOFoundry.github.io | https://api.github.com/repos/OBOFoundry/OBOFoundry.github.io | closed | Scrolling down headers | usability feature website | Is there a way to have headers following as you scroll down, or repeat them in the middle?
| True | Scrolling down headers - Is there a way to have headers following as you scroll down, or repeat them in the middle?
| usab | scrolling down headers is there a way to have headers following as you scroll down or repeat them in the middle | 1 |
35,023 | 4,622,933,364 | IssuesEvent | 2016-09-27 09:18:06 | vector-im/vector-android | https://api.github.com/repos/vector-im/vector-android | opened | Media picker: Switching camera button and exit button are not very visible | design P1 | Related to https://github.com/vector-im/vector-ios/issues/610.
A solution is required for android too | 1.0 | Media picker: Switching camera button and exit button are not very visible - Related to https://github.com/vector-im/vector-ios/issues/610.
A solution is required for android too | non_usab | media picker switching camera button and exit button are not very visible related to a solution is required for android too | 0 |
9,189 | 6,155,148,277 | IssuesEvent | 2017-06-28 14:14:08 | godotengine/godot | https://api.github.com/repos/godotengine/godot | closed | Cant hide CanvasModulate | enhancement topic:editor usability | It seems like there is some problem with all CanvasItem options in CanvasModulate. 'Visible' checkbox doesn't work, so as all other options there. In order to hide it, you need to delete it basically.
| True | Cant hide CanvasModulate - It seems like there is some problem with all CanvasItem options in CanvasModulate. 'Visible' checkbox doesn't work, so as all other options there. In order to hide it, you need to delete it basically.
| usab | cant hide canvasmodulate it seems like there is some problem with all canvasitem options in canvasmodulate visible checkbox doesn t work so as all other options there in order to hide it you need to delete it basically | 1 |
12,115 | 7,703,362,145 | IssuesEvent | 2018-05-21 08:07:31 | github/VisualStudio | https://api.github.com/repos/github/VisualStudio | opened | Scrolling is broken in PRs with lots of changed files. | usability | When looking at the details for a large PR, the horizontal scroll bar is shown at the bottom of the scrollable area, meaning that one has to scroll right to the bottom in order to scroll horizontally. This makes navigating this panel very difficult:

| True | Scrolling is broken in PRs with lots of changed files. - When looking at the details for a large PR, the horizontal scroll bar is shown at the bottom of the scrollable area, meaning that one has to scroll right to the bottom in order to scroll horizontally. This makes navigating this panel very difficult:

| usab | scrolling is broken in prs with lots of changed files when looking at the details for a large pr the horizontal scroll bar is shown at the bottom of the scrollable area meaning that one has to scroll right to the bottom in order to scroll horizontally this makes navigating this panel very difficult | 1 |
260,678 | 27,784,696,408 | IssuesEvent | 2023-03-17 01:29:32 | michaeldotson/home-inventory-vue-app | https://api.github.com/repos/michaeldotson/home-inventory-vue-app | opened | CVE-2022-38900 (High) detected in decode-uri-component-0.2.0.tgz | Mend: dependency security vulnerability | ## CVE-2022-38900 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>decode-uri-component-0.2.0.tgz</b></p></summary>
<p>A better decodeURIComponent</p>
<p>Library home page: <a href="https://registry.npmjs.org/decode-uri-component/-/decode-uri-component-0.2.0.tgz">https://registry.npmjs.org/decode-uri-component/-/decode-uri-component-0.2.0.tgz</a></p>
<p>Path to dependency file: /home-inventory-vue-app/package.json</p>
<p>Path to vulnerable library: /node_modules/decode-uri-component/package.json</p>
<p>
Dependency Hierarchy:
- cli-plugin-babel-3.5.1.tgz (Root Library)
- webpack-4.28.4.tgz
- micromatch-3.1.10.tgz
- snapdragon-0.8.2.tgz
- source-map-resolve-0.5.2.tgz
- :x: **decode-uri-component-0.2.0.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
decode-uri-component 0.2.0 is vulnerable to Improper Input Validation resulting in DoS.
<p>Publish Date: 2022-11-28
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-38900>CVE-2022-38900</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-w573-4hg7-7wgq">https://github.com/advisories/GHSA-w573-4hg7-7wgq</a></p>
<p>Release Date: 2022-11-28</p>
<p>Fix Resolution (decode-uri-component): 0.2.1</p>
<p>Direct dependency fix Resolution (@vue/cli-plugin-babel): 3.5.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-38900 (High) detected in decode-uri-component-0.2.0.tgz - ## CVE-2022-38900 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>decode-uri-component-0.2.0.tgz</b></p></summary>
<p>A better decodeURIComponent</p>
<p>Library home page: <a href="https://registry.npmjs.org/decode-uri-component/-/decode-uri-component-0.2.0.tgz">https://registry.npmjs.org/decode-uri-component/-/decode-uri-component-0.2.0.tgz</a></p>
<p>Path to dependency file: /home-inventory-vue-app/package.json</p>
<p>Path to vulnerable library: /node_modules/decode-uri-component/package.json</p>
<p>
Dependency Hierarchy:
- cli-plugin-babel-3.5.1.tgz (Root Library)
- webpack-4.28.4.tgz
- micromatch-3.1.10.tgz
- snapdragon-0.8.2.tgz
- source-map-resolve-0.5.2.tgz
- :x: **decode-uri-component-0.2.0.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
decode-uri-component 0.2.0 is vulnerable to Improper Input Validation resulting in DoS.
<p>Publish Date: 2022-11-28
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-38900>CVE-2022-38900</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-w573-4hg7-7wgq">https://github.com/advisories/GHSA-w573-4hg7-7wgq</a></p>
<p>Release Date: 2022-11-28</p>
<p>Fix Resolution (decode-uri-component): 0.2.1</p>
<p>Direct dependency fix Resolution (@vue/cli-plugin-babel): 3.5.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_usab | cve high detected in decode uri component tgz cve high severity vulnerability vulnerable library decode uri component tgz a better decodeuricomponent library home page a href path to dependency file home inventory vue app package json path to vulnerable library node modules decode uri component package json dependency hierarchy cli plugin babel tgz root library webpack tgz micromatch tgz snapdragon tgz source map resolve tgz x decode uri component tgz vulnerable library vulnerability details decode uri component is vulnerable to improper input validation resulting in dos publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution decode uri component direct dependency fix resolution vue cli plugin babel step up your open source security game with mend | 0 |
7,786 | 5,202,328,305 | IssuesEvent | 2017-01-24 09:11:58 | zaproxy/zaproxy | https://api.github.com/repos/zaproxy/zaproxy | closed | Autoselect Imported Certificate | Component-UI enhancement Usability | In ZAP Options, when you import a new PKCS#12 cert (and probably works the same for the other types as well), after successful import, it takes you back to the KeyStore tab (which is good). But then to make that the Active cert the user has to:
a) Click on the cert in the left column,
b) Click on the username in the right column,
c) Click on the Set Active button
I'm requesting that when a cert is successfully imported, and they are sent back to the KeyStore tab, you automatically do a) and b) for them so they just have to do step c).
This doesn't hurt anything if they don't want to do step c) right now, but makes it MUCH easier if they do. This should probably take like 5 minutes or less to implement this usability enhancement.
Given that #2489 is still open, I have to do this every time I start ZAP up again. If #2489 was addressed, I wouldn't care as much (but that's a much harder ticket to address).
| True | Autoselect Imported Certificate - In ZAP Options, when you import a new PKCS#12 cert (and probably works the same for the other types as well), after successful import, it takes you back to the KeyStore tab (which is good). But then to make that the Active cert the user has to:
a) Click on the cert in the left column,
b) Click on the username in the right column,
c) Click on the Set Active button
I'm requesting that when a cert is successfully imported, and they are sent back to the KeyStore tab, you automatically do a) and b) for them so they just have to do step c).
This doesn't hurt anything if they don't want to do step c) right now, but makes it MUCH easier if they do. This should probably take like 5 minutes or less to implement this usability enhancement.
Given that #2489 is still open, I have to do this every time I start ZAP up again. If #2489 was addressed, I wouldn't care as much (but that's a much harder ticket to address).
| usab | autoselect imported certificate in zap options when you import a new pkcs cert and probably works the same for the other types as well after successful import it takes you back to the keystore tab which is good but then to make that the active cert the user has to a click on the cert in the left column b click on the username in the right column c click on the set active button i m requesting that when a cert is successfully imported and they are sent back to the keystore tab you automatically do a and b for them so they just have to do step c this doesn t hurt anything if they don t want to do step c right now but makes it much easier if they do this should probably take like minutes or less to implement this usability enhancement given that is still open i have to do this every time i start zap up again if was addressed i wouldn t care as much but that s a much harder ticket to address | 1 |
363,607 | 10,745,085,733 | IssuesEvent | 2019-10-30 08:12:58 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | www.customink.com - see bug description | browser-firefox engine-gecko priority-normal | <!-- @browser: Firefox 70.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:70.0) Gecko/20100101 Firefox/70.0 -->
<!-- @reported_with: web -->
**URL**: https://www.customink.com/
**Browser / Version**: Firefox 70.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes
**Problem type**: Something else
**Description**: "Live Chat" option at the bottom doesn't work
**Steps to Reproduce**:
1. Scroll all the way down to the bottom.
2. Click on Live Chat.
3. A small window should appear on the bottom right corner.
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | www.customink.com - see bug description - <!-- @browser: Firefox 70.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:70.0) Gecko/20100101 Firefox/70.0 -->
<!-- @reported_with: web -->
**URL**: https://www.customink.com/
**Browser / Version**: Firefox 70.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes
**Problem type**: Something else
**Description**: "Live Chat" option at the bottom doesn't work
**Steps to Reproduce**:
1. Scroll all the way down to the bottom.
2. Click on Live Chat.
3. A small window should appear on the bottom right corner.
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | non_usab | see bug description url browser version firefox operating system windows tested another browser yes problem type something else description live chat option at the bottom doesn t work steps to reproduce scroll all the way down to the bottom click on live chat a small window should appear on the bottom right corner browser configuration none from with ❤️ | 0 |
9,518 | 8,656,529,769 | IssuesEvent | 2018-11-27 18:43:19 | terraform-providers/terraform-provider-azurerm | https://api.github.com/repos/terraform-providers/terraform-provider-azurerm | closed | Support geo-replication for azurerm_container_registry resource | enhancement service/container-registry | ### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
### Description
The AzureRM provider does not support [geo-replication](https://docs.microsoft.com/en-us/azure/container-registry/container-registry-geo-replication) for Azure Container Registry.
### New or Affected Resource(s)
* azurerm_container_registry
### References
* https://docs.microsoft.com/en-us/azure/container-registry/container-registry-geo-replication
| 1.0 | Support geo-replication for azurerm_container_registry resource - ### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
### Description
The AzureRM provider does not support [geo-replication](https://docs.microsoft.com/en-us/azure/container-registry/container-registry-geo-replication) for Azure Container Registry.
### New or Affected Resource(s)
* azurerm_container_registry
### References
* https://docs.microsoft.com/en-us/azure/container-registry/container-registry-geo-replication
| non_usab | support geo replication for azurerm container registry resource community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or me too comments they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment description the azurerm provider does not support for azure container registry new or affected resource s azurerm container registry references | 0 |
27,837 | 30,500,267,693 | IssuesEvent | 2023-07-18 13:32:52 | LCA-ActivityBrowser/activity-browser | https://api.github.com/repos/LCA-ActivityBrowser/activity-browser | closed | Make the Graph Explorer preview for wide graphs scrollable | type:feature info:usability type:javascript | Some previews, such as Forwast's "103 Waste treatment, Composting of food waste, DK", are way too wide to fit on the screen, so right now it just overflows. Even though those graphs are not that informative, it's still nice to have functionality of the preview function correctly. To fix this, I would think giving the preview a scrollbar to fix the overflow issue would solve this issue. | True | Make the Graph Explorer preview for wide graphs scrollable - Some previews, such as Forwast's "103 Waste treatment, Composting of food waste, DK", are way too wide to fit on the screen, so right now it just overflows. Even though those graphs are not that informative, it's still nice to have functionality of the preview function correctly. To fix this, I would think giving the preview a scrollbar to fix the overflow issue would solve this issue. | usab | make the graph explorer preview for wide graphs scrollable some previews such as forwast s waste treatment composting of food waste dk are way too wide to fit on the screen so right now it just overflows even though those graphs are not that informative it s still nice to have functionality of the preview function correctly to fix this i would think giving the preview a scrollbar to fix the overflow issue would solve this issue | 1
5,877 | 4,048,926,844 | IssuesEvent | 2016-05-23 12:23:34 | Virtual-Labs/physical-sciences-iiith | https://api.github.com/repos/Virtual-Labs/physical-sciences-iiith | closed | QA_Geiger Muller Counter_Prerequisites_p1 | Category: Usability Developed By: VLEAD Release Number: Production Resolved Severity: S2 | Defect Description :
In the "Geiger Muller Counter " experiment, the minimum requirement to run the experiment is not displayed in the page instead a page or Scrolling should appear providing information on minimum requirement to run this experiment, information like Bandwidth,Device Resolution,Hardware Configuration and Software Required.
Actual Result :
In the "Geiger Muller Counter " experiment, the minimum requirement to run the experiment is not displayed in the page.
Environment :
OS: Windows 7, Ubuntu-16.04,Centos-6
Browsers: Firefox-42.0,Chrome-47.0,chromium-45.0
Bandwidth : 100Mbps
Hardware Configuration: 8GB RAM,
Processor: i5
Test Step Link:
https://github.com/Virtual-Labs/physical-sciences-iiith/blob/master/test-cases/integration_test-cases/Geiger%20Muller%20Counter%20/Geiger%20Muller%20Counter%20_15_Prerequisites_p1.org
Attachment:

| True | QA_Geiger Muller Counter_Prerequisites_p1 - Defect Description :
In the "Geiger Muller Counter " experiment, the minimum requirement to run the experiment is not displayed in the page instead a page or Scrolling should appear providing information on minimum requirement to run this experiment, information like Bandwidth,Device Resolution,Hardware Configuration and Software Required.
Actual Result :
In the "Geiger Muller Counter " experiment, the minimum requirement to run the experiment is not displayed in the page.
Environment :
OS: Windows 7, Ubuntu-16.04,Centos-6
Browsers: Firefox-42.0,Chrome-47.0,chromium-45.0
Bandwidth : 100Mbps
Hardware Configuration: 8GB RAM,
Processor: i5
Test Step Link:
https://github.com/Virtual-Labs/physical-sciences-iiith/blob/master/test-cases/integration_test-cases/Geiger%20Muller%20Counter%20/Geiger%20Muller%20Counter%20_15_Prerequisites_p1.org
Attachment:

| usab | qa geiger muller counter prerequisites defect description in the geiger muller counter experiment the minimum requirement to run the experiment is not displayed in the page instead a page or scrolling should appear providing information on minimum requirement to run this experiment information like bandwidth device resolution hardware configuration and software required actual result in the geiger muller counter experiment the minimum requirement to run the experiment is not displayed in the page environment os windows ubuntu centos browsers firefox chrome chromium bandwidth hardware configuration processor test step link attachment | 1 |
293,283 | 25,281,456,481 | IssuesEvent | 2022-11-16 16:03:39 | apache/beam | https://api.github.com/repos/apache/beam | reopened | Flink Go XVR tests fail on TestXLang_Multi: Insufficient number of network buffers | go P3 bug cross-language failing test done & done | When running the cross-language test suites () Flink fails on TestXLang_Multi with the following error:
```
19:29:14 2021/08/27 02:29:14 (): java.io.IOException: Insufficient number of network buffers: required
17, but only 16 available. The total number of network buffers is currently set to 2048 of 32768 bytes
each. You can increase this number by setting the configuration keys 'taskmanager.memory.network.fraction',
'taskmanager.memory.network.min', and 'taskmanager.memory.network.max'.
19:29:14 2021/08/27 02:29:14
Job state: FAILED
19:29:14 --- FAIL: TestXLang_Multi (6.26s)
```
This doesn't seem to be a parallelism problem (go test is run with "-p 1" as expected) and is only happening on this specific test.
Imported from Jira [BEAM-12815](https://issues.apache.org/jira/browse/BEAM-12815). Original Jira may contain additional context.
Reported by: danoliveira. | 1.0 | Flink Go XVR tests fail on TestXLang_Multi: Insufficient number of network buffers - When running the cross-language test suites () Flink fails on TestXLang_Multi with the following error:
```
19:29:14 2021/08/27 02:29:14 (): java.io.IOException: Insufficient number of network buffers: required
17, but only 16 available. The total number of network buffers is currently set to 2048 of 32768 bytes
each. You can increase this number by setting the configuration keys 'taskmanager.memory.network.fraction',
'taskmanager.memory.network.min', and 'taskmanager.memory.network.max'.
19:29:14 2021/08/27 02:29:14
Job state: FAILED
19:29:14 --- FAIL: TestXLang_Multi (6.26s)
```
This doesn't seem to be a parallelism problem (go test is run with "-p 1" as expected) and is only happening on this specific test.
Imported from Jira [BEAM-12815](https://issues.apache.org/jira/browse/BEAM-12815). Original Jira may contain additional context.
Reported by: danoliveira. | non_usab | flink go xvr tests fail on testxlang multi insufficient number of network buffers when running the cross language test suites flink fails on testxlang multi with the following error java io ioexception insufficient number of network buffers required but only available the total number of network buffers is currently set to of bytes each you can increase this number by setting the configuration keys taskmanager memory network fraction taskmanager memory network min and taskmanager memory network max job state failed fail testxlang multi this doesn t seem to be a parallelism problem go test is run with p as expected and is only happening on this specific test imported from jira original jira may contain additional context reported by danoliveira | 0 |
15,941 | 10,429,506,069 | IssuesEvent | 2019-09-17 03:02:55 | kzngt/webdev | https://api.github.com/repos/kzngt/webdev | closed | Nav issue - Gradient on top of navbar when scrolling down sections | RI-Accessibility RI-Usability bug | 
As seen in the image above, when you scroll down the page, the nav bar will slowly fade out of visibility.
This may be caused by the gradient having a higher z-index than the nav bar, should be addressed in next commit. | True | Nav issue - Gradient on top of navbar when scrolling down sections - 
As seen in the image above, when you scroll down the page, the nav bar will slowly fade out of visibility.
This may be caused by the gradient having a higher z-index than the nav bar, should be addressed in next commit. | usab | nav issue gradient on top of navbar when scrolling down sections as seen in the image above when you scroll down the page the nav bar will slowly fade out of visibility this may be caused by the gradient having a higher z index than the nav bar should be addressed in next commit | 1
184,787 | 32,046,419,859 | IssuesEvent | 2023-09-23 03:32:44 | Shopify/polaris | https://api.github.com/repos/Shopify/polaris | closed | Revisit the purpose and technical constraints of code examples | Design #gsd:30559 Engineering no-issue-activity | The code example feature raises a few recurring questions that are both practical and existential in nature. These need to be considered and explored together and in more depth.
| Recurring topic | Comment | Visual |
|--------|--------|--------|
| **Real content or placeholders** | For example, the resource details pattern could be populated with real content that exemplifies a typical page, or with skeleton content which is easier to build and maintain. As discussed in #7944. |  |
| **Realistic or generic scenarios** | For example, the resource index pattern could show Products which would be more recognizable, or something imaginary that's more widely applicable especially in apps. What has more value to builders? |  |
| **Simple or exemplifying content** | For example, the app settings layout could provide only barebones page layout which might require less gutting, or include a variety of typical form components that showcase proper in-card layouts that demonstrate and ensure more good practices. |  |
| **Visually accurate examples or clean code** | For example, adding containers and/or styling that ensure that the date picker renders in a way that is recognizable from the admin, aligns better with the page narrative and is easier to understand, or provides as clean code as possible to be faster to work with. As discussed in [this comment](https://github.com/Shopify/polaris/issues/7942#issuecomment-1412975059). | Visual tba |
| **Cost vs benefit** | We need to balance how much time are we investing in creating and maintaining code examples, vs how much time are we saving builders and increasing quality for merchants? | n/a |
| **Polaris only or multiple sources:** | What code we could and should include in code examples. For example, the date range picker in the admin has code from `web`, utilizes [helpers](https://github.com/Shopify/quilt), and i18n, which currently isn't compatible with playroom and our on-page render. | n/a |
| 1.0 | Revisit the purpose and technical constraints of code examples - The code example feature raises a few recurring questions that are both practical and existential in nature. These need to be considered and explored together and in more depth.
| Recurring topic | Comment | Visual |
|--------|--------|--------|
| **Real content or placeholders** | For example, the resource details pattern could be populated with real content that exemplifies a typical page, or with skeleton content which is easier to build and maintain. As discussed in #7944. |  |
| **Realistic or generic scenarios** | For example, the resource index pattern could show Products which would be more recognizable, or something imaginary that's more widely applicable especially in apps. What has more value to builders? |  |
| **Simple or exemplifying content** | For example, the app settings layout could provide only barebones page layout which might require less gutting, or include a variety of typical form components that showcase proper in-card layouts that demonstrate and ensure more good practices. |  |
| **Visually accurate examples or clean code** | For example, adding containers and/or styling that ensure that the date picker renders in a way that is recognizable from the admin, aligns better with the page narrative and is easier to understand, or provides as clean code as possible to be faster to work with. As discussed in [this comment](https://github.com/Shopify/polaris/issues/7942#issuecomment-1412975059). | Visual tba |
| **Cost vs benefit** | We need to balance how much time are we investing in creating and maintaining code examples, vs how much time are we saving builders and increasing quality for merchants? | n/a |
| **Polaris only or multiple sources:** | What code we could and should include in code examples. For example, the date range picker in the admin has code from `web`, utilizes [helpers](https://github.com/Shopify/quilt), and i18n, which currently isn't compatible with playroom and our on-page render. | n/a |
| non_usab | revisit the purpose and technical constraints of code examples the code example feature raises a few recurring questions that are both practical and existential in nature these need to be considered and explored together and in more depth recurring topic comment visual real content or placeholders for example the resource details pattern could be populated with real content that exemplifies a typical page or with skeleton content which is easier to build and maintain as discussed in realistic or generic scenarios for example the resource index pattern could show products which would be more recognizable or something imaginary that s more widely applicable especially in apps what has more value to builders simple or exemplifying content for example the app settings layout could provide only barebones page layout which might require less gutting or include a variety of typical form components that showcase proper in card layouts that demonstrate and ensure more good practices visually accurate examples or clean code for example adding containers and or styling that ensure that the date picker renders in a way that is recognizable from the admin aligns better with the page narrative and is easier to understand or provides as clean code as possible to be faster to work with as discussed in visual tba cost vs benefit we need to balance how much time are we investing in creating and maintaining code examples vs how much time are we saving builders and increasing quality for merchants n a polaris only or multiple sources what code we could and should include in code examples for example the date range picker in the admin has code from web utilizes and which currently isn t compatible with playroom and our on page render n a | 0 |
8,064 | 5,376,951,949 | IssuesEvent | 2017-02-23 10:37:40 | cortoproject/corto | https://api.github.com/repos/cortoproject/corto | opened | Support for automatic unit conversions | Corto:TypeSystem Corto:Usability | To facilitate easier integration between applications that use different sets of units for measurement data (Fahrenheit vs. Celcius, Miles vs. Meters) an architecture is required that can automatically detect and translate measurements from one unit to another.
To ensure that such an architecture can be deployed in distributed systems, it should rely on minimal exchange of information (if any at all) before applications can start communicating with each other.
Corto has been designed with a "be generous in what you accept, strict in what you send out" philosophy, which in practice means that data that enters an application can take many forms before it is normalized to the local representation (type) that is defined by the application. There can be as many local representations (types) in a system as there are applications.
The same philosophy should apply to the system that will support unit conversions.
In this system, an application shall define the *semantics* of a measurement, whereas the (serialized) data contains meta information about the *units* of the measurement. For example, a system can have two applications which both publish `temperature` (semantics), where one application publishes `Fahrenheit`, and the other publishes `Celcius` (units).
The types defined by these applications will specify that they contain `temperature` data, whereas the serialized data specifies with which unit the measurement was published. Upon receiving this data, the application can look up the unit that it received within the semantic "group" defined by its type, and perform the conversion (if necessary).
### Unit group
Units should be grouped by semantic meaning. For example, `meter`, `kilometer` and `centimeter` would all belong to the `distance` group. If units are in the same group, it means that they are different representations of the same thing, and conversions are possible between them.
### Unit system
Units can be optionally annotated by a "unit system", which is an overlay concept that can span units from multiple semantic groups. A unit system allows a subscriber to receive all data in units annotated by that system without having to manually specify units for each value. This would for example allow a web application to switch from `imperial` to `metric` simply by changing the unit system of the subscriber.
### Default unit
A type shall be able to specify a default unit for that type. The default unit specifies the unit of the data that is stored in instances of that type (in that application). Data will be converted from and to the default unit. Once set, the default unit shall not be changed. This simplifies mapping of datatypes and makes writing applications easier, as code otherwise would always have to check/convert units before doing anything with the data.
To prevent an explosion of types, where for each unit a different type is required, it shall also be possible to specify default units for members and collection elements. This allows a member for example to specify that its type is `int32`, and the unit is `Temperature`.
An alternative to the latter would be to make a unit extend from `corto/lang/type`, in which case the unit itself could be used as type for a member, thus specifying the default unit.
### Automatic unit scaling
When measurements can span multiple orders of magnitude, it would be convenient if the framework supports dynamic scaling of units based on the measurement value. For example, if the default unit is `bytes`, but the measurement is `150MB`, it would be inconvenient if this would be represented as `150000000B`. To facilitate this, it should be possible to annotate units with a range that allows the framework to select the unit that best fits the measurement value.
When automatically selecting a unit, only units from the same unit system should be selected. This would prevent the system from automatically switching from `Kilometer` to `Mile`.
### Dynamic conversions
Some units, like currency, do not have fixed conversion ratios. The framework shall support an API with which a user can retrieve the most up-to-date conversion ratio.
### Unit notation
Units shall be uniquely identified within a semantic group with a unit symbol. It shall be possible to use this symbol in serialized formats to indicate the unit of a measurement. For example, the following definition shows how a measurement could potentially be created and populated:
```
distance/meter myMeasurement: 15km
```
Here, `distance` is the semantical group, `meter` is the unit (we'll assume for now that units can be used as types) and `km` is the unit symbol for `distance/kilometer`. After this operation, the value of `myMeasurement` should be `15000`.
Using a unit symbol as a postfix may not be suitable for all serialization formats. For example, JSON data is typically converted to JavaScript objects, and having to parse a string like "15km" is less convenient than storing the unit and measurement separately (`{"value":15,"unit":"km"}`)
| True | Support for automatic unit conversions - To facilitate easier integration between applications that use different sets of units for measurement data (Fahrenheit vs. Celcius, Miles vs. Meters) an architecture is required that can automatically detect and translate measurements from one unit to another.
To ensure that such an architecture can be deployed in distributed systems, it should rely on minimal exchange of information (if any at all) before applications can start communicating with each other.
Corto has been designed with a "be generous in what you accept, strict in what you send out" philosophy, which in practice means that data that enters an application can take many forms before it is normalized to the local representation (type) that is defined by the application. There can be as many local representations (types) in a system as there are applications.
The same philosophy should apply to the system that will support unit conversions.
In this system, an application shall define the *semantics* of a measurement, whereas the (serialized) data contains meta information about the *units* of the measurement. For example, a system can have two applications which both publish `temperature` (semantics), where one application publishes `Fahrenheit`, and the other publishes `Celcius` (units).
The types defined by these applications will specify that they contain `temperature` data, whereas the serialized data specifies with which unit the measurement was published. Upon receiving this data, the application can look up the unit that it received within the semantic "group" defined by its type, and perform the conversion (if necessary).
### Unit group
Units should be grouped by semantic meaning. For example, `meter`, `kilometer` and `centimeter` would all belong to the `distance` group. If units are in the same group, it means that they are different representations of the same thing, and conversions are possible between them.
### Unit system
Units can be optionally annotated by a "unit system", which is an overlay concept that can span units from multiple semantic groups. A unit system allows a subscriber to receive all data in units annotated by that system without having to manually specify units for each value. This would for example allow a web application to switch from `imperial` to `metric` simply by changing the unit system of the subscriber.
### Default unit
A type shall be able to specify a default unit for that type. The default unit specifies the unit of the data that is stored in instances of that type (in that application). Data will be converted from and to the default unit. Once set, the default unit shall not be changed. This simplifies mapping of datatypes and makes writing applications easier, as code otherwise would always have to check/convert units before doing anything with the data.
To prevent an explosion of types, where for each unit a different type is required, it shall also be possible to specify default units for members and collection elements. This allows a member for example to specify that its type is `int32`, and the unit is `Temperature`.
An alternative to the latter would be to make a unit extend from `corto/lang/type`, in which case the unit itself could be used as type for a member, thus specifying the default unit.
### Automatic unit scaling
When measurements can span multiple orders of magnitude, it would be convenient if the framework supports dynamic scaling of units based on the measurement value. For example, if the default unit is `bytes`, but the measurement is `150MB`, it would be inconvenient if this would be represented as `150000000B`. To facilitate this, it should be possible to annotate units with a range that allows the framework to select the unit that best fits the measurement value.
When automatically selecting a unit, only units from the same unit system should be selected. This would prevent the system from automatically switching from `Kilometer` to `Mile`.
### Dynamic conversions
Some units, like currency, do not have fixed conversion ratios. The framework shall support an API with which a user can retrieve the most up-to-date conversion ratio.
### Unit notation
Units shall be uniquely identified within a semantic group with a unit symbol. It shall be possible to use this symbol in serialized formats to indicate the unit of a measurement. For example, the following definition shows how a measurement could potentially be created and populated:
```
distance/meter myMeasurement: 15km
```
Here, `distance` is the semantical group, `meter` is the unit (we'll assume for now that units can be used as types) and `km` is the unit symbol for `distance/kilometer`. After this operation, the value of `myMeasurement` should be `15000`.
Using a unit symbol as a postfix may not be suitable for all serialization formats. For example, JSON data is typically converted to JavaScript objects, and having to parse a string like "15km" is less convenient than storing the unit and measurement separately (`{"value":15,"unit":"km"}`)
| usab | support for automatic unit conversions to facilitate easier integration between applications that use different sets of units for measurement data fahrenheit vs celcius miles vs meters an architecture is required that can automatically detect and translate measurements from one unit to another to ensure that such an architecture can be deployed in distributed systems it should rely on minimal exchange of information if any at all before applications can start communicating with each other corto has been designed with a be generous in what you accept strict in what you send out philosophy which in practice means that data that enters an application can take many forms before it is normalized to the local representation type that is defined by the application there can be as many local representations types in a system as there are applications the same philosophy should apply to the system that will support unit conversions in this system an application shall define the semantics of a measurement whereas the serialized data contains meta information about the units of the measurement for example a system can have two applications which both publish temperature semantics where one application publishes fahrenheit and the other publishes celcius units the types defined by these applications will specify that they contain temperature data whereas the serialized data specifies with which unit the measurement was published upon receiving this data the application can lookup the unit that it received within the semantic group defined by its type and perform the conversion if necessary unit group units should be grouped by semantic meaning for example meter kilometer and centimeter would all belong to the distance group if units are in the same group it means that they are different representations of the same thing and conversions are be possible between them unit system units can be optionally annotated by a unit system which is an overlay concept that can span 
units from multiple semantic groups a unit system allows a subscriber to receive all data in units annotated by that system without having to manually specify units for each value this would for example allow a web application to switch from imperial to metric simply by changing the unit system of the subscriber default unit a type shall be able to specify a default unit for that type the default unit specifies the unit of the data that is stored in instances of that type in that application data will be converted from and to the default unit once set the default unit shall not be changed this simplifies mapping of datatypes and makes writing applications easier as code otherwise would always have to check convert units before doing anything with the data to prevent an explosion of types where for each unit a different type is required it shall also be possible to specify default units for members and collection elements this allows a member for example to specify that its type is and the unit is temperature an alternative to the latter would be to make a unit extend from corto lang type in which case the unit itself could be used as type for a member thus specifying the default unit automatic unit scaling when measurements can span multiple orders of magnitude it would be convenient if the framework supports dynamic scaling of units based on the measurement value for example if the default unit is bytes but the measurement is it would be inconvenient if this would be represented as to facilitate this it should be possible to annotate units with a range that allows the framework to select the unit that best fits the measurement value when automatically selecting a unit only units from the same unit system should be selected this would prevent the system from automatically switching from kilometer to mile dynamic conversions some units like currency do not have fixed conversion ratios the framework shall support an api with which a user can retrieve the most up to 
date conversion ratio unit notation units shall be uniquely identified within a semantic group with a unit symbol it shall be possible to use this symbol in serialized formats to indicate the unit of a measurement for example the following definition shows how a measurement could potentially be created and populated distance meter mymeasurement here distance is the semantical group meter is the unit we ll assume for now that units can be used as types and km is the unit symbol for distance kilometer after this operation the value of mymeasurement should be using a unit symbol as a postfix may not be suitable for all serialization formats for example json data is typically converted to javascript objects and having to parse a string like is less convenient than storing the unit and measurement separately value unit km | 1 |
6,304 | 4,216,726,022 | IssuesEvent | 2016-06-30 10:17:32 | ff36/halo-gui | https://api.github.com/repos/ff36/halo-gui | closed | Incident report graph | change enhancement usability ux | The incident graph is still not performing as expected. Lets make the following changes:
- Change the line graph to a bar chart.
- Add a toggle switch top right to change between incident `ACTIVE`, `STARTED`, `ENDED`.
- Default to `ACTIVE`.
- `ACTIVE` = blue, `STARTED` = red, `ENDED` = green

Make sure you subdivide the logic into time slots to examine the incident at that moment in time by using the history. The logical process should be:
- Get all incidents that fall into the selected time range:
- Get all incidents where `history[0].time < {earliest_range} && history[history.size - 1].time > {latest_range}`
- Subdivide the reporting period into time unit blocks
- 1% of the selected range (so that a chart is always made up of 100 samples)
- Iterate through the incidents and add them to their respective time unit blocks.
- For `Started` add incident if `history[0].time > {time_unit_block_start} && history[0].time < {time_unit_block_end}`
- For `Ended` add incident if `history[history.size - 1].time > {time_unit_block_start} && history[history.size - 1].time < {time_unit_block_end}`
- For `Active`:
- Get closest history entry prior to time block start. (lets call this __history_block_start__)
- if `{history_block_start}.type == STARTED` then add incident.
- if `{history_block_start}.type == ENDED` then do not add incident.
- if `{history_block_start}.type == CHANGED && {history_block_start}.status == RED` then add incident.
- if `{history_block_start}.type == CHANGED && {history_block_start}.status == YELLOW` then do not add incident.
- if `{history_block_start}.type == CHANGED && {history_block_start}.status == GREEN` then do not add incident. | True | Incident report graph - The incident graph is still not performing as expected. Lets make the following changes:
- Change the line graph to a bar chart.
- Add a toggle switch top right to change between incident `ACTIVE`, `STARTED`, `ENDED`.
- Default to `ACTIVE`.
- `ACTIVE` = blue, `STARTED` = red, `ENDED` = green

Make sure you subdivide the logic into time slots to examine the incident at that moment in time by using the history. The logical process should be:
- Get all incidents that fall into the selected time range:
- Get all incidents where `history[0].time < {earliest_range} && history[history.size - 1].time > {latest_range}`
- Subdivide the reporting period into time unit blocks
- 1% of the selected range (so that a chart is always made up of 100 samples)
- Iterate through the incidents and add them to their respective time unit blocks.
- For `Started` add incident if `history[0].time > {time_unit_block_start} && history[0].time < {time_unit_block_end}`
- For `Ended` add incident if `history[history.size - 1].time > {time_unit_block_start} && history[history.size - 1].time < {time_unit_block_end}`
- For `Active`:
- Get closest history entry prior to time block start. (lets call this __history_block_start__)
- if `{history_block_start}.type == STARTED` then add incident.
- if `{history_block_start}.type == ENDED` then do not add incident.
- if `{history_block_start}.type == CHANGED && {history_block_start}.status == RED` then add incident.
- if `{history_block_start}.type == CHANGED && {history_block_start}.status == YELLOW` then do not add incident.
- if `{history_block_start}.type == CHANGED && {history_block_start}.status == GREEN` then do not add incident. | usab | incident report graph the incident graph is still not performing as expected lets make the following changes change the line graph to a bar chart add a toggle switch top right to change between incident active started ended default to active active blue started red ended green make sure you subdivide the logic into time slots to examine the incident at that moment in time by using the history the logical process should be get all incidents that fall into the selected time range get all incidents where history time latest range subdivide the reporting period into time unit blocks of the selected range so that a chart is always made up of samples iterate through the incidents and add them to their respective time unit blocks for started add incident if history time time unit block start history time time unit block end for ended add incident if history time time unit block start history time time unit block end for active get closest history entry prior to time block start lets call this history block start if history block start type started then add incident if history block start type ended then do not add incident if history block start type changed history block start status red then add incident if history block start type changed history block start status yellow then do not add incident if history block start type changed history block start status green then do not add incident | 1 |
297,163 | 25,604,488,569 | IssuesEvent | 2022-12-01 23:47:31 | mbmartin44/Aurora-2022 | https://api.github.com/repos/mbmartin44/Aurora-2022 | closed | [Verification Feature] - Add_LLE_MATLAB | Testing/Verification Feature | ** ONLY FOR ISOLATED FEATURES (SHOULD AFFECT NO OTHER PROJECTS)
** SHOULD BE SI ISSUE IF AFFECTING OTHER PROJECTS
** Give a description of the new feature **
** Identify any issues that may arise for other projects (UI, DSP, Networking)**
** Link any issues related to this one in the comment section **
| 1.0 | [Verification Feature] - Add_LLE_MATLAB - ** ONLY FOR ISOLATED FEATURES (SHOULD AFFECT NO OTHER PROJECTS)
** SHOULD BE SI ISSUE IF AFFECTING OTHER PROJECTS
** Give a description of the new feature **
** Identify any issues that may arise for other projects (UI, DSP, Networking)**
** Link any issues related to this one in the comment section **
| non_usab | add lle matlab only for isolated features should affect no other projects should be si issue if affecting other projects give a description of the new feature identify any issues that may arise for other projects ui dsp networking link any issues related to this one in the comment section | 0 |
24,921 | 24,489,323,579 | IssuesEvent | 2022-10-09 21:21:56 | bevyengine/bevy | https://api.github.com/repos/bevyengine/bevy | closed | Make default background color of UI `NodeBundle`s transparent | C-Enhancement A-UI C-Usability | ## What problem does this solve or what need does it fill?
The `NodeBundle` is mostly used as a layout tool in Bevy UI.
However, it derives the default `BackgroundColor`, which is white.
In almost all cases, this is not what you want and you end up manually specifying `Color::NONE.into()` or a color other than white as a background color.
## What solution would you like?
- Set `Color::NONE.into()` as a default for `NodeBundle.background_color`.
- Yeet the occurrences of explicitly setting `Color::NONE.into()` for `NodeBundle`s.
## What alternative(s) have you considered?
- Leave it as is. It's annoying, but workable.
- Make the default `BackgroundColor` transparent, instead of just changing the default for `NodeBundle`. However, `BackgroundColor` is also used for tinting, where white is a sensible default.
| True | Make default background color of UI `NodeBundle`s transparent - ## What problem does this solve or what need does it fill?
The `NodeBundle` is mostly used as a layout tool in Bevy UI.
However, it derives the default `BackgroundColor`, which is white.
In almost all cases, this is not what you want and you end up manually specifying `Color::NONE.into()` or a color other than white as a background color.
## What solution would you like?
- Set `Color::NONE.into()` as a default for `NodeBundle.background_color`.
- Yeet the occurrences of explicitly setting `Color::NONE.into()` for `NodeBundle`s.
## What alternative(s) have you considered?
- Leave it as is. It's annoying, but workable.
- Make the default `BackgroundColor` transparent, instead of just changing the default for `NodeBundle`. However, `BackgroundColor` is also used for tinting, where white is a sensible default.
| usab | make default background color of ui nodebundle s transparent what problem does this solve or what need does it fill the nodebundle is mostly used as a layout tool in bevy ui however it derives the default backgroundcolor which is white in almost all cases this is not what you want and you end up manually specifying color none into or a color other than white as a background color what solution would you like set color none into as a default for nodebundle background color yeet the occurrences of explicitly setting color none into for nodebundle s what alternative s have you considered leave it as is it s annoying but workable make the default backgroundcolor transparent instead of just changing the default for nodebundle however backgroundcolor is also used for tinting where white is a sensible default | 1 |
92,954 | 8,389,255,933 | IssuesEvent | 2018-10-09 09:05:59 | eth-cscs/reframe | https://api.github.com/repos/eth-cscs/reframe | opened | Port MCH tests to the C2SM environments | prio: normal regression test | Tests to be ported:
- [ ] cscs-checks/mch/automatic_arrays.py
- [ ] cscs-checks/mch/g2g_meteoswiss_check.py
- [ ] cscs-checks/mch/gpu_direct_acc.py
- [ ] cscs-checks/mch/gpu_direct_cuda.py
- [ ] cscs-checks/mch/openacc_cuda_mpi_cppstd.py
- [ ] cscs-checks/prgenv/openacc_checks.py (I'm not sure about this one)
- [ ] cscs-checks/libraries/io/netcdf_compile_run.py
- [ ] cscs-checks/tools/io/cdo.py
- [ ] cscs-checks/tools/io/nco.py
| 1.0 | Port MCH tests to the C2SM environments - Tests to be ported:
- [ ] cscs-checks/mch/automatic_arrays.py
- [ ] cscs-checks/mch/g2g_meteoswiss_check.py
- [ ] cscs-checks/mch/gpu_direct_acc.py
- [ ] cscs-checks/mch/gpu_direct_cuda.py
- [ ] cscs-checks/mch/openacc_cuda_mpi_cppstd.py
- [ ] cscs-checks/prgenv/openacc_checks.py (I'm not sure about this one)
- [ ] cscs-checks/libraries/io/netcdf_compile_run.py
- [ ] cscs-checks/tools/io/cdo.py
- [ ] cscs-checks/tools/io/nco.py
| non_usab | port mch tests to the environments tests to be ported cscs checks mch automatic arrays py cscs checks mch meteoswiss check py cscs checks mch gpu direct acc py cscs checks mch gpu direct cuda py cscs checks mch openacc cuda mpi cppstd py cscs checks prgenv openacc checks py i m not sure about this one cscs checks libraries io netcdf compile run py cscs checks tools io cdo py cscs checks tools io nco py | 0 |
4,790 | 3,886,470,128 | IssuesEvent | 2016-04-14 01:11:18 | lionheart/openradar-mirror | https://api.github.com/repos/lionheart/openradar-mirror | opened | 19923944: Photos 1.0: Can not command click multiple suggested faces | classification:ui/usability reproducible:always status:open | #### Description
Summary:
In a library with many photos you may want to ignore a lot of suggested faces at one time. Currently you can not command click suggested faces to select multiple faces at once.
Steps to Reproduce:
1. Have a library with a lot of suggested faces.
2. Try to command click two faces.
Expected Results:
1. Have a library with a lot of suggested faces.
2. Try to command click two faces.
3. Both faces become selected allowing you to ignore both faces at once.
Actual Results:
1. Have a library with a lot of suggested faces.
2. Try to command click two faces.
3. Command clicking another face deselects the first face.
Regression:
None.
Notes:
None.
-
Product Version: Photos 1.0 (205.34.0)
Created: 2015-02-23T19:05:44.093018
Originated: 2015-02-23T11:05:00
Open Radar Link: http://www.openradar.me/19923944 | True | 19923944: Photos 1.0: Can not command click multiple suggested faces - #### Description
Summary:
In a library with many photos you may want to ignore a lot of suggested faces at one time. Currently you can not command click suggested faces to select multiple faces at once.
Steps to Reproduce:
1. Have a library with a lot of suggested faces.
2. Try to command click two faces.
Expected Results:
1. Have a library with a lot of suggested faces.
2. Try to command click two faces.
3. Both faces become selected allowing you to ignore both faces at once.
Actual Results:
1. Have a library with a lot of suggested faces.
2. Try to command click two faces.
3. Command clicking another face deselects the first face.
Regression:
None.
Notes:
None.
-
Product Version: Photos 1.0 (205.34.0)
Created: 2015-02-23T19:05:44.093018
Originated: 2015-02-23T11:05:00
Open Radar Link: http://www.openradar.me/19923944 | usab | photos can not command click multiple suggested faces description summary in a library with many photos you may want to ignore a lot of suggested faces at one time currently you can not command click suggested faces to select multiple faces at once steps to reproduce have a library with a lot of suggested faces try to command click two faces expected results have a library with a lot of suggested faces try to command click two faces both faces become selected allowing you to ignore both faces at once actual results have a library with a lot of suggested faces try to command click two faces command clicking another face deselects the first face regression none notes none product version photos created originated open radar link | 1 |
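The trailing lowercase field in each row above reads as a normalized copy of the record's title and body: lowercased, with digits, punctuation, and URLs stripped, and whitespace collapsed. A minimal sketch of one plausible normalization pass — the dataset's actual preprocessing pipeline is an assumption, and the function name is hypothetical:

```python
import re

def normalize(text: str) -> str:
    """Approximate the dataset's lowercase text field.

    Lowercases, drops URLs, keeps only ASCII letters and spaces,
    and collapses runs of whitespace. A guess at the preprocessing;
    the real pipeline may differ (e.g. unicode or stop-word handling).
    """
    text = text.lower()
    text = re.sub(r"https?://\S+", " ", text)  # drop URLs
    text = re.sub(r"[^a-z\s]", " ", text)      # keep letters only
    return re.sub(r"\s+", " ", text).strip()   # collapse whitespace

print(normalize("Photos 1.0: Can not command click faces"))
# → photos can not command click faces
```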
15,275 | 9,922,839,500 | IssuesEvent | 2019-07-01 04:52:09 | peeringdb/peeringdb | https://api.github.com/repos/peeringdb/peeringdb | closed | Translation missing and might be wrong | usability | https://www.peeringdb.com/net/694
When you are in “Netzwerke” it still says “Öffentliche Peering-Exchanges”, even though “Exchanges” has already been translated to “Austauschpunkte”.
It also says “Private Peering Facilities” and not “Private Peering Liegenschaften”
IMHO I would have translated “facility” as “Einrichtung” rather than “Liegenschaft”. | True | Translation missing and might be wrong - https://www.peeringdb.com/net/694
When you are in “Netzwerke” there ist still says “Öffentliche Peering-Exchanges” even so “Exchanges” already have been translated to “Austauschpunkte”
It also says “Private Peering Facilities” and not “Private Peering Liegenschaften”
IMHO I would have translated facility rather to “Einrichtung” than “Liegenschaft”. | usab | translation missing and might be wrong when you are in “netzwerke” there ist still says “öffentliche peering exchanges” even so “exchanges” already have been translated to “austauschpunkte” it also says “private peering facilities” and not “private peering liegenschaften” imho i would have translated facility rather to “einrichtung” than “liegenschaft” | 1 |
21,890 | 18,043,545,823 | IssuesEvent | 2021-09-18 13:26:57 | godotengine/godot | https://api.github.com/repos/godotengine/godot | closed | Errors in debugger aren't cleared on run | bug topic:editor usability regression | ### Godot version
a1a8afa
### System information
W10
### Issue description
Seems like the Errors tab in the debugger isn't auto-cleared anymore. What's more interesting, the error count for Debugger tab does reset, so there is a mismatch between the number and the actual accumulated amount of errors:

### Steps to reproduce
1. Make some code with errors/warnings (very easy on master, you don't even need code lol)
2. Run
3. Run
4. Notice how the errors accumulated in the Debugger
### Minimal reproduction project
_No response_ | True | Errors in debugger aren't cleared on run - ### Godot version
a1a8afa
### System information
W10
### Issue description
Seems like the Errors tab in the debugger isn't auto-cleared anymore. What's more interesting, the error count for Debugger tab does reset, so there is a mismatch between the number and the actual accumulated amount of errors:

### Steps to reproduce
1. Make some code with errors/warnings (very easy on master, you don't even need code lol)
2. Run
3. Run
4. Notice how the errors accumulated in the Debugger
### Minimal reproduction project
_No response_ | usab | errors in debugger aren t cleared on run godot version system information issue description seems like the errors tab in the debugger isn t auto cleared anymore what s more interesting the error count for debugger tab does reset so there is a mismatch between the number and the actual accumulated amount of errors steps to reproduce make some code with errors warnings very easy on master you don t even need code lol run run notice how the errors accumulated in the debugger minimal reproduction project no response | 1 |
335,429 | 24,468,107,568 | IssuesEvent | 2022-10-07 16:53:06 | bondaleksey/credit-card-fraud-detection | https://api.github.com/repos/bondaleksey/credit-card-fraud-detection | closed | Decompose the project and create a preliminary set of tasks | documentation good first issue | Decompose the project and form a set of tasks by type: infrastructure, data preparation, modeling, deployment, etc.
Create a preliminary set of tasks for the project. | 1.0 | Decompose the project and create a preliminary set of tasks - Decompose the project and form a set of tasks by type: infrastructure, data preparation, modeling, deployment, etc.
Create a preliminary set of tasks for the project. | non_usab | decompose the project and create a preliminary set of tasks decompose the project and form a set of tasks by type infrastructure data preparation modeling deployment etc create a preliminary set of tasks for the project | 0
4,477 | 3,870,024,549 | IssuesEvent | 2016-04-10 23:01:31 | lionheart/openradar-mirror | https://api.github.com/repos/lionheart/openradar-mirror | opened | 23112422: full featured Xcode zoom | classification:ui/usability reproducible:always status:open | #### Description
Summary:
Designing, arranging and adjusting objects in the "Interface Builder" is currently only possible at 100% zoom. Esp. with iPad pro and tvOS this is a time wrecking and cumbersome process.
Other applications like Pages, Keynote, Numbers do support their editing and layout features at more than just the 100% zoom - and so should Xcode.
Steps to Reproduce (example):
1. open a storyboard for tvOS
2. add a navigation view controller and two or more navigation controllers
3. zoom out to 50%
3. add segues to create a typical navigation stack between the view controllers
Expected Results:
At all zoom scales all editing features should work as they work at 100%.
Actual Results:
Xcode either zooms out to 100% zoom automatically and interrupts the segues creation or does not let the user create a segue. The solution involves heavy scrolling, missing targets and rearranging objects.
Regression:
At the moment only the 27" iMac can display a full tvOS view controller with the usual palettes. For multi-monitor or MacBook usage this is a pain significantly wasting time and efficiency while developing for these platforms.
Notes:
It's important to note that more than just segue creation via drag&drop is affected by this limitation. Xcode should support editing in all supported zoom scales for (but not limited to):
- segues drag&drop
- view/scene drag&drop
- alignment of views
- resizing of objects
- creating objects by means of drag&drop from the objects palette
- selecting objects for inspection
- editing of layout constraints for Auto Layout
- display/editing of size classes
-
Product Version:
Created: 2015-10-14T19:06:24.434300
Originated: 2015-10-14T21:05:00
Open Radar Link: http://www.openradar.me/23112422 | True | 23112422: full featured Xcode zoom - #### Description
Summary:
Designing, arranging and adjusting objects in the "Interface Builder" is currently only possible at 100% zoom. Esp. with iPad pro and tvOS this is a time wrecking and cumbersome process.
Other applications like Pages, Keynote, Numbers do support their editing and layout features at more than just the 100% zoom - and so should Xcode.
Steps to Reproduce (example):
1. open a storyboard for tvOS
2. add a navigation view controller and two ore more navigation controllers
3. zoom out to 50%
3. add segues to create a typical navigation stack between the view controllers
Expected Results:
At all zoom scales all editing features should work as they work at 100%.
Actual Results:
Xcode either zooms out to 100% zoom automatically and interrupts the segues creation or does not let the user create a segue. The solution involves heavy scrolling, missing targets and rearranging objects.
Regression:
At the moment only the 27" iMac can display a full tvOS view controller with the usual palettes. For multi-monitor or MacBook usage this is a pain significantly wasting time and efficiency while developing for these platforms.
Notes:
It's important to note that more than just segue creation via drag&drop is affected by this limitation. Xcode should support editing in all supported zoom scales for (but not limited to):
- segues drag&drop
- view/scene drag&drop
- alignment of views
- resizing of objects
- creating objects by means of drag&drop from the objects palette
- selecting objects for inspection
- editing of layout constraints for Auto Layout
- display/editing of size classes
-
Product Version:
Created: 2015-10-14T19:06:24.434300
Originated: 2015-10-14T21:05:00
Open Radar Link: http://www.openradar.me/23112422 | usab | full featured xcode zoom description summary designing arranging and adjusting objects in the interface builder is currently only possible at zoom esp with ipad pro and tvos this is a time wrecking and cumbersome process other applications like pages keynote numbers do support their editing and layout features at more than just the zoom and so should xcode steps to reproduce example open a storyboard for tvos add a navigation view controller and two ore more navigation controllers zoom out to add segues to create a typical navigation stack between the view controllers expected results at all zoom scales all editing features should work as they work at actual results xcode either zooms out to zoom automatically and interrupts the segues creation or does not let the user create a segue the solution involves heavy scrolling missing targets and rearranging objects regression at the moment only the imac can display a full tvos view controller with the usual palettes for multi monitor or macbook usage this is a pain significantly wasting time and efficiency while developing for these platforms notes it s important to note that more than just segue creation via drag drop is affected by this limitation xcode should support editing in all supported zoom scales for but not limited to segues drag drop view scene drag drop alignment of views resizing of objects creating objects by means of drag drop from the objects palette selecting objects for inspection editing of layout constraints for auto layout display editing of size classes product version created originated open radar link | 1 |
307,502 | 9,417,951,635 | IssuesEvent | 2019-04-10 17:59:53 | hotosm/tasking-manager | https://api.github.com/repos/hotosm/tasking-manager | closed | Fix deprecated nodejs packages | Difficulty: Easy Priority: Low Type: Enhancement | We use a lot of deprecated nodejs packages. Let's update the nodejs packages to the latest version.
cc/ @hotosm/tech | 1.0 | Fix deprecated nodejs packages - We use a lot of deprecated nodejs packages. Let's update the nodejs packages to the latest version.
cc/ @hotosm/tech | non_usab | fix deprecated nodejs packages we use a lot of deprecated nodejs packages let s update the nodejs packages to the latest version cc hotosm tech | 0 |
8,529 | 5,798,457,847 | IssuesEvent | 2017-05-03 01:52:15 | usnistgov/800-63-3 | https://api.github.com/repos/usnistgov/800-63-3 | closed | Biometric usability focuses on fingerprint, face, and iris; however, voice is also frequently used, particularly with mobile devices. | 63B Biometrics decline usability | **Organization Name (N/A, if individual)**: DHS S&T FRG
**Organization Type**: 1
**Document (63-3, 63A, 63B, or 63C)**: 800-63B
**Reference (Include section and paragraph number)**: 10.4, 2nd para
**Comment (Include rationale for comment)**: Biometric usability focuses on fingerprint, face, and iris; however, voice is also frequently used, particularly with mobile devices.
Completeness.
**Suggested Change**: Suggest that voice be covered as well, especially since it is the only non-image based biometric and may therefore have unique considerations.
---
Organization Type: 1 = Federal, 2 = Industry, 3 = Academia, 4 = Self, 5 = Other | True | Biometric usability focuses on fingerprint, face, and iris; however, voice is also frequently used, particularly with mobile devices. - **Organization Name (N/A, if individual)**: DHS S&T FRG
**Organization Type**: 1
**Document (63-3, 63A, 63B, or 63C)**: 800-63B
**Reference (Include section and paragraph number)**: 10.4, 2nd para
**Comment (Include rationale for comment)**: Biometric usability focuses on fingerprint, face, and iris; however, voice is also frequently used, particularly with mobile devices.
Completeness.
**Suggested Change**: Suggest that voice be covered as well, especially since it is the only non-image based biometric and may therefore have unique considerations.
---
Organization Type: 1 = Federal, 2 = Industry, 3 = Academia, 4 = Self, 5 = Other | usab | biometric usability focuses on fingerprint face and iris however voice is also frequently used particularly with mobile devices organization name n a if individual dhs s t frg organization type document or reference include section and paragraph number para comment include rationale for comment biometric usability focuses on fingerprint face and iris however voice is also frequently used particularly with mobile devices completeness suggested change suggest that voice be covered as well especially since it is the only non image based biometric and may therefore have unique considerations organization type federal industry academia self other | 1 |
26,179 | 26,520,832,616 | IssuesEvent | 2023-01-19 02:20:45 | microsoft/win32metadata | https://api.github.com/repos/microsoft/win32metadata | closed | QueryDisplayConfig return type, parameter touch-ups | usability | The `QueryDisplayConfig` method is [documented as returning](https://learn.microsoft.com/en-us/windows/win32/api/winuser/nf-winuser-querydisplayconfig#return-value) values from the `WIN32_ERROR` error enum. Can the metadata be changed to reflect this instead of returning just an `int`? Same for [GetDisplayConfigBufferSizes](https://learn.microsoft.com/en-us/windows/win32/api/winuser/nf-winuser-getdisplayconfigbuffersizes#return-value).
Also, and more importantly, the last parameter is documented as `[optional, out]` but the metadata only retains `[out]`. As a result, the C# projection emits a friendly overload with an `out` parameter, which makes it very difficult to learn to pass `null` to. Can we add the missing modifier?
These came from a customer report: https://github.com/microsoft/CsWin32/issues/844 | True | QueryDisplayConfig return type, parameter touch-ups - The `QueryDisplayConfig` method is [documented as returning](https://learn.microsoft.com/en-us/windows/win32/api/winuser/nf-winuser-querydisplayconfig#return-value) values from the `WIN32_ERROR` error enum. Can the metadata be changed to reflect this instead of returning just an `int`? Same for [GetDisplayConfigBufferSizes](https://learn.microsoft.com/en-us/windows/win32/api/winuser/nf-winuser-getdisplayconfigbuffersizes#return-value).
Also, and more importantly, the last parameter is documented as `[optional, out]` but the metadata only retains `[out]`. As a result, the C# projection emits a friendly overload with an `out` parameter, which makes it very difficult to learn to pass `null` to. Can we add the missing modifier?
These came from a customer report: https://github.com/microsoft/CsWin32/issues/844 | usab | querydisplayconfig return type parameter touch ups the querydisplayconfig method is values from the error error enum can the metadata be changed to reflect this instead of returning just an int same for also and more importantly the last parameter is documented as but the metadata only retains as a result the c projection emits a friendly overload with an out parameter which makes it very difficult to learn to pass null to can we add the missing modifier these came from a customer report | 1 |
239,896 | 18,287,516,659 | IssuesEvent | 2021-10-05 12:01:53 | thejasvibr/bat_beamshapes | https://api.github.com/repos/thejasvibr/bat_beamshapes | closed | Link to documentation broken | documentation | Hi, this issue is related to your submission to JOSS, [#3740](https://github.com/openjournals/joss-reviews/issues/3740).
Currently your `README` links to the docs under
```
https://github.com/thejasvibr/bat_beamshapes/blob/dev/beamshapes.rtfd.io
```
while I believe the correct address should be
```
https://beamshapes.rtfd.io
```
Also to improve visibility of the docs I would also suggest to add this URL to the Repo info, and possibly also add an [RDT badge](https://docs.readthedocs.io/en/stable/badges.html) | 1.0 | Link to documentation broken - Hi, this issue is related to your submission to JOSS, [#3740](https://github.com/openjournals/joss-reviews/issues/3740).
Currently your `README` links to the docs under
```
https://github.com/thejasvibr/bat_beamshapes/blob/dev/beamshapes.rtfd.io
```
while I believe the correct address should be
```
https://beamshapes.rtfd.io
```
Also to improve visibility of the docs I would also suggest to add this URL to the Repo info, and possibly also add an [RDT badge](https://docs.readthedocs.io/en/stable/badges.html) | non_usab | link to documentation broken hi this issue is related to your submission to joss currently your readme links to the docs under while i believe the correct address should be also to improve visibility of the docs i would also suggest to add this url to the repo info and possibly also add an | 0 |
9,605 | 6,412,071,109 | IssuesEvent | 2017-08-08 01:35:41 | FReBOmusic/FReBO | https://api.github.com/repos/FReBOmusic/FReBO | opened | Facebook Sign-In Button | Usability | In the event that the user navigates to the Login Screen.
**Expected Response**: The Facebook Sign-In Button should be displayed on the Login Screen. | True | Facebook Sign-In Button - In the event that the user navigates to the Login Screen.
**Expected Response**: The Facebook Sign-In Button should be displayed on the Login Screen. | usab | facebook sign in button in the event that the user navigates to the login screen expected response the facebook sign in button should be displayed on the login screen | 1 |
39,389 | 15,984,169,297 | IssuesEvent | 2021-04-18 12:09:21 | localstack/localstack | https://api.github.com/repos/localstack/localstack | closed | Unable to find forwarding rule for host "localhost:4566", path "DELETE | service:s3 should-be-fixed | <!-- Love localstack? Please consider supporting our collective:
👉 https://opencollective.com/localstack/donate -->
# Type of request: This is a ...
- [X] bug report
# Detailed description
This is my localstack version
> localstack --version
> 0.12.6.1
I am using the latest version of amazon java sdk
```
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>aws-java-sdk-s3</artifactId>
<version>1.11.959</version>
</dependency>
```
This is my config for aws localstack connection
```
return AmazonS3ClientBuilder.standard()
.enablePathStyleAccess()
.withEndpointConfiguration(
new AwsClientBuilder.EndpointConfiguration("http://localhost:4566", region)
)
.build();
```
**Issue: Deleting an object on the localstack s3 service will produce an error.**
## Expected behavior
No errors and the file should be deleted.
## Actual behavior
INFO:localstack.services.edge: Unable to find forwarding rule for host "localhost:4566", path "DELETE /<_bucket_>/<_filename_>", target header "", auth header "", data "b''"
# Steps to reproduce
Delete an existing file using the aws java sdk.
## Command used to start LocalStack
localstack start
## Client code (AWS SDK code snippet, or sequence of "awslocal" commands)
`amazonS3.deleteObject(bucket, filename);`
| 1.0 | Unable to find forwarding rule for host "localhost:4566", path "DELETE - <!-- Love localstack? Please consider supporting our collective:
👉 https://opencollective.com/localstack/donate -->
# Type of request: This is a ...
- [X] bug report
# Detailed description
This is my localstack version
> localstack --version
> 0.12.6.1
I am using the latest version of amazon java sdk
```
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>aws-java-sdk-s3</artifactId>
<version>1.11.959</version>
</dependency>
```
This is my config for aws localstack connection
```
return AmazonS3ClientBuilder.standard()
.enablePathStyleAccess()
.withEndpointConfiguration(
new AwsClientBuilder.EndpointConfiguration("http://localhost:4566", region)
)
.build();
```
**Issue: Deleting an object on the localstack s3 service will produce an error.**
## Expected behavior
No errors and the file should be deleted.
## Actual behavior
INFO:localstack.services.edge: Unable to find forwarding rule for host "localhost:4566", path "DELETE /<_bucket_>/<_filename_>", target header "", auth header "", data "b''"
# Steps to reproduce
Delete an existing file using the aws java sdk.
## Command used to start LocalStack
localstack start
## Client code (AWS SDK code snippet, or sequence of "awslocal" commands)
`amazonS3.deleteObject(bucket, filename);`
| non_usab | unable to find forwarding rule for host localhost path delete love localstack please consider supporting our collective 👉 type of request this is a bug report detailed description this is my localstack version localstack version i am using the latest version of amazon java sdk com amazonaws aws java sdk this is my config for aws localstack connection return standard enablepathstyleaccess withendpointconfiguration new awsclientbuilder endpointconfiguration region build issue deleting an object on the localstack service will produce an error expected behavior no errors and the file should be deleted actual behavior info localstack services edge unable to find forwarding rule for host localhost path delete target header auth header data b steps to reproduce delete an existing file using the aws java sdk command used to start localstack localstack start client code aws sdk code snippet or sequence of awslocal commands deleteobject bucket filename | 0 |
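The client configuration in the record above calls `enablePathStyleAccess()` against the `http://localhost:4566` endpoint. The difference between path-style and virtual-hosted-style S3 object URLs — the reason that flag matters for a local endpoint — can be sketched like this (endpoint, bucket, and key names are illustrative):

```python
def s3_object_urls(endpoint: str, bucket: str, key: str) -> tuple[str, str]:
    """Return (path_style, virtual_hosted_style) URLs for one object.

    Path style keeps the bucket in the URL path, so any host (such as
    localhost:4566) can serve it; virtual-hosted style moves the bucket
    into the hostname, which only resolves against real S3 DNS.
    """
    scheme, host = endpoint.split("://", 1)
    path_style = f"{scheme}://{host}/{bucket}/{key}"
    virtual_hosted = f"{scheme}://{bucket}.{host}/{key}"
    return path_style, virtual_hosted

p, v = s3_object_urls("http://localhost:4566", "my-bucket", "file.txt")
# p == "http://localhost:4566/my-bucket/file.txt"
# v == "http://my-bucket.localhost:4566/file.txt"
```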
21,903 | 18,058,447,184 | IssuesEvent | 2021-09-20 11:16:06 | godotengine/godot | https://api.github.com/repos/godotengine/godot | closed | Can't close all scripts if some are unsaved | bug topic:editor usability | <!-- Please search existing issues for potential duplicates before filing yours:
https://github.com/godotengine/godot/issues?q=is%3Aissue
-->
**Godot version:**
<!-- Specify commit hash if using non-official build. -->
3.2.4 rc3 / dd1881a
**Issue description:**
<!-- What happened, and what was expected. -->
When there is an unsaved script on your script list, using Close All or Close Others will stop on that script. The unsaved dialog will interrupt closing scripts, so you have to use it again.
This is especially problematic with built-in scripts, which tend to ask you to save changes even if they are unmodified.
**Steps to reproduce:**
1. Open few scripts
2. Modify 1 of them (or 2 for best effect)
3. Right click script list and Close All
4. Not all scripts are closed, because of the dialog | True | Can't close all scripts if some are unsaved - <!-- Please search existing issues for potential duplicates before filing yours:
https://github.com/godotengine/godot/issues?q=is%3Aissue
-->
**Godot version:**
<!-- Specify commit hash if using non-official build. -->
3.2.4 rc3 / dd1881a
**Issue description:**
<!-- What happened, and what was expected. -->
When there is an unsaved script on your script list, using Close All or Close Others will stop on that script. The unsaved dialog will interrupt closing scripts, so you have to use it again.
This is especially problematic with built-in scripts, which tend to ask you to save changes even if they are unmodified.
**Steps to reproduce:**
1. Open few scripts
2. Modify 1 of them (or 2 for best effect)
3. Right click script list and Close All
4. Not all scripts are closed, because of the dialog | usab | can t close all scripts if some are unsaved please search existing issues for potential duplicates before filing yours godot version issue description when there is an unsaved script on your script list using close all or close others will stop on that script the unsaved dialog will interrupt closing scripts so you have to use it again this is especially problematic with built in scripts which tend to ask you to save changes even if they are unmodified steps to reproduce open few scripts modify of them or for best effect right click script list and close all not all scripts are closed because of the dialog | 1 |
15,218 | 9,523,097,038 | IssuesEvent | 2019-04-27 14:32:50 | henriquecarv/gulp-sass-helper | https://api.github.com/repos/henriquecarv/gulp-sass-helper | closed | CVE-2018-19838 Medium Severity Vulnerability detected by WhiteSource | security vulnerability | ## CVE-2018-19838 - Medium Severity Vulnerability
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-sass v4.11.0</b></p></summary>
<p>
<p>:rainbow: Node.js bindings to libsass</p>
<p>Library home page: <a href=https://github.com/sass/node-sass.git>https://github.com/sass/node-sass.git</a></p>
</p>
</details>
</p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/vulnerability_details.png' width=19 height=20> Library Source Files (125)</summary>
<p></p>
<p> * The source files were matched to this source library based on a best effort match. Source libraries are selected from a list of probable public libraries.</p>
<p>
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/expand.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/color_maps.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/sass_util.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/utf8/unchecked.h
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/output.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/sass_values.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/util.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/emitter.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/lexer.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/test/test_node.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/plugins.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/include/sass/base.h
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/position.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/subset_map.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/operation.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/remove_placeholders.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/error_handling.hpp
- /gulp-sass-helper/node_modules/node-sass/src/custom_importer_bridge.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/contrib/plugin.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/functions.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/test/test_superselector.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/eval.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/utf8_string.hpp
- /gulp-sass-helper/node_modules/node-sass/src/sass_context_wrapper.h
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/error_handling.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/node.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/parser.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/subset_map.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/emitter.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/listize.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/ast.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/sass_functions.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/memory/SharedPtr.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/output.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/check_nesting.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/ast_def_macros.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/functions.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/cssize.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/prelexer.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/paths.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/ast_fwd_decl.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/inspect.hpp
- /gulp-sass-helper/node_modules/node-sass/src/sass_types/color.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/test/test_unification.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/values.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/sass_util.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/source_map.hpp
- /gulp-sass-helper/node_modules/node-sass/src/sass_types/list.h
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/check_nesting.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/json.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/units.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/units.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/context.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/utf8/checked.h
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/listize.hpp
- /gulp-sass-helper/node_modules/node-sass/src/sass_types/string.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/prelexer.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/context.hpp
- /gulp-sass-helper/node_modules/node-sass/src/sass_types/boolean.h
- /gulp-sass-helper/node_modules/node-sass/src/libsass/include/sass2scss.h
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/eval.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/expand.cpp
- /gulp-sass-helper/node_modules/node-sass/src/sass_types/factory.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/operators.cpp
- /gulp-sass-helper/node_modules/node-sass/src/sass_types/boolean.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/source_map.cpp
- /gulp-sass-helper/node_modules/node-sass/src/sass_types/value.h
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/utf8_string.cpp
- /gulp-sass-helper/node_modules/node-sass/src/callback_bridge.h
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/file.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/sass.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/node.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/environment.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/extend.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/sass_context.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/operators.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/constants.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/sass.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/ast_fwd_decl.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/parser.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/constants.cpp
- /gulp-sass-helper/node_modules/node-sass/src/sass_types/list.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/cssize.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/include/sass/functions.h
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/util.cpp
- /gulp-sass-helper/node_modules/node-sass/src/custom_function_bridge.cpp
- /gulp-sass-helper/node_modules/node-sass/src/custom_importer_bridge.h
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/bind.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/inspect.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/sass_functions.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/backtrace.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/extend.cpp
- /gulp-sass-helper/node_modules/node-sass/src/sass_types/sass_value_wrapper.h
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/debugger.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/cencode.c
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/base64vlq.cpp
- /gulp-sass-helper/node_modules/node-sass/src/sass_types/number.cpp
- /gulp-sass-helper/node_modules/node-sass/src/sass_types/color.h
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/c99func.c
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/position.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/remove_placeholders.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/sass_values.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/include/sass/values.h
- /gulp-sass-helper/node_modules/node-sass/src/libsass/test/test_subset_map.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/sass2scss.cpp
- /gulp-sass-helper/node_modules/node-sass/src/sass_types/null.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/ast.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/include/sass/context.h
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/to_c.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/to_value.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/color_maps.hpp
- /gulp-sass-helper/node_modules/node-sass/src/sass_context_wrapper.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/script/test-leaks.pl
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/lexer.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/memory/SharedPtr.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/to_c.hpp
- /gulp-sass-helper/node_modules/node-sass/src/sass_types/map.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/to_value.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/b64/encode.h
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/file.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/environment.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/plugins.hpp
- /gulp-sass-helper/node_modules/node-sass/src/binding.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/sass_context.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/debug.hpp
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In LibSass prior to 3.5.5, functions inside ast.cpp for IMPLEMENT_AST_OPERATORS expansion allow attackers to cause a denial-of-service resulting from stack consumption via a crafted sass file, as demonstrated by recursive calls involving clone(), cloneChildren(), and copy().
<p>Publish Date: 2018-12-04
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-19838>CVE-2018-19838</a></p>
</p>
</details>
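As a toy illustration of the failure mode described in the vulnerability details above (this is a minimal Python model, not the actual LibSass C++ code): a clone routine that recurses once per nesting level consumes one stack frame per level, so a crafted, deeply nested input can exhaust the stack.

```python
import sys

# Toy model of the issue described above (illustrative only, not LibSass
# code): clone() recurses once per nesting level, so each level of a
# crafted input costs one stack frame.
class Node:
    def __init__(self, child=None):
        self.child = child

    def clone(self):
        # Recursive deep copy: one stack frame per nesting level.
        return Node(self.child.clone() if self.child is not None else None)

def deep_tree(depth):
    # Build the tree iteratively so only clone() is depth-limited.
    node = None
    for _ in range(depth):
        node = Node(node)
    return node

tree = deep_tree(sys.getrecursionlimit() + 100)
try:
    tree.clone()
    result = "cloned"
except RecursionError:
    result = "stack exhausted"

print(result)  # stack exhausted
```

The fixed LibSass version bounds this by changing how the `IMPLEMENT_AST_OPERATORS` clone helpers traverse the tree; the sketch only shows why unbounded recursion depth is attacker-controlled here.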
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: Required
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: None
  - Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-19838">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-19838</a></p>
<p>Fix Resolution: 3.5.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2018-19838 Medium Severity Vulnerability detected by WhiteSource - ## CVE-2018-19838 - Medium Severity Vulnerability
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-sass-v4.11.0</b></p></summary>
<p>
<p>:rainbow: Node.js bindings to libsass</p>
<p>Library home page: <a href=https://github.com/sass/node-sass.git>https://github.com/sass/node-sass.git</a></p>
</p>
</details>
</p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/vulnerability_details.png' width=19 height=20> Library Source Files (125)</summary>
<p></p>
<p> * The source files were matched to this source library based on a best effort match. Source libraries are selected from a list of probable public libraries.</p>
<p>
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/expand.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/color_maps.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/sass_util.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/utf8/unchecked.h
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/output.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/sass_values.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/util.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/emitter.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/lexer.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/test/test_node.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/plugins.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/include/sass/base.h
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/position.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/subset_map.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/operation.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/remove_placeholders.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/error_handling.hpp
- /gulp-sass-helper/node_modules/node-sass/src/custom_importer_bridge.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/contrib/plugin.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/functions.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/test/test_superselector.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/eval.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/utf8_string.hpp
- /gulp-sass-helper/node_modules/node-sass/src/sass_context_wrapper.h
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/error_handling.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/node.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/parser.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/subset_map.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/emitter.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/listize.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/ast.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/sass_functions.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/memory/SharedPtr.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/output.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/check_nesting.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/ast_def_macros.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/functions.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/cssize.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/prelexer.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/paths.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/ast_fwd_decl.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/inspect.hpp
- /gulp-sass-helper/node_modules/node-sass/src/sass_types/color.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/test/test_unification.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/values.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/sass_util.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/source_map.hpp
- /gulp-sass-helper/node_modules/node-sass/src/sass_types/list.h
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/check_nesting.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/json.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/units.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/units.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/context.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/utf8/checked.h
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/listize.hpp
- /gulp-sass-helper/node_modules/node-sass/src/sass_types/string.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/prelexer.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/context.hpp
- /gulp-sass-helper/node_modules/node-sass/src/sass_types/boolean.h
- /gulp-sass-helper/node_modules/node-sass/src/libsass/include/sass2scss.h
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/eval.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/expand.cpp
- /gulp-sass-helper/node_modules/node-sass/src/sass_types/factory.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/operators.cpp
- /gulp-sass-helper/node_modules/node-sass/src/sass_types/boolean.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/source_map.cpp
- /gulp-sass-helper/node_modules/node-sass/src/sass_types/value.h
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/utf8_string.cpp
- /gulp-sass-helper/node_modules/node-sass/src/callback_bridge.h
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/file.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/sass.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/node.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/environment.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/extend.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/sass_context.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/operators.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/constants.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/sass.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/ast_fwd_decl.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/parser.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/constants.cpp
- /gulp-sass-helper/node_modules/node-sass/src/sass_types/list.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/cssize.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/include/sass/functions.h
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/util.cpp
- /gulp-sass-helper/node_modules/node-sass/src/custom_function_bridge.cpp
- /gulp-sass-helper/node_modules/node-sass/src/custom_importer_bridge.h
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/bind.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/inspect.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/sass_functions.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/backtrace.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/extend.cpp
- /gulp-sass-helper/node_modules/node-sass/src/sass_types/sass_value_wrapper.h
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/debugger.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/cencode.c
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/base64vlq.cpp
- /gulp-sass-helper/node_modules/node-sass/src/sass_types/number.cpp
- /gulp-sass-helper/node_modules/node-sass/src/sass_types/color.h
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/c99func.c
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/position.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/remove_placeholders.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/sass_values.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/include/sass/values.h
- /gulp-sass-helper/node_modules/node-sass/src/libsass/test/test_subset_map.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/sass2scss.cpp
- /gulp-sass-helper/node_modules/node-sass/src/sass_types/null.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/ast.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/include/sass/context.h
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/to_c.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/to_value.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/color_maps.hpp
- /gulp-sass-helper/node_modules/node-sass/src/sass_context_wrapper.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/script/test-leaks.pl
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/lexer.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/memory/SharedPtr.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/to_c.hpp
- /gulp-sass-helper/node_modules/node-sass/src/sass_types/map.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/to_value.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/b64/encode.h
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/file.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/environment.hpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/plugins.hpp
- /gulp-sass-helper/node_modules/node-sass/src/binding.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/sass_context.cpp
- /gulp-sass-helper/node_modules/node-sass/src/libsass/src/debug.hpp
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In LibSass prior to 3.5.5, functions inside ast.cpp for IMPLEMENT_AST_OPERATORS expansion allow attackers to cause a denial-of-service resulting from stack consumption via a crafted sass file, as demonstrated by recursive calls involving clone(), cloneChildren(), and copy().
<p>Publish Date: 2018-12-04
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-19838>CVE-2018-19838</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: Required
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: None
  - Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-19838">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-19838</a></p>
<p>Fix Resolution: 3.5.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_usab | cve medium severity vulnerability detected by whitesource cve medium severity vulnerability vulnerable library node rainbow node js bindings to libsass library home page a href library source files the source files were matched to this source library based on a best effort match source libraries are selected from a list of probable public libraries gulp sass helper node modules node sass src libsass src expand hpp gulp sass helper node modules node sass src libsass src color maps cpp gulp sass helper node modules node sass src libsass src sass util hpp gulp sass helper node modules node sass src libsass src unchecked h gulp sass helper node modules node sass src libsass src output hpp gulp sass helper node modules node sass src libsass src sass values hpp gulp sass helper node modules node sass src libsass src util hpp gulp sass helper node modules node sass src libsass src emitter hpp gulp sass helper node modules node sass src libsass src lexer cpp gulp sass helper node modules node sass src libsass test test node cpp gulp sass helper node modules node sass src libsass src plugins cpp gulp sass helper node modules node sass src libsass include sass base h gulp sass helper node modules node sass src libsass src position hpp gulp sass helper node modules node sass src libsass src subset map hpp gulp sass helper node modules node sass src libsass src operation hpp gulp sass helper node modules node sass src libsass src remove placeholders cpp gulp sass helper node modules node sass src libsass src error handling hpp gulp sass helper node modules node sass src custom importer bridge cpp gulp sass helper node modules node sass src libsass contrib plugin cpp gulp sass helper node modules node sass src libsass src functions hpp gulp sass helper node modules node sass src libsass test test superselector cpp gulp sass helper node 
modules node sass src libsass src eval hpp gulp sass helper node modules node sass src libsass src string hpp gulp sass helper node modules node sass src sass context wrapper h gulp sass helper node modules node sass src libsass src error handling cpp gulp sass helper node modules node sass src libsass src node cpp gulp sass helper node modules node sass src libsass src parser cpp gulp sass helper node modules node sass src libsass src subset map cpp gulp sass helper node modules node sass src libsass src emitter cpp gulp sass helper node modules node sass src libsass src listize cpp gulp sass helper node modules node sass src libsass src ast hpp gulp sass helper node modules node sass src libsass src sass functions hpp gulp sass helper node modules node sass src libsass src memory sharedptr cpp gulp sass helper node modules node sass src libsass src output cpp gulp sass helper node modules node sass src libsass src check nesting cpp gulp sass helper node modules node sass src libsass src ast def macros hpp gulp sass helper node modules node sass src libsass src functions cpp gulp sass helper node modules node sass src libsass src cssize hpp gulp sass helper node modules node sass src libsass src prelexer cpp gulp sass helper node modules node sass src libsass src paths hpp gulp sass helper node modules node sass src libsass src ast fwd decl hpp gulp sass helper node modules node sass src libsass src inspect hpp gulp sass helper node modules node sass src sass types color cpp gulp sass helper node modules node sass src libsass test test unification cpp gulp sass helper node modules node sass src libsass src values cpp gulp sass helper node modules node sass src libsass src sass util cpp gulp sass helper node modules node sass src libsass src source map hpp gulp sass helper node modules node sass src sass types list h gulp sass helper node modules node sass src libsass src check nesting hpp gulp sass helper node modules node sass src libsass src json cpp gulp sass 
helper node modules node sass src libsass src units cpp gulp sass helper node modules node sass src libsass src units hpp gulp sass helper node modules node sass src libsass src context cpp gulp sass helper node modules node sass src libsass src checked h gulp sass helper node modules node sass src libsass src listize hpp gulp sass helper node modules node sass src sass types string cpp gulp sass helper node modules node sass src libsass src prelexer hpp gulp sass helper node modules node sass src libsass src context hpp gulp sass helper node modules node sass src sass types boolean h gulp sass helper node modules node sass src libsass include h gulp sass helper node modules node sass src libsass src eval cpp gulp sass helper node modules node sass src libsass src expand cpp gulp sass helper node modules node sass src sass types factory cpp gulp sass helper node modules node sass src libsass src operators cpp gulp sass helper node modules node sass src sass types boolean cpp gulp sass helper node modules node sass src libsass src source map cpp gulp sass helper node modules node sass src sass types value h gulp sass helper node modules node sass src libsass src string cpp gulp sass helper node modules node sass src callback bridge h gulp sass helper node modules node sass src libsass src file cpp gulp sass helper node modules node sass src libsass src sass cpp gulp sass helper node modules node sass src libsass src node hpp gulp sass helper node modules node sass src libsass src environment cpp gulp sass helper node modules node sass src libsass src extend hpp gulp sass helper node modules node sass src libsass src sass context hpp gulp sass helper node modules node sass src libsass src operators hpp gulp sass helper node modules node sass src libsass src constants hpp gulp sass helper node modules node sass src libsass src sass hpp gulp sass helper node modules node sass src libsass src ast fwd decl cpp gulp sass helper node modules node sass src libsass src 
parser hpp gulp sass helper node modules node sass src libsass src constants cpp gulp sass helper node modules node sass src sass types list cpp gulp sass helper node modules node sass src libsass src cssize cpp gulp sass helper node modules node sass src libsass include sass functions h gulp sass helper node modules node sass src libsass src util cpp gulp sass helper node modules node sass src custom function bridge cpp gulp sass helper node modules node sass src custom importer bridge h gulp sass helper node modules node sass src libsass src bind cpp gulp sass helper node modules node sass src libsass src inspect cpp gulp sass helper node modules node sass src libsass src sass functions cpp gulp sass helper node modules node sass src libsass src backtrace cpp gulp sass helper node modules node sass src libsass src extend cpp gulp sass helper node modules node sass src sass types sass value wrapper h gulp sass helper node modules node sass src libsass src debugger hpp gulp sass helper node modules node sass src libsass src cencode c gulp sass helper node modules node sass src libsass src cpp gulp sass helper node modules node sass src sass types number cpp gulp sass helper node modules node sass src sass types color h gulp sass helper node modules node sass src libsass src c gulp sass helper node modules node sass src libsass src position cpp gulp sass helper node modules node sass src libsass src remove placeholders hpp gulp sass helper node modules node sass src libsass src sass values cpp gulp sass helper node modules node sass src libsass include sass values h gulp sass helper node modules node sass src libsass test test subset map cpp gulp sass helper node modules node sass src libsass src cpp gulp sass helper node modules node sass src sass types null cpp gulp sass helper node modules node sass src libsass src ast cpp gulp sass helper node modules node sass src libsass include sass context h gulp sass helper node modules node sass src libsass src to c cpp 
gulp sass helper node modules node sass src libsass src to value hpp gulp sass helper node modules node sass src libsass src color maps hpp gulp sass helper node modules node sass src sass context wrapper cpp gulp sass helper node modules node sass src libsass script test leaks pl gulp sass helper node modules node sass src libsass src lexer hpp gulp sass helper node modules node sass src libsass src memory sharedptr hpp gulp sass helper node modules node sass src libsass src to c hpp gulp sass helper node modules node sass src sass types map cpp gulp sass helper node modules node sass src libsass src to value cpp gulp sass helper node modules node sass src libsass src encode h gulp sass helper node modules node sass src libsass src file hpp gulp sass helper node modules node sass src libsass src environment hpp gulp sass helper node modules node sass src libsass src plugins hpp gulp sass helper node modules node sass src binding cpp gulp sass helper node modules node sass src libsass src sass context cpp gulp sass helper node modules node sass src libsass src debug hpp vulnerability details in libsass prior to functions inside ast cpp for implement ast operators expansion allow attackers to cause a denial of service resulting from stack consumption via a crafted sass file as demonstrated by recursive calls involving clone clonechildren and copy publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href fix resolution step up your open source security game with whitesource | 0 |
12,236 | 7,759,422,132 | IssuesEvent | 2018-05-31 23:26:48 | OctopusDeploy/Issues | https://api.github.com/repos/OctopusDeploy/Issues | opened | Password managers auto fill sensitive variables | area/usability kind/bug | When filling in the value for sensitive variables password managers can automatically populate the sensitive field (usually with the user's Octopus password).
Reported by DealerSocket.
See https://lastpass.com/support.php?cmd=showfaq&id=10512
Related: #4450 ? | True | Password managers auto fill sensitive variables - When filling in the value for sensitive variables password managers can automatically populate the sensitive field (usually with the user's Octopus password).
Reported by DealerSocket.
See https://lastpass.com/support.php?cmd=showfaq&id=10512
Related: #4450 ? | usab | password managers auto fill sensitive variables when filling in the value for sensitive variables password managers can automatically populate the sensitive field usually with the user s octopus password reported by dealersocket see related | 1 |
443,084 | 12,759,427,257 | IssuesEvent | 2020-06-29 05:48:23 | apcountryman/toolchain-avr-gcc | https://api.github.com/repos/apcountryman/toolchain-avr-gcc | opened | Flash programming target fails | priority-normal status-awaiting_traige type-bug | The `foo-program-flash` target generated by `add_avrdude_programming_targets()` fails. This appears to be due to semicolons being present in the arguments to `avrdude`. | 1.0 | Flash programming target fails - The `foo-program-flash` target generated by `add_avrdude_programming_targets()` fails. This appears to be due to semicolons being present in the arguments to `avrdude`. | non_usab | flash programming target fails the foo program flash target generated by add avrdude programming targets fails this appears to be due to semicolons being present in the arguments to avrdude | 0 |
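A plausible mechanism for the semicolon report above (the flags and file names below are hypothetical; the actual toolchain code is not shown here) is CMake's list representation: CMake stores lists as `;`-separated strings, and collapsing such a list into one quoted argument hands the tool a single argv entry with embedded semicolons. A small Python sketch of the difference:

```python
# CMake stores lists as ";"-separated strings. If a generator passes such a
# list to a COMMAND as one quoted argument, the invoked tool receives a
# single argv entry with embedded semicolons instead of separate options.
# The flags below are hypothetical examples, not the toolchain's real values.
cmake_list = "-p;m328p;-c;usbasp;-U;flash:w:firmware.hex:i"

# Broken: the whole list collapses into one argument, semicolons intact.
broken_argv = ["avrdude", cmake_list]

# Fixed: expand the list so each element becomes its own argument.
fixed_argv = ["avrdude"] + cmake_list.split(";")

print(broken_argv[1])   # -p;m328p;-c;usbasp;-U;flash:w:firmware.hex:i
print(fixed_argv[1:3])  # ['-p', 'm328p']
```

In CMake itself the usual fix is to pass the list variable unquoted in the `COMMAND` (so each element expands to its own argument) rather than as one quoted string.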
593,386 | 17,988,783,639 | IssuesEvent | 2021-09-15 01:18:23 | hashicorp/flight | https://api.github.com/repos/hashicorp/flight | opened | Investigate linting in GitHub Actions CI | priority: low triage | Not urgent but just noting as something nice to have
- Is linting running as expected? Didn't seem to be failing this week on weird .hbs formatting
- Add Prettier?
- Look into pre-commit hooks, Husky? | 1.0 | Investigate linting in GitHub Actions CI - Not urgent but just noting as something nice to have
- Is linting running as expected? Didn't seem to be failing this week on weird .hbs formatting
- Add Prettier?
- Look into pre-commit hooks, Husky? | non_usab | investigate linting in github actions ci not urgent but just noting as something nice to have is linting running as expected didn t seem to be failing this week on weird hbs formatting add prettier look into pre commit hooks husky | 0 |
182,827 | 30,989,698,931 | IssuesEvent | 2023-08-09 02:49:00 | appsmithorg/appsmith | https://api.github.com/repos/appsmithorg/appsmith | opened | Support headings props to the Menu Component. | Design System Pod | I noticed that the menu component doesn't have the ability to support a heading as a prop. This could potentially limit its usefulness in various scenarios.


Design
 | 1.0 | Support headings props to the Menu Component. - I noticed that the menu component doesn't have the ability to support a heading as a prop. This could potentially limit its usefulness in various scenarios.


Design
 | non_usab | support headings props to the menu component i noticed that the menu component doesn t have the ability to support a heading as a prop this could potentially limit its usefulness in various scenarios design | 0 |
4,193 | 3,758,180,813 | IssuesEvent | 2016-03-14 07:20:13 | gama-platform/gama | https://api.github.com/repos/gama-platform/gama | closed | Front java2D displays are not updated until they are resized/activated | > Bug Affects Usability Concerns Interface OS Windows | When running a simulation, if a display is java2D and is focused from the beginning (ex : one unique display java2d), the panel containing the display need to be resized to be updated. | True | Front java2D displays are not updated until they are resized/activated - When running a simulation, if a display is java2D and is focused from the beginning (ex : one unique display java2d), the panel containing the display need to be resized to be updated. | usab | front displays are not updated until they are resized activated when running a simulation if a display is and is focused from the beginning ex one unique display the panel containing the display need to be resized to be updated | 1 |
48,211 | 10,222,445,155 | IssuesEvent | 2019-08-16 06:41:05 | ballerina-platform/ballerina-lang | https://api.github.com/repos/ballerina-platform/ballerina-lang | opened | Introduce Ballerina sub commands for Lang server & Debugger | Area/Tooling Component/Debugger Component/IntellijPlugin Component/LanguageServer Component/VScodePlugin Type/Task | This is to get rid of launcher scripts for LangServer and Debugger | 1.0 | Introduce Ballerina sub commands for Lang server & Debugger - This is to get rid of launcher scripts for LangServer and Debugger | non_usab | introduce ballerina sub commands for lang server debugger this is to get rid of launcher scripts for langserver and debugger | 0 |
17,609 | 12,194,760,828 | IssuesEvent | 2020-04-29 16:17:05 | zeebe-io/zeebe | https://api.github.com/repos/zeebe-io/zeebe | opened | Endless loop on job activation if job does not fit in max message size | Impact: Availability Impact: Usability Scope: broker Severity: High Type: Bug | **Describe the bug**
When activating the oldest job in the job state would result in a `JobBatchRecord` which is too large to fit in the max message size, we currently end up in an endless loop - the gateway sees that `truncated` property of the response is true, and will retry that partition over and over again, but will never get anything while the job exists, as jobs are deterministically ordered.
Not only does this brick a gateway, but it's also not obvious what is going on at all, making this hard to react to.
**To Reproduce**
<details><summary>Integration Test</summary>
```java
package io.zeebe.broker.it.network;
import static org.assertj.core.api.Assertions.assertThat;
import com.fasterxml.jackson.databind.ObjectMapper;
import io.zeebe.broker.it.util.GrpcClientRule;
import io.zeebe.broker.test.EmbeddedBrokerRule;
import io.zeebe.model.bpmn.Bpmn;
import io.zeebe.model.bpmn.BpmnModelInstance;
import io.zeebe.test.util.BrokerClassRuleHelper;
import java.util.Map;
import java.util.concurrent.TimeUnit;
import org.junit.Before;
import org.junit.ClassRule;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.RuleChain;
import org.springframework.util.unit.DataSize;
public final class SmallMessageSizeTest {
private static final ObjectMapper MAPPER = new ObjectMapper();
private static final DataSize MAX_MESSAGE_SIZE = DataSize.ofKilobytes(4);
private static final String LARGE_TEXT = "x".repeat((int) (MAX_MESSAGE_SIZE.toBytes() / 4));
private static final EmbeddedBrokerRule BROKER_RULE =
new EmbeddedBrokerRule(b -> b.getNetwork().setMaxMessageSize(MAX_MESSAGE_SIZE));
private static final GrpcClientRule CLIENT_RULE = new GrpcClientRule(BROKER_RULE);
@ClassRule
public static RuleChain ruleChain = RuleChain.outerRule(BROKER_RULE).around(CLIENT_RULE);
@Rule public final BrokerClassRuleHelper helper = new BrokerClassRuleHelper();
private String jobType;
private static BpmnModelInstance workflow(final String jobType) {
return Bpmn.createExecutableProcess("process")
.startEvent()
.serviceTask("task", t -> t.zeebeJobType(jobType))
.endEvent()
.done();
}
@Before
public void init() {
jobType = helper.getJobType();
}
@Test
public void shouldDeployLargeWorkflow() {
// given
final var workflowKey = CLIENT_RULE.deployWorkflow(workflow(jobType));
final var workflowInstanceKey = CLIENT_RULE.createWorkflowInstance(workflowKey);
// when
for (int i = 0; i < 4; i++) {
CLIENT_RULE
.getClient()
.newSetVariablesCommand(workflowInstanceKey)
.variables(Map.of(String.valueOf(i), LARGE_TEXT))
.send()
.join();
}
// then
final var response =
CLIENT_RULE
.getClient()
.newActivateJobsCommand()
.jobType(jobType)
.maxJobsToActivate(1)
.send()
.join(10, TimeUnit.SECONDS);
assertThat(response.getJobs()).hasSize(1);
}
}
```
</details>
**Expected behavior**
Busting the max message size should not result in an inability to use a partition until the workflow is terminated. In an ideal world, if the job is too large to fit by itself, we could mark it as failed and raise an incident, that way we can keep processing other jobs. I would also be OK with a solution where the issue is clear and easy to diagnose so users can increase the max message size.
**Environment:**
- OS: Fedora 31
- Zeebe Version: 0.24.0-SNAPSHOT
- Configuration: max message size 4kb
| True | Endless loop on job activation if job does not fit in max message size - **Describe the bug**
When activating the oldest job in the job state would result in a `JobBatchRecord` which is too large to fit in the max message size, we currently end up in an endless loop - the gateway sees that `truncated` property of the response is true, and will retry that partition over and over again, but will never get anything while the job exists, as jobs are deterministically ordered.
Not only does this brick a gateway, but it's also not obvious what is going on at all, making this hard to react to.
**To Reproduce**
<details><summary>Integration Test</summary>
```java
package io.zeebe.broker.it.network;
import static org.assertj.core.api.Assertions.assertThat;
import com.fasterxml.jackson.databind.ObjectMapper;
import io.zeebe.broker.it.util.GrpcClientRule;
import io.zeebe.broker.test.EmbeddedBrokerRule;
import io.zeebe.model.bpmn.Bpmn;
import io.zeebe.model.bpmn.BpmnModelInstance;
import io.zeebe.test.util.BrokerClassRuleHelper;
import java.util.Map;
import java.util.concurrent.TimeUnit;
import org.junit.Before;
import org.junit.ClassRule;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.RuleChain;
import org.springframework.util.unit.DataSize;
public final class SmallMessageSizeTest {
private static final ObjectMapper MAPPER = new ObjectMapper();
private static final DataSize MAX_MESSAGE_SIZE = DataSize.ofKilobytes(4);
private static final String LARGE_TEXT = "x".repeat((int) (MAX_MESSAGE_SIZE.toBytes() / 4));
private static final EmbeddedBrokerRule BROKER_RULE =
new EmbeddedBrokerRule(b -> b.getNetwork().setMaxMessageSize(MAX_MESSAGE_SIZE));
private static final GrpcClientRule CLIENT_RULE = new GrpcClientRule(BROKER_RULE);
@ClassRule
public static RuleChain ruleChain = RuleChain.outerRule(BROKER_RULE).around(CLIENT_RULE);
@Rule public final BrokerClassRuleHelper helper = new BrokerClassRuleHelper();
private String jobType;
private static BpmnModelInstance workflow(final String jobType) {
return Bpmn.createExecutableProcess("process")
.startEvent()
.serviceTask("task", t -> t.zeebeJobType(jobType))
.endEvent()
.done();
}
@Before
public void init() {
jobType = helper.getJobType();
}
@Test
public void shouldDeployLargeWorkflow() {
// given
final var workflowKey = CLIENT_RULE.deployWorkflow(workflow(jobType));
final var workflowInstanceKey = CLIENT_RULE.createWorkflowInstance(workflowKey);
// when
for (int i = 0; i < 4; i++) {
CLIENT_RULE
.getClient()
.newSetVariablesCommand(workflowInstanceKey)
.variables(Map.of(String.valueOf(i), LARGE_TEXT))
.send()
.join();
}
// then
final var response =
CLIENT_RULE
.getClient()
.newActivateJobsCommand()
.jobType(jobType)
.maxJobsToActivate(1)
.send()
.join(10, TimeUnit.SECONDS);
assertThat(response.getJobs()).hasSize(1);
}
}
```
</details>
**Expected behavior**
Busting the max message size should not result in an inability to use a partition until the workflow is terminated. In an ideal world, if the job is too large to fit by itself, we could mark it as failed and raise an incident, that way we can keep processing other jobs. I would also be OK with a solution where the issue is clear and easy to diagnose so users can increase the max message size.
**Environment:**
- OS: Fedora 31
- Zeebe Version: 0.24.0-SNAPSHOT
- Configuration: max message size 4kb
| usab | endless loop on job activation if job does not fit in max message size describe the bug when activating the oldest job in the job state would result in a jobbatchrecord which is too large to fit in the max message size we currently end up in an endless loop the gateway sees that truncated property of the response is true and will retry that partition over and over again but will never get anything while the job exists as jobs are deterministically ordered not only does this brick a gateway but it s also not obvious what is going on at all making this hard to react to to reproduce integration test java package io zeebe broker it network import static org assertj core api assertions assertthat import com fasterxml jackson databind objectmapper import io zeebe broker it util grpcclientrule import io zeebe broker test embeddedbrokerrule import io zeebe model bpmn bpmn import io zeebe model bpmn bpmnmodelinstance import io zeebe test util brokerclassrulehelper import java util map import java util concurrent timeunit import org junit before import org junit classrule import org junit rule import org junit test import org junit rules rulechain import org springframework util unit datasize public final class smallmessagesizetest private static final objectmapper mapper new objectmapper private static final datasize max message size datasize ofkilobytes private static final string large text x repeat int max message size tobytes private static final embeddedbrokerrule broker rule new embeddedbrokerrule b b getnetwork setmaxmessagesize max message size private static final grpcclientrule client rule new grpcclientrule broker rule classrule public static rulechain rulechain rulechain outerrule broker rule around client rule rule public final brokerclassrulehelper helper new brokerclassrulehelper private string jobtype private static bpmnmodelinstance workflow final string jobtype return bpmn createexecutableprocess process startevent servicetask task t t zeebejobtype jobtype endevent done before public void init jobtype helper getjobtype test public void shoulddeploylargeworkflow given final var workflowkey client rule deployworkflow workflow jobtype final var workflowinstancekey client rule createworkflowinstance workflowkey when for int i i i client rule getclient newsetvariablescommand workflowinstancekey variables map of string valueof i large text send join then final var response client rule getclient newactivatejobscommand jobtype jobtype maxjobstoactivate send join timeunit seconds assertthat response getjobs hassize expected behavior busting the max message size should not result in an inability to use a partition until the workflow is terminated in an ideal world if the job is too large to fit by itself we could mark it as failed and raise an incident that way we can keep processing other jobs i would also be ok with a solution where the issue is clear and easy to diagnose so users can increase the max message size environment os fedora zeebe version snapshot configuration max message size | 1
5,141 | 3,900,509,945 | IssuesEvent | 2016-04-18 06:32:49 | varshanihanth/issues | https://api.github.com/repos/varshanihanth/issues | closed | QA_Homology Modelling_List of experiments_p2 | Category: Usability Developed By: VLEAD Release Number: Production Severity: S2 Status: Open | Defect Description :
In the Simulation page of Homology Modelling experiment, the back to experiments link is not present where the back to experiments link should be there in order to view the list of experiments
Actual Result :
In the Simulation page of Homology Modelling experiment , the back to experiments link is not present
Environment :
OS: Windows 7, Ubuntu-16.04,Centos-6
Browsers: Firefox-42.0,Chrome-47.0,chromium-45.0
Bandwidth : 100Mbps
Hardware Configuration:8GBRAM ,
Processor:i5
Test Step Link:
https://github.com/Virtual-Labs/protein-engg-iitb/blob/master/test-cases/integration_test-cases/Homology%20Modelling/Homology%20Modelling_14_List%20of%20experiments_p2.org | True | QA_Homology Modelling_List of experiments_p2 - Defect Description :
In the Simulation page of Homology Modelling experiment, the back to experiments link is not present where the back to experiments link should be there in order to view the list of experiments
Actual Result :
In the Simulation page of Homology Modelling experiment , the back to experiments link is not present
Environment :
OS: Windows 7, Ubuntu-16.04,Centos-6
Browsers: Firefox-42.0,Chrome-47.0,chromium-45.0
Bandwidth : 100Mbps
Hardware Configuration:8GBRAM ,
Processor:i5
Test Step Link:
https://github.com/Virtual-Labs/protein-engg-iitb/blob/master/test-cases/integration_test-cases/Homology%20Modelling/Homology%20Modelling_14_List%20of%20experiments_p2.org | usab | qa homology modelling list of experiments defect description in the simulation page of homology modelling experiment the back to experiments link is not present where the back to experiments link should be there inorder to view the list of experiments actual result in the simulation page of homology modelling experiment the back to experiments link is not present environment os windows ubuntu centos browsers firefox chrome chromium bandwidth hardware configuration processor test step link | 1 |
178,031 | 21,509,259,506 | IssuesEvent | 2022-04-28 01:21:47 | hiucimon/PF2Client | https://api.github.com/repos/hiucimon/PF2Client | opened | CVE-2022-29078 (High) detected in ejs-2.6.1.tgz | security vulnerability | ## CVE-2022-29078 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ejs-2.6.1.tgz</b></p></summary>
<p>Embedded JavaScript templates</p>
<p>Library home page: <a href="https://registry.npmjs.org/ejs/-/ejs-2.6.1.tgz">https://registry.npmjs.org/ejs/-/ejs-2.6.1.tgz</a></p>
<p>Path to dependency file: /PF2Client/package.json</p>
<p>Path to vulnerable library: /node_modules/ejs/package.json</p>
<p>
Dependency Hierarchy:
- build-angular-0.6.8.tgz (Root Library)
- license-webpack-plugin-1.4.0.tgz
- :x: **ejs-2.6.1.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The ejs (aka Embedded JavaScript templates) package 3.1.6 for Node.js allows server-side template injection in settings[view options][outputFunctionName]. This is parsed as an internal option, and overwrites the outputFunctionName option with an arbitrary OS command (which is executed upon template compilation).
<p>Publish Date: 2022-04-25
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-29078>CVE-2022-29078</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29078~">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29078~</a></p>
<p>Release Date: 2022-04-25</p>
<p>Fix Resolution: ejs - v3.1.7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-29078 (High) detected in ejs-2.6.1.tgz - ## CVE-2022-29078 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ejs-2.6.1.tgz</b></p></summary>
<p>Embedded JavaScript templates</p>
<p>Library home page: <a href="https://registry.npmjs.org/ejs/-/ejs-2.6.1.tgz">https://registry.npmjs.org/ejs/-/ejs-2.6.1.tgz</a></p>
<p>Path to dependency file: /PF2Client/package.json</p>
<p>Path to vulnerable library: /node_modules/ejs/package.json</p>
<p>
Dependency Hierarchy:
- build-angular-0.6.8.tgz (Root Library)
- license-webpack-plugin-1.4.0.tgz
- :x: **ejs-2.6.1.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The ejs (aka Embedded JavaScript templates) package 3.1.6 for Node.js allows server-side template injection in settings[view options][outputFunctionName]. This is parsed as an internal option, and overwrites the outputFunctionName option with an arbitrary OS command (which is executed upon template compilation).
<p>Publish Date: 2022-04-25
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-29078>CVE-2022-29078</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29078~">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29078~</a></p>
<p>Release Date: 2022-04-25</p>
<p>Fix Resolution: ejs - v3.1.7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_usab | cve high detected in ejs tgz cve high severity vulnerability vulnerable library ejs tgz embedded javascript templates library home page a href path to dependency file package json path to vulnerable library node modules ejs package json dependency hierarchy build angular tgz root library license webpack plugin tgz x ejs tgz vulnerable library vulnerability details the ejs aka embedded javascript templates package for node js allows server side template injection in settings this is parsed as an internal option and overwrites the outputfunctionname option with an arbitrary os command which is executed upon template compilation publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution ejs step up your open source security game with whitesource | 0 |
55,456 | 6,477,006,009 | IssuesEvent | 2017-08-18 01:12:50 | mapbox/mapbox-gl-js | https://api.github.com/repos/mapbox/mapbox-gl-js | closed | Lint for features unsupported by Node v4 in cross-platform testing code | cross-platform testing | The modules in `test/integration` directory are used to run query, render, and (soon) expression integration tests for both GL JS and GL Native. Since GL Native [supports Node v4](https://github.com/mapbox/mapbox-gl-native/blob/master/package.json#L33), we should lint these sources for ES2015 features that aren't supported in node 4. | 1.0 | Lint for features unsupported by Node v4 in cross-platform testing code - The modules in `test/integration` directory are used to run query, render, and (soon) expression integration tests for both GL JS and GL Native. Since GL Native [supports Node v4](https://github.com/mapbox/mapbox-gl-native/blob/master/package.json#L33), we should lint these sources for ES2015 features that aren't supported in node 4. | non_usab | lint for features unsupported by node in cross platform testing code the modules in test integration directory are used to run query render and soon expression integration tests for both gl js and gl native since gl native we should lint these sources for features that aren t supported in node | 0 |
173,611 | 21,176,965,452 | IssuesEvent | 2022-04-08 01:41:46 | wahwihwuh/book-app | https://api.github.com/repos/wahwihwuh/book-app | opened | CVE-2020-28469 (High) detected in glob-parent-3.1.0.tgz, glob-parent-5.1.1.tgz | security vulnerability | ## CVE-2020-28469 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>glob-parent-3.1.0.tgz</b>, <b>glob-parent-5.1.1.tgz</b></p></summary>
<p>
<details><summary><b>glob-parent-3.1.0.tgz</b></p></summary>
<p>Strips glob magic from a string to provide the parent directory path</p>
<p>Library home page: <a href="https://registry.npmjs.org/glob-parent/-/glob-parent-3.1.0.tgz">https://registry.npmjs.org/glob-parent/-/glob-parent-3.1.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/glob-parent/package.json</p>
<p>
Dependency Hierarchy:
- cli-7.11.6.tgz (Root Library)
- chokidar-2.1.8.tgz
- :x: **glob-parent-3.1.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>glob-parent-5.1.1.tgz</b></p></summary>
<p>Extract the non-magic parent path from a glob string.</p>
<p>Library home page: <a href="https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.1.tgz">https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.1.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/mocha/node_modules/glob-parent/package.json</p>
<p>
Dependency Hierarchy:
- mocha-8.1.3.tgz (Root Library)
- chokidar-3.4.2.tgz
- :x: **glob-parent-5.1.1.tgz** (Vulnerable Library)
</details>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package glob-parent before 5.1.2. The enclosure regex used to check for strings ending in enclosure containing path separator.
<p>Publish Date: 2021-06-03
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28469>CVE-2020-28469</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28469">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28469</a></p>
<p>Release Date: 2021-06-03</p>
<p>Fix Resolution (glob-parent): 5.1.2</p>
<p>Direct dependency fix Resolution (@babel/cli): 7.12.0</p><p>Fix Resolution (glob-parent): 5.1.2</p>
<p>Direct dependency fix Resolution (mocha): 8.2.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-28469 (High) detected in glob-parent-3.1.0.tgz, glob-parent-5.1.1.tgz - ## CVE-2020-28469 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>glob-parent-3.1.0.tgz</b>, <b>glob-parent-5.1.1.tgz</b></p></summary>
<p>
<details><summary><b>glob-parent-3.1.0.tgz</b></p></summary>
<p>Strips glob magic from a string to provide the parent directory path</p>
<p>Library home page: <a href="https://registry.npmjs.org/glob-parent/-/glob-parent-3.1.0.tgz">https://registry.npmjs.org/glob-parent/-/glob-parent-3.1.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/glob-parent/package.json</p>
<p>
Dependency Hierarchy:
- cli-7.11.6.tgz (Root Library)
- chokidar-2.1.8.tgz
- :x: **glob-parent-3.1.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>glob-parent-5.1.1.tgz</b></p></summary>
<p>Extract the non-magic parent path from a glob string.</p>
<p>Library home page: <a href="https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.1.tgz">https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.1.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/mocha/node_modules/glob-parent/package.json</p>
<p>
Dependency Hierarchy:
- mocha-8.1.3.tgz (Root Library)
- chokidar-3.4.2.tgz
- :x: **glob-parent-5.1.1.tgz** (Vulnerable Library)
</details>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package glob-parent before 5.1.2. The enclosure regex used to check for strings ending in enclosure containing path separator.
<p>Publish Date: 2021-06-03
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28469>CVE-2020-28469</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28469">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28469</a></p>
<p>Release Date: 2021-06-03</p>
<p>Fix Resolution (glob-parent): 5.1.2</p>
<p>Direct dependency fix Resolution (@babel/cli): 7.12.0</p><p>Fix Resolution (glob-parent): 5.1.2</p>
<p>Direct dependency fix Resolution (mocha): 8.2.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_usab | cve high detected in glob parent tgz glob parent tgz cve high severity vulnerability vulnerable libraries glob parent tgz glob parent tgz glob parent tgz strips glob magic from a string to provide the parent directory path library home page a href path to dependency file package json path to vulnerable library node modules glob parent package json dependency hierarchy cli tgz root library chokidar tgz x glob parent tgz vulnerable library glob parent tgz extract the non magic parent path from a glob string library home page a href path to dependency file package json path to vulnerable library node modules mocha node modules glob parent package json dependency hierarchy mocha tgz root library chokidar tgz x glob parent tgz vulnerable library vulnerability details this affects the package glob parent before the enclosure regex used to check for strings ending in enclosure containing path separator publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution glob parent direct dependency fix resolution babel cli fix resolution glob parent direct dependency fix resolution mocha step up your open source security game with whitesource | 0 |
895 | 2,657,317,310 | IssuesEvent | 2015-03-18 07:57:42 | sequelpro/sequelpro | https://api.github.com/repos/sequelpro/sequelpro | closed | Program crashes when canceling user creation | Bug Usability WaitingOnUser | Sequel Pro will crash if you attempt to create a new user and cancel. Steps to reproduce:
1. Open Users Panel
2. Click add button to create new user.
3. Close the Users Panel
Program will crash. I have been able to reproduce this a good 10 times. If you need more info let me know. Also the connection was via remote server under the ssh protocol. Here is part of the crash report:
```
OS Version: Mac OS X 10.9.2 (13C64)
```
```
Application Specific Information:
objc_msgSend() selector name: outlineView:isGroupItem:
Thread 0 Crashed:: Dispatch queue: com.apple.main-thread
0 libobjc.A.dylib 0x00007fff8ed85097 objc_msgSend + 23
1 com.apple.AppKit 0x00007fff87c3755c -[NSOutlineView _delegate_isGroupRow:] + 63
2 com.apple.AppKit 0x00007fff87c37503 -[NSTableView _isGroupRow:] + 77
3 com.apple.AppKit 0x00007fff87c69163 -[NSTableView _isSourceListGroupRow:] + 53
4 com.apple.AppKit 0x00007fff87c68b7e -[NSTableView rectOfRow:] + 288
5 com.apple.AppKit 0x00007fff87c74c3d -[NSTableView _highlightRectForRow:] + 41
6 com.apple.AppKit 0x00007fff87ec8090 -[NSTableView _highlightSourceListSelectionInRange:] + 141
7 com.apple.AppKit 0x00007fff87e6d840 -[NSTableView highlightSelectionInClipRect:] + 714
8 com.apple.AppKit 0x00007fff87d373cc -[NSTableView drawRect:] + 1367
9 com.apple.AppKit 0x00007fff87d1016f -[NSView _drawRect:clip:] + 3748
10 com.apple.AppKit 0x00007fff87d0e9e4 -[NSView _recursiveDisplayAllDirtyWithLockFocus:visRect:] + 1799
11 com.apple.AppKit 0x00007fff87d0edc0 -[NSView _recursiveDisplayAllDirtyWithLockFocus:visRect:] + 2787
12 com.apple.AppKit 0x00007fff87d0edc0 -[NSView _recursiveDisplayAllDirtyWithLockFocus:visRect:] + 2787
13 com.apple.AppKit 0x00007fff87d0c826 -[NSView _recursiveDisplayRectIfNeededIgnoringOpacity:isVisibleRect:rectIsVisibleRectForView:topView:] + 841
14 com.apple.AppKit 0x00007fff87d0dce4 -[NSView _recursiveDisplayRectIfNeededIgnoringOpacity:isVisibleRect:rectIsVisibleRectForView:topView:] + 6151
15 com.apple.AppKit 0x00007fff87d0dce4 -[NSView _recursiveDisplayRectIfNeededIgnoringOpacity:isVisibleRect:rectIsVisibleRectForView:topView:] + 6151
16 com.apple.AppKit 0x00007fff87d0dce4 -[NSView _recursiveDisplayRectIfNeededIgnoringOpacity:isVisibleRect:rectIsVisibleRectForView:topView:] + 6151
17 com.apple.AppKit 0x00007fff87d0dce4 -[NSView _recursiveDisplayRectIfNeededIgnoringOpacity:isVisibleRect:rectIsVisibleRectForView:topView:] + 6151
18 com.apple.AppKit 0x00007fff87d0bfd1 -[NSThemeFrame _recursiveDisplayRectIfNeededIgnoringOpacity:isVisibleRect:rectIsVisibleRectForView:topView:] + 314
19 com.apple.AppKit 0x00007fff87d08fbf -[NSView _displayRectIgnoringOpacity:isVisibleRect:rectIsVisibleRectForView:] + 2828
20 com.apple.AppKit 0x00007fff87ce842a -[NSView displayIfNeeded] + 1680
21 com.apple.AppKit 0x00007fff87d4d85e _handleWindowNeedsDisplayOrLayoutOrUpdateConstraints + 884
22 com.apple.AppKit 0x00007fff88323061 __83-[NSWindow _postWindowNeedsDisplayOrLayoutOrUpdateConstraintsUnlessPostingDisabled]_block_invoke1331 + 46
23 com.apple.CoreFoundation 0x00007fff862e3ee7 __CFRUNLOOP_IS_CALLING_OUT_TO_AN_OBSERVER_CALLBACK_FUNCTION__ + 23
24 com.apple.CoreFoundation 0x00007fff862e3e57 __CFRunLoopDoObservers + 391
25 com.apple.CoreFoundation 0x00007fff862d55f8 __CFRunLoopRun + 776
26 com.apple.CoreFoundation 0x00007fff862d50b5 CFRunLoopRunSpecific + 309
27 com.apple.HIToolbox 0x00007fff916e8a0d RunCurrentEventLoopInMode + 226
28 com.apple.HIToolbox 0x00007fff916e8685 ReceiveNextEventCommon + 173
29 com.apple.HIToolbox 0x00007fff916e85bc _BlockUntilNextEventMatchingListInModeWithFilter + 65
30 com.apple.AppKit 0x00007fff87bb13de _DPSNextEvent + 1434
31 com.apple.AppKit 0x00007fff87bb0a2b -[NSApplication nextEventMatchingMask:untilDate:inMode:dequeue:] + 122
32 com.apple.AppKit 0x00007fff87ba4b2c -[NSApplication run] + 553
33 com.apple.AppKit 0x00007fff87b8f913 NSApplicationMain + 940
34 com.sequelpro.SequelPro 0x0000000100002084 start + 52
``` | True | Program crashes when canceling user creation - Sequel Pro will crash if you attempt to create a new user and cancel. Steps to reproduce:
1. Open Users Panel
2. Click add button to create new user.
3. Close the Users Panel
Program will crash. I have been able to reproduce this a good 10 times. If you need more info let me know. Also the connection was via remote server under the ssh protocol. Here is part of the crash report:
```
OS Version: Mac OS X 10.9.2 (13C64)
```
```
Application Specific Information:
objc_msgSend() selector name: outlineView:isGroupItem:
Thread 0 Crashed:: Dispatch queue: com.apple.main-thread
0 libobjc.A.dylib 0x00007fff8ed85097 objc_msgSend + 23
1 com.apple.AppKit 0x00007fff87c3755c -[NSOutlineView _delegate_isGroupRow:] + 63
2 com.apple.AppKit 0x00007fff87c37503 -[NSTableView _isGroupRow:] + 77
3 com.apple.AppKit 0x00007fff87c69163 -[NSTableView _isSourceListGroupRow:] + 53
4 com.apple.AppKit 0x00007fff87c68b7e -[NSTableView rectOfRow:] + 288
5 com.apple.AppKit 0x00007fff87c74c3d -[NSTableView _highlightRectForRow:] + 41
6 com.apple.AppKit 0x00007fff87ec8090 -[NSTableView _highlightSourceListSelectionInRange:] + 141
7 com.apple.AppKit 0x00007fff87e6d840 -[NSTableView highlightSelectionInClipRect:] + 714
8 com.apple.AppKit 0x00007fff87d373cc -[NSTableView drawRect:] + 1367
9 com.apple.AppKit 0x00007fff87d1016f -[NSView _drawRect:clip:] + 3748
10 com.apple.AppKit 0x00007fff87d0e9e4 -[NSView _recursiveDisplayAllDirtyWithLockFocus:visRect:] + 1799
11 com.apple.AppKit 0x00007fff87d0edc0 -[NSView _recursiveDisplayAllDirtyWithLockFocus:visRect:] + 2787
12 com.apple.AppKit 0x00007fff87d0edc0 -[NSView _recursiveDisplayAllDirtyWithLockFocus:visRect:] + 2787
13 com.apple.AppKit 0x00007fff87d0c826 -[NSView _recursiveDisplayRectIfNeededIgnoringOpacity:isVisibleRect:rectIsVisibleRectForView:topView:] + 841
14 com.apple.AppKit 0x00007fff87d0dce4 -[NSView _recursiveDisplayRectIfNeededIgnoringOpacity:isVisibleRect:rectIsVisibleRectForView:topView:] + 6151
15 com.apple.AppKit 0x00007fff87d0dce4 -[NSView _recursiveDisplayRectIfNeededIgnoringOpacity:isVisibleRect:rectIsVisibleRectForView:topView:] + 6151
16 com.apple.AppKit 0x00007fff87d0dce4 -[NSView _recursiveDisplayRectIfNeededIgnoringOpacity:isVisibleRect:rectIsVisibleRectForView:topView:] + 6151
17 com.apple.AppKit 0x00007fff87d0dce4 -[NSView _recursiveDisplayRectIfNeededIgnoringOpacity:isVisibleRect:rectIsVisibleRectForView:topView:] + 6151
18 com.apple.AppKit 0x00007fff87d0bfd1 -[NSThemeFrame _recursiveDisplayRectIfNeededIgnoringOpacity:isVisibleRect:rectIsVisibleRectForView:topView:] + 314
19 com.apple.AppKit 0x00007fff87d08fbf -[NSView _displayRectIgnoringOpacity:isVisibleRect:rectIsVisibleRectForView:] + 2828
20 com.apple.AppKit 0x00007fff87ce842a -[NSView displayIfNeeded] + 1680
21 com.apple.AppKit 0x00007fff87d4d85e _handleWindowNeedsDisplayOrLayoutOrUpdateConstraints + 884
22 com.apple.AppKit 0x00007fff88323061 __83-[NSWindow _postWindowNeedsDisplayOrLayoutOrUpdateConstraintsUnlessPostingDisabled]_block_invoke1331 + 46
23 com.apple.CoreFoundation 0x00007fff862e3ee7 __CFRUNLOOP_IS_CALLING_OUT_TO_AN_OBSERVER_CALLBACK_FUNCTION__ + 23
24 com.apple.CoreFoundation 0x00007fff862e3e57 __CFRunLoopDoObservers + 391
25 com.apple.CoreFoundation 0x00007fff862d55f8 __CFRunLoopRun + 776
26 com.apple.CoreFoundation 0x00007fff862d50b5 CFRunLoopRunSpecific + 309
27 com.apple.HIToolbox 0x00007fff916e8a0d RunCurrentEventLoopInMode + 226
28 com.apple.HIToolbox 0x00007fff916e8685 ReceiveNextEventCommon + 173
29 com.apple.HIToolbox 0x00007fff916e85bc _BlockUntilNextEventMatchingListInModeWithFilter + 65
30 com.apple.AppKit 0x00007fff87bb13de _DPSNextEvent + 1434
31 com.apple.AppKit 0x00007fff87bb0a2b -[NSApplication nextEventMatchingMask:untilDate:inMode:dequeue:] + 122
32 com.apple.AppKit 0x00007fff87ba4b2c -[NSApplication run] + 553
33 com.apple.AppKit 0x00007fff87b8f913 NSApplicationMain + 940
34 com.sequelpro.SequelPro 0x0000000100002084 start + 52
``` | usab | program crashes when canceling user creation sequel pro will crash if you attempt to create a new user and cancel steps to reproduce open users panel click add button to create new user close the users panel program will crash i have been able to reproduce this a good times if you need more info let me know also the connection was via remote server under the ssh protocol here is part of the crash report os version mac os x application specific information objc msgsend selector name outlineview isgroupitem thread crashed dispatch queue com apple main thread libobjc a dylib objc msgsend com apple appkit com apple appkit com apple appkit com apple appkit com apple appkit com apple appkit com apple appkit com apple appkit com apple appkit com apple appkit com apple appkit com apple appkit com apple appkit com apple appkit com apple appkit com apple appkit com apple appkit com apple appkit com apple appkit com apple appkit com apple appkit handlewindowneedsdisplayorlayoutorupdateconstraints com apple appkit block com apple corefoundation cfrunloop is calling out to an observer callback function com apple corefoundation cfrunloopdoobservers com apple corefoundation cfrunlooprun com apple corefoundation cfrunlooprunspecific com apple hitoolbox runcurrenteventloopinmode com apple hitoolbox receivenexteventcommon com apple hitoolbox blockuntilnexteventmatchinglistinmodewithfilter com apple appkit dpsnextevent com apple appkit com apple appkit com apple appkit nsapplicationmain com sequelpro sequelpro start | 1 |
20,964 | 16,374,637,279 | IssuesEvent | 2021-05-15 21:10:22 | godotengine/godot | https://api.github.com/repos/godotengine/godot | closed | Launch Options / Arguments not working properly in Steam | topic:editor usability | **Godot version:**
Godot 3.1 (Steam)
**OS/device including version:**
macOS Mojave 10.14.3
**Issue description:**
Adding the launch options '/path/to/project' launches Godot in a maximized window but loads the Project Manager instead of the project.
Godot is launched with `--path $HOME -p` via Steam, which prevents it from working
**Steps to reproduce:**
In Steam > Library, right click Godot and open Properties, in General > Set Launch Options add the arguments '/path/to/project' to launch directly into the project.
**Minimal reproduction project:**
[Test.zip](https://github.com/godotengine/godot/files/2922441/Test.zip)
| True | Launch Options / Arguments not working properly in Steam - **Godot version:**
Godot 3.1 (Steam)
**OS/device including version:**
macOS Mojave 10.14.3
**Issue description:**
Adding the launch options '/path/to/project' launches Godot in a maximized window but loads the Project Manager instead of the project.
Godot is launched with `--path $HOME -p` via Steam, which prevents it from working
**Steps to reproduce:**
In Steam > Library, right click Godot and open Properties, in General > Set Launch Options add the arguments '/path/to/project' to launch directly into the project.
**Minimal reproduction project:**
[Test.zip](https://github.com/godotengine/godot/files/2922441/Test.zip)
| usab | launch options arguments not working properly in steam godot version godot steam os device including version macos mojave issue description adding the launch options path to project launches in maximized window but loads the project manager instead of the project godot is launched with path home p via steam which prevent it from working steps to reproduce in steam library right click godot and open properties in general set launch options add the arguments path to project to launch directly into the project minimal reproduction project | 1 |
39,680 | 12,698,846,489 | IssuesEvent | 2020-06-22 14:02:04 | mahonec/WebGoat-Legacy | https://api.github.com/repos/mahonec/WebGoat-Legacy | opened | CVE-2018-5968 (High) detected in jackson-databind-2.0.4.jar | security vulnerability | ## CVE-2018-5968 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.0.4.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Path to dependency file: /tmp/ws-scm/WebGoat-Legacy/pom.xml</p>
<p>Path to vulnerable library: canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.0.4/jackson-databind-2.0.4.jar,/WebGoat-Legacy/target/WebGoat-6.0.1/WEB-INF/lib/jackson-databind-2.0.4.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.0.4.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/mahonec/WebGoat-Legacy/commit/9b9155ac6645ae2fcb5f2195a346a9a39d3137e7">9b9155ac6645ae2fcb5f2195a346a9a39d3137e7</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind through 2.8.11 and 2.9.x through 2.9.3 allows unauthenticated remote code execution because of an incomplete fix for the CVE-2017-7525 and CVE-2017-17485 deserialization flaws. This is exploitable via two different gadgets that bypass a blacklist.
<p>Publish Date: 2018-01-22
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-5968>CVE-2018-5968</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-5968">http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-5968</a></p>
<p>Release Date: 2018-01-22</p>
<p>Fix Resolution: 2.8.11.1, 2.9.4</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.0.4","isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.0.4","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.8.11.1, 2.9.4"}],"vulnerabilityIdentifier":"CVE-2018-5968","vulnerabilityDetails":"FasterXML jackson-databind through 2.8.11 and 2.9.x through 2.9.3 allows unauthenticated remote code execution because of an incomplete fix for the CVE-2017-7525 and CVE-2017-17485 deserialization flaws. This is exploitable via two different gadgets that bypass a blacklist.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-5968","cvss3Severity":"high","cvss3Score":"8.1","cvss3Metrics":{"A":"High","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | True | CVE-2018-5968 (High) detected in jackson-databind-2.0.4.jar - ## CVE-2018-5968 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.0.4.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Path to dependency file: /tmp/ws-scm/WebGoat-Legacy/pom.xml</p>
<p>Path to vulnerable library: canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.0.4/jackson-databind-2.0.4.jar,/WebGoat-Legacy/target/WebGoat-6.0.1/WEB-INF/lib/jackson-databind-2.0.4.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.0.4.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/mahonec/WebGoat-Legacy/commit/9b9155ac6645ae2fcb5f2195a346a9a39d3137e7">9b9155ac6645ae2fcb5f2195a346a9a39d3137e7</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind through 2.8.11 and 2.9.x through 2.9.3 allows unauthenticated remote code execution because of an incomplete fix for the CVE-2017-7525 and CVE-2017-17485 deserialization flaws. This is exploitable via two different gadgets that bypass a blacklist.
<p>Publish Date: 2018-01-22
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-5968>CVE-2018-5968</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-5968">http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-5968</a></p>
<p>Release Date: 2018-01-22</p>
<p>Fix Resolution: 2.8.11.1, 2.9.4</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.0.4","isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.0.4","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.8.11.1, 2.9.4"}],"vulnerabilityIdentifier":"CVE-2018-5968","vulnerabilityDetails":"FasterXML jackson-databind through 2.8.11 and 2.9.x through 2.9.3 allows unauthenticated remote code execution because of an incomplete fix for the CVE-2017-7525 and CVE-2017-17485 deserialization flaws. This is exploitable via two different gadgets that bypass a blacklist.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-5968","cvss3Severity":"high","cvss3Score":"8.1","cvss3Metrics":{"A":"High","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | non_usab | cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api path to dependency file tmp ws scm webgoat legacy pom xml path to vulnerable library canner repository com fasterxml jackson core jackson databind jackson databind jar webgoat legacy target webgoat web inf lib jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in head commit a href vulnerability details fasterxml jackson databind through and x through allows unauthenticated remote code execution because of an incomplete fix for the cve and cve deserialization flaws this is exploitable via two different gadgets that bypass a blacklist publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rescue worker helmet automatic remediation is available for this issue isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails fasterxml jackson databind through and x through allows unauthenticated remote code execution because of an incomplete fix for the cve and cve deserialization flaws this is exploitable via two different gadgets that bypass a blacklist vulnerabilityurl | 0
18,943 | 13,486,748,661 | IssuesEvent | 2020-09-11 09:54:06 | CARTAvis/carta-frontend | https://api.github.com/repos/CARTAvis/carta-frontend | closed | Clicking on numeric text field consistency issue | usability review | Usability Review:
It would be better if clicking on a numeric text field, e.g., 'X Min' in the settings window, would place the cursor at the position clicked. Instead, it highlights the text field and moves the display to the last digits of the (sometimes very long) number. | True | Clicking on numeric text field consistency issue - Usability Review:
It would be better if clicking on a numeric text field, e.g., 'X Min' in the settings window, would place the cursor at the position clicked. Instead, it highlights the text field and moves the display to the last digits of the (sometimes very long) number. | usab | clicking on numeric text field consistency issue usability review it would be better if clicking on a numeric text field e g x min in the settings window would place the cursor at the position clicked instead it highlights the text field and moves the display to the last digits of the sometimes very long number | 1 |
34,441 | 16,557,442,125 | IssuesEvent | 2021-05-28 15:26:08 | nrwl/nx-console | https://api.github.com/repos/nrwl/nx-console | closed | Extension causes high cpu load | bug performance | - Issue Type: `Performance`
- Extension Name: `angular-console`
- Extension Version: `10.0.0`
- OS Version: `Windows_NT x64 10.0.18362`
- VSCode version: `1.41.1`
:warning: Make sure to **attach** this file from your *home*-directory:
:warning:`C:\Users\vikic\nrwl.angular-console-unresponsive.cpuprofile.txt`
Find more details here: https://github.com/microsoft/vscode/wiki/Explain-extension-causes-high-cpu-load | True | Extension causes high cpu load - - Issue Type: `Performance`
- Extension Name: `angular-console`
- Extension Version: `10.0.0`
- OS Version: `Windows_NT x64 10.0.18362`
- VSCode version: `1.41.1`
:warning: Make sure to **attach** this file from your *home*-directory:
:warning:`C:\Users\vikic\nrwl.angular-console-unresponsive.cpuprofile.txt`
Find more details here: https://github.com/microsoft/vscode/wiki/Explain-extension-causes-high-cpu-load | non_usab | extension causes high cpu load issue type performance extension name angular console extension version os version windows nt vscode version warning make sure to attach this file from your home directory warning c users vikic nrwl angular console unresponsive cpuprofile txt find more details here | 0 |
104,233 | 4,203,591,709 | IssuesEvent | 2016-06-28 06:21:33 | uWebSockets/uWebSockets | https://api.github.com/repos/uWebSockets/uWebSockets | opened | Compression support | high priority | formatMessage should optionally compress according to established permessage-deflate options. | 1.0 | Compression support - formatMessage should optionally compress according to established permessage-deflate options. | non_usab | compression support formatmessage should optionally compress according to established permessage deflate options | 0 |
185,904 | 21,876,268,707 | IssuesEvent | 2022-05-19 10:25:18 | turkdevops/graphql-tools | https://api.github.com/repos/turkdevops/graphql-tools | closed | CVE-2021-3805 (High) detected in object-path-0.11.5.tgz - autoclosed | security vulnerability | ## CVE-2021-3805 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>object-path-0.11.5.tgz</b></p></summary>
<p>Access deep object properties using a path</p>
<p>Library home page: <a href="https://registry.npmjs.org/object-path/-/object-path-0.11.5.tgz">https://registry.npmjs.org/object-path/-/object-path-0.11.5.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/object-path/package.json</p>
<p>
Dependency Hierarchy:
- @graphql-tools/url-loader-6.10.2.tgz (Root Library)
- graphql-upload-12.0.0.tgz
- :x: **object-path-0.11.5.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/graphql-tools/commit/522129ce265decf86028571eea566ef21c50fd7f">522129ce265decf86028571eea566ef21c50fd7f</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
object-path is vulnerable to Improperly Controlled Modification of Object Prototype Attributes ('Prototype Pollution')
<p>Publish Date: 2021-09-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3805>CVE-2021-3805</a></p>
</p>
</details>
<p></p>
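For readers unfamiliar with the class of bug described above, prototype pollution can be illustrated with a minimal, self-contained sketch of a naive deep-property setter. The `naiveSet` helper below is hypothetical — it is written for illustration only and is not object-path's actual code — but it shows why an unfiltered path such as `__proto__.polluted` ends up modifying every plain object:

```javascript
// Naive deep property setter -- illustrative only, NOT object-path's code.
// It walks a dotted path without filtering special keys such as "__proto__".
function naiveSet(obj, path, value) {
  const keys = path.split('.');
  let cur = obj;
  for (let i = 0; i < keys.length - 1; i++) {
    if (cur[keys[i]] === undefined) cur[keys[i]] = {};
    cur = cur[keys[i]];
  }
  cur[keys[keys.length - 1]] = value;
}

// An attacker-controlled path reaching "__proto__" walks up to
// Object.prototype, so the written property shows up on every plain
// object created afterwards -- not just on `target`.
const target = {};
naiveSet(target, '__proto__.polluted', 'yes');

const unrelated = {};
console.log(unrelated.polluted); // 'yes' -- the shared prototype was modified
```

The general shape of the fix in affected libraries is to reject the keys `__proto__`, `constructor`, and `prototype` while walking the path.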
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://huntr.dev/bounties/571e3baf-7c46-46e3-9003-ba7e4e623053/">https://huntr.dev/bounties/571e3baf-7c46-46e3-9003-ba7e4e623053/</a></p>
<p>Release Date: 2021-09-17</p>
<p>Fix Resolution: object-path - 0.11.8</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-3805 (High) detected in object-path-0.11.5.tgz - autoclosed - ## CVE-2021-3805 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>object-path-0.11.5.tgz</b></p></summary>
<p>Access deep object properties using a path</p>
<p>Library home page: <a href="https://registry.npmjs.org/object-path/-/object-path-0.11.5.tgz">https://registry.npmjs.org/object-path/-/object-path-0.11.5.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/object-path/package.json</p>
<p>
Dependency Hierarchy:
- @graphql-tools/url-loader-6.10.2.tgz (Root Library)
- graphql-upload-12.0.0.tgz
- :x: **object-path-0.11.5.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/graphql-tools/commit/522129ce265decf86028571eea566ef21c50fd7f">522129ce265decf86028571eea566ef21c50fd7f</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
object-path is vulnerable to Improperly Controlled Modification of Object Prototype Attributes ('Prototype Pollution')
<p>Publish Date: 2021-09-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3805>CVE-2021-3805</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://huntr.dev/bounties/571e3baf-7c46-46e3-9003-ba7e4e623053/">https://huntr.dev/bounties/571e3baf-7c46-46e3-9003-ba7e4e623053/</a></p>
<p>Release Date: 2021-09-17</p>
<p>Fix Resolution: object-path - 0.11.8</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_usab | cve high detected in object path tgz autoclosed cve high severity vulnerability vulnerable library object path tgz access deep object properties using a path library home page a href path to dependency file package json path to vulnerable library node modules object path package json dependency hierarchy graphql tools url loader tgz root library graphql upload tgz x object path tgz vulnerable library found in head commit a href found in base branch master vulnerability details object path is vulnerable to improperly controlled modification of object prototype attributes prototype pollution publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution object path step up your open source security game with whitesource | 0 |
9,252 | 6,187,258,241 | IssuesEvent | 2017-07-04 06:53:39 | Virtual-Labs/circular-dichronism-spectroscopy-iiith | https://api.github.com/repos/Virtual-Labs/circular-dichronism-spectroscopy-iiith | closed | QA_To understand the effect of chiral substances on plane polarized light as a function of wavelength_Menu-items_spelling-mistakes | Category:Usability Developed by: VLEAD Open-Edx Severity:S3 Status: Resolved | Defect Description :
Found spelling mistakes in the menu items of the "To understand the effect of chiral substances on plane polarized light as a function of wavelength" experiment in this lab.
Actual Result :
Found spelling mistakes in the menu items of the "To understand the effect of chiral substances on plane polarized light as a function of wavelength" experiment in this lab.
Environment :
OS: Windows 7, Ubuntu-16.04,Centos-6
Browsers:Firefox-42.0,Chrome-47.0,chromium-45.0
Bandwidth : 100Mbps
Hardware Configuration: 8GB RAM,
Processor: i5
Attachments:

| True | QA_To understand the effect of chiral substances on plane polarized light as a function of wavelength_Menu-items_spelling-mistakes - Defect Description :
Found spelling mistakes in the menu items of the "To understand the effect of chiral substances on plane polarized light as a function of wavelength" experiment in this lab.
Actual Result :
Found spelling mistakes in the menu items of the "To understand the effect of chiral substances on plane polarized light as a function of wavelength" experiment in this lab.
Environment :
OS: Windows 7, Ubuntu-16.04,Centos-6
Browsers:Firefox-42.0,Chrome-47.0,chromium-45.0
Bandwidth : 100Mbps
Hardware Configuration: 8GB RAM,
Processor: i5
Attachments:

| usab | qa to understand the effect of chiral substances on plane polarized light as a function of wavelength menu items spelling mistakes defect description found spelling mistakes in the menu items of to understand the effect of chiral substances on plane polarized light as a function of wavelength experiment this lab actual result found spelling mistakes in the menu items of to understand the effect of chiral substances on plane polarized light as a function of wavelength experiment this lab environment os windows ubuntu centos browsers firefox chrome chromium bandwidth hardware configuration processor attachments | 1 |
173,168 | 13,389,633,047 | IssuesEvent | 2020-09-02 19:12:07 | microsoft/vscode | https://api.github.com/repos/microsoft/vscode | closed | Test: Authentication contribution point | testplan-item | Refs: https://github.com/microsoft/vscode/issues/103507
- [x] macOS @chrisdias
- [x] linux @deepak1556
- [x] windows @fiveisprime
- [x] anyOS @sandy081
Complexity: 3
[Create Issue](https://github.com/microsoft/vscode/issues/new?body=Testing+%23105773%0A%0A)
---
There is a new `authentication` contribution point for extensions, which allows specifying what authentication the extension will contribute.
Please test that:
- In a test extension, you have autocompletions for "authentication" under "contributions" in the `package.json`
- The extension details page renders a section for "Authentication" (can check the built-in GitHub and Microsoft auth providers)
- Accessing `vscode.authentication.providers` returns a list of providers that have been statically registered | 1.0 | Test: Authentication contribution point - Refs: https://github.com/microsoft/vscode/issues/103507
- [x] macOS @chrisdias
- [x] linux @deepak1556
- [x] windows @fiveisprime
- [x] anyOS @sandy081
Complexity: 3
[Create Issue](https://github.com/microsoft/vscode/issues/new?body=Testing+%23105773%0A%0A)
---
There is a new `authentication` contribution point for extensions, which allows specifying what authentication the extension will contribute.
Please test that:
- In a test extension, you have autocompletions for "authentication" under "contributions" in the `package.json`
- The extension details page renders a section for "Authentication" (can check the built-in GitHub and Microsoft auth providers)
- Accessing `vscode.authentication.providers` returns a list of providers that have been statically registered | non_usab | test authentication contribution point refs macos chrisdias linux windows fiveisprime anyos complexity there is a new authentication contribution point for extensions which allows specifying what authentication the extension will contribute please test that in a test extension you have autocompletions for authentication under contributions in the package json the extension details page renders a section for authentication can check the built in github and microsoft auth providers accessing vscode authentication providers returns a list of providers that have been statically registered | 0 |
19,904 | 14,697,815,176 | IssuesEvent | 2021-01-04 04:41:58 | dotnet/machinelearning | https://api.github.com/repos/dotnet/machinelearning | opened | Improve usability of AutoML column not found error | AutoML.NET good first issue up-for-grabs usability | Let's make the error message more actionable.
**Error user sees:**

I would recommend suggesting similarly named column(s):
```diff
- $"Provided {columnPurpose} column '{columnName}' not found in training data."
+ $"Provided {columnPurpose} column '{columnName}' not found in training data. Did you mean '{closestNamed}'."
```
For me this would print: `Provided ignored column 'tagMaxTotalItem' not found in training data. Did you mean 'tagMaxTotalItems'.`
I'd recommend using Levenshtein distance to find the closest named column ([code](https://www.dotnetperls.com/levenshtein)).
**Code location:**
https://github.com/dotnet/machinelearning/blob/5dbfd8acac0bf798957eea122f1413209cdf07dc/src/Microsoft.ML.AutoML/Utils/UserInputValidationUtil.cs#L248-L252
**Background:**
It took me ~20min to debug why this error was occurring (obvious in retrospect). My column existed in the dataset, it existed in my loader function, it existed in my IDataView, ...; it was simply misspelt ("tagMaxTotalItem" instead of "tagMaxTotalItems").
Improving the usability of this error message will save future users' time. | True | Improve usability of AutoML column not found error - Let's make the error message more actionable.
**Error user sees:**

I would recommend suggesting similarly named column(s):
```diff
- $"Provided {columnPurpose} column '{columnName}' not found in training data."
+ $"Provided {columnPurpose} column '{columnName}' not found in training data. Did you mean '{closestNamed}'."
```
For me this would print: `Provided ignored column 'tagMaxTotalItem' not found in training data. Did you mean 'tagMaxTotalItems'.`
I'd recommend using Levenshtein distance to find the closest named column ([code](https://www.dotnetperls.com/levenshtein)).
**Code location:**
https://github.com/dotnet/machinelearning/blob/5dbfd8acac0bf798957eea122f1413209cdf07dc/src/Microsoft.ML.AutoML/Utils/UserInputValidationUtil.cs#L248-L252
**Background:**
It took me ~20min to debug why this error was occurring (obvious in retrospect). My column existed in the dataset, it existed in my loader function, it existed in my IDataView, ...; it was simply misspelt ("tagMaxTotalItem" instead of "tagMaxTotalItems").
Improving the usability of this error message will save future users' time. | usab | improve usability of automl column not found error let s make the error message more actionable error user sees i would recommend adding similar named column s diff provided columnpurpose column columnname not found in training data provided columnpurpose column columnname not found in training data did you mean closestnamed for me this would print provided ignored column tagmaxtotalitem not found in training data did you mean tagmaxtotalitems i d recommend using levenshtein distance to find the closest named column code location background it took me to debug why this error was occurring obvious in retrospect my column existed in the dataset it existed in my loader function it existed in my idataview simply was just misspelt tagmaxtotalitem instead of tagmaxtotalitems improving the usability of this error message will save future users time | 1 |
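The closest-column suggestion proposed in the AutoML record above can be sketched in plain Python (hypothetical helper names; the actual fix would live in the C# `UserInputValidationUtil` linked in the record):

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def closest_column(name: str, columns: list[str]) -> str:
    """Return the column whose name is nearest to `name` by edit distance."""
    return min(columns, key=lambda c: levenshtein(name, c))
```

For the example in the record, `closest_column("tagMaxTotalItem", ["userId", "tagMaxTotalItems"])` picks `"tagMaxTotalItems"` (distance 1), which is exactly the hint the proposed error message would print.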
1,521 | 2,871,133,220 | IssuesEvent | 2015-06-07 21:16:39 | PeerioTechnologies/peerio-client | https://api.github.com/repos/PeerioTechnologies/peerio-client | closed | Add language selection for passphrase generation | i18n usability | Add dictionaries for multiple languages in passphrase generation to make passphrases more memorable by allowing a user to select a passphrase using words in their native language. Should implement this even if we do not support the language in the client yet. | True | Add language selection for passphrase generation - Add dictionaries for multiple languages in passphrase generation to make passphrases more memorable by allowing a user to select a passphrase using words in their native language. Should implement this even if we do not support the language in the client yet. | usab | add language selection for passphrase generation add dictionaries for multiple languages in passphrase generation to make passphrases more memorable by allowing a user to select a passphrase using words in their native language should implement this even if we do not support the language in the client yet | 1 |
3,214 | 3,371,637,293 | IssuesEvent | 2015-11-23 19:57:57 | USDepartmentofLabor/Child-Labor-Android | https://api.github.com/repos/USDepartmentofLabor/Child-Labor-Android | closed | Usability - Information Goods Screen - Missing Navigation Text on Screen | Navigation Usability | Why does ‘Goods’ text appear next to the item type in the navigation on the individual goods screen for the iOS version, but not on Android version? | True | Usability - Information Goods Screen - Missing Navigation Text on Screen - Why does ‘Goods’ text appear next to the item type in the navigation on the individual goods screen for the iOS version, but not on Android version? | usab | usability information goods screen missing navigation text on screen why does ‘goods’ text appear next to the item type in the navigation on the individual goods screen for the ios version but not on android version | 1 |
4,535 | 3,871,200,922 | IssuesEvent | 2016-04-11 08:50:39 | MISP/MISP | https://api.github.com/repos/MISP/MISP | opened | Allow users to give feedback on proposals | enhancement usability | Have a way for users to comment on a proposal. Still to be discussed how this should be implemented and how it should be handled from a UI perspective.
For the first iteration, keep this local only. | True | Allow users to give feedback on proposals - Have a way for users to comment on a proposal. Still to be discussed how this should be implemented and how it should be handled from a UI perspective.
For the first iteration, keep this local only. | usab | allow users to give feedback on proposals have a way for users to comment on a proposal still to be discussed how this should be implemented and how it should be handled from a ui perspective for the first iteration keep this local only | 1 |
68,196 | 28,239,388,091 | IssuesEvent | 2023-04-06 05:28:56 | Azure/azure-sdk-for-net | https://api.github.com/repos/Azure/azure-sdk-for-net | closed | [BUG] Azure functions Service Bus trigger: SessionLockLost after previous run timeouts | Service Bus Client customer-reported question issue-addressed | ### Library name and version
Microsoft.Azure.WebJobs.Extensions.ServiceBus 5.9.0
### Describe the bug
When the session Service Bus trigger times out, the second re-run gets SessionLockLost.
MessageLock is set to 30s in service bus queue
### Expected behavior
The session lock is not lost.
### Actual behavior
Every run ends in timeout (which is correct), but the second run contains also this error:
> Azure.Messaging.ServiceBus: The session lock has expired on the MessageSession. Accept a new MessageSession. TrackingId:973f37c90000eca30000da4264269580_G16_B17, SystemTracker:G16:240034969:amqps://jm73010.servicebus.windows.net/-6cc1e03f;0:5:6:source(address:/myqueue,filter:[com.microsoft:session-filter:]), Timestamp:2023-03-31T08:14:09 (SessionLockLost). For troubleshooting information, see https://aka.ms/azsdk/net/servicebus/exceptions/troubleshoot.
Happening in local debug environment as well as in azure with this simple function.
This is annoying because session messages should not run concurrently, but they are (in Azure) in this case due to losing the session lock.
All logs from run:
> [2023-03-31T08:10:40.909Z] Executing 'Function1' (Reason='(null)', Id=06e22585-f628-4670-96d7-c7b4ebf78f30)
[2023-03-31T08:10:40.912Z] Trigger Details: MessageId: ba8e7ae5fb2845059279546b5ae313c0, SequenceNumber: 2, DeliveryCount: 5, EnqueuedTimeUtc: 2023-03-30T10:32:09.2810000+00:00, LockedUntilUtc: 9999-12-31T23:59:59.9999999+00:00, SessionId: ses2
[2023-03-31T08:10:40.929Z] C# ServiceBus queue trigger function processing message: mes2
[2023-03-31T08:10:44.532Z] Host lock lease acquired by instance ID '00000000000000000000000011C92764'.
[2023-03-31T08:12:40.937Z] Timeout value of 00:02:00 exceeded by function 'Function1' (Id: '06e22585-f628-4670-96d7-c7b4ebf78f30'). Initiating cancellation.
[2023-03-31T08:12:40.987Z] Executed 'Function1' (Failed, Id=06e22585-f628-4670-96d7-c7b4ebf78f30, Duration=120113ms)
[2023-03-31T08:12:40.988Z] Microsoft.Azure.WebJobs.Host: Timeout value of 00:02:00 was exceeded by function: Function1.
[2023-03-31T08:12:41.021Z] Message processing error (Action=ProcessMessageCallback, EntityPath=myqueue, Endpoint=jm73010.servicebus.windows.net)
[2023-03-31T08:12:41.022Z] Microsoft.Azure.WebJobs.Host: Timeout value of 00:02:00 was exceeded by function: Function1.
[2023-03-31T08:12:41.156Z] Executing 'Function1' (Reason='(null)', Id=a79a835f-95b0-408b-b031-38ea56d09302)
[2023-03-31T08:12:41.157Z] Trigger Details: MessageId: ba8e7ae5fb2845059279546b5ae313c0, SequenceNumber: 2, DeliveryCount: 6, EnqueuedTimeUtc: 2023-03-30T10:32:09.2810000+00:00, LockedUntilUtc: 9999-12-31T23:59:59.9999999+00:00, SessionId: ses2
[2023-03-31T08:12:41.158Z] C# ServiceBus queue trigger function processing message: mes2
[2023-03-31T08:14:41.172Z] Timeout value of 00:02:00 exceeded by function 'Function1' (Id: 'a79a835f-95b0-408b-b031-38ea56d09302'). Initiating cancellation.
[2023-03-31T08:14:41.193Z] Executed 'Function1' (Failed, Id=a79a835f-95b0-408b-b031-38ea56d09302, Duration=120037ms)
[2023-03-31T08:14:41.194Z] Microsoft.Azure.WebJobs.Host: Timeout value of 00:02:00 was exceeded by function: Function1.
[2023-03-31T08:14:41.197Z] Message processing error (Action=ProcessMessageCallback, EntityPath=myqueue, Endpoint=jm73010.servicebus.windows.net)
[2023-03-31T08:14:41.197Z] Microsoft.Azure.WebJobs.Host: Timeout value of 00:02:00 was exceeded by function: Function1.
[2023-03-31T08:14:41.206Z] Message processing error (Action=Abandon, EntityPath=myqueue, Endpoint=jm73010.servicebus.windows.net)
[2023-03-31T08:14:41.207Z] Azure.Messaging.ServiceBus: The session lock has expired on the MessageSession. Accept a new MessageSession. TrackingId:973f37c90000eca30000da4264269580_G16_B17, SystemTracker:G16:240034969:amqps://jm73010.servicebus.windows.net/-6cc1e03f;0:5:6:source(address:/myqueue,filter:[com.microsoft:session-filter:]), Timestamp:2023-03-31T08:14:09 (SessionLockLost). For troubleshooting information, see https://aka.ms/azsdk/net/servicebus/exceptions/troubleshoot.
### Reproduction Steps
Function:
```
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;
namespace FunctionApp5
{
public class Function1
{
[FunctionName("Function1")]
public async Task Run([ServiceBusTrigger("myqueue", Connection = "myservicebus", IsSessionsEnabled = true)] string myQueueItem,
ILogger log, CancellationToken cancellationToken)
{
log.LogInformation($"C# ServiceBus queue trigger function processing message: {myQueueItem}");
await Task.Delay(TimeSpan.FromMinutes(3), cancellationToken);
log.LogInformation($"C# ServiceBus queue trigger function processed message: {myQueueItem}");
}
}
}
```
host.json
```
{
"version": "2.0",
"logging": {
"applicationInsights": {
"samplingSettings": {
"isEnabled": true,
"excludedTypes": "Request"
}
}
},
"functionTimeout": "00:02:00",
"extensions": {
"serviceBus": {
"maxAutoLockRenewalDuration": "00:03:00",
"maxConcurrentCalls": 1,
"maxConcurrentSessions": 1
}
}
}
```
### Environment
OS Name: Windows
OS Version: 10.0.22621
OS Platform: Windows
RID: win10-x64
Base Path: C:\Program Files\dotnet\sdk\7.0.202\ | 1.0 | [BUG] Azure functions Service Bus trigger: SessionLockLost after previous run timeouts - ### Library name and version
Microsoft.Azure.WebJobs.Extensions.ServiceBus 5.9.0
### Describe the bug
When the session Service Bus trigger times out, the second re-run gets SessionLockLost.
MessageLock is set to 30s in service bus queue
### Expected behavior
The session lock is not lost.
### Actual behavior
Every run ends in timeout (which is correct), but the second run contains also this error:
> Azure.Messaging.ServiceBus: The session lock has expired on the MessageSession. Accept a new MessageSession. TrackingId:973f37c90000eca30000da4264269580_G16_B17, SystemTracker:G16:240034969:amqps://jm73010.servicebus.windows.net/-6cc1e03f;0:5:6:source(address:/myqueue,filter:[com.microsoft:session-filter:]), Timestamp:2023-03-31T08:14:09 (SessionLockLost). For troubleshooting information, see https://aka.ms/azsdk/net/servicebus/exceptions/troubleshoot.
Happening in local debug environment as well as in azure with this simple function.
This is annoying because session messages should not run concurrently, but they are (in Azure) in this case due to losing the session lock.
All logs from run:
> [2023-03-31T08:10:40.909Z] Executing 'Function1' (Reason='(null)', Id=06e22585-f628-4670-96d7-c7b4ebf78f30)
[2023-03-31T08:10:40.912Z] Trigger Details: MessageId: ba8e7ae5fb2845059279546b5ae313c0, SequenceNumber: 2, DeliveryCount: 5, EnqueuedTimeUtc: 2023-03-30T10:32:09.2810000+00:00, LockedUntilUtc: 9999-12-31T23:59:59.9999999+00:00, SessionId: ses2
[2023-03-31T08:10:40.929Z] C# ServiceBus queue trigger function processing message: mes2
[2023-03-31T08:10:44.532Z] Host lock lease acquired by instance ID '00000000000000000000000011C92764'.
[2023-03-31T08:12:40.937Z] Timeout value of 00:02:00 exceeded by function 'Function1' (Id: '06e22585-f628-4670-96d7-c7b4ebf78f30'). Initiating cancellation.
[2023-03-31T08:12:40.987Z] Executed 'Function1' (Failed, Id=06e22585-f628-4670-96d7-c7b4ebf78f30, Duration=120113ms)
[2023-03-31T08:12:40.988Z] Microsoft.Azure.WebJobs.Host: Timeout value of 00:02:00 was exceeded by function: Function1.
[2023-03-31T08:12:41.021Z] Message processing error (Action=ProcessMessageCallback, EntityPath=myqueue, Endpoint=jm73010.servicebus.windows.net)
[2023-03-31T08:12:41.022Z] Microsoft.Azure.WebJobs.Host: Timeout value of 00:02:00 was exceeded by function: Function1.
[2023-03-31T08:12:41.156Z] Executing 'Function1' (Reason='(null)', Id=a79a835f-95b0-408b-b031-38ea56d09302)
[2023-03-31T08:12:41.157Z] Trigger Details: MessageId: ba8e7ae5fb2845059279546b5ae313c0, SequenceNumber: 2, DeliveryCount: 6, EnqueuedTimeUtc: 2023-03-30T10:32:09.2810000+00:00, LockedUntilUtc: 9999-12-31T23:59:59.9999999+00:00, SessionId: ses2
[2023-03-31T08:12:41.158Z] C# ServiceBus queue trigger function processing message: mes2
[2023-03-31T08:14:41.172Z] Timeout value of 00:02:00 exceeded by function 'Function1' (Id: 'a79a835f-95b0-408b-b031-38ea56d09302'). Initiating cancellation.
[2023-03-31T08:14:41.193Z] Executed 'Function1' (Failed, Id=a79a835f-95b0-408b-b031-38ea56d09302, Duration=120037ms)
[2023-03-31T08:14:41.194Z] Microsoft.Azure.WebJobs.Host: Timeout value of 00:02:00 was exceeded by function: Function1.
[2023-03-31T08:14:41.197Z] Message processing error (Action=ProcessMessageCallback, EntityPath=myqueue, Endpoint=jm73010.servicebus.windows.net)
[2023-03-31T08:14:41.197Z] Microsoft.Azure.WebJobs.Host: Timeout value of 00:02:00 was exceeded by function: Function1.
[2023-03-31T08:14:41.206Z] Message processing error (Action=Abandon, EntityPath=myqueue, Endpoint=jm73010.servicebus.windows.net)
[2023-03-31T08:14:41.207Z] Azure.Messaging.ServiceBus: The session lock has expired on the MessageSession. Accept a new MessageSession. TrackingId:973f37c90000eca30000da4264269580_G16_B17, SystemTracker:G16:240034969:amqps://jm73010.servicebus.windows.net/-6cc1e03f;0:5:6:source(address:/myqueue,filter:[com.microsoft:session-filter:]), Timestamp:2023-03-31T08:14:09 (SessionLockLost). For troubleshooting information, see https://aka.ms/azsdk/net/servicebus/exceptions/troubleshoot.
### Reproduction Steps
Function:
```
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;
namespace FunctionApp5
{
public class Function1
{
[FunctionName("Function1")]
public async Task Run([ServiceBusTrigger("myqueue", Connection = "myservicebus", IsSessionsEnabled = true)] string myQueueItem,
ILogger log, CancellationToken cancellationToken)
{
log.LogInformation($"C# ServiceBus queue trigger function processing message: {myQueueItem}");
await Task.Delay(TimeSpan.FromMinutes(3), cancellationToken);
log.LogInformation($"C# ServiceBus queue trigger function processed message: {myQueueItem}");
}
}
}
```
host.json
```
{
"version": "2.0",
"logging": {
"applicationInsights": {
"samplingSettings": {
"isEnabled": true,
"excludedTypes": "Request"
}
}
},
"functionTimeout": "00:02:00",
"extensions": {
"serviceBus": {
"maxAutoLockRenewalDuration": "00:03:00",
"maxConcurrentCalls": 1,
"maxConcurrentSessions": 1
}
}
}
```
### Environment
OS Name: Windows
OS Version: 10.0.22621
OS Platform: Windows
RID: win10-x64
Base Path: C:\Program Files\dotnet\sdk\7.0.202\ | non_usab | azure functions service bus trigger sessionlocklost after previous run timeouts library name and version microsoft azure webjobs extensions servicebus describe the bug when session service bus trigger timeouts second re run gets sessionlocklost messagelock is set to in service bus queue expected behavior the session lock is not lost actual behavior every run ends in timeout which is correct but the second run contains also this error azure messaging servicebus the session lock has expired on the messagesession accept a new messagesession trackingid systemtracker amqps servicebus windows net source address myqueue filter timestamp sessionlocklost for troubleshooting information see happening in local debug environment as well as in azure with this simple function this is annoying because session messages should not run concurrently but they are in azure in this case due to loosing session lock all logs from run executing reason null id trigger details messageid sequencenumber deliverycount enqueuedtimeutc lockeduntilutc sessionid c servicebus queue trigger function processing message host lock lease acquired by instance id timeout value of exceeded by function id initiating cancellation executed failed id duration microsoft azure webjobs host timeout value of was exceeded by function message processing error action processmessagecallback entitypath myqueue endpoint servicebus windows net microsoft azure webjobs host timeout value of was exceeded by function executing reason null id trigger details messageid sequencenumber deliverycount enqueuedtimeutc lockeduntilutc sessionid c servicebus queue trigger function processing message timeout value of exceeded by function id initiating cancellation executed failed id duration microsoft azure webjobs host timeout value of was exceeded by function message processing error action processmessagecallback entitypath myqueue endpoint servicebus windows net microsoft 
azure webjobs host timeout value of was exceeded by function message processing error action abandon entitypath myqueue endpoint servicebus windows net azure messaging servicebus the session lock has expired on the messagesession accept a new messagesession trackingid systemtracker amqps servicebus windows net source address myqueue filter timestamp sessionlocklost for troubleshooting information see reproduction steps function using system using system threading using system threading tasks using microsoft azure webjobs using microsoft extensions logging namespace public class public async task run string myqueueitem ilogger log cancellationtoken cancellationtoken log loginformation c servicebus queue trigger function processing message myqueueitem await task delay timespan fromminutes cancellationtoken log loginformation c servicebus queue trigger function processed message myqueueitem host json version logging applicationinsights samplingsettings isenabled true excludedtypes request functiontimeout extensions servicebus maxautolockrenewalduration maxconcurrentcalls maxconcurrentsessions environment os name windows os version os platform windows rid base path c program files dotnet sdk | 0 |
19,281 | 13,765,792,750 | IssuesEvent | 2020-10-07 13:50:23 | ClickHouse/ClickHouse | https://api.github.com/repos/ClickHouse/ClickHouse | closed | Tuples IN bug | unexpected behaviour usability v20.8-affected v20.9-affected | When an indexed column is specified in a tuple used with the IN operator, an error is thrown:
DB::Exception: Number of columns in section IN doesn't match. 2 at left, 1 at right. (version 20.9.2.20 (official build))
Reproduction:
create table default.test
(
id1 UInt64,
id2 UInt64
)
engine = MergeTree order by id1 settings index_granularity = 8192;
select *
from default.test
where (id1, 0) in (select (1, 0));
If id2 is specified in the tuple, everything works, but any combination involving id1 leads to the error; the subquery can be anything, both a constant and a real subquery.
However, if a set is specified without a subquery, everything is fine.
select *
from default.test
where (id1, 0) in (1, 0)
This bug does not reproduce on the old ClickHouse version 19.4.5.1; it is already present in versions 20.5 and 20.9. | True | Tuples IN bug - When an indexed column is specified in a tuple used with the IN operator, an error is thrown:
DB::Exception: Number of columns in section IN doesn't match. 2 at left, 1 at right. (version 20.9.2.20 (official build))
Reproduction:
create table default.test
(
id1 UInt64,
id2 UInt64
)
engine = MergeTree order by id1 settings index_granularity = 8192;
select *
from default.test
where (id1, 0) in (select (1, 0));
If id2 is specified in the tuple, everything works, but any combination involving id1 leads to the error; the subquery can be anything, both a constant and a real subquery.
However, if a set is specified without a subquery, everything is fine.
select *
from default.test
where (id1, 0) in (1, 0)
This bug does not reproduce on the old ClickHouse version 19.4.5.1; it is already present in versions 20.5 and 20.9. | usab | tuples in bug when an indexed column is specified in a tuple used with the in operator an error is thrown db exception number of columns in section in doesn t match at left at right version official build reproduction create table default test engine mergetree order by settings index granularity select from default test where in select if is specified in the tuple everything works but any combination involving leads to the error the subquery can be anything both a constant and a real subquery however if a set is specified without a subquery everything is fine select from default test where in this bug does not reproduce on the old clickhouse version it is already present in versions and | 1 |
26,675 | 27,065,959,109 | IssuesEvent | 2023-02-14 00:27:37 | bevyengine/bevy | https://api.github.com/repos/bevyengine/bevy | closed | Unexpected SystemInMultipleBaseSets panic due to on_update | C-Bug A-ECS C-Usability | ## Bevy version
main branch a1e4114ebee5b903a95dc1438b4f0d946ac3c8be
## What you did
I am trying to order an exclusive system in a pretty standard way for Plugins using states:
1. The system should run early in every frame (before any "normal" user systems)
2. The system should only run in the given state
```rust
.add_system(
exclusive_system
.in_base_set(CoreSet::PreUpdate)
.in_set(MySet::Label)
        .on_update(MyStates::State)
)
```
## What went wrong
This triggers the unexpected panic `SystemInMultipleBaseSets { system: "Some(System exclusive_system: {is_exclusive})", first_set: "StateTransitions", second_set: "PreUpdate" }'`
## Additional information
- The fact that this is an exclusive system seems irrelevant; the same panic happens with a normal system
- replacing `on_update` with `run_if(state_equals(MyStates::State))` removes the panic, but is the exact behaviour I would have expected from `on_update`
| True | Unexpected SystemInMultipleBaseSets panic due to on_update - ## Bevy version
main branch a1e4114ebee5b903a95dc1438b4f0d946ac3c8be
## What you did
I am trying to order an exclusive system in a pretty standard way for Plugins using states:
1. The system should run early in every frame (before any "normal" user systems)
2. The system should only run in the given state
```rust
.add_system(
exclusive_system
.in_base_set(CoreSet::PreUpdate)
.in_set(MySet::Label)
        .on_update(MyStates::State)
)
```
## What went wrong
This triggers the unexpected panic `SystemInMultipleBaseSets { system: "Some(System exclusive_system: {is_exclusive})", first_set: "StateTransitions", second_set: "PreUpdate" }'`
## Additional information
- The fact that this is an exclusive system seems irrelevant; the same panic happens with a normal system
- replacing `on_update` with `run_if(state_equals(MyStates::State))` removes the panic, but is the exact behaviour I would have expected from `on_update`
| usab | unexpected systeminmultiplebasesets panic due to on update bevy version main branch what you did i am trying to order an exclusive system in a pretty standard way for plugins using states the system should run early in every frame before any normal user systems the system should only run in the given state rust add system exclusive system in base set coreset preupdate in set myset label on update mystates state what went wrong this triggers the unexpected panic systeminmultiplebasesets system some system exclusive system is exclusive first set statetransitions second set preupdate additional information the fact that this is an exclusive system seems irrelevant the same panic happens with a normal system replacing on update with run if state equals mystates state removes the panic but is the exact behaviour i would have expected from on update | 1 |
93,439 | 8,415,871,412 | IssuesEvent | 2018-10-13 19:03:33 | exercism/problem-specifications | https://api.github.com/repos/exercism/problem-specifications | closed | grep: conflicting flags | good first issue new test case idea | I just completed the grep exercise, and noticed that the tests didn't cover cases where flags conflict. E.g., what should happen in the following cases?
1. `-n` and `-l`
2. `-x` and `-v`
Some experimentation with grep shows that
1. `-n` takes precedence over `-l`
2. checks that the entire line does not match the pattern
The tests `oneFileNoMatchesVariousFlags()` and `multipleFilesNoMatchesVariousFlags()` contain the flags `-n`, `-l`, `-x`, and `-i`, but don't actually test the situations described above because there are no matches.
I did make sure that my solution produces the behavior described above (http://exercism.io/submissions/74bcc15ff83b4a00aeb7a919e0f1158d) although certainly I may have missed other conflicting flag combinations.
It might be helpful to either include tests that indicate how to resolve conflicting flags, or the README should indicate that this level of detail is not necessary. What do you think?
(Originally posted on the Java track and was encouraged to move it here.) | 1.0 | grep: conflicting flags - I just completed the grep exercise, and noticed that the tests didn't cover cases where flags conflict. E.g., what should happen in the following cases?
1. `-n` and `-l`
2. `-x` and `-v`
Some experimentation with grep shows that
1. `-n` takes precedence over `-l`
2. checks that the entire line does not match the pattern
The tests `oneFileNoMatchesVariousFlags()` and `multipleFilesNoMatchesVariousFlags()` contain the flags `-n`, `-l`, `-x`, and `-i`, but don't actually test the situations described above because there are no matches.
I did make sure that my solution produces the behavior described above (http://exercism.io/submissions/74bcc15ff83b4a00aeb7a919e0f1158d) although certainly I may have missed other conflicting flag combinations.
It might be helpful to either include tests that indicate how to resolve conflicting flags, or the README should indicate that this level of detail is not necessary. What do you think?
(Originally posted on the Java track and was encouraged to move it here.) | non_usab | grep conflicting flags i just completed the grep exercise and noticed that the tests didn t cover cases where flags conflict e g what should happen in the following cases n and l x and v some experimentation with grep shows that n takes precedence over l checks that the entire line does not match the pattern the tests onefilenomatchesvariousflags and multiplefilesnomatchesvariousflags contain the flags n l x and i but don t actually test the situations described above because there are no matches i did make sure that my solution produces the behavior described above although certainly i may have missed other conflicting flag combinations it might be helpful to either include tests that indicate how to resolve conflicting flags or the readme should indicate that this level of detail is not necessary what do you think originally posted on the java track and was encouraged to move it here | 0 |
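A minimal Python sketch (hypothetical names, single-file case only) of the conflict-resolution behaviour the grep record above observed: `-n` takes precedence over `-l`, and `-x` combined with `-v` selects lines whose entire content does not match the pattern:

```python
def match_line(line: str, pattern: str, flags: set[str]) -> bool:
    """Apply -i (case-insensitive), then -x (whole-line), then -v (invert) last."""
    text, pat = (line.lower(), pattern.lower()) if "-i" in flags else (line, pattern)
    hit = (text == pat) if "-x" in flags else (pat in text)
    return not hit if "-v" in flags else hit

def grep_file(name: str, lines: list[str], pattern: str, flags: set[str]) -> list[str]:
    """Single-file grep: -n (line numbers) wins over -l (file name only)."""
    matches = [(i, l) for i, l in enumerate(lines, 1) if match_line(l, pattern, flags)]
    if not matches:
        return []
    if "-n" in flags:                       # -n takes precedence over -l
        return [f"{i}:{l}" for i, l in matches]
    if "-l" in flags:
        return [name]                       # only the file name, once
    return [l for _, l in matches]
```

This is only one plausible reading of the conflicting-flag cases; the point of the record is that the canonical test suite should pin the behaviour down either way.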
6,186 | 4,173,242,426 | IssuesEvent | 2016-06-21 09:48:19 | gama-platform/gama | https://api.github.com/repos/gama-platform/gama | closed | The slider "minimum duration" does a "jump" when we try to catch it | > Bug Affects Usability Concerns Interface | When clicking on the "minimum duration" slider, the slider "jumps" to
- the right side of the bar when the value is low
- the left side of the bar when the value is high | True | The slider "minimum duration" does a "jump" when we try to catch it - When clicking on the "minimum duration" slider, the slider "jumps" to
- the right side of the bar when the value is low
- the left side of the bar when the value is high | usab | the slider minimum duration does a jump when we try to catch it when clicking on the minimum duration slider the slider jumps to the right side of the bar when the value is low the left side of the bar when the value is high | 1 |
7,501 | 5,032,386,344 | IssuesEvent | 2016-12-16 11:02:16 | postmanlabs/postman-app-support | https://api.github.com/repos/postmanlabs/postman-app-support | closed | Multiple scrollbars in UI | Usability | 1. Postman ver 4.4.3
2. Chrome app
3. OS details:Win 32
4. Is the Interceptor on and enabled in the app:No
5. Did you encounter this recently, or has this bug always been there:Don't know
6. Expected behaviour:Single Scroll Bar
7. Console logs (http://blog.getpostman.com/2014/01/27/enabling-chrome-developer-tools-inside-postman/ for the Chrome App, View->Toggle Dev Tools for the Mac app):
8. Screenshots (if applicable)
<!--
Steps to reproduce the problem:
send any query with enough response to fill response area
-->

| True | Multiple scrollbars in UI - 1. Postman ver 4.4.3
2. Chrome app
3. OS details:Win 32
4. Is the Interceptor on and enabled in the app:No
5. Did you encounter this recently, or has this bug always been there:Don't know
6. Expected behaviour:Single Scroll Bar
7. Console logs (http://blog.getpostman.com/2014/01/27/enabling-chrome-developer-tools-inside-postman/ for the Chrome App, View->Toggle Dev Tools for the Mac app):
8. Screenshots (if applicable)
<!--
Steps to reproduce the problem:
send any query with enough response to fill response area
-->

| usab | multiple scrollbars in ui postman ver chrome app os details win is the interceptor on and enabled in the app no did you encounter this recently or has this bug always been there dont know expected behaviour single scroll bar console logs for the chrome app view toggle dev tools for the mac app screenshots if applicable steps to reproduce the problem send any query with enough response to fill response area | 1 |
116,974 | 15,032,127,496 | IssuesEvent | 2021-02-02 09:47:58 | dotnet/project-system | https://api.github.com/repos/dotnet/project-system | opened | Prevent entire contents of UI from scrolling off the top of the screen | Feature-Project-Properties-Designer Tenet-User Friendly | In order to support navigating to the last category of the last page in the UI, we inject padding at the bottom of the scrollable area that is equal to the height of the scrollable area. This allows the last section to be positioned at the top of the screen.
A downside of this approach is that it's possible to end up seeing nothing on screen in either of two scenarios:
1. You scroll the property list to the bottom of its range (either with the mouse or keyboard), or
2. You scroll down a way then apply a search filter that means all matching properties appear in the space above the scroll area.
The second scenario is more likely to be a problem.
The ideal solution would be to inject padding that equals the scrollable height minus the height of the final section. However, calculating the height of the last section may introduce a performance penalty. It is worth investigating.
| 1.0 | Prevent entire contents of UI from scrolling off the top of the screen - In order to support navigating to the last category of the last page in the UI, we inject padding at the bottom of the scrollable area that is equal to the height of the scrollable area. This allows the last section to be positioned at the top of the screen.
A downside of this approach is that it's possible to end up seeing nothing on screen in either of two scenarios:
1. You scroll the property list to the bottom of its range (either with the mouse or keyboard), or
2. You scroll down a way then apply a search filter that means all matching properties appear in the space above the scroll area.
The second scenario is more likely to be a problem.
The ideal solution would be to inject padding that equals the scrollable height minus the height of the final section. However, calculating the height of the last section may introduce a performance penalty. It is worth investigating.
| non_usab | prevent entire contents of ui from scrolling off the top of the screen in order to support navigating to the last category of the last page in the ui we inject padding at the bottom of the scrollable area that is equal to the height of the scrollable area this allows the last section to be positioned at the top of the screen a downside of this approach is that it s possible to end up seeing nothing on screen in either of two scenarios you scroll the property list to the bottom of its range either with the mouse or keyboard or you scroll down a way then apply a search filter that means all matching properties appear in the space above the scroll area the second scenario is more likely to be a problem the idea solution would be to inject padding that equals the scrollable height minus the height of the final section however calculating the height of the last section may introduce a performance penalty it is worth investigating | 0 |
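The padding arithmetic described in this record can be sketched as follows. This is an illustrative sketch, not the actual project-system code: the current scheme pads by the full scrollable height, while the proposed fix subtracts the final section's height.

```python
def bottom_padding(viewport_height, last_section_height=None):
    """Scroll padding that lets the last section reach the top.

    With no section height given, mirror the current behavior from the
    issue: pad by the full viewport height (everything can scroll off).
    With a section height, apply the proposed fix: pad only enough for
    the last section to sit at the top while staying visible.
    """
    if last_section_height is None:
        return viewport_height  # current behavior
    return max(0, viewport_height - last_section_height)  # proposed fix
```

With a 600px viewport and a 150px final section, the proposed padding drops from 600 to 450, so the UI can no longer scroll entirely off-screen.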
21,886 | 18,041,101,569 | IssuesEvent | 2021-09-18 03:50:47 | ClickHouse/ClickHouse | https://api.github.com/repos/ClickHouse/ClickHouse | closed | When one of zk nodes goes down, tasks will fail | usability comp-zookeeper | I'm working on v21.3.4.25-lts
When one of the zk servers goes down (for any reason), the clickhouse server that is connected to it will report some exceptions.
At the same time, tasks executing on this clickhouse server will fail.
In fact, clickhouse will try to connect to another zk server from the zk cluster automatically, as seen here:
https://github.com/ClickHouse/ClickHouse/blob/47c1bb34163d496a9fe97d1aa4dd73894c4ad5d5/src/Common/ZooKeeper/ZooKeeperImpl.cpp#L354
Once it builds a new connection with zookeeper (maybe another zk server), it returns.
Can we maintain multiple connections to all zookeeper servers at the same time? In that case, if a zookeeper server goes down, clickhouse can switch to another connection without failing the tasks.
Or do we have any other plans for this scenario?
thanks
| True | When one of zk nodes goes down, tasks will fail - I'm working on v21.3.4.25-lts
When one of the zk servers goes down (for any reason), the clickhouse server that is connected to it will report some exceptions.
At the same time, tasks executing on this clickhouse server will fail.
In fact, clickhouse will try to connect to another zk server from the zk cluster automatically, as seen here:
https://github.com/ClickHouse/ClickHouse/blob/47c1bb34163d496a9fe97d1aa4dd73894c4ad5d5/src/Common/ZooKeeper/ZooKeeperImpl.cpp#L354
Once it builds a new connection with zookeeper (maybe another zk server), it returns.
Can we maintain multiple connections to all zookeeper servers at the same time? In that case, if a zookeeper server goes down, clickhouse can switch to another connection without failing the tasks.
Or do we have any other plans for this scenario?
thanks
| usab | when one of zk nodes goes down tasks will fail i m working on lts when one of zk server goes down for any reasons the clickhouse server who s connecting to it will report some exceptions at the same time tasks executing on this clickhouse server will fail in fact clickhouse will try to connect to other zk server from the zk cluster automatically from this once it builds a new connection with zookeeper maybe another zk server it returns can we maintanence multi connections with all zookeeper servers at the same time in this case if a zookeeper server goes down clickhouse can switch to another connection without failing the tasks or do we have any other plans for this scenario thanks | 1 |
21,446 | 17,087,769,730 | IssuesEvent | 2021-07-08 13:53:50 | fecgov/fec-cms | https://api.github.com/repos/fecgov/fec-cms | closed | Draft updated methodology for snapshot | Needs refinement Usability finding | **What we're after:**
Based on [usability testing](https://github.com/fecgov/fec-cms/issues/4716), we need to make adjustments to the methodology to make it more clear to end users.
Testing takeaways:
- Unclear how percentages were being calculated
- How can we be more clear about how individual percentages are calculated?
- Methodology didn't align with what was being displayed in the snapshot
- Disbursement percentage will change depending on the committee; this should be clarified in the methodology
- Wanted calculations specific to one committee
- Because we can't achieve this at this time, we should consider providing an example math equation for each
**Completion criteria:**
- [ ] Updated methodology is circulated and approved by relevant offices | True | Draft updated methodology for snapshot - **What we're after:**
Based on [usability testing](https://github.com/fecgov/fec-cms/issues/4716), we need to make adjustments to the methodology to make it more clear to end users.
Testing takeaways:
- Unclear how percentages were being calculated
- How can we be more clear about how individual percentages are calculated?
- Methodology didn't align with what was being displayed in the snapshot
- Disbursement percentage will change depending on the committee; this should be clarified in the methodology
- Wanted calculations specific to one committee
- Because we can't achieve this at this time, we should consider providing an example math equation for each
**Completion criteria:**
- [ ] Updated methodology is circulated and approved by relevant offices | usab | draft updated methodology for snapshot what we re after based on we need to make adjustments to the methodology to make it more clear to end users testing takeaways unclear how percentages were being calculated how can we be more clear with how individual percentages are calculated methodology didn t align with what was being displayed in the snapshot disbursement percentage will change depending on the committee this should be clarified in the methodology wanted calculations specific to one committee because we can t achieve this at this time we should consider providing example math equation for each completion criteria updated methodology is circulated and approved by relevant offices | 1 |
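The testing takeaways above ask for example math equations showing how each percentage is calculated. A hedged illustration of how such a disbursement percentage could be computed (the figures below are invented, not FEC data):

```python
def disbursement_percentage(category_total, all_disbursements):
    """One category's share of total disbursements, as a percentage."""
    if all_disbursements == 0:
        return 0.0  # avoid dividing by zero for committees with no spending
    return round(100.0 * category_total / all_disbursements, 1)
```

For instance, $250,000 of spending in one category out of $1,000,000 in total disbursements gives 25.0%. The result naturally differs committee by committee, as the takeaways note.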
102,314 | 4,153,847,198 | IssuesEvent | 2016-06-16 09:19:32 | kubernetes/kubernetes | https://api.github.com/repos/kubernetes/kubernetes | opened | [k8s.io] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] {Kubernetes e2e suite} | kind/flake priority/P2 | https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke/9287/
Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:945
Expected error:
<*errors.errorString | 0xc820a03970>: {
s: "Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.154.120.252 --kubeconfig=/workspace/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-n5dnd] [] <nil> Error from server: the server does not allow access to the requested resource (get replicasets.extensions e2e-test-nginx-deployment-1517792476)\n [] <nil> 0xc8206fd9a0 exit status 1 <nil> true [0xc8202b81a8 0xc8202b81c8 0xc8202b81f0] [0xc8202b81a8 0xc8202b81c8 0xc8202b81f0] [0xc8202b81b8 0xc8202b81d8] [0xa79a30 0xa79a30] 0xc820acf560}:\nCommand stdout:\n\nstderr:\nError from server: the server does not allow access to the requested resource (get replicasets.extensions e2e-test-nginx-deployment-1517792476)\n\nerror:\nexit status 1\n",
}
Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.154.120.252 --kubeconfig=/workspace/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-n5dnd] [] <nil> Error from server: the server does not allow access to the requested resource (get replicasets.extensions e2e-test-nginx-deployment-1517792476)
[] <nil> 0xc8206fd9a0 exit status 1 <nil> true [0xc8202b81a8 0xc8202b81c8 0xc8202b81f0] [0xc8202b81a8 0xc8202b81c8 0xc8202b81f0] [0xc8202b81b8 0xc8202b81d8] [0xa79a30 0xa79a30] 0xc820acf560}:
Command stdout:
stderr:
Error from server: the server does not allow access to the requested resource (get replicasets.extensions e2e-test-nginx-deployment-1517792476)
error:
exit status 1
not to have occurred
```
| 1.0 | [k8s.io] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] {Kubernetes e2e suite} - https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke/9287/
Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:945
Expected error:
<*errors.errorString | 0xc820a03970>: {
s: "Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.154.120.252 --kubeconfig=/workspace/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-n5dnd] [] <nil> Error from server: the server does not allow access to the requested resource (get replicasets.extensions e2e-test-nginx-deployment-1517792476)\n [] <nil> 0xc8206fd9a0 exit status 1 <nil> true [0xc8202b81a8 0xc8202b81c8 0xc8202b81f0] [0xc8202b81a8 0xc8202b81c8 0xc8202b81f0] [0xc8202b81b8 0xc8202b81d8] [0xa79a30 0xa79a30] 0xc820acf560}:\nCommand stdout:\n\nstderr:\nError from server: the server does not allow access to the requested resource (get replicasets.extensions e2e-test-nginx-deployment-1517792476)\n\nerror:\nexit status 1\n",
}
Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.154.120.252 --kubeconfig=/workspace/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-n5dnd] [] <nil> Error from server: the server does not allow access to the requested resource (get replicasets.extensions e2e-test-nginx-deployment-1517792476)
[] <nil> 0xc8206fd9a0 exit status 1 <nil> true [0xc8202b81a8 0xc8202b81c8 0xc8202b81f0] [0xc8202b81a8 0xc8202b81c8 0xc8202b81f0] [0xc8202b81b8 0xc8202b81d8] [0xa79a30 0xa79a30] 0xc820acf560}:
Command stdout:
stderr:
Error from server: the server does not allow access to the requested resource (get replicasets.extensions e2e-test-nginx-deployment-1517792476)
error:
exit status 1
not to have occurred
```
| non_usab | kubectl client kubectl run deployment should create a deployment from an image kubernetes suite failed kubectl client kubectl run deployment should create a deployment from an image kubernetes suite go src io kubernetes output dockerized go src io kubernetes test kubectl go expected error s error running workspace kubernetes platforms linux kubectl error from server the server does not allow access to the requested resource get replicasets extensions test nginx deployment n exit status true ncommand stdout n nstderr nerror from server the server does not allow access to the requested resource get replicasets extensions test nginx deployment n nerror nexit status n error running workspace kubernetes platforms linux kubectl error from server the server does not allow access to the requested resource get replicasets extensions test nginx deployment exit status true command stdout stderr error from server the server does not allow access to the requested resource get replicasets extensions test nginx deployment error exit status not to have occurred | 0 |
3,614 | 3,507,355,876 | IssuesEvent | 2016-01-08 12:50:33 | mesosphere/marathon | https://api.github.com/repos/mesosphere/marathon | closed | Show health status bar breakdown on application detail view | gui ready for review usability | Currently on the application list a tooltip with an explanation for each of the colors in the health status bar is provided.
On the detail view only information on healthy, unhealthy and unknown is provided.
Does it make sense to also show (if greater than 0)
* Staged
* Over Capacity
* Unscheduled
This expectation was highlighted in recent usability testing.

| True | Show health status bar breakdown on application detail view - Currently on the application list a tooltip with an explanation for each of the colors in the health status bar is provided.
On the detail view only information on healthy, unhealthy and unknown is provided.
Does it make sense to also show (if greater than 0)
* Staged
* Over Capacity
* Unscheduled
This expectation was highlighted in recent usability testing.

| usab | show health status bar breakdown on application detail view currently on the application list a tooltip with an explanation for each of the colors in the health status bar is provided on the detail view only information on healthy unhealthy and unknown is provided does it make sense to also show if great than staged over capacity unscheduled this expectation was highlighted in recent usability testing | 1 |
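The "show if greater than 0" expectation from this record can be sketched as a small filter over the health counts. The field names here are illustrative, not Marathon's actual data model.

```python
def visible_segments(counts):
    """Health-bar segments for the detail view, hiding zero counts.

    The states already shown on the detail view come first; Staged,
    Over Capacity and Unscheduled only appear when their task count
    is greater than 0, matching the expectation from usability testing.
    """
    order = ["healthy", "unhealthy", "unknown",
             "staged", "over capacity", "unscheduled"]
    return [(state, counts[state])
            for state in order if counts.get(state, 0) > 0]
```

For example, an app with 3 healthy tasks, 1 staged task and nothing unscheduled would render only the "healthy" and "staged" segments.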
391,624 | 26,901,166,555 | IssuesEvent | 2023-02-06 15:44:37 | ably/ably-python | https://api.github.com/repos/ably/ably-python | closed | Update README examples for 2.0.0-beta.3 release | documentation | Probably best to update the examples on `main` and then merge `main` into `integration/realtime` | 1.0 | Update README examples for 2.0.0-beta.3 release - Probably best to update the examples on `main` and then merge `main` into `integration/realtime` | non_usab | update readme examples for beta release probably best to update the examples on main and then merge main into integration realtime | 0 |
17,950 | 24,782,734,930 | IssuesEvent | 2022-10-24 07:12:30 | PorkStudios/FarPlaneTwo | https://api.github.com/repos/PorkStudios/FarPlaneTwo | closed | Oculus mod shader disabling the far render distance | Invalid Compatibility | Basically, whenever you turn on shaders using the Oculus mod, the far render distance does not show. I am not using OptiFine as the shader loader because of some mod incompatibility | True | Oculus mod shader disabling the far render distance - Basically, whenever you turn on shaders using the Oculus mod, the far render distance does not show. I am not using OptiFine as the shader loader because of some mod incompatibility | non_usab | oculus mod shader disabling the far render distance basically whenever you turn on shader using oculus mod the far render distance does not show i am not using optifine as shader loader because of some mod incompatibility | 0 |
2,646 | 3,560,609,679 | IssuesEvent | 2016-01-23 05:59:12 | DynamoRIO/drmemory | https://api.github.com/repos/DynamoRIO/drmemory | opened | leak scan at process exit is really slow on ARM | Component-LeakCheck OpSys-ARM Performance | Split from #1726
Running "base_unittests --single-process-tests --gtest_filter=FileTest.MemoryCorruption", the leak scan time really shows up as this test forks 9 separate child processes.
gdb snapshot:
```
(gdb) bt
#0 drsym_obj_addrsearch_symtab (mod_in=0x49d6da30, modoffs=7820238, idx=0x48dea0f8)
at /home/derek/drmemory/git/src/dynamorio/ext/drsyms/drsyms_elf.c:400
#1 0x73942fba in addrsearch_symtab (mod=0x49d6da00, modoffs=7820238, info=0x48dea3b0, flags=1)
at /home/derek/drmemory/git/src/dynamorio/ext/drsyms/drsyms_unix_common.c:403
#2 0x7394328a in drsym_unix_lookup_address (mod_in=0x49d6da00, modoffs=7820238, out=0x48dea3b0, flags=1)
at /home/derek/drmemory/git/src/dynamorio/ext/drsyms/drsyms_unix_common.c:560
#3 0x73942142 in drsym_lookup_address_local (modpath=0x48e2b324 "/mnt/usbkey/chrome-arm-build/Release/base_unittests",
modoffs=7820238, out=0x48dea3b0, flags=1) at /home/derek/drmemory/git/src/dynamorio/ext/drsyms/drsyms_unix_frontend.c:153
#4 0x73942316 in drsym_lookup_address (modpath=0x48e2b324 "/mnt/usbkey/chrome-arm-build/Release/base_unittests", modoffs=7820238,
out=0x48dea3b0, flags=1) at /home/derek/drmemory/git/src/dynamorio/ext/drsyms/drsyms_unix_frontend.c:245
#5 0x738b2670 in lookup_func_and_line (frame=0x48e75f28, name_info=0x48e2b2f4, modoffs=7820238)
at /home/derek/drmemory/git/src/common/callstack.c:541
#6 0x738c5384 in packed_frame_to_symbolized (pcs=0x493f84e0, frame=0x48e75f28, idx=6)
at /home/derek/drmemory/git/src/common/callstack.c:2115
#7 0x738c5f66 in packed_callstack_to_symbolized (pcs=0x493f84e0, scs=0x48dea4f8)
at /home/derek/drmemory/git/src/common/callstack.c:2160
#8 0x739002d8 in report_leak (known_malloc=1 '\001', addr=0x8c18f0 "\016", size=27, indirect_size=0, early=0 '\000',
reachable=1 '\001', maybe_reachable=0 '\000', shadow_state=5, pcs=0x493f84e0, count_reachable=1 '\001', show_reachable=0 '\000')
at /home/derek/drmemory/git/src/drmemory/report.c:3448
#9 0x738d78f0 in client_found_leak (start=0x8c18f0 "\016",
end=0x8c190b "\361\375\361\375\361\375\361\375\361\375\361\375\361\375\361\375\361\375\361\375\361X\205?I\030",
indirect_bytes=0, pre_us=0 '\000', reachable=1 '\001', maybe_reachable=0 '\000', client_data=0x493f84e0,
count_reachable=1 '\001', show_reachable=0 '\000') at /home/derek/drmemory/git/src/drmemory/alloc_drmem.c:2548
#10 0x7390da68 in malloc_iterate_cb (info=0x48dea71c, iter_data=0x48dea90c) at /home/derek/drmemory/git/src/drmemory/leak.c:1249
#11 0x738a0c48 in alloc_iter_own_arena (iter_arena_start=0x8ac000 "", iter_arena_end=0x9c3000 <Address 0x9c3000 out of bounds>,
flags=2, iter_data=0x48dea87c) at /home/derek/drmemory/git/src/common/alloc_replace.c:2336
#12 0x738b0d54 in rb_iter_cb (node=0x48e185ec, data=0x48dea830) at /home/derek/drmemory/git/src/common/heap.c:1024
#13 0x73918c18 in iterate_helper (tree=0x48e18d94, node=0x48e185ec, iter_cb=0x738b0add <rb_iter_cb>, iter_data=0x48dea830)
at /home/derek/drmemory/git/src/common/redblack.c:670
#14 0x73918f26 in rb_iterate (tree=0x48e18d94, iter_cb=0x738b0add <rb_iter_cb>, iter_data=0x48dea830)
at /home/derek/drmemory/git/src/common/redblack.c:684
#15 0x738b0dee in heap_region_iterate (iter_cb=0x738a050d <alloc_iter_own_arena>, data=0x48dea87c)
at /home/derek/drmemory/git/src/common/heap.c:1037
#16 0x738a1246 in alloc_iterate (cb=0x7390d075 <malloc_iterate_cb>, iter_data=0x48dea90c, only_live=1 '\001')
at /home/derek/drmemory/git/src/common/alloc_replace.c:2370
#17 0x738a92aa in malloc_replace__iterate (cb=0x7390d075 <malloc_iterate_cb>, iter_data=0x48dea90c)
at /home/derek/drmemory/git/src/common/alloc_replace.c:5189
#18 0x738844b6 in malloc_iterate (cb=0x7390d075 <malloc_iterate_cb>, iter_data=0x48dea90c)
at /home/derek/drmemory/git/src/common/alloc.c:3785
#19 0x7390f7a6 in leak_scan_for_leaks (at_exit=1 '\001') at /home/derek/drmemory/git/src/drmemory/leak.c:1482
#20 0x738d7a40 in check_reachability (at_exit=1 '\001') at /home/derek/drmemory/git/src/drmemory/alloc_drmem.c:2584
#21 0x738133c0 in event_exit () at /home/derek/drmemory/git/src/drmemory/drmemory.c:400
#22 0x711c2404 in instrument_exit () at /home/derek/drmemory/git/src/dynamorio/core/lib/instrument.c:765
```
Should we turn off reporting of reachable leaks?
By default we count and de-dup them, and we support suppressions:
```
~~Dr.M~~ ERRORS IGNORED:
~~Dr.M~~ 50 unique, 81 total, 9869 byte(s) of still-reachable allocation(s)
```
Although we only need to symbolize to check suppressions, right? Not to
de-dup. We could drop the feature of suppressing reachable when !show_reachable?
Perf tests running:
```
# /usr/bin/time bin/drmemory -pause_at_assert -dr_ops "-msgbox_mask 12 -checklevel 0" -- /mnt/usbkey/chrome-arm-build/Release/base_unittests --single-process-tests --gtest_filter=FileTest.MemoryCorruption
```
defaults:
1041.97user 17.25system 17:43.28elapsed 99%CPU (0avgtext+0avgdata 568224maxresident)k
-no_count_leaks:
83.70user 0.82system 1:24.94elapsed 99%CPU (0avgtext+0avgdata 171888maxresident)k
-no_leak_scan:
87.30user 0.88system 1:28.59elapsed 99%CPU (0avgtext+0avgdata 171920maxresident)k
defaults but w/ the proposed scheme of avoiding symbolization (and thus
suppression) of reachable leaks:
113.33user 15.14system 2:09.57elapsed 99%CPU (0avgtext+0avgdata 261312maxresident)k
exit report_leak right up front for reachable:
110.77user 14.67system 2:06.02elapsed 99%CPU (0avgtext+0avgdata 261280maxresident)k
avoid symbolization for all leaks:
86.88user 0.89system 1:28.17elapsed 99%CPU (0avgtext+0avgdata 171920maxresident)k
So symbolization is where the vast majority of the cost is. There must be
something specific to ARM?
Xref #1770: maybe there's some other low-hanging drsyms fruit.
| True | leak scan at process exit is really slow on ARM - Split from #1726
Running "base_unittests --single-process-tests --gtest_filter=FileTest.MemoryCorruption", the leak scan time really shows up as this test forks 9 separate child processes.
gdb snapshot:
```
(gdb) bt
#0 drsym_obj_addrsearch_symtab (mod_in=0x49d6da30, modoffs=7820238, idx=0x48dea0f8)
at /home/derek/drmemory/git/src/dynamorio/ext/drsyms/drsyms_elf.c:400
#1 0x73942fba in addrsearch_symtab (mod=0x49d6da00, modoffs=7820238, info=0x48dea3b0, flags=1)
at /home/derek/drmemory/git/src/dynamorio/ext/drsyms/drsyms_unix_common.c:403
#2 0x7394328a in drsym_unix_lookup_address (mod_in=0x49d6da00, modoffs=7820238, out=0x48dea3b0, flags=1)
at /home/derek/drmemory/git/src/dynamorio/ext/drsyms/drsyms_unix_common.c:560
#3 0x73942142 in drsym_lookup_address_local (modpath=0x48e2b324 "/mnt/usbkey/chrome-arm-build/Release/base_unittests",
modoffs=7820238, out=0x48dea3b0, flags=1) at /home/derek/drmemory/git/src/dynamorio/ext/drsyms/drsyms_unix_frontend.c:153
#4 0x73942316 in drsym_lookup_address (modpath=0x48e2b324 "/mnt/usbkey/chrome-arm-build/Release/base_unittests", modoffs=7820238,
out=0x48dea3b0, flags=1) at /home/derek/drmemory/git/src/dynamorio/ext/drsyms/drsyms_unix_frontend.c:245
#5 0x738b2670 in lookup_func_and_line (frame=0x48e75f28, name_info=0x48e2b2f4, modoffs=7820238)
at /home/derek/drmemory/git/src/common/callstack.c:541
#6 0x738c5384 in packed_frame_to_symbolized (pcs=0x493f84e0, frame=0x48e75f28, idx=6)
at /home/derek/drmemory/git/src/common/callstack.c:2115
#7 0x738c5f66 in packed_callstack_to_symbolized (pcs=0x493f84e0, scs=0x48dea4f8)
at /home/derek/drmemory/git/src/common/callstack.c:2160
#8 0x739002d8 in report_leak (known_malloc=1 '\001', addr=0x8c18f0 "\016", size=27, indirect_size=0, early=0 '\000',
reachable=1 '\001', maybe_reachable=0 '\000', shadow_state=5, pcs=0x493f84e0, count_reachable=1 '\001', show_reachable=0 '\000')
at /home/derek/drmemory/git/src/drmemory/report.c:3448
#9 0x738d78f0 in client_found_leak (start=0x8c18f0 "\016",
end=0x8c190b "\361\375\361\375\361\375\361\375\361\375\361\375\361\375\361\375\361\375\361\375\361X\205?I\030",
indirect_bytes=0, pre_us=0 '\000', reachable=1 '\001', maybe_reachable=0 '\000', client_data=0x493f84e0,
count_reachable=1 '\001', show_reachable=0 '\000') at /home/derek/drmemory/git/src/drmemory/alloc_drmem.c:2548
#10 0x7390da68 in malloc_iterate_cb (info=0x48dea71c, iter_data=0x48dea90c) at /home/derek/drmemory/git/src/drmemory/leak.c:1249
#11 0x738a0c48 in alloc_iter_own_arena (iter_arena_start=0x8ac000 "", iter_arena_end=0x9c3000 <Address 0x9c3000 out of bounds>,
flags=2, iter_data=0x48dea87c) at /home/derek/drmemory/git/src/common/alloc_replace.c:2336
#12 0x738b0d54 in rb_iter_cb (node=0x48e185ec, data=0x48dea830) at /home/derek/drmemory/git/src/common/heap.c:1024
#13 0x73918c18 in iterate_helper (tree=0x48e18d94, node=0x48e185ec, iter_cb=0x738b0add <rb_iter_cb>, iter_data=0x48dea830)
at /home/derek/drmemory/git/src/common/redblack.c:670
#14 0x73918f26 in rb_iterate (tree=0x48e18d94, iter_cb=0x738b0add <rb_iter_cb>, iter_data=0x48dea830)
at /home/derek/drmemory/git/src/common/redblack.c:684
#15 0x738b0dee in heap_region_iterate (iter_cb=0x738a050d <alloc_iter_own_arena>, data=0x48dea87c)
at /home/derek/drmemory/git/src/common/heap.c:1037
#16 0x738a1246 in alloc_iterate (cb=0x7390d075 <malloc_iterate_cb>, iter_data=0x48dea90c, only_live=1 '\001')
at /home/derek/drmemory/git/src/common/alloc_replace.c:2370
#17 0x738a92aa in malloc_replace__iterate (cb=0x7390d075 <malloc_iterate_cb>, iter_data=0x48dea90c)
at /home/derek/drmemory/git/src/common/alloc_replace.c:5189
#18 0x738844b6 in malloc_iterate (cb=0x7390d075 <malloc_iterate_cb>, iter_data=0x48dea90c)
at /home/derek/drmemory/git/src/common/alloc.c:3785
#19 0x7390f7a6 in leak_scan_for_leaks (at_exit=1 '\001') at /home/derek/drmemory/git/src/drmemory/leak.c:1482
#20 0x738d7a40 in check_reachability (at_exit=1 '\001') at /home/derek/drmemory/git/src/drmemory/alloc_drmem.c:2584
#21 0x738133c0 in event_exit () at /home/derek/drmemory/git/src/drmemory/drmemory.c:400
#22 0x711c2404 in instrument_exit () at /home/derek/drmemory/git/src/dynamorio/core/lib/instrument.c:765
```
Should we turn off reporting of reachable leaks?
By default we count and de-dup them, and we support suppressions:
```
~~Dr.M~~ ERRORS IGNORED:
~~Dr.M~~ 50 unique, 81 total, 9869 byte(s) of still-reachable allocation(s)
```
Although we only need to symbolize to check suppressions, right? Not to
de-dup. We could drop the feature of suppressing reachable when !show_reachable?
Perf tests running:
```
# /usr/bin/time bin/drmemory -pause_at_assert -dr_ops "-msgbox_mask 12 -checklevel 0" -- /mnt/usbkey/chrome-arm-build/Release/base_unittests --single-process-tests --gtest_filter=FileTest.MemoryCorruption
```
defaults:
1041.97user 17.25system 17:43.28elapsed 99%CPU (0avgtext+0avgdata 568224maxresident)k
-no_count_leaks:
83.70user 0.82system 1:24.94elapsed 99%CPU (0avgtext+0avgdata 171888maxresident)k
-no_leak_scan:
87.30user 0.88system 1:28.59elapsed 99%CPU (0avgtext+0avgdata 171920maxresident)k
defaults but w/ the proposed scheme of avoiding symbolization (and thus
suppression) of reachable leaks:
113.33user 15.14system 2:09.57elapsed 99%CPU (0avgtext+0avgdata 261312maxresident)k
exit report_leak right up front for reachable:
110.77user 14.67system 2:06.02elapsed 99%CPU (0avgtext+0avgdata 261280maxresident)k
avoid symbolization for all leaks:
86.88user 0.89system 1:28.17elapsed 99%CPU (0avgtext+0avgdata 171920maxresident)k
So symbolization is where the vast majority of the cost is. There must be
something specific to ARM?
Xref #1770: maybe there's some other low-hanging drsyms fruit.
| non_usab | leak scan at process exit is really slow on arm split from running base unittests single process tests gtest filter filetest memorycorruption the leak scan time really shows up as this test forks separate child processes gdb snapshot gdb bt drsym obj addrsearch symtab mod in modoffs idx at home derek drmemory git src dynamorio ext drsyms drsyms elf c in addrsearch symtab mod modoffs info flags at home derek drmemory git src dynamorio ext drsyms drsyms unix common c in drsym unix lookup address mod in modoffs out flags at home derek drmemory git src dynamorio ext drsyms drsyms unix common c in drsym lookup address local modpath mnt usbkey chrome arm build release base unittests modoffs out flags at home derek drmemory git src dynamorio ext drsyms drsyms unix frontend c in drsym lookup address modpath mnt usbkey chrome arm build release base unittests modoffs out flags at home derek drmemory git src dynamorio ext drsyms drsyms unix frontend c in lookup func and line frame name info modoffs at home derek drmemory git src common callstack c in packed frame to symbolized pcs frame idx at home derek drmemory git src common callstack c in packed callstack to symbolized pcs scs at home derek drmemory git src common callstack c in report leak known malloc addr size indirect size early reachable maybe reachable shadow state pcs count reachable show reachable at home derek drmemory git src drmemory report c in client found leak start end i indirect bytes pre us reachable maybe reachable client data count reachable show reachable at home derek drmemory git src drmemory alloc drmem c in malloc iterate cb info iter data at home derek drmemory git src drmemory leak c in alloc iter own arena iter arena start iter arena end flags iter data at home derek drmemory git src common alloc replace c in rb iter cb node data at home derek drmemory git src common heap c in iterate helper tree node iter cb iter data at home derek drmemory git src common redblack c in rb iterate 
tree iter cb iter data at home derek drmemory git src common redblack c in heap region iterate iter cb data at home derek drmemory git src common heap c in alloc iterate cb iter data only live at home derek drmemory git src common alloc replace c in malloc replace iterate cb iter data at home derek drmemory git src common alloc replace c in malloc iterate cb iter data at home derek drmemory git src common alloc c in leak scan for leaks at exit at home derek drmemory git src drmemory leak c in check reachability at exit at home derek drmemory git src drmemory alloc drmem c in event exit at home derek drmemory git src drmemory drmemory c in instrument exit at home derek drmemory git src dynamorio core lib instrument c should we turn off reporting of reachable leaks by default we count and de dup them and we support suppressions dr m errors ignored dr m unique total byte s of still reachable allocation s although we only need to symbolize to check suppressions right not to de dup we could drop the feature of suppressing reachable when show reachable perf tests running usr bin time bin drmemory pause at assert dr ops msgbox mask checklevel mnt usbkey chrome arm build release base unittests single process tests gtest filter filetest memorycorruption defaults cpu k no count leaks cpu k no leak scan cpu k defaults but w the proposed scheme of avoiding symbolization and thus suppression of reachable leaks cpu k exit report leak right up front for reachable cpu k avoid symbolization for all leaks cpu k so symbolization is where the vast majority of the cost is there must be something specific to arm xref maybe there s some other low hanging drsyms fruit | 0 |
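Recomputing the elapsed times quoted in the drmemory record above shows why symbolization is the prime suspect. The parsing helper is just for illustration; the timings are the ones reported in the issue.

```python
def to_seconds(elapsed):
    """Parse a /usr/bin/time elapsed value like '17:43.28' (min:sec)."""
    minutes, seconds = elapsed.split(":")
    return int(minutes) * 60 + float(seconds)

# Elapsed times reported in the issue for the same workload.
default_run = to_seconds("17:43.28")       # full leak reporting (defaults)
no_symbolization = to_seconds("1:28.17")   # symbolization skipped for all leaks
no_count_leaks = to_seconds("1:24.94")     # leak counting disabled entirely

# Skipping symbolization alone recovers roughly a 12x speedup, nearly
# all of the ~12.5x gained by disabling leak counting outright.
symbolization_speedup = default_run / no_symbolization
```

That gap between ~12x and ~12.5x is what the issue means by "symbolization is where the vast majority of the cost is."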
22,751 | 20,092,949,920 | IssuesEvent | 2022-02-06 03:18:40 | tailscale/tailscale | https://api.github.com/repos/tailscale/tailscale | closed | macOS: tailscale backend crashes when I do File|Share|Tailscale in TextEdit app | OS-macos L2 Few P2 Aggravating T5 Usability | Version: 1.7.458
BUG-1b57f15d6fc0818cc8e1393b7683a2e44000267afc7de1d4eb8d3b23287eb4cf-20210514165505Z-e74d020a1e28d50a
Steps:
Not exactly sure what triggered it. I was playing with File|Share|Tailscale in the TextEdit app at the time, but that might not be relevant.
The backend disconnected and the frontend went back into "Needs login" state. Upon logging back in, all seems well.
| True | macOS: tailscale backend crashes when I do File|Share|Tailscale in TextEdit app - Version: 1.7.458
BUG-1b57f15d6fc0818cc8e1393b7683a2e44000267afc7de1d4eb8d3b23287eb4cf-20210514165505Z-e74d020a1e28d50a
Steps:
Not exactly sure what triggered it. I was playing with File|Share|Tailscale in the TextEdit app at the time, but that might not be relevant.
The backend disconnected and the frontend went back into "Needs login" state. Upon logging back in, all seems well.
| usab | macos tailscale backend crashes when i do file share tailscale in textedit app version bug steps not exactly sure what triggered it i was playing with file share tailscale in the textedit app at the time but that might not be relevant the backend disconnected and the frontend went back into needs login state upon logging back in all seems well | 1 |
186,451 | 6,736,253,394 | IssuesEvent | 2017-10-19 02:44:13 | leo-project/leofs | https://api.github.com/repos/leo-project/leofs | closed | [leo_manager] recover-file doesn't work against objects which key separated by LF | Bug Priority-HIGH v1.3 _leo_manager | because escape_large_obj_sep https://github.com/leo-project/leofs/blob/1.3.7/apps/leo_manager/src/leo_manager_console.erl#L2040 is not called on the path executed recover-file so now
- parts object of a large object
- temporary object suffixed by UploadId
can't be recovered through recover-file. | 1.0 | [leo_manager] recover-file doesn't work against objects which key separated by LF - because escape_large_obj_sep https://github.com/leo-project/leofs/blob/1.3.7/apps/leo_manager/src/leo_manager_console.erl#L2040 is not called on the path executed recover-file so now
- parts object of a large object
- temporary object suffixed by UploadId
can't be recovered through recover-file. | non_usab | recover file doesn t work against objects which key separated by lf because escape large obj sep is not called on the path executed recover file so now parts object of a large object temporary object suffixed by uploadid can t be recovered through recover file | 0 |
514,822 | 14,944,634,690 | IssuesEvent | 2021-01-26 01:58:49 | vmware/singleton | https://api.github.com/repos/vmware/singleton | closed | [BUG] [C# Client]Get offline localized messages rather than online localized messages by GetString(string locale, ISource source) when online and offline modes are provided. | area/c#-client discussion kind/bug priority/high | **Describe the bug**
Get offline localized messages rather than online localized messages by GetString(string locale, ISource source) when online and offline modes are provided.
Request a key with the locale both exist in online bundle and offline bundle, It should return the requested locale translation in online bundle.
**To Reproduce**
Steps to reproduce the behavior:
1. Set config as below, make sure server and localbundle urls are provided.
online_service_url: http://localhost:8091/singleton/CSharpClient/1.0.0/
offline_resources_base_url: CSharpClient/1.0.0
2. Request a key with the locale both exist in online bundle and offline bundle:
Translation.SetCurrentLocale("de");
String Currentlocale = Translation.GetCurrentLocale();
String result1 = Translation.GetString(Currentlocale, Sourcetest);
Sourcetest = Translation.CreateSource("RESX", "RESX.ARGUMENT")
the component "RESX" and key "RESX.ARGUMENT" both exist in online bundle and offline bundle
"de" is local, this local has corresponding translation for above component in online bundle and offline bundle.
**Expected behavior**
the return result1 should be de translation from online bundle.
| 1.0 | [BUG] [C# Client]Get offline localized messages rather than online localized messages by GetString(string locale, ISource source) when online and offline modes are provided. - **Describe the bug**
Get offline localized messages rather than online localized messages by GetString(string locale, ISource source) when online and offline modes are provided.
Request a key with the locale both exist in online bundle and offline bundle, It should return the requested locale translation in online bundle.
**To Reproduce**
Steps to reproduce the behavior:
1. Set config as below, make sure server and localbundle urls are provided.
online_service_url: http://localhost:8091/singleton/CSharpClient/1.0.0/
offline_resources_base_url: CSharpClient/1.0.0
2. Request a key with the locale both exist in online bundle and offline bundle:
Translation.SetCurrentLocale("de");
String Currentlocale = Translation.GetCurrentLocale();
String result1 = Translation.GetString(Currentlocale, Sourcetest);
Sourcetest = Translation.CreateSource("RESX", "RESX.ARGUMENT")
the component "RESX" and key "RESX.ARGUMENT" both exist in online bundle and offline bundle
"de" is local, this local has corresponding translation for above component in online bundle and offline bundle.
**Expected behavior**
the return result1 should be de translation from online bundle.
| non_usab | get offline localized messages rather than online localized messages by getstring string locale isource source when online and offline modes are provided describe the bug get offline localized messages rather than online localized messages by getstring string locale isource source when online and offline modes are provided request a key with the locale both exist in online bundle and offline bundle it should return the requested locale translation in online bundle to reproduce steps to reproduce the behavior set config as below make sure server and localbundle urls are provided online service url offline resources base url csharpclient request a key with the locale both exist in online bundle and offline bundle translation setcurrentlocale de string currentlocale translation getcurrentlocale string translation getstring currentlocale sourcetest sourcetest translation createsource resx resx argument the component resx and key resx argument both exist in online bundle and offline bundle de is local this local has corresponding translation for above component in online bundle and offline bundle expected behavior the return should be de translation from online bundle | 0 |
250,846 | 7,988,443,059 | IssuesEvent | 2018-07-19 11:04:33 | fxi/map-x-mgl | https://api.github.com/repos/fxi/map-x-mgl | opened | MapX V5: list of "Priority 1" bugs | Priority 1 bug | - [ ] Upload data - drag and drop method: doesn't work
- [ ] Upload data - upload module: Shapefile & GeoJSON -> OK; other formats fail (GPX, KML...)
- [ ] Download data: "Country used to spatially select the data. By default = entire world" for now it's not the case, by default = current project (highlighted countries). | 1.0 | MapX V5: list of "Priority 1" bugs - - [ ] Upload data - drag and drop method: doesn't work
- [ ] Upload data - upload module: Shapefile & GeoJSON -> OK; other formats fail (GPX, KML...)
- [ ] Download data: "Country used to spatially select the data. By default = entire world" for now it's not the case, by default = current project (highlighted countries). | non_usab | mapx list of priority bugs upload data drag and drop method doesn t work upload data upload module shapefile geojson ok other formats fail gpx kml download data country used to spatially select the data by default entire world for now it s not the case by default current project highlighted countries | 0 |
11,844 | 7,481,493,665 | IssuesEvent | 2018-04-04 20:51:01 | godotengine/godot | https://api.github.com/repos/godotengine/godot | closed | Batch-exporting projects from the editor | feature proposal topic:editor topic:porting usability | It would be very nice to have something that lets you export for several platforms and architectures at once; this could speed up the project release process considerably, without relying on platform-specific command lines. It could be part of the Export dialog (with an additional "Batch Export" tab) and look like a list of checkboxes like this:
---
- [ ] Windows
- [ ] Windows 64-bit
- [ ] Windows 32-bit
- [ ] Linux
- [ ] Linux 64-bit
- [ ] Linux 32-bit
- …
---
Ticking the checkbox next to the operating system would enable it for all architectures at once, but the user could also choose to export for one architecture only.
There would probably be a global CheckButton for toggling debugging on all the exports. The naming scheme would probably follow the default SCons one, ideally it would be customizable as well.
| True | Batch-exporting projects from the editor - It would be very nice to have something that lets you export for several platforms and architectures at once; this could speed up the project release process considerably, without relying on platform-specific command lines. It could be part of the Export dialog (with an additional "Batch Export" tab) and look like a list of checkboxes like this:
---
- [ ] Windows
- [ ] Windows 64-bit
- [ ] Windows 32-bit
- [ ] Linux
- [ ] Linux 64-bit
- [ ] Linux 32-bit
- …
---
Ticking the checkbox next to the operating system would enable it for all architectures at once, but the user could also choose to export for one architecture only.
There would probably be a global CheckButton for toggling debugging on all the exports. The naming scheme would probably follow the default SCons one, ideally it would be customizable as well.
| usab | batch exporting projects from the editor it would be very nice to have something that lets you export for several platforms and architectures at once this could speed up the project release process considerably without relying on platform specific command lines it could be part of the export dialog with an additional batch export tab and look like a list of checkboxes like this windows windows bit windows bit linux linux bit linux bit … ticking the checkbox next to the operating system would enable it for all architectures at once but the user could also choose to export for one architecture only there would probably be a global checkbutton for toggling debugging on all the exports the naming scheme would probably follow the default scons one ideally it would be customizable as well | 1 |
195,716 | 15,553,823,497 | IssuesEvent | 2021-03-16 02:22:04 | notronaldmcdonald/fpm | https://api.github.com/repos/notronaldmcdonald/fpm | closed | [To-Do] Code cleanup + Better user experience | documentation enhancement | ### Main points
* The code has some unused elements. These should be removed.
* Some of the commands that shouldn't need an argument require a third argument.
* Solve this by moving the code that only needs two arguments to the top of the script.
* *Maybe* add some better comments throughout the scripts/code.
### Checklist
- [x] Remove any unused code.
- [x] Move two-argument commands above the syntax error message.
- [x] Possibly add comments. | 1.0 | [To-Do] Code cleanup + Better user experience - ### Main points
* The code has some unused elements. These should be removed.
* Some of the commands that shouldn't need an argument require a third argument.
* Solve this by moving the code that only needs two arguments to the top of the script.
* *Maybe* add some better comments throughout the scripts/code.
### Checklist
- [x] Remove any unused code.
- [x] Move two-argument commands above the syntax error message.
- [x] Possibly add comments. | non_usab | code cleanup better user experience main points the code has some unused elements these should be removed some of the commands that shouldn t need an argument require a third argument solve this by moving the code that only needs two arguments to the top of the script maybe add some better comments throughout the scripts code checklist remove any unused code move two argument commands above the syntax error message possibly add comments | 0 |
230,903 | 25,482,796,617 | IssuesEvent | 2022-11-26 01:33:25 | maddyCode23/linux-4.1.15 | https://api.github.com/repos/maddyCode23/linux-4.1.15 | reopened | CVE-2017-18257 (Medium) detected in linux-stable-rtv4.1.33 | security vulnerability | ## CVE-2017-18257 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/maddyCode23/linux-4.1.15/commit/f1f3d2b150be669390b32dfea28e773471bdd6e7">f1f3d2b150be669390b32dfea28e773471bdd6e7</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/fs/f2fs/data.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/fs/f2fs/data.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The __get_data_block function in fs/f2fs/data.c in the Linux kernel before 4.11 allows local users to cause a denial of service (integer overflow and loop) via crafted use of the open and fallocate system calls with an FS_IOC_FIEMAP ioctl.
<p>Publish Date: 2018-04-04
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2017-18257>CVE-2017-18257</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.2</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2017-18257">https://nvd.nist.gov/vuln/detail/CVE-2017-18257</a></p>
<p>Release Date: 2018-04-04</p>
<p>Fix Resolution: 4.11</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2017-18257 (Medium) detected in linux-stable-rtv4.1.33 - ## CVE-2017-18257 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/maddyCode23/linux-4.1.15/commit/f1f3d2b150be669390b32dfea28e773471bdd6e7">f1f3d2b150be669390b32dfea28e773471bdd6e7</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/fs/f2fs/data.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/fs/f2fs/data.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The __get_data_block function in fs/f2fs/data.c in the Linux kernel before 4.11 allows local users to cause a denial of service (integer overflow and loop) via crafted use of the open and fallocate system calls with an FS_IOC_FIEMAP ioctl.
<p>Publish Date: 2018-04-04
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2017-18257>CVE-2017-18257</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.2</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2017-18257">https://nvd.nist.gov/vuln/detail/CVE-2017-18257</a></p>
<p>Release Date: 2018-04-04</p>
<p>Fix Resolution: 4.11</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_usab | cve medium detected in linux stable cve medium severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in head commit a href found in base branch master vulnerable source files fs data c fs data c vulnerability details the get data block function in fs data c in the linux kernel before allows local users to cause a denial of service integer overflow and loop via crafted use of the open and fallocate system calls with an fs ioc fiemap ioctl publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend | 0 |