| Unnamed: 0 (int64, 0–832k) | id (float64, 2.49B–32.1B) | type (stringclasses 1) | created_at (stringlengths 19–19) | repo (stringlengths 5–112) | repo_url (stringlengths 34–141) | action (stringclasses 3) | title (stringlengths 1–1k) | labels (stringlengths 4–1.38k) | body (stringlengths 1–262k) | index (stringclasses 16) | text_combine (stringlengths 96–262k) | label (stringclasses 2) | text (stringlengths 96–252k) | binary_label (int64, 0–1) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
449,895 | 31,877,429,442 | IssuesEvent | 2023-09-16 01:43:03 | jenkins-infra/jenkins.io | https://api.github.com/repos/jenkins-infra/jenkins.io | opened | docs within /doc/book/system-administration are not updating on jenkins.io live | documentation | ### Describe your use-case which is not covered by existing documentation.
documentation changes within repo are not being updated on jenkins.io live, at least within /doc/book/system-administration.
**to observe, compare;**
https://github.com/jenkins-infra/jenkins.io/blob/master/content/doc/book/system-administration/reverse-proxy-configuration-with-jenkins/reverse-proxy-configuration-nginx.adoc
**to;**
https://www.jenkins.io/doc/book/system-administration/reverse-proxy-configuration-nginx/
**or**
https://github.com/jenkins-infra/jenkins.io/blob/master/content/doc/book/system-administration/reverse-proxy-configuration-with-jenkins/reverse-proxy-configuration-haproxy.adoc
**to;**
https://www.jenkins.io/doc/book/system-administration/reverse-proxy-configuration-haproxy/
### Reference any relevant documentation, other materials or issues/pull requests that can be used for inspiration.
_No response_ | 1.0 | docs within /doc/book/system-administration are not updating on jenkins.io live - ### Describe your use-case which is not covered by existing documentation.
documentation changes within repo are not being updated on jenkins.io live, at least within /doc/book/system-administration.
**to observe, compare;**
https://github.com/jenkins-infra/jenkins.io/blob/master/content/doc/book/system-administration/reverse-proxy-configuration-with-jenkins/reverse-proxy-configuration-nginx.adoc
**to;**
https://www.jenkins.io/doc/book/system-administration/reverse-proxy-configuration-nginx/
**or**
https://github.com/jenkins-infra/jenkins.io/blob/master/content/doc/book/system-administration/reverse-proxy-configuration-with-jenkins/reverse-proxy-configuration-haproxy.adoc
**to;**
https://www.jenkins.io/doc/book/system-administration/reverse-proxy-configuration-haproxy/
### Reference any relevant documentation, other materials or issues/pull requests that can be used for inspiration.
_No response_ | non_priority | docs within doc book system administration are not updating on jenkins io live describe your use case which is not covered by existing documentation documentation changes within repo are not being updated on jenkins io live at least within doc book system administration to observe compare to or to reference any relevant documentation other materials or issues pull requests that can be used for inspiration no response | 0 |
157,599 | 13,697,420,710 | IssuesEvent | 2020-10-01 03:00:12 | hyperledger/cactus | https://api.github.com/repos/hyperledger/cactus | closed | Document and enforce convention of exact npm dependency versions | bug documentation | Auto-upgrade (`^`) and wildcards (`*`) are bad because they violate the reproducible build best practice that states that a given state of the source code (any given git commit hash) always results in the same run time software byte by byte.
For critical security updates we need to depend on the CI environment executing npm audit and ensuring that the CI job fails if npm audit doesn't return all green. | 1.0 | Document and enforce convention of exact npm dependency versions - Auto-upgrade (`^`) and wildcards (`*`) are bad because they violate the reproducible build best practice that states that a given state of the source code (any given git commit hash) always results in the same run time software byte by byte.
For critical security updates we need to depend on the CI environment executing npm audit and ensuring that the CI job fails if npm audit doesn't return all green. | non_priority | document and enforce convention of exact npm dependency versions auto upgrade and wildcards are bad because they violate the reproducible build best practice that states that a given state of the source code any given git commit hash always results in the same run time software byte by byte for critical security updates we need to depend on the ci environment executing npm audit and ensuring that the ci job fails if npm audit doesn t return all green | 0 |
162,501 | 6,154,631,104 | IssuesEvent | 2017-06-28 13:09:02 | vladyslav2/gfwhitelabels | https://api.github.com/repos/vladyslav2/gfwhitelabels | closed | sharing campaign link | Priority | user copies link and shares it to slack/twitter/facebook/SMS
ISSUE: does NOT pull image and content
Fix: Please fix campaign pages so that Link pulls thumbnail image and "about us"
Here are examples



| 1.0 | sharing campaign link - user copies link and shares it to slack/twitter/facebook/SMS
ISSUE: does NOT pull image and content
Fix: Please fix campaign pages so that Link pulls thumbnail image and "about us"
Here are examples



| priority | sharing campaign link user copies link and shares it to slack twitter facebook sms issue does not pull image and content fix please fix campaign pages so that link pulls thumbnail image and about us here are examples | 1 |
245,820 | 7,891,483,103 | IssuesEvent | 2018-06-28 12:20:24 | wso2/siddhi | https://api.github.com/repos/wso2/siddhi | opened | Add chunk window | Priority/Highest Type/New Feature | **Description:**
Chunk window is different from [length window](https://wso2.github.io/siddhi/api/4.1.46/#length-window). The chunk window splits incoming events into the given maximum chunk size and if the number of events left for the last chunk it doesn't wait to fill it, rather they are returned.
Sample:
```
from TempStream#window.chunck(10)
select avg(temp) as avgTemp, roomNo, deviceID
group by roomNo, deviceID
insert into AvgTempStream;
```
If `TempStream` contains `23` events, then above chunk window will produce `10`, `10`, and `3` event chunks.
**Affected Product Version:**
Siddhi v4.x.x
| 1.0 | Add chunk window - **Description:**
Chunk window is different from [length window](https://wso2.github.io/siddhi/api/4.1.46/#length-window). The chunk window splits incoming events into the given maximum chunk size and if the number of events left for the last chunk it doesn't wait to fill it, rather they are returned.
Sample:
```
from TempStream#window.chunck(10)
select avg(temp) as avgTemp, roomNo, deviceID
group by roomNo, deviceID
insert into AvgTempStream;
```
If `TempStream` contains `23` events, then above chunk window will produce `10`, `10`, and `3` event chunks.
**Affected Product Version:**
Siddhi v4.x.x
| priority | add chunk window description chunk window is different from the chunk window splits incoming events into the given maximum chunk size and if the number of events left for the last chunk it doesn t wait to fill it rather they are returned sample from tempstream window chunck select avg temp as avgtemp roomno deviceid group by roomno deviceid insert into avgtempstream if tempstream contains events then above chunk window will produce and event chunks affected product version siddhi x x | 1 |
135,535 | 12,686,020,059 | IssuesEvent | 2020-06-20 08:19:52 | bazelbuild/rules_k8s | https://api.github.com/repos/bazelbuild/rules_k8s | closed | Better docs for image substitutions | documentation | Lets assume I have a deployment:
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: web-frontend
spec:
replicas: 1
template:
spec:
- name: web-frontend
image: XXXX
```
and a build rule:
```
k8s_object(
name = "x-deployment",
kind = "deployment",
template = ":deployment.yaml",
images = {
"XXXX": "//src/docker/web_frontend:web-frontend",
}
)
```
I get:
```
INFO: Running command line: bazel-bin/src/kubernetes/web-frontend/x-deployment
Traceback (most recent call last):
File "/usr/local/google/home/ensonic/.cache/bazel/_bazel_ensonic/execroot/__main__/bazel-out/local-fastbuild/bin/src/kubernetes/web-frontend/x-deployment.runfiles/__main__/../io_bazel_rules_k8s/k8s/resolver.py", line 180, in <module>
main()
File "/usr/local/google/home/ensonic/.cache/bazel/_bazel_ensonic/execroot/__main__/bazel-out/local-fastbuild/bin/src/kubernetes/web-frontend/x-deployment.runfiles/__main__/../io_bazel_rules_k8s/k8s/resolver.py", line 166, in main
(tag, digest) = Publish(transport, args.image_chroot, **kwargs)
File "/usr/local/google/home/ensonic/.cache/bazel/_bazel_ensonic/execroot/__main__/bazel-out/local-fastbuild/bin/src/kubernetes/web-frontend/x-deployment.runfiles/__main__/../io_bazel_rules_k8s/k8s/resolver.py", line 136, in Publish
name_to_replace = docker_name.Tag(name)
File "/usr/local/google/home/ensonic/.cache/bazel/_bazel_ensonic/execroot/__main__/bazel-out/local-fastbuild/bin/src/kubernetes/web-frontend/x-deployment.runfiles/containerregistry/client/docker_name_.py", line 196, in __init__
_check_tag(self._tag)
File "/usr/local/google/home/ensonic/.cache/bazel/_bazel_ensonic/execroot/__main__/bazel-out/local-fastbuild/bin/src/kubernetes/web-frontend/x-deployment.runfiles/containerregistry/client/docker_name_.py", line 80, in _check_tag
_check_element('tag', tag, _TAG_CHARS, 1, 127)
File "/usr/local/google/home/ensonic/.cache/bazel/_bazel_ensonic/execroot/__main__/bazel-out/local-fastbuild/bin/src/kubernetes/web-frontend/x-deployment.runfiles/containerregistry/client/docker_name_.py", line 64, in _check_element
% (name, element, min_len))
containerregistry.client.docker_name_.BadNameException: Invalid tag: , must be at least 1 characters
ERROR: Non-zero return code '1' from command: Process exited with status 1
```
The docs need to explain what format the key in the images param in the BUILD file needs to be and how to specify this in the deployment.yaml.
I also tried `$(GCP_REGISTRY)/$(GCP_PROJECT)/web-frontend` where GCP_REGISTRY and GCP_PROJECT are defines in my bazelrc, but I don't want to hardcode this in the template (deployment.yaml).
It looks like it tries to parse the key. E.g. when I change "XXXX" to "XXXX:latest" I get
"""
containerregistry.client.docker_name_.BadNameException: A Docker registry domain must be specified.
"""
If that is the case, I don't understand how I can avoid hard-coding the registry domains into the templates. I think it would be nicer to specify this in k8s_defaults() as a param.
If I put `gcr.io/<project-id>/web-fontend:latest` into the deployment.yaml and `"gcr.io/<project-id>/web-fontend:latest": "//src/docker/web_frontend:web-frontend"` into the BUILD file, the rule runs, but it apparently interacts with the cloud (I need to have a valid regsitry login), I though it would only resolve (aka substitue) the template. Also the produced output template is not expanded, it only prints the resolved template to stdout. | 1.0 | Better docs for image substitutions - Lets assume I have a deployment:
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: web-frontend
spec:
replicas: 1
template:
spec:
- name: web-frontend
image: XXXX
```
and a build rule:
```
k8s_object(
name = "x-deployment",
kind = "deployment",
template = ":deployment.yaml",
images = {
"XXXX": "//src/docker/web_frontend:web-frontend",
}
)
```
I get:
```
INFO: Running command line: bazel-bin/src/kubernetes/web-frontend/x-deployment
Traceback (most recent call last):
File "/usr/local/google/home/ensonic/.cache/bazel/_bazel_ensonic/execroot/__main__/bazel-out/local-fastbuild/bin/src/kubernetes/web-frontend/x-deployment.runfiles/__main__/../io_bazel_rules_k8s/k8s/resolver.py", line 180, in <module>
main()
File "/usr/local/google/home/ensonic/.cache/bazel/_bazel_ensonic/execroot/__main__/bazel-out/local-fastbuild/bin/src/kubernetes/web-frontend/x-deployment.runfiles/__main__/../io_bazel_rules_k8s/k8s/resolver.py", line 166, in main
(tag, digest) = Publish(transport, args.image_chroot, **kwargs)
File "/usr/local/google/home/ensonic/.cache/bazel/_bazel_ensonic/execroot/__main__/bazel-out/local-fastbuild/bin/src/kubernetes/web-frontend/x-deployment.runfiles/__main__/../io_bazel_rules_k8s/k8s/resolver.py", line 136, in Publish
name_to_replace = docker_name.Tag(name)
File "/usr/local/google/home/ensonic/.cache/bazel/_bazel_ensonic/execroot/__main__/bazel-out/local-fastbuild/bin/src/kubernetes/web-frontend/x-deployment.runfiles/containerregistry/client/docker_name_.py", line 196, in __init__
_check_tag(self._tag)
File "/usr/local/google/home/ensonic/.cache/bazel/_bazel_ensonic/execroot/__main__/bazel-out/local-fastbuild/bin/src/kubernetes/web-frontend/x-deployment.runfiles/containerregistry/client/docker_name_.py", line 80, in _check_tag
_check_element('tag', tag, _TAG_CHARS, 1, 127)
File "/usr/local/google/home/ensonic/.cache/bazel/_bazel_ensonic/execroot/__main__/bazel-out/local-fastbuild/bin/src/kubernetes/web-frontend/x-deployment.runfiles/containerregistry/client/docker_name_.py", line 64, in _check_element
% (name, element, min_len))
containerregistry.client.docker_name_.BadNameException: Invalid tag: , must be at least 1 characters
ERROR: Non-zero return code '1' from command: Process exited with status 1
```
The docs need to explain what format the key in the images param in the BUILD file needs to be and how to specify this in the deployment.yaml.
I also tried `$(GCP_REGISTRY)/$(GCP_PROJECT)/web-frontend` where GCP_REGISTRY and GCP_PROJECT are defines in my bazelrc, but I don't want to hardcode this in the template (deployment.yaml).
It looks like it tries to parse the key. E.g. when I change "XXXX" to "XXXX:latest" I get
"""
containerregistry.client.docker_name_.BadNameException: A Docker registry domain must be specified.
"""
If that is the case, I don't understand how I can avoid hard-coding the registry domains into the templates. I think it would be nicer to specify this in k8s_defaults() as a param.
If I put `gcr.io/<project-id>/web-fontend:latest` into the deployment.yaml and `"gcr.io/<project-id>/web-fontend:latest": "//src/docker/web_frontend:web-frontend"` into the BUILD file, the rule runs, but it apparently interacts with the cloud (I need to have a valid regsitry login), I though it would only resolve (aka substitue) the template. Also the produced output template is not expanded, it only prints the resolved template to stdout. | non_priority | better docs for image substitutions lets assume i have a deployment yaml apiversion extensions kind deployment metadata name web frontend spec replicas template spec name web frontend image xxxx and a build rule object name x deployment kind deployment template deployment yaml images xxxx src docker web frontend web frontend i get info running command line bazel bin src kubernetes web frontend x deployment traceback most recent call last file usr local google home ensonic cache bazel bazel ensonic execroot main bazel out local fastbuild bin src kubernetes web frontend x deployment runfiles main io bazel rules resolver py line in main file usr local google home ensonic cache bazel bazel ensonic execroot main bazel out local fastbuild bin src kubernetes web frontend x deployment runfiles main io bazel rules resolver py line in main tag digest publish transport args image chroot kwargs file usr local google home ensonic cache bazel bazel ensonic execroot main bazel out local fastbuild bin src kubernetes web frontend x deployment runfiles main io bazel rules resolver py line in publish name to replace docker name tag name file usr local google home ensonic cache bazel bazel ensonic execroot main bazel out local fastbuild bin src kubernetes web frontend x deployment runfiles containerregistry client docker name py line in init check tag self tag file usr local google home ensonic cache bazel bazel ensonic execroot main bazel out local fastbuild bin src kubernetes web frontend x deployment runfiles containerregistry 
client docker name py line in check tag check element tag tag tag chars file usr local google home ensonic cache bazel bazel ensonic execroot main bazel out local fastbuild bin src kubernetes web frontend x deployment runfiles containerregistry client docker name py line in check element name element min len containerregistry client docker name badnameexception invalid tag must be at least characters error non zero return code from command process exited with status the docs need to explain what format the key in the images param in the build file needs to be and how to specify this in the deployment yaml i also tried gcp registry gcp project web frontend where gcp registry and gcp project are defines in my bazelrc but i don t want to hardcode this in the template deployment yaml it looks like it tries to parse the key e g when i change xxxx to xxxx latest i get containerregistry client docker name badnameexception a docker registry domain must be specified if that is the case i don t understand how i can avoid hard coding the registry domains into the templates i think it would be nicer to specify this in defaults as a param if i put gcr io web fontend latest into the deployment yaml and gcr io web fontend latest src docker web frontend web frontend into the build file the rule runs but it apparently interacts with the cloud i need to have a valid regsitry login i though it would only resolve aka substitue the template also the produced output template is not expanded it only prints the resolved template to stdout | 0 |
294,976 | 25,444,494,001 | IssuesEvent | 2022-11-24 03:54:55 | meshery/meshery | https://api.github.com/repos/meshery/meshery | closed | Adapter e2e tests failing after new meshery release | kind/bug issue/stale area/tests | #### Current Behavior
End to end tests in multple adaapters fail with inconsistent error messages.
<img width="1428" alt="Screenshot 2022-09-23 at 06 29 25" src="https://user-images.githubusercontent.com/55385490/191887638-d97e4d26-bf50-4265-86fd-2913d1a74ce2.png">
#### Expected Behavior
End to end tests should pass.
#### Screenshots/Logs
<!-- Add screenshots, if applicable, to help explain your problem. -->
#### Environment
- **Host OS:** Mac
- **Platform:** Docker or Kubernetes
- **Meshery Server Version:** edge-v0.6.9
- **Meshery Client Version:** edge-v0.6.9
<!-- Optional
#### To Reproduce
1. Go to any meshery adapter repository
2. Select any pull request and click on actions.
3. Click on Re-run adapter end to end tests.
4. See error
-->
---
#### Contributor [Guides](https://docs.meshery.io/project/contributing) and [Handbook](https://layer5.io/community/handbook)
- 🛠 [Meshery Build & Release Strategy](https://docs.meshery.io/project/build-and-release)
- 📚 [Instructions for contributing to documentation](https://github.com/meshery/meshery/blob/master/CONTRIBUTING.md#documentation-contribution-flow)
- Meshery documentation [site](https://docs.meshery.io/) and [source](https://github.com/meshery/meshery/tree/master/docs)
- 🎨 Wireframes and designs for Meshery UI in [Figma](https://www.figma.com/file/SMP3zxOjZztdOLtgN4dS2W/Meshery-UI)
- 🙋🏾🙋🏼 Questions: [Discussion Forum](https://discuss.layer5.io) and [Community Slack](http://slack.layer5.io)
| 1.0 | Adapter e2e tests failing after new meshery release - #### Current Behavior
End to end tests in multple adaapters fail with inconsistent error messages.
<img width="1428" alt="Screenshot 2022-09-23 at 06 29 25" src="https://user-images.githubusercontent.com/55385490/191887638-d97e4d26-bf50-4265-86fd-2913d1a74ce2.png">
#### Expected Behavior
End to end tests should pass.
#### Screenshots/Logs
<!-- Add screenshots, if applicable, to help explain your problem. -->
#### Environment
- **Host OS:** Mac
- **Platform:** Docker or Kubernetes
- **Meshery Server Version:** edge-v0.6.9
- **Meshery Client Version:** edge-v0.6.9
<!-- Optional
#### To Reproduce
1. Go to any meshery adapter repository
2. Select any pull request and click on actions.
3. Click on Re-run adapter end to end tests.
4. See error
-->
---
#### Contributor [Guides](https://docs.meshery.io/project/contributing) and [Handbook](https://layer5.io/community/handbook)
- 🛠 [Meshery Build & Release Strategy](https://docs.meshery.io/project/build-and-release)
- 📚 [Instructions for contributing to documentation](https://github.com/meshery/meshery/blob/master/CONTRIBUTING.md#documentation-contribution-flow)
- Meshery documentation [site](https://docs.meshery.io/) and [source](https://github.com/meshery/meshery/tree/master/docs)
- 🎨 Wireframes and designs for Meshery UI in [Figma](https://www.figma.com/file/SMP3zxOjZztdOLtgN4dS2W/Meshery-UI)
- 🙋🏾🙋🏼 Questions: [Discussion Forum](https://discuss.layer5.io) and [Community Slack](http://slack.layer5.io)
| non_priority | adapter tests failing after new meshery release current behavior end to end tests in multple adaapters fail with inconsistent error messages img width alt screenshot at src expected behavior end to end tests should pass screenshots logs environment host os mac platform docker or kubernetes meshery server version edge meshery client version edge optional to reproduce go to any meshery adapter repository select any pull request and click on actions click on re run adapter end to end tests see error contributor and 🛠 📚 meshery documentation and 🎨 wireframes and designs for meshery ui in 🙋🏾🙋🏼 questions and | 0 |
78,830 | 10,090,798,983 | IssuesEvent | 2019-07-26 12:42:48 | portainer/portainer | https://api.github.com/repos/portainer/portainer | closed | Google Analytics Note on Installation-Page | area/documentation status/stale | Hi,
I have just found out, that you use Google Analytics. Yes I know that it can be disabled with --no-analytics, but you don't inform the user when creating a new portainer instance. It's just actiated. This information is also not present on https://www.portainer.io/installation/ or https://portainer.readthedocs.io/en/stable/deployment.html.
I think that's quite unfair and I'm not sure if that's legal in Europe because of the GDPR.
Please put a note on this sites. | 1.0 | Google Analytics Note on Installation-Page - Hi,
I have just found out, that you use Google Analytics. Yes I know that it can be disabled with --no-analytics, but you don't inform the user when creating a new portainer instance. It's just actiated. This information is also not present on https://www.portainer.io/installation/ or https://portainer.readthedocs.io/en/stable/deployment.html.
I think that's quite unfair and I'm not sure if that's legal in Europe because of the GDPR.
Please put a note on this sites. | non_priority | google analytics note on installation page hi i have just found out that you use google analytics yes i know that it can be disabled with no analytics but you don t inform the user when creating a new portainer instance it s just actiated this information is also not present on or i think that s quite unfair and i m not sure if that s legal in europe because of the gdpr please put a note on this sites | 0 |
443,771 | 12,799,383,069 | IssuesEvent | 2020-07-02 15:18:44 | ETS-LOG680-E20/log680-02-equipe-04 | https://api.github.com/repos/ETS-LOG680-E20/log680-02-equipe-04 | opened | Développer des tests unitaires | High Priority | Développer _au moins 5 tests unitaires_ afin d’évaluer le fonctionnement des fonctionnalités de l’application.
**Suggéré** : un test unitaire pour chacune des fonctions calculant une métrique. | 1.0 | Développer des tests unitaires - Développer _au moins 5 tests unitaires_ afin d’évaluer le fonctionnement des fonctionnalités de l’application.
**Suggéré** : un test unitaire pour chacune des fonctions calculant une métrique. | priority | développer des tests unitaires développer au moins tests unitaires afin d’évaluer le fonctionnement des fonctionnalités de l’application suggéré un test unitaire pour chacune des fonctions calculant une métrique | 1 |
107,144 | 23,354,563,421 | IssuesEvent | 2022-08-10 05:54:36 | WordPress/gutenberg | https://api.github.com/repos/WordPress/gutenberg | closed | Code block: default borders overlaps with custom border for code block | [Block] Code CSS Styling | ### Description
When custom border is inserted for code blocks it overlaps with the existing default white border of the code block. This problem exists on the frontend and is not visible inside the editor.
### Step-by-step reproduction instructions
1 Go to FSE,
2 Insert a code block
3 Add border with rounded radius (50 px)
4 change the border colour and size (optional for better visibility)
5 Publish and view the page.
### Screenshots, screen recording, code snippet

### Environment info
WP 5.9 GB: 12.5.4 TT2
Browser Brave on Windows 10
### Please confirm that you have searched existing issues in the repo.
Yes
### Please confirm that you have tested with all plugins deactivated except Gutenberg.
Yes | 1.0 | Code block: default borders overlaps with custom border for code block - ### Description
When custom border is inserted for code blocks it overlaps with the existing default white border of the code block. This problem exists on the frontend and is not visible inside the editor.
### Step-by-step reproduction instructions
1 Go to FSE,
2 Insert a code block
3 Add border with rounded radius (50 px)
4 change the border colour and size (optional for better visibility)
5 Publish and view the page.
### Screenshots, screen recording, code snippet

### Environment info
WP 5.9 GB: 12.5.4 TT2
Browser Brave on Windows 10
### Please confirm that you have searched existing issues in the repo.
Yes
### Please confirm that you have tested with all plugins deactivated except Gutenberg.
Yes | non_priority | code block default borders overlaps with custom border for code block description when custom border is inserted for code blocks it overlaps with the existing default white border of the code block this problem exists on the frontend and is not visible inside the editor step by step reproduction instructions go to fse insert a code block add border with rounded radius px change the border colour and size optional for better visibility publish and view the page screenshots screen recording code snippet environment info wp gb browser brave on windows please confirm that you have searched existing issues in the repo yes please confirm that you have tested with all plugins deactivated except gutenberg yes | 0 |
568,069 | 16,946,157,228 | IssuesEvent | 2021-06-28 07:06:53 | kubesphere/kubesphere | https://api.github.com/repos/kubesphere/kubesphere | closed | Service Monitor is not deleted after the service is deleted | kind/bug kind/need-to-verify priority/medium | **Describe the Bug**
1.After creating the service, go to the service details page
2.Create the Service Monitor for the service
3.then delete the service and associated workload
4.Go to the CRD page of the Service Monitor
5.The Service Monitor for the above service has not been deleted


**Versions Used**
KubeSphere: ks v3.1.1
/kind bug
/assign @benjaminhuo
/priority medium
/milestone v3.1
| 1.0 | Service Monitor is not deleted after the service is deleted - **Describe the Bug**
1.After creating the service, go to the service details page
2.Create the Service Monitor for the service
3.then delete the service and associated workload
4.Go to the CRD page of the Service Monitor
5.The Service Monitor for the above service has not been deleted


**Versions Used**
KubeSphere: ks v3.1.1
/kind bug
/assign @benjaminhuo
/priority medium
/milestone v3.1
| priority | service monitor is not deleted after the service is deleted describe the bug after creating the service go to the service details page create the service monitor for the service then delete the service and associated workload go to the crd page of the service monitor the service monitor for the above service has not been deleted versions used kubesphere ks kind bug assign benjaminhuo priority medium milestone | 1 |
22,632 | 19,804,584,075 | IssuesEvent | 2022-01-19 04:16:53 | Curtis-VL/OVRToolkit-Issues | https://api.github.com/repos/Curtis-VL/OVRToolkit-Issues | opened | When changing content of overlays, try to ensure the height stays the same | usability | When changing content of overlays, try to ensure the height stays the same. | True | When changing content of overlays, try to ensure the height stays the same - When changing content of overlays, try to ensure the height stays the same. | non_priority | when changing content of overlays try to ensure the height stays the same when changing content of overlays try to ensure the height stays the same | 0 |
157,343 | 24,655,882,721 | IssuesEvent | 2022-10-17 23:26:34 | taskany-inc/issues | https://api.github.com/repos/taskany-inc/issues | closed | Project settings page | enhancement design | - features toggle: schedule, reminders
- flow changing
- title and description changing
- delete project | 1.0 | Project settings page - - features toggle: schedule, reminders
- flow changing
- title and description changing
- delete project | non_priority | project settings page features toggle schedule reminders flow changing title and description changing delete project | 0 |
285,143 | 8,755,145,300 | IssuesEvent | 2018-12-14 14:01:50 | estevez-dev/ha_client | https://api.github.com/repos/estevez-dev/ha_client | closed | device_class is not working for icons | bug priority: high | Hi
I have added to my HA by 1wire sensors of temp. I add it in lovelace in glance card. In HA web page show correct icon to temperature as termometer... but in app show icon EYE but should show termometer. | 1.0 | device_class is not working for icons - Hi
I have added to my HA by 1wire sensors of temp. I add it in lovelace in glance card. In HA web page show correct icon to temperature as termometer... but in app show icon EYE but should show termometer. | priority | device class is not working for icons hi i have added to my ha by sensors of temp i add it in lovelace in glance card in ha web page show correct icon to temperature as termometer but in app show icon eye but should show termometer | 1 |
142,941 | 5,486,683,293 | IssuesEvent | 2017-03-14 00:53:32 | kubernetes/kubernetes | https://api.github.com/repos/kubernetes/kubernetes | closed | [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite} | kind/flake priority/backlog priority/P3 sig/network | https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-serial/2/
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:475
Nov 15 18:25:26.416: Unexpected error: <nil>
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:4279
```
| 2.0 | [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite} - https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-serial/2/
Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:475
Nov 15 18:25:26.416: Unexpected error: <nil>
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:4279
```
| priority | network partition should create new pods when node is partitioned kubernetes suite failed network partition should create new pods when node is partitioned kubernetes suite go src io kubernetes output dockerized go src io kubernetes test network partition go nov unexpected error go src io kubernetes output dockerized go src io kubernetes test framework util go | 1 |
51,933 | 21,918,493,560 | IssuesEvent | 2022-05-22 07:39:43 | JeongSeonggil/SubMarketWithGit | https://api.github.com/repos/JeongSeonggil/SubMarketWithGit | closed | Apply Mapstruct | user-service item-service seller-service refactor | ## 📌 Feature Description
Use Mapstruct to convert between Vo, Dto, and Entity
## 📑 Completion Criteria
- [x] Mapstruct
- [x] Create Mapper | 3.0 | Apply Mapstruct - ## 📌 Feature Description
Use Mapstruct to convert between Vo, Dto, and Entity
## 📑 Completion Criteria
- [x] Mapstruct
- [x] Create Mapper | non_priority | apply mapstruct 📌 feature description use mapstruct to convert between vo dto and entity 📑 completion criteria mapstruct create mapper | 0 |
81,300 | 3,588,450,133 | IssuesEvent | 2016-01-31 01:13:02 | ankidroid/Anki-Android | https://api.github.com/repos/ankidroid/Anki-Android | closed | Cards due in 1 or 10 min do not show up until hours later | bug Priority-Medium waitingforfeedback | Originally reported on Google Code with ID 2081
```
What steps will reproduce the problem?
1. For new cards or reviews, pressing again (<1 min) or good (10 min) will not bring
up the card in 1 or 10 minutes. Instead waiting for other, longer period cards (1
hour and 5 hours) to show up.
2.
3.
What is the expected output? What do you see instead?
My Learn ahead limit is set at 20 mins, Next day starts at 4 hrs past midnight, and
I have not reached the maximum reviews/day. My expected output is for the card to
show up immediately after I press the again button. Yet, the card does not show up
until longer review (1 hr and 5hrs) cards show up or until the next day. When I press
"Study Now", it says no cards are due.
Does it happen again every time you repeat the steps above? Or did it
happen only one time?
It doesn't happen every time, but it has happened more than once. Sometime, when I'm
in the Learning mode with 3 or 4 cards, it works fine. And sometime when all the cards
are due back in 10 mins, none of them show up until an hour later or so.
What version of AnkiDroid are you using? (Decks list > menu > About > Look
at the title)
AnkiDroid v2.1.3
On what version of Android? (Home screen > menu > About phone > Android
version) Droid Razr HD
If it is a crash or "Force close" and you can reproduce it, the following
would help immensely: 1) Install the "SendLog" app, 2) Reproduce the crash,
3) Immediately after, launch SendLog, 4) Attach the resulting file to this
report. That will make the bug much easier to fix.
Please provide any additional information below.
```
Reported by `svmordovin` on 2014-04-14 21:01:27
| 1.0 | Cards due in 1 or 10 min do not show up until hours later - Originally reported on Google Code with ID 2081
```
What steps will reproduce the problem?
1. For new cards or reviews, pressing again (<1 min) or good (10 min) will not bring
up the card in 1 or 10 minutes. Instead waiting for other, longer period cards (1
hour and 5 hours) to show up.
2.
3.
What is the expected output? What do you see instead?
My Learn ahead limit is set at 20 mins, Next day starts at 4 hrs past midnight, and
I have not reached the maximum reviews/day. My expected output is for the card to
show up immediately after I press the again button. Yet, the card does not show up
until longer review (1 hr and 5hrs) cards show up or until the next day. When I press
"Study Now", it says no cards are due.
Does it happen again every time you repeat the steps above? Or did it
happen only one time?
It doesn't happen every time, but it has happened more than once. Sometime, when I'm
in the Learning mode with 3 or 4 cards, it works fine. And sometime when all the cards
are due back in 10 mins, none of them show up until an hour later or so.
What version of AnkiDroid are you using? (Decks list > menu > About > Look
at the title)
AnkiDroid v2.1.3
On what version of Android? (Home screen > menu > About phone > Android
version) Droid Razr HD
If it is a crash or "Force close" and you can reproduce it, the following
would help immensely: 1) Install the "SendLog" app, 2) Reproduce the crash,
3) Immediately after, launch SendLog, 4) Attach the resulting file to this
report. That will make the bug much easier to fix.
Please provide any additional information below.
```
Reported by `svmordovin` on 2014-04-14 21:01:27
| priority | cards due in or min do not show up until hours later originally reported on google code with id what steps will reproduce the problem for new cards or reviews pressing again min or good min will not bring up the card in or minutes instead waiting for other longer period cards hour and hours to show up what is the expected output what do you see instead my learn ahead limit is set at mins next day starts at hrs past midnight and i have not reached the maximum reviews day my expected output is for the card to show up immediately after i press the again button yet the card does not show up until longer review hr and cards show up or until the next day when i press study now it says no cards are due does it happen again every time you repeat the steps above or did it happen only one time it doesn t happen every time but it has happened more than once sometime when i m in the learning mode with or cards it works fine and sometime when all the cards are due back in mins none of them show up until an hour later or so what version of ankidroid are you using decks list menu about look at the title ankidroid on what version of android home screen menu about phone android version droid razr hd if it is a crash or force close and you can reproduce it the following would help immensely install the sendlog app reproduce the crash immediately after launch sendlog attach the resulting file to this report that will make the bug much easier to fix please provide any additional information below reported by svmordovin on | 1 |
97,845 | 4,007,193,237 | IssuesEvent | 2016-05-12 17:17:05 | project8/katydid | https://api.github.com/repos/project8/katydid | closed | multi-peak tracks should be groupable | Feature High Priority | - [x] The KTProcessedTrackData should add a data field fEventSequenceID, which sequentially counts tracks in an event. The default value will be -1 and event builders can set it to something >=0.
- [x] The ROOTTreeWriter should be updated to deal with this field.
- [x] The MPTEventBuilder should assign the same value to each track within a multi-peak track, ordering the multi-peak objects in the event.
- [x] The above should be validated to make sure sequence ids behave as expected.
- [ ] Other active event builders should also assign values. If not complete when the rest of this issue is resolved, that should be placed into a new issue of lower priority and this one closed (it isn't clear that we will actually use those event builders so upgrading them is lower priority). | 1.0 | multi-peak tracks should be groupable - - [x] The KTProcessedTrackData should add a data field fEventSequenceID, which sequentially counts tracks in an event. The default value will be -1 and event builders can set it to something >=0.
- [x] The ROOTTreeWriter should be updated to deal with this field.
- [x] The MPTEventBuilder should assign the same value to each track within a multi-peak track, ordering the multi-peak objects in the event.
- [x] The above should be validated to make sure sequence ids behave as expected.
- [ ] Other active event builders should also assign values. If not complete when the rest of this issue is resolved, that should be placed into a new issue of lower priority and this one closed (it isn't clear that we will actually use those event builders so upgrading them is lower priority). | priority | multi peak tracks should be groupable the ktprocessedtrackdata should add a data field feventsequenceid which sequentially counts tracks in an event the default value will be and event builders can set it to something the roottreewriter should be updated to deal with this field the mpteventbuilder should assign the same value to each track within a multi peak track ordering the multi peak objects in the event the above should be validated so to make sure sequence ids behave as expected other active event builders should also assign values if not complete when the rest of this issue is resolved that should be placed into a new issue of lower priority and this one closed it isn t clear that we will actually use those event builders so upgrading them is lower priority | 1 |
198,435 | 15,709,440,104 | IssuesEvent | 2021-03-26 22:32:31 | garoque/crud-eng-software | https://api.github.com/repos/garoque/crud-eng-software | closed | Iteration 4 Report | documentation | ## Iteration 4 Report
Report of the activities carried out during iteration 4, and what worked or went wrong during the iteration.
### Iteration period
|Start:| 06/03/2021|
|-------|-----------|
| End: | 26/03/2021|
### Planned activities
- Test creation.
- Development.
- Data model.
### Completed activities
- [ ] ~~Test creation.~~
- [x] Development.
- [x] Data model.
### Activities not completed
- [x] Test creation.
- [ ] ~~Development.~~
- [ ] ~~Data model.~~
_Lessons learned_
- We came to understand that a remote environment sometimes makes it harder to share ideas and solve problems throughout development, since online meetings, at which it is hard to even gather every member, end up delaying things that would be simple in a shared in-person environment.
- We also miss having an experienced programmer alongside the day-to-day development to answer questions. | 1.0 | Iteration 4 Report - ## Iteration 4 Report
Report of the activities carried out during iteration 4, and what worked or went wrong during the iteration.
### Iteration period
|Start:| 06/03/2021|
|-------|-----------|
| End: | 26/03/2021|
### Planned activities
- Test creation.
- Development.
- Data model.
### Completed activities
- [ ] ~~Test creation.~~
- [x] Development.
- [x] Data model.
### Activities not completed
- [x] Test creation.
- [ ] ~~Development.~~
- [ ] ~~Data model.~~
_Lessons learned_
- We came to understand that a remote environment sometimes makes it harder to share ideas and solve problems throughout development, since online meetings, at which it is hard to even gather every member, end up delaying things that would be simple in a shared in-person environment.
- We also miss having an experienced programmer alongside the day-to-day development to answer questions. | non_priority | iteration report iteration report report of the activities carried out during iteration and what worked or went wrong during the iteration iteration period start end planned activities test creation development data model completed activities test creation development data model activities not completed test creation development data model lessons learned we came to understand that a remote environment sometimes makes it harder to share ideas and solve problems throughout development since online meetings at which it is hard to even gather every member end up delaying things that would be simple in a shared in person environment we also miss having an experienced programmer alongside the day to day development to answer questions | 0 |
142,608 | 5,476,865,104 | IssuesEvent | 2017-03-12 01:02:34 | NCEAS/eml | https://api.github.com/repos/NCEAS/eml | closed | ResourceVariation type not needed | Category: eml - general bugs Component: Bugzilla-Id Priority: Immediate Status: Resolved Tracker: Bug | ---
Author Name: **Matt Jones** (Matt Jones)
Original Redmine Issue: 43, https://projects.ecoinformatics.org/ecoinfo/issues/43
Original Date: 2000-07-26
Original Assignee: Chad Berkley
---
In the resource.xsd XML Schema document, the ResourceVariation type is not
needed. Instead, it would be better to just have a set of top-level elements
defined (like dataset and literature) that can be used as the docroot for
particular resource documents. This would eliminate the need for the whole file
"resourceExample.xsd".
| 1.0 | ResourceVariation type not needed - ---
Author Name: **Matt Jones** (Matt Jones)
Original Redmine Issue: 43, https://projects.ecoinformatics.org/ecoinfo/issues/43
Original Date: 2000-07-26
Original Assignee: Chad Berkley
---
In the resource.xsd XML Schema document, the ResourceVariation type is not
needed. Instead, it would be better to just have a set of top-level elements
defined (like dataset and literature) that can be used as the docroot for
particular resource documents. This would eliminate the need for the whole file
"resourceExample.xsd".
| priority | resourcevariation type not needed author name matt jones matt jones original redmine issue original date original assignee chad berkley inthe resource xsd xml schema document the resourcevariation type is not needed instead it would be better to just have a set of top level elements defined like dataset and literature that can be used as the docroot for particular resource documents this would eliminate the need for the whole file resourceexample xsd | 1 |
437,923 | 12,604,670,509 | IssuesEvent | 2020-06-11 15:18:37 | projectacrn/acrn-hypervisor | https://api.github.com/repos/projectacrn/acrn-hypervisor | closed | Hv: support xsave in context switch | priority: P2-High type: feature | content: xsave area:
legacy region: 512 bytes
xsave header: 64 bytes
extended region: < 3k bytes
So, pre-allocate 4k area for xsave. Use certain instruction to save or
restore the area according to hardware xsave feature set. | 1.0 | Hv: support xsave in context switch - content: xsave area:
legacy region: 512 bytes
xsave header: 64 bytes
extended region: < 3k bytes
So, pre-allocate 4k area for xsave. Use certain instruction to save or
restore the area according to hardware xsave feature set. | priority | hv support xsave in context switch content xsave area legacy region bytes xsave header bytes extended region bytes so pre allocate area for xsave use certain instruction to save or restore the area according to hardware xsave feature set | 1 |
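A quick arithmetic check of the layout quoted in the issue above (512-byte legacy region, 64-byte XSAVE header, extended region under 3 KB) shows why a single pre-allocated 4 KiB page is enough. This is only an illustrative sketch of the size budget; the constant names are mine, not ACRN's.

```python
# Sanity-check the XSAVE area budget quoted in the issue: legacy region,
# header, and extended region must fit in one pre-allocated 4 KiB page.
LEGACY_REGION = 512      # bytes, FXSAVE-compatible legacy area
XSAVE_HEADER = 64        # bytes, header following the legacy area
EXTENDED_MAX = 3 * 1024  # bytes, "< 3k" upper bound from the issue

PAGE_SIZE = 4096         # the 4k area the hypervisor pre-allocates

def required_xsave_bytes() -> int:
    return LEGACY_REGION + XSAVE_HEADER + EXTENDED_MAX

print(required_xsave_bytes())               # 3648
print(required_xsave_bytes() <= PAGE_SIZE)  # True: one page is enough
```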
737,374 | 25,513,557,348 | IssuesEvent | 2022-11-28 14:47:51 | AndreLiberato/project_silk | https://api.github.com/repos/AndreLiberato/project_silk | opened | Implement back end with Firebase | back-end high priority feature | - [ ] Implement authentication
- [ ] Implement product persistence in Firestore
- [ ] Implement shopping list persistence in Firestore
- [ ] Implement cart persistence in Firestore
- [ ] Implement order persistence in Firestore | 1.0 | Implement back end with Firebase - - [ ] Implement authentication
- [ ] Implement product persistence in Firestore
- [ ] Implement shopping list persistence in Firestore
- [ ] Implement cart persistence in Firestore
- [ ] Implement order persistence in Firestore | priority | implement back end with firebase implement authentication implement product persistence in firestore implement shopping list persistence in firestore implement cart persistence in firestore implement order persistence in firestore | 1 |
52,049 | 13,711,114,554 | IssuesEvent | 2020-10-02 03:20:39 | Watemlifts/Python-100-Days | https://api.github.com/repos/Watemlifts/Python-100-Days | opened | CVE-2017-18214 (High) detected in moment-2.15.1.js | security vulnerability | ## CVE-2017-18214 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>moment-2.15.1.js</b></summary>
<p>Parse, validate, manipulate, and display dates</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.15.1/moment.js">https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.15.1/moment.js</a></p>
<p>Path to dependency file: Python-100-Days/Day61-65/code/project_of_tornado/assets/html/calendar.html</p>
<p>Path to vulnerable library: Python-100-Days/Day61-65/code/project_of_tornado/assets/html/../js/moment.js,Python-100-Days/Day61-65/code/project_of_tornado/assets/js/moment.js</p>
<p>
Dependency Hierarchy:
- :x: **moment-2.15.1.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Watemlifts/Python-100-Days/commit/53c8f2d511b1d277a31286a85a37f1a7903e3e37">53c8f2d511b1d277a31286a85a37f1a7903e3e37</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The moment module before 2.19.3 for Node.js is prone to a regular expression denial of service via a crafted date string, a different vulnerability than CVE-2016-4055.
<p>Publish Date: 2018-03-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-18214>CVE-2017-18214</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-18214">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-18214</a></p>
<p>Release Date: 2018-03-04</p>
<p>Fix Resolution: 2.19.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2017-18214 (High) detected in moment-2.15.1.js - ## CVE-2017-18214 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>moment-2.15.1.js</b></summary>
<p>Parse, validate, manipulate, and display dates</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.15.1/moment.js">https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.15.1/moment.js</a></p>
<p>Path to dependency file: Python-100-Days/Day61-65/code/project_of_tornado/assets/html/calendar.html</p>
<p>Path to vulnerable library: Python-100-Days/Day61-65/code/project_of_tornado/assets/html/../js/moment.js,Python-100-Days/Day61-65/code/project_of_tornado/assets/js/moment.js</p>
<p>
Dependency Hierarchy:
- :x: **moment-2.15.1.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Watemlifts/Python-100-Days/commit/53c8f2d511b1d277a31286a85a37f1a7903e3e37">53c8f2d511b1d277a31286a85a37f1a7903e3e37</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The moment module before 2.19.3 for Node.js is prone to a regular expression denial of service via a crafted date string, a different vulnerability than CVE-2016-4055.
<p>Publish Date: 2018-03-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-18214>CVE-2017-18214</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-18214">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-18214</a></p>
<p>Release Date: 2018-03-04</p>
<p>Fix Resolution: 2.19.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve high detected in moment js cve high severity vulnerability vulnerable library moment js parse validate manipulate and display dates library home page a href path to dependency file python days code project of tornado assets html calendar html path to vulnerable library python days code project of tornado assets html js moment js python days code project of tornado assets js moment js dependency hierarchy x moment js vulnerable library found in head commit a href vulnerability details the moment module before for node js is prone to a regular expression denial of service via a crafted date string a different vulnerability than cve publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
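The advisory above pins the vulnerable range purely by version: moment before 2.19.3 is affected, and the detected copy is 2.15.1. A minimal sketch of that check, assuming plain MAJOR.MINOR.PATCH strings (this is not how the scanner itself is implemented):

```python
# Flag a bundled moment.js as affected by CVE-2017-18214 by comparing its
# semantic version against the advisory's fix resolution, 2.19.3.
def parse_version(v: str) -> tuple:
    return tuple(int(part) for part in v.split("."))

FIXED_IN = parse_version("2.19.3")  # "Fix Resolution" from the report

def is_vulnerable(bundled: str) -> bool:
    return parse_version(bundled) < FIXED_IN

print(is_vulnerable("2.15.1"))  # True: the version detected above
print(is_vulnerable("2.19.3"))  # False: patched release
```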
63,953 | 26,561,933,216 | IssuesEvent | 2023-01-20 16:33:50 | hashicorp/terraform-provider-aws | https://api.github.com/repos/hashicorp/terraform-provider-aws | closed | [Bug]: terraform keep trying to create spot_fleet_request whenever target_capacity changed | bug waiting-response service/ec2 | ### Terraform Core Version
1.3.6
### AWS Provider Version
4.48.0
### Affected Resource(s)
aws_spot_fleet_request
### Expected Behavior
spot_fleet_request should be modified instead of deleting old request and creating a new one.
### Actual Behavior
I was trying to update `target_capacity` and apply afterwards; I found that Terraform tried to delete the old spot_fleet_request and create a new request every time.
### Relevant Error/Panic Output Snippet
_No response_
### Terraform Configuration Files
```terraform
resource "aws_spot_fleet_request" "azdevops_agent" {
iam_fleet_role = aws_iam_role.ec2_fleet_role[count.index].arn
spot_price = var.spot_price
allocation_strategy = "diversified"
fleet_type = "request"
instance_interruption_behaviour = var.instance_interruption_behaviour
target_capacity = var.spot_fleet_instance_capacity
valid_until = timeadd(timestamp(), var.request_valid_until_minutes)
terminate_instances_with_expiration = true
terminate_instances_on_delete = true
dynamic "launch_specification" {
for_each = var.instance_type
content {
instance_type = launch_specification.value
ami = data.aws_ami.this.image_id
iam_instance_profile_arn = aws_iam_instance_profile.agent_profile[count.index].arn
subnet_id = module.vpc_data.subnet_ids[0]
vpc_security_group_ids = [aws_security_group.ec2-spot-fleet-sg.id]
root_block_device {
encrypted = true
kms_key_id = aws_kms_key.kms_key[count.index].arn
volume_size = var.volume_size
delete_on_termination = true
}
}
}
tags = {
Name = "devops-spot-fleet-agents"
}
}
```
### Steps to Reproduce
terraform plan and apply
### Debug Output
_No response_
### Panic Output
_No response_
### Important Factoids
_No response_
### References
_No response_
### Would you like to implement a fix?
None | 1.0 | [Bug]: terraform keep trying to create spot_fleet_request whenever target_capacity changed - ### Terraform Core Version
1.3.6
### AWS Provider Version
4.48.0
### Affected Resource(s)
aws_spot_fleet_request
### Expected Behavior
spot_fleet_request should be modified instead of deleting old request and creating a new one.
### Actual Behavior
I was trying to update `target_capacity` and apply afterwards; I found that Terraform tried to delete the old spot_fleet_request and create a new request every time.
### Relevant Error/Panic Output Snippet
_No response_
### Terraform Configuration Files
```terraform
resource "aws_spot_fleet_request" "azdevops_agent" {
iam_fleet_role = aws_iam_role.ec2_fleet_role[count.index].arn
spot_price = var.spot_price
allocation_strategy = "diversified"
fleet_type = "request"
instance_interruption_behaviour = var.instance_interruption_behaviour
target_capacity = var.spot_fleet_instance_capacity
valid_until = timeadd(timestamp(), var.request_valid_until_minutes)
terminate_instances_with_expiration = true
terminate_instances_on_delete = true
dynamic "launch_specification" {
for_each = var.instance_type
content {
instance_type = launch_specification.value
ami = data.aws_ami.this.image_id
iam_instance_profile_arn = aws_iam_instance_profile.agent_profile[count.index].arn
subnet_id = module.vpc_data.subnet_ids[0]
vpc_security_group_ids = [aws_security_group.ec2-spot-fleet-sg.id]
root_block_device {
encrypted = true
kms_key_id = aws_kms_key.kms_key[count.index].arn
volume_size = var.volume_size
delete_on_termination = true
}
}
}
tags = {
Name = "devops-spot-fleet-agents"
}
}
```
### Steps to Reproduce
terraform plan and apply
### Debug Output
_No response_
### Panic Output
_No response_
### Important Factoids
_No response_
### References
_No response_
### Would you like to implement a fix?
None | non_priority | terraform keep trying to create spot fleet request whenever target capacity changed terraform core version aws provider version affected resource s aws spot fleet request expected behavior spot fleet request should be modified instead of deleting old request and creating a new one actual behavior i was trying to update target capacity and apply afterwards i found that terraform tried to delete old spot fleet request and create new the request again everytime relevant error panic output snippet no response terraform configuration files terraform resource aws spot fleet request azdevops agent iam fleet role aws iam role fleet role arn spot price var spot price allocation strategy diversified fleet type request instance interruption behaviour var instance interruption behaviour target capacity var spot fleet instance capacity valid until timeadd timestamp var request valid until minutes terminate instances with expiration true terminate instances on delete true dynamic launch specification for each var instance type content instance type launch specification value ami data aws ami this image id iam instance profile arn aws iam instance profile agent profile arn subnet id module vpc data subnet ids vpc security group ids root block device encrypted true kms key id aws kms key kms key arn volume size var volume size delete on termination true tags name devops spot fleet agents steps to reproduce terraform plan and apply debug output no response panic output no response important factoids no response references no response would you like to implement a fix none | 0 |
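One way to reason about the replace-instead-of-update behavior reported above: in Terraform providers, any changed attribute that the provider schema marks as force-new turns the plan into destroy-and-create. The sketch below is purely conceptual; the `FORCE_NEW_ATTRS` set is hypothetical and not the real `aws_spot_fleet_request` schema. Note also that `valid_until = timeadd(timestamp(), ...)` in the configuration yields a new value on every run, which by itself would keep producing a diff.

```python
# Conceptual sketch of Terraform's plan logic, not the real AWS provider:
# any changed attribute that the provider schema marks "force-new" turns
# an in-place update into destroy-and-create.
FORCE_NEW_ATTRS = {"valid_until", "launch_specification"}  # hypothetical set

def plan_action(old: dict, new: dict) -> str:
    changed = {k for k in new if old.get(k) != new.get(k)}
    if not changed:
        return "no-op"
    if changed & FORCE_NEW_ATTRS:
        return "replace"  # destroy the old spot fleet request, create a new one
    return "update"       # modify in place

old = {"target_capacity": 2, "valid_until": "2023-01-01T00:00:00Z"}
new = {"target_capacity": 3, "valid_until": "2023-01-02T00:00:00Z"}
print(plan_action(old, new))  # replace: valid_until changed alongside capacity
```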
401,419 | 11,790,088,905 | IssuesEvent | 2020-03-17 18:18:29 | mpv-player/mpv | https://api.github.com/repos/mpv-player/mpv | closed | Console default font size not High DPI aware | os:win priority:ignored-issue-template | The console default font size is not High DPI aware.
https://mpv.io/manual/master/#console-font-size | 1.0 | Console default font size not High DPI aware - The console default font size is not High DPI aware.
https://mpv.io/manual/master/#console-font-size | priority | console default font size not high dpi aware the console default font size is not high dpi aware | 1 |
527,323 | 15,340,016,302 | IssuesEvent | 2021-02-27 04:47:24 | qgis/QGIS | https://api.github.com/repos/qgis/QGIS | closed | QGIS 3.18 raises an error when loading a raster layer from a geopackage that has multiple raster layers. | Bug High Priority QGIS Browser Regression | <!--
Bug fixing and feature development is a community responsibility, and not the responsibility of the QGIS project alone.
If this bug report or feature request is high-priority for you, we suggest engaging a QGIS developer or support organisation and financially sponsoring a fix
Checklist before submitting
- [ ] Search through existing issue reports and gis.stackexchange.com to check whether the issue already exists
- [ ] Test with a [clean new user profile](https://docs.qgis.org/testing/en/docs/user_manual/introduction/qgis_configuration.html?highlight=profile#working-with-user-profiles).
- [ ] Create a light and self-contained sample dataset and project file which demonstrates the issue
-->
QGIS 3.18 raises an error when loading a raster layer from a geopackage that has multiple raster layers. If it has only one raster layer, then it is fine. Geopackage raster layer names also display '!!' ahead of their names.
<!-- A clear and concise description of what the bug is. -->
**How to Reproduce**
1. Get a GeoTiff
``` sh
cd ~/Downloads/
wget https://www.naturalearthdata.com/http//www.naturalearthdata.com/download/10m/raster/US_MSR_10M.zip
unzip ~/Downloads/US_MSR_10M.zip
```
2. Convert GeoTiff to gpkg
``` sh
gdal_translate -of GPKG ~/Downloads/US_MSR_10M/US_MSR.tif ~/Downloads/test.gpkg -co RASTER_TABLE=msr01 -co RASTER_DESCRIPTION='First raster layer'
```
3. Create connection to the gpkg file
4. At this point the layer loads in QGIS with no error message
5. Add the same image a second time with a different name. I also tried with a different raster, but result is the same.
``` sh
gdal_translate -co APPEND_SUBDATASET=YES -of GPKG ~/Downloads/US_MSR_10M/US_MSR.tif ~/Downloads/test.gpkg -co RASTER_TABLE=msr02 -co RASTER_DESCRIPTION='Second raster layer'
```
6. Raster layers names display '!!' ahead of their names. At this point both layers raise the same error message when trying to load layer in QGIS 3.18
7. Error message: Invalid Layer: GDAL provider Cannot open GDAL dataset GPKG:/home/user/Downloads/test.gpkg:msr01!!::!!msr01: Raster layer Provider is not valid (provider: gdal, URI: GPKG:/home/user/Downloads/test.gpkg:msr01!!::!!msr01
<!-- Steps, sample datasets and qgis project file to reproduce the behavior. Screencasts or screenshots welcome -->
**QGIS and OS versions**
NAME="Ubuntu"
VERSION="20.04.2 LTS (Focal Fossa)"
QGIS version | 3.18.0-Zürich | QGIS code revision | bdef9fb328
Compiled against Qt | 5.12.8 | Running against Qt | 5.12.8
Compiled against GDAL/OGR | 3.0.4 | Running against GDAL/OGR | 3.0.4
Compiled against GEOS | 3.8.0-CAPI-1.13.1 | Running against GEOS | 3.8.0-CAPI-1.13.1
Compiled against SQLite | 3.31.1 | Running against SQLite | 3.31.1
Compiled against PDAL | 2.0.1 | Running against PDAL | 2.0.1 (git-version: Release)
PostgreSQL Client Version | 12.6 (Ubuntu 12.6-0ubuntu0.20.04.1) | SpatiaLite Version | 4.3.0a
QWT Version | 6.1.4 | QScintilla2 Version | 2.11.2
Compiled against PROJ | 6.3.1 | Running against PROJ | Rel. 6.3.1, February 10th, 2020
OS Version | Ubuntu 20.04.2 LTS
Active python plugins | postgis_geoprocessing; SpeciesExplorer; DigitizingTools; pointsamplingtool; contour; postgisQueryBuilder; SRTM-Downloader; eu_mapper; openlayers_plugin; qgis_versioning; geoscience; covid19_tracker; SentinelHub; postgis_sampling_tool; EuroDataCube; Nuclear-Energy-Plant-Radiations-; valuetool; quick_map_services; StreetView; DotMap; mapswipetool_plugin; db_manager; processing; MetaSearch
<!-- In the QGIS Help menu -> About, click in the table, Ctrl+A and then Ctrl+C. Finally paste here -->
**Additional context**
In QGIS 3.16 I did not had this issue.
<!-- Add any other context about the problem here. -->
| 1.0 | QGIS 3.18 raises an error when loading a raster layer from a geopackage that has multiple raster layers. - <!--
Bug fixing and feature development is a community responsibility, and not the responsibility of the QGIS project alone.
If this bug report or feature request is high-priority for you, we suggest engaging a QGIS developer or support organisation and financially sponsoring a fix
Checklist before submitting
- [ ] Search through existing issue reports and gis.stackexchange.com to check whether the issue already exists
- [ ] Test with a [clean new user profile](https://docs.qgis.org/testing/en/docs/user_manual/introduction/qgis_configuration.html?highlight=profile#working-with-user-profiles).
- [ ] Create a light and self-contained sample dataset and project file which demonstrates the issue
-->
QGIS 3.18 raises an error when loading a raster layer from a geopackage that has multiple raster layers. If it has only one raster layer, it is fine. GeoPackage raster layer names also display '!!' ahead of their names.
<!-- A clear and concise description of what the bug is. -->
**How to Reproduce**
1. Get a GeoTIFF
``` sh
cd ~/Downloads/
wget https://www.naturalearthdata.com/http//www.naturalearthdata.com/download/10m/raster/US_MSR_10M.zip
unzip ~/Downloads/US_MSR_10M.zip
```
2. Convert GeoTiff to gpkg
``` sh
gdal_translate -of GPKG ~/Downloads/US_MSR_10M/US_MSR.tif ~/Downloads/test.gpkg -co RASTER_TABLE=msr01 -co RASTER_DESCRIPTION='First raster layer'
```
3. Create connection to the gpkg file
4. At this point the layer loads in QGIS with no error message
5. Add the same image a second time with a different name. I also tried with a different raster, but the result is the same.
``` sh
gdal_translate -co APPEND_SUBDATASET=YES -of GPKG ~/Downloads/US_MSR_10M/US_MSR.tif ~/Downloads/test.gpkg -co RASTER_TABLE=msr02 -co RASTER_DESCRIPTION='Second raster layer'
```
6. Raster layer names display '!!' ahead of their names. At this point both layers raise the same error message when trying to load the layer in QGIS 3.18
7. Error message: Invalid Layer: GDAL provider Cannot open GDAL dataset GPKG:/home/user/Downloads/test.gpkg:msr01!!::!!msr01: Raster layer Provider is not valid (provider: gdal, URI: GPKG:/home/user/Downloads/test.gpkg:msr01!!::!!msr01
<!-- Steps, sample datasets and qgis project file to reproduce the behavior. Screencasts or screenshots welcome -->
**QGIS and OS versions**
NAME="Ubuntu"
VERSION="20.04.2 LTS (Focal Fossa)"
QGIS version | 3.18.0-Zürich | QGIS code revision | bdef9fb328
Compiled against Qt | 5.12.8 | Running against Qt | 5.12.8
Compiled against GDAL/OGR | 3.0.4 | Running against GDAL/OGR | 3.0.4
Compiled against GEOS | 3.8.0-CAPI-1.13.1 | Running against GEOS | 3.8.0-CAPI-1.13.1
Compiled against SQLite | 3.31.1 | Running against SQLite | 3.31.1
Compiled against PDAL | 2.0.1 | Running against PDAL | 2.0.1 (git-version: Release)
PostgreSQL Client Version | 12.6 (Ubuntu 12.6-0ubuntu0.20.04.1) | SpatiaLite Version | 4.3.0a
QWT Version | 6.1.4 | QScintilla2 Version | 2.11.2
Compiled against PROJ | 6.3.1 | Running against PROJ | Rel. 6.3.1, February 10th, 2020
OS Version | Ubuntu 20.04.2 LTS
Active python plugins | postgis_geoprocessing; SpeciesExplorer; DigitizingTools; pointsamplingtool; contour; postgisQueryBuilder; SRTM-Downloader; eu_mapper; openlayers_plugin; qgis_versioning; geoscience; covid19_tracker; SentinelHub; postgis_sampling_tool; EuroDataCube; Nuclear-Energy-Plant-Radiations-; valuetool; quick_map_services; StreetView; DotMap; mapswipetool_plugin; db_manager; processing; MetaSearch
<!-- In the QGIS Help menu -> About, click in the table, Ctrl+A and then Ctrl+C. Finally paste here -->
**Additional context**
In QGIS 3.16 I did not have this issue.
<!-- Add any other context about the problem here. -->
| priority | qgis raises an error when loading a raster layer from a geopackage that has multiple raster layers bug fixing and feature development is a community responsibility and not the responsibility of the qgis project alone if this bug report or feature request is high priority for you we suggest engaging a qgis developer or support organisation and financially sponsoring a fix checklist before submitting search through existing issue reports and gis stackexchange com to check whether the issue already exists test with a create a light and self contained sample dataset and project file which demonstrates the issue qgis raises an error when loading a raster layer from a geopackage that has multiple raster layers if it has only one raster layer then is fine geopackage raster layers names also display ahead of their names how to reproduce get an geotiff sh cd downloads wget unzip downloads us msr zip convert geotiff to gpkg sh gdal translate of gpkg downloads us msr us msr tif downloads test gpkg co raster table co raster description first raster layer create connection to the gpkg file at this point the layer loads in qgis with no error message add the same image a second time with a different name i also tried with a different raster but result is the same sh gdal translate co append subdataset yes of gpkg downloads us msr us msr tif downloads test gpkg co raster table co raster description second raster layer raster layers names display ahead of their names at this point both layers raise the same error message when trying to load layer in qgis error message invalid layer gdal provider cannot open gdal dataset gpkg home user downloads test gpkg raster layer provider is not valid provider gdal uri gpkg home user downloads test gpkg qgis and os versions name ubuntu version lts focal fossa qgis version zürich qgis code revision compiled against qt running against qt compiled against gdal ogr running against gdal ogr compiled against geos capi running against geos capi compiled against sqlite running against sqlite compiled against pdal running against pdal git version release postgresql client version ubuntu spatialite version qwt version version compiled against proj running against proj rel february os version ubuntu lts active python plugins postgis geoprocessing speciesexplorer digitizingtools pointsamplingtool contour postgisquerybuilder srtm downloader eu mapper openlayers plugin qgis versioning geoscience tracker sentinelhub postgis sampling tool eurodatacube nuclear energy plant radiations valuetool quick map services streetview dotmap mapswipetool plugin db manager processing metasearch about click in the table ctrl a and then ctrl c finally paste here additional context in qgis i did not had this issue | 1 |
136,857 | 5,289,475,745 | IssuesEvent | 2017-02-08 17:28:24 | SIU-CS/J-JAM-production | https://api.github.com/repos/SIU-CS/J-JAM-production | opened | Bot chat page | Functional Priority -M Product Backlog | As a user, I would like to be able to chat with the bot, so that I can get resources, inspirational quotes, and assurance.
Acceptance criteria:
The bot shall greet you when you open the chat page, and if you greet it by saying “hi”, “hello”, “greetings”, or similar.
The bot shall provide mental health links and resources if you say “help”.
The bot shall ‘sing’ Happy Birthday to you when you open the bot chat page on your birthday.
The bot shall provide an inspirational quote that you have not recently seen if you say “quote”, “quotes”, or “inspiration”.
Story source: SRS - FR12
Estimate: 20 man-hours
Risk: Medium
Value: Medium
Priority: M
| 1.0 | Bot chat page - As a user, I would like to be able to chat with the bot, so that I can get resources, inspirational quotes, and assurance.
Acceptance criteria:
The bot shall greet you when you open the chat page, and if you greet it by saying “hi”, “hello”, “greetings”, or similar.
The bot shall provide mental health links and resources if you say “help”.
The bot shall ‘sing’ Happy Birthday to you when you open the bot chat page on your birthday.
The bot shall provide an inspirational quote that you have not recently seen if you say “quote”, “quotes”, or “inspiration”.
Story source: SRS - FR12
Estimate: 20 man-hours
Risk: Medium
Value: Medium
Priority: M
| priority | bot chat page as a user i would like to be able to chat with the bot so that i can get resources inspirational quotes and assurance acceptance criteria the bot shall greet you when you open the chat page and if you greet it by saying “hi” “hello” “greetings” or similar the bot shall provide mental health links and resources if you say “help” the bot shall give ‘sing’ happy birthday to you when you open the bot chat page on your birthday the bot shall provide an inspirational quote that you have not recently seen if you say “quote” “quotes” or “inspiration” story source srs estimate man hours risk medium value medium priority m | 1 |
62,078 | 12,197,664,106 | IssuesEvent | 2020-04-29 21:11:44 | mozilla-mobile/android-components | https://api.github.com/repos/mozilla-mobile/android-components | closed | Add better error vs info logging for non-fatal push exceptions | <push> E3 ⌨️ code | Related to https://github.com/mozilla-mobile/android-components/issues/5691, we don't need to log exceptions that are device network connectivity issues to our CrashReporter.
We need a better way to filter the valuable ones from the non-fatal. | 1.0 | Add better error vs info logging for non-fatal push exceptions - Related to https://github.com/mozilla-mobile/android-components/issues/5691, we don't need to log exceptions that are device network connectivity issues to our CrashReporter.
We need a better way to filter the valuable ones from the non-fatal. | non_priority | add better error vs info logging for non fatal push exceptions related to we don t need to log exceptions that are device network connectivity issues to our crashreporter we need a better way to filter the valuable ones from the non fatal | 0 |
88,732 | 17,652,108,483 | IssuesEvent | 2021-08-20 14:29:39 | cloud-native-toolkit/planning | https://api.github.com/repos/cloud-native-toolkit/planning | opened | Update links in starter kits readme files | documentation code pattern | The links in the readme files in the starter kits need to be updated
- Reference to Developer guide should point to https://cloudnativetoolkit.dev/ see [this issue](https://github.com/IBM/template-node-typescript/issues/34)
- Links to Cloud Native Toolkit should be https://cloudnativetoolkit.dev/images/catalyst.png | 1.0 | Update links in starter kits readme files - The links in the readme files in the starter kits need to be updated
- Reference to Developer guide should point to https://cloudnativetoolkit.dev/ see [this issue](https://github.com/IBM/template-node-typescript/issues/34)
- Links to Cloud Native Toolkit should be https://cloudnativetoolkit.dev/images/catalyst.png | non_priority | update links in starter kits readme files the links in the readme files in the starter kits need to be updated reference to developer guide should point to see links to cloud native toolkit should be | 0 |
15,752 | 2,611,514,221 | IssuesEvent | 2015-02-27 05:50:01 | chrsmith/hedgewars | https://api.github.com/repos/chrsmith/hedgewars | closed | Greater weapon customization | auto-migrated Priority-Low Type-Enhancement | ```
What steps will reproduce the problem?
1. Only 4 types of customization: Ammo, power, crate, and delay
2. Can't change Deagle ammo type
3. Can't make kamikaze controllable
Enchant hedge-wars
What is the expected output? What do you see instead?
More customization
Like, Worms 2-style customization
What version of the product are you using? On what operating system?
0.9.18
Windows
Please provide any additional information below.
Allow more customization, like worms 2-style customization
After all, it's freely customizable in files, so why not allow more
customization for weapons?
```
Original issue reported on code.google.com by `Openwor...@gmail.com` on 2 Jan 2013 at 8:01 | 1.0 | Greater weapon customization - ```
What steps will reproduce the problem?
1. Only 4 types of customization: Ammo, power, crate, and delay
2. Can't change Deagle ammo type
3. Can't make kamikaze controllable
Enchant hedge-wars
What is the expected output? What do you see instead?
More customization
Like, Worms 2-style customization
What version of the product are you using? On what operating system?
0.9.18
Windows
Please provide any additional information below.
Allow more customization, like worms 2-style customization
After all, it's freely customizable in files, so why not allow more
customization for weapons?
```
Original issue reported on code.google.com by `Openwor...@gmail.com` on 2 Jan 2013 at 8:01 | priority | greater weapon customization what steps will reproduce the problem only types of customization ammo power crate and delay can t change deagle ammo type can t make kamikaze controllable enchant hedge wars what is the expected output what do you see instead more customization like worms style customization what version of the product are you using on what operating system windows please provide any additional information below allow more customization like worms style customization after all it s freely customizable in files so why not allow more customization for weapons original issue reported on code google com by openwor gmail com on jan at | 1 |
34,440 | 7,835,400,596 | IssuesEvent | 2018-06-17 05:04:31 | Cryptonomic/Conseil | https://api.github.com/repos/Cryptonomic/Conseil | opened | Tune the number of Slick database threads | code_audit enhancement | The number of Slick database threads is currently set naively. It should be set correctly in a principled manner. | 1.0 | Tune the number of Slick database threads - The number of Slick database threads is currently set naively. It should be set correctly in a principled manner. | non_priority | tune the number of slick database threads the number of slick database threads is currently set naively it should be set correctly in a principled manner | 0 |
40,856 | 8,862,387,693 | IssuesEvent | 2019-01-10 05:43:16 | joomla/joomla-cms | https://api.github.com/repos/joomla/joomla-cms | closed | Joomla 3.9.1 Breadcrumbs issue | No Code Attached Yet | ### Steps to reproduce the issue
category1/category-bbb/article-name
then you click on category-bbb in the breadcrumbs and you get this
/categorya/category-bbb?filter_tag[0]=2&filter_tag[1]=3&filter_tag[2]=4
### Expected result
categorya/category-bbb/article-name
### Actual result
/categorya/category-bbb?filter_tag[0]=2&filter_tag[1]=3&filter_tag[2]=4
### System information (as much as possible)
joomla 3.9.1
### Additional comments
| 1.0 | Joomla 3.9.1 Breadcrumbs issue - ### Steps to reproduce the issue
category1/category-bbb/article-name
then you click on category-bbb in the breadcrumbs and you get this
/categorya/category-bbb?filter_tag[0]=2&filter_tag[1]=3&filter_tag[2]=4
### Expected result
categorya/category-bbb/article-name
### Actual result
/categorya/category-bbb?filter_tag[0]=2&filter_tag[1]=3&filter_tag[2]=4
### System information (as much as possible)
joomla 3.9.1
### Additional comments
| non_priority | joomla breadcrumbs issue steps to reproduce the issue category bbb article name then you click on category bbb in breadcrumbs you get this categorya category bbb filter tag filter tag filter tag expected result categorya category bbb article name actual result categorya category bbb filter tag filter tag filter tag system information as much as possible joomla additional comments | 0 |
200,861 | 15,161,117,951 | IssuesEvent | 2021-02-12 08:28:23 | thefrontside/bigtest | https://api.github.com/repos/thefrontside/bigtest | closed | What is up with NodeList | @bigtest/interactor question | When working on the `ui-courses` FOLIO test suite, @pittst3r and I ran across an issue where if we used a `NodeList` which is mutable in the context of a filter, it would not match.
https://github.com/cowboyd/ui-courses/blob/bigtest-spike/test/interactors.js#L108
Why? | 1.0 | What is up with NodeList - When working on the `ui-courses` FOLIO test suite, @pittst3r and I ran across an issue where if we used a `NodeList` which is mutable in the context of a filter, it would not match.
https://github.com/cowboyd/ui-courses/blob/bigtest-spike/test/interactors.js#L108
Why? | non_priority | what is up with nodelist when working on the ui courses folio test suite and i ran across an issue where if we used a nodelist which is mutable in the context of a filter it would not match why | 0 |
21,964 | 2,643,595,633 | IssuesEvent | 2015-03-12 12:12:33 | Araq/Nim | https://api.github.com/repos/Araq/Nim | closed | Strange issues with `$`[tuple|object] when using it on a TTable without tables being imported | Low Priority | Consider two files:
File foo.nim:
```nimrod
import tables, strtabs, asyncio
type
TRequest* = object
formData*: TTable[string, tuple[fields: PStringTable, body: string]]
proc test*(): TRequest =
var x = TRequest()
x.formData = initTable[string, tuple[fields: PStringTable, body: string]]()
x.formData["asd"] = (newStringTable(), "asdas")
result = x
```
and
```nimrod
import foo
var x = test()
echo($x.formData)
```
This results in an odd error:
```
a21.nim(4, 6) Info: instantiation from here
lib/system.nim(1605, 21) Error: undeclared field: 'data'
```
Importing ``tables`` inside the second file stops the error from happening. | 1.0 | Strange issues with `$`[tuple|object] when using it on a TTable without tables being imported - Consider two files:
File foo.nim:
```nimrod
import tables, strtabs, asyncio
type
TRequest* = object
formData*: TTable[string, tuple[fields: PStringTable, body: string]]
proc test*(): TRequest =
var x = TRequest()
x.formData = initTable[string, tuple[fields: PStringTable, body: string]]()
x.formData["asd"] = (newStringTable(), "asdas")
result = x
```
and
```nimrod
import foo
var x = test()
echo($x.formData)
```
This results in an odd error:
```
a21.nim(4, 6) Info: instantiation from here
lib/system.nim(1605, 21) Error: undeclared field: 'data'
```
Importing ``tables`` inside the second file stops the error from happening. | priority | strange issues with when using it on a ttable without tables being imported consider two files file foo nim nimrod import tables strtabs asyncio type trequest object formdata ttable proc test trequest var x trequest x formdata inittable x formdata newstringtable asdas result x and nimrod import foo var x test echo x formdata this results in an odd error nim info instantiation from here lib system nim error undeclared field data importing tables inside the second file stops the error from happening | 1 |
35,219 | 9,550,535,432 | IssuesEvent | 2019-05-02 12:24:33 | IgniteUI/igniteui-angular | https://api.github.com/repos/IgniteUI/igniteui-angular | closed | Azure Pipelines build is failing | build | Azure Pipelines build for master branch is failing with the following error:
FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
See this build:
https://dev.azure.com/IgniteUI/igniteui-angular/_build/results?buildId=8583
| 1.0 | Azure Pipelines build is failing - Azure Pipelines build for master branch is failing with the following error:
FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
See this build:
https://dev.azure.com/IgniteUI/igniteui-angular/_build/results?buildId=8583
| non_priority | azure pipelines build is failing azure pipelines build for master branch is failing with the following error fatal error ineffective mark compacts near heap limit allocation failed javascript heap out of memory see this build | 0 |
808,137 | 30,034,938,293 | IssuesEvent | 2023-06-27 12:09:56 | asastats/channel | https://api.github.com/repos/asastats/channel | closed | [B2] Listed NFTs on Rand and Shufl are not tracked | bug high priority | Bug description:
NFTs listed on R& and Shufl do not show up on the portfolio tracker. The total number of NFTs owned does not include listed items.
Though listing on algoxnft and Octorand works as expected | 1.0 | [B2] Listed NFTs on Rand and Shufl are not tracked - Bug description:
NFTs listed on R& and Shufl do not show up on the portfolio tracker. The total number of NFTs owned does not include listed items.
Though listing on algoxnft and Octorand works as expected | priority | listed nfts on rand and shufl are not tracked bug description nfts listed on r and shufl does not show up on portfolio tracker the total number of nfts owned does not include listed items though listing on algoxnft and octorand works as expected | 1 |
39,731 | 9,637,605,063 | IssuesEvent | 2019-05-16 09:11:11 | jOOQ/jOOQ | https://api.github.com/repos/jOOQ/jOOQ | closed | Support annotation mismatch between overloaded methods | C: Documentation C: Functionality E: All Editions P: Medium R: Fixed T: Defect | Some overloaded methods have mismatches between their `@Support` annotations. These methods include:
- `Field.likeIgnoreCase(Field, char)` (missing `HANA`)
- `DSL.escape(Field, char)` (missing `DERBY`) | 1.0 | Support annotation mismatch between overloaded methods - Some overloaded methods have mismatches between their `@Support` annotations. These methods include:
- `Field.likeIgnoreCase(Field, char)` (missing `HANA`)
- `DSL.escape(Field, char)` (missing `DERBY`) | non_priority | support annotation mismatch between overloaded methods some overloaded methods have mismatches between their support annotations these methods include field likeignorecase field char missing hana dsl escape field char missing derby | 0 |
639,738 | 20,763,559,364 | IssuesEvent | 2022-03-15 18:23:06 | GTorlai/PastaQ.jl | https://api.github.com/repos/GTorlai/PastaQ.jl | closed | Unify `PastaQ` `state` and `gate` with ITensors `state` and `op` | -> PRIORITY <- | Internally, we should unify the `state` and `gate` interface in `PastaQ` with the `state`, `op`, and `OpSum` interface in ITensors.
Part of this would be to have a hierarchy of types representing gates or operators which would be used _internally_ by both ITensors and PastaQ, to simplify parsing gate lists and to have a common format for making gate lists, Trotter circuits, and MPOs.
The starting point would be something like this:
```julia
struct SimpleOp{N}
name::String
sites::NTuple{N,Int}
params::NamedTuple
end
```
This is essentially just the `SiteOp` type used by `OpSum`: https://github.com/ITensor/ITensors.jl/blob/v0.2.9/src/physics/autompo.jl#L10-L14 with a new name (since `SiteOp` doesn't make much sense, since it can be multi-site). So this is just a "simple" gate representation like:
```julia
("CRz", (1, 2), (ϕ=0.1,))
("CRz", 1, 2, (ϕ=0.1,))
```
Then, these would get put together into a product (or sum of products) to make an operator `Op`, like this:
```julia
struct Op{F<:Function,T<:Number,N}
f::F
coefficient::T
opsum::Vector{Vector{SimpleOp{N}}}
end
```
This is similar to the `MPOTerm` type used by `OpSum` (https://github.com/ITensor/ITensors.jl/blob/v0.2.9/src/physics/autompo.jl#L90-L93). This would be the internal representation for more sophisticated (sums of) products of operators as well as function applications, like:
```julia
(2.0, "CX", 1, 2)
(exp, "CX", 1, 2)
(exp, 0.1im, "a†", 1, "a", 2)
(exp, 0.1im, "a† * a", 1, 2)
(exp, 0.1im, "a† * a + a * a†", 1, 2)
```
The reason it is a `Vector{Vector{SimpleOp}}` is that the inner `Vector` stores a product of `SimpleOp` and the outer `Vector` stores a sum of those products, so `(exp, 0.1im, "a† * a + a * a†", 1, 2)` would get parsed into a structure like:
```julia
f = exp
coefficient = 0.1im
opsum = [
[SimpleOp("a†", (1,), (;)), SimpleOp("a", (2,), (;))],
[SimpleOp("a", (1,), (;)), SimpleOp("a†", (2,), (;))]
]
```
In principle the field `opsum` could be a general lazy representation for adding and multiplying gates, but I think adding sums of gates is sufficient. Finally, I think we can restrict all of the terms in the sum to have support on the same sites, so maybe something like this:
```julia
struct Op{F<:Function,T<:Number,N}
f::F
coefficient::T
opsum::Vector{Vector{SimpleOp{N}}}
sites::NTuple{N,Int}
end
```
where `sites` is the same as the sites of all of the `Vector{SimpleOp{N}}` terms. The internal constructor could check that the sites of all of the `Vector{SimpleOp{N}}` are all the same and then store them in the `sites` field.
A question would be how to represent products of on-site terms as well. One proposal would be:
```julia
(exp, 0.1im, "X * Y + Y * X", 1, 1)
```
in which case it would fit into the same data structure as above. To me this makes sense since you can interpret the above as `exp(0.1im (X₁ Y₁ + Y₁ X₁))`. There would be a question about how to do mixtures of on-site and multi-site products. In that case you could do:
```julia
(exp, 0.1im, "X * Y * Z + Y * X * Z", 1, 1, 2)
```
If you don't have a product somewhere, you could use an identity:
```julia
(exp, 0.1im, "X * I * Z + Y * X * Z", 1, 1, 2)
```
I think we could make `"I"` interpreted as a universal identity `I` (Julia already has that defined) so it would be a no-op. In principle we could get pretty fancy and have a syntax like this:
```julia
(exp, 0.1im, "X(1) * Z(2) + Y(1) * X(1) * Z(2)")
```
where the sites in the parentheses get extracted when parsing.
Then, the ITensors.jl `OpSum` could be defined as:
```julia
struct OpSum
ops::Vector{Op}
end
```
so simply a sum of `Op`s.
Additionally, the `PastaQ`/`ITensor` operator or gate lists for gate evolution can be converted to a `Vector{Op}` internally. We could even consider defining something like:
```julia
struct Circuit
ops::Vector{Op}
end
```
or maybe `OpList`, `GateList`, `OpProd`, `GateProd`, etc. as an internal representation for a circuit. In that way, we can define functions like `inv`, `dag`, etc. of a circuit. We could have fields `inv` and `dag` in `Op` or `SimpleOp` to store the fact that an inverse or dagger needs to be performed (and maybe other standard fields, like `isunitary`, `isorthogonal`, `grad`, etc.).
This would help with unifying `gate` and `op`, since there could be a single functionality for converting an `Op` into a `Matrix` or `ITensor` (i.e. parsing the opname strings to get sums of products of gates that get turned into an `Op` which then gets turned into the correct `Matrix`/`ITensor`). So basically all of the complexity for converting an `Op` to a `Matrix`/`ITensor` would be in functions like `op(::Op, ::SiteType)`.
Ultimately I think `gate` could just be an alias for `op` (or a wrapper), since their functionality is very similar. The actual definitions of the gates for the `Qubit`/`Qudit` site types could be in either `PastaQ` or `ITensors` (core standard ones in `ITensors` and more complicated quantum-computing specific ones in `PastaQ`).
Finally, a small piece is unifying the `state` implementation. I think we can basically just import `state` from ITensors and remove the `PastaQ` implementation. | 1.0 | Unify `PastaQ` `state` and `gate` with ITensors `state` and `op` - Internally, we should unify the `state` and `gate` interface in `PastaQ` with the `state`, `op`, and `OpSum` interface in ITensors.
Part of this would be to have a hierarchy of types representing gates or operators which would be used _internally_ by both ITensors and PastaQ, to simplify parsing gate lists and to have a common format for making gate lists, Trotter circuits, and MPOs.
The starting point would be something like this:
```julia
struct SimpleOp{N}
name::String
sites::NTuple{N,Int}
params::NamedTuple
end
```
This is essentially just the `SiteOp` type used by `OpSum`: https://github.com/ITensor/ITensors.jl/blob/v0.2.9/src/physics/autompo.jl#L10-L14 with a new name (since `SiteOp` doesn't make much sense, since it can be multi-site). So this is just a "simple" gate representation like:
```julia
("CRz", (1, 2), (ϕ=0.1,))
("CRz", 1, 2, (ϕ=0.1,))
```
Then, these would get put together into a product (or sum of products) to make an operator `Op`, like this:
```julia
struct Op{F<:Function,T<:Number,N}
f::F
coefficient::T
opsum::Vector{Vector{SimpleOp{N}}}
end
```
This is similar to the `MPOTerm` type used by `OpSum` (https://github.com/ITensor/ITensors.jl/blob/v0.2.9/src/physics/autompo.jl#L90-L93). This would be the internal representation for more sophisticated (sums of) products of operators as well as function applications, like:
```julia
(2.0, "CX", 1, 2)
(exp, "CX", 1, 2)
(exp, 0.1im, "a†", 1, "a", 2)
(exp, 0.1im, "a† * a", 1, 2)
(exp, 0.1im, "a† * a + a * a†", 1, 2)
```
The reason it is a `Vector{Vector{SimpleOp}}` is that the inner `Vector` stores a product of `SimpleOp` and the outer `Vector` stores a sum of those products, so `(exp, 0.1im, "a† * a + a * a†", 1, 2)` would get parsed into a structure like:
```julia
f = exp
coefficient = 0.1im
opsum = [
[SimpleOp("a†", (1,), (;)), SimpleOp("a", (2,), (;))],
[SimpleOp("a", (1,), (;)), SimpleOp("a†", (2,), (;))]
]
```
In principle the field `opsum` could be a general lazy representation for adding and multiplying gates, but I think adding sums of gates is sufficient. Finally, I think we can restrict all of the terms in the sum to have support on the same sites, so maybe something like this:
```julia
struct Op{F<:Function,T<:Number,N}
f::F
coefficient::T
opsum::Vector{Vector{SimpleOp{N}}}
sites::NTuple{N,Int}
end
```
where `sites` is the same as the sites of all of the `Vector{SimpleOp{N}}` terms. The internal constructor could check that the sites of all of the `Vector{SimpleOp{N}}` are all the same and then store them in the `sites` field.
A question would be how to represent products of on-site terms as well. One proposal would be:
```julia
(exp, 0.1im, "X * Y + Y * X", 1, 1)
```
in which case it would fit into the same data structure as above. To me this makes sense since you can interpret the above as `exp(0.1im (X₁ Y₁ + Y₁ X₁))`. There would be a question about how to do mixtures of on-site and multi-site products. In that case you could do:
```julia
(exp, 0.1im, "X * Y * Z + Y * X * Z", 1, 1, 2)
```
If you don't have a product somewhere, you could use an identity:
```julia
(exp, 0.1im, "X * I * Z + Y * X * Z", 1, 1, 2)
```
I think we could make `"I"` interpreted as a universal identity `I` (Julia already has that defined) so it would be a no-op. In principle we could get pretty fancy and have a syntax like this:
```julia
(exp, 0.1im, "X(1) * Z(2) + Y(1) * X(1) * Z(2)")
```
where the sites in the parentheses get extracted when parsing.
Then, the ITensors.jl `OpSum` could be defined as:
```julia
struct OpSum
ops::Vector{Op}
end
```
so simply a sum of `Op`s.
Additionally, the `PastaQ`/`ITensor` operator or gate lists for gate evolution can be converted to a `Vector{Op}` internally. We could even consider defining something like:
```julia
struct Circuit
ops::Vector{Op}
end
```
or maybe `OpList`, `GateList`, `OpProd`, `GateProd`, etc. as an internal representation for a circuit. In that way, we can define functions like `inv`, `dag`, etc. of a circuit. We could have fields `inv` and `dag` in `Op` or `SimpleOp` to store the fact that an inverse or dagger needs to be performed (and maybe other standard fields, like `isunitary`, `isorthogonal`, `grad`, etc.).
This would help with unifying `gate` and `op`, since there could be a single functionality for converting an `Op` into a `Matrix` or `ITensor` (i.e. parsing the opname strings to get sums of products of gates that get turned into an `Op` which then gets turned into the correct `Matrix`/`ITensor`). So basically all of the complexity for converting an `Op` to a `Matrix`/`ITensor` would be in functions like `op(::Op, ::SiteType)`.
Ultimately I think `gate` could just be an alias for `op` (or a wrapper), since their functionality is very similar. The actual definitions of the gates for the `Qubit`/`Qudit` site types could be in either `PastaQ` or `ITensors` (core standard ones in `ITensors` and more complicated quantum-computing specific ones in `PastaQ`).
Finally, a small piece is unifying the `state` implementation. I think we can basically just import `state` from ITensors and remove the `PastaQ` implementation. | priority | unify pastaq state and gate with itensors state and op internally we should unify the state and gate interface in pastaq with the state op and opsum interface in itensors part of this would be to have a hierarchy of types representing gates or operators which would be used internally by both itensors and pastaq to simplify parsing gate lists and to have a common format for making gate lists trotter circuits and mpos the starting point would be something like this julia struct simpleop n name string sites ntuple n int params namedtuple end this is essentially just the siteop type used by opsum with a new name since siteop doesn t make much sense since it can by multi site so this is just a simple gate representation like julia crz ϕ crz ϕ then these would get put together into a product or sum of products to make an operator op like this julia struct op f function t number n f f coefficient t opsum vector vector simpleop n end this is similar to the mpoterm type used by opsum this would be the internal representation for more sophisticated sums of products of operators as well as function applications like julia cx exp cx exp a† a exp a† a exp a† a a a† the reason it is a vector vector simpleop is that the inner vector stores a product of simpleop and the outer vector stores a sum of those products so exp a† a a a† would get parsed into a structure like julia f exp coefficient opsum in principle the field opsum could be a general lazy representation for adding and multiplying gates but i think adding sums of gates is sufficient finally i think we can restrict all of the terms in the sum to have support on the same sites so maybe something like this julia struct op f function t number n f f coefficient t opsum vector vector simpleop n sites ntuple n int end where sites is the same as the sites 
of all of the vector simpleop n terms the internal constructor could check that the sites of all of the vector simpleop n are all the same and then store them in the sites field a question would be how to represent products of on site terms as well one proposal would be julia exp x y y x in which case it would fit into the same data structure as above to me this makes sense since you can interpret the above as exp x₁ y₁ y₁ x₁ there would be a question about how to do mixtures of on site and multi site products in that case you could do julia exp x y z y x z if you don t have a product somewhere you could use an identity julia exp x i z y x z i think we could make i interpreted as a universal identity i julia already has that defined so it would be a no op in principle we could get pretty fancy and have a syntax like this julia exp x z y x z where the sites in the parentheses get extracted when parsing then the itensors jl opsum could be defined as julia struct opsum ops vector op end so simply a sum of op s additionally the pastaq itensor operator or gate lists for gate evolution can be converted to a vector op internally we could even consider defining something like julia struct circuit ops vector op end or maybe oplist gatelist opprod gateprod etc as an internal representation for a circuit in that way we can define functions like inv dag etc of a circuit we could have fields inv and dag in op or simpleop to store the fact that an inverse or dagger needs to be performed and maybe other standard fields like isunitary isorthogonal grad etc this would help with unifying gate and op since there could be a single functionality for converting an op into a matrix or itensor i e parsing the opname strings to get sums of products of gates that get turned into an op which then gets turned into the correct matrix itensor so basically all of the complexity for converting an op to a matrix itensor would be in functions like op op sitetype ultimately i think gate could just 
be an alias for op or a wrapper since their functionality is very similar the actual definitions of the gates for the qubit qudit site types could be in either pastaq or itensors core standard ones in itensors and more complicated quantum computing specific ones in pastaq finally a small piece is unifying the state implementation i think we can basically just import state from itensors and remove the pastaq implementation | 1 |
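The data hierarchy proposed in this row is easiest to see in code. The original sketch is Julia; below is a minimal Python rendering of the same idea, with the names (`SimpleOp`, `Op`) taken from the proposal and the site-support check described for the inner constructor. This is a sketch of the design, not the actual ITensors/PastaQ API:

```python
import math
from dataclasses import dataclass, field
from typing import Callable, Optional, Tuple

@dataclass(frozen=True)
class SimpleOp:
    """A named operator on a fixed tuple of sites, e.g. CRz(phi) on sites (1, 2)."""
    name: str
    sites: Tuple[int, ...]
    params: dict = field(default_factory=dict)

class Op:
    """A sum of products of SimpleOps, optionally wrapped in a function f.

    opsum is a list of products (each product a list of SimpleOps).  The
    constructor checks that every term in the sum has the same total site
    support and stores it, mirroring the proposed inner constructor.
    """
    def __init__(self, opsum, coefficient=1.0, f: Optional[Callable] = None):
        supports = {
            tuple(sorted({s for op in term for s in op.sites}))
            for term in opsum
        }
        if len(supports) != 1:
            raise ValueError("all terms in the sum must act on the same sites")
        self.opsum = opsum
        self.coefficient = coefficient
        self.f = f  # e.g. exp, for gates like exp(a†a + a + a†)
        self.sites = supports.pop()

# exp(a†a ⊗ I + a ⊗ a†): two product terms, both supported on sites (1, 2)
g = Op(
    opsum=[
        [SimpleOp("a†a", (1,)), SimpleOp("I", (2,))],
        [SimpleOp("a", (1,)), SimpleOp("a†", (2,))],
    ],
    f=math.exp,  # placeholder; a real backend would apply a matrix exponential
)
print(g.sites)  # -> (1, 2)
```

The constructor-level check is the design point: mixed products like `exp("X" ⊗ "Y" - "Y" ⊗ "X")` parse into the same structure, while a sum whose terms act on different sites is rejected early.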
135,396 | 30,288,566,739 | IssuesEvent | 2023-07-09 01:33:43 | quiqueck/BetterEnd | https://api.github.com/repos/quiqueck/BetterEnd | closed | [Bug] After placing an 8th End Stone Stalactite going down to connect it to another 8 End Stone Stalactites going up, the game crashes. | 🔥 bug 🎉 Dev Code | ### What happened?
The title is what happened, but this exact same thing also happens with the Cave Moss Endstone Stalactite. What I expected to happen was them connecting to each other with no issues like with every other stalactite in the screenshot attached. I've attached the latest.logs below as well because I couldn't post with them in the "Relevant log output" section.

[latest.log](https://github.com/quiqueck/BetterEnd/files/11970731/latest.log)
[latest (1).log](https://github.com/quiqueck/BetterEnd/files/11970732/latest.1.log)
### BetterEnd
4.0.7
### BCLib
3.0.10
### Fabric API
0.85.0
### Fabric Loader
0.14.21
### Minecraft
1.20.1
### Relevant log output
_No response_
### Other Mods
```shell
[12:11:27] [main/INFO]: Loading 226 mods:
- alternate-current 1.7.0
- ambientsounds 5.2.20
- appleskin 2.5.0+mc1.20
- architectury 9.0.8
- betteradvancements 0.3.2.160
- betterarcheology 1.0.2
- betterfpsdist 1.20.1-2.7
- betternether 9.0.7
- biomemusic 1.20.1-1.6
- cardinal-components 5.2.1
|-- cardinal-components-base 5.2.1
|-- cardinal-components-block 5.2.1
|-- cardinal-components-chunk 5.2.1
|-- cardinal-components-entity 5.2.1
|-- cardinal-components-item 5.2.1
|-- cardinal-components-level 5.2.1
|-- cardinal-components-scoreboard 5.2.1
\-- cardinal-components-world 5.2.1
- cavedust 1.4.1
\-- kirin 1.15.0
- charmofundying 6.4.2+1.20.1
\-- spectrelib 0.13.13+1.20.1
- chunksending 1.20.1-2.5
- citresewn 1.1.3+1.20
\-- citresewn-defaults 1.1.3+1.20
- cloth-config 11.0.99
\-- cloth-basic-math 0.6.1
- clumps 12.0.0.3
- comforts 6.3.3+1.20.1
|-- cardinal-components-base 5.2.1
\-- cardinal-components-entity 5.2.1
- completionistsindex 8.0.0
- continuity 3.0.0-beta.2+1.20
- controlling 12.0.1
- creativecore 2.10.24
\-- net_minecraftforge_eventbus 6.0.3
- dark-loading-screen 1.6.14
- dawn 5.0.0
|-- terraform-shapes-api-v1 7.0.1
\-- terraform-wood-api-v1 7.0.1
- debugify 1.20.1+1.1
\-- com_github_llamalad7_mixinextras 0.2.0-beta.8
- dynamicfps 2.4.0
\-- com_moandjiezana_toml_toml4j 0.7.2
- eatinganimationid 1.9.4+1.20
- enhancedblockentities 0.9+1.20
|-- advanced_runtime_resource_pack 0.6.7
\-- spruceui 5.0.0+1.20
- entity_model_features 0.2.11
- entity_texture_features 4.4.4
\-- org_apache_httpcomponents_httpmime 4.5.10
- entityculling 1.6.2-mc1.20
- fabric-api 0.85.0+1.20.1
|-- fabric-api-base 0.4.29+b04edc7a77
|-- fabric-api-lookup-api-v1 1.6.34+4d8536c977
|-- fabric-biome-api-v1 13.0.10+b3afc78b77
|-- fabric-block-api-v1 1.0.9+e022e5d177
|-- fabric-blockrenderlayer-v1 1.1.39+b3afc78b77
|-- fabric-client-tags-api-v1 1.1.0+97bb207577
|-- fabric-command-api-v1 1.2.32+f71b366f77
|-- fabric-command-api-v2 2.2.11+b3afc78b77
|-- fabric-commands-v0 0.2.49+df3654b377
|-- fabric-containers-v0 0.1.61+df3654b377
|-- fabric-content-registries-v0 4.0.8+b3afc78b77
|-- fabric-convention-tags-v1 1.5.3+b3afc78b77
|-- fabric-crash-report-info-v1 0.2.18+aeb40ebe77
|-- fabric-data-generation-api-v1 12.1.12+b3afc78b77
|-- fabric-dimensions-v1 2.1.51+b3afc78b77
|-- fabric-entity-events-v1 1.5.21+b3afc78b77
|-- fabric-events-interaction-v0 0.6.0+b3afc78b77
|-- fabric-events-lifecycle-v0 0.2.61+df3654b377
|-- fabric-game-rule-api-v1 1.0.38+b04edc7a77
|-- fabric-item-api-v1 2.1.26+b3afc78b77
|-- fabric-item-group-api-v1 4.0.8+40e50c4677
|-- fabric-key-binding-api-v1 1.0.36+fb8d95da77
|-- fabric-keybindings-v0 0.2.34+df3654b377
|-- fabric-lifecycle-events-v1 2.2.20+b3afc78b77
|-- fabric-loot-api-v2 1.1.38+b3afc78b77
|-- fabric-loot-tables-v1 1.1.42+9e7660c677
|-- fabric-message-api-v1 5.1.6+b3afc78b77
|-- fabric-mining-level-api-v1 2.1.48+b3afc78b77
|-- fabric-models-v0 0.3.35+b3afc78b77
|-- fabric-networking-api-v1 1.3.8+b3afc78b77
|-- fabric-networking-v0 0.3.48+df3654b377
|-- fabric-object-builder-api-v1 11.1.0+6beca84877
|-- fabric-particles-v1 1.1.0+201a23a077
|-- fabric-recipe-api-v1 1.0.18+b3afc78b77
|-- fabric-registry-sync-v0 2.2.6+b3afc78b77
|-- fabric-renderer-api-v1 3.1.0+c154966e77
|-- fabric-renderer-indigo 1.4.0+c154966e77
|-- fabric-renderer-registries-v1 3.2.44+df3654b377
|-- fabric-rendering-data-attachment-v1 0.3.33+b3afc78b77
|-- fabric-rendering-fluids-v1 3.0.26+b3afc78b77
|-- fabric-rendering-v0 1.1.47+df3654b377
|-- fabric-rendering-v1 3.0.6+b3afc78b77
|-- fabric-resource-conditions-api-v1 2.3.5+ea08f9d877
|-- fabric-resource-loader-v0 0.11.8+e3d6ed2577
|-- fabric-screen-api-v1 2.0.6+b3afc78b77
|-- fabric-screen-handler-api-v1 1.3.27+b3afc78b77
|-- fabric-sound-api-v1 1.0.12+b3afc78b77
|-- fabric-transfer-api-v1 3.2.3+43a3fedd77
\-- fabric-transitive-access-wideners-v1 4.2.0+b3afc78b77
- fabric-language-kotlin 1.9.6+kotlin.1.8.22
|-- org_jetbrains_kotlin_kotlin-reflect 1.8.22
|-- org_jetbrains_kotlin_kotlin-stdlib 1.8.22
|-- org_jetbrains_kotlin_kotlin-stdlib-jdk7 1.8.22
|-- org_jetbrains_kotlin_kotlin-stdlib-jdk8 1.8.22
|-- org_jetbrains_kotlinx_atomicfu-jvm 0.21.0
|-- org_jetbrains_kotlinx_kotlinx-coroutines-core-jvm 1.7.1
|-- org_jetbrains_kotlinx_kotlinx-coroutines-jdk8 1.7.1
|-- org_jetbrains_kotlinx_kotlinx-datetime-jvm 0.4.0
|-- org_jetbrains_kotlinx_kotlinx-serialization-cbor-jvm 1.5.1
|-- org_jetbrains_kotlinx_kotlinx-serialization-core-jvm 1.5.1
\-- org_jetbrains_kotlinx_kotlinx-serialization-json-jvm 1.5.1
- fabricloader 0.14.21
- fallingleaves 1.15.1+1.20.1
- fallingtree 4.2.0
- ferritecore 6.0.0
- forgeconfigapiport 8.0.0
- fwaystones 3.1.2+mc1.20
- geckolib 4.2
\-- com_eliotlash_mclib_mclib 20
- geophilic v2.0.0-mc1.20u1.20.1
- gpumemleakfix 1.20.1-1.6
- gravestones v1.15
- hearths v1.0.0-mc1.20u1.20.1
- immediatelyfast 1.1.15+1.20.1
\-- net_lenni0451_reflect 1.1.0
- indium 1.0.18+mc1.20
- inventoryhud 3.4.13
- iris 1.6.4
|-- io_github_douira_glsl-transformer 2.0.0-pre13
|-- org_anarres_jcpp 1.4.14
\-- org_antlr_antlr4-runtime 4.11.1
- jade 11.1.4
- jamlib 0.6.0+1.20
- java 18
- kiwi 11.0.0
- krypton 0.2.3
\-- com_velocitypowered_velocity-native 3.2.0-SNAPSHOT
- lambdynlights 2.3.1+1.20.1
|-- pride 1.2.0+1.19.4
\-- spruceui 5.0.0+1.20
- lazydfu 0.1.3
- letmedespawn fabric-1.20-1.1.0
- litematica 0.15.3
- lithium 0.11.2
- malilib 0.16.1
- mavapi 1.1.1
- mavm 1.2.4
- memoryleakfix 1.1.1
- midnightlib 1.4.1
- mindfuldarkness 8.0.0
- minecraft 1.20.1
- minihud 0.27.0
- modernfix 5.1.1+mc1.20.1
- modmenu 7.1.0
- moreculling 1.20-0.18.1
\-- conditional-mixin 0.3.2
- moremobvariants 1.2.2
- mousetweaks 2.25
- naturescompass 1.20.1-2.2.1-fabric
- neruina 1.1.0
- nicer-skies 1.2.0
- overflowingbars 8.0.0
- owo 0.11.1+1.20
\-- blue_endless_jankson 1.2.2
- paperdoll 8.0.0
- pickupnotifier 8.0.0
- polymorph 0.49.0+1.20.1
|-- cardinal-components-base 5.2.1
|-- cardinal-components-block 5.2.1
|-- cardinal-components-entity 5.2.1
|-- cardinal-components-item 5.2.1
\-- spectrelib 0.13.13+1.20.1
- presencefootsteps 1.9.0
\-- kirin 1.15.0
- puzzleslib 8.0.7
- reacharound 1.1.2
- reeses-sodium-options 1.5.1+mc1.20-build.74
- regions_unexplored 0.4.1+1.20.1
- repurposed_structures 7.0.0+1.20-fabric
- rightclickharvest 3.2.2+1.19.x-1.20.1-fabric
- roughlyenoughitems 12.0.626
\-- error_notifier 1.0.9
- searchables 1.0.1
- servercore 1.3.7+1.20.1
|-- com_electronwill_night-config_core 3.6.6
|-- com_electronwill_night-config_toml 3.6.6
|-- fabric-permissions-api-v0 0.2-SNAPSHOT
\-- placeholder-api 2.1.1+1.20
- smoothchunk 1.20.1-3.0
- snowrealmagic 9.0.0
- sodium 0.4.10+build.27
- sodium-extra 0.4.20+mc1.20-build.103
|-- caffeineconfig 1.1.0+1.17
\-- crowdin-translate 1.4+1.19.3
- soulfired 3.2.0.0
- sound_physics_remastered 1.20.1-1.1.1
- starlight 1.1.2+fabric.dbc156f
- structureessentials 1.20.1-2.9
- terrablender 3.0.0.165
- tia 1.20-1.1
- travelersbackpack 1.20.1-9.1.1
- traverse 7.0.8
|-- biolith 1.0.0-alpha.8
| \-- terraform-surfaces-api-v1 7.0.1
|-- terraform-biome-remapper-api-v1 7.0.1
|-- terraform-config-api-v1 7.0.1
|-- terraform-surfaces-api-v1 7.0.1
|-- terraform-tree-api-v1 7.0.1
|-- terraform-wood-api-v1 7.0.1
|-- traverse-client 7.0.8
|-- traverse-common 7.0.8
\-- traverse-worldgen 7.0.8
- trinkets 3.7.0
- universal_ores 1.5.2
- variantbarrels 3.0
- variantchiseledbookshelves 1.0
- waterdripsound 1.19-0.3.2
- xaeroarrowfix 1.3+1.20
- xaerominimap 23.5.0
- xaeroworldmap 1.30.6
- yet_another_config_lib_v3 3.0.3+1.20
|-- com_twelvemonkeys_common_common-image 3.9.4
|-- com_twelvemonkeys_common_common-io 3.9.4
|-- com_twelvemonkeys_common_common-lang 3.9.4
|-- com_twelvemonkeys_imageio_imageio-core 3.9.4
|-- com_twelvemonkeys_imageio_imageio-metadata 3.9.4
\-- com_twelvemonkeys_imageio_imageio-webp 3.9.4
- zoomify 2.10.0
|-- com_akuleshov7_ktoml-core-jvm 0.4.1
|-- dev_isxander_settxi_settxi-core 2.10.6
\-- dev_isxander_settxi_settxi-kotlinx-serialization 2.10.6
```
| 1.0 | [Bug] After placing an 8th End Stone Stalactite going down to connect it to another 8 End Stone Stalactites going up, the game crashes. - ### What happened?
The title is what happened, but this exact same thing also happens with the Cave Moss Endstone Stalactite. What I expected to happen was them connecting to each other with no issues like with every other stalactite in the screenshot attached. I've attached the latest.logs below as well because I couldn't post with them in the "Relevant log output" section.

[latest.log](https://github.com/quiqueck/BetterEnd/files/11970731/latest.log)
[latest (1).log](https://github.com/quiqueck/BetterEnd/files/11970732/latest.1.log)
### BetterEnd
4.0.7
### BCLib
3.0.10
### Fabric API
0.85.0
### Fabric Loader
0.14.21
### Minecraft
1.20.1
### Relevant log output
_No response_
### Other Mods
```shell
[12:11:27] [main/INFO]: Loading 226 mods:
- alternate-current 1.7.0
- ambientsounds 5.2.20
- appleskin 2.5.0+mc1.20
- architectury 9.0.8
- betteradvancements 0.3.2.160
- betterarcheology 1.0.2
- betterfpsdist 1.20.1-2.7
- betternether 9.0.7
- biomemusic 1.20.1-1.6
- cardinal-components 5.2.1
|-- cardinal-components-base 5.2.1
|-- cardinal-components-block 5.2.1
|-- cardinal-components-chunk 5.2.1
|-- cardinal-components-entity 5.2.1
|-- cardinal-components-item 5.2.1
|-- cardinal-components-level 5.2.1
|-- cardinal-components-scoreboard 5.2.1
\-- cardinal-components-world 5.2.1
- cavedust 1.4.1
\-- kirin 1.15.0
- charmofundying 6.4.2+1.20.1
\-- spectrelib 0.13.13+1.20.1
- chunksending 1.20.1-2.5
- citresewn 1.1.3+1.20
\-- citresewn-defaults 1.1.3+1.20
- cloth-config 11.0.99
\-- cloth-basic-math 0.6.1
- clumps 12.0.0.3
- comforts 6.3.3+1.20.1
|-- cardinal-components-base 5.2.1
\-- cardinal-components-entity 5.2.1
- completionistsindex 8.0.0
- continuity 3.0.0-beta.2+1.20
- controlling 12.0.1
- creativecore 2.10.24
\-- net_minecraftforge_eventbus 6.0.3
- dark-loading-screen 1.6.14
- dawn 5.0.0
|-- terraform-shapes-api-v1 7.0.1
\-- terraform-wood-api-v1 7.0.1
- debugify 1.20.1+1.1
\-- com_github_llamalad7_mixinextras 0.2.0-beta.8
- dynamicfps 2.4.0
\-- com_moandjiezana_toml_toml4j 0.7.2
- eatinganimationid 1.9.4+1.20
- enhancedblockentities 0.9+1.20
|-- advanced_runtime_resource_pack 0.6.7
\-- spruceui 5.0.0+1.20
- entity_model_features 0.2.11
- entity_texture_features 4.4.4
\-- org_apache_httpcomponents_httpmime 4.5.10
- entityculling 1.6.2-mc1.20
- fabric-api 0.85.0+1.20.1
|-- fabric-api-base 0.4.29+b04edc7a77
|-- fabric-api-lookup-api-v1 1.6.34+4d8536c977
|-- fabric-biome-api-v1 13.0.10+b3afc78b77
|-- fabric-block-api-v1 1.0.9+e022e5d177
|-- fabric-blockrenderlayer-v1 1.1.39+b3afc78b77
|-- fabric-client-tags-api-v1 1.1.0+97bb207577
|-- fabric-command-api-v1 1.2.32+f71b366f77
|-- fabric-command-api-v2 2.2.11+b3afc78b77
|-- fabric-commands-v0 0.2.49+df3654b377
|-- fabric-containers-v0 0.1.61+df3654b377
|-- fabric-content-registries-v0 4.0.8+b3afc78b77
|-- fabric-convention-tags-v1 1.5.3+b3afc78b77
|-- fabric-crash-report-info-v1 0.2.18+aeb40ebe77
|-- fabric-data-generation-api-v1 12.1.12+b3afc78b77
|-- fabric-dimensions-v1 2.1.51+b3afc78b77
|-- fabric-entity-events-v1 1.5.21+b3afc78b77
|-- fabric-events-interaction-v0 0.6.0+b3afc78b77
|-- fabric-events-lifecycle-v0 0.2.61+df3654b377
|-- fabric-game-rule-api-v1 1.0.38+b04edc7a77
|-- fabric-item-api-v1 2.1.26+b3afc78b77
|-- fabric-item-group-api-v1 4.0.8+40e50c4677
|-- fabric-key-binding-api-v1 1.0.36+fb8d95da77
|-- fabric-keybindings-v0 0.2.34+df3654b377
|-- fabric-lifecycle-events-v1 2.2.20+b3afc78b77
|-- fabric-loot-api-v2 1.1.38+b3afc78b77
|-- fabric-loot-tables-v1 1.1.42+9e7660c677
|-- fabric-message-api-v1 5.1.6+b3afc78b77
|-- fabric-mining-level-api-v1 2.1.48+b3afc78b77
|-- fabric-models-v0 0.3.35+b3afc78b77
|-- fabric-networking-api-v1 1.3.8+b3afc78b77
|-- fabric-networking-v0 0.3.48+df3654b377
|-- fabric-object-builder-api-v1 11.1.0+6beca84877
|-- fabric-particles-v1 1.1.0+201a23a077
|-- fabric-recipe-api-v1 1.0.18+b3afc78b77
|-- fabric-registry-sync-v0 2.2.6+b3afc78b77
|-- fabric-renderer-api-v1 3.1.0+c154966e77
|-- fabric-renderer-indigo 1.4.0+c154966e77
|-- fabric-renderer-registries-v1 3.2.44+df3654b377
|-- fabric-rendering-data-attachment-v1 0.3.33+b3afc78b77
|-- fabric-rendering-fluids-v1 3.0.26+b3afc78b77
|-- fabric-rendering-v0 1.1.47+df3654b377
|-- fabric-rendering-v1 3.0.6+b3afc78b77
|-- fabric-resource-conditions-api-v1 2.3.5+ea08f9d877
|-- fabric-resource-loader-v0 0.11.8+e3d6ed2577
|-- fabric-screen-api-v1 2.0.6+b3afc78b77
|-- fabric-screen-handler-api-v1 1.3.27+b3afc78b77
|-- fabric-sound-api-v1 1.0.12+b3afc78b77
|-- fabric-transfer-api-v1 3.2.3+43a3fedd77
\-- fabric-transitive-access-wideners-v1 4.2.0+b3afc78b77
- fabric-language-kotlin 1.9.6+kotlin.1.8.22
|-- org_jetbrains_kotlin_kotlin-reflect 1.8.22
|-- org_jetbrains_kotlin_kotlin-stdlib 1.8.22
|-- org_jetbrains_kotlin_kotlin-stdlib-jdk7 1.8.22
|-- org_jetbrains_kotlin_kotlin-stdlib-jdk8 1.8.22
|-- org_jetbrains_kotlinx_atomicfu-jvm 0.21.0
|-- org_jetbrains_kotlinx_kotlinx-coroutines-core-jvm 1.7.1
|-- org_jetbrains_kotlinx_kotlinx-coroutines-jdk8 1.7.1
|-- org_jetbrains_kotlinx_kotlinx-datetime-jvm 0.4.0
|-- org_jetbrains_kotlinx_kotlinx-serialization-cbor-jvm 1.5.1
|-- org_jetbrains_kotlinx_kotlinx-serialization-core-jvm 1.5.1
\-- org_jetbrains_kotlinx_kotlinx-serialization-json-jvm 1.5.1
- fabricloader 0.14.21
- fallingleaves 1.15.1+1.20.1
- fallingtree 4.2.0
- ferritecore 6.0.0
- forgeconfigapiport 8.0.0
- fwaystones 3.1.2+mc1.20
- geckolib 4.2
\-- com_eliotlash_mclib_mclib 20
- geophilic v2.0.0-mc1.20u1.20.1
- gpumemleakfix 1.20.1-1.6
- gravestones v1.15
- hearths v1.0.0-mc1.20u1.20.1
- immediatelyfast 1.1.15+1.20.1
\-- net_lenni0451_reflect 1.1.0
- indium 1.0.18+mc1.20
- inventoryhud 3.4.13
- iris 1.6.4
|-- io_github_douira_glsl-transformer 2.0.0-pre13
|-- org_anarres_jcpp 1.4.14
\-- org_antlr_antlr4-runtime 4.11.1
- jade 11.1.4
- jamlib 0.6.0+1.20
- java 18
- kiwi 11.0.0
- krypton 0.2.3
\-- com_velocitypowered_velocity-native 3.2.0-SNAPSHOT
- lambdynlights 2.3.1+1.20.1
|-- pride 1.2.0+1.19.4
\-- spruceui 5.0.0+1.20
- lazydfu 0.1.3
- letmedespawn fabric-1.20-1.1.0
- litematica 0.15.3
- lithium 0.11.2
- malilib 0.16.1
- mavapi 1.1.1
- mavm 1.2.4
- memoryleakfix 1.1.1
- midnightlib 1.4.1
- mindfuldarkness 8.0.0
- minecraft 1.20.1
- minihud 0.27.0
- modernfix 5.1.1+mc1.20.1
- modmenu 7.1.0
- moreculling 1.20-0.18.1
\-- conditional-mixin 0.3.2
- moremobvariants 1.2.2
- mousetweaks 2.25
- naturescompass 1.20.1-2.2.1-fabric
- neruina 1.1.0
- nicer-skies 1.2.0
- overflowingbars 8.0.0
- owo 0.11.1+1.20
\-- blue_endless_jankson 1.2.2
- paperdoll 8.0.0
- pickupnotifier 8.0.0
- polymorph 0.49.0+1.20.1
|-- cardinal-components-base 5.2.1
|-- cardinal-components-block 5.2.1
|-- cardinal-components-entity 5.2.1
|-- cardinal-components-item 5.2.1
\-- spectrelib 0.13.13+1.20.1
- presencefootsteps 1.9.0
\-- kirin 1.15.0
- puzzleslib 8.0.7
- reacharound 1.1.2
- reeses-sodium-options 1.5.1+mc1.20-build.74
- regions_unexplored 0.4.1+1.20.1
- repurposed_structures 7.0.0+1.20-fabric
- rightclickharvest 3.2.2+1.19.x-1.20.1-fabric
- roughlyenoughitems 12.0.626
\-- error_notifier 1.0.9
- searchables 1.0.1
- servercore 1.3.7+1.20.1
|-- com_electronwill_night-config_core 3.6.6
|-- com_electronwill_night-config_toml 3.6.6
|-- fabric-permissions-api-v0 0.2-SNAPSHOT
\-- placeholder-api 2.1.1+1.20
- smoothchunk 1.20.1-3.0
- snowrealmagic 9.0.0
- sodium 0.4.10+build.27
- sodium-extra 0.4.20+mc1.20-build.103
|-- caffeineconfig 1.1.0+1.17
\-- crowdin-translate 1.4+1.19.3
- soulfired 3.2.0.0
- sound_physics_remastered 1.20.1-1.1.1
- starlight 1.1.2+fabric.dbc156f
- structureessentials 1.20.1-2.9
- terrablender 3.0.0.165
- tia 1.20-1.1
- travelersbackpack 1.20.1-9.1.1
- traverse 7.0.8
|-- biolith 1.0.0-alpha.8
| \-- terraform-surfaces-api-v1 7.0.1
|-- terraform-biome-remapper-api-v1 7.0.1
|-- terraform-config-api-v1 7.0.1
|-- terraform-surfaces-api-v1 7.0.1
|-- terraform-tree-api-v1 7.0.1
|-- terraform-wood-api-v1 7.0.1
|-- traverse-client 7.0.8
|-- traverse-common 7.0.8
\-- traverse-worldgen 7.0.8
- trinkets 3.7.0
- universal_ores 1.5.2
- variantbarrels 3.0
- variantchiseledbookshelves 1.0
- waterdripsound 1.19-0.3.2
- xaeroarrowfix 1.3+1.20
- xaerominimap 23.5.0
- xaeroworldmap 1.30.6
- yet_another_config_lib_v3 3.0.3+1.20
|-- com_twelvemonkeys_common_common-image 3.9.4
|-- com_twelvemonkeys_common_common-io 3.9.4
|-- com_twelvemonkeys_common_common-lang 3.9.4
|-- com_twelvemonkeys_imageio_imageio-core 3.9.4
|-- com_twelvemonkeys_imageio_imageio-metadata 3.9.4
\-- com_twelvemonkeys_imageio_imageio-webp 3.9.4
- zoomify 2.10.0
|-- com_akuleshov7_ktoml-core-jvm 0.4.1
|-- dev_isxander_settxi_settxi-core 2.10.6
\-- dev_isxander_settxi_settxi-kotlinx-serialization 2.10.6
```
| non_priority | after placing an end stone stalactite going down to connect it to another end stone stalactites going up the game crashes what happened the title is what happened but this exact same thing also happens with the cave moss endstone stalactite what i expected to happen was them connecting to each other with no issues like with every other stalactite in the screenshot attached i ve attached the latest logs below aswell because i couldn t post with them in the relevant log output section betterend bclib fabric api fabric loader minecraft relevant log output no response other mods shell loading mods alternate current ambientsounds appleskin architectury betteradvancements betterarcheology betterfpsdist betternether biomemusic cardinal components cardinal components base cardinal components block cardinal components chunk cardinal components entity cardinal components item cardinal components level cardinal components scoreboard cardinal components world cavedust kirin charmofundying spectrelib chunksending citresewn citresewn defaults cloth config cloth basic math clumps comforts cardinal components base cardinal components entity completionistsindex continuity beta controlling creativecore net minecraftforge eventbus dark loading screen dawn terraform shapes api terraform wood api debugify com github mixinextras beta dynamicfps com moandjiezana toml eatinganimationid enhancedblockentities advanced runtime resource pack spruceui entity model features entity texture features org apache httpcomponents httpmime entityculling fabric api fabric api base fabric api lookup api fabric biome api fabric block api fabric blockrenderlayer fabric client tags api fabric command api fabric command api fabric commands fabric containers fabric content registries fabric convention tags fabric crash report info fabric data generation api fabric dimensions fabric entity events fabric events interaction fabric events lifecycle fabric game rule api fabric item api fabric item 
group api fabric key binding api fabric keybindings fabric lifecycle events fabric loot api fabric loot tables fabric message api fabric mining level api fabric models fabric networking api fabric networking fabric object builder api fabric particles fabric recipe api fabric registry sync fabric renderer api fabric renderer indigo fabric renderer registries fabric rendering data attachment fabric rendering fluids fabric rendering fabric rendering fabric resource conditions api fabric resource loader fabric screen api fabric screen handler api fabric sound api fabric transfer api fabric transitive access wideners fabric language kotlin kotlin org jetbrains kotlin kotlin reflect org jetbrains kotlin kotlin stdlib org jetbrains kotlin kotlin stdlib org jetbrains kotlin kotlin stdlib org jetbrains kotlinx atomicfu jvm org jetbrains kotlinx kotlinx coroutines core jvm org jetbrains kotlinx kotlinx coroutines org jetbrains kotlinx kotlinx datetime jvm org jetbrains kotlinx kotlinx serialization cbor jvm org jetbrains kotlinx kotlinx serialization core jvm org jetbrains kotlinx kotlinx serialization json jvm fabricloader fallingleaves fallingtree ferritecore forgeconfigapiport fwaystones geckolib com eliotlash mclib mclib geophilic gpumemleakfix gravestones hearths immediatelyfast net reflect indium inventoryhud iris io github douira glsl transformer org anarres jcpp org antlr runtime jade jamlib java kiwi krypton com velocitypowered velocity native snapshot lambdynlights pride spruceui lazydfu letmedespawn fabric litematica lithium malilib mavapi mavm memoryleakfix midnightlib mindfuldarkness minecraft minihud modernfix modmenu moreculling conditional mixin moremobvariants mousetweaks naturescompass fabric neruina nicer skies overflowingbars owo blue endless jankson paperdoll pickupnotifier polymorph cardinal components base cardinal components block cardinal components entity cardinal components item spectrelib presencefootsteps kirin puzzleslib reacharound reeses 
sodium options build regions unexplored repurposed structures fabric rightclickharvest x fabric roughlyenoughitems error notifier searchables servercore com electronwill night config core com electronwill night config toml fabric permissions api snapshot placeholder api smoothchunk snowrealmagic sodium build sodium extra build caffeineconfig crowdin translate soulfired sound physics remastered starlight fabric structureessentials terrablender tia travelersbackpack traverse biolith alpha terraform surfaces api terraform biome remapper api terraform config api terraform surfaces api terraform tree api terraform wood api traverse client traverse common traverse worldgen trinkets universal ores variantbarrels variantchiseledbookshelves waterdripsound xaeroarrowfix xaerominimap xaeroworldmap yet another config lib com twelvemonkeys common common image com twelvemonkeys common common io com twelvemonkeys common common lang com twelvemonkeys imageio imageio core com twelvemonkeys imageio imageio metadata com twelvemonkeys imageio imageio webp zoomify com ktoml core jvm dev isxander settxi settxi core dev isxander settxi settxi kotlinx serialization | 0 |
52,704 | 6,650,433,288 | IssuesEvent | 2017-09-28 16:18:15 | HewlettPackard/mds | https://api.github.com/repos/HewlettPackard/mds | opened | Ensure .h files #include what they need | 3 medium imported type: redesign | [imported from HPE issue 173]
Each .h file should include everything it needs (and, ideally, nothing else). To test this, we can add a build target that walks through the .h files and generates .cpp files that just include them and builds them. | 1.0 | Ensure .h files #include what they need - [imported from HPE issue 173]
Each .h file should include everything it needs (and, ideally, nothing else). To test this, we can add a build target that walks through the .h files and generates .cpp files that just include them and builds them. | non_priority | ensure h files include what they need each h file should include everything it needs and ideally nothing else to test this we can add a build target that walks through the h files and generates cpp files that just include them and builds them | 0 |
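The build-target idea in the row above — generate one .cpp per header that does nothing but include it, then compile each stub in isolation — can be sketched in a few lines. This is an illustrative script, not MDS's actual build system; header stems are assumed unique, and the compile command is shown only as a comment:

```python
from pathlib import Path

def generate_header_check_sources(header_root, out_dir):
    """For each .h under header_root, write a .cpp that includes only that
    header.  Compiling each stub alone fails exactly when the header is not
    self-contained (i.e. it silently relies on its users' includes)."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    stubs = []
    for header in sorted(Path(header_root).rglob("*.h")):
        stub = out / (header.stem + "_check.cpp")  # stems assumed unique
        stub.write_text('#include "%s"\n' % header.resolve())
        stubs.append(stub)
    return stubs

# A build target would then run something like
#   c++ -std=c++17 -fsyntax-only build/header_checks/foo_check.cpp
# for every generated stub and report any header that fails to compile alone.
```

Using `-fsyntax-only` keeps the check cheap, since no object files are produced.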
346,697 | 24,887,163,193 | IssuesEvent | 2022-10-28 08:46:44 | zylee348/ped | https://api.github.com/repos/zylee348/ped | opened | UG has no photos to reference | type.DocumentationBug severity.VeryLow | Quite difficult to see how things work or to know what is expected of the product without any screenshots of the product. Rather hard to use and test.
<!--session: 1666943824292-4279cd82-1050-4733-b649-3a6348d902be-->
<!--Version: Web v3.4.4--> | 1.0 | UG has no photos to reference - Quite difficult to see how things work or to know what is expected of the product without any screenshots of the product. Rather hard to use and test.
<!--session: 1666943824292-4279cd82-1050-4733-b649-3a6348d902be-->
<!--Version: Web v3.4.4--> | non_priority | ug has no photos to reference quite difficult to see how things work or to know what is expected of the product without any screenshots of the product rather hard to use and test | 0 |
99,426 | 30,450,294,674 | IssuesEvent | 2023-07-16 07:48:02 | neu5/rally-online | https://api.github.com/repos/neu5/rally-online | closed | Implement basic rooms | :game_die: game mechanics :dolphin: medium :building_construction: dev | In the socket.io implementation of rooms, a socket leaves the room automatically on disconnection, which is not desirable | 1.0 | Implement basic rooms - In the socket.io implementation of rooms, a socket leaves the room automatically on disconnection, which is not desirable | non_priority | implement basic rooms in socket io implementation of rooms socket leaves the room automatically on disconnection which is not desirable | 0 |
145,942 | 22,835,635,502 | IssuesEvent | 2022-07-12 16:22:50 | cagov/design-system | https://api.github.com/repos/cagov/design-system | closed | State template component data and usage tracking | Content Design Research State Web Template BETA | How can we better understand what components are being used in the state template currently?
Would like to track the following if possible:
- Component Usage patterns
- Component searches
Is there a way to track this type of data? | 1.0 | State template component data and usage tracking - How can we better understand what components are being used in the state template currently?
Would like to track the following if possible:
- Component Usage patterns
- Component searches
Is there a way to track this type of data? | non_priority | state template component data and usage tracking how can we better understand what components are being used in the state template currently would like to track the following if possible component usage patterns component searches is this a way to track this type of data | 0 |
288,095 | 24,882,768,273 | IssuesEvent | 2022-10-28 03:47:08 | MPMG-DCC-UFMG/F01 | https://api.github.com/repos/MPMG-DCC-UFMG/F01 | closed | Teste de generalizacao para a tag Orçamento - Execução - Doresópolis | generalization test development template - Memory (66) tag - Orçamento subtag - Execução | DoD: Realizar o teste de Generalização do validador da tag Orçamento - Execução para o Município de Doresópolis. | 1.0 | Teste de generalizacao para a tag Orçamento - Execução - Doresópolis - DoD: Realizar o teste de Generalização do validador da tag Orçamento - Execução para o Município de Doresópolis. | non_priority | teste de generalizacao para a tag orçamento execução doresópolis dod realizar o teste de generalização do validador da tag orçamento execução para o município de doresópolis | 0 |
456,382 | 13,150,508,294 | IssuesEvent | 2020-08-09 11:58:17 | adonisjs/lucid | https://api.github.com/repos/adonisjs/lucid | closed | @column.dateTime() cannot be used on nullable columns | Priority: Medium Semver: Patch Status: Accepted Type: Bug | ### Issue
> TS2345: Argument of type 'Notification' is not assignable to parameter of type '{ seenAt: DateTime; }'.
>   Types of property 'seenAt' are incompatible.
>     Type 'DateTime | null' is not assignable to type 'DateTime'.
>       Type 'null' is not assignable to type 'DateTime'.
### Example code

```ts
@column.dateTime()
public seenAt: DateTime | null;
```

Same for `@column.date()`
### Package version
- `@adonisjs/core: 5.0.0-preview-rc-1.9`
- `@adonisjs/lucid: 8.2.2` | 1.0 | @column.dateTime() cannot be used on nullable columns - ### Issue
> TS2345: Argument of type 'Notification' is not assignable to parameter of type '{ seenAt: DateTime; }'.
Types of property 'seenAt' are incompatible.
Type 'DateTime | null' is not assignable to type 'DateTime'.
Type 'null' is not assignable to type 'DateTime'.
### Example code

```ts
@column.dateTime()
public seenAt: DateTime | null;
```

Same for `@column.date()`
### Package version
- `@adonisjs/core: 5.0.0-preview-rc-1.9`
- `@adonisjs/lucid: 8.2.2` | priority | column datetime cannot be used on nullable columns issue argument of type notification is not assignable to parameter of type seenat datetime types of property seenat are incompatible type datetime null is not assignable to type datetime type null is not assignable to type datetime example code column datetime public seenat datetime null same for column date package version adonisjs core preview rc adonisjs lucid | 1 |
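The type error in the row above is a plain nullability mismatch: a consumer typed to require a non-null `DateTime` rejects a model whose column may be `null`. The same failure mode can be shown at runtime in Python (the names here are illustrative, not the Lucid API):

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Notification:
    # analogous to `@column.dateTime() public seenAt: DateTime | null`
    seen_at: Optional[datetime] = None

def format_seen(n: Notification) -> str:
    # consumer that assumes seen_at is never None -- the shape the TS
    # compiler rejects above, and a runtime crash on nullable data here
    return n.seen_at.isoformat()

def format_seen_safe(n: Notification) -> Optional[str]:
    # widened consumer: the fix is to accept the nullable type explicitly
    return n.seen_at.isoformat() if n.seen_at is not None else None

n = Notification()          # seen_at is None: the column was never set
print(format_seen_safe(n))  # -> None
```

The widened signature corresponds to patching the decorator's expected property type to `DateTime | null`, which is what the issue's "Semver: Patch" label suggests.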
72,339 | 3,384,428,667 | IssuesEvent | 2015-11-27 02:17:10 | mctaggaj/SolidArc_AdminPortal | https://api.github.com/repos/mctaggaj/SolidArc_AdminPortal | closed | Edit Team Details has no functionality | Difficulty: Easy Priority: Low Type: Enhancement | The page is empty other than the title. Maybe add a text box where we can change the name or something. Put things we would change if it worked with the database.
It's not going to work, but we can just say it's the database team's fault. | 1.0 | Edit Team Details has no functionality - The page is empty other than the title. Maybe add a text box where we can change the name or something. Put things we would change if it worked with the database.
Its not going to work, but we can just say its the database teams fault. | priority | edit team details has no functionality the page is empty other than the title maybe add a text box where we can change the name or something put things we would change if it worked with the database its not going to work but we can just say its the database teams fault | 1 |
77,793 | 15,569,881,156 | IssuesEvent | 2021-03-17 01:12:38 | Magani-Stack/AngleView | https://api.github.com/repos/Magani-Stack/AngleView | opened | CVE-2020-28463 (Medium) detected in reportlab-3.5.42-cp27-cp27mu-manylinux2010_x86_64.whl | security vulnerability | ## CVE-2020-28463 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>reportlab-3.5.42-cp27-cp27mu-manylinux2010_x86_64.whl</b></p></summary>
<p>The Reportlab Toolkit</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/dd/5d/aba3f29d2290c23df058c3d351074e463d465b7a9b038656df2e78e22eed/reportlab-3.5.42-cp27-cp27mu-manylinux2010_x86_64.whl">https://files.pythonhosted.org/packages/dd/5d/aba3f29d2290c23df058c3d351074e463d465b7a9b038656df2e78e22eed/reportlab-3.5.42-cp27-cp27mu-manylinux2010_x86_64.whl</a></p>
<p>Path to dependency file: AngleView</p>
<p>Path to vulnerable library: AngleView,AngleView/requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **reportlab-3.5.42-cp27-cp27mu-manylinux2010_x86_64.whl** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
All versions of package reportlab are vulnerable to Server-side Request Forgery (SSRF) via img tags. In order to reduce risk, use trustedSchemes & trustedHosts (see in Reportlab's documentation) Steps to reproduce by Karan Bamal: 1. Download and install the latest package of reportlab 2. Go to demos -> odyssey -> dodyssey 3. In the text file odyssey.txt that needs to be converted to pdf inject <img src="http://127.0.0.1:5000" valign="top"/> 4. Create a nc listener nc -lp 5000 5. Run python3 dodyssey.py 6. You will get a hit on your nc showing we have successfully proceeded to send a server side request 7. dodyssey.py will show error since there is no img file on the url, but we are able to do SSRF
<p>Publish Date: 2021-02-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28463>CVE-2020-28463</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-28463 (Medium) detected in reportlab-3.5.42-cp27-cp27mu-manylinux2010_x86_64.whl - ## CVE-2020-28463 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>reportlab-3.5.42-cp27-cp27mu-manylinux2010_x86_64.whl</b></p></summary>
<p>The Reportlab Toolkit</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/dd/5d/aba3f29d2290c23df058c3d351074e463d465b7a9b038656df2e78e22eed/reportlab-3.5.42-cp27-cp27mu-manylinux2010_x86_64.whl">https://files.pythonhosted.org/packages/dd/5d/aba3f29d2290c23df058c3d351074e463d465b7a9b038656df2e78e22eed/reportlab-3.5.42-cp27-cp27mu-manylinux2010_x86_64.whl</a></p>
<p>Path to dependency file: AngleView</p>
<p>Path to vulnerable library: AngleView,AngleView/requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **reportlab-3.5.42-cp27-cp27mu-manylinux2010_x86_64.whl** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
All versions of package reportlab are vulnerable to Server-side Request Forgery (SSRF) via img tags. In order to reduce risk, use trustedSchemes & trustedHosts (see in Reportlab's documentation) Steps to reproduce by Karan Bamal: 1. Download and install the latest package of reportlab 2. Go to demos -> odyssey -> dodyssey 3. In the text file odyssey.txt that needs to be converted to pdf inject <img src="http://127.0.0.1:5000" valign="top"/> 4. Create a nc listener nc -lp 5000 5. Run python3 dodyssey.py 6. You will get a hit on your nc showing we have successfully proceeded to send a server side request 7. dodyssey.py will show error since there is no img file on the url, but we are able to do SSRF
<p>Publish Date: 2021-02-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28463>CVE-2020-28463</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve medium detected in reportlab whl cve medium severity vulnerability vulnerable library reportlab whl the reportlab toolkit library home page a href path to dependency file angleview path to vulnerable library angleview angleview requirements txt dependency hierarchy x reportlab whl vulnerable library vulnerability details all versions of package reportlab are vulnerable to server side request forgery ssrf via img tags in order to reduce risk use trustedschemes trustedhosts see in reportlab s documentation steps to reproduce by karan bamal download and install the latest package of reportlab go to demos odyssey dodyssey in the text file odyssey txt that needs to be converted to pdf inject create a nc listener nc lp run dodyssey py you will get a hit on your nc showing we have successfully proceded to send a server side request dodyssey py will show error since there is no img file on the url but we are able to do ssrf publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href step up your open source security game with whitesource | 0 |
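The mitigation the advisory points at (reportlab's `trustedSchemes`/`trustedHosts` settings) boils down to an allowlist check on every fetched image URL. A minimal stand-alone sketch of that idea, using only the standard library (the constant names and the example CDN host here are illustrative assumptions, not reportlab's actual API):

```python
from urllib.parse import urlparse

# Allowlist mirroring what trustedSchemes / trustedHosts enforce;
# the host below is an assumed image CDN, not a reportlab default.
TRUSTED_SCHEMES = {"https"}
TRUSTED_HOSTS = {"static.example.com"}

def is_trusted_image_url(url):
    """Accept an image URL only if both its scheme and host are allow-listed."""
    parts = urlparse(url)
    return parts.scheme in TRUSTED_SCHEMES and parts.hostname in TRUSTED_HOSTS
```

With these settings, the proof-of-concept URL `http://127.0.0.1:5000` is rejected (wrong scheme and host), while images from the allow-listed CDN pass.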
462,720 | 13,252,531,211 | IssuesEvent | 2020-08-20 05:36:56 | wso2/kubernetes-pipeline | https://api.github.com/repos/wso2/kubernetes-pipeline | closed | Upgrade Prometheus operator and Spinnaker charts | Priority/Normal Type/Improvement | **Description:**
Upgrade Prometheus operator and Spinnaker charts - 2020/08/18 | 1.0 | Upgrade Prometheus operator and Spinnaker charts - **Description:**
Upgrade Prometheus operator and Spinnaker charts - 2020/08/18 | priority | upgrade prometheus operator and spinnaker charts description upgrade prometheus operator and spinnaker charts | 1 |
63,013 | 17,330,200,910 | IssuesEvent | 2021-07-28 00:21:46 | dkfans/keeperfx | https://api.github.com/repos/dkfans/keeperfx | closed | Arrow gets through rebound shield in 'Standard' rule set | Type-Defect | In the unreleased versions, the 'standard' rule set has arrow set to 'strength based', which causes it to go through rebound.
Arrows should be rebounded. | 1.0 | Arrow gets through rebound shield in 'Standard' rule set - In the unreleased versions, the 'standard' rule set has arrow set to 'strength based', which causes it to go through rebound.
Arrows should be rebounded. | non_priority | arrow gets through rebound shield in standard rule set in the unreleased versions the standard rule set has arrow set to strength based which causes it to go through rebound arrows should be rebounded | 0 |
66,510 | 12,796,828,549 | IssuesEvent | 2020-07-02 11:10:59 | SleepyTrousers/EnderIO | https://api.github.com/repos/SleepyTrousers/EnderIO | closed | disabling ender io refined storage conduits does not disable the import/export | 1.12 Code Complete bug | #### Issue Description:
When you attach an ender io refined storage conduit to a machine and supply a filter (item or fluid), it will import/export from the RS system. There is another checkbox to enable/disable the import or export. Disabling the RS conduit does not prevent items/fluids from being imported/exported.
#### What happens:
Import/export of items/fluids still occurs even when conduit is checked disabled.
#### What you expected to happen:
That disabling the conduit will disable import/export. I don't think I should need to remove the filter.
#### Steps to reproduce:
Put an RS conduit on a machine (Sag Mill), put a basic item filter with coal in it, watch coal be exported from system. Uncheck the enabled box on the RS conduit section. Coal will still be exported from RS to Sag Mill.
...
____
#### Affected Versions (Do *not* use "latest"):
- EnderIO: 1.12.2-5.1.5.2
- EnderCore: 1.12.2-0.5.73
- Minecraft: 1.12.2
- Forge: 14.23.5.2847
- SpongeForge? yes/no No
- Optifine? yes/no No
- Single Player and/or Server? Single Player
#### Your most recent log file where the issue was present:
http://batman.gyptis.org/zerobin/?70142000909baecf#y32hNVuHM8CCpbYSEbdCqew1FT4/NjVt8n8IbgwSVBc=
[pastebin/gist/etc link here]
| 1.0 | disabling ender io refined storage conduits does not disable the import/export - #### Issue Description:
When you attach an ender io refined storage conduit to a machine and supply a filter (item or fluid), it will import/export from the RS system. There is another checkbox to enable/disable the import or export. Disabling the RS conduit does not prevent items/fluids from being imported/exported.
#### What happens:
Import/export of items/fluids still occurs even when the conduit's enabled checkbox is unchecked.
#### What you expected to happen:
That disabling the conduit will disable import/export. I don't think I should need to remove the filter.
#### Steps to reproduce:
Put an RS conduit on a machine (Sag Mill), put a basic item filter with coal in it, watch coal be exported from system. Uncheck the enabled box on the RS conduit section. Coal will still be exported from RS to Sag Mill.
...
____
#### Affected Versions (Do *not* use "latest"):
- EnderIO: 1.12.2-5.1.5.2
- EnderCore: 1.12.2-0.5.73
- Minecraft: 1.12.2
- Forge: 14.23.5.2847
- SpongeForge? yes/no No
- Optifine? yes/no No
- Single Player and/or Server? Single Player
#### Your most recent log file where the issue was present:
http://batman.gyptis.org/zerobin/?70142000909baecf#y32hNVuHM8CCpbYSEbdCqew1FT4/NjVt8n8IbgwSVBc=
[pastebin/gist/etc link here]
| non_priority | disabling ender io refined storage conduits does not disable the import export issue description when you attach an ender io refined storage conduit to a machine and supply a filter item or fluid it will import export from the rs system there is another checkbox to enable disable the import or export disabling the rs conduit does not prevent items fluids from being imported exported what happens import export of items fluids still occurs even when conduit is checked disabled what you expected to happen that disabling the conduit will disable import export i don t think i should need to remove the filter steps to reproduce put an rs conduit on a machine sag mill put a basic item filter with coal in it watch coal be exported from system uncheck the enabled box on the rs conduit section coal will still be exported from rs to sag mill affected versions do not use latest enderio endercore minecraft forge spongeforge yes no no optifine yes no no single player and or server single player your most recent log file where the issue was present | 0 |
253,664 | 8,058,897,263 | IssuesEvent | 2018-08-02 20:02:21 | kubeapps/kubeapps | https://api.github.com/repos/kubeapps/kubeapps | opened | Ensure nginx vhost mount paths are correct with latest Bitnami nginx image | component/cli component/frontend-proxy component/helm-chart priority/backlog | According to https://github.com/bitnami/bitnami-docker-nginx#debian-9-1140-r25-and-ol-7-1140-r46 the path for mounting the nginx vhost has changed from /bitnami/nginx to /opt/bitnami/nginx. We should ensure we update our manifests to mount the vhosts in the right paths. | 1.0 | Ensure nginx vhost mount paths are correct with latest Bitnami nginx image - According to https://github.com/bitnami/bitnami-docker-nginx#debian-9-1140-r25-and-ol-7-1140-r46 the path for mounting the nginx vhost has changed from /bitnami/nginx to /opt/bitnami/nginx. We should ensure we update our manifests to mount the vhosts in the right paths. | priority | ensure nginx vhost mount paths are correct with latest bitnami nginx image according to the path for mounting the nginx vhost has changed from bitnami nginx to opt bitnami nginx we should ensure we update our manifests to mount the vhosts in the right paths | 1 |
13,583 | 10,333,185,607 | IssuesEvent | 2019-09-03 04:04:57 | APSIMInitiative/ApsimX | https://api.github.com/repos/APSIMInitiative/ApsimX | closed | The enabled flag isn't respected when models are copied from Replacements | bug interface/infrastructure | This happens at the time of running a simulation. | 1.0 | The enabled flag isn't respected when models are copied from Replacements - This happens at the time of running a simulation. | non_priority | the enabled flag isn t respected when models are copied from replacements this happens at the time of running a simulation | 0 |
462,032 | 13,239,967,193 | IssuesEvent | 2020-08-19 05:07:20 | wso2/product-apim | https://api.github.com/repos/wso2/product-apim | opened | JWT sub claim mismatch for same super tenant user for different grant types | Priority/Normal Type/Bug | ### Description:
The **sub** claim value in a JWT token obtained using client credentials grant for a non-admin user will be as follows.
```"sub": "user1@carbon.super"```
A JWT token obtained using a password grant for the same user for the same app will be as follows.
```"sub": "user1"```
Because of this, the application throttling does not work as expected, since the throttle key differs for the requests.
### Steps to reproduce:
1. Generate a client credentials token for a non-admin super tenant user
2. Generate a password grant token for the same user for the same application
| 1.0 | JWT sub claim mismatch for same super tenant user for different grant types - ### Description:
The **sub** claim value in a JWT token obtained using client credentials grant for a non-admin user will be as follows.
```"sub": "user1@carbon.super"```
A JWT token obtained using a password grant for the same user for the same app will be as follows.
```"sub": "user1"```
Because of this, the application throttling does not work as expected, since the throttle key differs for the requests.
### Steps to reproduce:
1. Generate a client credentials token for a non-admin super tenant user
2. Generate a password grant token for the same user for the same application
| priority | jwt sub claim mismatch for same super tenant user for different grant types description the sub claim value in a jwt token obtained using client credentials grant for a non admin user will be as follows sub carbon super a jwt token obtained using a password grant for the same user for the same app will be as follows sub because of this the application throttling does not work as expected since the throttle key differs for the requests steps to reproduce generate a client credentials token for a non admin super tenant user generate a password grant token for the same user for the same application | 1 |
326,201 | 27,979,269,600 | IssuesEvent | 2023-03-26 00:21:17 | F4KER-X/TalentVault-SOEN-341-Project-2023 | https://api.github.com/repos/F4KER-X/TalentVault-SOEN-341-Project-2023 | opened | UAT 11.1 Applicants view statistics | User Story 11 user acceptance test |
**User Acceptance Flow**
1. User as applicant is sent to a page that lists statistics about their applications
2. User can view how many jobs they've applied to, how many are pending, how many they've been selected to interview for, and how many rejections they have | 1.0 | UAT 11.1 Applicants view statistics -
**User Acceptance Flow**
1. User as applicant is sent to a page that lists statistics about their applications
2. User can view how many jobs they've applied to, how many are pending, how many they've been selected to interview for, and how many rejections they have | non_priority | uat applicants view statistics user acceptance flow user as applicant is sent to a page that lists statistics about their applications user can view how many jobs they ve applied to how many jobs are pending how many jobs they ve been selected for an interview and how many rejections they have | 0 |
186,652 | 14,403,159,601 | IssuesEvent | 2020-12-03 15:44:08 | eclipse/openj9 | https://api.github.com/repos/eclipse/openj9 | closed | jdk8 sanity.functional cmdLineTest_sigabrtHandlingTest_0 consumes excessive heap/disk space? | comp:test test failure | Testcase: sanity.functional cmdLineTest_sigabrtHandlingTest_0 consumes Gb's of disk space with core dumps, typically causing AdoptOpenJDK AIX machines to run out of disk space, see: https://github.com/AdoptOpenJDK/openjdk-tests/issues/2075
This runs test https://github.com/eclipse/openj9/blob/master/test/functional/cmdline_options_testresources/src/VMBench/GPTests/GPTest.java
and on some AIX machines the dumps are between 6-9 GB each, and this testcase produces 5 of them, so that's roughly 40 GB of space required.
It is puzzling why this simple test is producing such a large core dump. Is there a memory leak? Is the heap pre-allocating to a % of the physical memory?
Either way, is it possible to alter the test behaviour to not produce such a large dump?
| 2.0 | jdk8 sanity.functional cmdLineTest_sigabrtHandlingTest_0 consumes excessive heap/disk space? - Testcase: sanity.functional cmdLineTest_sigabrtHandlingTest_0 consumes Gb's of disk space with core dumps, typically causing AdoptOpenJDK AIX machines to run out of disk space, see: https://github.com/AdoptOpenJDK/openjdk-tests/issues/2075
This runs test https://github.com/eclipse/openj9/blob/master/test/functional/cmdline_options_testresources/src/VMBench/GPTests/GPTest.java
and on some AIX machines the dumps are between 6-9 GB each, and this testcase produces 5 of them, so that's roughly 40 GB of space required.
It is puzzling why this simple test is producing such a large core dump. Is there a memory leak? Is the heap pre-allocating to a % of the physical memory?
Either way, is it possible to alter the test behaviour to not produce such a large dump?
| non_priority | sanity functional cmdlinetest sigabrthandlingtest consumes excessive heap disk space testcase sanity functional cmdlinetest sigabrthandlingtest consumes gb s of disk space with core dumps typically causing adoptopenjdk aix machines to run out of disk space see this runs test and on some aix machines the dumps are between each and this testcase produces of them so that s roughly of space required it is puzzling why this simple test is producing such a large core dump is there a memory leak is the heap pre allocating to a of the physical memory either way is it possible to alter the test behaviour to not produce such a large dump | 0 |
30,290 | 2,723,432,650 | IssuesEvent | 2015-04-14 12:35:35 | CruxFramework/crux-widgets | https://api.github.com/repos/CruxFramework/crux-widgets | closed | Create a method to avoid duplicated requests on critical server methods | CruxCore enhancement imported Milestone-2.2.0 Priority-Medium | _From [tr_busta...@yahoo.com.br](https://code.google.com/u/115454294030253308352/) on March 19, 2010 14:40:33_
The purpose of this is to avoid duplicated processing of some sensitive
methods.
The solution could be based on the Synchronizer Token pattern
_Original issue: http://code.google.com/p/crux-framework/issues/detail?id=42_ | 1.0 | Create a method to avoid duplicated requests on critical server methods - _From [tr_busta...@yahoo.com.br](https://code.google.com/u/115454294030253308352/) on March 19, 2010 14:40:33_
The purpose of this is to avoid duplicated processing of some sensitive
methods.
The solution could be based on the Synchronizer Token pattern
_Original issue: http://code.google.com/p/crux-framework/issues/detail?id=42_ | priority | create a method to avoid duplicated requests on critical server methods from on march the purpose of this it to avoid duplicated processing of some sensitive methods the solution could be based on synchronizer token pattern original issue | 1 |
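The Synchronizer Token pattern mentioned above issues a one-time server-side token per request form, so a replayed or duplicated submission is rejected. A minimal sketch of the server-side bookkeeping (class and method names are illustrative, not Crux APIs):

```python
import secrets

class TokenStore:
    """Minimal server-side store of one-time synchronizer tokens."""

    def __init__(self):
        self._valid = set()

    def issue(self):
        # Generated server-side and embedded in the page/form that will
        # invoke the critical server method.
        token = secrets.token_hex(16)
        self._valid.add(token)
        return token

    def consume(self, token):
        # A token is accepted exactly once; replays and unknown tokens fail.
        if token in self._valid:
            self._valid.remove(token)
            return True
        return False
```

Usage: the first `consume(token)` for an issued token returns `True` and the method runs; any second submission with the same token returns `False` and is dropped.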
74,814 | 9,121,791,973 | IssuesEvent | 2019-02-23 01:28:03 | GoogleCloudPlatform/k8s-cluster-bundle | https://api.github.com/repos/GoogleCloudPlatform/k8s-cluster-bundle | closed | Rethink minRequirements design | design thinking | For now, the min requirements and component API version will be removed. This sort of version dependency is hard and needs a more careful design. | 1.0 | Rethink minRequirements design - For now, the min requirements and component API version will be removed. This sort of version dependency is hard and needs a more careful design. | non_priority | rethink minrequirements design for now the min requirements and component api version will be removed this sort of version dependency is hard and needs a more careful design | 0 |
22,063 | 2,644,956,286 | IssuesEvent | 2015-03-12 19:48:41 | acardona/CATMAID | https://api.github.com/repos/acardona/CATMAID | closed | Selection table: add way to hide only arbor | context: 3d-viewer priority: low type: enhancement | should hide whole actor, if shift pressed, only hide arbor.
does not hide soma spheres | 1.0 | Selection table: add way to hide only arbor - should hide whole actor, if shift pressed, only hide arbor.
does not hide soma spheres | priority | selection table add way to hide only arbor should hide whole actor if shift pressed only hide arbor does not hide soma spheres | 1 |
187,753 | 15,107,536,997 | IssuesEvent | 2021-02-08 15:32:11 | kubernetes-client/python | https://api.github.com/repos/kubernetes-client/python | closed | How would I go about listing services and their IPs? | kind/documentation | https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/#python-client
I'm trying to figure out how to do something similar to the example in the link, but listing services and their IPs, rather than pods and their IPs, and then print out said list of IPs. | 1.0 | How would I go about listing services and their IPs? - https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/#python-client
I'm trying to figure out how to do something similar to the example in the link, but listing services and their IPs, rather than pods and their IPs, and then print out said list of IPs. | non_priority | how would i go about listing services and their ips i m trying to figure out how to do something similar to the example in the link but listing services and their ips rather than pods and their ips and then print out said list of ips | 0 |
63,753 | 15,715,784,515 | IssuesEvent | 2021-03-28 03:22:39 | salewski/ads-github-tools | https://api.github.com/repos/salewski/ads-github-tools | closed | ads-github-tools: prep release 0.3.3 | area:build-system area:documentation area:packaging area:website priority: 1 (now) status:in-progress type:task | - [x] prep branch for release and tag
- [x] produce release artifacts
- [x] update README.md
- [x] update project web site
| 1.0 | ads-github-tools: prep release 0.3.3 - - [x] prep branch for release and tag
- [x] produce release artifacts
- [x] update README.md
- [x] update project web site
| non_priority | ads github tools prep release prep branch for release and tag produce release artifacts update readme md update project web site | 0 |
515,985 | 14,973,395,466 | IssuesEvent | 2021-01-28 00:58:50 | fasten-project/fasten-web | https://api.github.com/repos/fasten-project/fasten-web | opened | Represent packages with no versions | Priority: Medium bug enhancement good first issue | ## Is your feature request related to a problem? Please describe.
In the FASTEN system, you can find packages that don't have any versions. At the current stage, the frontend application is incapable of handling these.
## Describe the solution you'd like
The application must comply with the situation and represent the package as it is.
There are 2 parts:
1. Not all packages may have the latest version included in the response (`version` in `src/requests/payloads/package-payload.ts`). Thus, make it nullable.
2. If `version` is null, we can assume there are no versions for the package. In this case, render the page with information available and a proper message that clarifies the situation.
## Additional context
Relevant: [fasten-project/fasten#206](https://github.com/fasten-project)
| 1.0 | Represent packages with no versions - ## Is your feature request related to a problem? Please describe.
In the FASTEN system, you can find packages that don't have any versions. At the current stage, the frontend application is incapable of handling these.
## Describe the solution you'd like
The application must comply with the situation and represent the package as it is.
There are 2 parts:
1. Not all packages may have the latest version included in the response (`version` in `src/requests/payloads/package-payload.ts`). Thus, make it nullable.
2. If `version` is null, we can assume there are no versions for the package. In this case, render the page with information available and a proper message that clarifies the situation.
## Additional context
Relevant: [fasten-project/fasten#206](https://github.com/fasten-project)
| priority | represent packages with no versions is your feature request related to a problem please describe in the fasten system you can find packages that don t have any versions and in the current stage the frontend application is uncapable of handling these describe the solution you d like the application must comply with the situation and represent the package as it is there are parts not all packages may have the latest version included in the response version in src requests payloads package payload ts thus make it nullable if version is null we can assume there are no versions for the package in this case render the page with information available and a proper message that clarifies the situation additional context relevant | 1 |
552,099 | 16,195,022,312 | IssuesEvent | 2021-05-04 13:38:10 | cerner/terra-core | https://api.github.com/repos/cerner/terra-core | closed | [terra-profile-image] Placeholder image seems to have extra space in its div | :package: terra-profile-image Priority: High Up Next - BLR bug | # Bug Report
## Description
<!-- A clear and concise description of what the bug is. -->
<!-- Providing a link to a live example / minimal demo of the problem greatly helps us debug issues. -->
This problem was discovered in the demographics banner: during loading of a profile image, there was a slight difference in the size of the banner between showing the placeholder image and showing the desired image. Upon inspection of the documentation at https://engineering.cerner.com/terra-ui/components/terra-profile-image/profile-image/profile-image it appears to happen there as well.
## Steps to Reproduce
<!-- Please specify the exact steps you took for this bug to occur. -->
<!-- Provide as much detail as possible so we're able to reproduce these steps. -->
1. Navigate to https://engineering.cerner.com/terra-ui/components/terra-profile-image/profile-image/profile-image
2. Inspect divs around Successful Profile Image and Failed Profile Image and compare the heights of the divs
## Additional Context / Screenshots
<!-- Add any other context about the problem here. If applicable, add screenshots to help explain. -->


## Expected Behavior
<!-- A clear and concise description of what you expected to happen. -->
I would expect both images to have the same size so that things like the demographics banner do not resize upon loading.
## Possible Solution
<!--- If you have suggestions to fix the bug, let us know -->
## Environment
<!-- Include as many relevant details about the environment you experienced the bug in -->
* Component Name and Version: terra-profile-image
* Browser Name and Version: Google Chrome Version 89.0.4389.90 (Official Build) (x86_64)
* Node/npm Version: [e.g. Node 8/npm 5]
* Webpack Version:
* Operating System and version (desktop or mobile): MacOS Catalina 10.15.7
## @ Mentions
<!-- @ Mention anyone on the terra team that you have been working with so far. -->
| 1.0 | [terra-profile-image] Placeholder image seems to have extra space in its div - # Bug Report
## Description
<!-- A clear and concise description of what the bug is. -->
<!-- Providing a link to a live example / minimal demo of the problem greatly helps us debug issues. -->
This problem was discovered in the demographics banner: during loading of a profile image, there was a slight difference in the size of the banner between showing the placeholder image and showing the desired image. Upon inspection of the documentation at https://engineering.cerner.com/terra-ui/components/terra-profile-image/profile-image/profile-image it appears to happen there as well.
## Steps to Reproduce
<!-- Please specify the exact steps you took for this bug to occur. -->
<!-- Provide as much detail as possible so we're able to reproduce these steps. -->
1. Navigate to https://engineering.cerner.com/terra-ui/components/terra-profile-image/profile-image/profile-image
2. Inspect divs around Successful Profile Image and Failed Profile Image and compare the heights of the divs
## Additional Context / Screenshots
<!-- Add any other context about the problem here. If applicable, add screenshots to help explain. -->


## Expected Behavior
<!-- A clear and concise description of what you expected to happen. -->
I would expect both images to have the same size so that things like the demographics banner do not resize upon loading.
## Possible Solution
<!--- If you have suggestions to fix the bug, let us know -->
## Environment
<!-- Include as many relevant details about the environment you experienced the bug in -->
* Component Name and Version: terra-profile-image
* Browser Name and Version: Google Chrome Version 89.0.4389.90 (Official Build) (x86_64)
* Node/npm Version: [e.g. Node 8/npm 5]
* Webpack Version:
* Operating System and version (desktop or mobile): MacOS Catalina 10.15.7
## @ Mentions
<!-- @ Mention anyone on the terra team that you have been working with so far. -->
| priority | placeholder image seems to have extra space in its div bug report description this problem was discovered in demographics banner during loading of a profile image there was a slight difference in size of the banner between showing placeholder image and showing the desired image upon inspection of the documentation at it appears to happen there as well steps to reproduce navigate to inspect divs around successful profile image and failed profile image and compare the heights of the divs additional context screenshots expected behavior i would expect both images to have the same size so that things like the demographics banner does not resize upon loading possible solution environment component name and version terra profile image browser name and version google chrome version official build node npm version webpack version operating system and version desktop or mobile macos catalina mentions | 1 |
67,104 | 9,001,040,217 | IssuesEvent | 2019-02-04 00:45:26 | VlachosGroup/pMuTT | https://api.github.com/repos/VlachosGroup/pMuTT | closed | Documentation Improvements | documentation | - [x] [Thermdat section of Input and Output ](https://vlachosgroup.github.io/PyMuTT/io.html#id4) needs an example reading thermdat files
- [x] [Examples page](https://vlachosgroup.github.io/PyMuTT/examples.html) only has NASA examples
- [x] Headers in [NASA Polynomial Input Example](https://vlachosgroup.github.io/PyMuTT/io.html#nasa-polynomial-input-example) are outdated ('~' delimiter should be switched to the '.' delimiter).
| 1.0 | Documentation Improvements - - [x] [Thermdat section of Input and Output ](https://vlachosgroup.github.io/PyMuTT/io.html#id4) needs an example reading thermdat files
- [x] [Examples page](https://vlachosgroup.github.io/PyMuTT/examples.html) only has NASA examples
- [x] Headers in [NASA Polynomial Input Example](https://vlachosgroup.github.io/PyMuTT/io.html#nasa-polynomial-input-example) are outdated ('~' delimiter should be switched to the '.' delimiter).
| non_priority | documentation improvements needs an example reading thermdat files only has nasa examples headers in are outdated delimiter should be switched to the delimiter | 0 |
26,383 | 6,767,136,306 | IssuesEvent | 2017-10-26 01:20:38 | ahmedahamid/temp-third | https://api.github.com/repos/ahmedahamid/temp-third | closed | Create Example: CSLinqExtension | bug CodePlexMigrationInitiated Impact: Low | Demonstrate how to create some LINQ extension in C# application, such as Dynamic LINQ, recursive query in LINQ, etc.
#### This work item was migrated from CodePlex
CodePlex work item ID: '2668'
Vote count: '1'
| 1.0 | Create Example: CSLinqExtension - Demonstrate how to create some LINQ extension in C# application, such as Dynamic LINQ, recursive query in LINQ, etc.
#### This work item was migrated from CodePlex
CodePlex work item ID: '2668'
Vote count: '1'
| non_priority | create example cslinqextension demonstrate how to create some linq extension in c application such as dynamic linq recursive query in linq etc this work item was migrated from codeplex codeplex work item id vote count | 0 |
508,121 | 14,690,158,202 | IssuesEvent | 2021-01-02 13:53:02 | fossasia/open-event-frontend | https://api.github.com/repos/fossasia/open-event-frontend | closed | event: save as draft shows unexpected error | Priority: High Priority: Urgent bug | **Describe the bug**
shows unexpected behaviour ( all required fields are filled ) please see the video
https://www.loom.com/share/475a4bb3abcd4c5c925ee6db2af44b23
**Expected behaviour**
it should save the event as a draft
**Desktop (please complete the following information):**
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]
**Smartphone (please complete the following information):**
- Device: [e.g. iPhone6]
- OS: [e.g. iOS8.1]
- Browser [e.g. stock browser, safari]
- Version [e.g. 22]
**Additional context**
<!-- Add any other context about the problem here. -->
| 2.0 | event: save as draft shows unexpected error - **Describe the bug**
shows unexpected behaviour ( all required fields are filled ) please see the video
https://www.loom.com/share/475a4bb3abcd4c5c925ee6db2af44b23
**Expected behaviour**
it should save the event as a draft
**Desktop (please complete the following information):**
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]
**Smartphone (please complete the following information):**
- Device: [e.g. iPhone6]
- OS: [e.g. iOS8.1]
- Browser [e.g. stock browser, safari]
- Version [e.g. 22]
**Additional context**
<!-- Add any other context about the problem here. -->
| priority | event save as draft shows unexpected error describe the bug shows unexpected behaviour all required fields are filled please see the video expected behaviour it should save the event as a draft desktop please complete the following information os browser version smartphone please complete the following information device os browser version additional context | 1 |
457,230 | 13,153,359,449 | IssuesEvent | 2020-08-10 03:01:28 | openemr/openemr | https://api.github.com/repos/openemr/openemr | opened | Update CCD generator templating | 2015 ONC Priority: Blocking | Currently generator will conform to MU2 however it only produces a Continuity of Care document. Just like imports we need to support other document types such as Transfer of Care etc.
Need to bring the documents up to the current formatting version.
This is prob the most important requirement to reach for 2015. | 1.0 | Update CCD generator templating - Currently generator will conform to MU2 however it only produces a Continuity of Care document. Just like imports we need to support other document types such as Transfer of Care etc.
Need to bring the documents up to the current formatting version.
This is prob the most important requirement to reach for 2015. | priority | update ccd generator templating currently generator will conform to however it only produces a continuity of care document just like imports we need to support other document types such as transfer of care etc need to bring the documents up to the current formatting version this is prob the most important requirement to reach for | 1 |
674,173 | 23,041,797,452 | IssuesEvent | 2022-07-23 08:53:18 | HughCraig/TLCMap | https://api.github.com/repos/HughCraig/TLCMap | opened | Share and Embed Visualisations | enhancement priority 2 | Make the 'share' button more like everywhere else on the internet. IE: click a conventional share icon, which has 'copy link' and 'embed' option that gives you the embed code snippet. | 1.0 | Share and Embed Visualisations - Make the 'share' button more like everywhere else on the internet. IE: click a conventional share icon, which has 'copy link' and 'embed' option that gives you the embed code snippet. | priority | share and embed visualisations make the share button more like everywhere else on the internet ie click a conventional share icon which has copy link and embed option that gives you the embed code snippet | 1 |
11,427 | 2,610,120,977 | IssuesEvent | 2015-02-26 18:37:33 | chrsmith/scribefire-chrome | https://api.github.com/repos/chrsmith/scribefire-chrome | closed | Mass picture upload | auto-migrated Milestone-1.6 Priority-High Type-Enhancement | ```
I have tried figuring out if this feature exists or not, but it does not seem
to.. or my brain is simply not working correctly. but well.
I would love the ability attach A LOT of photos at the same time, I travel blog
from India and share them in a photo album style directly on the blog.
Most importantly I would love the ability to make the photos click able, when i
upload them at the moment, they dont support clicking etc.
I hope this makes just a bit sense..
best wishes and thanks for a great project.
:)
```
-----
Original issue reported on code.google.com by `Synon...@gmail.com` on 4 Oct 2010 at 9:14
* Merged into: #93 | 1.0 | Mass picture upload - ```
I have tried figuring out if this feature exists or not, but it does not seem
to.. or my brain is simply not working correctly. but well.
I would love the ability attach A LOT of photos at the same time, I travel blog
from India and share them in a photo album style directly on the blog.
Most importantly I would love the ability to make the photos click able, when i
upload them at the moment, they dont support clicking etc.
I hope this makes just a bit sense..
best wishes and thanks for a great project.
:)
```
-----
Original issue reported on code.google.com by `Synon...@gmail.com` on 4 Oct 2010 at 9:14
* Merged into: #93 | priority | mass picture upload i have tried figuring out if this feature exists or not but it does not seem to or my brain is simply not working correctly but well i would love the ability attach a lot of photos at the same time i travel blog from india and share them in a photo album style directly on the blog most importantly i would love the ability to make the photos click able when i upload them at the moment they dont support clicking etc i hope this makes just a bit sense best wishes and thanks for a great project original issue reported on code google com by synon gmail com on oct at merged into | 1 |
663,784 | 22,206,497,996 | IssuesEvent | 2022-06-07 15:16:22 | fpdcc/ccfp-asset-dashboard | https://api.github.com/repos/fpdcc/ccfp-asset-dashboard | closed | Add all sections | high priority | Add these sections:
- Planning/Feasibility
- Preliminary Engineering
- Design Engineering
- Construction
- Construction Engineering
- Maintenance/Repair | 1.0 | Add all sections - Add these sections:
- Planning/Feasibility
- Preliminary Engineering
- Design Engineering
- Construction
- Construction Engineering
- Maintenance/Repair | priority | add all sections add these sections planning feasibility preliminary engineering design engineering construction construction engineering maintenance repair | 1 |
529,987 | 15,414,432,872 | IssuesEvent | 2021-03-05 00:17:57 | mina-andrawis/LKLD | https://api.github.com/repos/mina-andrawis/LKLD | closed | Character movement | high priority | Movement and physics scripts are written but character still have no movement abilities. | 1.0 | Character movement - Movement and physics scripts are written but character still have no movement abilities. | priority | character movement movement and physics scripts are written but character still have no movement abilities | 1 |
137,595 | 5,312,835,584 | IssuesEvent | 2017-02-13 10:17:53 | pmem/issues | https://api.github.com/repos/pmem/issues | opened | unit tests: obj_bucket/TEST0: SETUP (all/pmem/debug/pmemcheck) fails | Exposure: Low OS: Linux Priority: 4 low Type: Bug | Revision: 0fd509d73382160069b98525597976e20d98f1ea
> obj_bucket/TEST0: SETUP (all/pmem/debug/pmemcheck)
> obj_bucket/TEST0: START: obj_bucket
> obj_bucket/TEST0 crashed (signal 11). err0.log below.
> {ut_backtrace.c:203 ut_sighandler} obj_bucket/TEST0:
>
> {ut_backtrace.c:204 ut_sighandler} obj_bucket/TEST0: Signal 11, backtrace:
> {ut_backtrace.c:120 ut_dump_backtrace} obj_bucket/TEST0: 0: ./obj_bucket (ut_sighandler+0x52) [0x436915] [0x36915]
> {ut_backtrace.c:120 ut_dump_backtrace} obj_bucket/TEST0: 1: /lib/x86_64-linux-gnu/libc.so.6 (killpg+0x40) [0x54aa4ef] [0x354ef]
> {ut_backtrace.c:120 ut_dump_backtrace} obj_bucket/TEST0: 2: ./obj_bucket (bucket_insert_block+0x3b) [0x404558] [0x4558]
> {ut_backtrace.c:120 ut_dump_backtrace} obj_bucket/TEST0: 3: ./obj_bucket (test_bucket_insert_get+0xfb) [0x40322c] [0x322c]
> {ut_backtrace.c:120 ut_dump_backtrace} obj_bucket/TEST0: 4: ./obj_bucket (main+0x44) [0x4034fb] [0x34fb]
> {ut_backtrace.c:120 ut_dump_backtrace} obj_bucket/TEST0: 5: /lib/x86_64-linux-gnu/libc.so.6 (__libc_start_main+0xf0) [0x5495830] [0x20830]
> {ut_backtrace.c:120 ut_dump_backtrace} obj_bucket/TEST0: 6: ./obj_bucket (_start+0x29) [0x402f39] [0x2f39]
> {ut_backtrace.c:120 ut_dump_backtrace} obj_bucket/TEST0: 7: ? (?+0x29) [0x29] [0x0]
> {ut_backtrace.c:206 ut_sighandler} obj_bucket/TEST0:
>
> pmemcheck0.log below.
> obj_bucket/TEST0 pmemcheck0.log ==27933== pmemcheck-0.2, a simple persistent store checker
> obj_bucket/TEST0 pmemcheck0.log ==27933== Copyright (c) 2014-2016, Intel Corporation
> obj_bucket/TEST0 pmemcheck0.log ==27933== Using Valgrind-3.10.1 and LibVEX; rerun with -h for copyright info
> obj_bucket/TEST0 pmemcheck0.log ==27933== Command: ./obj_bucket
> obj_bucket/TEST0 pmemcheck0.log ==27933== Parent PID: 27884
> obj_bucket/TEST0 pmemcheck0.log ==27933==
> obj_bucket/TEST0 pmemcheck0.log ==27933==
> obj_bucket/TEST0 pmemcheck0.log ==27933== Number of stores not made persistent: 0
> obj_bucket/TEST0 pmemcheck0.log ==27933== ERROR SUMMARY: 0 errors
>
> out0.log below.
> obj_bucket/TEST0 out0.log obj_bucket/TEST0: START: obj_bucket
> obj_bucket/TEST0 out0.log ./obj_bucket
>
> pmem0.log below.
> obj_bucket/TEST0 pmem0.log <libpmem>: <1> [out.c:244 out_init] pid 27933: program: /home/jenkins/workspace/nvml_fork_tests_valgrind_force_enable_ubuntu/src/test/obj_bucket/obj_bucket
> obj_bucket/TEST0 pmem0.log <libpmem>: <1> [out.c:246 out_init] libpmem version 1.0
> obj_bucket/TEST0 pmem0.log <libpmem>: <1> [out.c:247 out_init] src version SRCVERSION:1.2+wtp1-241-g0fd509d
> obj_bucket/TEST0 pmem0.log <libpmem>: <1> [out.c:255 out_init] compiled with support for Valgrind pmemcheck
> obj_bucket/TEST0 pmem0.log <libpmem>: <1> [out.c:260 out_init] compiled with support for Valgrind helgrind
> obj_bucket/TEST0 pmem0.log <libpmem>: <1> [out.c:265 out_init] compiled with support for Valgrind memcheck
> obj_bucket/TEST0 pmem0.log <libpmem>: <1> [out.c:270 out_init] compiled with support for Valgrind drd
> obj_bucket/TEST0 pmem0.log <libpmem>: <3> [mmap.c:59 util_mmap_init]
> obj_bucket/TEST0 pmem0.log <libpmem>: <3> [libpmem.c:56 libpmem_init]
> obj_bucket/TEST0 pmem0.log <libpmem>: <3> [pmem.c:1197 pmem_init]
> obj_bucket/TEST0 pmem0.log <libpmem>: <3> [pmem.c:1163 pmem_get_cpuinfo] clflush supported
> obj_bucket/TEST0 pmem0.log <libpmem>: <3> [pmem.c:1141 pmem_log_cpuinfo] using clflush
> obj_bucket/TEST0 pmem0.log <libpmem>: <3> [pmem.c:1148 pmem_log_cpuinfo] using movnt
> obj_bucket/TEST0 pmem0.log <libpmem>: <3> [libpmem.c:69 libpmem_fini]
>
> pmemobj0.log below.
> obj_bucket/TEST0 pmemobj0.log <libpmemobj>: <1> [out.c:244 out_init] pid 27933: program: /home/jenkins/workspace/nvml_fork_tests_valgrind_force_enable_ubuntu/src/test/obj_bucket/obj_bucket
> obj_bucket/TEST0 pmemobj0.log <libpmemobj>: <1> [out.c:246 out_init] libpmemobj version 2.0
> obj_bucket/TEST0 pmemobj0.log <libpmemobj>: <1> [out.c:247 out_init] src version SRCVERSION:1.2+wtp1-241-g0fd509d
> obj_bucket/TEST0 pmemobj0.log <libpmemobj>: <1> [out.c:255 out_init] compiled with support for Valgrind pmemcheck
> obj_bucket/TEST0 pmemobj0.log <libpmemobj>: <1> [out.c:260 out_init] compiled with support for Valgrind helgrind
> obj_bucket/TEST0 pmemobj0.log <libpmemobj>: <1> [out.c:265 out_init] compiled with support for Valgrind memcheck
> obj_bucket/TEST0 pmemobj0.log <libpmemobj>: <1> [out.c:270 out_init] compiled with support for Valgrind drd
> obj_bucket/TEST0 pmemobj0.log <libpmemobj>: <3> [mmap.c:59 util_mmap_init]
> obj_bucket/TEST0 pmemobj0.log <libpmemobj>: <3> [libpmemobj.c:52 libpmemobj_init]
> obj_bucket/TEST0 pmemobj0.log <libpmemobj>: <3> [obj.c:180 obj_init]
> obj_bucket/TEST0 pmemobj0.log <libpmemobj>: <3> [set.c:95 util_remote_init]
> obj_bucket/TEST0 pmemobj0.log <libpmemobj>: <3> [libpmemobj.c:65 libpmemobj_fini]
> obj_bucket/TEST0 pmemobj0.log <libpmemobj>: <3> [obj.c:209 obj_fini]
> obj_bucket/TEST0 pmemobj0.log <libpmemobj>: <3> [set.c:107 util_remote_fini]
>
> RUNTESTS: stopping: obj_bucket//TEST0 failed, TEST=all FS=pmem BUILD=debug
> | 1.0 | unit tests: obj_bucket/TEST0: SETUP (all/pmem/debug/pmemcheck) fails - Revision: 0fd509d73382160069b98525597976e20d98f1ea
> obj_bucket/TEST0: SETUP (all/pmem/debug/pmemcheck)
> obj_bucket/TEST0: START: obj_bucket
> obj_bucket/TEST0 crashed (signal 11). err0.log below.
> {ut_backtrace.c:203 ut_sighandler} obj_bucket/TEST0:
>
> {ut_backtrace.c:204 ut_sighandler} obj_bucket/TEST0: Signal 11, backtrace:
> {ut_backtrace.c:120 ut_dump_backtrace} obj_bucket/TEST0: 0: ./obj_bucket (ut_sighandler+0x52) [0x436915] [0x36915]
> {ut_backtrace.c:120 ut_dump_backtrace} obj_bucket/TEST0: 1: /lib/x86_64-linux-gnu/libc.so.6 (killpg+0x40) [0x54aa4ef] [0x354ef]
> {ut_backtrace.c:120 ut_dump_backtrace} obj_bucket/TEST0: 2: ./obj_bucket (bucket_insert_block+0x3b) [0x404558] [0x4558]
> {ut_backtrace.c:120 ut_dump_backtrace} obj_bucket/TEST0: 3: ./obj_bucket (test_bucket_insert_get+0xfb) [0x40322c] [0x322c]
> {ut_backtrace.c:120 ut_dump_backtrace} obj_bucket/TEST0: 4: ./obj_bucket (main+0x44) [0x4034fb] [0x34fb]
> {ut_backtrace.c:120 ut_dump_backtrace} obj_bucket/TEST0: 5: /lib/x86_64-linux-gnu/libc.so.6 (__libc_start_main+0xf0) [0x5495830] [0x20830]
> {ut_backtrace.c:120 ut_dump_backtrace} obj_bucket/TEST0: 6: ./obj_bucket (_start+0x29) [0x402f39] [0x2f39]
> {ut_backtrace.c:120 ut_dump_backtrace} obj_bucket/TEST0: 7: ? (?+0x29) [0x29] [0x0]
> {ut_backtrace.c:206 ut_sighandler} obj_bucket/TEST0:
>
> pmemcheck0.log below.
> obj_bucket/TEST0 pmemcheck0.log ==27933== pmemcheck-0.2, a simple persistent store checker
> obj_bucket/TEST0 pmemcheck0.log ==27933== Copyright (c) 2014-2016, Intel Corporation
> obj_bucket/TEST0 pmemcheck0.log ==27933== Using Valgrind-3.10.1 and LibVEX; rerun with -h for copyright info
> obj_bucket/TEST0 pmemcheck0.log ==27933== Command: ./obj_bucket
> obj_bucket/TEST0 pmemcheck0.log ==27933== Parent PID: 27884
> obj_bucket/TEST0 pmemcheck0.log ==27933==
> obj_bucket/TEST0 pmemcheck0.log ==27933==
> obj_bucket/TEST0 pmemcheck0.log ==27933== Number of stores not made persistent: 0
> obj_bucket/TEST0 pmemcheck0.log ==27933== ERROR SUMMARY: 0 errors
>
> out0.log below.
> obj_bucket/TEST0 out0.log obj_bucket/TEST0: START: obj_bucket
> obj_bucket/TEST0 out0.log ./obj_bucket
>
> pmem0.log below.
> obj_bucket/TEST0 pmem0.log <libpmem>: <1> [out.c:244 out_init] pid 27933: program: /home/jenkins/workspace/nvml_fork_tests_valgrind_force_enable_ubuntu/src/test/obj_bucket/obj_bucket
> obj_bucket/TEST0 pmem0.log <libpmem>: <1> [out.c:246 out_init] libpmem version 1.0
> obj_bucket/TEST0 pmem0.log <libpmem>: <1> [out.c:247 out_init] src version SRCVERSION:1.2+wtp1-241-g0fd509d
> obj_bucket/TEST0 pmem0.log <libpmem>: <1> [out.c:255 out_init] compiled with support for Valgrind pmemcheck
> obj_bucket/TEST0 pmem0.log <libpmem>: <1> [out.c:260 out_init] compiled with support for Valgrind helgrind
> obj_bucket/TEST0 pmem0.log <libpmem>: <1> [out.c:265 out_init] compiled with support for Valgrind memcheck
> obj_bucket/TEST0 pmem0.log <libpmem>: <1> [out.c:270 out_init] compiled with support for Valgrind drd
> obj_bucket/TEST0 pmem0.log <libpmem>: <3> [mmap.c:59 util_mmap_init]
> obj_bucket/TEST0 pmem0.log <libpmem>: <3> [libpmem.c:56 libpmem_init]
> obj_bucket/TEST0 pmem0.log <libpmem>: <3> [pmem.c:1197 pmem_init]
> obj_bucket/TEST0 pmem0.log <libpmem>: <3> [pmem.c:1163 pmem_get_cpuinfo] clflush supported
> obj_bucket/TEST0 pmem0.log <libpmem>: <3> [pmem.c:1141 pmem_log_cpuinfo] using clflush
> obj_bucket/TEST0 pmem0.log <libpmem>: <3> [pmem.c:1148 pmem_log_cpuinfo] using movnt
> obj_bucket/TEST0 pmem0.log <libpmem>: <3> [libpmem.c:69 libpmem_fini]
>
> pmemobj0.log below.
> obj_bucket/TEST0 pmemobj0.log <libpmemobj>: <1> [out.c:244 out_init] pid 27933: program: /home/jenkins/workspace/nvml_fork_tests_valgrind_force_enable_ubuntu/src/test/obj_bucket/obj_bucket
> obj_bucket/TEST0 pmemobj0.log <libpmemobj>: <1> [out.c:246 out_init] libpmemobj version 2.0
> obj_bucket/TEST0 pmemobj0.log <libpmemobj>: <1> [out.c:247 out_init] src version SRCVERSION:1.2+wtp1-241-g0fd509d
> obj_bucket/TEST0 pmemobj0.log <libpmemobj>: <1> [out.c:255 out_init] compiled with support for Valgrind pmemcheck
> obj_bucket/TEST0 pmemobj0.log <libpmemobj>: <1> [out.c:260 out_init] compiled with support for Valgrind helgrind
> obj_bucket/TEST0 pmemobj0.log <libpmemobj>: <1> [out.c:265 out_init] compiled with support for Valgrind memcheck
> obj_bucket/TEST0 pmemobj0.log <libpmemobj>: <1> [out.c:270 out_init] compiled with support for Valgrind drd
> obj_bucket/TEST0 pmemobj0.log <libpmemobj>: <3> [mmap.c:59 util_mmap_init]
> obj_bucket/TEST0 pmemobj0.log <libpmemobj>: <3> [libpmemobj.c:52 libpmemobj_init]
> obj_bucket/TEST0 pmemobj0.log <libpmemobj>: <3> [obj.c:180 obj_init]
> obj_bucket/TEST0 pmemobj0.log <libpmemobj>: <3> [set.c:95 util_remote_init]
> obj_bucket/TEST0 pmemobj0.log <libpmemobj>: <3> [libpmemobj.c:65 libpmemobj_fini]
> obj_bucket/TEST0 pmemobj0.log <libpmemobj>: <3> [obj.c:209 obj_fini]
> obj_bucket/TEST0 pmemobj0.log <libpmemobj>: <3> [set.c:107 util_remote_fini]
>
> RUNTESTS: stopping: obj_bucket//TEST0 failed, TEST=all FS=pmem BUILD=debug
| priority | unit tests obj bucket setup all pmem debug pmemcheck fails revision obj bucket setup all pmem debug pmemcheck obj bucket start obj bucket obj bucket crashed signal log below ut backtrace c ut sighandler obj bucket ut backtrace c ut sighandler obj bucket signal backtrace ut backtrace c ut dump backtrace obj bucket obj bucket ut sighandler ut backtrace c ut dump backtrace obj bucket lib linux gnu libc so killpg ut backtrace c ut dump backtrace obj bucket obj bucket bucket insert block ut backtrace c ut dump backtrace obj bucket obj bucket test bucket insert get ut backtrace c ut dump backtrace obj bucket obj bucket main ut backtrace c ut dump backtrace obj bucket lib linux gnu libc so libc start main ut backtrace c ut dump backtrace obj bucket obj bucket start ut backtrace c ut dump backtrace obj bucket ut backtrace c ut sighandler obj bucket log below obj bucket log pmemcheck a simple persistent store checker obj bucket log copyright c intel corporation obj bucket log using valgrind and libvex rerun with h for copyright info obj bucket log command obj bucket obj bucket log parent pid obj bucket log obj bucket log obj bucket log number of stores not made persistent obj bucket log error summary errors log below obj bucket log obj bucket start obj bucket obj bucket log obj bucket log below obj bucket log pid program home jenkins workspace nvml fork tests valgrind force enable ubuntu src test obj bucket obj bucket obj bucket log libpmem version obj bucket log src version srcversion obj bucket log compiled with support for valgrind pmemcheck obj bucket log compiled with support for valgrind helgrind obj bucket log compiled with support for valgrind memcheck obj bucket log compiled with support for valgrind drd obj bucket log obj bucket log obj bucket log obj bucket log clflush supported obj bucket log using clflush obj bucket log using movnt obj bucket log log below obj bucket log pid program home jenkins workspace nvml fork tests valgrind force enable ubuntu src test obj bucket obj bucket obj bucket log libpmemobj version obj bucket log src version srcversion obj bucket log compiled with support for valgrind pmemcheck obj bucket log compiled with support for valgrind helgrind obj bucket log compiled with support for valgrind memcheck obj bucket log compiled with support for valgrind drd obj bucket log obj bucket log obj bucket log obj bucket log obj bucket log obj bucket log obj bucket log runtests stopping obj bucket failed test all fs pmem build debug | 1 |
822,294 | 30,863,588,424 | IssuesEvent | 2023-08-03 06:16:36 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | www.chewy.com - site is not usable | browser-firefox priority-normal engine-gecko | <!-- @browser: Firefox 116.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/116.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/125288 -->
**URL**: https://www.chewy.com/app/write-review?id=56751
**Browser / Version**: Firefox 116.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes Edge
**Problem type**: Site is not usable
**Description**: Buttons or links not working
**Steps to Reproduce**:
hitting the submit button. Nothing happened. Not the first time this has happened with websites. Really sad to have to resort to using edge because of issues with firefox. Especially with the constant updates.
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | www.chewy.com - site is not usable - <!-- @browser: Firefox 116.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/116.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/125288 -->
**URL**: https://www.chewy.com/app/write-review?id=56751
**Browser / Version**: Firefox 116.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes Edge
**Problem type**: Site is not usable
**Description**: Buttons or links not working
**Steps to Reproduce**:
hitting the submit button. Nothing happened. Not the first time this has happened with websites. Really sad to have to resort to using edge because of issues with firefox. Especially with the constant updates.
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | priority | site is not usable url browser version firefox operating system windows tested another browser yes edge problem type site is not usable description buttons or links not working steps to reproduce hitting the submit button nothing happened not the first time this has happened with websites really sad to have to resort to using edge because of issues with firefox especially with the constant updates browser configuration none from with ❤️ | 1 |
556,535 | 16,485,394,280 | IssuesEvent | 2021-05-24 17:10:40 | microsoft/terminal | https://api.github.com/repos/microsoft/terminal | closed | Overlapping text in Command Palette submenu after deleting ">" | Area-CmdPal Help Wanted In-PR Issue-Bug Needs-Tag-Fix Needs-Triage Priority-2 Product-Terminal | ### Windows Terminal version (or Windows build number)
1.7.1033.0
### Other Software
_No response_
### Steps to reproduce
1. Press Ctrl+Shift+P. The Command Palette opens.
2. Scroll to the "Select color scheme…" item and click that. The Command Palette displays a list of color schemes.
3. Press Backspace. The list of color schemes disappears, but the "Select color scheme…" title is still visible.
4. Type `cmd`. Do not press Enter.
### Expected Behavior
It should tell me what will happen if I press Enter.
> Executing command line will invoke the following commands:
> New tab, commandline: cmd
### Actual Behavior
The dim "Select color scheme…" title overlaps the description of the command line.

| 1.0 | Overlapping text in Command Palette submenu after deleting ">" - ### Windows Terminal version (or Windows build number)
1.7.1033.0
### Other Software
_No response_
### Steps to reproduce
1. Press Ctrl+Shift+P. The Command Palette opens.
2. Scroll to the "Select color scheme…" item and click that. The Command Palette displays a list of color schemes.
3. Press Backspace. The list of color schemes disappears, but the "Select color scheme…" title is still visible.
4. Type `cmd`. Do not press Enter.
### Expected Behavior
It should tell me what will happen if I press Enter.
> Executing command line will invoke the following commands:
> New tab, commandline: cmd
### Actual Behavior
The dim "Select color scheme…" title overlaps the description of the command line.

| priority | overlapping text in command palette submenu after deleting windows terminal version or windows build number other software no response steps to reproduce press ctrl shift p the command palette opens scroll to the select color scheme… item and click that the command palette displays a list of color schemes press backspace the list of color schemes disappears but the select color scheme… title is still visible type cmd do not press enter expected behavior it should tell me what will happen if i press enter executing command line will invoke the following commands new tab commandline cmd actual behavior the dim select color scheme… title overlaps the description of the command line | 1 |
107,968 | 9,255,691,930 | IssuesEvent | 2019-03-16 12:45:15 | KhronosGroup/MoltenVK | https://api.github.com/repos/KhronosGroup/MoltenVK | closed | XPC_ERROR_CONNECTION_INTERRUPTED on a 'simple' fragment shader | fixed - please test & close spirv-cross | Hello,
I am new to MoltenVK, but I face an issue with a fragment shader.
I am using the MoltenVK from LunarG vulkansdk-macos-1.1.92.1 on MacOs 10.13.1.
I made a minimalist code to reproduce the issue, which seems related to accessing an array of struct with a non const index.
I got this error message:
```
2019-01-13 19:44:41.919480+0100 VulkanFirst[1533:393273] Compiler failed with XPC_ERROR_CONNECTION_INTERRUPTED
2019-01-13 19:44:41.940978+0100 VulkanFirst[1533:393273] Compiler failed with XPC_ERROR_CONNECTION_INTERRUPTED
2019-01-13 19:44:41.964891+0100 VulkanFirst[1533:393276] Compiler failed with XPC_ERROR_CONNECTION_INTERRUPTED
[***MoltenVK ERROR***] VK_ERROR_INITIALIZATION_FAILED: Render pipeline compile failed (error code 1):
Compiler encountered an internal error.
```
Of course, I can modify my data storage, but it is not very convenient on the long run.
What could be my mistake, or is there any way to solve this?
Best regards.
[bug.txt](https://github.com/KhronosGroup/MoltenVK/files/2753376/bug.txt)
| 1.0 | XPC_ERROR_CONNECTION_INTERRUPTED on a 'simple' fragment shader - Hello,
I am new to MoltenVK, but I face an issue with a fragment shader.
I am using the MoltenVK from LunarG vulkansdk-macos-1.1.92.1 on MacOs 10.13.1.
I made a minimalist code to reproduce the issue, which seems related to accessing an array of struct with a non const index.
I got this error message:
```
2019-01-13 19:44:41.919480+0100 VulkanFirst[1533:393273] Compiler failed with XPC_ERROR_CONNECTION_INTERRUPTED
2019-01-13 19:44:41.940978+0100 VulkanFirst[1533:393273] Compiler failed with XPC_ERROR_CONNECTION_INTERRUPTED
2019-01-13 19:44:41.964891+0100 VulkanFirst[1533:393276] Compiler failed with XPC_ERROR_CONNECTION_INTERRUPTED
[***MoltenVK ERROR***] VK_ERROR_INITIALIZATION_FAILED: Render pipeline compile failed (error code 1):
Compiler encountered an internal error.
```
Of course, I can modify my data storage, but it is not very convenient on the long run.
What could be my mistake, or is there any way to solve this?
Best regards.
[bug.txt](https://github.com/KhronosGroup/MoltenVK/files/2753376/bug.txt)
| non_priority | xpc error connection interrupted on a simple fragment shader hello i am new to moltenvk but i face an issue with a fragment shader i am using the moltenvk from lunarg vulkansdk macos on macos i made a minimalist code to reproduce the issue which seems related to accessing an array of struct with a non const index i got this error message vulkanfirst compiler failed with xpc error connection interrupted vulkanfirst compiler failed with xpc error connection interrupted vulkanfirst compiler failed with xpc error connection interrupted vk error initialization failed render pipeline compile failed error code compiler encountered an internal error of course i can modify my data storage but it is not very convenient on the long run what could be my mistake or is there any way to solve this best regards | 0 |
328,434 | 9,994,907,528 | IssuesEvent | 2019-07-11 18:52:02 | nick-baliesnyi/wams | https://api.github.com/repos/nick-baliesnyi/wams | closed | Reproduce video player on Canvas | Examples/Docs Priority | Take the Distributed video player example that is built with HTML and a lot of client-side code, and reproduce it with Canvas and as much stuff on the server-side as possible.
**UPD: See comment in full issue** | 1.0 | Reproduce video player on Canvas - Take the Distributed video player example that is built with HTML and a lot of client-side code, and reproduce it with Canvas and as much stuff on the server-side as possible.
**UPD: See comment in full issue** | priority | reproduce video player on canvas take the distributed video player example that is built with html and a lot of client side code and reproduce it with canvas and as much stuff on the server side as possible upd see comment in full issue | 1 |
95,361 | 10,878,638,262 | IssuesEvent | 2019-11-16 19:04:05 | bastienboutonnet/sheetload | https://api.github.com/repos/bastienboutonnet/sheetload | closed | Remove sheet id for secrecy | documentation 📖 | The documentation points to an actual google sheet and since repo is public it may be good to remove it. | 1.0 | Remove sheet id for secrecy - The documentation points to an actual google sheet and since repo is public it may be good to remove it. | non_priority | remove sheet id for secrecy the documentation points to an actual google sheet and since repo is public it may be good to remove it | 0 |
61,769 | 3,152,665,521 | IssuesEvent | 2015-09-16 14:49:20 | weaveworks/weave | https://api.github.com/repos/weaveworks/weave | closed | simplify and document instructions for weave on a Mac | chore [component/docs] [component/proxy] {priority/high} | @squaremo pointed out that boot2docker uses tcp instead of unix sockets to run a proxy. Therefore, the user must set their DOCKER_HOST manually and also needs to run 'weave launch-proxy' after launching weave.
| 1.0 | simplify and document instructions for weave on a Mac - @squaremo pointed out that boot2docker uses tcp instead of unix sockets to run a proxy. Therefore, the user must set their DOCKER_HOST manually and also needs to run 'weave launch-proxy' after launching weave.
| priority | simplify and document instructions for weave on a mac squaremo pointed out that uses tcp instead of unix sockets to run a proxy therefore the user must set their docker host manually and also needs to run weave launch proxy after launching weave | 1 |
124,392 | 17,772,545,248 | IssuesEvent | 2021-08-30 15:10:54 | kapseliboi/async-website | https://api.github.com/repos/kapseliboi/async-website | opened | CVE-2018-19838 (Medium) detected in node-sass-4.14.1.tgz, opennmsopennms-source-26.0.0-1 | security vulnerability | ## CVE-2018-19838 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>node-sass-4.14.1.tgz</b>, <b>opennmsopennms-source-26.0.0-1</b></p></summary>
<p>
<details><summary><b>node-sass-4.14.1.tgz</b></p></summary>
<p>Wrapper around libsass</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-sass/-/node-sass-4.14.1.tgz">https://registry.npmjs.org/node-sass/-/node-sass-4.14.1.tgz</a></p>
<p>Path to dependency file: async-website/package.json</p>
<p>Path to vulnerable library: async-website/node_modules/node-sass/package.json</p>
<p>
Dependency Hierarchy:
- gulp-sass-4.1.0.tgz (Root Library)
- :x: **node-sass-4.14.1.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/kapseliboi/async-website/commit/7608d58e5948d825ba687240a82be5edbf410aa8">7608d58e5948d825ba687240a82be5edbf410aa8</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In LibSass prior to 3.5.5, functions inside ast.cpp for IMPLEMENT_AST_OPERATORS expansion allow attackers to cause a denial-of-service resulting from stack consumption via a crafted sass file, as demonstrated by recursive calls involving clone(), cloneChildren(), and copy().
<p>Publish Date: 2018-12-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-19838>CVE-2018-19838</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/sass/node-sass/blob/v4.14.0/package.json">https://github.com/sass/node-sass/blob/v4.14.0/package.json</a></p>
<p>Release Date: 2018-12-04</p>
<p>Fix Resolution: node-sass - 4.14.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2018-19838 (Medium) detected in node-sass-4.14.1.tgz, opennmsopennms-source-26.0.0-1 - ## CVE-2018-19838 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>node-sass-4.14.1.tgz</b>, <b>opennmsopennms-source-26.0.0-1</b></p></summary>
<p>
<details><summary><b>node-sass-4.14.1.tgz</b></p></summary>
<p>Wrapper around libsass</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-sass/-/node-sass-4.14.1.tgz">https://registry.npmjs.org/node-sass/-/node-sass-4.14.1.tgz</a></p>
<p>Path to dependency file: async-website/package.json</p>
<p>Path to vulnerable library: async-website/node_modules/node-sass/package.json</p>
<p>
Dependency Hierarchy:
- gulp-sass-4.1.0.tgz (Root Library)
- :x: **node-sass-4.14.1.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/kapseliboi/async-website/commit/7608d58e5948d825ba687240a82be5edbf410aa8">7608d58e5948d825ba687240a82be5edbf410aa8</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In LibSass prior to 3.5.5, functions inside ast.cpp for IMPLEMENT_AST_OPERATORS expansion allow attackers to cause a denial-of-service resulting from stack consumption via a crafted sass file, as demonstrated by recursive calls involving clone(), cloneChildren(), and copy().
<p>Publish Date: 2018-12-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-19838>CVE-2018-19838</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/sass/node-sass/blob/v4.14.0/package.json">https://github.com/sass/node-sass/blob/v4.14.0/package.json</a></p>
<p>Release Date: 2018-12-04</p>
<p>Fix Resolution: node-sass - 4.14.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve medium detected in node sass tgz opennmsopennms source cve medium severity vulnerability vulnerable libraries node sass tgz opennmsopennms source node sass tgz wrapper around libsass library home page a href path to dependency file async website package json path to vulnerable library async website node modules node sass package json dependency hierarchy gulp sass tgz root library x node sass tgz vulnerable library found in head commit a href found in base branch master vulnerability details in libsass prior to functions inside ast cpp for implement ast operators expansion allow attackers to cause a denial of service resulting from stack consumption via a crafted sass file as demonstrated by recursive calls involving clone clonechildren and copy publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution node sass step up your open source security game with whitesource | 0 |
675,498 | 23,096,592,067 | IssuesEvent | 2022-07-26 20:13:25 | prysmaticlabs/documentation | https://api.github.com/repos/prysmaticlabs/documentation | closed | Guide: Migrating Keys Between Eth2 Clients or Between Machines | priority:high | This is a high priority item for us before we go to mainnet. We need to have detailed documentation regarding migrating validator keys between eth2 clients safely. Although different eth2 clients have various ways of representing keys, we all follow the common EIP-2335 standard for keystore.json files to store secrets. Migrating keys is not an easy feat, as you also need to migrate [`slashing protection histories`](https://hackmd.io/@sproul/Bk0Y0qdGD) which is still a work in progress. We should aim to include the following in this documentation:
- A preface on how Prysm handles validator keys
- How to backup your keys and delete them from Prysm
- How to import keys from an external source, such as from a backup generated by lighthouse's eth2 client
- How to import keys in other eth2 clients (might be tough as if they change their code, our docs become stale)
- How to export your validating history for slashing protection purposes (work in progress) | 1.0 | Guide: Migrating Keys Between Eth2 Clients or Between Machines - This is a high priority item for us before we go to mainnet. We need to have detailed documentation regarding migrating validator keys between eth2 clients safely. Although different eth2 clients have various ways of representing keys, we all follow the common EIP-2335 standard for keystore.json files to store secrets. Migrating keys is not an easy feat, as you also need to migrate [`slashing protection histories`](https://hackmd.io/@sproul/Bk0Y0qdGD) which is still a work in progress. We should aim to include the following in this documentation:
- A preface on how Prysm handles validator keys
- How to backup your keys and delete them from Prysm
- How to import keys from an external source, such as from a backup generated by lighthouse's eth2 client
- How to import keys in other eth2 clients (might be tough as if they change their code, our docs become stale)
- How to export your validating history for slashing protection purposes (work in progress) | priority | guide migrating keys between clients or between machines this is a high priority item for us before we go to mainnet we need to have detailed documentation regarding migrating validator keys between clients safely although different clients have various ways of representing keys we all follow the common eip standard for keystore json files to store secrets migrating keys is not an easy feat as you also need to migrate which is still a work in progress we should aim to include the following in this documentation a preface on how prysm handles validator keys how to backup your keys and delete them from prysm how to import keys from an external source such as from a backup generated by lighthouse s client how to import keys in other clients might be tough as if they change their code our docs become stale how to export your validating history for slashing protection purposes work in progress | 1 |
197,716 | 6,963,191,618 | IssuesEvent | 2017-12-08 16:28:14 | Parabot/Parabot | https://api.github.com/repos/Parabot/Parabot | opened | Read version from pom.xml for Travis | priority:low status:accepted status:under consideration type:improvement | Currently we have to adjust both the pom.xml and the .travis.yml file, if a new version gets released.
Could we maybe put the pom.xml property into the environment configuration and read it with Travis? | 1.0 | Read version from pom.xml for Travis - Currently we have to adjust both the pom.xml and the .travis.yml file, if a new version gets released.
Could we maybe put the pom.xml property into the environment configuration and read it with Travis? | priority | read version from pom xml for travis currently we have to adjust both the pom xml and the travis yml file if a new version gets released could we maybe put the pom xml property into the environment configuration and read it with travis | 1 |
463,050 | 13,258,654,387 | IssuesEvent | 2020-08-20 15:39:59 | kubesphere/kubesphere | https://api.github.com/repos/kubesphere/kubesphere | closed | missing role template after upgrade | area/iam kind/bug priority/high | ## English only!
**Note: GitHub Issues only support English; for Chinese issues, please post on the [forum](https://kubesphere.com.cn/forum/).**
**General remarks**
> Please delete this section including header before submitting
>
> This form is to report bugs. For general usage questions refer to our Slack channel
> [KubeSphere-users](https://join.slack.com/t/kubesphere/shared_invite/enQtNTE3MDIxNzUxNzQ0LTdkNTc3OTdmNzdiODViZjViNTU5ZDY3M2I2MzY4MTI4OGZlOTJmMDg5ZTFiMDAwYzNlZDY5NjA0NzZlNDU5NmY)
**Describe the Bug**
Some role templates are missing after upgrade.
/kind bug
/area iam
/priority high
/assign @wansir | 1.0 | missing role template after upgrade - ## English only!
**Note: GitHub Issues only support English; for Chinese issues, please post on the [forum](https://kubesphere.com.cn/forum/).**
**General remarks**
> Please delete this section including header before submitting
>
> This form is to report bugs. For general usage questions refer to our Slack channel
> [KubeSphere-users](https://join.slack.com/t/kubesphere/shared_invite/enQtNTE3MDIxNzUxNzQ0LTdkNTc3OTdmNzdiODViZjViNTU5ZDY3M2I2MzY4MTI4OGZlOTJmMDg5ZTFiMDAwYzNlZDY5NjA0NzZlNDU5NmY)
**Describe the Bug**
Some role templates are missing after upgrade.
/kind bug
/area iam
/priority high
/assign @wansir | priority | missing role template after upgrade english only note github issue only supports english for chinese issues please post on the forum general remarks please delete this section including header before submitting this form is to report bugs for general usage questions refer to our slack channel describe the bug some role templates are missing after upgrade kind bug area iam priority high assign wansir
392,093 | 11,583,414,482 | IssuesEvent | 2020-02-22 10:51:25 | TheTofuShop/Menu | https://api.github.com/repos/TheTofuShop/Menu | closed | Add permission for specific features on the menu(specifically for staff tools) | feature-request priority-high | ⭐ **What feature do you want to be added?**
Add permission for specific features on the menu, like kick, ban
🔎 **Why do you want this feature to be added?**
So moderators can't ban people.
❓ **Is there something else that we need to know?**
No.
| 1.0 | Add permission for specific features on the menu(specifically for staff tools) - ⭐ **What feature do you want to be added?**
Add permission for specific features on the menu, like kick, ban
🔎 **Why do you want this feature to be added?**
So moderators can't ban people.
❓ **Is there something else that we need to know?**
No.
| priority | add permission for specific features on the menu specifically for staff tools ⭐ what feature do you want to be added add permission for specific features on the menu like kick ban 🔎 why do you want this feature to be added so moderators can t ban people ❓ is there something else that we need to know no | 1 |
664,219 | 22,261,705,929 | IssuesEvent | 2022-06-10 01:32:27 | bottlerocket-os/bottlerocket | https://api.github.com/repos/bottlerocket-os/bottlerocket | closed | Populate /etc/hosts with custom DNS mapping entries | priority/p1 core | **What I'd like:**
Add custom static DNS mapping entries for `/etc/hosts` through settings.
Our project requires us to support resolving certain DNS domain(s) to IP(s) even when the host is disconnected from the internet.
I wonder if BottleRocket can provide some way to set/add such DNS mappings reliably (can sustain a reboot).
E.g. By running `apiclient set settings.dns.mappings=["10.0.0.2 somehost.com"]` to set it into `/etc/hosts`
**Any alternatives you've considered:**
* We tried adding some DNS mappings inside `/etc/hosts` through admin container. However, we found those changes are lost after we reboot the host. (Due to /etc mapped as tmpfs mount)
| 1.0 | Populate /etc/hosts with custom DNS mapping entries - **What I'd like:**
Add custom static DNS mapping entries for `/etc/hosts` through settings.
Our project requires us to support resolving certain DNS domain(s) to IP(s) even when the host is disconnected from the internet.
I wonder if BottleRocket can provide some way to set/add such DNS mappings reliably (can sustain a reboot).
E.g. By running `apiclient set settings.dns.mappings=["10.0.0.2 somehost.com"]` to set it into `/etc/hosts`
**Any alternatives you've considered:**
* We tried adding some DNS mappings inside `/etc/hosts` through admin container. However, we found those changes are lost after we reboot the host. (Due to /etc mapped as tmpfs mount)
| priority | populate etc hosts with custom dns mapping entries what i d like add custom static dns mapping entries for etc hosts through settings our project requires us to support resolving certain dns domain s to ip s even when the host is disconnected from the internet i wonder if bottlerocket can provide some way to set add such dns mappings reliably can sustain a reboot e g by running apiclient set settings dns mappings to set it into etc hosts any alternatives you ve considered we tried adding some dns mappings inside etc hosts through admin container however we found those changes are lost after we reboot the host due to etc mapped as tmpfs mount | 1 |
130,377 | 27,658,934,582 | IssuesEvent | 2023-03-12 09:48:28 | Regalis11/Barotrauma | https://api.github.com/repos/Regalis11/Barotrauma | closed | Sootman not setting outpost on fire or returning to submarine | Bug Need more info Code | ### Disclaimers
- [X] I have searched the issue tracker to check if the issue has already been reported.
- [ ] My issue happened while using mods.
### What happened?
After freeing sootman, he just stood around and didn't do anything, preventing us from proceeding. There was also another jailbreak mission happening simultaneously, which could have caused the issue. Another side note is that after enough time passed, the prisoners used all the oxygen in their cell and suffocated.


### Reproduction steps
Have both jailbreak missions at the same outpost?
### Bug prevalence
Just once
### Version
Faction/endgame test branch
### -
_No response_
### Which operating system did you encounter this bug on?
Windows
### Relevant error messages and crash reports
_No response_ | 1.0 | Sootman not setting outpost on fire or returning to submarine - ### Disclaimers
- [X] I have searched the issue tracker to check if the issue has already been reported.
- [ ] My issue happened while using mods.
### What happened?
After freeing sootman, he just stood around and didn't do anything, preventing us from proceeding. There was also another jailbreak mission happening simultaneously, which could have caused the issue. Another side note is that after enough time passed, the prisoners used all the oxygen in their cell and suffocated.


### Reproduction steps
Have both jailbreak missions at the same outpost?
### Bug prevalence
Just once
### Version
Faction/endgame test branch
### -
_No response_
### Which operating system did you encounter this bug on?
Windows
### Relevant error messages and crash reports
_No response_ | non_priority | sootman not setting outpost on fire or returning to submarine disclaimers i have searched the issue tracker to check if the issue has already been reported my issue happened while using mods what happened after freeing sootman he just stood around and didn t do anything preventing us from proceeding there was also another jailbreak mission happening simultaneously which could have caused the issue another side note is that after enough time passed the prisoners used all the oxygen in their cell and suffocated reproduction steps have both jailbreak missions at the same outpost bug prevalence just once version faction endgame test branch no response which operating system did you encounter this bug on windows relevant error messages and crash reports no response | 0 |
516,652 | 14,985,703,887 | IssuesEvent | 2021-01-28 20:17:42 | kymckay/f21as-project | https://api.github.com/repos/kymckay/f21as-project | closed | Set up JUnit CI | priority|low type|task | I'd like to get JUnit tests set up to run via GitHub Actions (will run any time pushes are made for easy test automation).
Seems like a good run through of one possible setup: https://dev.to/ewefie/getting-started-with-github-actions-run-junit-5-tests-in-a-java-project-with-maven-20g4 | 1.0 | Set up JUnit CI - I'd like to get JUnit tests set up to run via GitHub Actions (will run any time pushes are made for easy test automation).
Seems like a good run through of one possible setup: https://dev.to/ewefie/getting-started-with-github-actions-run-junit-5-tests-in-a-java-project-with-maven-20g4 | priority | set up junit ci i d like to get junit tests set up to run via github actions will run any time pushes are made for easy test automation seems like a good run through of one possible setup | 1 |
240,840 | 7,806,453,860 | IssuesEvent | 2018-06-11 14:06:54 | fac-13/GP_ProjectBernadette | https://api.github.com/repos/fac-13/GP_ProjectBernadette | closed | Prevent users from changing their answers after they submit the form | priority-3 | It might be worth preventing the user from changing their choices after they submit the form as this could potentially result in multiple forms from the same user with only small changes. This would result in more work for GP. | 1.0 | Prevent users from changing their answers after they submit the form - It might be worth preventing the user from changing their choices after they submit the form as this could potentially result in multiple forms from the same user with only small changes. This would result in more work for GP. | priority | prevent users from changing their answers after they submit the form it might be worth preventing the user from changing their choices after they submit the form as this could potentially result in multiple forms from the same user with only small changes this would result in more work for gp | 1 |
320,083 | 9,769,353,901 | IssuesEvent | 2019-06-06 08:21:30 | mschubert/clustermq | https://api.github.com/repos/mschubert/clustermq | opened | Switch to `pbdZMQ` package for ZeroMQ backend | next-version priority | `rzmq` does not support #150, and the package is not really in active development. Switch to `pbdZMQ`, and interface with the library directly
Enables solving of #150 and #125
In addition, inner loops could be written in compiled code this way | 1.0 | Switch to `pbdZMQ` package for ZeroMQ backend - `rzmq` does not support #150, and the package is not really in active development. Switch to `pbdZMQ`, and interface with the library directly
Enables solving of #150 and #125
In addition, inner loops could be written in compiled code this way | priority | switch to pbdzmq package for zeromq backend rzmq does not support and the package is not really in active development switch to pbdzmq and interface with the library directly enables solving of and in addition inner loops could be written in compiled code this way | 1 |
125,766 | 12,268,701,849 | IssuesEvent | 2020-05-07 12:58:25 | mainflux/mainflux | https://api.github.com/repos/mainflux/mainflux | closed | Write swagger file for Twins service | documentation | Write a swagger specification that describes the HTTP CRUD for the Twins service. | 1.0 | Write swagger file for Twins service - Write a swagger specification that describes the HTTP CRUD for the Twins service. | non_priority | write swagger file for twins service write a swagger specification that describes the http crud for the twins service | 0 |
728,181 | 25,070,175,896 | IssuesEvent | 2022-11-07 11:29:04 | renovatebot/renovate | https://api.github.com/repos/renovatebot/renovate | closed | Reduce log metadata when debugging | priority-3-medium type:refactor status:ready | ### Describe the proposed change(s).
We should reduce the amount of metadata when debugging, especially if it is not error related. An example is the `No concurrency limits` message.
Simple strings should be inlined so that they don't "stand out" when viewing logs. | 1.0 | Reduce log metadata when debugging - ### Describe the proposed change(s).
We should reduce the amount of metadata when debugging, especially if it is not error related. An example is the `No concurrency limits` message.
Simple strings should be inlined so that they don't "stand out" when viewing logs. | priority | reduce log metadata when debugging describe the proposed change s we should reduce the amount of metadata when debugging especially if it is not error related an example is the no concurrency limits message simple strings should be inlined so that they don t stand out when viewing logs | 1 |
14,427 | 9,179,311,922 | IssuesEvent | 2019-03-05 02:39:15 | jstanden/cerb | https://api.github.com/repos/jstanden/cerb | closed | Decimal custom fields count suggestion is not a default | bug usability | When creating a decimal-type custom field, the number of decimal places is suggested at two, but it's not actually a default. If you leave the suggested "2", it creates the field with zero decimal places. It's also not a very different color, and not clear that it's just a hint rather than a pre-filled entry.
I would make the decimal places counter more explicit, maybe with a number-entry box like a "number of copies" printer dialog, where it's got an up and down arrow to cycle through the number of decimal points, and perhaps even a 'sample' next to it that goes from '1.23' to '1.23456' as the count increases. | True | Decimal custom fields count suggestion is not a default - When creating a decimal-type custom field, the number of decimal places is suggested at two, but it's not actually a default. If you leave the suggested "2", it creates the field with zero decimal places. It's also not a very different color, and not clear that it's just a hint rather than a pre-filled entry.
I would make the decimal places counter more explicit, maybe with a number-entry box like a "number of copies" printer dialog, where it's got an up and down arrow to cycle through the number of decimal points, and perhaps even a 'sample' next to it that goes from '1.23' to '1.23456' as the count increases. | non_priority | decimal custom fields count suggestion is not a default when creating a decimal type custom field the number of decimal places is suggested at two but it s not actually a default if you leave the suggested it creates the field with zero decimal places it s also not a very different color and not clear that it s just a hint rather than a pre filled entry i would make the decimal places counter more explicit maybe with a number entry box like a number of copies printer dialog where it s got an up and down arrow to cycle through the number of decimal points and perhaps even a sample next to it that goes from to as the count increases | 0 |
217,082 | 24,312,780,841 | IssuesEvent | 2022-09-30 01:18:35 | mgh3326/data-jpa | https://api.github.com/repos/mgh3326/data-jpa | reopened | CVE-2020-11996 (High) detected in tomcat-embed-core-9.0.30.jar | security vulnerability | ## CVE-2020-11996 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tomcat-embed-core-9.0.30.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="https://tomcat.apache.org/">https://tomcat.apache.org/</a></p>
<p>Path to dependency file: /build.gradle</p>
<p>Path to vulnerable library: /root/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-core/9.0.30/ad32909314fe2ba02cec036434c0addd19bcc580/tomcat-embed-core-9.0.30.jar,/root/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-core/9.0.30/ad32909314fe2ba02cec036434c0addd19bcc580/tomcat-embed-core-9.0.30.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.2.3.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.2.3.RELEASE.jar
- :x: **tomcat-embed-core-9.0.30.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/mgh3326/data-jpa/commit/6b15d122cb986f910de69529dad6afdeaaf8610d">6b15d122cb986f910de69529dad6afdeaaf8610d</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A specially crafted sequence of HTTP/2 requests sent to Apache Tomcat 10.0.0-M1 to 10.0.0-M5, 9.0.0.M1 to 9.0.35 and 8.5.0 to 8.5.55 could trigger high CPU usage for several seconds. If a sufficient number of such requests were made on concurrent HTTP/2 connections, the server could become unresponsive.
<p>Publish Date: 2020-06-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11996>CVE-2020-11996</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://lists.apache.org/thread.html/r5541ef6b6b68b49f76fc4c45695940116da2bcbe0312ef204a00a2e0%40%3Cannounce.tomcat.apache.org%3E,http://tomcat.apache.org/security-10.html">https://lists.apache.org/thread.html/r5541ef6b6b68b49f76fc4c45695940116da2bcbe0312ef204a00a2e0%40%3Cannounce.tomcat.apache.org%3E,http://tomcat.apache.org/security-10.html</a></p>
<p>Release Date: 2020-06-26</p>
<p>Fix Resolution (org.apache.tomcat.embed:tomcat-embed-core): 9.0.36</p>
<p>Direct dependency fix Resolution (org.springframework.boot:spring-boot-starter-web): 2.2.8.RELEASE</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-11996 (High) detected in tomcat-embed-core-9.0.30.jar - ## CVE-2020-11996 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tomcat-embed-core-9.0.30.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="https://tomcat.apache.org/">https://tomcat.apache.org/</a></p>
<p>Path to dependency file: /build.gradle</p>
<p>Path to vulnerable library: /root/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-core/9.0.30/ad32909314fe2ba02cec036434c0addd19bcc580/tomcat-embed-core-9.0.30.jar,/root/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-core/9.0.30/ad32909314fe2ba02cec036434c0addd19bcc580/tomcat-embed-core-9.0.30.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.2.3.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.2.3.RELEASE.jar
- :x: **tomcat-embed-core-9.0.30.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/mgh3326/data-jpa/commit/6b15d122cb986f910de69529dad6afdeaaf8610d">6b15d122cb986f910de69529dad6afdeaaf8610d</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A specially crafted sequence of HTTP/2 requests sent to Apache Tomcat 10.0.0-M1 to 10.0.0-M5, 9.0.0.M1 to 9.0.35 and 8.5.0 to 8.5.55 could trigger high CPU usage for several seconds. If a sufficient number of such requests were made on concurrent HTTP/2 connections, the server could become unresponsive.
<p>Publish Date: 2020-06-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11996>CVE-2020-11996</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://lists.apache.org/thread.html/r5541ef6b6b68b49f76fc4c45695940116da2bcbe0312ef204a00a2e0%40%3Cannounce.tomcat.apache.org%3E,http://tomcat.apache.org/security-10.html">https://lists.apache.org/thread.html/r5541ef6b6b68b49f76fc4c45695940116da2bcbe0312ef204a00a2e0%40%3Cannounce.tomcat.apache.org%3E,http://tomcat.apache.org/security-10.html</a></p>
<p>Release Date: 2020-06-26</p>
<p>Fix Resolution (org.apache.tomcat.embed:tomcat-embed-core): 9.0.36</p>
<p>Direct dependency fix Resolution (org.springframework.boot:spring-boot-starter-web): 2.2.8.RELEASE</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve high detected in tomcat embed core jar cve high severity vulnerability vulnerable library tomcat embed core jar core tomcat implementation library home page a href path to dependency file build gradle path to vulnerable library root gradle caches modules files org apache tomcat embed tomcat embed core tomcat embed core jar root gradle caches modules files org apache tomcat embed tomcat embed core tomcat embed core jar dependency hierarchy spring boot starter web release jar root library spring boot starter tomcat release jar x tomcat embed core jar vulnerable library found in head commit a href found in base branch master vulnerability details a specially crafted sequence of http requests sent to apache tomcat to to and to could trigger high cpu usage for several seconds if a sufficient number of such requests were made on concurrent http connections the server could become unresponsive publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache tomcat embed tomcat embed core direct dependency fix resolution org springframework boot spring boot starter web release step up your open source security game with mend | 0 |
804,079 | 29,368,597,817 | IssuesEvent | 2023-05-29 00:35:41 | steedos/steedos-platform | https://api.github.com/repos/steedos/steedos-platform | closed | [Bug]: 管理员新建列表视图设置的过滤条件保存后直接丢失了 | bug done priority: High | ### Description
- 只有新建才有问题,编辑是正常的
- 普通用户前台新建列表视图没有问题,因为新建时不能设置过滤条件
- 只有过滤条件字段值丢失其他字段值都能正常保存

### Steps To Reproduce 重现步骤
- 管理员账户进入对象详细设置页面
- 新建一个列表视图,设置过滤条件并保存
- 找到刚新建的列表视图,编辑该列表视图会到到刚设置的过滤条件丢失了
### Version 版本
2.5 | 1.0 | [Bug]: 管理员新建列表视图设置的过滤条件保存后直接丢失了 - ### Description
- 只有新建才有问题,编辑是正常的
- 普通用户前台新建列表视图没有问题,因为新建时不能设置过滤条件
- 只有过滤条件字段值丢失其他字段值都能正常保存

### Steps To Reproduce 重现步骤
- 管理员账户进入对象详细设置页面
- 新建一个列表视图,设置过滤条件并保存
- 找到刚新建的列表视图,编辑该列表视图会到到刚设置的过滤条件丢失了
### Version 版本
2.5 | priority | 管理员新建列表视图设置的过滤条件保存后直接丢失了 description 只有新建才有问题,编辑是正常的 普通用户前台新建列表视图没有问题,因为新建时不能设置过滤条件 只有过滤条件字段值丢失其他字段值都能正常保存 steps to reproduce 重现步骤 管理员账户进入对象详细设置页面 新建一个列表视图,设置过滤条件并保存 找到刚新建的列表视图,编辑该列表视图会到到刚设置的过滤条件丢失了 version 版本 | 1 |
811,651 | 30,295,012,677 | IssuesEvent | 2023-07-09 18:50:10 | fossmium/OneDrive-Cloud-Player | https://api.github.com/repos/fossmium/OneDrive-Cloud-Player | closed | Cerificate Expired | high priority | I recently came across this app and tried to install it. But the certificate expired on 29th May 2023. Today is 13th June 2023. So i couldnt install the certificate. So i couldnt install the app. Please fix this | 1.0 | Cerificate Expired - I recently came across this app and tried to install it. But the certificate expired on 29th May 2023. Today is 13th June 2023. So i couldnt install the certificate. So i couldnt install the app. Please fix this | priority | cerificate expired i recently came across this app and tried to install it but the certificate expired on may today is june so i couldnt install the certificate so i couldnt install the app please fix this | 1 |
11,053 | 13,888,692,097 | IssuesEvent | 2020-10-19 06:45:13 | fluent/fluent-bit | https://api.github.com/repos/fluent/fluent-bit | closed | Add S3 bucket Output plugin | work-in-process | Feature:
I have always wanted to push my logs to aws S3 bucket directly.
Will it be possible for us to have an output plugin that will push the logs to s3 bucket real time which will create a file based on daily or weekly log file. Similar to pushing logs to elasticsearch.
| 1.0 | Add S3 bucket Output plugin - Feature:
I have always wanted to push my logs to aws S3 bucket directly.
Will it be possible for us to have an output plugin that will push the logs to s3 bucket real time which will create a file based on daily or weekly log file. Similar to pushing logs to elasticsearch.
| non_priority | add bucket output plugin feature i have always wanted to push my logs to aws bucket directly will it be possible for us to have an output plugin that will push the logs to bucket real time which will create a file based on daily or weekly log file similar to pushing logs to elasticsearch | 0 |
35,681 | 2,792,405,453 | IssuesEvent | 2015-05-10 23:46:17 | e-government-ua/i | https://api.github.com/repos/e-government-ua/i | opened | дать возможность ставить пояснительный текст для каждого поля (данные должны храниться в поля модели Активити) | hi priority | Потом создать задачу на @Kiaba , чтоб значение этого поля "выводить мелким серым шрифтом под соответствующим полем" | 1.0 | дать возможность ставить пояснительный текст для каждого поля (данные должны храниться в поля модели Активити) - Потом создать задачу на @Kiaba , чтоб значение этого поля "выводить мелким серым шрифтом под соответствующим полем" | priority | дать возможность ставить пояснительный текст для каждого поля данные должны храниться в поля модели активити потом создать задачу на kiaba чтоб значение этого поля выводить мелким серым шрифтом под соответствующим полем | 1 |
757,301 | 26,506,064,235 | IssuesEvent | 2023-01-18 13:56:36 | kir-dev/konzisite-api | https://api.github.com/repos/kir-dev/konzisite-api | opened | Schema small changes | good first issue medium priority | - Group name should be unique (make sure to catch the Prisma error if the unique constraint is violated in create and update)
- `descMarkdown` should be optional in `Consultation `and `ConsultationRequest` (it's already like this in the DTOs I think)
- ... ? | 1.0 | Schema small changes - - Group name should be unique (make sure to catch the Prisma error if the unique constraint is violated in create and update)
- `descMarkdown` should be optional in `Consultation `and `ConsultationRequest` (it's already like this in the DTOs I think)
- ... ? | priority | schema small changes group name should be unique make sure to catch the prisma error if the unique constraint is violated in create and update descmarkdown should be optional in consultation and consultationrequest it s already like this in the dtos i think | 1 |
224,685 | 7,472,053,348 | IssuesEvent | 2018-04-03 11:20:50 | salesagility/SuiteCRM | https://api.github.com/repos/salesagility/SuiteCRM | closed | Suite P - Emails View Relationship popup collapses on second use | Fix Proposed Low Priority Resolved: Next Release bug | #### Issue
Using the Suite P theme in version 7.7.9. In the Emails component right click on an email and select the 'View Relationships' option. The 'Email Record' popup displays with sufficient height to view the contents (see attachment 1).
If the popup is closed and then the action is repeated (right click on an email and select the 'View Relationships' option) the popup reappears but only the popup header is visible (none of the content is visible) rendering the popup useless (see attachment 2).
Refreshing the page works around the issue but is very inconvenient to have to do so every time.
[This forum post](https://suitecrm.com/forum/suite-themes/12065-suitep-quick-create-from-email-multiselect-dropdown-box-size#40926) describes the same bad behaviour for the 'Quick Create' -> 'Contact' popup
#### Expected Behavior
The popup should open every time with at least adequate minimum height to make the content view-able.
#### Actual Behavior
The popup opens up the first time with adequate height to make the content view-able, but all subsequent openings result in the popup having far too little height to view any contents.
#### Possible Fix
Looking at the attachments you can see the height of the container is calculated the first time at 381px but the second time the height is calculated at only 10px!
I suggest that when dynamically calculating the height of a popup's content there should be a lower limit (say 150px) that the height will not be set lower than ever. This would at least allow the content to be viewed.
#### Steps to Reproduce
1. Visit [Live Demo](http://demo.suiteondemand.com/index.php?module=Emails&action=index&parentTab=All)
2. Got to Emails module and select 'My Sent Emails'
3. Right click on an email and select the 'View Relationships' option.
4. Close popup
5. Right click on an email and select the 'View Relationships' option.
#### Context
Our users spend most of there time in the Emails module so this is a very annoying and time wasting bug.
#### Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* SuiteCRM Version used: Version 7.7.9
* Browser name and version (e.g. Chrome Version 51.0.2704.63 (64-bit)): Chrome Version 55.0.2883.87 (64-bit)
* Environment name and version (e.g. MySQL, PHP 7): MySQL 5.6, PHP 5.4.36
* Operating System and version (e.g Ubuntu 16.04): Ubuntu 14.04.1 LTS
| 1.0 | Suite P - Emails View Relationship popup collapses on second use - #### Issue
Using the Suite P theme in version 7.7.9. In the Emails component right click on an email and select the 'View Relationships' option. The 'Email Record' popup displays with sufficient height to view the contents (see attachment 1).
If the popup is closed and then the action is repeated (right click on an email and select the 'View Relationships' option) the popup reappears but only the popup header is visible (none of the content is visible) rendering the popup useless (see attachment 2).
Refreshing the page works around the issue but is very inconvenient to have to do so every time.
[This forum post](https://suitecrm.com/forum/suite-themes/12065-suitep-quick-create-from-email-multiselect-dropdown-box-size#40926) describes the same bad behaviour for the 'Quick Create' -> 'Contact' popup
#### Expected Behavior
The popup should open every time with at least adequate minimum height to make the content view-able.
#### Actual Behavior
The popup opens up the first time with adequate height to make the content view-able, but all subsequent openings result in the popup having far too little height to view any contents.
#### Possible Fix
Looking at the attachments you can see the height of the container is calculated the first time at 381px but the second time the height is calculated at only 10px!
I suggest that when dynamically calculating the height of a popup's content there should be a lower limit (say 150px) that the height will not be set lower than ever. This would at least allow the content to be viewed.
#### Steps to Reproduce
1. Visit [Live Demo](http://demo.suiteondemand.com/index.php?module=Emails&action=index&parentTab=All)
2. Got to Emails module and select 'My Sent Emails'
3. Right click on an email and select the 'View Relationships' option.
4. Close popup
5. Right click on an email and select the 'View Relationships' option.
#### Context
Our users spend most of there time in the Emails module so this is a very annoying and time wasting bug.
#### Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* SuiteCRM Version used: Version 7.7.9
* Browser name and version (e.g. Chrome Version 51.0.2704.63 (64-bit)): Chrome Version 55.0.2883.87 (64-bit)
* Environment name and version (e.g. MySQL, PHP 7): MySQL 5.6, PHP 5.4.36
* Operating System and version (e.g Ubuntu 16.04): Ubuntu 14.04.1 LTS
| priority | suite p emails view relationship popup collapses on second use issue using the suite p theme in version in the emails component right click on an email and select the view relationships option the email record popup displays with sufficient height to view the contents see attachment if the popup is closed and then the action is repeated right click on an email and select the view relationships option the popup reappears but only the popup header is visible none of the content is visible rendering the popup useless see attachment refreshing the page works around the issue but is very inconvenient to have to do so every time describes the same bad behaviour for the quick create contact popup expected behavior the popup should open every time with at least adequate minimum height to make the content view able actual behavior the popup opens up the first time with adequate height to make the content view able but all subsequent openings result in the popup having far too little height to view any contents possible fix looking at the attachments you can see the height of the container is calculated the first time at but the second time the height is calculated at only i suggest that when dynamically calculating the height of a popup s content there should be a lower limit say that the height will not be set lower than ever this would at least allow the content to be viewed steps to reproduce visit got to emails module and select my sent emails right click on an email and select the view relationships option close popup right click on an email and select the view relationships option context our users spend most of there time in the emails module so this is a very annoying and time wasting bug your environment suitecrm version used version browser name and version e g chrome version bit chrome version bit environment name and version e g mysql php mysql php operating system and version e g ubuntu ubuntu lts | 1 |
235,668 | 7,741,571,799 | IssuesEvent | 2018-05-29 06:29:05 | giatschool/webgis.nrw | https://api.github.com/repos/giatschool/webgis.nrw | reopened | add historische karten to base maps | priority | https://www.bezreg-koeln.nrw.de/brk_internet/geobasis/webdienste/geodatendienste/index.html
add the followingg WMS:
1801 – 1828 Tranchot: https://www.wms.nrw.de/geobasis/wms_nw_tranchot
1836 – 1850 Uraufnahme: https://www.wms.nrw.de/geobasis/wms_nw_uraufnahme
1881 – 1883 Fürstenthum Lippe: https://www.wms.nrw.de/geobasis/wms_nw_lippe
1891 – 1912 Neuaufnahme: https://www.wms.nrw.de/geobasis/wms_nw_neuaufnahme
TK25 1936-1945: https://www.wms.nrw.de/geobasis/wms_nw_tk25_1936-1945
DGK5 – historisch: https://www.wms.nrw.de/geobasis/wms_nw_dgk5
| 1.0 | add historische karten to base maps - https://www.bezreg-koeln.nrw.de/brk_internet/geobasis/webdienste/geodatendienste/index.html
add the followingg WMS:
1801 – 1828 Tranchot: https://www.wms.nrw.de/geobasis/wms_nw_tranchot
1836 – 1850 Uraufnahme: https://www.wms.nrw.de/geobasis/wms_nw_uraufnahme
1881 – 1883 Fürstenthum Lippe: https://www.wms.nrw.de/geobasis/wms_nw_lippe
1891 – 1912 Neuaufnahme: https://www.wms.nrw.de/geobasis/wms_nw_neuaufnahme
TK25 1936-1945: https://www.wms.nrw.de/geobasis/wms_nw_tk25_1936-1945
DGK5 – historisch: https://www.wms.nrw.de/geobasis/wms_nw_dgk5
| priority | add historische karten to base maps add the followingg wms – tranchot – uraufnahme – fürstenthum lippe – neuaufnahme – historisch | 1 |
24,383 | 5,053,266,908 | IssuesEvent | 2016-12-21 07:20:03 | divio/django-cms | https://api.github.com/repos/divio/django-cms | closed | apphook docs talk about mptt | component: documentation status: accepted | The apphook docs talk about mptt: http://docs.django-cms.org/en/develop/how_to/apphooks.html
Does that make sense? As I understood it, the project does not use mptt anymore.
| 1.0 | apphook docs talk about mptt - The apphook docs talk about mptt: http://docs.django-cms.org/en/develop/how_to/apphooks.html
Does that make sense? As I understood it, the project does not use mptt anymore.
| non_priority | apphook docs talk about mptt the apphook docs talk about mptt does that make sense as i understood it the project does not use mptt anymore | 0 |
579,844 | 17,199,145,029 | IssuesEvent | 2021-07-16 23:25:47 | zulip/zulip-mobile | https://api.github.com/repos/zulip/zulip-mobile | opened | Support search syntax | P1 high-priority webapp parity | At present, the mobile app does not support search syntax (e.g. `stream:`, `sender:`, etc.).
We should add support for search syntax by reusing the webapp code. | 1.0 | Support search syntax - At present, the mobile app does not support search syntax (e.g. `stream:`, `sender:`, etc.).
We should add support for search syntax by reusing the webapp code. | priority | support search syntax at present the mobile app does not support search syntax e g stream sender etc we should add support for search syntax by reusing the webapp code | 1 |
52,859 | 6,283,865,739 | IssuesEvent | 2017-07-19 05:42:18 | intel-analytics/BigDL | https://api.github.com/repos/intel-analytics/BigDL | opened | Pip install python bigdl need sudo | 0.2 release test high priority | If I run the install commands in doc
```
pip install --upgrade pip
pip install BigDL==0.2.0.dev3 # for Python 2.7
```
It will throw exception
```
Downloading/unpacking pip from https://pypi.python.org/packages/b6/ac/7015eb97dc749283ffdec1c3a88ddb8ae03b8fad0f0e611408f196358da3/pip-9.0.1-py2.py3-none-any.whl#md5=297dbd16ef53bcef0447d245815f5144
Downloading pip-9.0.1-py2.py3-none-any.whl (1.3MB): 1.3MB downloaded
Installing collected packages: pip
Found existing installation: pip 1.5.4
Not uninstalling pip at /usr/lib/python2.7/dist-packages, owned by OS
Can't roll back pip; was not uninstalled
Cleaning up...
Downloading/unpacking BigDL==0.2.0.dev3
Could not find any downloads that satisfy the requirement BigDL==0.2.0.dev3
Cleaning up...
No distributions at all found for BigDL==0.2.0.dev3
Storing debug log for failure in /tmp/tmpMCxkeC
```
use sudo help fix this problem for me
```
sudo pip install --upgrade pip
sudo pip install BigDL==0.2.0.dev3 # for Python 2.7
``` | 1.0 | Pip install python bigdl need sudo - If I run the install commands in doc
```
pip install --upgrade pip
pip install BigDL==0.2.0.dev3 # for Python 2.7
```
It will throw exception
```
Downloading/unpacking pip from https://pypi.python.org/packages/b6/ac/7015eb97dc749283ffdec1c3a88ddb8ae03b8fad0f0e611408f196358da3/pip-9.0.1-py2.py3-none-any.whl#md5=297dbd16ef53bcef0447d245815f5144
Downloading pip-9.0.1-py2.py3-none-any.whl (1.3MB): 1.3MB downloaded
Installing collected packages: pip
Found existing installation: pip 1.5.4
Not uninstalling pip at /usr/lib/python2.7/dist-packages, owned by OS
Can't roll back pip; was not uninstalled
Cleaning up...
Downloading/unpacking BigDL==0.2.0.dev3
Could not find any downloads that satisfy the requirement BigDL==0.2.0.dev3
Cleaning up...
No distributions at all found for BigDL==0.2.0.dev3
Storing debug log for failure in /tmp/tmpMCxkeC
```
use sudo help fix this problem for me
```
sudo pip install --upgrade pip
sudo pip install BigDL==0.2.0.dev3 # for Python 2.7
``` | non_priority | pip install python bigdl need sudo if i run the install commands in doc pip install upgrade pip pip install bigdl for python it will throw exception downloading unpacking pip from downloading pip none any whl downloaded installing collected packages pip found existing installation pip not uninstalling pip at usr lib dist packages owned by os can t roll back pip was not uninstalled cleaning up downloading unpacking bigdl could not find any downloads that satisfy the requirement bigdl cleaning up no distributions at all found for bigdl storing debug log for failure in tmp tmpmcxkec use sudo help fix this problem for me sudo pip install upgrade pip sudo pip install bigdl for python | 0 |